
Augmented reality creation with ARToolKitPlus and OpenCV

Today it is not surprising to see news about virtual reality in various media; in games, for instance, it has become quite common. You may also have heard of augmented reality (Augmented Reality, hereinafter AR), or at least of the concept. Its aim is to enrich our everyday, ordinary view of reality (the image) by compositing various virtual (3D) objects into it. It became especially popular as web/video cameras spread to mobile devices: Qualcomm's initiatives, demos, mini-games and so on came first. In this article we take a closer look at this technology and at how to build an AR solution using the ARToolKitPlus library and other tools.

I. Prologue

The AR concept is quite broad. It covers not only "embellishing" the image visible on a monitor or phone screen; often images or additional information are projected onto real objects in the environment: a car windshield, building walls, and so on. Type "Augmented Reality" into Google and you will find countless images, from mobile technology to the automotive industry to Google Glass. AR has also long been used in the military and aerospace industries. The range of applications is in fact quite wide.

Anyone can create an AR solution, even someone previously unfamiliar with the technology and its principles of operation, given a little programming knowledge. The main problem of any AR solution is embedding objects that do not exist into the real scene so that they look as natural as possible. We need to determine the new 3D object's orientation and position, not to mention adapting its lighting to the scene lighting. With a static scene (e.g. a building wall) everything is easier: you just have to know its exact dimensions and surface shape. When working with a dynamic environment (e.g. the view from a moving camera), it is simply impossible to predict what will be visible in the scene. In that case, various tags/markers of known appearance are placed in the image to make determining the orientation of the working surface easier.

Marker-based technology is the most mature, since algorithms for detecting markers in an image were described long ago and are in fact quite accurate. The simplest black-and-white (binary) markers are the easiest to detect in any scene thanks to their high-contrast shapes. The extreme color values make it easier to distinguish the outline, avoid errors, and even reconstruct partially lost parts, which increases reliability. Binary markers come in different types. Some use a human-recognizable thumbnail image or text (e.g. a letter of the alphabet) as a template; others encode a binary or other block code in a 2D matrix, which makes it possible to detect many tags simultaneously, each with its own ID and a specific orientation.
A number of different tools are now available that simplify the marker detection process. If you are creating a game you can use Unity 3D or Qualcomm's Vuforia tools, as well as dozens of libraries such as ARToolKitPlus, ArUco, ARma, etc. For this project I chose the well-known ARToolKitPlus library, which is well refined and optimized and has a number of useful features: it supports up to 4096 simultaneously tracked markers without significant loss of speed, supports camera calibration, uses an effective marker orientation detection method, and so on.

II. ARToolKitPlus

ARToolKitPlus can operate in two modes: tracking individual visible markers, or tracking a set of markers as a single object (a plane). For plane tracking, a file first describes the markers and their relative positions on the plane. This description is loaded into ARToolKitPlus, which can then use the few markers currently visible to approximate the locations of the remaining markers and the orientation of the plane. Recognizing 5-6 markers on the plane is enough for very good orientation detection accuracy. In my tests I chose the individual (single) marker tracking method, but switching to multi-marker tracking is quite simple.
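The marker layout for plane tracking lives in a small configuration file. The sketch below only illustrates the general shape of such a file (a marker count, then for each marker its BCH ID, width in millimetres, and a 3x4 marker-to-plane transform); the exact format should be checked against the sample configuration files shipped with ARToolKitPlus:

```
4                       # number of markers on the plane

# marker: BCH ID, width (mm), 3x4 marker-to-plane transform
0
40.0
1.0  0.0  0.0    0.0
0.0  1.0  0.0    0.0
0.0  0.0  1.0    0.0

1
40.0
1.0  0.0  0.0  170.0
0.0  1.0  0.0    0.0
0.0  0.0  1.0    0.0

# ... markers 2 and 3 at the remaining corners
```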

Of course, any AR solution with accuracy requirements needs camera calibration: due to tangential and radial lens imperfections, the visible picture is not entirely accurate. ARToolKitPlus supports the Camera Calibration Toolbox for MATLAB file format, which simplifies the whole workflow. Calibration computes the important camera parameters (focal length, distortion coefficients, and so on), and from this data the camera projection matrix can be reconstructed. The camera should also be focused at the right working distance so that detection is as accurate as possible.

To work with the ARToolKitPlus library we need to download and compile it; I did this with the MS Visual Studio 2010 IDE. The library supports several marker types: template markers (described by the user), simple ID markers (thick black border), and BCH ID markers. Of all the alternatives, BCH markers seemed the best option to me. Unlike custom templates, they are predefined and decoded automatically by the library, and up to 4096 of them can be tracked at the same time. These binary markers also have an error-correction function, so their detection accuracy is even better. Finally, images of all 4096 markers are already prepared and included with the library, so all that remains is to scale the markers to the required size and print them on paper. I used an A4 sheet (with the project number on it) with four 40 mm markers at the corners, as shown below:

We have discussed the functionality we will use, so now let us look at how to initialize the library and work with it. A few headers need to be added to the source file:
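A minimal set of includes might look like this (paths assume a typical ARToolKitPlus 2.x and OpenCV install; adjust them to your setup):

```cpp
// ARToolKitPlus single-marker tracker (header layout of ARToolKitPlus 2.x)
#include <ARToolKitPlus/TrackerSingleMarker.h>

// OpenCV: frame capture, image containers and display windows
#include <opencv2/opencv.hpp>
```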

To use the library's functions we also need to link against the corresponding ARToolKitPlus .lib file. ARToolKitPlus initialization is performed as shown below:
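A minimal initialization sketch, assuming the ARToolKitPlus 2.x `TrackerSingleMarker` API (constructor arguments and setter names may differ slightly between versions, and `data/camera.cal` is a placeholder for your own calibration file):

```cpp
const int width = 640, height = 480;

// Up to 8 patterns per image, 6x6 pattern sampling, no template patterns.
ARToolKitPlus::TrackerSingleMarker tracker(width, height, 8, 6, 6, 6, 0);

// Load the camera calibration and set the near/far clipping planes (mm).
if (!tracker.init("data/camera.cal", 1.0f, 1000.0f)) {
    printf("ERROR: tracker init failed\n");
    return -1;
}

tracker.setPatternWidth(40);                              // 40 mm markers
tracker.setBorderWidth(0.125f);                           // thin BCH border
tracker.setPixelFormat(ARToolKitPlus::PIXEL_FORMAT_BGR);  // OpenCV default
tracker.setUndistortionMode(ARToolKitPlus::UNDIST_LUT);   // fast undistortion
tracker.setMarkerMode(ARToolKitPlus::MARKER_ID_BCH);      // BCH ID markers
tracker.setPoseEstimator(ARToolKitPlus::POSE_ESTIMATOR_RPP);
```

Note the pixel format: OpenCV delivers frames as BGR, so telling the tracker this up front avoids a manual channel swap.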

After initialization we can start detecting markers in the image. Where the image comes from is up to you: a camera, a static image, or a video file, depending on the task. In any case, to detect the markers we need the RGB pixel data of a video frame. In my case I grabbed frames from a video camera using the OpenCV library. The detection itself is just one line; the rest is drawing/display code:
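A capture-and-detect loop along these lines (again a sketch against the ARToolKitPlus 2.x API, continuing from a configured `tracker` object; check `calc()`'s exact signature in your version's headers):

```cpp
// Grab frames with OpenCV and run ARToolKitPlus detection on each one.
cv::VideoCapture cap(0);
cv::Mat frame;
while (cap.read(frame)) {
    // Detection itself is a single call on the raw BGR pixel buffer:
    std::vector<int> ids = tracker.calc(frame.data);
    tracker.selectBestMarkerByCf();   // keep the most confident marker

    if (!ids.empty()) {
        // OpenGL-style column-major 4x4 pose of the selected marker.
        const ARToolKitPlus::ARFloat* mv = tracker.getModelViewMatrix();
        // ... hand 'mv' to OpenGL, draw overlays with OpenCV, etc.
    }

    cv::imshow("AR", frame);
    if (cv::waitKey(1) == 27) break;  // Esc quits
}
```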

Drawing the detected markers is left to the reader to implement. In the final project I used both OpenGL and OpenCV functionality to display the results. In general, after detection you get information about each marker's center, edges and projections, an OpenGL-style modelview matrix (the orientation), and the marker IDs, from which you know which markers are visible.

The results are demonstrated in the video below (OpenCV and OpenGL windows). To realize at least a minimal AR idea in the project, I added code that draws a 3D tree model at the center of each detected tag.

III. Summary and outlook

In general ARToolKitPlus is not a perfect solution, but it is a pretty good one. Among the shortcomings: if even one corner of a marker is hidden from the scene, the marker is no longer detected, even though most of it may still be visible. Sometimes the marker's orientation matrix (modelview) comes out degenerate (zeroed) while the marker is detected and visible, which happens particularly when using the RPP orientation method. On the plus side, stability is good: the detected marker position and orientation change/jitter very little over time, since past (history) values are automatically taken into account. You can do some really interesting things with this library, and it is worthy of attention, but the worst part is that it is no longer being developed :(

Finally, it is worth mentioning an actively developed AR area that aims to do without markers, using only the natural properties of objects, such as the human hand (Handy AR - source), tracking its fingers and so on, as shown below:

Other newer methods (used in Qualcomm's Vuforia and others) are based on searching for characteristic features in images: corners, edges and the like are compared. Images with the same or similar features (e.g. a magazine cover or a photo) are then located in the new image:

Another promising approach is tracking features across several frames (PTAM: Parallel Tracking and Mapping). This approach tracks features of the environment, builds a set of candidate planes from them, and then allows drawing an object on top, as in the video below:

The whole project's code can be downloaded from the link below. Good luck with your experiments, and until next time!

Project source - download
VR ARToolkit OpenCV and use augmented reality creation Reviewed by Unknown on July 12, 2017 Rating: 5


All Rights Reserved by JackyLe © 2017-2018
Edited by: Jacky Le | Youtube Channel: JACKY LE
