Why Model Tracking?
Model Tracking enables you to localize desired objects in the camera image by means of computer vision techniques. To do that, Model Tracking uses 3D (CAD) data of these real, physical objects (also called tracking targets) as reference information that “tells” the computer vision system which objects it should find. […]
Can I track my own, custom objects with VisionLib using my 3D models?
Of course you can. You will need well-matching 3D models of the physical objects you want to track; VisionLib uses them as so-called model references for tracking. You can change these 3D models before deployment and dynamically during runtime, and you can manage the model references in the .vl config file. In Unity, […]
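For illustration, here is a minimal sketch of a .vl model tracker configuration that references a custom 3D model. It roughly follows the examples in the VisionLib documentation, but the parameter names (modelURI, metric, useColor), the project_dir: URI scheme, and the values are shown as placeholders and should be checked against the documentation for your VisionLib version:

```json
{
    "type": "VisionLibTrackerConfig",
    "version": 1,
    "tracker": {
        "type": "modelTracker",
        "version": 1,
        "parameters": {
            "modelURI": "project_dir:MyCustomObject.obj",
            "metric": "m",
            "useColor": true
        }
    }
}
```

Here, modelURI points to the CAD model that serves as the tracking reference, and metric declares the unit the model was built in.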
The tracking doesn’t (re-)initialize correctly
By default, VisionLib collects and stores further poses on the fly, in order to use them as fallback poses when tracking gets lost. Sometimes these poses might be corrupted or inappropriate, though. In that case, you can call a “soft reset”, which sets the VisionLib tracking state to “lost” and resets to the original init pose. If this doesn’t help, you can […]
The augmentation on HoloLens has an offset
We recommend testing the tracking of your model with a mobile or desktop application first, to optimize tracking parameters more quickly. Still, holograms on HoloLens can be displaced by a few centimeters, depending on the position of the glasses in front of your eyes. This can easily be solved by moving the HoloLens on your head a […]
During initialization, I can’t match the line model with my tracking object: the line model or augmentation appears skewed
Aside from the quality threshold (minInlierRatioInit), which may be set too high in the .vl config file for a good initialization, this usually indicates a bad camera calibration. Whenever you feel that tracking and initialization are working only moderately well, or your line model and your tracked object appear to fit badly “here and there” while trying […]
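If you want to rule out the threshold first, you can lower it slightly in the tracker’s parameters block of the .vl file. The excerpt below is only an illustrative sketch: the values are hypothetical starting points, and the companion parameter minInlierRatioTracking is included for context and may be named differently in your VisionLib version:

```json
{
    "tracker": {
        "parameters": {
            "minInlierRatioInit": 0.6,
            "minInlierRatioTracking": 0.55
        }
    }
}
```

If lowering the threshold does not change the skew, recalibrating the camera is the more likely fix.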
How can I change the model reference used for tracking?
You can change the 3D model used for tracking before deployment and dynamically during runtime. You can change and manage 3D model references in the .vl config file. In Unity, the easiest way is to place your model inside the StreamingAssets folder in your project’s Assets directory and reference it in the .vl config file. […]
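Assuming the .vl file lives next to the model under StreamingAssets (for example at Assets/StreamingAssets/VisionLib/), the reference could look like the excerpt below. Only the relevant part of the parameters block is shown, and the project_dir: scheme and folder layout are given for illustration; check the documentation for the exact URI schemes supported by your VisionLib version:

```json
{
    "tracker": {
        "parameters": {
            "modelURI": "project_dir:MyCustomObject.obj"
        }
    }
}
```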
Is there an ARKit, ARCore, or HoloLens SLAM integration?
Yes, there is an integration for ARKit, ARCore, and HoloLens SLAM. You can use external SLAM techniques and VisionLib’s Model Tracking together. Details on how to use this can be found in the documentation. With the 20.10.1 release, we also added an integration for ARFoundation. Find more details on the ARFoundation documentation page.
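In the .vl config, the SLAM extension is typically switched on via a tracker parameter. The excerpt below is a sketch: the extendibleTracking flag is shown as an illustrative name and should be verified against the documentation for your release:

```json
{
    "tracker": {
        "type": "modelTracker",
        "parameters": {
            "extendibleTracking": true
        }
    }
}
```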
How can I set and change the init pose?
In Unity, init poses are created and set by using the VLInitCamera prefab. Drag and drop this prefab into your hierarchy. During development, you can control the init pose by pointing the VLInitCamera towards the 3D model and by moving and placing it accordingly. Choose a placement that gives you a nice and reasonable spot on your […]
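Outside Unity, an init pose can also be written directly into the .vl config file. The sketch below shows the general idea, with the translation t (in the model’s metric) and the rotation q as a quaternion; the exact field names and conventions are illustrative and may differ by version, so compare with the init pose examples in the documentation:

```json
{
    "tracker": {
        "parameters": {
            "initPose": {
                "t": [0.0, 0.0, 0.65],
                "q": [0.0, 0.0, 0.0, 1.0]
            }
        }
    }
}
```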
Can I change the init pose during runtime?
Yes, you can change the init pose during runtime by moving the VLInitCamera prefab. Please also see the articles on working with init poses and re-initialization in the documentation.
Can I detect my physical objects from any position?
Yes, basically you can. It’s a question of setting an appropriate, so-called init pose (more info in the documentation), or you can use Auto Initialization. However, not every object has a well-detectable shape or geometry from all angles. Thus, we recommend avoiding detection or initialization from such (unsuitable) angles that might lead […]