VisionLib Release 2.3.0

Object Tracking. Real Smooth. 

The new release brings substantial improvements to the combination of Model Tracking and SLAM. While the former is the de-facto standard for detecting and localizing objects, the latter tracks the environment and lets you quickly place superimpositions stably in the world.

Combined, the two are a perfect match for getting more out of AR: register content precisely to objects and also into the space around them.

The new release is all about better and more stable augmentation results where model tracking and SLAM meet: overall better tracking results, faster re-initialization, and more control over the states in between both tracking techniques.

Improved URP Support

Unity’s Universal Render Pipeline has evolved into a powerful graphics solution that combines appeal with speed and performance. Be it for marketing purposes, product visualisation or after sales solutions to enhance the buying process: some AR projects simply need a stunning graphical experience.

With the new release, there are fewer boundaries to accomplishing this. While a URP extension has been available for some time, we’re excited that this release brings major, much-improved URP support.

Improved Re-Initialization for SLAM-extended Model Tracking

With this release we introduce a revised workflow that recovers more quickly from the “critical” tracking state when using SLAM-extended Model Tracking.

Tracking state “critical” indicates that tracking is either losing quality or about to be lost altogether. With model tracking alone, the “critical” state quickly causes tracking to stop, so users can re-initialize. With SLAM-enhanced tracking enabled, however, the SLAM map sometimes appeared to be valid but eventually drifted or moved away from the model target, causing the augmentation to get stuck at incorrect places.

In the new release, VisionLib decides much faster when to re-initialize the (model) tracking. Overall enhanced handling of invalid or implausible SLAM data improves tracking quality. And developers get more control over the AR behavior, making the experience easier for users.

Technically, VisionLib uses a pose predicted by SLAM alongside the model tracking pose. Developers gain control over this functionality with two new options to re-initialize model tracking even if SLAM prediction is still possible. This allows deciding if and when the SLAM pose overrides the model tracking pose and vice versa:

allowedNumberOfFramesSLAMPrediction: Limits the total number of frames that may be predicted via SLAM – a smaller number lets the user re-initialize manually sooner.

allowedNumberOfFramesSLAMPredictionObjectVisible: Limits the number of prediction frames in which the user is looking at the predicted model position – a smaller number re-initializes tracking sooner when the model has moved while it was out of view.
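As a rough sketch, the two parameters might appear in a .vl tracking configuration like this; the surrounding structure follows the usual modelTracker layout, and the model URI and values shown here are placeholders, not recommendations:

```json
{
    "type": "VisionLibTrackerConfig",
    "version": 1,
    "tracker": {
        "type": "modelTracker",
        "version": 1,
        "parameters": {
            "modelURI": "project-dir:MyModel.obj",
            "extendibleTracking": true,
            "allowedNumberOfFramesSLAMPrediction": 30,
            "allowedNumberOfFramesSLAMPredictionObjectVisible": 10
        }
    }
}
```

Smaller values hand control back to the user sooner; larger values let SLAM bridge longer gaps in model tracking.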

Try the new behavior and test it with your projects and use cases.

Updates For Unity

Improved URP Support

We mentioned URP support at the beginning. It now takes only a few steps to start developing VisionLib projects based on URP. Read our new article to learn how to get started, or how to upgrade existing projects to URP. › Read URP Support article.

New Tracking Config Scene

In order to help assess model tracking results and keep the workflow of integrating VisionLib into Unity projects lean, we introduce a new ModelTrackingSetup scene for standard, mobile, and HoloLens development. It replaces the former AdvancedModelTracking scene.

This scene lets you test models directly inside Unity, tweak parameters, and assess overall tracking quality before starting custom scene development.

For these purposes, it offers a debug-view, along with UI elements enabling direct manipulation of tracking parameters and initialization data at runtime.

Once a suitable setup is found, you can now save a .vl configuration file directly from within the scene. Either run the scene inside Unity or deploy it on mobile devices, make changes and save them there.

When finished, fetch the saved configuration from the mobile device and use it on your desktop for further development. A time-saver, particularly when an object can only be reached with the mobile device and cannot be tested from the desktop:

For development, we’ve added the ModelTrackerParameters component to the VLModelTracker prefab. Here, one can set a URI and call SaveCurrentConfiguration to save the current configuration to a file.
For HoloLens we’ve added the new HoloLens2ModelTrackingSetup scene. It uses MRTK functionality to provide a good and consistent UI to adjust and save tracking parameters directly on the HoloLens 2.
There is also a new VLImageSourceParameters prefab variant for HoloLens scenes that contains the FieldOfView parameter.
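As a rough sketch of that workflow in script: the component and method name below come from the release notes, but the field for the target URI and the URI scheme are assumptions, so check the component’s inspector or the reference documentation for the actual names:

```csharp
using UnityEngine;
using Visometry.VisionLib.SDK.Core; // namespace assumed

// Hypothetical helper: saves the currently tuned tracking setup to a .vl file.
public class ConfigurationSaver : MonoBehaviour
{
    // Drag in the ModelTrackerParameters component from the VLModelTracker prefab.
    public ModelTrackerParameters trackerParameters;

    public void SaveTunedSetup()
    {
        // Target URI is illustrative; adjust scheme and file name to your project.
        trackerParameters.uri = "local-storage-dir:TunedTracking.vl";
        trackerParameters.SaveCurrentConfiguration();
    }
}
```

Wired to a UI button, this is essentially what the new setup scenes do when you save a configuration on device.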

Learn more about workflows to create custom tracking configurations in Unity with this scene:

for standard and mobile development

for model tracking on HoloLens

Updates to the Tracking Configuration Component

We’ve added an input source selection to the `TrackingConfiguration` component and expanded the existing input options. You can now choose to use what’s specified in the tracking configuration, let users select from the available input sources at runtime, or use the new option of an image sequence as the tracking input inside the editor.

The latter makes development much easier. With the new functionality, input sources can be changed without manually editing the tracking configuration file.
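For context, selecting an image sequence used to mean adding an input section like the following to the tracking configuration by hand; the exact key names and URI below are illustrative, and the new component option now handles this step for you:

```json
{
    "type": "VisionLibTrackerConfig",
    "version": 1,
    "input": {
        "useImageSource": "imageSequence",
        "imageSources": [
            {
                "name": "imageSequence",
                "type": "imageSequence",
                "data": {
                    "uri": "project-dir:recordings/*.jpg"
                }
            }
        ]
    },
    "tracker": {
        "type": "modelTracker",
        "version": 1,
        "parameters": {
            "modelURI": "project-dir:MyModel.obj"
        }
    }
}
```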

Other Changes Worth Highlighting

For HoloLens, we’ve added the package VisionLib.SDK.MRTK.Examples with examples for using VisionLib together with MRTK.
We reduced the file size of the libraries by about 6 % to 11 %, depending on the platform.
The PosterTracker now reports the critical state when the pose was only predicted using SLAM. Previously, it reported the tracked state in this situation; the two cases can now be distinguished.
For image sequences recorded with extendibleTracking active, we now also record timestamps, which are read and used when replaying the sequence.
We’ve improved pose smoothing: it is now much faster and cleaner, especially with staticScene enabled.

staticScene, a parameter indicating that tracked objects are not expected to move (in order to increase tracking performance), has changed behavior. If you used it before, learn about the minor changes in the Change Log.
Working with licenses under Windows has become more convenient. The new behavior is less sensitive, for example, to changes in network settings.
Config file JSON syntax is now more type-sensitive, so you get better hints for errors during development. For example, if a parameter expects an int but instead gets an int inside a string, this is now reported.
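In the snippet below, for example, the first parameter passes the stricter check, while the second – an integer wrapped in a string – would now be reported as a type mismatch (parameter names reused from this release purely for illustration):

```json
{
    "allowedNumberOfFramesSLAMPrediction": 30,
    "allowedNumberOfFramesSLAMPredictionObjectVisible": "10"
}
```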


All Updates In Detail

There are many more improvements and changes across the SDK. Read the detailed Release Notes and have a look at our Release Blog post.

Learn VisionLib – Workflow, FAQ & Support

We help you apply Augmented and Mixed Reality in your project, on your platform, or within enterprise solutions. Our online documentation and Video Tutorials grow continuously and deliver comprehensive insights into VisionLib development.

Check out these articles: whether you’re a pro or a starter, our new documentation articles on understanding tracking and the new workflow for setting up tracking configurations in Unity will give you a clearer view of VisionLib and Model Tracking.

For troubleshooting, look up the FAQ or write us a message at request@visionlib.com.

Find these and more helpful insights at our Support Page.



Stay Informed

Exciting news around industrial #AR, core technology and #XR enterprise services – for developers, business leaders and everyone else. Stay up to date and join the conversation:

Get VisionLib

VisionLib for Developers
 
Get the Tracking SDK and a trial license for Unity & start developing immediately.
For Business Leaders
 
Make AR happen in your business – learn how from our experts.