We are excited to announce the release of ZED SDK 4.0, which introduces a range of new features and enhancements to our ZED cameras. Our latest update supports the ZED X and ZED X Mini cameras, designed specifically for autonomous mobile robots in indoor and outdoor environments. We are also introducing an improved NEURAL depth mode, which offers even more accurate depth maps in challenging situations such as low-light environments and textureless surfaces. And we are proud to introduce the new multi-camera Fusion API, which makes it easier than ever to fuse data coming from multiple cameras. This module handles time synchronization and geometric calibration, along with 360° fusion of noisy data coming from multiple cameras and sensor sources. We believe these updates will unlock even more potential for our users to create innovative applications that push the boundaries of what is possible with depth-sensing technology.

Here's a closer look at some of the new features in ZED SDK 4.0:

**Added support for the new ZED X and ZED X Mini cameras.** Introduced new VIDEO_SETTINGS, RESOLUTION, and sl::InputType parameters that let users run the same code on both GMSL and USB 3.0 cameras without any added complexity.

**Introduced new Multi-Camera Fusion API.** The new module allows seamless synchronization and integration of data from multiple cameras in real time, providing more accurate and reliable information than a single-camera setup. Additionally, the Fusion module offers redundancy in case of camera failure or occlusions, making it a reliable solution for critical applications. The new API can be found in the header sl/Fusion.hpp as the sl::Fusion API. In 4.0 EA (Early Access), multi-camera fusion supports multi-camera capture, calibration, and body tracking fusion. All SDK capabilities will be added to the Fusion API in the final 4.0 release.

**Introduced new Geo-tracking API for accurate global location tracking.** During a geo-tracking session, the API constantly updates the device's position in the real world by combining data from an external GPS with ZED camera odometry as the device moves, delivering latitude and longitude with centimeter-level accuracy. By fusing visual odometry with GPS data, we can compensate for GPS dropouts in challenging outdoor environments and provide more accurate and reliable positioning information in real time.

**Improved NEURAL depth mode.** NEURAL depth is now more robust to challenging situations such as low light, heavy compression, noise, and textureless areas such as plain interior walls, overexposed areas, and exterior sky. Our new ZEDnet Gen 2 AI model now powers the stereo depth sensing module, providing enhanced performance. NEURAL depth in 4.0 offers a glimpse of what's to come, as we plan to roll out an even more robust model later in the year.

**Introduced new Body Tracking Gen 2 module.** The new module employs machine learning to infer up to 70 landmarks of a body from a single frame. It goes beyond existing pose models, which are trained on only 17 key points, and enables localization of a new topology of 38 and 70 human body key points, making it ideal for advanced body tracking use cases.

- BODY_38: New body model with feet and simplified hands. Body fitting now provides full, accurate orientations for the feet and hands.
- BODY_70: New body model with hand and finger tracking, with 4 key points per finger.

**Introduced Object Detection as a standalone module, separate from Body Tracking.**

**Added concurrent execution of different AI models.** Users can now run multiple instances of Object Detection and/or Body Tracking simultaneously. For example, body tracking could be run concurrently with a ball object detection model.
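To make the Fusion workflow concrete, here is a minimal sketch of fusing body tracking data from several cameras through sl::Fusion. It reflects our reading of the 4.0 EA API in sl/Fusion.hpp; the exact type and method names (InitFusionParameters, subscribe, BodyTrackingFusionParameters, retrieveBodies) should be checked against the headers shipped with your SDK version.

```cpp
// Sketch: fuse body tracking from multiple cameras via the sl::Fusion API.
// Assumes each camera is already publishing its data to the Fusion module;
// names below are taken from our reading of the 4.0 EA headers.
#include <sl/Camera.hpp>
#include <sl/Fusion.hpp>
#include <vector>

int main() {
    sl::Fusion fusion;
    sl::InitFusionParameters init_params;
    fusion.init(init_params);

    // Subscribe to each sender camera discovered from a calibration file
    // or at runtime (left as a placeholder here).
    std::vector<sl::CameraIdentifier> cameras; /* fill with discovered senders */
    for (auto &cam : cameras)
        fusion.subscribe(cam);

    sl::BodyTrackingFusionParameters bt_params;
    fusion.enableBodyTracking(bt_params);

    sl::Bodies fused_bodies;
    while (true) {
        // process() synchronizes and fuses the incoming camera streams.
        if (fusion.process() == sl::FUSION_ERROR_CODE::SUCCESS)
            fusion.retrieveBodies(fused_bodies);  // one fused skeleton set
    }
    return 0;
}
```

Because fusion happens in a dedicated module rather than in application code, adding or losing a camera does not change the consuming loop, which is what provides the redundancy described above.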
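The geo-tracking flow can be sketched as a loop that feeds GNSS fixes into the Fusion module and reads back a global pose. This is an illustrative sketch only: the names ingestGNSSData, GNSSData, GeoPose, and getGeoPose are our assumptions about the 4.0 API and should be verified against the SDK headers.

```cpp
// Sketch: combine ZED visual odometry with an external GNSS receiver to get
// a global (latitude/longitude) pose. API names here are assumptions based
// on the 4.0 Geo-tracking API and may differ in your SDK version.
#include <sl/Fusion.hpp>

void geo_tracking_step(sl::Fusion &fusion, const sl::GNSSData &gnss_fix) {
    // Feed the latest GPS fix into the fusion module; between fixes (or
    // during GPS dropouts) camera odometry keeps the pose updated.
    fusion.ingestGNSSData(gnss_fix);

    if (fusion.process() == sl::FUSION_ERROR_CODE::SUCCESS) {
        sl::GeoPose geo_pose;
        // Returns the device pose expressed in world coordinates
        // (latitude/longitude), once GNSS/VIO calibration has converged.
        fusion.getGeoPose(geo_pose);
    }
}
```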
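Using the improved NEURAL depth mode requires no new code beyond selecting it at camera open time. A minimal sketch, using standard ZED SDK calls (sl::InitParameters, DEPTH_MODE::NEURAL, retrieveMeasure):

```cpp
// Minimal sketch: select the NEURAL depth mode when opening a camera.
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    sl::InitParameters init_params;
    init_params.depth_mode = sl::DEPTH_MODE::NEURAL;  // ZEDnet-powered depth

    if (zed.open(init_params) != sl::ERROR_CODE::SUCCESS)
        return 1;

    sl::Mat depth;
    if (zed.grab() == sl::ERROR_CODE::SUCCESS)
        zed.retrieveMeasure(depth, sl::MEASURE::DEPTH);  // 32-bit float depth map

    zed.close();
    return 0;
}
```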
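Concurrent AI modules can be sketched as follows: each module instance is addressed by an instance id, so body tracking and object detection run side by side on the same camera. The parameter names (instance_module_id, BODY_FORMAT::BODY_38) reflect our reading of the 4.0 API and should be checked against your SDK headers.

```cpp
// Sketch: run Body Tracking Gen 2 and standalone Object Detection
// concurrently on one camera, each as its own module instance.
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;
    if (zed.open() != sl::ERROR_CODE::SUCCESS) return 1;

    // Instance 0: Body Tracking Gen 2 with the new 38-keypoint format.
    sl::BodyTrackingParameters body_params;
    body_params.body_format = sl::BODY_FORMAT::BODY_38;
    body_params.instance_module_id = 0;
    zed.enableBodyTracking(body_params);

    // Instance 1: standalone Object Detection (e.g. a ball detector).
    sl::ObjectDetectionParameters od_params;
    od_params.instance_module_id = 1;
    zed.enableObjectDetection(od_params);

    sl::Bodies bodies;
    sl::Objects objects;
    while (zed.grab() == sl::ERROR_CODE::SUCCESS) {
        // Each retrieve call targets one module instance by id.
        zed.retrieveBodies(bodies, sl::BodyTrackingRuntimeParameters(), 0);
        zed.retrieveObjects(objects, sl::ObjectDetectionRuntimeParameters(), 1);
    }
    zed.close();
    return 0;
}
```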