
Multimodal Vision Imaging Module for Robotic Platforms

Ahmet Ö.

  • EMS Engineer


    STMicroelectronics and Leopard Imaging have developed a multimodal visual module compatible with NVIDIA Jetson, which combines image processing, depth sensing, and motion perception to support robotic vision and physical AI applications.

    This new module is designed for use in human-like and advanced robots, offering a compact and low-power architecture suitable for edge AI systems. It integrates 2D imaging, 3D depth sensing, and inertial motion tracking into a synchronized data stream, enabling robots to better understand their environment.

    ### Features of the Multimodal Imaging Module

    • 5.1-megapixel VB1940 RGB-IR image sensor with rolling and global shutter modes, capturing fast-moving objects without motion distortion
    • V943 sensor variant from the BrightSense family, available in monochrome or RGB-IR configurations for industrial and consumer applications
    • 6-axis LSM6DSV16X inertial measurement unit (IMU) with embedded machine learning core for motion tracking
    • Low-power sensor fusion technology (SFLP) and electrostatic Qvar sensing features support user interface and interaction-based robotic applications
    • VL53L9CX direct time-of-flight (dToF) LiDAR module performs depth sensing up to 9 meters with 54x42 zone resolution for precise spatial mapping
    • 55° × 42° field of view and approximately 1° angular resolution enable detection of small objects
    • Supports 3D scene reconstruction at up to 100 frames/second
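The resolution and frame-rate figures above can be cross-checked with simple arithmetic. The snippet below (illustrative only, derived from the quoted specs rather than any datasheet) shows how the ~1° angular resolution follows from dividing the field of view by the zone count, and what the per-frame time budget is at 100 frames/second:

```python
# Illustrative arithmetic based on the figures quoted above:
# 54x42 dToF zones, 55 deg x 42 deg field of view, 100 frames/second.
fov_h_deg, fov_v_deg = 55.0, 42.0   # field of view
zones_h, zones_v = 54, 42           # dToF zone grid

res_h = fov_h_deg / zones_h         # ~1.02 deg per zone horizontally
res_v = fov_v_deg / zones_v         # 1.0 deg per zone vertically

fps = 100
frame_budget_ms = 1000 / fps        # 10 ms to acquire and process each frame

print(f"angular resolution: {res_h:.2f} x {res_v:.2f} deg per zone")
print(f"frame budget: {frame_budget_ms:.1f} ms")
```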

    ### Full Compatibility with the NVIDIA Robotics Ecosystem

    The module supports multi-gigabit data transfer over Ethernet via NVIDIA Holoscan Sensor Bridge, enabling real-time transfer of high-bandwidth sensor data to Jetson platforms. Fully compatible with the NVIDIA Isaac platform, the module offers easy integration and rapid development cycles with AI models, simulation tools, and development libraries.

    ### Physical AI and Sensor Fusion in Robotics

    Real-time high-bandwidth data transfer and synchronized multi-sensor usage enable robots to process environmental data instantly. This allows AI to perform perception, decision-making, and action processes with low latency. Standardized integration of sensors within the NVIDIA ecosystem reduces development complexity while supporting scalable deployment in robotic platforms.
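Synchronized multi-sensor fusion of this kind typically starts by aligning streams on a shared clock: the IMU samples far faster than the cameras, so each frame is paired with the nearest inertial sample by timestamp. A minimal sketch of that alignment step (all names, rates, and timestamps here are hypothetical, not the module's actual interface):

```python
from bisect import bisect_left

def nearest_imu_sample(imu_timestamps, frame_ts):
    """Return the index of the IMU sample closest in time to a camera frame.

    imu_timestamps must be sorted ascending; because the IMU runs much faster
    than the camera (hundreds of Hz vs. tens of fps), several samples
    bracket every frame.
    """
    i = bisect_left(imu_timestamps, frame_ts)
    if i == 0:
        return 0
    if i == len(imu_timestamps):
        return len(imu_timestamps) - 1
    # pick whichever neighbor is closer to the frame timestamp
    before, after = imu_timestamps[i - 1], imu_timestamps[i]
    return i if after - frame_ts < frame_ts - before else i - 1

# Hypothetical timestamps in ms: IMU at a 5 ms period, camera frame at t=33
imu_ts = [0, 5, 10, 15, 20, 25, 30, 35, 40]
print(nearest_imu_sample(imu_ts, 33))  # -> 7 (the sample at t=35)
```

Real pipelines interpolate between the bracketing samples rather than snapping to one, but nearest-neighbor matching illustrates why hardware-synchronized timestamps across the 2D, depth, and inertial streams matter.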

    This development reflects the industry trend towards tight integration of sensor and processing platforms, focusing on interoperability and data consistency in autonomous and semi-autonomous machines.
     