
Advantages of having RGB and depth data in a single time of flight camera

Recognizing the object to which the depth is measured in an embedded vision system requires the camera to deliver both the depth data and RGB data. Learn the advantages of obtaining both types of data in a single frame.

By e-con Systems · Published 2 years ago · 5 min read

Time-of-flight cameras are among the most common depth-sensing technologies on the market today. They use active light illumination to measure depth for obstacle detection, enabling autonomous navigation. Autonomous mobile robots and autonomous guided vehicles are two of their most common applications. To measure depth, a time-of-flight camera only has to deliver the depth data of the scene. The ability to stream RGB data alongside it, however, offers certain advantages. In this post, we'll look at those benefits, as well as some of the use cases that call for this capability.

Why don’t depth sensors capture color information by default?

Depth cameras determine the distance (or depth) of objects in a scene. Stereo cameras and time-of-flight cameras are the most common depth-sensing devices. While stereo cameras use stereo disparity to estimate depth, time-of-flight cameras – including LiDAR sensors, which are a form of time-of-flight sensor – use light illumination.

One of the main distinctions between LiDAR and other types of time-of-flight sensors is that the former uses pulsed lasers to create a point cloud from which a 3D map is built. The latter, on the other hand, uses light detection to build 'depth maps', which are normally created with an embedded camera. Color information is not necessary to calculate depth in either case, and adding it would increase the computing load on the camera system. As a result, a standard depth camera does not have the capability of delivering color (RGB) data.
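The relationship between a depth map and a point cloud can be made concrete with a short sketch. Using the standard pinhole camera model, each depth-map pixel can be back-projected into a 3D point; the intrinsic parameters (fx, fy, cx, cy) below are hypothetical values for illustration, not the calibration of any particular sensor:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into an N x 3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Synthetic example: a flat wall 2 m away, seen by a 640 x 480 depth sensor
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 3)
```

Note that nothing here needs color: depth alone is enough to build the geometry, which is why color support is an add-on rather than a default in depth cameras.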

How does having depth and RGB data in a single frame help?

As discussed above, a depth camera – such as a time-of-flight camera – does not by default capture color information. In applications that require object recognition, however, this is a disadvantage, because identifying the type of a target object in most cases requires RGB data.

For instance, consider an autonomous tractor. It has to detect when anything comes near the PTO (Power Take-Off) unit. For safety reasons, a human calls for more caution than an animal or an object: the tractor should stop a few meters away from a human, but only a few centimeters away from an animal or an object. This in turn requires the camera system in the tractor to identify the type of the approaching object, which is possible only by analyzing RGB data.

Now that we understand the need for capturing RGB data in a time-of-flight camera, the next obvious question is: why do the depth and RGB data need to be delivered in a single frame? When the two are captured in separate frames, they must be matched pixel by pixel – a pixel in the RGB frame has to be merged with the corresponding pixel in the depth frame. This requires additional processing, and achieving accurate registration is always a challenge. Delivering the two pieces of data in a single frame solves the problem in one go and skips this step entirely.

Embedded vision applications that need both RGB and depth data in one frame

Whether the two types of data are needed in a single frame depends entirely on the end application. Take the same example of autonomous tractors: this feature is required only if the tractor manufacturer wants to identify the object for improved safety. If the objective is merely to detect an obstacle without recognizing it, a normal depth camera or time-of-flight camera will do the job. The same applies to any other autonomous vehicle. Assuming that object recognition is required, the following are some of the other embedded vision applications that need a combination of depth and RGB data in a single frame:

Autonomous mobile robots such as cleaning robots, goods-to-person robots, service robots, and companion robots

Robotic arms

Autonomous guided vehicles

People counting systems
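For all of the applications above, the payoff of single-frame delivery is that depth and color are already registered. A minimal sketch of what "merged" data looks like – assuming the two frames are already spatially aligned and share the same resolution (the very alignment that separate-frame capture has to compute, and that single-frame delivery avoids) – is simply stacking them into one RGBD array:

```python
import numpy as np

def fuse_rgbd(rgb, depth):
    """Merge an aligned RGB frame (H x W x 3, uint8) and a depth frame
    (H x W, millimeters, uint16) into one H x W x 4 'RGBD' array, so
    each pixel carries both its color and its distance. NumPy promotes
    the result to uint16 to hold the depth values."""
    if rgb.shape[:2] != depth.shape:
        raise ValueError("frames must share the same resolution")
    return np.dstack([rgb, depth])

# Synthetic example: a gray scene with everything 1.5 m away
rgb = np.full((480, 640, 3), 128, dtype=np.uint8)
depth = np.full((480, 640), 1500, dtype=np.uint16)  # depth in mm
rgbd = fuse_rgbd(rgb, depth)
r, g, b, z = rgbd[240, 320]  # one pixel: color channels plus depth
```

With separately captured frames, this stacking step must be preceded by per-pixel registration using the extrinsic calibration between the two sensors; that is the extra processing, and the accuracy risk, that a single-frame camera eliminates.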

e-con Systems’ time of flight camera with depth and RGB data in a single frame

e-con Systems – leveraging more than a decade of experience in depth-sensing technologies – has developed DepthVista, a time-of-flight camera that delivers both the depth information and the RGB data in one frame for simultaneous depth measurement and object recognition. DepthVista combines a CCD sensor (for depth measurement) with the AR0234 color global shutter sensor from Onsemi (for object recognition). The depth sensor streams 640 x 480 @ 30fps, and the color global shutter sensor streams HD and FHD @ 30fps. This ability to read color and depth information in one go makes the camera suitable for applications that require both depth measurement and object recognition.

To learn more about the features and applications of DepthVista, please visit the product page. We hope this article gave you an understanding of how RGB and depth data can work together to give your autonomous vehicle seamless obstacle detection and object recognition. If you are looking for help in integrating this time-of-flight camera into your machine or device, please write to us at [email protected]. You can also visit the Camera Selector to browse our complete portfolio of cameras.

Reference: https://www.e-consystems.com/blog/camera/technology/advantages-of-having-rgb-and-depth-data-in-a-single-time-of-flight-camera/



