Researchers have developed a new technique that improves the ability of artificial intelligence (AI) programs to identify three-dimensional (3D) objects, and how those objects relate to each other in space, using two-dimensional (2D) images. For example, the work would help the AI used in autonomous vehicles navigate in relation to other vehicles using the 2D images it receives from an onboard camera.
We live in a 3D world, but when you take a picture, it records that world in a 2D image. AI programmes receive visual input from cameras. So if we want AI to interact with the world, we need to ensure that it is able to interpret what 2D images can tell it about 3D space. In this research, we are focused on one part of that challenge: how we can get AI to accurately recognise 3D objects—such as people or cars—in 2D images, and place those objects in space.
– Tianfu Wu, Professor of Electrical and Computer Engineering at North Carolina State University
While the work may be important for autonomous vehicles, it also has applications for manufacturing and robotics. In the context of autonomous vehicles, most existing systems rely on lidar, which uses lasers to measure distance, to navigate 3D space. However, lidar is expensive, and that cost discourages redundancy: putting dozens of lidar sensors on a mass-produced driverless car, for example, would be prohibitively expensive.
If an autonomous vehicle could instead use visual inputs to navigate through space, redundancy becomes affordable. Because cameras are significantly less expensive than lidar, it would be economically feasible to include additional cameras, building redundancy into the system and making it both safer and more robust. The new technique makes that possible: it identifies 3D objects in 2D images and places them in a “bounding box,” which effectively tells the AI the outermost edges of the relevant object.
The technique builds on a substantial amount of existing work aimed at helping AI programs extract 3D data from 2D images. Many of these efforts train the AI by showing it 2D images and placing 3D bounding boxes around objects in the image. These boxes are cuboids, which have eight points—think of the corners on a shoebox. During training, the AI is given 3D coordinates for each of the box’s eight corners, so that the AI understands the height, width and length of the bounding box, as well as the distance between each of those corners and the camera.
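A minimal sketch may help make that geometry concrete. The Python snippet below derives the eight corner coordinates of a cuboid from its centre and dimensions, then measures each corner's distance to the camera. It assumes an axis-aligned box for simplicity (real annotations also encode the box's orientation), and all names and values are illustrative rather than taken from the paper.

```python
import numpy as np

def cuboid_corners(center, dims):
    """Return the 8 corner coordinates of an axis-aligned 3D bounding box.

    center: (x, y, z) of the box centre in camera coordinates (metres).
    dims:   (height, width, length) of the box (metres).
    """
    h, w, l = dims
    # Offsets of each corner from the centre: every +/- combination of
    # half-extents along the three axes gives 2^3 = 8 corners.
    signs = np.array([[sx, sy, sz]
                      for sx in (-1, 1)
                      for sy in (-1, 1)
                      for sz in (-1, 1)], dtype=float)
    half_extents = np.array([l, h, w]) / 2.0  # x: length, y: height, z: width
    return np.asarray(center) + signs * half_extents

# A car-sized box centred 10 m in front of the camera (illustrative values).
corners = cuboid_corners(center=(2.0, 1.5, 10.0), dims=(1.6, 1.8, 4.2))
# Distance from the camera (at the origin) to each of the eight corners.
distances = np.linalg.norm(corners, axis=1)
```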
The training technique uses these coordinates to teach the AI how to estimate the dimensions of each bounding box, and instructs the AI to predict the distance between the camera and the object. After each prediction, the trainers correct the AI, giving it the correct answers. Over time, this allows the AI to get better and better at identifying objects, placing them in a bounding box, and estimating the dimensions of the objects.
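In machine-learning terms, that correction step is a supervised regression loss. The sketch below is a hypothetical PyTorch stand-in, not the authors' actual model: a small head predicts box dimensions and camera-to-object distance, and on each pass the loss measures how far the predictions are from the labelled answers, with the toy features and labels standing in for real image data.

```python
import torch
import torch.nn as nn

# Hypothetical regression head mapping per-object image features to box
# parameters; the names and sizes are illustrative, not the authors' design.
class BoxHead(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.dims = nn.Linear(feat_dim, 3)   # height, width, length (m)
        self.depth = nn.Linear(feat_dim, 1)  # camera-to-object distance (m)

    def forward(self, feats):
        return self.dims(feats), self.depth(feats)

# Toy stand-ins for real image features and labelled answers (assumption).
feats = torch.randn(64, 256)
gt_dims = torch.rand(64, 3) * 4.0
gt_depth = torch.rand(64, 1) * 60.0

head = BoxHead()
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

for step in range(100):
    pred_dims, pred_depth = head(feats)
    # The "correction": the loss quantifies the gap between prediction and
    # ground truth, and backpropagation nudges the weights toward the answer.
    loss = loss_fn(pred_dims, gt_dims) + loss_fn(pred_depth, gt_depth)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```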
What sets the researchers' work apart is how they train the AI, which builds on previous training techniques. Like the previous efforts, they place objects in 3D bounding boxes while training the AI. However, in addition to asking the AI to predict the camera-to-object distance and the dimensions of the bounding boxes, they also ask it to predict the location of each of the box's eight corners and each corner's distance from the centre of the bounding box in two dimensions.
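Loosely, those extra 2D targets can be derived by projecting the cuboid's corners into the image using the camera intrinsics, then taking each projected corner's offset from the projected box centre. The sketch below illustrates that geometry under a simple pinhole model; the intrinsics, box values, and helper names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def project(points_3d, K):
    """Project Nx3 camera-frame points to pixel coordinates (pinhole model)."""
    uvw = points_3d @ K.T
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

# Illustrative pinhole intrinsics: focal length and principal point are
# made-up, roughly KITTI-like values (an assumption, not from the paper).
K = np.array([[721.0,   0.0, 609.0],
              [  0.0, 721.0, 172.0],
              [  0.0,   0.0,   1.0]])

# Eight corners of an axis-aligned, car-sized cuboid 10 m ahead.
center_3d = np.array([2.0, 1.5, 10.0])
half = np.array([4.2, 1.6, 1.8]) / 2.0   # half length, height, width
signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                  for sy in (-1, 1) for sz in (-1, 1)], dtype=float)
corners_3d = center_3d + signs * half

corners_2d = project(corners_3d, K)         # (8, 2) pixel positions
center_2d = project(center_3d[None, :], K)  # (1, 2) projected box centre
# The extra supervision signal: each projected corner's 2D offset from the
# projected box centre, learned alongside dimensions and depth.
offsets_2d = corners_2d - center_2d         # (8, 2)
```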