
AI Learns Cause and Effect of Navigation Tasks


Artificial Intelligence (AI) can learn to solve all sorts of problems, but whether these powerful, pattern-recognising algorithms actually understand the tasks they are performing remains an open question. Researchers at MIT have now shown that a certain type of AI can learn the true cause-and-effect structure of the navigation task it is being trained to perform.

Because these networks can understand the task directly from visual data, they should be more effective than other neural networks when navigating in a complex environment, like a location with dense trees or rapidly changing weather conditions. In the future, this work could improve the reliability and trustworthiness of machine learning agents that are performing high-stakes tasks, like driving an autonomous vehicle on a busy highway.

Because these brain-inspired machine-learning systems are able to perform reasoning in a causal way, we can know and point out how they function and make decisions. This is essential for safety-critical applications.

– Ramin Hasani, Co-lead Author, Computer Science and Artificial Intelligence Laboratory

The new research draws on previous work in which the researchers showed how a brain-inspired type of deep learning system called a Neural Circuit Policy (NCP), built from liquid neural network cells, is able to autonomously control a self-driving vehicle with a network of only 19 control neurons.
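The article does not give the cell equations, but the liquid time-constant (LTC) dynamics underlying these cells are commonly written as follows (a sketch based on the published LTC formulation; the symbols τ, A, f, and θ are assumptions not defined in this article):

```latex
\frac{dx(t)}{dt} = -\left[\frac{1}{\tau} + f\big(x(t), I(t), t, \theta\big)\right] x(t)
                   + f\big(x(t), I(t), t, \theta\big)\, A
```

Here x(t) is the hidden state, I(t) the input, τ a fixed time constant, f a bounded nonlinearity with parameters θ, and A a bias vector. Because f depends on the input, the effective time constant of each neuron varies with what the network sees, which is what gives these cells their "liquid" behaviour.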

The researchers observed that the NCPs performing a lane-keeping task kept their attention on the road’s horizon and borders when making a driving decision, the same way a human would while driving a car. Other neural networks they studied did not always focus on the road.

They found that, when an NCP is being trained to complete a task, the network learns to interact with the environment and account for interventions. In essence, the network recognises if its output is being changed by a certain intervention, and then relates the cause and effect together.

During training, the network is run forward to generate an output, and then backwards to correct for errors. The researchers observed that NCPs relate cause-and-effect during forward-mode and backward-mode, which enables the network to place very focused attention on the true causal structure of a task.
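The forward/backward procedure described above is ordinary gradient-based training; a minimal single-weight sketch (plain Python, not the NCP training code) makes the two modes concrete:

```python
# Minimal forward/backward training loop: fit y = w * x to data by
# running the model forward to get a prediction, then propagating the
# squared-error gradient backward to update the weight.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relation: y = 2x
w = 0.0    # initial weight
lr = 0.05  # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x                  # forward pass: generate output
        grad = 2.0 * (pred - y) * x   # backward pass: d(error^2)/dw
        w -= lr * grad                # correct the weight

print(round(w, 3))  # converges to 2.0
```

Each update pairs one forward run with one backward run, and it is during these paired passes that, per the researchers, NCPs pick out the causal structure of the task.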

The researchers tested NCPs through a series of simulations in which autonomous drones performed navigation tasks. Each drone used inputs from a single camera to navigate. The drones were tasked with travelling to a target object, chasing a moving target, or following a series of markers in varied environments, including a redwood forest and a neighbourhood. They also travelled under different weather conditions, like clear skies, heavy rain, and fog.

The researchers found that the NCPs performed as well as the other networks on simpler tasks in good weather, but outperformed them all on the more challenging tasks, such as chasing a moving object through a rainstorm.

NCPs were the only networks that paid attention to the object of interest while completing the navigation task, wherever they were tested and under different lighting or environmental conditions. This was the only system that could do this causally and actually learn the behaviour the researchers intended it to learn.

Once the system learns what it is actually supposed to do, it can perform well in novel scenarios and environmental conditions it has never experienced. This remains a major challenge for current machine learning systems that are not causal. In the future, the researchers want to explore the use of NCPs to build larger systems; putting thousands or millions of these networks together could enable them to tackle even more complicated tasks.
