Machine learning gets physical, starting with self-driving cars

We're about to see real-world interfaces between humans and AI-driven machines

Today, there are many examples of artificial intelligence interacting with us to make our lives more efficient and effective. Machines recommend products for us to purchase on e-commerce websites, rank news in our social-media feeds, introduce us to people on dating apps, price goods and services in real time, and so on.

However, the common factor with all of these is that each machine is limited to influencing our lives through a software interface with a website or an app. In 2021, AI will go beyond this. We will see the emergence of the first physical interfaces between humans and AI-driven machines.

Today’s autonomous machines operate in controlled and closed environments, such as factories and warehouses, physically separated from humanity. They are rigid, manually programmed machines with limited sensing and intelligence. However, advances in machine learning – such as self-supervised learning in computer vision, new techniques for probabilistic and generative modelling, and model-based reinforcement learning for control – have produced opportunities to create intelligent machines that can interact openly with society, and with limited human supervision.

Machine learning has had a transformational impact on many AI problems, most recently in computer vision and natural-language processing. This has been catalysed by increasing access to petabyte-scale datasets and massive cloud computing, enabling a shift from hand-designed representations to end-to-end machine learning, which allows machines to gain understanding beyond their original programming.

This change hasn't happened yet in robotics because hardware is more challenging than software to scale safely, making training data scarcer in this domain. The recent breakthroughs in reinforcement learning, in which machines beat human world champions at games such as Go and Dota 2, relied on simulations, where effectively unlimited data could be generated to teach the machine. In 2021, however, we will take advantage of the petabytes of training data amassed, over many years of development, by mature robotic platforms such as self-driving cars.

One of the most interesting consequences of autonomous-driving technology is that society will interact with physical machines without explicit consent, much as we interact with software machines today. Pedestrians will not consent to an autonomous robot driving down the street beside them; it will simply be the norm because it is more reliable, safe and efficient. This will demand both extraordinary trust from humanity in self-driving technology and extraordinary performance from that technology, something that, thanks to the data we have now accrued in our work on autonomous vehicles, we are on track to achieve in 2021.

Alex Kendall is co-founder and CEO of Wayve

This article was originally published by WIRED UK