Data is the lifeblood of Artificial Intelligence. Quite simply, the better and richer the data, the more capable the algorithm. This applies both to the training data used to train the algorithm and, importantly, to the input data the algorithm relies on to do its job.
Take the case of autonomous vehicles or advanced driver assistance systems. These systems rely on the eyes - cameras, LIDARs and RADARs - to see the environment around the vehicle. The input from these eyes is then passed on to the brain - the algorithm - which makes sense of what the eyes see.
Most state-of-the-art ADAS and AV algorithms today perceive what these sensors see by drawing bounding boxes around road users. That’s how they detect pedestrians, vehicles and other obstacles.
But human behaviour rarely fits in a box. And human behaviour has a huge impact on how well an AV algorithm performs. A bounding box alone is not sufficient to really perceive pedestrian behaviour, for instance. Is that pedestrian about to cross the road? How much risk does this road user pose? Is that a vulnerable road user?
Enter Humanising Autonomy. A company on a mission to create a global standard for human interaction with automated systems. This is an incredibly interesting company, and I was delighted to have the opportunity to speak to their Co-founder and Chief Product Officer, Leslie Nooteboom.
Think of Humanising Autonomy as a module you could add to the AV brain, one that makes the brain capable of perceiving - and predicting - human behaviour on roads. I would imagine a solution like this could improve road safety by orders of magnitude.
These guys are up to some really fascinating stuff that sits at the intersection of behavioural psychology, vision perception and artificial intelligence. How does that impact the world of autonomous driving? Find out in my very interesting chat with Leslie.