AI Technology Behind Autonomous Cars

September 17th, 2020

AI Behind Self-Driving Cars

Cars that can drive themselves, a staple of science fiction, have started to appear on real roads. Self-driving cars have been an exciting topic for quite some time now.

For most of their history, cars were largely mechanical devices controlled by humans. Nowadays, complex technologies are taking over many of the functions performed by drivers. Many companies are adding some degree of autonomy or intelligence to their cars, and every big manufacturer has started building concept cars with self-driving capabilities.

Through this post I want to shed some light on the technology used to make an autonomous car. It outlines in detail how different aspects of Computer Vision and Machine Learning come together in developing an autonomous system.

This post provides a detailed description of an autonomous car prototype that I worked on a few years ago. The project highlights how to make a rover-like prototype and add autonomous driving features to it for ease of navigation.

Image credit: Waymo

So, what is an autonomous car and what does it do?

Most of our understanding of self-driving cars comes from projects like those of Tesla, Uber and Waymo. An autonomous car has systems and algorithms that control its steering and speed, allowing it to drive itself based on its environment.

How does an autonomous car work?

Image credit: The Economist

To be autonomous, the car must be able to understand and evaluate its environment and take the actions necessary to move. Cameras are the main sensors used to understand the environment; usually there are cameras at the front, rear and sides to cover the entire area around the car.

Apart from this, LIDARs are also used to map the area around the car. LIDAR (Light Detection and Ranging) is a method for measuring distances by illuminating a target with laser light and measuring the reflection with a sensor. Hence, using cameras alone or a combination of cameras and a LIDAR, the area surrounding the car can be mapped.

Information from the cameras and LIDAR acts as input, giving insight into the environment the car is moving through. Another option, apart from cameras and LIDAR, is a radar sensor to map the surroundings.

Other sensors in an autonomous car include GPS and ultrasonic sensors. GPS is used in path planning so that the car can find its way to the required destination. Ultrasonic sensors detect objects that are very close to the vehicle, while radar measures the distance to objects farther away and, using the Doppler effect, the speed of cars in front and behind.

All the sensors provide the autonomous system with real-time inputs about the environment the car is in. The CPU in the car processes all this information and controls the steering wheel angle, braking and acceleration (assuming the car has an automatic transmission). These are the main outputs that the central computer needs to control.

There are other secondary outputs, like the wipers or indicators, that need to be controlled, but the primary outputs (steering wheel angle, braking and acceleration) are what actually drive the car.

So, in a nutshell, to be autonomous the computer system in the car needs to process inputs from various sensors to manage the vehicle's steering, braking and acceleration, all while following traffic rules and avoiding collisions.

How to make an autonomous car prototype?

Everyone is fascinated by the concept of autonomous cars, and many engineering students want to build an autonomous car prototype. A camera can be used instead of a LIDAR or radar to reduce the number of sensor interfaces: it can map the area in front of the bot and detect lanes and obstacles. Ultrasonic sensors can detect close obstacles in all directions.

A GPS module along with a motion sensor can be used for navigation. Using this limited set of sensors reduces the cost and complexity of the system without a big hit on performance. Such a scaled-down autonomous car prototype allows students and other researchers to be part of the innovation in this field.

What was the intent behind this Project?

The motivation behind this project was to contribute to the vision of a smart city by making travel autonomous, simple, safe and comfortable. Our aim was to build a rover-sized vehicle that could navigate autonomously using cameras, GPS and other sensors; the intent was to develop a basic prototype with autonomous navigation ability.

What are the different aspects of the project?

This project has various aspects including Image Processing, Machine Learning and Robotics. Image Processing techniques were used to detect lanes, machine learning was used to detect obstacles in the path and Robotics was used to drive the bot. Various algorithms worked together to navigate the bot and plan its path.

  1. Image Processing and Computer Vision

    Image processing was used to process the input feed from the camera and extract meaningful information. Algorithms to detect lanes, obstacles or sign boards then operate on this data.

    A basic example of lane detection is shown below. An edge detection filter is applied, then objects in the background are removed using a region-of-interest (RoI) mask, and finally a Hough Transform is used to detect the lanes in the image. Another example of the lane detection system follows below.

    Lane Detection using Edge detection filter and Hough Transform

    The lane detection algorithm consisted of the following steps (a code sketch follows below):

    • Images were filtered and converted to binary using a thresholding function on each pixel.
    • An edge detection filter (Sobel filter) was applied to the image.
    • Objects in the image were removed using pixel-by-pixel image subtraction.
    • An RoI mask was applied to remove unnecessary data from the image.
    • A Hough line transform was then used to detect lanes in the image.
    • The output of these operations was a black image with the detected lanes drawn in white, as shown below.

    Lane Detection using Edge detection filter and Hough Transform
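
    To make this concrete, here is a minimal OpenCV sketch of such a pipeline. It uses the Canny edge detector in place of the Sobel filter described above (the overall structure is the same), and the threshold values and RoI coordinates are illustrative assumptions rather than the project's actual parameters.

```python
import cv2
import numpy as np

def detect_lanes(frame):
    # Grayscale and blur to suppress noise before edge detection
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # Edge detection (Canny here; a Sobel filter works similarly)
    edges = cv2.Canny(blurred, 50, 150)

    # Trapezoidal region-of-interest mask: keep only the road area
    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (w // 2 - 50, h // 2),
                     (w // 2 + 50, h // 2), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    masked = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform finds the lane line segments
    lines = cv2.HoughLinesP(masked, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=100)

    # Draw the detected lanes in white on a black image
    output = np.zeros_like(frame)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(output, (x1, y1), (x2, y2), (255, 255, 255), 3)
    return output
```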

  2. Machine Learning and Deep Learning

    Machine learning was used to train the system for accuracy and, eventually, complete autonomy. In this project, machine learning models are used for object detection.

    The obstacle detection algorithm used an edge detection filter along with a trained neural net to classify moving and stationary objects. Both are tracked using a masking filter (red and blue bounding boxes), with moving objects given higher priority than stationary ones. Based on each object's location, a motion planning algorithm computes the trajectory required to avoid it and move ahead.

    Object Detection using AI

    Object Detection using AI: Moving Object Detection
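
    As a simplified illustration of the tracking side, the sketch below marks moving regions with red bounding boxes using frame differencing between consecutive camera frames. This stands in for the trained classifier described above; the threshold and minimum-area values are assumptions.

```python
import cv2

def track_moving_objects(prev_frame, frame):
    # Difference between consecutive frames highlights motion
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, gray)

    # Threshold and dilate to turn the difference into solid motion blobs
    _, thresh = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    thresh = cv2.dilate(thresh, None, iterations=2)

    # Draw red bounding boxes around moving regions (BGR colour order)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:  # ignore tiny blobs / noise
            continue
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    return frame
```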

  3. Motion Control and Planning

    The motion control block processes refined inputs from the image processing block to plan the path and motion using rule-based decisions. It decides whether to steer left or right based on the curvature of the lanes, and the IMUs help the bot maintain a particular trajectory.
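
    As an illustration, here is a minimal sketch of a hypothetical rule-based steering decision built on the Hough transform output from the lane detection step. The pixel offsets and command names (LEFT, RIGHT, FORWARD, STOP) are illustrative assumptions, not the project's actual interface.

```python
import numpy as np

def steering_command(lines, frame_width):
    # lines: lane segments as returned by cv2.HoughLinesP, or None
    if lines is None:
        return "STOP"  # no lanes detected: stop as a safe default

    # Compare the average midpoint of the lane segments with the image centre
    midpoints = [(x1 + x2) / 2 for x1, y1, x2, y2 in lines[:, 0]]
    offset = np.mean(midpoints) - frame_width / 2

    if offset < -30:   # lanes drift left of centre: steer left
        return "LEFT"
    if offset > 30:    # lanes drift right of centre: steer right
        return "RIGHT"
    return "FORWARD"
```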

    Path Planning

    The motion planning system works on the outputs of the lane detection algorithm. That output contains the entire lane in the camera's field of view; however, the immediate decision depends only on the direction up to a few meters ahead of the bot.

    For this, a trapezoidal RoI filter is used to keep only the lanes a few meters ahead. The filter removes the additional data so that path planning decisions cover only the next few meters.

    Autonomous Navigation

    Path planning is governed by two systems: Local Navigation and Waypoint Navigation. Local Navigation uses inputs from the image processing system to decide the immediate motion plan, and it also uses data from the Inertial Measurement Units (IMUs) as feedback for the motion control loop.

    Waypoint Navigation uses GPS locations that act as checkpoints along the desired path between the source and destination. This enables the system to decide on long-term actions when the local navigation system offers multiple path options.

    The system keeps track of both the Local and Waypoint navigation paths. In case of a conflict, local navigation is given preference; later, path planning adjusts to get back on track with respect to Waypoint navigation.
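
    As a rough illustration, a hypothetical sketch of this arbitration is shown below. The function and variable names, and the idea of returning a compass bearing, are assumptions for illustration only.

```python
from math import atan2, degrees

def choose_heading(local_heading, position, waypoint):
    # Bearing from the bot's current GPS position to the next waypoint
    bearing = degrees(atan2(waypoint[1] - position[1],
                            waypoint[0] - position[0]))

    # Local (vision-based) navigation wins whenever it has an opinion;
    # path planning drifts back towards the waypoint bearing afterwards.
    if local_heading is not None:
        return local_heading
    return bearing
```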

  4. Electrical System

    Various sensors are used in the system to sense the environment and act accordingly. LIDAR and ultrasonic sensors were used to detect close objects, suddenly appearing objects and objects outside the camera's frame.

    The system is driven by four DC motors in differential mode. The direction and speed of the motors are controlled by the motor controller based on inputs from the microcontroller. The motion planning algorithm sends commands to the microcontroller through serial communication, and the microcontroller decodes each actuation sequence into direction and speed commands for the motors on each side.
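
    On the laptop side, sending such a command over serial can be done with the pyserial library. This is a minimal sketch; the port name, baud rate and the "direction,speed" text protocol are assumptions for illustration.

```python
import serial

# Open the serial link to the microcontroller (port name and baud rate
# are assumptions; adjust them for your setup)
link = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

def send_drive_command(direction, speed):
    # Encode one actuation step as a simple text command, e.g. "LEFT,120".
    # The microcontroller decodes it into direction-pin and PWM signals.
    link.write(f"{direction},{speed}\n".encode())

send_drive_command("FORWARD", 150)
```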


System Design & Prototype

We developed a basic prototype to test the algorithms in real time. The prototype was tested in a simulated environment to evaluate the lane detection, obstacle detection and motion planning algorithms, and the bot was able to navigate the set course while keeping the destination waypoint as its goal.

Self Driving Car Prototype

The prototype bot consisted of:

  • 4 DC Motors - to drive the bot
  • 1 Motor Driver (Motor Control Unit) - to control the speed and direction of the motors
  • 1 LiPo Battery pack - to power the bot
  • 1 IMU sensor - for feedback on movement of the bot
  • 6 Ultrasonic Distance Sensors - to detect close objects
  • 1 Camera - to detect objects and lanes using image processing
  • 1 GPS - to get real-time GPS coordinates of the bot’s position

How can you make such a prototype and start contributing to this field?

Pulling off such a project requires a team with specialisations in various disciplines: good computer science engineers, electrical engineers and, above all, a bunch of passionate engineers and hobbyists. The project is quite simple if you divide it into different systems.

  • The brain of the bot here is a laptop.
  • A camera is attached to the laptop, and the live stream from the camera is fed to a Python program.
  • The image processing and machine learning modules can be implemented in Python.
  • The image processing operations on the live camera feed can be done using Python's OpenCV library.
  • OpenCV has various inbuilt functions to implement edge detection and the Hough Transform.
  • Using these functions, the initial lane detection algorithm can be implemented easily.
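
For instance, a minimal capture loop might look like the following. Here detect_lanes refers to the lane detection sketch earlier in this post, and device index 0 is assumed to be the laptop's webcam.

```python
import cv2

capture = cv2.VideoCapture(0)  # device 0: the laptop's built-in webcam

while True:
    ok, frame = capture.read()
    if not ok:
        break
    lanes = detect_lanes(frame)  # lane detection sketch from earlier
    cv2.imshow("lanes", lanes)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

capture.release()
cv2.destroyAllWindows()
```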

Scikit-learn is a Python library for machine learning with inbuilt models that can be trained in a few lines of code, and very good documentation is available online for applying these models to any dataset. A dataset of different obstacles has to be created to train the selected model for obstacle detection; preparing a good dataset is crucial for the accuracy of the machine learning model.

A very good model to start with is a Random Forest, implemented using scikit-learn. It gives good enough accuracy for this system, but the dataset of images cannot be used with it directly.

Features first need to be extracted from the images using techniques such as:

  • Histogram of oriented gradients (HOG)
  • Speeded-up robust features (SURF)
  • Local binary patterns (LBP)
  • Haar wavelets or color histograms

All these feature extraction techniques can be implemented in Python using the OpenCV library or in MATLAB using the Image Processing Toolbox.
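
As an example, here is a minimal sketch of the HOG-plus-Random-Forest approach described above. The names train_images, train_labels and new_image are placeholders for the obstacle dataset you prepare, and the HOG window size is OpenCV's default of 64x128 pixels.

```python
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

hog = cv2.HOGDescriptor()  # default 64x128 detection window

def extract_features(images):
    # Turn each image into a fixed-length HOG feature vector
    feats = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, (64, 128))  # match the HOG window size
        feats.append(hog.compute(gray).ravel())
    return np.array(feats)

# train_images / train_labels: your labelled obstacle dataset (placeholders)
X_train = extract_features(train_images)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, train_labels)

# Classify a new camera crop
prediction = model.predict(extract_features([new_image]))
```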

In this project we implemented an Artificial Neural Network (ANN) for obstacle classification, since it gave good accuracy and was better at classifying moving objects. To implement a neural net, the Keras library in Python or MATLAB's neural network toolbox can be used. Feature extraction must be done on the images first, and the resulting feature vectors are then given to the neural network for classification.
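
A minimal Keras sketch of such a classifier, operating on the extracted feature vectors, might look like this. The layer sizes, the two output classes and the training settings are illustrative assumptions, not the exact network used in the project; X_train and train_labels are the placeholder feature matrix and labels from the previous sketch.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# A small fully connected network over the extracted feature vectors
model = Sequential([
    Dense(128, activation="relu", input_shape=(X_train.shape[1],)),
    Dense(64, activation="relu"),
    Dense(2, activation="softmax"),  # e.g. moving vs. stationary obstacle
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, train_labels, epochs=20, batch_size=32,
          validation_split=0.2)
```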

A neural network is easy to implement using these libraries. For finer control over accuracy, a network can be written from scratch, tweaking the parameters of the learning algorithm and the network variables. Furthermore, other deep learning architectures such as Convolutional Neural Networks (CNNs) and SegNet can be explored.

The motion planning and navigation algorithms can also be written in Python, using simple standard libraries. Inputs from the ultrasonic sensors and IMU can be integrated easily through an Arduino microcontroller; libraries and code to run these sensors are available in various GitHub repositories for Arduino projects.

Apart from these aspects, the motor control and battery are also easy to manage. PWM-based motor drivers with direction control are readily available and can drive the motors used in this project; the Arduino microcontroller controls them through PWM signals and direction pins. Rechargeable LiPo batteries can be purchased to power the bot.

A basic protection circuit needs to be designed for the battery, with voltage regulation units to provide the required power supply to each component. The Arduino and small sensors can also be powered from the laptop's USB port or a rechargeable power bank.

In short, any engineering student, researcher or enthusiast can implement an autonomous car prototype. Such projects are first steps towards contributing to innovation and development in these fields. I really hope many of you find this a motivation to work on a similar prototype.

ABOUT THE AUTHOR

Cyril Joe Baby

Cyril Joe Baby is an entrepreneur, developer and researcher. He is the Co-founder and Chief Technical Officer of Fupro Innovation Private Limited, a high-tech research company working on prosthetic and rehabilitation devices.

Cyril has worked on various projects and research work over the past years. His fields of interest and expertise include Artificial Intelligence, Robotics and Embedded Systems. Cyril continues to contribute to these fields by working on various exciting projects and guiding students and beginners through their projects. Cyril is also an alumnus of Corporate Gurukul. He can be contacted at cyrilbabyjoe@gmail.com.
