What is this site about?

This website explores the major components needed to achieve or assist autonomy in vehicles and robotics: Computer Vision, Machine Learning, Path Planning, Sensor Fusion, Robotics, Controls, and V2X. It also provides detailed insight into several of these components and discusses the results obtained using different techniques. Source code and instructions are provided so you can clone the projects and replicate the results yourself. Please contact Shubham Shrivastava if you have any questions.

Components of Autonomous Driving

Complete Autonomous Cars / Robots In Action

This section covers complete, integrated autonomous cars and robots in action in a simulator.


The perception system gives a self-driving car 360-degree visibility around the vehicle at ranges of up to 250 meters. It relies on multiple sensor systems such as LiDAR, RADAR, and cameras.

Deep Learning

Deep Learning plays a huge role in autonomous driving. It is a key technology that helps a self-driving car in multiple aspects, from perceiving the world to driving the car itself.


V2X stands for Vehicle-to-Everything. It relies on either DSRC or a cellular network underneath and provides a way for the vehicle to communicate with everything around it, e.g. other vehicles, infrastructure, and pedestrians.


Sensor Fusion intelligently combines data from multiple sensors and corrects for the deficiencies of the individual sensors to calculate accurate position and orientation information.
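As a minimal sketch of this idea, the snippet below fuses two noisy estimates of the same quantity with a one-dimensional Kalman-style update, weighting each sensor by its variance. The sensor names and numbers are purely illustrative, not taken from any project on this site.

```python
# Minimal sketch of sensor fusion: two Gaussian estimates of the same
# state (e.g. position along a road) are combined, weighted by their
# variances, so the less noisy sensor contributes more.

def fuse(mean_a, var_a, mean_b, var_b):
    """Fuse two Gaussian estimates of the same state."""
    k = var_a / (var_a + var_b)          # Kalman-like gain
    mean = mean_a + k * (mean_b - mean_a)
    var = (1.0 - k) * var_a              # fused variance is smaller than either input
    return mean, var

# Illustrative numbers: GPS says 10.0 m (variance 4.0),
# wheel odometry says 12.0 m (variance 1.0).
pos, var = fuse(10.0, 4.0, 12.0, 1.0)
# The fused estimate (11.6 m, variance 0.8) sits closer to the
# lower-variance odometry reading and is more certain than either sensor.
```

A full sensor-fusion stack would extend this to multi-dimensional state (position, orientation, velocity) and a prediction step, but the variance-weighted update is the core correction mechanism.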


Localization is an essential part of any autonomous vehicle: it is imperative for the vehicle to localize itself in the real world, both globally and locally.
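One small building block of global-to-local localization can be sketched as follows: projecting a GPS fix into local map coordinates relative to a reference point, using an equirectangular approximation that is valid over small areas. The reference coordinates below are illustrative, not from any project on this site.

```python
# Minimal sketch: convert a global GPS fix (lat, lon) into local
# (x, y) meters relative to a reference fix, via an equirectangular
# approximation. Assumes a small area; real systems use proper
# map projections (e.g. UTM) or geodetic libraries.
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius, meters

def gps_to_local(lat, lon, ref_lat, ref_lon):
    """Project (lat, lon) to (x, y) meters relative to a reference fix."""
    d_lat = math.radians(lat - ref_lat)
    d_lon = math.radians(lon - ref_lon)
    # East-west distance shrinks with the cosine of latitude.
    x = EARTH_RADIUS_M * d_lon * math.cos(math.radians(ref_lat))
    y = EARTH_RADIUS_M * d_lat
    return x, y

# Illustrative fix 0.0005 degrees of latitude north of the reference:
x, y = gps_to_local(37.4225, -122.0841, 37.4220, -122.0841)
```

Local localization (within a lane or map tile) then refines this global estimate using on-board sensors such as LiDAR and cameras.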


Path Planning, or Motion Planning, is the process that lets an autonomous vehicle or robot find the shortest or otherwise optimal path between two points.
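The shortest-path idea can be sketched with Dijkstra's algorithm on a small occupancy grid; with uniform step costs it behaves like breadth-first search. The grid, start, and goal below are illustrative, not from any project on this site.

```python
# Minimal sketch of grid path planning with Dijkstra's algorithm.
# Cells marked 0 are free, 1 are obstacles; motion is 4-connected
# with unit step cost.
import heapq

def shortest_path_length(grid, start, goal):
    """Return the length of the shortest 4-connected path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    pq = [(0, start)]                    # priority queue of (cost so far, cell)
    seen = set()
    while pq:
        cost, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return cost
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(pq, (cost + 1, (nr, nc)))
    return None                          # goal is walled off

# A wall forces the planner around the obstacles:
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
length = shortest_path_length(grid, (0, 0), (2, 0))  # detours around the wall
```

Real planners add heuristics (A*), non-uniform costs, and vehicle kinematics on top of this same search skeleton.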


Shubham Shrivastava


“An autonomous-driving and robotics technology enthusiast who envisions inventing technologies and bringing them to life. A Machine Learning and Computer Vision Research Engineer working towards making autonomous cars perceive the world the way humans do.”