Robotics

Building a complete autonomous car or robotic system requires implementing the following key components: a perception system for sensing the environment, path planning, trajectory generation and following, and controls. The perception subsystem uses sensors such as cameras, LiDARs, and radars to sense the environment, localize the vehicle/robot, and provide the information needed to plan collision-free trajectories to the destination; it is one of the biggest building blocks of autonomous navigation. The path planning module is responsible for planning the trajectory the vehicle/robot follows to reach its destination. It takes into account the objects detected by the perception system and plans to avoid getting too close to or hitting them, and it uses a known map to respect constraints such as one-way roads and other non-drivable areas when planning a global path to the destination. Finally, the controls module provides the commands that actuate and steer the car/robot so it follows the trajectory closely and reaches the destination. Usually two separate controllers are employed for longitudinal and lateral motion: one controls the throttle/brake while the other controls the steering. The sections below demonstrate implementations that incorporate some or all of these key components.

Tracking Multiple Drones with a Kalman Filter

The Kalman Filter is the workhorse algorithm for estimating hidden state variables from measurements observed over time. It has proven extremely effective in use cases such as object tracking and sensor fusion. The Kalman Filter is a special case of the Recursive Bayesian Filter and assumes the data follow a multivariate Gaussian distribution; consequently, it only works with linear transition and measurement models. When dealing with non-linear functions, we first linearize them using a first-order Taylor series approximation - this variant is called the Extended Kalman Filter.

The algorithm consists of two major steps: (1) prediction and (2) update/correction. During the prediction step, we forecast the tracked state one time step forward through the linear transition model and propagate the error covariance. During the update step, the measurement model maps the predicted state into measurement space, and we compute the Kalman gain as a function of the predicted error covariance and the measurement noise covariance. The Kalman gain is then used to correct our belief about the actual object state - much like the Recursive Bayesian Filter, where the prior belief and the measurements are combined to estimate the posterior.
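A minimal sketch of these two steps, assuming a linear transition model F, measurement model H, process noise Q, and measurement noise R (these symbol names are illustrative, not taken from the project code):

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Prediction: push the state mean and error covariance one step forward."""
    x_pred = F @ x                       # predicted state mean
    P_pred = F @ P @ F.T + Q             # propagated error covariance
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Update/correction: fuse measurement z with the prediction."""
    y = z - H @ x_pred                   # innovation (measurement residual)
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x_pred + K @ y                   # corrected (posterior) state mean
    P = (np.eye(len(x)) - K @ H) @ P_pred
    return x, P
```

The Extended Kalman Filter mentioned above follows the same structure, except F and H become the Jacobians of the non-linear transition and measurement functions evaluated at the current estimate.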

An essential component of multi-object tracking is data association, which assigns each measurement to an object being tracked by the filter. The simplest approach is to compute the Euclidean distance between each measurement and every tracked object, and associate the measurement with the nearest object if that distance is under some threshold. More sophisticated methods use the Hungarian algorithm or a statistical distance to perform this association.

In this work, the Mahalanobis distance is used to measure the distance between measurements and tracked objects. If the smallest distance is within 3σ of the estimated mean, the measurement is deemed a correct match. If no match is found, a new track is instantiated; similarly, if a tracked object receives no updates for over 100 milliseconds, its track is deleted.
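A sketch of this gating logic, assuming each track exposes its predicted measurement `z_pred`, innovation covariance `S`, and the time of its last update (these names and the track bookkeeping are hypothetical, for illustration only):

```python
import numpy as np

GATE_SIGMA = 3.0     # accept a match only within 3*sigma of the estimated mean
MAX_AGE_SEC = 0.1    # delete tracks that received no update for over 100 ms

def mahalanobis(z, z_pred, S):
    """Statistical distance between a measurement and a track's predicted measurement."""
    y = z - z_pred
    return float(np.sqrt(y.T @ np.linalg.inv(S) @ y))

def associate(z, tracks):
    """Return the best-matching track for measurement z, or None to spawn a new track."""
    best, best_d = None, float("inf")
    for track in tracks:
        d = mahalanobis(z, track.z_pred, track.S)
        if d < best_d:
            best, best_d = track, d
    return best if best_d < GATE_SIGMA else None

def prune(tracks, now):
    """Drop stale tracks that have not been updated recently."""
    return [t for t in tracks if now - t.last_update < MAX_AGE_SEC]
```

Since the Mahalanobis distance is already expressed in units of standard deviations, comparing it against 3.0 implements the 3σ gate directly.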

In the figure, noisy measurements are received for two drones, which are tracked by the filter. Red and blue dots show the tracked means of Drone 1 and Drone 2, respectively, and green ellipsoids show the uncertainty in x, y, and z.

Fly Tracking from 2D Observations in the Image Plane Using an EKF

Consider the scenario depicted in the figure where a robot tries to catch a fly that it tracks visually with its cameras.

To catch the fly, the robot needs to estimate the 3D position and linear velocity of the fly with respect to its camera coordinate system. The fly moves randomly in a way that can be modeled by a discrete-time double integrator.

The vision system of the robot consists of (unfortunately) only one camera. With the camera, the robot can observe the fly and receive noisy measurements z which are the pixel coordinates (u,v) of the projection of the fly onto the image.

We assume a known 3x3 camera intrinsic matrix. Initially, the fly is sitting on the robot's fingertip when the robot notices it for the first time; therefore, the robot knows the fly's initial position from forward kinematics (and that its velocity at rest is zero). For this problem, the fly's trajectory has been simulated and noisy observations added.

Since the observation model is non-linear, tracking the fly in 3D from 2D observations requires the Jacobian of the observation model with respect to the fly's state in order to linearize it about the current mean. An Extended Kalman Filter can then use this linearization to estimate the position and velocity of the fly relative to the camera.
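A sketch of such an observation model and its Jacobian, assuming a pinhole camera with intrinsics fx, fy, cx, cy and a state [X, Y, Z, Vx, Vy, Vz] expressed in the camera frame (the function names are illustrative):

```python
import numpy as np

def h(state, fx, fy, cx, cy):
    """Project the fly's 3D position (camera frame) to pixel coordinates (u, v)."""
    X, Y, Z = state[:3]
    return np.array([fx * X / Z + cx,
                     fy * Y / Z + cy])

def H_jacobian(state, fx, fy, cx, cy):
    """2x6 Jacobian of the projection with respect to [X, Y, Z, Vx, Vy, Vz]."""
    X, Y, Z = state[:3]
    H = np.zeros((2, 6))
    H[0, 0] = fx / Z
    H[0, 2] = -fx * X / Z**2
    H[1, 1] = fy / Z
    H[1, 2] = -fy * Y / Z**2
    return H   # velocity columns stay zero: the pixel coordinates depend only on position
```

In the EKF update, `H_jacobian` evaluated at the predicted mean takes the place of the linear measurement matrix H used in the sketch above, while the constant-velocity (double-integrator) transition model remains linear.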

Self-Driving Car in Udacity Simulator

This project demonstrates the core functionality of an autonomous vehicle system, including traffic light detection, control, and waypoint following. Below is a brief description of each major node that does the heavy lifting to make the car drive smoothly, stop at every red traffic light, and complete the 5-mile loop.

Waypoint Updater Node: This node subscribes to /base_waypoints and /current_pose and publishes to /final_waypoints.

DBW Node: Once the waypoint updater is publishing /final_waypoints, the /waypoint_follower node starts publishing messages to the /twist_cmd topic, which this drive-by-wire node turns into throttle, brake, and steering commands.

Traffic Light Detection: This is split into 2 parts:

  • Detection: Detect the traffic light and its color from /image_color.

  • Waypoint publishing: Once the traffic light has been correctly identified and its position determined, it can be converted to a waypoint index and published.

Waypoint Updater Node: It uses /traffic_waypoint to change the waypoint target velocities before publishing to /final_waypoints. The car, following these waypoints, now stops at red traffic lights and moves when they turn green.
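A minimal sketch of this node, assuming the message types used in the Udacity starter code (styx_msgs/Lane, geometry_msgs/PoseStamped, std_msgs/Int32); the lookahead count and the red-light deceleration handling are simplified placeholders:

```python
import rospy
from geometry_msgs.msg import PoseStamped
from std_msgs.msg import Int32
from styx_msgs.msg import Lane

LOOKAHEAD_WPS = 50  # number of waypoints published ahead of the car (illustrative)

class WaypointUpdater(object):
    def __init__(self):
        rospy.init_node('waypoint_updater')
        rospy.Subscriber('/base_waypoints', Lane, self.waypoints_cb)
        rospy.Subscriber('/current_pose', PoseStamped, self.pose_cb)
        rospy.Subscriber('/traffic_waypoint', Int32, self.traffic_cb)
        self.final_wp_pub = rospy.Publisher('/final_waypoints', Lane, queue_size=1)
        self.base_waypoints = None
        self.stop_idx = -1
        rospy.spin()

    def waypoints_cb(self, msg):
        self.base_waypoints = msg.waypoints

    def traffic_cb(self, msg):
        self.stop_idx = msg.data  # waypoint index of the next red light's stop line, -1 if none

    def pose_cb(self, msg):
        if self.base_waypoints is None:
            return
        start = self.closest_waypoint(msg.pose.position)
        lane = Lane()
        lane.waypoints = self.base_waypoints[start:start + LOOKAHEAD_WPS]
        # If stop_idx falls inside this slice, taper the target velocities down to zero here.
        self.final_wp_pub.publish(lane)

    def closest_waypoint(self, p):
        """Index of the base waypoint nearest to the current position."""
        d2 = [(wp.pose.pose.position.x - p.x) ** 2 + (wp.pose.pose.position.y - p.y) ** 2
              for wp in self.base_waypoints]
        return d2.index(min(d2))

if __name__ == '__main__':
    WaypointUpdater()
```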

Twist Controller: It executes the twist commands published by the /waypoint_follower node over the /twist_cmd topic.
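The longitudinal half of this controller is typically a PID on the velocity error with the output clamped to the throttle range; a minimal sketch under that assumption (gains and the placeholder error are illustrative, not the project's values):

```python
class PID(object):
    """Simple PID controller for the longitudinal (throttle/brake) loop."""
    def __init__(self, kp, ki, kd, min_out=0.0, max_out=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.min_out, self.max_out = min_out, max_out
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(max(out, self.min_out), self.max_out)

# Example at 50 Hz: error is the commanded linear velocity minus the measured speed.
throttle_pid = PID(kp=0.3, ki=0.1, kd=0.02)
throttle = throttle_pid.step(error=2.0, dt=0.02)  # placeholder 2 m/s velocity error
```

Steering is handled by the separate lateral controller described in the introduction, typically driven by the angular component of the twist command.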

Please visit the GitHub repository for detailed descriptions and code implementation.

TurtleBot3 Burger Autonomous Navigation in a Hallway

This is the simplest possible demonstration of an autonomous navigation system that implements Perception, Controls, and Path Planning. It shows how these subsystems interact as a whole so the robot can sense its surroundings, plan its path, and reach its destination. The complete implementation is within the ROS framework.

Problem

The TurtleBot3 Burger has found itself in a hallway. We know the walls do not go on forever, but we don't know how far they extend; each time we run the simulation, the walls might extend a different amount. The task is to get the Burger to drive to the end of the hallway, turn around, and return to its original position.

Challenges

  1. Use the sensors to map out the environment: the walls' position, orientation, distance from the robot, and length.

  2. Set the destination as the center of the walls at the end of the hallway (as far as the robot can see).

  3. Use two controllers to control the angular and linear velocity of the robot (see the sketch after this list).

  4. Use the robot's odometry to log its initial pose and set that as the destination once the robot has reached the end of the hallway.
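A sketch of how these pieces can fit together in ROS, assuming the TurtleBot3's default topics (/scan, /odom, /cmd_vel), that laser index 0 points straight ahead with indices increasing counterclockwise, and illustrative gains and thresholds:

```python
import rospy
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry
from sensor_msgs.msg import LaserScan

K_LIN, K_ANG = 0.5, 1.5   # proportional gains for the linear and angular controllers (illustrative)
STOP_DIST = 0.3           # distance to the hallway end at which the robot should turn around (m)

class HallwayNavigator(object):
    def __init__(self):
        rospy.init_node('hallway_navigator')
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/scan', LaserScan, self.scan_cb)
        rospy.Subscriber('/odom', Odometry, self.odom_cb)
        self.start_pose = None
        rospy.spin()

    def odom_cb(self, msg):
        # Log the initial pose once; it becomes the return destination later.
        if self.start_pose is None:
            self.start_pose = msg.pose.pose

    def scan_cb(self, msg):
        n = len(msg.ranges)
        ahead = msg.ranges[0]           # distance to the end of the hallway (as far as we can see)
        left = msg.ranges[n // 4]       # distance to the left wall
        right = msg.ranges[3 * n // 4]  # distance to the right wall

        cmd = Twist()
        if ahead > STOP_DIST:
            cmd.linear.x = min(K_LIN * ahead, 0.2)   # linear controller: slow down near the end
            cmd.angular.z = K_ANG * (left - right)   # angular controller: stay centered between the walls
        # else: turn around and drive back toward self.start_pose (omitted for brevity)
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    HallwayNavigator()
```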

TurtleBot3 e-Manual

Visit the GitHub repository for a detailed description and the code implementation.