SHUBHAM SHRIVASTAVA

Education

Stanford University

Graduate Program - Artificial Intelligence

GPA: 4.3/4.0

Courses in Meta-Learning, Multi-Task Learning, Computer Vision, and 3D Reconstruction

August 2020 - Present

The University of Texas at Arlington

Master of Science in Electrical Engineering

GPA: 4.0/4.0

August 2014 - August 2016

Visvesvaraya Technological University

Bachelor of Engineering in Electronics and Communication Engineering

First Class with Distinction, GPA: 4.0/4.0, Aggregate Percentage: 86%


Work Experience

Ford Greenfield Labs, Palo Alto, CA

Machine Learning and Computer Vision Research Scientist

September 2019 - Present

I work in a research-intensive environment within a small group focused on machine learning, computer vision, and robotics.

My research spans convolutional neural networks, generative adversarial networks, variational autoencoders, deep reinforcement learning, and 3D perception, with emphasis on object detection, semantics learning, 3D scene understanding, multi-view geometry, and visual odometry.

Topics of Research:

  • Generative Adversarial Networks for realistic image/video generation with semantic and cycle consistency, using simulation data to bridge the gap between the simulated and real worlds.

  • RGB-D camera-, monocular RGB camera-, and LiDAR-based 3D object detection and classification in both indoor and outdoor environments.

  • Stereo visual odometry and topological mapping; real-time monocular RGB camera-based localization (6-DoF pose).

  • Weakly-supervised 6-DoF vehicle pose annotation with known dimensions and camera parameters using a non-linear optimization method (see the pose-optimization sketch after this list).

  • Multi-headed multi-task neural networks for scene understanding, incorporating Sim2Real methods for zero-cost training of the networks.

  • Automated extrinsic calibration of multiple spatially distributed infrastructure sensors to a common global frame, plus object detection and 6-DoF pose tracking in that global frame, all performed with a robot.
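
A minimal sketch of the kind of non-linear reprojection-error minimization the 6-DoF pose-annotation bullet above refers to, assuming known 3D keypoints derived from vehicle dimensions, their 2D detections, and camera intrinsics; the function names, Huber loss, and SciPy-based formulation are illustrative assumptions rather than the actual pipeline.

    # Sketch: recover a 6-DoF object pose from 2D keypoints, known object dimensions
    # (3D keypoints in the object frame), and camera intrinsics by minimizing
    # reprojection error. Illustrative only; not the production annotation tool.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation


    def project(points_3d, rvec, tvec, K):
        """Project 3D object-frame points into the image for a given pose."""
        R = Rotation.from_rotvec(rvec).as_matrix()
        cam = points_3d @ R.T + tvec          # object frame -> camera frame
        uv = cam @ K.T                        # apply intrinsics
        return uv[:, :2] / uv[:, 2:3]         # perspective divide


    def residuals(pose, kpts_3d, kpts_2d, K):
        rvec, tvec = pose[:3], pose[3:]
        return (project(kpts_3d, rvec, tvec, K) - kpts_2d).ravel()


    def annotate_pose(kpts_3d, kpts_2d, K, init=np.array([0, 0, 0, 0, 0, 10.0])):
        """Non-linear least-squares refinement of the pose [rotation vector | translation]."""
        result = least_squares(residuals, init, args=(kpts_3d, kpts_2d, K), loss="huber")
        return result.x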

Renesas Electronics America, Inc., Farmington Hills, MI

Applications Engineer, Perception R&D - ADAS and Autonomous Driving

March 2017 - September 2019

I worked as part of a very small team building “Perception Quick Start”, the ADAS and autonomous driving perception reference platform, which includes end-to-end solutions for camera- and LiDAR-based road-feature and object detection.

  • Developed the complete lane detection pipeline from scratch. The pipeline includes lane-pixel extraction using a combination of classical computer vision methods and deep learning, lane detection, polynomial fitting, noise suppression, lane tracking, lane smoothing, confidence computation, lane extrapolation, lane departure warning, lane offset, lane curvature, and lane types (see the polyfit/curvature sketch after this list).

  • Developed a C-based computer vision library for basic image processing functions such as image read/write, Hough transforms, edge detection (Canny, horizontal, vertical), colorspace conversions, and image filtering (sharpen, Gaussian smooth, Sobel, emboss, edge). Created an advanced math library for functions such as least-squares polyfit and matrix operations.

  • Implemented stereo camera calibration, rectification, disparity-map generation, flat-road free-space estimation, object detection using V-disparity and 3D density-based clustering, 3D point-cloud rendering with 3D bounding boxes, and depth perception.

  • Developed a dynamic image-ROI stabilization module that corrects rotation and translation using angular pose/velocity data by computing and applying a homography at run-time (see the homography sketch after this list). Developed a general-purpose positioning driver for bringing GNSS/IMU data into the perception stack.

  • Optimized embedded implementation of algorithms for parallel computing on R-Car SoC HW Accelerators.

  • Developed the complete V2V solution from scratch for Renesas’ V2X platform, including a CAN framework, GPS/INS driver, GPS+IMU fusion for localization, concise path history computation, path prediction, CSV and KML logging modules, 360-degree lane-level target classification, basic safety applications, and a Qt-based HMI for displaying warnings, vehicle tracking, maps, and debug information.
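
A minimal sketch of the polynomial-fit, curvature, and lane-offset steps referenced in the lane-detection bullet above; the pixel-to-meter scale factors and names are illustrative assumptions, not values from the Renesas pipeline.

    # Sketch: fit a 2nd-order polynomial x = f(y) to lane pixels and compute lane
    # offset and radius of curvature. Scale factors below are assumed, not measured.
    import numpy as np

    M_PER_PIX_Y = 30.0 / 720   # assumed meters per pixel along the road
    M_PER_PIX_X = 3.7 / 700    # assumed meters per pixel across the lane


    def fit_lane(xs, ys):
        """Least-squares polyfit of lane pixels, x as a function of y (image rows)."""
        return np.polyfit(ys, xs, deg=2)


    def curvature_radius(coeffs_m, y_eval_m):
        """Radius of curvature of x = a*y^2 + b*y + c, with the fit done in metric units."""
        a, b, _ = coeffs_m
        return (1.0 + (2 * a * y_eval_m + b) ** 2) ** 1.5 / abs(2 * a)


    def lane_offset(left_fit, right_fit, y_bottom, img_width):
        """Signed offset of the camera center from the lane center, in meters."""
        left_x = np.polyval(left_fit, y_bottom)
        right_x = np.polyval(right_fit, y_bottom)
        lane_center = 0.5 * (left_x + right_x)
        return (img_width / 2.0 - lane_center) * M_PER_PIX_X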
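
A minimal sketch of the rotation-compensating warp behind the dynamic image-ROI stabilization bullet above: a pure camera rotation R induces the image homography H = K R K^-1, which can be inverted at run-time from angular pose data. The use of OpenCV and SciPy here is an assumption for illustration.

    # Sketch: stabilize an image against camera rotation by applying the inverse of
    # the rotation-induced homography H = K * R * K^-1 (pure rotation assumed).
    import cv2
    import numpy as np
    from scipy.spatial.transform import Rotation


    def stabilization_homography(K, roll, pitch, yaw):
        """Homography that removes a small camera rotation, given 3x3 intrinsics K."""
        R = Rotation.from_euler("xyz", [roll, pitch, yaw]).as_matrix()
        return K @ R.T @ np.linalg.inv(K)   # inverse rotation warps the image back


    def stabilize(frame, K, roll, pitch, yaw):
        H = stabilization_homography(K, roll, pitch, yaw)
        h, w = frame.shape[:2]
        return cv2.warpPerspective(frame, H, (w, h))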

Changan US Research & Development Center, Inc.

Intelligent Vehicle Engineer (Connected Autonomous Vehicle Research Group)

August 2016 - March 2017

Worked within Changan's Connected and Autonomous Vehicle research team to design and develop vehicle safety models for 360-degree target classification and warning notifications, with and without line-of-sight requirements.

DELPHI (now known as APTIV)

Embedded Software Engineer (Intern)

May 2016 - August 2016

Worked with application teams, forward systems algorithm group, and controller design groups to define the functionality, develop algorithms, and implement them in accordance with the V-Model Software Development Life Cycle.

BlackBerry QNX

Software Development Intern (Board Support Package)

January 2016 - May 2016

Developed the BSP (Board Support Package) for custom hardware built around an i.MX6 Solo processor and several peripherals. Worked on low-level board bring-up and provided support for the following peripherals:

  • Support for RAM file system to manipulate files during runtime.

  • Support for SPI NOR Flash and Parallel NOR Flash mounted as a filesystem at startup.

  • Support for removable storage (SD, microSD, USB flash), including auto-detection and auto-mounting on attachment.

  • Support for USB OTG to be used for the Console Service, Mass-Storage Device, and USB-to-Ethernet Adapter attachments.

  • Added new features to the QNX OS for the BSP, including auto-detection of the attachment type and switching between the device stack (to provide console service), the host stack (to auto-mount mass-storage devices), and the host stack for networking over a USB-to-Ethernet adapter.

The University of Texas at Arlington Research Institute

Research Intern

August 2015 - December 2015

Designed and developed the control GUI for a prosthetic system used to help rehabilitate post-stroke patients. The system used an Arduino controller to adaptively adjust air-bubble pressure to the desired psi value at various points on the leg. The GUI allows the user to enter the desired psi value for each air bubble while simultaneously measuring the current bubble pressure and displaying it in real time.

  • Used two Arduino UNO boards: one for sending signals to 32 solenoids controlling airflow from an Alicat Mass Flow Controller into the respective air bubbles, and one for receiving sense signals from the 32 corresponding air-pressure sensors. Signals were also sent to the solenoids to deflate the air bubbles when required.

  • Developed a communication protocol involving ACK and NACK and implemented it in the MATLAB GUI, sending commands to one Arduino and receiving sense signals from the other to provide synchronization and feedback (see the sketch after this list).
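
A minimal sketch of the ACK/NACK handshake referenced in the bullet above. The original host implementation was a MATLAB GUI; this Python/pyserial version, including the command format, baud rate, and port handling, is purely illustrative.

    # Sketch of the host-side ACK/NACK handshake with the two Arduinos.
    # Command strings, ACK/NACK bytes, baud rate, and port names are assumptions.
    import serial

    ACK, NACK = b"A", b"N"


    def send_pressure_command(port, bubble_id, psi, retries=3):
        """Send a set-pressure command and wait for ACK; retry on NACK or timeout."""
        with serial.Serial(port, 9600, timeout=1.0) as link:
            cmd = f"SET,{bubble_id},{psi:.1f}\n".encode()
            for _ in range(retries):
                link.write(cmd)
                if link.read(1) == ACK:
                    return True
                # NACK or timeout: resend the command
            return False


    def read_pressure(port, bubble_id):
        """Poll the sensing Arduino for the current pressure of one air bubble."""
        with serial.Serial(port, 9600, timeout=1.0) as link:
            link.write(f"GET,{bubble_id}\n".encode())
            line = link.readline().decode().strip()
            return float(line) if line else None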

Indian Institute of Science (IISc)

Trainee Engineer

January 2014 - May 2014

Designed and developed a two-dimensional plotter (Smart XY Plotter) at the Mechatronics Lab, IISc, capable of plotting any 2D image with a pen controlled by a solenoid and two stepper motors (responsible for x-, y-, and z-directional movement).

  • The control system was governed by an ARM processor (STM32F4 Discovery board) to plot images whose features were extracted using MATLAB.

  • Developed a MATLAB GUI that allows the user to either upload an image of their choice or select another plot (arbitrary interpolated curves, text, shapes).

  • Used two timers to control and synchronize the parallel movement of the X and Y motors, enabling curves of any desired slope.

  • The solenoid setup was returned to its initial position after every plot; limit switches were used to detect its arrival at the reset position.

[YouTube Link]

Papers and Publications

We introduce a method for 3D object detection using a single monocular image. Starting from a synthetic dataset, we pre-train an RGB-to-Depth Auto-Encoder (AE). The embedding learnt from this AE is then used to train a 3D Object Detector (3DOD) CNN, which regresses the parameters of 3D object poses from the latent embedding that the AE's encoder generates from the RGB image. We show that we can pre-train the AE using paired RGB and depth images from simulation data once and subsequently only train the 3DOD network using real data, comprising RGB images and 3D object pose labels (without the requirement of dense depth). Our 3DOD network utilizes a particular 'cubification' of 3D space around the camera, where each cuboid is tasked with predicting N object poses, along with their class and confidence values. The AE pre-training and this method of dividing the 3D space around the camera into cuboids give our method its name - CubifAE-3D. We demonstrate results for monocular 3D object detection in the Autonomous Vehicle (AV) use-case with the Virtual KITTI 2 and the KITTI datasets.
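
A minimal sketch of how a CubifAE-3D-style output grid could be decoded into detections, as described in the abstract above; the grid size, number of poses per cuboid, and pose parameterization are illustrative assumptions, not the paper's exact layout.

    # Sketch: 3D space around the camera is divided into a grid of cuboids, each
    # predicting N candidate object poses with a confidence and class scores.
    import numpy as np

    GRID = (4, 4, 4)     # assumed cuboid grid around the camera
    N_PER_CUBOID = 2     # assumed predictions per cuboid
    POSE_DIM = 7         # assumed pose parameterization: x, y, z, l, w, h, yaw


    def decode_predictions(output, num_classes, conf_thresh=0.5):
        """output: array of shape (*GRID, N_PER_CUBOID, POSE_DIM + 1 + num_classes)."""
        detections = []
        for p in output.reshape(-1, POSE_DIM + 1 + num_classes):
            pose, conf = p[:POSE_DIM], p[POSE_DIM]
            if conf < conf_thresh:
                continue
            cls = int(np.argmax(p[POSE_DIM + 1:]))
            detections.append({"pose": pose, "confidence": float(conf), "class": cls})
        return detections
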
3D object detection and dense depth estimation are among the most vital tasks in autonomous driving. Multiple sensor modalities can jointly contribute towards better robot perception, and to that end, we introduce a method for jointly training 3D object detection and monocular dense depth reconstruction neural networks. It takes a LiDAR point cloud and a single RGB image as inputs during inference and produces object pose predictions as well as a densely reconstructed depth map. The LiDAR point cloud is converted into a set of voxels, and its features are extracted using 3D convolution layers, from which we regress object pose parameters. Corresponding RGB image features are extracted using another 2D convolutional neural network. We further use these combined features to predict a dense depth map. While our object detection is trained in a supervised manner, the depth prediction network is trained with both self-supervised and supervised loss functions. We also introduce a loss function, the edge-preserving smooth loss, and show that it results in better depth estimation than the edge-aware smooth loss function frequently used in depth prediction works.
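
For reference, a sketch of the standard edge-aware smoothness loss that the abstract above compares against; the paper's proposed edge-preserving smooth loss modifies this formulation and is not reproduced here. NumPy is used for clarity; a real implementation would operate on framework tensors.

    # Sketch: penalize depth gradients, down-weighted where the image itself has
    # strong gradients (the commonly used edge-aware smoothness baseline).
    import numpy as np


    def edge_aware_smoothness(depth, image):
        """depth: (H, W); image: (H, W, 3) with values in [0, 1]."""
        d_dx = np.abs(depth[:, 1:] - depth[:, :-1])
        d_dy = np.abs(depth[1:, :] - depth[:-1, :])
        i_dx = np.mean(np.abs(image[:, 1:] - image[:, :-1]), axis=-1)
        i_dy = np.mean(np.abs(image[1:, :] - image[:-1, :]), axis=-1)
        return np.mean(d_dx * np.exp(-i_dx)) + np.mean(d_dy * np.exp(-i_dy))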

Deep Learning has seen an unprecedented increase in vision applications since the publication of large-scale object recognition datasets and the introduction of scalable compute hardware. State-of-the-art methods for most vision tasks for Autonomous Vehicles (AVs) rely on supervised learning and often fail to generalize to domain shifts and/or outliers. Dataset diversity is thus key to successful real-world deployment. No matter how large the dataset, capturing the long tails of the distribution pertaining to task-specific environmental factors is impractical. The goal of this paper is to investigate the use of targeted synthetic data augmentation - combining the benefits of gaming engine simulations and sim2real style transfer techniques - for filling gaps in real datasets for vision tasks. Empirical studies on three different computer vision tasks of practical use to AVs - parking slot detection, lane detection, and monocular depth estimation - consistently show that having synthetic data in the training mix provides a significant boost in cross-dataset generalization performance as compared to training on real data only, for the same size of the training set.

Meta-learning models have two objectives. First, they need to be able to make predictions over a range of task distributions while utilizing only a small amount of training data. Second, they also need to adapt to novel, unseen tasks at meta-test time, again using only a small amount of training data from that task. It is the second objective where meta-learning models fail for non-mutually exclusive tasks due to task overfitting. Given that guaranteeing mutually exclusive tasks is often difficult, there is a significant need for regularization methods that can help reduce the impact of task memorization in meta-learning. For example, in the case of N-way, K-shot classification problems, tasks become non-mutually exclusive when the labels associated with each task are fixed. Under this design, the model will simply memorize the class labels of all the training tasks, and thus will fail to recognize a new task (class) at meta-test time. A directly observable consequence of this memorization is that the meta-learning model simply ignores the task-specific training data in favor of directly classifying based on the test data input. In our work, we propose a regularization technique for meta-learning models that gives the model designer more control over the information flow during meta-training. Our method consists of a regularization function constructed by maximizing the distance between task-summary statistics (in the case of black-box models) or task-specific network parameters (in the case of optimization-based models) during meta-training. Our proposed regularization function shows an accuracy boost of ∼36% on the Omniglot dataset for 5-way, 1-shot classification using a black-box method and for the 20-way, 1-shot classification problem using optimization-based methods.
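
A minimal sketch of the kind of task-separation regularizer described in the abstract above: the negative mean pairwise distance between per-task summary statistics (black-box models) or task-adapted parameters (optimization-based models) is added to the meta-training loss. The distance measure and weighting are assumptions.

    # Sketch: push per-task vectors in a meta-batch apart to discourage task memorization.
    import numpy as np


    def task_separation_regularizer(task_vectors):
        """task_vectors: (num_tasks, dim) summary statistics or adapted parameters."""
        n = len(task_vectors)
        if n < 2:
            return 0.0
        dists = [np.linalg.norm(task_vectors[i] - task_vectors[j])
                 for i in range(n) for j in range(i + 1, n)]
        return -float(np.mean(dists))   # minimizing this maximizes task separation


    def regularized_meta_loss(meta_loss, task_vectors, weight=0.1):
        return meta_loss + weight * task_separation_regularizer(task_vectors)
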
Training robots to navigate diverse environments is a challenging problem as it involves the confluence of several different perception tasks such as mapping and localization, followed by optimal path-planning and control. Recently released photo-realistic simulators such as Habitat allow for the training of networks that output control actions directly from perception: agents use Deep Reinforcement Learning (DRL) to regress directly from the camera image to a control output in an end-to-end fashion. This is data-inefficient and can take several days to train on a GPU. Our paper tries to overcome this problem by separating the training of the perception and control neural nets and increasing the path complexity gradually using a curriculum approach. Specifically, a pre-trained twin Variational AutoEncoder (VAE) is used to compress RGBD (RGB & depth) sensing from an environment into a latent embedding, which is then used to train a DRL-based control policy. A*, a traditional path planner, is used as a guide for the policy, and the distance between start and target locations is incrementally increased along the A* route as training progresses. We demonstrate the efficacy of the proposed approach, both in terms of increased performance and decreased training times, for the PointNav task in the Habitat simulation environment. This strategy of improving the training of direct-perception-based DRL navigation policies is expected to hasten the deployment of robots of particular interest to industry, such as co-bots on the factory floor and last-mile delivery robots.
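
A minimal sketch of the curriculum step described above, where the navigation goal is placed progressively further along the A* route as training advances; the waypoint representation and schedule are assumptions.

    # Sketch: sample the training goal along the A* path, moving it further from the
    # start as the curriculum progresses from 0 to 1.
    def curriculum_goal(astar_path, progress):
        """astar_path: list of waypoints from start to final target; progress in [0, 1]."""
        progress = min(max(progress, 0.0), 1.0)
        idx = int(progress * (len(astar_path) - 1))
        return astar_path[idx]
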
In recent years, autonomous driving has inexorably progressed from the domain of science fiction to reality. For a self-driving car, it is of utmost importance that it knows its surroundings. Several sensors like RADARs, LiDARs, and cameras have been primarily used to sense the environment and make a judgment on the next course of action. Object detection is of great significance in autonomous driving, wherein the self-driving car needs to identify the objects around it and must take necessary actions to avoid a collision. Several perception-based methods, such as classical computer vision techniques and Convolutional Neural Networks (CNNs), exist today which detect and classify objects. This paper discusses an object detection technique based on stereo vision. One challenge in this process, though, is to eliminate regions of the image which are insignificant for the detection, like unoccupied road and buildings far ahead. This paper proposes a method to first get rid of such regions using V-disparity and then detect objects using 3D density-based clustering. Results given in this paper show that the proposed system can detect objects on the road accurately and robustly.
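
A minimal sketch of the V-disparity road-removal and density-based clustering steps described in the abstract above; the road-line fit, thresholds, and the use of scikit-learn's DBSCAN are simplifications and assumptions.

    # Sketch: build a per-row disparity histogram (V-disparity), remove pixels near
    # the dominant road line, and cluster the remaining 3D points into objects.
    import numpy as np
    from sklearn.cluster import DBSCAN


    def v_disparity(disparity, max_d=128):
        """Histogram of disparity values per image row: returns shape (H, max_d)."""
        H = disparity.shape[0]
        vdisp = np.zeros((H, max_d), dtype=np.int32)
        for v in range(H):
            row = disparity[v]
            valid = row[(row > 0) & (row < max_d)].astype(int)
            np.add.at(vdisp[v], valid, 1)
        return vdisp


    def remove_road(disparity, road_fit, margin=3):
        """road_fit: (slope, intercept) of the road line d = slope * v + intercept."""
        v = np.arange(disparity.shape[0])[:, None]
        expected = road_fit[0] * v + road_fit[1]
        return np.where(np.abs(disparity - expected) > margin, disparity, 0)


    def cluster_obstacles(points_3d, eps=0.5, min_samples=20):
        """Density-based clustering of the remaining 3D points into objects."""
        return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_3d)
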
We describe a lightweight, weather- and lighting-invariant, Semantic Bird's Eye View (S-BEV) signature for vision-based vehicle re-localization. A topological map of S-BEV signatures is created during the first traversal of the route and is used for coarse localization in subsequent route traversals. A fine-grained localizer is then trained to output the global 3-DoF pose of the vehicle using its S-BEV and its coarse localization. We conduct experiments on the vKITTI2 virtual dataset and show the potential of the S-BEV to be robust to weather and lighting. We also demonstrate results with 2 vehicles on a 22 km long highway route in the Ford AV dataset.

Image-based learning methods for autonomous vehicle perception tasks require large quantities of labelled, real data in order to properly train without overfitting, which can often be incredibly costly. While leveraging the power of simulated data can potentially aid in mitigating these costs, networks trained in the simulation domain usually fail to perform adequately when applied to images in the real domain. Recent advances in domain adaptation have indicated that a shared latent space assumption can help to bridge the gap between the simulation and real domains, allowing the transference of the predictive capabilities of a network from the simulation domain to the real domain. We demonstrate that a twin VAE-based architecture with a shared latent space and auxiliary decoders is able to bridge the sim2real gap without requiring any paired, ground-truth data in the real domain. Using only paired, ground-truth data in the simulation domain, this architecture has the potential to generate perception outputs such as depth and segmentation maps. We compare this method to networks trained in a supervised manner to indicate the merit of these results.
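
A minimal architectural sketch of the twin-VAE idea described in the abstract above, using small fully connected blocks as stand-ins for the actual convolutional encoders and decoders; layer sizes and the module layout are assumptions.

    # Sketch: sim and real encoders share a latent space; domain decoders reconstruct
    # images, while auxiliary decoders (depth, segmentation) are supervised only with
    # paired simulation data and applied to real images through the shared latent.
    import torch
    import torch.nn as nn


    def mlp(in_dim, out_dim, hidden=256):
        return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))


    class TwinVAE(nn.Module):
        def __init__(self, img_dim=1024, latent_dim=64, depth_dim=1024, seg_dim=1024):
            super().__init__()
            self.enc_sim = mlp(img_dim, 2 * latent_dim)    # outputs mean and log-variance
            self.enc_real = mlp(img_dim, 2 * latent_dim)
            self.dec_sim = mlp(latent_dim, img_dim)
            self.dec_real = mlp(latent_dim, img_dim)
            self.dec_depth = mlp(latent_dim, depth_dim)    # auxiliary decoder (sim-supervised)
            self.dec_seg = mlp(latent_dim, seg_dim)        # auxiliary decoder (sim-supervised)

        def encode(self, x, domain):
            enc = self.enc_sim if domain == "sim" else self.enc_real
            mu, logvar = enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
            return z, mu, logvar

        def perceive_real(self, x_real):
            """Run perception on a real image through the shared latent space."""
            z, _, _ = self.encode(x_real, "real")
            return self.dec_depth(z), self.dec_seg(z)
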
The National Highway Traffic Safety Administration (NHTSA) has been interested in vehicle-to-vehicle (V2V) communication as the next step in addressing growing rates of fatalities from vehicle-related crashes. Today’s crash avoidance technologies depend on on-board sensors like cameras and radar to provide awareness input to the safety applications. These applications warn the driver of imminent danger or sometimes even act on the driver’s behalf. However, even technologies like those cannot “predict” a crash that might happen because of a vehicle which is not very close or not in the line of sight of the host vehicle. A technology that can “see” through another vehicle or obstacles like buildings and predict a danger can fill these gaps and reduce crashes drastically. V2V communications can provide vehicles the ability to talk to each other and therefore see around corners and through obstacles over a longer distance compared to current on-board sensors. It is estimated that V2X communications address up to 80% of unimpaired crashes [1]. By means of a Notice of Proposed Rulemaking (NPRM), NHTSA is working towards standardization of V2V communications and potentially mandating the broadcast of vehicle data (e.g. GPS coordinates, speed, acceleration) over DSRC through V2V.

Patents

  1. [US 16/838448] S Shrivastava. “Realistic Image Perspective Transformation Using Neural Networks”. A system based on a deep neural network to synthesize multiple realistic perspectives of an image.

  2. [US 17/016874] P Chakravarty, S Shrivastava, G Pandey, and X Wong. “Object Detection”. Automatic Calibration of Automobile Cameras – In the Factory & On The Road.

  3. [US 17/102557] N Raghavan, P Chakravarty, and S Shrivastava. “Vehicle Neural Network”. Zero-Cost Training of Perception Tasks using a Sim-to-Real Architecture with Auxiliary Decoding.

  4. [US 17/113171] M Voodarla, P Chakravarty, and S Shrivastava. “Vehicle Neural Network Localization”. Semantic Bird's-Eye View Representation Learning for Weather and Lighting Invariant 3-DoF Localization.

  5. [US 16/914975] P Chakravarty, S Manglani, and S Shrivastava. “Determining Multi-Degree-Of-Freedom Pose For Sensor Calibration”. A robotic calibration device and a method of calculating a global multi-degree-of-freedom (MDF) pose of an array of cameras affixed to a structure.

  6. [US 17/072334] P Chakravarty and S Shrivastava. “Vehicle Neural Network Perception and Localization”. Using Map-Perception Disagreement for Robust Perception and Localization with Generative Models.

  7. [US 17/141433] K Balakrishnan, P Chakravarty, and S Shrivastava. “Vision-Based Navigation By Coupling Deep Reinforcement Learning And A Path Planning Algorithm”. Robot Navigation Using Vision Embeddings and A* for Improved Training of Deep-Reinforcement Learning Policies.

  8. [US 17/172631] S Shrivastava. “Event-Based Vehicle Pose Estimation Using Monochromatic Imaging”. Vehicle pose estimation within an indoor infrastructure with statically mounted monocular cameras.

  9. [US 17/148994] P Chakravarty and S Shrivastava. “Multi-Degree-Of-Freedom Pose For Vehicle Navigation”. A weakly-supervised method of 6-DoF pose annotation for known objects by means of keypoints, visual tracking, and non-linear optimization.

  10. [US 17/224181] S Shrivastava, P Chakravarty, and G Pandey. “Neural Network Object Detection”. Multi-Camera assisted Semi-Supervised Monocular 3D Object Detection.