Stanford University
Graduate Program - Artificial Intelligence
Courses in Meta-Learning, Multi-Task Learning, Computer Vision, and 3D Reconstruction
August 2020 - Present
The University of Texas at Arlington
Master of Science in Electrical Engineering
August 2014 - August 2016
Visvesvaraya Technological University
Bachelor of Engineering in Electronics and Communication Engineering
First Class with Distinction, GPA: 4.0/4.0, Aggregate Percentage: 86%
Machine Learning and Computer Vision Research Scientist
September 2019 - Present
I work in a research-intensive environment within a small group focused on machine learning, computer vision, and robotics.
My research spans convolutional neural networks, generative adversarial networks, variational autoencoders, deep reinforcement learning, and 3D perception, with an emphasis on object detection, semantic learning, 3D scene understanding, multi-view geometry, and visual odometry.
Topics of Research:
Generative adversarial networks for realistic image/video generation from simulation data, with semantic and cycle consistency, to bridge the gap between the simulated and real domains (see the first sketch after this list).
RGB-D camera-, monocular RGB camera-, and LiDAR-based 3D object detection and classification in both indoor and outdoor environments.
Stereo visual odometry and topological mapping; monocular RGB camera-based localization (6-DoF pose) in real time.
Weakly-supervised 6-DoF vehicle pose annotation from known dimensions and camera parameters using a non-linear optimization method (see the second sketch after this list).
Multi-headed, multi-task neural networks for scene understanding, incorporating Sim2Real methods for zero-cost training of the networks.
Automated extrinsic calibration of multiple spatially distributed infrastructure sensors to a common global frame, followed by object detection and 6-DoF pose tracking in that global frame, all performed with a robot.
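A minimal sketch of the cycle-consistency term behind the sim-to-real generation work above, assuming PyTorch and two stand-in generator networks; the names G_sim2real and G_real2sim are placeholders, not the actual models:

```python
import torch.nn.functional as F

def cycle_consistency_loss(G_sim2real, G_real2sim, sim_batch, real_batch, lam=10.0):
    """CycleGAN-style L1 cycle loss: sim -> real -> sim and
    real -> sim -> real should each reconstruct the input batch."""
    sim_cycle = G_real2sim(G_sim2real(sim_batch))
    real_cycle = G_sim2real(G_real2sim(real_batch))
    return lam * (F.l1_loss(sim_cycle, sim_batch) +
                  F.l1_loss(real_cycle, real_batch))
```

This term is added alongside the usual adversarial losses; the weight lam=10.0 follows common CycleGAN practice and is an assumption here, not a value from the original work.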
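And a toy version of the weakly-supervised pose-annotation idea: recover a yaw-plus-translation pose by minimizing the reprojection error of the eight bounding-box corners, given the known vehicle dimensions and camera intrinsics. The intrinsics, the yaw-only parameterization, and all names are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

K = np.array([[720., 0., 640.],
              [0., 720., 360.],
              [0., 0., 1.]])               # assumed pinhole intrinsics

def box_corners(dims):
    """Eight corners of an (l, w, h) box centered at the origin."""
    l, w, h = dims
    signs = np.array([[sx, sy, sz] for sx in (1, -1)
                      for sy in (1, -1) for sz in (1, -1)], dtype=float)
    return signs * np.array([l, w, h]) / 2.0     # shape (8, 3)

def residuals(pose, dims, uv_obs):
    """Reprojection error of the box corners for pose = [tx, ty, tz, yaw]."""
    tx, ty, tz, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])  # yaw-only rotation
    pts = box_corners(dims) @ R.T + np.array([tx, ty, tz])
    proj = pts @ K.T
    uv = proj[:, :2] / proj[:, 2:3]              # perspective division
    return (uv - uv_obs).ravel()

# uv_obs: eight annotated 2D corners; dims: known (l, w, h) in meters
# fit = least_squares(residuals, x0=[0., 0., 10., 0.], args=(dims, uv_obs))
```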
Applications Engineer, Perception R&D - ADAS and Autonomous Driving
March 2017 - September 2019
I worked as part of a very small team building “Perception Quick Start”, the ADAS and autonomous-driving perception reference platform, which includes end-to-end solutions for camera- and LiDAR-based road-feature and object detection.
Developed a complete lane-detection pipeline from scratch. The pipeline covers lane-pixel extraction using a combination of classical computer vision and deep learning, lane detection, polynomial fitting, noise suppression, lane tracking, lane smoothing, confidence computation, lane extrapolation, lane-departure warning, lane offset, lane curvature, and lane-type classification.
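A rough sketch of the polynomial-fit and curvature steps of such a pipeline; the meters-per-pixel scale factors below are illustrative placeholders, not calibrated values:

```python
import numpy as np

def fit_lane(xs, ys, m_per_px=(0.0037, 0.030)):
    """Fit x = A*y^2 + B*y + C to lane pixels (image coords, y down)
    in metric space, then report the curvature radius in meters at
    the row closest to the vehicle."""
    mx, my = m_per_px                       # meters per pixel in x and y
    A, B, C = np.polyfit(ys * my, xs * mx, 2)
    y_eval = ys.max() * my                  # bottom of the image
    radius = (1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A)
    return (A, B, C), radius
```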
Developed a C-based computer vision library covering basic image-processing functions such as image read/write, Hough transforms, edge detection (Canny, horizontal, vertical), colorspace conversions, and image filtering (sharpen, Gaussian smooth, Sobel, emboss, edge). Also created an advanced math library with functions such as least-squares polynomial fitting and matrix operations.
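The filtering functions in such a library reduce to small 2D convolutions; the Sobel step is rendered below in Python for brevity, with a naive loop that mirrors what the original C routine would do:

```python
import numpy as np

SOBEL_X = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Naive valid-mode 2D convolution (kernel flipped, as in a
    true convolution rather than a correlation)."""
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * flipped)
    return out

def sobel_magnitude(img):
    """Gradient magnitude from horizontal and vertical Sobel responses."""
    return np.hypot(convolve2d(img, SOBEL_X), convolve2d(img, SOBEL_Y))
```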
Implemented stereo camera calibration, rectification, disparity-map generation, flat-road free-space estimation, object detection using V-disparity and 3D density-based clustering, and 3D point-cloud rendering with 3D bounding boxes and depth perception.
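A compact illustration of the V-disparity idea used for free-space and obstacle detection: histogram disparities per image row, then fit the dominant ground-plane line. A least-squares fit stands in here for the Hough-style fit typically used:

```python
import numpy as np

def v_disparity(disp, max_d=128):
    """Per-row disparity histogram; the flat road shows up as a
    slanted line in this (row x disparity) accumulator."""
    vd = np.zeros((disp.shape[0], max_d), dtype=np.int32)
    for r in range(disp.shape[0]):
        d = disp[r]
        d = d[(d > 0) & (d < max_d)].astype(int)
        np.add.at(vd[r], d, 1)
    return vd

def fit_ground_line(vd, min_votes=20):
    """Line d = a*row + b through strongly voted cells; pixels whose
    disparity sits well above this line belong to obstacles."""
    rows, disps = np.nonzero(vd >= min_votes)
    a, b = np.polyfit(rows, disps, 1)
    return a, b
```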
Developed a dynamic image-ROI stabilization module that corrects rotation and translation from angular pose/velocity data by computing and applying a homography at run time. Also developed a general-purpose positioning driver to bring GNSS/IMU data into the perception stack.
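For pure camera rotation, the stabilizing warp is the homography H = K R^T K^-1 built from the measured angles; a sketch under that assumption (the axis conventions and the commented cv2 call are illustrative):

```python
import numpy as np

def stabilizing_homography(K, roll, pitch, yaw):
    """Pixels under pure rotation map through K @ R @ inv(K), so
    applying the inverse rotation cancels the measured motion.
    Angles are IMU readings in radians; K is the intrinsic matrix."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1., 0., 0.], [0., cp, -sp], [0., sp, cp]])
    Ry = np.array([[cy, 0., sy], [0., 1., 0.], [-sy, 0., cy]])
    Rz = np.array([[cr, -sr, 0.], [sr, cr, 0.], [0., 0., 1.]])
    R = Rz @ Ry @ Rx
    return K @ R.T @ np.linalg.inv(K)   # inverse rotation stabilizes the ROI

# stabilized = cv2.warpPerspective(frame, stabilizing_homography(K, r, p, y), (w, h))
```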
Optimized embedded implementations of the algorithms for parallel computing on R-Car SoC hardware accelerators.
Developed the complete V2V solution from scratch for Renesas’ V2X platform, including a CAN framework, a GPS/INS driver, GPS+IMU fusion for localization, concise path-history computation, path prediction, CSV and KML logging, 360-degree lane-level target classification, basic safety applications, and a Qt HMI displaying warnings, vehicle tracking, maps, and debug information.
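The concise path-history computation thins the GPS breadcrumb trail while bounding the chord error, in the spirit of the SAE J2735 design; a simplified sketch assuming local x/y coordinates in meters and a 1 m error budget:

```python
def concise_path_history(points, max_err_m=1.0):
    """Keep the fewest breadcrumb points such that the perpendicular
    distance from every dropped point to the chord between kept
    points stays below max_err_m. points: list of (x, y) in meters,
    newest last."""
    kept, anchor = [0], 0
    for i in range(2, len(points)):
        p0, p1 = points[anchor], points[i]
        cx, cy = p1[0] - p0[0], p1[1] - p0[1]
        norm = (cx * cx + cy * cy) ** 0.5 or 1e-9
        errs = (abs(cx * (q[1] - p0[1]) - cy * (q[0] - p0[0])) / norm
                for q in points[anchor + 1:i])
        if max(errs) > max_err_m:
            kept.append(i - 1)    # last point that still satisfied the bound
            anchor = i - 1
    kept.append(len(points) - 1)
    return [points[k] for k in kept]
```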
Intelligent Vehicle Engineer (Connected Autonomous Vehicle Research Group)
August 2016 - March 2017
Worked within Changan's connected and autonomous vehicle research team to design and develop vehicle safety models for 360-degree target classification and warning notifications, with and without line-of-sight requirements.
Embedded Software Engineer (Intern)
May 2016 - August 2016
Worked with application teams, the forward-systems algorithm group, and controller-design groups to define functionality, develop algorithms, and implement them in accordance with the V-Model software development life cycle.
Software Development Intern (Board Support Package)
January 2016 - May 2016
Developed the BSP (Board Support Package) for custom hardware built around an i.MX6 Solo processor and several peripherals. Worked on low-level board bring-up and provided support for the following peripherals:
Support for a RAM file system for manipulating files at runtime.
Support for SPI NOR flash and parallel NOR flash, mounted as a filesystem at startup.
Support for removable storage (SD, microSD, USB flash), including auto-detection and auto-mounting on attachment.
Support for USB OTG, used for console service, mass-storage device, and USB-to-Ethernet adapter attachments.
Added new features to the QNX OS for the BSP, including auto-detection of the attachment type and switching accordingly between the device stack (providing console service), the host stack for auto-mounting mass-storage devices, and the host stack for networking over a USB-to-Ethernet adapter.
August 2015 - December 2015
Designed and developed the control GUI for a prosthetic system used in post-stroke rehabilitation. An Arduino controller adaptively adjusts each air bubble's pressure to a desired psi value at various points on the leg; the GUI lets the user enter the desired psi value per bubble while the current bubble pressure is measured and displayed in real time.
Used two Arduino UNO boards: one sending signals to 32 solenoids that control airflow from an Alicat mass flow controller into the respective air bubbles (and deflate them when required), and one receiving sense signals from the 32 corresponding air-pressure sensors.
Developed a communication protocol with ACK and NACK handling, implemented in a MATLAB GUI that sends commands to one Arduino and receives sense signals from the other, providing synchronization and feedback.
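The MATLAB side isn't reproduced here; below is a Python sketch of the same ACK/NACK handshake over a serial link using pyserial (the frame layout, byte values, and port name are illustrative assumptions):

```python
import serial  # pyserial

ACK, NACK = b'\x06', b'\x15'

def send_command(port, payload, retries=3):
    """Write a framed command and wait for ACK; resend on NACK or
    timeout. Frame = start byte + payload + XOR checksum."""
    checksum = 0
    for b in payload:
        checksum ^= b
    frame = b'\x02' + payload + bytes([checksum])
    for _ in range(retries):
        port.write(frame)
        if port.read(1) == ACK:    # read() honors the port timeout
            return True
    return False

# ctrl = serial.Serial('/dev/ttyACM0', 115200, timeout=0.5)  # solenoid Arduino
# ok = send_command(ctrl, bytes([bubble_id, target_psi]))
```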
January 2014 - May 2014
Designed and developed a two-dimensional plotter (Smart XY Plotter) at the Mechatronics Lab, IISc, capable of plotting any 2D image with a pen controlled by a solenoid and two stepper motors (providing x-, y-, and z-directional movement).
The control system was governed by an ARM processor (STM32F4 Discovery board); the image features to plot were extracted using MATLAB.
Developed a MATLAB GUI that lets the user either upload an image of their choice or select another plot type (arbitrary interpolated curves, text, shapes).
Used two timers to control and synchronize the parallel movement of the X and Y motors, allowing segments of any desired slope to be traced.
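The essence of the two-timer scheme: the faster axis steps at a fixed rate while the other axis is slowed in proportion, so both finish together and the pen traces a straight segment of the desired slope. A sketch of the period computation, with an assumed step rate:

```python
def step_periods(dx_steps, dy_steps, feed_hz=1000.0):
    """Timer periods (seconds per step) so the X and Y motors finish
    a segment simultaneously; the major axis runs at feed_hz and the
    minor axis is scaled by the step-count ratio."""
    major = max(abs(dx_steps), abs(dy_steps))
    if major == 0:
        return None, None                  # nothing to move
    px = major / (feed_hz * abs(dx_steps)) if dx_steps else None
    py = major / (feed_hz * abs(dy_steps)) if dy_steps else None
    return px, py
```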
After every plot, the solenoid setup was returned to its initial position; limit switches detected its arrival at the reset position.