
CURRENT ROLE

Roles and Responsibilities:

- Implemented and validated pose prediction for mining trucks
- Implemented, tested, and validated the Object Awareness feature for all types of Caterpillar trucks
- Implemented the state machine for the trucks
- Troubleshot and debugged the implemented features
- Tested the features on the ECM and on a prototype
- Refactored several features and the sensor data display
- Worked on camera diagnostics


 

On contract 

June 2021 - Present

PREVIOUS PROJECTS

Project | 01
DEEP ORANGE 12

Project | 02
UAV + UGV
The main objective of this project is to integrate an Unmanned Aerial Vehicle (UAV) with an Unmanned Ground Vehicle (UGV).
The UAV captures an image feed that is processed and used by the UGV.
Various motion planning techniques are currently being tested to find the best method for this purpose.
Project | 03
Implementation of a simple Adaptive Cruise Control in an RC car (Control)

The main objective of this project was to maintain a safe distance from obstacles, using ultrasonic sensors mounted on the front and sides of the vehicle. A Kalman filter was implemented to obtain accurate distance measurements from the sensors. Steering was controlled with a PID loop: the difference between the left and right sensor readings was taken as the error input, and the steering angle command to the servo motor was computed from it. To maintain a safe distance from obstacles ahead, the front sensor reading was monitored, and when it dropped below a set distance the throttle input was set to zero.
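Below is a minimal sketch of this control loop in Python, assuming filtered distance readings in centimeters; the PID gains, the stop distance, and the function names are illustrative placeholders rather than values from the actual build.

```python
# Minimal sketch of the steering PID and throttle cutoff described above.
# Gains and the stop distance are illustrative, not the tuned values.

KP, KI, KD = 0.8, 0.01, 0.2    # hypothetical PID gains
STOP_DIST_CM = 30.0            # hypothetical safe-distance threshold

integral, prev_error = 0.0, 0.0

def control_step(left_cm, right_cm, front_cm, dt):
    """Return (steering_angle, throttle) from filtered sensor distances."""
    global integral, prev_error
    error = left_cm - right_cm          # side-to-side imbalance drives steering
    integral += error * dt
    derivative = (error - prev_error) / dt
    prev_error = error
    steering = KP * error + KI * integral + KD * derivative
    throttle = 0.0 if front_cm < STOP_DIST_CM else 1.0  # cut throttle when too close
    return steering, throttle
```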

Project | 04
Autonomous navigation of a TurtleBot in a simulated environment (ROS, OpenCV, obstacle avoidance and detection)

First stage: Wall following:

A lidar mounted on the TurtleBot was used to measure the bot's distance from the wall, and the velocity and steering angle were adjusted so that the bot maintained a fixed distance from the wall and steered accordingly.
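A minimal Python sketch of this wall-following logic, assuming a lidar scan indexed by degree with 0 straight ahead; the target offset, gain, and index range are illustrative.

```python
# Minimal sketch of wall following: a proportional controller on the
# difference between the left-wall distance and a desired offset.

DESIRED_OFFSET_M = 0.5   # hypothetical target distance from the wall
KP_STEER = 1.5           # hypothetical proportional gain

def wall_follow_step(scan_ranges):
    """scan_ranges: lidar distances indexed by degree, 0 = straight ahead."""
    left = min(scan_ranges[80:100])          # distance to the wall on the left
    error = DESIRED_OFFSET_M - left
    steer = KP_STEER * error                 # steer toward/away from the wall
    speed = 0.2                              # constant forward velocity (m/s)
    return steer, speed
```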

Second stage: Obstacle avoidance:

The distances obtained from the lidar were used to maintain a safe distance and steer away from obstacles. A sweep from 0 to 90 degrees on either side of center was used to determine the steering angles.

Third stage: Line following:

The camera feed was processed with OpenCV to find the centroid of the line, and the bot's steering angles were adjusted to follow that centroid.
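A minimal sketch of the centroid step in Python/OpenCV, assuming a dark line on a light floor; the threshold value is illustrative.

```python
# Minimal sketch of extracting the line centroid from a BGR camera frame.
import cv2
import numpy as np

def line_centroid(frame):
    """Return the (x, y) centroid of the line in the frame, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)  # dark line -> white mask
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                      # no line pixels found
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```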

Fourth stage: STOP Sign detection:

Using a Tiny YOLO model, the STOP sign was identified, and the bot's velocity was set to zero for three seconds.

Fifth stage: Leg tracking:

The pose of the person's legs was obtained and the distance to that pose was computed. Steering and forward velocity inputs were given based on the distance between the bot and the legs.

Project | 05
AprilTag tracking:
 
The camera mounted on the TurtleBot was calibrated to obtain its intrinsic parameters; OpenCV was used to perform the calibration.
An AprilTag detection package was used to track the position of the AprilTag.
A simple controller tracking the midpoint of the AprilTag was used to generate steering angles for the TurtleBot.
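A minimal sketch of the OpenCV chessboard calibration step, assuming a folder of checkerboard images; the 9x6 pattern size and the file path are illustrative.

```python
# Minimal sketch of camera calibration with OpenCV.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)                                   # inner corners per row/column
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):              # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the intrinsic matrix K and distortion coefficients.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```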
Project | 06
Collision avoidance: 3 agents, 8 agents
 
Distance-dependent, reactive force-based approach: the distances between neighboring agents were computed and used to estimate the time to collision. A cost function based on each agent's velocity and goal velocity was formulated; its constant parameters were tuned manually, and the function was minimized to obtain the new velocity, which was then applied.
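A minimal Python sketch of this idea for a single agent and one neighbor; the agent radius, weights, and the candidate-velocity grid are illustrative placeholders, not the tuned parameters from the project.

```python
# Minimal sketch of time-to-collision-based velocity selection.
import numpy as np

def time_to_collision(p, v, p_other, v_other, radius=0.5):
    """Smallest t >= 0 at which two discs of the given radius touch."""
    w = p_other - p                       # relative position
    dv = v - v_other                      # closing velocity
    a = dv @ dv
    b = -2 * (w @ dv)
    c = w @ w - (2 * radius) ** 2
    disc = b * b - 4 * a * c
    if a == 0 or disc < 0:
        return np.inf                     # never collide on current velocities
    t = (-b - np.sqrt(disc)) / (2 * a)
    return t if t >= 0 else np.inf

def choose_velocity(p, v_goal, p_other, v_other, w_ttc=1.0):
    """Pick the candidate velocity minimizing goal deviation + collision cost."""
    best, best_cost = v_goal, np.inf
    for vx in np.linspace(-1, 1, 21):
        for vy in np.linspace(-1, 1, 21):
            v = np.array([vx, vy])
            cost = np.sum((v - v_goal) ** 2) \
                 + w_ttc / time_to_collision(p, v, p_other, v_other)
            if cost < best_cost:
                best, best_cost = v, cost
    return best
```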
Project | 07
Discrete planning using A* and Dijkstra
 
The A* and Dijkstra motion planning algorithms were used to plan a path to a goal in a 2D grid. From the start, adjacent nodes were explored; costs and heuristics were calculated for the adjacent nodes to determine the next node to expand.
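A minimal Python sketch of A* on a 4-connected grid; with the heuristic set to zero it reduces to Dijkstra. The grid encoding (0 = free, 1 = obstacle) is an assumption.

```python
# Minimal A* sketch on a 4-connected 2D grid.
import heapq
import itertools

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col) tuples."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()                  # tie-breaker so the heap never compares nodes
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue                         # already expanded via a cheaper path
        came_from[node] = parent
        if node == goal:                     # walk parents back to recover the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (node[0] + dr, node[1] + dc)
            if (0 <= nb[0] < len(grid) and 0 <= nb[1] < len(grid[0])
                    and grid[nb[0]][nb[1]] == 0
                    and g + 1 < best_g.get(nb, float("inf"))):
                best_g[nb] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nb), next(tie), g + 1, nb, node))
    return None                              # goal unreachable
```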
Project | 08
PRM planner-based navigation:

 

Path planning for 2D objects through obstacles: random sampling was performed to create the roadmap vertices, and the path from the initial position to the final position was found using the A* search algorithm.
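A minimal sketch of the roadmap-construction step, assuming a unit-square workspace with circular obstacles; the sample count and connection radius are illustrative, and the straight-line edge collision check is omitted for brevity.

```python
# Minimal PRM roadmap-construction sketch.
import random
import math

def build_prm(obstacles, n_samples=200, connect_radius=0.15):
    """obstacles: list of (cx, cy, r). Returns (vertices, adjacency dict)."""
    free = lambda p: all(math.dist(p, (cx, cy)) > r for cx, cy, r in obstacles)
    verts = [p for p in ((random.random(), random.random())
                         for _ in range(n_samples)) if free(p)]
    adj = {i: [] for i in range(len(verts))}
    for i in range(len(verts)):
        for j in range(i + 1, len(verts)):
            d = math.dist(verts[i], verts[j])
            if d < connect_radius:           # straight-line collision check of the
                adj[i].append((j, d))        # edge omitted here for brevity
                adj[j].append((i, d))
    return verts, adj
```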
Project | 09
Motion planning for a Dubins car using RRT:
 
Implemented the RRT (Rapidly-exploring Random Tree) algorithm to plan the path of a non-holonomic car (the Dubins car model) from an initial state to a goal state. The node nearest to each new random sample is chosen from the current tree, and a new vertex is added subject to the set of state constraints. The distance between the tree and the random state is limited by a growth factor: if the random state lies far from the nearest node in the tree, a vertex is instead generated at the maximum step distance from that node along the line to the random sample. The random samples decide the direction of tree growth, while the growth factor determines its rate.
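A minimal Python sketch of the RRT extend step; the Dubins steering function is replaced here by a straight-line step for brevity, and the step size is illustrative.

```python
# Minimal sketch of one RRT extend step in 2D.
import random
import math

def rrt_step(tree, sample, step=0.5):
    """tree: list of (point, parent_index). Adds one vertex toward sample."""
    nearest_i = min(range(len(tree)),
                    key=lambda i: math.dist(tree[i][0], sample))
    nx, ny = tree[nearest_i][0]
    d = math.dist((nx, ny), sample)
    if d <= step:
        new = sample                                  # sample is within reach
    else:                                             # clip to the max step distance
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d)
    tree.append((new, nearest_i))
    return new

tree = [((0.0, 0.0), None)]                           # root at the start state
for _ in range(100):
    rrt_step(tree, (random.uniform(0, 10), random.uniform(0, 10)))
```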
Project | 10
Autonomous Navigation using MATLAB:
 
Autonomous lane keeping: lanes were detected in the camera feed using the Hough transform. The camera was calibrated with the Camera Calibrator app in MATLAB.
Road sign recognition: the Deep Learning Toolbox was used to recognize road signs, with a large set of road sign images used to train the deep learning model.
Communication: the identified road sign information was displayed in a second MATLAB program; communication was established between the two MATLAB programs running on two different laptops.
Vehicle controls: the Stanley controller algorithm was used to find the steering angle, and an HMI was incorporated to display the control commands.
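The project itself used MATLAB; the following Python/OpenCV sketch shows an equivalent Hough-transform lane detection step, with illustrative thresholds.

```python
# Equivalent sketch of the Hough-transform lane detection step.
import cv2
import numpy as np

def detect_lane_lines(frame):
    """Return line segments (x1, y1, x2, y2) found in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                 # edge map feeds the transform
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)
    return [] if lines is None else [l[0] for l in lines]
```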
Project | 11
Behavior cloning (Deep Learning, CNN):
 
The connection between Udacity's self-driving simulator and the Python script was established using Flask and sockets in Python.
The CNN was trained by first driving the car manually in the simulator and capturing images (front, left, and right) at each instant, along with the steering angle and velocity.
The trained CNN model was saved and then used to predict the velocity and steering angles.
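A minimal Keras sketch of a behavior-cloning CNN in the spirit of this setup; the layer sizes and input shape follow the well-known NVIDIA PilotNet layout and are illustrative, not the exact network used here.

```python
# Minimal behavior-cloning CNN sketch (PilotNet-style).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(66, 200, 3)),            # cropped, resized camera frame
    layers.Lambda(lambda x: x / 127.5 - 1.0),    # normalize pixels to [-1, 1]
    layers.Conv2D(24, 5, strides=2, activation="relu"),
    layers.Conv2D(36, 5, strides=2, activation="relu"),
    layers.Conv2D(48, 5, strides=2, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(100, activation="relu"),
    layers.Dense(50, activation="relu"),
    layers.Dense(2),                             # predicted steering angle and velocity
])
model.compile(optimizer="adam", loss="mse")
```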

OTHER RELEVANT WORKS:

Active contour and Sobel filter (Computer Vision):
The image was first processed with a Sobel operator. The active contour was formed by calculating internal and external energies: the internal energies were based on the distances between adjacent contour points, and the external energy was based on the Sobel gradient.
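A minimal Python sketch of the two energy terms, for a closed contour given as an (N, 2) array of points and a precomputed Sobel gradient-magnitude image; the exact formulation here is illustrative.

```python
# Minimal sketch of the snake internal/external energy terms.
import numpy as np

def internal_energy(contour):
    """Penalize uneven spacing between adjacent contour points."""
    diffs = np.diff(contour, axis=0, append=contour[:1])  # closed contour
    seg_lengths = np.linalg.norm(diffs, axis=1)
    return np.sum((seg_lengths - seg_lengths.mean()) ** 2)

def external_energy(contour, sobel_mag):
    """Reward contour points that sit on strong Sobel gradients."""
    rows = contour[:, 1].astype(int)
    cols = contour[:, 0].astype(int)
    return -np.sum(sobel_mag[rows, cols])
```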
 
Region Interaction (Computer Vision):
GUI: a GUI allows the user to select the color for the pixels and the mode of region growing.
Region growing: the user supplies the absolute-difference and distance-from-centroid thresholds, along with the mode in which region growing runs. There are two modes: Step and Play. Based on these parameters, each candidate pixel is either added to the region or rejected.
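A minimal Python sketch of threshold-based region growing from a seed pixel on a grayscale image; the absolute-difference threshold is illustrative, and the centroid-distance test and GUI modes are omitted for brevity.

```python
# Minimal region-growing sketch (4-connected flood from a seed pixel).
from collections import deque
import numpy as np

def grow_region(img, seed, max_diff=15):
    """Return a boolean mask of pixels within max_diff of the seed intensity."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(float(img[nr, nc]) - seed_val) <= max_diff:
                mask[nr, nc] = True               # pixel joins the region
                queue.append((nr, nc))
    return mask
```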
 
Range Image Segmentation (Computer Vision) [C]:
Surface normals were calculated using the cross product, and region growing was used to segment regions. The region predicate: a pixel can join a region if its orientation is within a threshold of the average orientation of the pixels already in the region, with the angular difference calculated using the dot product. The region-growing code was modified to recalculate the average after every new pixel joins the region, and different thresholds were tried to obtain clean results.
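A minimal sketch of the per-pixel normal and the dot-product predicate, written in Python here for readability although the project itself was in C; the neighbor choice and angle threshold are illustrative.

```python
# Minimal sketch of the surface normal and region predicate.
import numpy as np

def pixel_normal(points, r, c):
    """points: (H, W, 3) array of 3D points from the range image."""
    dx = points[r, c + 1] - points[r, c]      # vector to the right neighbor
    dy = points[r + 1, c] - points[r, c]      # vector to the lower neighbor
    n = np.cross(dx, dy)
    return n / np.linalg.norm(n)

def within_threshold(normal, region_avg_normal, max_angle_deg=10.0):
    """Region predicate: angular difference via the dot product."""
    cos_angle = np.clip(np.dot(normal, region_avg_normal), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) <= max_angle_deg
```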
 
Camera, Radar, and Lidar data handling and visualization:
 
The nuScenes dataset was used for the lidar and radar data. The lidar binary files were parsed and visualized using OpenCV, and the lidar point clouds were colorized by height, intensity, and semantic label. The radar data were colorized by height and velocity, and the radar points were also projected onto the front camera image.
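A minimal NumPy sketch of parsing one lidar sweep; nuScenes stores each lidar point as five float32 values (x, y, z, intensity, ring index), and the file name here is illustrative.

```python
# Minimal sketch of parsing a nuScenes lidar .bin sweep.
import numpy as np

points = np.fromfile("sweep.pcd.bin", dtype=np.float32).reshape(-1, 5)
xyz = points[:, :3]           # 3D coordinates used for height colorization
intensity = points[:, 3]      # reflectance used for intensity colorization
```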
Sampling, 2D convolution, line detection, parking-space polygon detection:
 
The Lenna image was used to perform the downsampling, RGB-to-gray conversion, 2D convolution, and Sobel gradient tasks.
An image of a parking lot was used for the line detection task; the Hough line transform was used for this purpose. The parking space polygons were plotted after performing line detection (pardon my weird color combination!).
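A minimal NumPy sketch of the plain 2D convolution task, shown with a Sobel-x kernel; this naive implementation favors clarity over speed.

```python
# Minimal 2D convolution sketch on a grayscale image.
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def convolve2d(img, kernel):
    """Same-size 2D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(float), ((ph, ph), (pw, pw)))
    out = np.empty_like(img, dtype=float)
    flipped = kernel[::-1, ::-1]               # convolution flips the kernel
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * flipped)
    return out
```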