Projects

Integrated System

F1tenth Autonomous Racing Car 2024
This project is based on F1tenth, an open-source platform for small-scale autonomous racing. I implemented the full autonomy stack and am preparing for the upcoming 15th F1tenth Autonomous Grand Prix at IEEE ICRA 2024. (The video is at 1.0x speed.)
Pipeline:
Stage 1: Reactive control: follow the gap and avoid obstacles (see the sketch after this list).
Stage 2: Pre-built map, interpolated global path, particle filter localization, and a PID controller.
Stage 3: Map built with the SLAM Toolbox and refined with OpenCV, a time-optimized global path, RRT* for local path planning, and a geometric controller.
Stage 4: MPC as the local planner and controller with a kinematic model.
Stage 5: MPC with a dynamic side-slip model.
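Here is a minimal sketch of the Stage 1 follow-the-gap logic, assuming a standard 2D LiDAR scan array; the function name, bubble radius, and range cap are illustrative placeholders, not the actual race code.

```python
import numpy as np

def follow_the_gap(ranges, angle_min, angle_increment,
                   bubble_radius=0.5, max_range=3.0):
    """Return a steering angle (rad) toward the widest free gap in a LiDAR scan."""
    scan = np.clip(np.asarray(ranges, dtype=float), 0.0, max_range)

    # 1. Zero out a safety bubble around the closest obstacle.
    closest = int(np.argmin(scan))
    beams = int(bubble_radius / (max(scan[closest], 1e-3) * angle_increment))
    scan[max(0, closest - beams):closest + beams + 1] = 0.0

    # 2. Find the longest run of free (non-zero) beams.
    free = scan > 0.0
    best_len, best_start, run_start = 0, 0, None
    for i, f in enumerate(np.append(free, False)):
        if f and run_start is None:
            run_start = i
        elif not f and run_start is not None:
            if i - run_start > best_len:
                best_len, best_start = i - run_start, run_start
            run_start = None
    if best_len == 0:
        return 0.0  # no free gap found: go straight and let the safety layer brake

    # 3. Steer toward the farthest point inside that gap.
    target = best_start + int(np.argmax(scan[best_start:best_start + best_len]))
    return angle_min + target * angle_increment
```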
What's different:
1. I implemented a safety layer to avoid collisions with other vehicles and pedestrians.
2. I improved RRT* with biased sampling, sample rejection, graph sparsification, delayed collision checking, and Python computation tricks, cutting the runtime cost by 80%.
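As an illustration of two of the item 2 changes (biased sampling and sample rejection), here is a minimal sketch of the sampler; the occupancy-grid layout, goal-bias probability, and function name are assumptions rather than the race code.

```python
import numpy as np

def sample_free(grid, resolution, origin, goal, goal_bias=0.1, rng=np.random):
    """Draw a collision-free RRT* sample, biased toward the goal.

    grid:       2D occupancy array, nonzero = occupied (assumed layout).
    resolution: meters per cell; origin: (x, y) of cell (0, 0).
    goal:       (x, y) goal position in meters.
    """
    h, w = grid.shape
    while True:
        # Goal bias: occasionally return the goal itself to pull the tree toward it.
        if rng.random() < goal_bias:
            x, y = goal
        else:
            x = origin[0] + rng.random() * w * resolution
            y = origin[1] + rng.random() * h * resolution

        # Sample rejection: discard points that fall inside occupied cells,
        # so the tree never wastes nearest-neighbor or rewiring work on them.
        col = int((x - origin[0]) / resolution)
        row = int((y - origin[1]) / resolution)
        if 0 <= row < h and 0 <= col < w and grid[row, col] == 0:
            return np.array([x, y])
```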
Quadrotor’s Planning and Control 2024
This project implements state estimation, planning, trajectory optimization, and control for a quadrotor from scratch. Find the code here.
What's different:
1. State estimation: I implemented a quaternion-based UKF and a complementary filter, which improved efficiency by 20% compared with a rotation-matrix implementation. Because of limited onboard computational resources, I deployed the complementary filter on the onboard IMU (see the first sketch after this list).
2. Path planning: A* is used, together with engineering tricks such as a diagonal-distance heuristic and a cross-product tie breaker, achieving a 20x planning speedup on a 3D grid map (see the second sketch after this list). Further improvements include integrating the dynamic model into state-space planning, e.g., state lattice search, kinodynamic RRT*, or hybrid A*.
3. Trajectory optimization: kinodynamically constrained planning essentially reduces to solving a boundary-constrained problem in Cartesian space. I implemented a minimum-snap trajectory optimization algorithm over the path that A* produces.
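For item 1, here is a minimal sketch of one complementary-filter step. The deployed version is quaternion-based; a roll/pitch (Euler-angle) form is shown here because it exposes the structure more plainly, and the blending gain and axis conventions are assumptions.

```python
import numpy as np

def complementary_filter(roll, pitch, gyro, accel, dt, alpha=0.98):
    """One complementary-filter step for roll and pitch (radians).

    gyro:  (p, q, r) body angular rates in rad/s.
    accel: (ax, ay, az) specific force in m/s^2.
    alpha: weight on the integrated gyro; (1 - alpha) trusts the accelerometer.
    """
    # High-frequency path: propagate attitude with the gyro (small-angle approximation).
    roll_gyro = roll + gyro[0] * dt
    pitch_gyro = pitch + gyro[1] * dt

    # Low-frequency path: gravity direction measured by the accelerometer.
    ax, ay, az = accel
    roll_acc = np.arctan2(ay, az)
    pitch_acc = np.arctan2(-ax, np.hypot(ay, az))

    # Blend: the gyro handles fast motion, the accelerometer removes the drift.
    roll = alpha * roll_gyro + (1 - alpha) * roll_acc
    pitch = alpha * pitch_gyro + (1 - alpha) * pitch_acc
    return roll, pitch
```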
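For item 2, here is a minimal sketch of the two A* tricks on a 26-connected 3D grid; the tie-breaker weight is an assumption.

```python
import math

SQRT2, SQRT3 = math.sqrt(2.0), math.sqrt(3.0)

def diagonal_heuristic_3d(node, goal):
    """Admissible diagonal-distance heuristic for a 26-connected 3D grid."""
    d = sorted(abs(n - g) for n, g in zip(node, goal))  # d[0] <= d[1] <= d[2]
    # d[0] corner-diagonal moves, (d[1]-d[0]) face-diagonal moves, (d[2]-d[1]) straight moves.
    return (SQRT3 - SQRT2) * d[0] + (SQRT2 - 1.0) * d[1] + d[2]

def cross_tie_breaker(node, start, goal, weight=1e-3):
    """Tiny penalty on deviation from the straight start-goal line.

    Breaks ties between equal-f nodes so A* expands nodes close to the
    straight line first instead of flooding the whole equal-cost band.
    """
    dn = [n - g for n, g in zip(node, goal)]
    ds = [s - g for s, g in zip(start, goal)]
    # Norm of the 3D cross product of the two goal-relative vectors.
    cx = dn[1] * ds[2] - dn[2] * ds[1]
    cy = dn[2] * ds[0] - dn[0] * ds[2]
    cz = dn[0] * ds[1] - dn[1] * ds[0]
    return weight * math.sqrt(cx * cx + cy * cy + cz * cz)

def heuristic(node, start, goal):
    return diagonal_heuristic_3d(node, goal) + cross_tie_breaker(node, start, goal)
```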
Pick and Place Challenge 2023
The project aims at picking up static and dynamic blocks and stacking them. I led my team to win first place with a fast and stable algorithm. Find the competition recording here.
What's different:
1. It has a robust pose-matching algorithm: the end effector always grasps the block in a desired pose (with the camera facing forward), even with large errors in the pose estimation.
2. Trajectory planning: I used RRT* + offline interpolation + velocity-profile planning + a lookup-table speedup, while other teams used hardcoded positions. My method doubled the speed.
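As a rough illustration of the velocity-profile step in item 2, here is a minimal sketch of a trapezoidal profile along a path of known length; the limits and sampling rate are assumptions, not the competition values.

```python
import numpy as np

def trapezoidal_profile(path_length, v_max, a_max, dt=0.01):
    """Time-parameterize a path: accelerate at a_max, cruise at v_max, decelerate at a_max.

    Returns (t, s): sample times and arc-length positions along the path.
    """
    t_acc = v_max / a_max
    d_acc = 0.5 * a_max * t_acc ** 2
    if 2 * d_acc > path_length:
        # Path too short to reach v_max: fall back to a triangular profile.
        t_acc = np.sqrt(path_length / a_max)
        d_acc = 0.5 * a_max * t_acc ** 2
        t_cruise = 0.0
    else:
        t_cruise = (path_length - 2 * d_acc) / v_max
    v_peak = a_max * t_acc

    t = np.arange(0.0, 2 * t_acc + t_cruise + dt, dt)
    s = np.empty_like(t)
    for i, ti in enumerate(t):
        if ti < t_acc:                        # acceleration phase
            s[i] = 0.5 * a_max * ti ** 2
        elif ti < t_acc + t_cruise:           # cruise phase
            s[i] = d_acc + v_peak * (ti - t_acc)
        else:                                 # deceleration phase
            td = min(ti - t_acc - t_cruise, t_acc)
            s[i] = d_acc + v_peak * t_cruise + v_peak * td - 0.5 * a_max * td ** 2
        s[i] = min(s[i], path_length)
    return t, s
```

In this sketch, the sampled arc lengths would be mapped back onto the interpolated waypoints; caching such results offline is one way a lookup table can remove the online cost.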

Reinforcement learning

PPO in Continuous Control 2024
I implemented PPO and related actor-critic (AC) algorithms for continuous control tasks. Find the code here.
What's different:
I tried a set of implementation tricks to improve performance, showing that implementation details matter and that only some of them help in the Walker environment.
They are advantage, state, and reward normalization (positive); reward scaling (negative); policy entropy (negative); learning rate decay (positive); gradient clipping (positive); orthogonal initialization (neutral); the Adam optimizer epsilon parameter (neutral); and the tanh activation function (positive).
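A minimal sketch of how two of the positive tricks (advantage normalization and gradient clipping) fit into a single PPO policy update, assuming PyTorch, a policy network that returns a torch.distributions object with independent components per action dimension, and placeholder hyperparameters.

```python
import torch

def ppo_update(policy, optimizer, obs, actions, old_log_probs, advantages,
               clip_eps=0.2, max_grad_norm=0.5):
    """One PPO policy update with advantage normalization and gradient clipping."""
    # Advantage normalization: zero mean, unit variance per batch.
    advantages = (advantages - advantages.mean()) / (advantages.std() + 1e-8)

    # Clipped surrogate objective.
    dist = policy(obs)                           # assumed: returns a torch.distributions object
    log_probs = dist.log_prob(actions).sum(-1)   # assumed: independent Gaussian per action dim
    ratio = torch.exp(log_probs - old_log_probs)
    surr1 = ratio * advantages
    surr2 = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    loss = -torch.min(surr1, surr2).mean()

    optimizer.zero_grad()
    loss.backward()
    # Gradient clipping: bound the global gradient norm before the optimizer step.
    torch.nn.utils.clip_grad_norm_(policy.parameters(), max_grad_norm)
    optimizer.step()
    return loss.item()
```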

Perception

Mono ORB Visual Odometry 2023
A minimal monocular visual odometry implementation in Python. Find the code here.
Pipeline:
1. Frame Processing: Converts each frame to grayscale to simplify and expedite subsequent feature detection and matching steps.
2. Feature Extraction and Matching: Leverages ORB for efficient feature detection and FLANN with LSH for fast, accurate feature matching, optimizing the identification of corresponding points across frames.
3. Motion Estimation: Uses RANSAC for robust Essential Matrix estimation to derive the rotation matrix and translation vector, filtering outliers and ensuring reliable motion estimation (see the sketch below).
4. Pose Update: Updates camera pose using calculated motion vectors, generating precise projection matrices for depth estimation and 3D reconstruction.
5. Triangulation: Triangulates matching points between successive frames to reconstruct 3D scene geometry, utilizing the camera's intrinsic parameters and updated poses for spatial reconstruction.
• Further improvements could include G2O pose optimization, database maintenance, and multi-threading for faster processing.
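A minimal sketch of steps 2 and 3 of the pipeline above with OpenCV, assuming the camera intrinsic matrix K is known; the ORB feature count, LSH index parameters, ratio-test threshold, and RANSAC settings are illustrative.

```python
import cv2
import numpy as np

def estimate_motion(prev_gray, curr_gray, K):
    """ORB features + FLANN(LSH) matching + RANSAC essential matrix -> (R, t)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)

    # FLANN with an LSH index, suited to ORB's binary descriptors.
    index_params = dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1)
    flann = cv2.FlannBasedMatcher(index_params, dict(checks=50))
    matches = flann.knnMatch(des1, des2, k=2)

    # Lowe's ratio test to keep only distinctive matches.
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Essential matrix with RANSAC, then recover the relative rotation and translation.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```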

SLAM

Humanoid Robot SLAM 2023
I implemented particle filter SLAM in an indoor environment using measurements from an IMU and a LiDAR sensor. The data were collected from a humanoid robot named THOR, built at Penn and UCLA. Find a video about the robot here. The goal is to estimate the robot's pose and build an occupancy grid map of the surroundings.
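A minimal sketch of the mapping half of that loop: a log-odds occupancy update for one scan, assuming the beam endpoints have already been transformed into grid cells using the particle's pose; the increment and clamp values are assumptions.

```python
import numpy as np

def bresenham(r0, c0, r1, c1):
    """Grid cells on the line from (r0, c0) to (r1, c1), endpoint excluded."""
    cells = []
    dr, dc = abs(r1 - r0), abs(c1 - c0)
    sr, sc = (1 if r1 > r0 else -1), (1 if c1 > c0 else -1)
    err = dr - dc
    r, c = r0, c0
    while (r, c) != (r1, c1):
        cells.append((r, c))
        e2 = 2 * err
        if e2 > -dc:
            err -= dc
            r += sr
        if e2 < dr:
            err += dr
            c += sc
    return cells

def update_map(log_odds, pose_cell, hit_cells, l_occ=0.9, l_free=-0.4, l_clamp=20.0):
    """Log-odds occupancy update for one LiDAR scan.

    pose_cell: (row, col) of the particle's pose; hit_cells: beam endpoint cells.
    """
    for hit in hit_cells:
        # Cells crossed by the beam are observed free; the endpoint is observed occupied.
        for r, c in bresenham(*pose_cell, *hit):
            log_odds[r, c] += l_free
        log_odds[hit] += l_occ
    # Clamp so a long run of agreeing measurements can still be overturned later.
    np.clip(log_odds, -l_clamp, l_clamp, out=log_odds)
    return log_odds
```

Thresholding the log-odds values then gives the occupancy grid, and in the usual particle filter SLAM setup each particle is weighted by how well its scan agrees with this map before resampling.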

Control and Hardware

National Engineering Practice 2021
National Silver Prize among over two hundred universities.
In this project, we custom-built a logistics cart from the ground up. It automatically detects the color of objects and places them in their designated spots on the shelves.
After implementing an array of sophisticated control strategies for tracking and precise positioning, we found that a finely tuned hard-coded speed controller, especially after refining the curved trajectories, offered the most reliable performance. This underscored a valuable lesson: complexity isn't inherently superior; adapting the solution to the specific demands of the task is what matters.
• Find the video here.
• The result was featured on the website of the MAE department.
Grand Theft Autonomous 2023
In this competition, I also led my team to win first place.
1. Designed a unique mechanical structure and circuit layout for the robot, along with a number of functional modules that can be mounted on it, following a modular and expandable design;
2. For control, designed the motor control circuit and implemented a PID speed control algorithm to achieve good dynamic tracking of the speed profile (see the sketch below);
3. Used ToF sensors and photoresistors as the main sensing tools and the Vive system as the main localization tool, on top of which we realized wall following, detecting a light source flashing at a specific frequency and automatically navigating to it, and automatic grasping;
4. Used I2C, UART, and other wired links to transmit sensor data, adopted ESP's peer-to-peer communication protocol ESP-NOW to communicate between multiple control boards, and used UDP and TCP/IP to broadcast the robot's coordinates and status information;
5. Created a web page using HTML and HTTP communication to teleoperate the robot manually.
• Find the competition recording on the GRASP Lab YouTube channel.
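A minimal sketch of the PID speed loop from item 2, written in Python for consistency with the other sketches on this page rather than as the embedded firmware; the gains, output limits, and integral clamp are assumptions.

```python
class SpeedPID:
    """Incremental PID for wheel-speed control (target and measurement in rad/s)."""

    def __init__(self, kp, ki, kd, out_limit=1.0, i_limit=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit      # e.g. PWM duty cycle in [-1, 1]
        self.i_limit = i_limit          # anti-windup clamp on the integral state
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_speed, measured_speed, dt):
        error = target_speed - measured_speed

        # Integrate with anti-windup clamping.
        self.integral += error * dt
        self.integral = max(-self.i_limit, min(self.integral, self.i_limit))

        derivative = (error - self.prev_error) / dt
        self.prev_error = error

        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.out_limit, min(u, self.out_limit))
```

In this sketch, update() would be called at a fixed rate with a measured wheel speed, and its output would feed the motor driver's PWM duty cycle.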
6-DoF Stewart Platform 2024
Because of the achievement in Grand Theft Autonomous 2023, I was invited by Prof. Mark Yim, the director of the GRASP Lab, to build a 6-DoF Stewart platform for the UPenn Design School.
• Developed a set of self-calibration algorithms to ensure the accuracy of motion control (see the sketch below).
• Built a PD position controller that commands the linear actuators' velocity, using feedback from position sensors.
• Developed trajectory tracking with polynomial interpolation of desired trajectories recorded by a Vicon motion capture system.
• Find the code here.
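Both the self-calibration and the trajectory tracking sit on top of the platform's inverse kinematics, so here is a minimal sketch of it; the Euler-angle convention is an assumption, and the anchor coordinates would come from the actual geometry.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """ZYX Euler angles to a rotation matrix."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    return np.array([
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ])

def stewart_leg_lengths(pose, base_anchors, platform_anchors):
    """Inverse kinematics: leg lengths for a desired 6-DoF platform pose.

    pose:             (x, y, z, roll, pitch, yaw) of the platform frame.
    base_anchors:     6x3 array of joint positions in the base frame.
    platform_anchors: 6x3 array of joint positions in the platform frame.
    """
    t = np.asarray(pose[:3])
    R = rotation_matrix(*pose[3:])
    # Each leg vector runs from its base anchor to the transformed platform anchor.
    legs = (R @ platform_anchors.T).T + t - base_anchors
    return np.linalg.norm(legs, axis=1)
```

In this sketch, the actuator setpoints would be these lengths minus each actuator's neutral length, which is one natural place for self-calibration offsets to enter.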