Software
roscpp and rospy (ROS/ROS2) - Used to develop UAV swarming applications (C++ and Python) for control, estimation, and path planning. example
PyTorch/TensorFlow - Used to train, deploy, and evaluate neural networks for DQN and DDPG flavors of reinforcement learning (example available after publication).
React/JavaScript/HTML/CSS - Used to develop a web-based frontend for a swarm ground control station. example
Docker - Used to package, test, and deploy UAV swarm application code. example
MAVLink (pymavlink/C) - Used to develop a ground control backend (see the pymavlink sketch after this list). example
Gazebo (formerly Ignition) - Used to run high-fidelity physics UAV swarm simulations.
ArduPilot (Copter and SITL) - Used to simulate multiple multicopters to test swarming application code prior to field deployment.
YOLOv8 - Used for object detection in robotic contexts.
Git
MATLAB
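A minimal sketch of the kind of pymavlink usage behind the ground control backend mentioned above: connect to an ArduPilot SITL vehicle, wait for its heartbeat, then read position telemetry. The UDP endpoint and the choice of message are illustrative assumptions, not taken from the linked example.

```python
# Minimal pymavlink sketch: connect to an ArduPilot SITL instance and print
# position telemetry. The UDP port (14550) is the usual SITL/GCS default and
# is an assumption; adjust for your setup.
from pymavlink import mavutil

# Listen for MAVLink traffic from SITL on the default UDP telemetry port.
conn = mavutil.mavlink_connection("udpin:127.0.0.1:14550")

# Block until the vehicle's heartbeat arrives so target_system/component are set.
conn.wait_heartbeat()
print(f"Heartbeat from system {conn.target_system}, component {conn.target_component}")

# Read a few global position messages (lat/lon arrive as degrees * 1e7,
# relative altitude in millimeters).
for _ in range(5):
    msg = conn.recv_match(type="GLOBAL_POSITION_INT", blocking=True, timeout=10)
    if msg is None:
        break
    print(f"lat={msg.lat / 1e7:.6f}, lon={msg.lon / 1e7:.6f}, "
          f"alt={msg.relative_alt / 1000:.1f} m")
```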
Hardware
Pixhawk flight controllers - Set up and configured several Cube-style autopilots for multicopters.
Small-Form-Factor Computers - Raspberry Pi 4 and NVIDIA Jetson Xavier for deploying application code.
Multicopter Hardware - Familiar with sourcing, building, modifying, and debugging custom systems built with brushless motors, ESCs, and COTS autopilots.
Actuators - Familiar with sourcing and using PMDC, stepper, brushless, and servo motors, as well as linear actuators.
Sensors - Familiar with using webcams, depth cameras, IMUs, load cells, rotary encoders, potentiometers, LIDAR, and sonar for perception needs.
Microcontrollers - Familiar with STM32, Atmel, and Pico microcontrollers and their SDKs for low-power control applications.
PCBs - Designed printed circuit boards (manufacturing outsourced) for custom UAV flight controllers and robotic manipulator applications.
Rapid Prototyping - Experience designing parts to be machined, 3D-printed, laser-cut, and injection-molded.
Conceptual
Controls - Familiar with designing, tuning, and implementing classical and optimal control strategies, including PID, LQR/LQI, and MPC (see the PID sketch after this list).
Dynamic Modeling - Familiar with modeling mechanical and electrical systems using Newtonian and Lagrangian mechanics for control design and stability analysis.
AI/ML - Familiar with reinforcement learning (Q-learning, DQN, DDPG), Markov Decision Processes, and basic supervised and unsupervised learning approaches for classification and prediction (see the Q-learning sketch after this list).
Robotics - Familiar with forward and inverse kinematics, graphical methods, path planning, and Jacobians in the context of classical robotic design.
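A minimal sketch of the discrete PID control mentioned above, closed around a toy first-order plant; the gains, timestep, and plant model are illustrative assumptions, not a flight-tested loop.

```python
# Minimal sketch of a discrete PID controller of the kind tuned for
# multicopter attitude/rate loops. Gains and timestep are illustrative.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # integral term (no anti-windup here)
        derivative = (error - self.prev_error) / self.dt  # backward-difference derivative
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive a simple first-order plant (x_dot = u - x) toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
state = 0.0
for _ in range(2000):
    u = pid.update(setpoint=1.0, measurement=state)
    state += (u - state) * 0.01  # Euler step of the plant dynamics
print(f"final state: {state:.3f}")
```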
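A minimal sketch of the tabular Q-learning update that underlies the DQN/DDPG work mentioned above, run on a toy corridor MDP; the environment and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of tabular Q-learning on a toy 1-D corridor MDP.
# All states, rewards, and hyperparameters are illustrative.
import random

n_states, n_actions = 5, 2          # states 0..4; actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(s, a):
    """Toy dynamics: reaching state 4 gives reward 1 and ends the episode."""
    s_next = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    reward = 1.0 if s_next == n_states - 1 else 0.0
    done = s_next == n_states - 1
    return s_next, reward, done

for _ in range(2000):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.randrange(n_actions) if random.random() < epsilon else Q[s].index(max(Q[s]))
        s_next, r, done = step(s, a)
        # Q-learning temporal-difference update
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print("Greedy policy (0=left, 1=right):", [q.index(max(q)) for q in Q])
```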