Mechatronics / Robotics Projects

Bringing a little hardware into it... Here are a few mechatronics projects I've built.

Quadcopter

I built this with a friend back in 2013. We sourced the parts and assembled the drone ourselves. The body is carbon fibre with aluminum motor mounts. For flight control we used the open-source MultiWii software running on an Arduino. The code is tuned for a specific kit frame, but it can be re-tuned with some understanding of PID controllers. It's a little tricky, since there is a separate PID loop for each axis... we definitely crashed and went through some spare parts while working that out. Also, a warning to anybody who wants to build one of these: make sure your max throttle never exceeds the capacity of any individual motor. Otherwise the drone loses stability catastrophically, and it's a bug that only shows up when you start flying aggressively.
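The throttle problem lives in the motor mixer: each motor's command is base throttle plus or minus the per-axis PID corrections, so at high throttle a correction can push one motor past its maximum and silently clip, killing the attitude control on that axis. Here's a minimal sketch of one way to preserve headroom (this is not the actual MultiWii code; the quad-X sign convention, function names, and the 1000-count command range are illustrative):

```python
def mix_motors(throttle, roll_out, pitch_out, yaw_out, max_cmd=1000):
    # Quad-X mixing: each motor gets base throttle plus/minus axis corrections.
    m = [
        throttle + roll_out + pitch_out - yaw_out,  # front-left
        throttle - roll_out + pitch_out + yaw_out,  # front-right
        throttle + roll_out - pitch_out + yaw_out,  # rear-left
        throttle - roll_out - pitch_out - yaw_out,  # rear-right
    ]
    # If any motor would exceed its maximum, shift all four down together.
    # This keeps the *differences* between motors (the attitude corrections)
    # intact, instead of silently clipping one motor at full throttle --
    # the failure mode described above.
    overflow = max(m) - max_cmd
    if overflow > 0:
        m = [cmd - overflow for cmd in m]
    return [min(max(cmd, 0), max_cmd) for cmd in m]
```

The alternative of simply clamping each motor independently is exactly what produces the aggressive-flight instability: the clipped motor stops responding to corrections while the others keep moving.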

The drone was very fun to fly. With a 5000 mAh 3S battery, we could get around 35 minutes of flight, or carry a payload of up to 4 lbs for about 15 minutes.


Arcade Style - Wallace and Gromit Scene (Final Project for 218A)

This was our final project for Stanford's ME218A: Smart Product Design Fundamentals, the first course in a series of mechatronics courses. The project was a way for us all to get comfortable integrating basic circuit design, mechanical design, and firmware.

The project is an interactive game based on "Wallace and Gromit: A Grand Day Out". The game indicates it is ready for play by opening the launch bay to reveal our spaceship. Start the launch sequence by lighting the fuse, represented by the red LEDs. Before the fuse burns down and time runs out, prepare your ship for a journey to the moon by completing challenges! Fuel up by loading enough crackers for your picnic on the moon. Then heat up your engines, stoking the flames by blowing on the fan!
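Under the hood, a game like this is a small state machine: idle, ready (bay open), fuse burning, then launch or timeout. A hedged sketch of that flow (the state names, challenge names, and 60-second fuse are my placeholders, not our actual firmware):

```python
class LaunchGame:
    FUSE_SECONDS = 60  # assumed fuse length, not our real timing

    def __init__(self):
        self.state = "IDLE"
        self.challenges = {"fuel_crackers": False, "stoke_engines": False}

    def start(self, now):
        # Lighting the fuse starts the countdown (red LEDs burn down).
        self.state = "FUSE_BURNING"
        self.deadline = now + self.FUSE_SECONDS

    def complete(self, name):
        # Called when a challenge sensor (cracker scale, fan) trips.
        self.challenges[name] = True

    def tick(self, now):
        # Poll once per control-loop iteration.
        if self.state != "FUSE_BURNING":
            return self.state
        if all(self.challenges.values()):
            self.state = "LAUNCH"    # ship blasts off
        elif now >= self.deadline:
            self.state = "TIMEOUT"   # fuse burned down first
        return self.state
```

In real firmware the `tick` call would sit in the main loop and drive the LED fuse animation as a side effect.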


Autonomous Tank (Final Project for 218B)

This was my first autonomous robot! We had ~3 weeks to build and program it. It was the final project in the second course of our mechatronics series (ME 218). The objective was to knock down the towers in the order specified by a central commander. The bot queries the commander remotely to request the next target. Each time a target is disabled, the bot queries the commander for the next one, which is randomly selected. At the end of the run, the robot navigates back home, parking within the red tape.

How does the bot locate itself and its targets? Sensors:

  • Tape Following: There are brightness sensors on the bottom of the bot.
  • IR Sensors: When a tower is activated, it also illuminates an IR emitter.
  • Wheel Encoders: We used dead-reckoning in between sensor readings. The bot had a hardcoded map of all the target locations, and the location it intended to fire nerf balls from.
  • Cannon Encoder / PID: The main nerf cannon was a wheel that spun up to shoot nerf balls. We printed a circle with striped spokes and glued it to the side of the wheel to use as an encoder, which actually worked pretty well. We controlled the wheel/projectile velocity with a PID algorithm.
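The flywheel speed loop from the last bullet is a plain PID on the encoder-measured velocity. A minimal sketch (the class name, gains, and units are placeholders, not our actual firmware):

```python
class VelocityPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, target, measured, dt):
        # target/measured: wheel speed from the striped-spoke encoder.
        error = target - measured
        self.integral += error * dt
        # Finite-difference derivative; skipped on the first sample.
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # Output feeds the motor command (clamping omitted for brevity).
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

In practice you would also clamp the integral term to avoid windup while the wheel spins up from rest.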
The bot used dead reckoning and tape following to make its way to the 'T' at the end of the tape. Then it queried the commander and started rotating to find targets. We were definitely stretching the precision of all the sensors we were using, but we managed to get it all working.
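The dead-reckoning piece boils down to integrating wheel-encoder ticks into a pose estimate between sensor readings. A minimal differential-drive sketch (the encoder resolution and wheel geometry below are illustrative constants, not our bot's actual parameters):

```python
import math

TICKS_PER_REV = 360   # assumed encoder resolution
WHEEL_RADIUS = 0.03   # meters, illustrative
WHEEL_BASE = 0.20     # distance between wheels, illustrative

def update_pose(x, y, theta, left_ticks, right_ticks):
    # Convert encoder ticks to distance traveled by each wheel.
    per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = left_ticks * per_tick
    d_right = right_ticks * per_tick
    # Average forward motion and differential rotation.
    d = (d_left + d_right) / 2
    d_theta = (d_right - d_left) / WHEEL_BASE
    # Integrate along the midpoint heading, assuming the heading
    # changes little over a single update.
    x += d * math.cos(theta + d_theta / 2)
    y += d * math.sin(theta + d_theta / 2)
    return x, y, theta + d_theta
```

The error in this estimate grows with distance, which is why the hardcoded target map was periodically corrected against the tape sensors and the IR bearing to the active tower.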

Our team consisted of three people. I focused on the firmware, while the other two focused on hardware and electronics. Of course, we all chipped in wherever we could.

Around the 40-second mark in the video, the commander fails to register that one of the towers is disabled. It's been a while, but I think our bot was confident enough that it sent an override requesting that the commander advance to the next target.