Introduction
This guide covers ways to figure out where your robot is on the field and what it can see, from reading AprilTags with a Limelight to wheel encoder odometry.
Localization
Localization is figuring out where the robot is in the world. There are many ways to go about this, from counting encoder ticks to reading AprilTags on the field. This section documents several of them.
Using AprilTags with Limelight
The Limelight 3A (hopefully the one you have) is a Very Fancy Camera™ that can automatically detect AprilTags and estimate your robot's field position using MegaTag2.
Setting up AprilTags
[!NOTE] Limelight's official AprilTag guide may be more helpful and up to date, and it's recommended you read it first! This section only contains some tips and tricks.
- If the camera blurs when moving at high speeds, try lowering your exposure.
Reading AprilTags
I recommend having a function you can call to get your results from the Limelight. The one I have used for a couple of years is:
/**
 * Gets the current Limelight result, or null if there isn't a valid one.
 * @param yaw The current yaw of the robot in degrees, straight from the IMU
 * @param yawOffsetFromOrigin The yaw offset, in case the robot starts in a different
 *        orientation than what you want the field origin's rotation to be
 * @return Limelight data (LLResult), or null if the result is not valid.
 */
public LLResult getLimelightData(double yaw, double yawOffsetFromOrigin) {
    // Add the yaw offset and wrap the result into (-180, 180].
    // Note: Java's % can return negative values, so normalize into [0, 360) first.
    // If you are using odometry, make sure odometry is also aware of this offset.
    double limelightOrientation = ((yaw + yawOffsetFromOrigin) % 360 + 360) % 360;
    if (limelightOrientation > 180) limelightOrientation -= 360;
    // Update the robot orientation for MegaTag2 and grab the result
    limelight.updateRobotOrientation(limelightOrientation);
    LLResult result = limelight.getLatestResult();
    // Check that the result exists and is valid
    if (result != null && result.isValid()) {
        return result;
    }
    // Return null if not valid
    return null;
}
And to use the result:
LLResult result = getLimelightData(yawDeg, yawOffsetDeg);
if (result != null) {
    Position mt2 = result.getBotpose_MT2().getPosition();
    double x = mt2.x;
    double y = mt2.y;
}
Conclusion
Even with MegaTag2, Limelight data can be jittery. It's up to you to figure out how to deal with this! At higher speeds Limelight data is definitely less reliable, but it is decently stable at lower speeds. Be sure to test this if you are planning to use it on your robot.
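One simple way to tame jitter is an exponential moving average over the reported position. This is only a sketch of one possible approach, not something from the Limelight SDK; the class and its names are hypothetical, and more principled filters (like a Kalman filter) exist.

```java
/**
 * Minimal exponential-moving-average smoother for jittery pose estimates.
 * alpha = 1.0 trusts the newest reading completely; smaller values smooth more
 * but lag behind the true position. (Hypothetical helper, not part of the SDK.)
 */
class PoseSmoother {
    private final double alpha;
    private double x, y;
    private boolean initialized = false;

    PoseSmoother(double alpha) {
        this.alpha = alpha;
    }

    /** Feed in a new (possibly noisy) pose estimate. */
    void update(double newX, double newY) {
        if (!initialized) {
            // First reading: just take it as-is
            x = newX;
            y = newY;
            initialized = true;
        } else {
            // Move a fraction alpha of the way toward the new reading
            x += alpha * (newX - x);
            y += alpha * (newY - y);
        }
    }

    double getX() { return x; }
    double getY() { return y; }
}
```

You would call `update(mt2.x, mt2.y)` each loop whenever `getLimelightData` returns a valid result, and read the smoothed values instead of the raw ones. Tune `alpha` on your own robot.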
Using AprilTags with a Camera
I haven't done this yet 3:
Adding Wheel Encoder Odometry
Odometry is figuring out where the robot is using its wheel encoder positions. This can usually be broken into three parts:
- Figure out the change in position relative to the robot in encoder ticks (forward kinematics)
- Rotate this vector to world space
- Convert encoder ticks to real world units (like meters)
This document will focus on Mecanum drivetrains, but the concepts can be applied to other drivetrains.
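The three steps above can be sketched as a single update function. This is a rough outline under assumptions explained in the sections that follow: the constant `TICKS_TO_METERS` is derived in the unit conversion section, and the signs and directions should be verified on your own robot.

```java
/**
 * One odometry update for a mecanum drivetrain:
 * forward kinematics (in ticks) -> rotate to field frame -> scale to meters.
 * A sketch; TICKS_TO_METERS and the sign conventions are assumptions to verify.
 */
class MecanumOdometry {
    // Ticks-to-meters conversion (see the unit conversion section for how to derive it)
    private static final double TICKS_TO_METERS = 0.00044;

    private double fieldX = 0, fieldY = 0;

    /** deltas are encoder tick changes since the last update; yawRads from the IMU. */
    void update(double dFL, double dFR, double dBL, double dBR, double yawRads) {
        // Step 1: forward kinematics, robot-relative movement in encoder ticks
        double dxLocal = (dFL + dFR + dBL + dBR) / 4.0;
        double dyLocal = (-dFL + dFR + dBL - dBR) / 4.0;
        // Step 2: rotate the local vector into the field frame
        double dx = Math.cos(yawRads) * dxLocal - Math.sin(yawRads) * dyLocal;
        double dy = Math.sin(yawRads) * dxLocal + Math.cos(yawRads) * dyLocal;
        // Step 3: convert ticks to meters and accumulate into the field position
        fieldX += dx * TICKS_TO_METERS;
        fieldY += dy * TICKS_TO_METERS;
    }

    double getX() { return fieldX; }
    double getY() { return fieldY; }
}
```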
Forward Kinematics
This is the most complicated part of odometry: figuring out how we actually convert encoder ticks to a vector that the robot is currently moving. Fortunately, a lot of smarter people have already figured this out for us: \[ \left\{\begin{aligned}\ v_x & = (\omega_{fl} + \omega_{fr} + \omega_{rl} + \omega_{rr}) \cdot \frac{r}{4}\\ v_y & = (-\omega_{fl} + \omega_{fr} + \omega_{rl} - \omega_{rr}) \cdot \frac{r}{4}\\ \omega_z & = (-\omega_{fl} + \omega_{fr} - \omega_{rl} + \omega_{rr}) \cdot \frac{r}{4(l_x + l_y)} \end{aligned}\right. \]
We don't need to worry about rotation (\( \omega_z \)); instead, we can use the IMU to measure it directly! The Java equivalent of this would be:
double dxLocal = (deltaFrontLeft + deltaFrontRight + deltaBackLeft + deltaBackRight) / 4.0;
double dyLocal = (-deltaFrontLeft + deltaFrontRight + deltaBackLeft - deltaBackRight) / 4.0;
Rotating the vector
Now, we have a movement vector relative to the robot in encoder ticks. The next step is to rotate this vector to be in terms of field. We can use trigonometry to figure this out: \[ \begin{split}x_2 = \cos(\beta) x_1 - \sin(\beta) y_1 \\ y_2 = \sin(\beta) x_1 + \cos(\beta) y_1\end{split} \] The Java code for this is going to be:
double dx = cos(yawRads) * dxLocal - sin(yawRads) * dyLocal;
double dy = sin(yawRads) * dxLocal + cos(yawRads) * dyLocal;
Convert encoder ticks to Real World Units
The last step is to convert the units of the vector to real world units. Use this formula as a starting point:
\(W_{C}\): Wheel Circumference (in the units you want to convert to)
\({T_{rev}}\): Ticks per rev at the motor (Counts per Revolution)
\({G_{r}}\): Gear reduction
\[ \frac{W_{C}}{T_{rev}\cdot G_{r}} \] For example, using REV 75mm Mecanum Wheels with an HD Hex Motor and a 4:1 + 5:1 UltraPlanetary Gearbox (nominally a 20:1 gear ratio), we get:
\(W_{C} = 0.075m \cdot \pi \approx 0.2356m\)
\({T_{rev}} = 28\)
\({G_{r}} = 18.8803\)
[!CAUTION] REV gear ratios are not exact; check their specifications for the exact numbers. This is why our gear reduction is 18.8803, not 20.
\[ \frac{W_{C}}{T_{rev}\cdot G_{r}}=\frac{0.2356}{28 \cdot 18.8803}\approx 0.000446 \]
[!WARNING] In practice, this is only an estimation. How far your robot moves depends on way too many variables to calculate exactly. To improve on this, use a program to run your robot a set amount of ticks, and measure how far it moved, then use this data to refine the number.
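The worked example above can be sanity-checked in code. This is a small sketch using the REV numbers quoted in this section (75 mm wheel, 28 ticks per motor revolution, 18.8803:1 reduction); swap in your own hardware's values.

```java
/**
 * Computes the ticks-to-meters conversion factor:
 * wheel circumference / (ticks per rev * gear reduction).
 */
class TicksToMeters {
    static double conversionFactor(double wheelDiameterMeters,
                                   double ticksPerRev,
                                   double gearReduction) {
        double wheelCircumference = wheelDiameterMeters * Math.PI;
        return wheelCircumference / (ticksPerRev * gearReduction);
    }

    public static void main(String[] args) {
        // 75mm mecanum wheel, HD Hex Motor (28 CPR), 18.8803:1 UltraPlanetary reduction
        double factor = conversionFactor(0.075, 28, 18.8803);
        System.out.println(factor); // roughly 4.46e-4 meters per tick
    }
}
```

Remember the warning above: treat this number as a starting point and refine it by driving a known number of ticks and measuring the actual distance.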
You can then add this result to your position. From my testing, I have found that the actual signs and directions of these vectors may differ, and you may need to negate some values based on testing; this usually comes down to motor directions and coordinate conventions, so verify against your own robot.
Conclusion
If you want to research this topic further, here are the sources we used:
- Rotating a vector: https://matthew-brett.github.io/teaching/rotation_2d.html
- Kinematics: https://ecam-eurobot.github.io/Tutorials/mechanical/mecanum.html
Vision Recognition
This section covers recognizing objects on the field using the vision pipeline.