- The FO_dualQuaternionContext_t object - Dual quaternions are an extension of quaternions that can handle both rotation and translation within the same mathematical framework. If you haven't encountered them before, I'm not surprised; few people seem to have used them outside of robotics, even though they've been around since the 1800s. I blame our use of pi instead of tau as the root cause, but be that as it may, read some papers about dual quaternions:
- Hand-Eye Calibration Using Dual Quaternions
- Dual Quaternions for Rigid Transformation Blending
- Dual Quaternions as a tool for Rigid Body Motion Analysis: A Tutorial with an Application to Biomechanics
- Dual Quaternion Synthesis of Constrained Robotic Systems
- The ability to add subcontexts to FO_context_t objects. This is why I needed the FO_dualQuaternionContext_t in the first place; it lets me specify how to twist a child frame into a parent frame's point of view.
- Cameras/sensors/antennas/robots can all be placed at the origin of their own reference frames. This means that if there are 3 robots, A, B, and C, A might want to know what C can see from A's point of view, and B might want to do the same thing from B's point of view. If we calculate C's output across the whole world from within its own point of view, and then translate the result into A's and B's points of view, we don't waste time and energy recalculating the whole mess multiple times in a row.
- If we have some relatively enclosed, many-times-repeated system, it makes more sense to calculate what it radiates/receives once, and then copy & translate it to new positions. For example, a really weird illumination source, like a candle within a lantern, may take a long time to calculate out, but once the light leaves the source, it is no longer affected by the glass, plastic, lava lamp fluid, etc. of the lantern. Save all of the illumination information, and replace the lantern with a sphere that radiates differently depending on which point of the sphere is visible. You still have to trace the rays out of the surface of the sphere, but you don't have to trace how the light bounces around inside the lantern to get out. The downside is that every lantern will be a copy of every other, but depending on what you're trying to model, this might be acceptable.
I've also updated the illumination API slightly; right now, the only things you can change are the total energy and the frequency of the illumination. In the future, I'm hoping to add the ability to specify ranges of frequencies, but that will require a great deal of time, and a great deal more thought on how to do it well. Note that the illumination specifies something akin to a single photon or light packet. That means that one illumination source will generate many, many of these photon-like objects at a time. I'm not sure if that is the best way of doing things yet, but it's a start.
Finally, and perhaps most oddly, I've decided to use the GNU MP Library to pass in all scalar information as rational numbers. Internally, I use floats or doubles, but by having the interface use rationals, I leave open the possibility of arbitrary precision sometime in the future. At the moment, my plan is to use the rationals to calculate combined rotations (since I can have an arbitrarily deep ancestor/descendant chain, accuracy can be a problem), and then convert the rotations into floats for use on the GPU. That way, if you have a higher-precision card (one that uses doubles instead of floats), the precision can scale up automatically. In short, there are no built-in limits in my API!