I've found some more free time and am trying to reassert my commitment to paying it forward by adding a little something to Cinder via the CiSimplePose block, but I've got a question I'm hoping to get a second opinion on.
Brief prerequisite knowledge: I'd like the block to be as straightforward as possible. CiSimplePose uses straightforward homography-based pose estimation, yielding a view matrix of the 'real-world' camera (in virtual coordinates) observing a square of known size. I know that sounds weird, but it basically means the method takes in a set of four 2D image coordinates and calculates where the camera must be positioned and oriented if we pretend the tag we're viewing sits at the origin.
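In pseudocode, the estimation step boils down to something like this (the function names are made up for illustration; the block's internals may well differ):

```
corners    = detectTagCorners( image )                  // four 2D points, in pixels
H          = computeHomography( tagModelCorners, corners ) // maps tag plane -> image plane
(R, t)     = decomposeHomography( H, cameraIntrinsics ) // rotation + translation of the camera
viewMatrix = [ R | t ]                                  // pose of the real camera w.r.t. the tag
```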
This is great, as it allows us to fairly easily merge multiple observed tags into the same space: simply get a view matrix for each tag, apply it so they all land in view space, then apply the inverse view matrix of the virtual camera we actually want to use. Here's where my question comes in:
The Camera class in Cinder is a bit closed off (for good reason, I'd assume): it's not possible to directly set any of the matrices. To make the most of this pose estimation business, it'd be best to tweak the projection matrix of the virtual camera so it more closely matches the intrinsics (field of view, principal point) of the real camera observing the AR tags. To this end I've been thinking: why not just inherit from Camera and make some sort of ARCamera class that sets the projection matrix that way?
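Here's a stripped-down sketch of the inheritance trick I have in mind. `BaseCamera` merely stands in for `ci::Camera`, and the member name `mProjectionMatrix` is my assumption about Cinder's internals, so treat this as an illustration of the approach rather than working block code:

```cpp
#include <array>

using Mat4 = std::array<float, 16>; // column-major, as in GLM

// Stand-in for ci::Camera: the projection lives in a protected member,
// so outside code can't touch it, but a subclass can.
class BaseCamera {
public:
    const Mat4 &getProjectionMatrix() const { return mProjectionMatrix; }
protected:
    Mat4 mProjectionMatrix{}; // assumed name; Cinder's actual member may differ
};

// The proposed ARCamera: same interface as the base camera, plus a setter
// that overwrites the projection with one built from the real camera's
// intrinsics.
class ARCamera : public BaseCamera {
public:
    void setProjectionMatrix( const Mat4 &m ) { mProjectionMatrix = m; }
};
```

Users could then pass an ARCamera anywhere a Camera is expected, and the block would only need this one extra setter.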
So far this is the best approach I've been able to come up with. I'd really like to build on the Camera class, as it's about as user-friendly as you can get. The alternative, of course, is to meddle with GLM's myriad of matrix-manipulation functions directly. While I'm getting somewhat familiar with those again, I'd prefer users of this block to just go:
1) Create Virtual Camera
2) Init Real Camera
3) Init CiSimplePose
4) Point the virtual camera wherever you'd like and AR objects will always be placed in relation to it!
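In C++-flavored pseudocode, with entirely hypothetical names for the block's API, the four steps above might look something like:

```
auto virtualCam = ARCamera();                   // 1) create the virtual camera
auto capture    = Capture( 640, 480 );          // 2) init the real camera feed
auto pose       = CiSimplePose( intrinsics );   // 3) init the block

// 4) each frame: detect tags, then draw relative to the virtual camera
auto tags = pose.detectTags( capture.getSurface() );
for( auto &tag : tags ) {
    gl::setModelMatrix( inverse( virtualCam.getViewMatrix() ) * tag.getViewMatrix() );
    drawArObject();
}
```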