I was hoping to not post about this until I had some fully working
preliminary code, but I decided that getting some external advice
would probably be healthy at this point. I've been re-writing my
old homography-based pose estimation code that I built way back as part of
my thesis. The goal is to have a completely self-contained cinder
block that does nothing more than detect tags and provide camera pose estimates.
Basically SimplePose is supposed to be a nice starting point for
anyone who wants simple tag detection with more-or-less full freedom
over the code via the new BSD Licence. ARToolKit is LGPL with some
exemptions which can be tricky to incorporate as far as I know. OpenCV
is under the BSD licence but does a ton of other stuff, and besides
there's already a cinder block for that.
A secondary goal is of course to have something new and nice to put
on my CV, as well as give back to the open source community, given how
much I feel I've already benefited from cinder.
The reason I'm posting now is that I've gotten to the
point in the code where I need to calculate a homography based on the
detected corners in an image. A somewhat complex, but not impossible,
task. I've so far done all I can to not rely on a big library like
OpenCV. Partially just for the satisfaction of doing it myself, but
also because I feel it makes the project more unique/useful in its
contribution. But is this foolish?
I suppose even if the answer is a resounding yes, I'll probably
still try and make a complete working version with no OpenCV code,
just to get it done, but I'm keen to hear what others think.
Do you think someone might stand to benefit from this block I'm building?
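For the curious, the homography step I mentioned boils down to solving a small linear system built from the four corner correspondences (a direct linear transform). Here's a rough, self-contained sketch of the idea with no OpenCV — the helper names (findHomography, applyHomography) and the Gauss-Jordan solver are my own stand-ins, not final API:

```cpp
#include <array>
#include <cmath>
#include <utility>

using Mat3 = std::array<std::array<double, 3>, 3>;
struct Pt { double x, y; };

// Estimate the 3x3 homography H mapping four source points to four
// destination points by fixing H[2][2] = 1 and solving the resulting
// 8x8 linear system with Gauss-Jordan elimination (partial pivoting).
Mat3 findHomography( const std::array<Pt, 4>& src, const std::array<Pt, 4>& dst )
{
    double A[8][9] = {}; // augmented system: 8 equations, 8 unknowns + rhs
    for( int i = 0; i < 4; ++i ) {
        const double x = src[i].x, y = src[i].y;
        const double u = dst[i].x, v = dst[i].y;
        double *r0 = A[2 * i], *r1 = A[2 * i + 1];
        // u * (h6 x + h7 y + 1) = h0 x + h1 y + h2
        r0[0] = x; r0[1] = y; r0[2] = 1; r0[6] = -u * x; r0[7] = -u * y; r0[8] = u;
        // v * (h6 x + h7 y + 1) = h3 x + h4 y + h5
        r1[3] = x; r1[4] = y; r1[5] = 1; r1[6] = -v * x; r1[7] = -v * y; r1[8] = v;
    }
    for( int c = 0; c < 8; ++c ) {
        int p = c; // partial pivoting: pick the largest entry in column c
        for( int r = c + 1; r < 8; ++r )
            if( std::fabs( A[r][c] ) > std::fabs( A[p][c] ) ) p = r;
        for( int k = 0; k < 9; ++k ) std::swap( A[c][k], A[p][k] );
        for( int r = 0; r < 8; ++r ) { // eliminate above and below the pivot
            if( r == c ) continue;
            const double f = A[r][c] / A[c][c];
            for( int k = c; k < 9; ++k ) A[r][k] -= f * A[c][k];
        }
    }
    Mat3 H;
    H[0] = { A[0][8] / A[0][0], A[1][8] / A[1][1], A[2][8] / A[2][2] };
    H[1] = { A[3][8] / A[3][3], A[4][8] / A[4][4], A[5][8] / A[5][5] };
    H[2] = { A[6][8] / A[6][6], A[7][8] / A[7][7], 1.0 };
    return H;
}

// Apply H to a point, including the homogeneous divide.
Pt applyHomography( const Mat3& H, Pt p )
{
    const double w = H[2][0] * p.x + H[2][1] * p.y + H[2][2];
    return { ( H[0][0] * p.x + H[0][1] * p.y + H[0][2] ) / w,
             ( H[1][0] * p.x + H[1][1] * p.y + H[1][2] ) / w };
}
```

A real implementation would want to normalize the points first and handle degenerate (collinear) corner configurations, but for four corners in general position this is the whole trick.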
Good observation :) I chose that license because I simply don't
want the code to end up in a commercial piece of warping software.
It's meant to be a free alternative to software costing thousands
per *projector*, which I think is insane.
But you have my permission to use that piece of code I linked to.
And I should mention that it's a recent addition that I found here. I will add that link to the source code.
Much appreciated, Paul. I've updated the readme.md to reflect the
inclusion and customized the leading comment in the file that includes
the code originating from your source. Let me know if you'd like me
to change/add more to the wording.
I've run into a small snag that perhaps someone can give me
some advice regarding. In my original implementation, I altered the
OpenGL perspective projection matrix to fit the intrinsic parameters
of the camera observing a real-world tag. However, the perspective
camera provided in Cinder only seems to allow indirect access to the
perspective matrix.
More importantly however, the formula I used back in the day to
produce the perspective projection matrix is detailed here:
It seems as though the indirect function provided by cinder
(setPerspective) doesn't enable me to set the matrix as detailed
in the link. I've looked a bit at the setPerspective function, but
my impression is that the only way to get the values set as detailed in
the link is to bypass the aforementioned function. Or am I missing something?
I don't think the CameraPersp
class allows you to set the view and/or projection matrices directly,
as it cannot reliably extract information from them, like the field of view
or the near and far plane distances.
Do you really need the CameraPersp class?
Because you can also set the matrices directly (gl::setViewMatrix) and
construct them using the convenient GLM functions (like glm::lookAt).
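To illustrate what glm::lookAt hands back, here's a plain-C++ sketch of the same math — Vec3 and the vector helpers below are my own stand-ins, not GLM types, and the matrix is written row-major for readability (GLM stores it column-major):

```cpp
#include <array>
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3   sub( Vec3 a, Vec3 b )   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot( Vec3 a, Vec3 b )   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3   cross( Vec3 a, Vec3 b ) { return { a.y * b.z - a.z * b.y,
                                                 a.z * b.x - a.x * b.z,
                                                 a.x * b.y - a.y * b.x }; }
static Vec3   normalize( Vec3 a )     { double l = std::sqrt( dot( a, a ) );
                                        return { a.x / l, a.y / l, a.z / l }; }

// Build a view matrix from an eye position, a target, and an up vector,
// the same way glm::lookAt does: an orthonormal camera basis plus the
// translation that moves the eye to the origin.
std::array<std::array<double, 4>, 4> lookAt( Vec3 eye, Vec3 center, Vec3 up )
{
    const Vec3 f = normalize( sub( center, eye ) ); // forward
    const Vec3 s = normalize( cross( f, up ) );     // right
    const Vec3 u = cross( s, f );                   // corrected up
    std::array<std::array<double, 4>, 4> M = {};
    M[0] = {  s.x,  s.y,  s.z, -dot( s, eye ) };
    M[1] = {  u.x,  u.y,  u.z, -dot( u, eye ) };
    M[2] = { -f.x, -f.y, -f.z,  dot( f, eye ) };    // camera looks down -z
    M[3] = {  0.0,  0.0,  0.0,  1.0 };
    return M;
}
```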
I suppose not. I've tried replacing the perspective camera's
behavior directly with that provided by GLM. I'm usually very
cautious and careful with OpenGL when starting out, as I find it hard
to debug what's wrong when nothing is visible at all.
So with the included code I thought I'd mimic exactly what
I'm seeing when I'm using the perspective camera class. But
when I use the code below, the camera is miles away compared to the
perspective camera version. Additionally, the white (non-colored) cube
is on the top instead of the bottom visually. I'm thoroughly confused.
I must be missing something.
The matrix obtained from glm::lookAt is a
view matrix, so you should use gl::setViewMatrix( m )
instead of gl::setModelMatrix().
Additionally, you'd probably want to set the model matrix to
identity if you want to mimic gl::setMatrices(). Your code would then
look something like this:

gl::setModelMatrix( glm::translate( vec3( 0, 0, 0 ) ) ); // identity
//gl::setMatrices( mVirtualCam );
gl::setProjectionMatrix( mMatProjection );
gl::setViewMatrix( mMatModelView ); // rename to mMatView
gl::bindStockShader( gl::ShaderDef().color() );
gl::drawColorCube( vec3( 0 ), vec3( 5 ) );
gl::drawColorCube( vec3( 0, 10, 0 ), vec3( 5 ) );
gl::drawColorCube( vec3( 0, -10, 0 ), vec3( 5 ) );
P.S.: The reason why in your
version the cube is far away and upside down, is probably because by
default the system is set up for 2D drawing and you were not
overriding the view matrix but the model matrix.
Good advice all round. Unfortunately, I'm still experiencing
the exact same problem:
With Cinder's Perspective Camera:
With the GLM Matrices:
I'm wondering if perhaps the perspective camera sets up the
right GLSL program as well, and the default shader that I'm
assigning perhaps doesn't even include a model, view, or
projection matrix in its transformation calculations?
It turns out glm::perspective chews on radians and not degrees by default. I didn't
even consider that the FOV could invert the resulting image. I guess
it's all just due to speed being of the utmost importance, so a
negative FOV is just allowed to pass through despite being
physically impossible. I'd think.
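For anyone else who hits this, the trap is easy to demonstrate without GLM at all. A glm::perspective-style matrix scales by 1 / tan( fovy / 2 ); feed it 60.0 (degrees) where it expects radians and tan( 30 rad ) happens to be negative, so the scale flips sign and the image renders upside down. The helper below is just my illustration of that term:

```cpp
#include <cmath>

// The y-scale term of a glm::perspective-style projection matrix.
// fovy must be in RADIANS; a degrees value slips through silently.
double perspectiveScale( double fovyRadians )
{
    return 1.0 / std::tan( fovyRadians * 0.5 );
}
```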
I know I've been quiet for a while, but when I finished refactoring my code, nothing worked. It took me some time to get back into 3D space calculations 'n such, as well as to track down column/row-major bugs.
I'm glad to announce that my pose estimator now seems to be solid and back on track. I've still got a bunch of cleanup to do, as well as some more testing, but hopefully I can have a usable version up and ready within a few weeks' time.
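In case it saves someone else the same debugging session: the column/row-major bugs all came down to the fact that OpenGL and GLM store a 4x4 matrix as 16 scalars column by column, so the translation (tx, ty, tz) sits at flat indices 12, 13, 14 — in GLM that's m[3][0..2], i.e. m[column][row]. A tiny illustration (the helper name is mine):

```cpp
#include <array>

// A translation matrix laid out column-major, the way OpenGL/GLM expect
// the 16 floats in memory. Note the translation lives in the LAST four
// slots (column 3), not in the last column of a row-major layout.
std::array<float, 16> translationColumnMajor( float tx, float ty, float tz )
{
    return { 1,  0,  0,  0,    // column 0
             0,  1,  0,  0,    // column 1
             0,  0,  1,  0,    // column 2
             tx, ty, tz, 1 };  // column 3: the translation
}
```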