More Motion Capture Thoughts…

Looking at cameras and possible target parts.

The solutions I see out there tend to use active or passive targets attached to the actor and tracked by multiple cameras. Most seem to favor faster cameras over higher-resolution ones.

I’m looking at the $8.00-ish Fosmon USB 6 LED 1.2 Megapixel USB PC Webcam and the $7.00-ish Sony PlayStation Eye for fast capture. The Fosmon camera also seems to see in IR and thus may be usable with high-output IR LEDs for tracking points. The PS3 Eye camera is fast, and there are directions on the web for removing its IR filter to make it IR sensitive as well.

I’ve already got a couple of Logitech C930e cameras to cover the slower but higher-resolution (and visible light) part of the spectrum. When the budget recharges I’ll likely pick up a third (they’re also tripod compatible, which is nice) to better cover registration of objects in three-space. I’m also considering the Microsoft LifeCam HD-3000 as a middle-of-the-road option…much cheaper than the high-end Logitech, but likely faster as well (at 720p rather than 1080p resolution).

I’m really thinking that an array of several different cameras might be interesting…with faster, more plentiful cameras providing tracking and disambiguation, and higher-resolution cameras locking samples more tightly in position when their images are available.

I’m also wondering whether a visible timing device might help. I’m thinking of a Raspberry Pi-driven Gray-code counter facing in several directions and running at a decent clip, so that each camera can pull the timing information for its images directly from the picture taken.
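As a sketch of why a Gray counter is attractive here: consecutive counts differ in exactly one bit, so a frame captured mid-update can misread at most one bit (one count), rather than being wildly off the way a plain binary rollover can be. A minimal encode/decode pair (function names are mine):

```python
def to_gray(n: int) -> int:
    """Convert a binary count to its Gray-code representation."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Recover the binary count from a Gray code read off an LED bank."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

Consecutive values like `to_gray(11)` and `to_gray(12)` differ in a single bit, which is what makes a half-updated reading harmless.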

Add in some IR and/or visible LED targets powered by something small and capable, like a CR2032 battery or two, and I suspect that interesting things may be possible. Whether or not it works, it seems like a worthwhile challenge to take on, and the learning experience alone should be valuable.

If the cheap cameras work decently (I’m ordering one of each to try out) then I’ll probably pick up a group of them to work with. I’ll probably try to see if Malcolm can 3D print some sort of appropriate brackets for the cheap cameras as they do not have screw mounts (and a fixed mount would be nice to preserve calibrations between runs). First pass is likely to involve several cameras and OpenCV acquisition of data to track a single beacon.
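For that single-beacon first pass, the per-frame job is essentially finding the centroid of the brightest blob. A numpy sketch of that step (on real frames OpenCV's `cv2.threshold` plus `cv2.moments` would do the same work; the threshold value here is a guess):

```python
import numpy as np

def find_beacon(gray: np.ndarray, thresh: int = 200):
    """Return the (x, y) centroid of pixels brighter than thresh, or None."""
    ys, xs = np.nonzero(gray > thresh)
    if xs.size == 0:
        return None  # beacon not visible in this frame
    return float(xs.mean()), float(ys.mean())
```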

If that works out, I’ll likely move to fabricate a timing device that can provide the time information to the frames and see how that goes. I’d like to avoid the frame synchronization game that I saw one demo engage in where all cameras are wired into an electrical sync system. Embedding the timing information into each frame optically and using that to place them in sequence seems much better. I’d rather have a few extra cameras to fill in the gaps than run wires everywhere.

If all of the above goes well, I expect to move on to recording the data streams separately and doing the beacon registration offline. On-the-fly operation would be nice, but handling things separately seems easier and would let multiple computers share the recording process to ensure high frame rates and few drop-outs.

2 Replies to “More Motion Capture Thoughts…”

  1. My thinking on this is that when the cameras are positioned in a hexagon configuration, the lower resolution (640×480) should be fine. Each spot should be visible to three cameras at different angles, resulting in good accuracy.

    Additionally, since the sensors are RGB, one option would be to have each “active target” display a different color, as far separated from the others in RGB space as possible.
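    One way to realize “as far separated as possible” is to use corners of the RGB cube as the target palette and classify each detected blob by its nearest corner. A sketch (the six-color palette is just an example choice):

```python
import numpy as np

# Corners of the RGB cube (minus black and white) are mutually far apart.
PALETTE = np.array([
    [255, 0, 0], [0, 255, 0], [0, 0, 255],
    [255, 255, 0], [255, 0, 255], [0, 255, 255],
], dtype=float)

def classify(rgb) -> int:
    """Index of the palette color nearest to a sampled blob color."""
    d = np.linalg.norm(PALETTE - np.asarray(rgb, dtype=float), axis=1)
    return int(d.argmin())
```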

    The main challenges with this approach would be:
    1. Calibrating the cameras
    2. Converting the solved X/Y/Z spatial coordinates for the active targets into a skeletal model
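    Once challenge 1 yields a projection matrix per camera, the X/Y/Z solve that feeds challenge 2 is standard linear (DLT) triangulation. A two-view sketch (OpenCV's `cv2.triangulatePoints` does the same job; the matrices are assumed already calibrated):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Intersect two viewing rays: P1, P2 are 3x4 projection matrices,
    uv1, uv2 the target's image coordinates in each view."""
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # null space of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]
```

    With three or more cameras seeing the same spot, the same least-squares system just gains two rows per extra view, which is where the extra accuracy comes from.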

    My guess would be that companies like ipisoft have put extensive effort and tuning into their kinematic algorithms. I have every confidence we could replicate their work, but it would take significant hours to do so.

    1. Well…if a group of cameras works with existing software, that may be the answer.

      I’m a bit intrigued by the problem and suspicious that it isn’t as hard to get to the 80% point as it perhaps looks.

      I certainly don’t expect to build a commercial-grade product…but getting something that allows optical frame timing and easy first-order calibration would be an interesting item on its own. Something fun to play with…more of a coding and design exercise on the in-betweens than anything to take up my off Fridays.
