We’ve framed out the ‘Cluster’ game graphically (at least to a first approximation). Currently the cluster generation is handled in Unity and there are no game rules and no save games.
I’m in the process of roughing out the MySQL database structure that will hold persistent game data and convey it between players, and I’m looking at coding up the PHP needed to manage this data, implement the game turn logic, and generate the playing field.
Once the basics are sketched out, I expect to remove the generation logic from the Unity code base and switch things over to use the layouts and turn management provided by the site-based PHP code.
I’m expecting to see the database layout break out into the following (a rough schema sketch comes after the list):
System Data. Things that control game operation but do not change game to game or player to player.
Player Data. Information about the players that is not related to a particular game. Name, picture or icon, other particulars.
Game Data. Information that is a base part of a particular game run but does not change turn by turn.
Turn Data. Turn and move data for the game.
Star name list
Planet types list
Configuration such as number of stars per game and other similar items.
Player picture or icon
Stars information table with color and location
Planets information table with type, properties, max population, and orbit
User moves: fleets from -> to; scouts from -> to
Resource production: build a ship here, build a planetary resource here, spend on research. Form and break fleets.
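Roughly what I have in mind for the game and turn side, sketched out as tables. This is only a thinking aid: the table and column names are placeholders rather than the final schema, and I’m using sqlite3 here as a convenient stand-in for the eventual MySQL definitions.

```python
# Rough sketch of a few of the game-side tables. Names are placeholders;
# sqlite3 stands in for the MySQL database that will actually hold this.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stars (
    star_id  INTEGER PRIMARY KEY,
    game_id  INTEGER NOT NULL,        -- which game run this star belongs to
    name     TEXT,                    -- drawn from the star name list
    color    TEXT,
    x REAL, y REAL, z REAL            -- location within the cluster
);

CREATE TABLE planets (
    planet_id INTEGER PRIMARY KEY,
    star_id   INTEGER NOT NULL REFERENCES stars(star_id),
    type      TEXT,                   -- drawn from the planet types list
    max_pop   INTEGER,
    orbit     INTEGER
);

CREATE TABLE moves (
    move_id   INTEGER PRIMARY KEY,
    game_id   INTEGER NOT NULL,
    turn      INTEGER NOT NULL,
    player_id INTEGER NOT NULL,
    kind      TEXT,                   -- e.g. 'move_fleet', 'move_scout', 'build_ship'
    from_star INTEGER,
    to_star   INTEGER
);
""")
conn.commit()
```

The real layout will grow System and Player tables as well; the point here is just the split between per-game data (stars, planets) and per-turn data (moves).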
Malcolm and I spent part of Friday experimenting with an evaluation version of a commercial motion capture package. I now have six PS3 Eye cameras and my VR machine has enough USB-3 ports and controllers installed to run them.
Our initial setup had three or four cameras attached and used the room lighting (pretty bright, ceiling mounted LED lights) for illumination. Initial results weren’t great but over the rest of the day we learned a few things.
We need a higher-contrast, less cluttered background. The bookcases behind the area we were using were better when covered with a piece of white fabric, and the tan rug on the floor was less of a problem once the model put on black socks.
Hands really aren’t handled by the package. No big surprise here as hands and fingers are rather small targets for these cameras.
More lighting is better. I added two diffused studio lights I had around plus a high-intensity, three-light halogen light bar, and things became more precise.
A larger and more diffuse calibration target seemed to work better.
Sliding the calibration beacon along the floor with periodic stops seemed to work better than touching it to the floor. This makes all floor level reference spots about equal (when I was touching it down, it took a little work to make sure we had the lowest spot in each arc).
Aligning the reference person with the human figure in the images at the start helped quite a bit. The tool didn’t seem to do a very good job of this without manual help.
By the end of the session, we seemed to be getting a pretty good capture of arms and legs. Feet could still be a bit twitchy.
Malcolm is going to look at 3D printing mounting clips to attach the PS3 Eye cameras to light stands for more stability.
I ordered a couple of spare cameras to ensure that we don’t come up short if any of them fail and a couple of PCIe USB-3 cards to supplement USB controller availability.
Overall things turned out pretty well and I think we learned a bit more about making motion capture without dedicated beacons work decently. The package is priced high enough that even a short-term license would only make sense if we had a substantial amount of motion capture to get done.
I’ve proven that the cameras can run on Ubuntu and the RPi. I found a page with classic Unix/Linux-style install instructions. I’ll be working on getting this set up on my biggest RPi machine and taking a look at building code to read from multiple cameras and stream the data to a host. If I can run two cameras on a single RPi then I should be set.
I’ll probably also look at doing something similar for the RPi cameras on the RPi-2 machines. That might add a couple of additional cameras to my set.
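The capture-and-stream piece I have in mind looks roughly like this. It’s a sketch in Python with OpenCV rather than whatever I end up writing for real, and the camera indices, host address, and port are placeholders for my setup.

```python
# Sketch: read from two cameras on an RPi and stream JPEG frames to a host.
import socket
import struct
import cv2

HOST, PORT = "192.168.1.10", 5000        # placeholder address of the processing PC

caps = [cv2.VideoCapture(0), cv2.VideoCapture(1)]
for cap in caps:
    cap.set(cv2.CAP_PROP_FPS, 60)        # the PS3 Eye will do 60 FPS at 640x480

sock = socket.create_connection((HOST, PORT))
try:
    while True:
        for cam_id, cap in enumerate(caps):
            ok, frame = cap.read()
            if not ok:
                continue
            ok, jpg = cv2.imencode(".jpg", frame)
            if not ok:
                continue
            data = jpg.tobytes()
            # Simple framing: camera id, payload length, then the JPEG bytes.
            sock.sendall(struct.pack("!BI", cam_id, len(data)) + data)
finally:
    sock.close()
    for cap in caps:
        cap.release()
```

If JPEG compression on the Pi turns out to be the bottleneck, raw frames over the wire or a lighter codec would be the next thing to try.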
I’ll then move on to building a simple LED beacon and look at some simple camera calibration code on an appropriate host.
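For the calibration piece, OpenCV’s stock chessboard routine is probably where I’ll start. A rough sketch, assuming a 9x6 inner-corner chessboard and a directory of captured images (both assumptions, since I haven’t settled on a target yet):

```python
# Rough intrinsic calibration sketch using OpenCV's chessboard routines.
import glob
import cv2
import numpy as np

BOARD = (9, 6)                               # inner corners of the chessboard
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points, shape = [], [], None
for path in glob.glob("calib/*.jpg"):        # placeholder capture directory
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    shape = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

err, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, shape, None, None)
print("reprojection error:", err)
print("camera matrix:\n", mtx)
```

An LED beacon waved through the shared view of several cameras could then help with the extrinsic side (working out where each camera sits relative to the others).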
Trying to build the OpenCV package on one of my RPi 3 machines, I think I’m running into heat issues. I’ve switched to a board that I can keep open and that has a heat sink on it; hoping that may be enough. I’ve also dropped the build scripts onto GitHub to make them broadly available.
Pulling packages on this machine now.
Now I’ve got a fan blowing. I see from some web pages that the RPi is supposed to warn and throttle when the temperature spikes…didn’t see that with my black and silver RPi 3…it seemed to halt completely after a short span. Keeping this one cool up front and we’ll see how this goes.
Interesting…it looks as if the scipy build is eating all available physical memory on the RPi board (882 MB of 923 total). Nothing moving on the machine…not even the mouse cursor.
Ah, there’s a reference here to bumping up the swap file size on the RPi.
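To keep an eye on this while the build grinds along, a little watcher like the one below does the job; it just reads the standard Linux thermal zone and /proc/meminfo entries that Raspbian exposes.

```python
# Watch CPU temperature and available memory while a long build is running.
import time

def cpu_temp_c():
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0

def mem_available_mb():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) // 1024
    return None

while True:
    print("temp: %.1f C   available: %s MB" % (cpu_temp_c(), mem_available_mb()))
    time.sleep(10)
```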
Monday morning dawns and it appears that I have a raspberry pi that is loaded up with an installed build of OpenCV. No time this morning to test this but tonight I’ll run through a few simple tests and then probably run the same process on one of my Intel NUC machines (should go faster and easier) to get a decently powerful system up and working with the same version.
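The simple tests will probably amount to something like this: confirm the module imports, report the version, and grab a single frame (camera index 0 is an assumption about where the PS3 Eye shows up).

```python
# Quick sanity check for the freshly built OpenCV install.
import cv2

print("OpenCV version:", cv2.__version__)

cap = cv2.VideoCapture(0)      # assumes the PS3 Eye enumerates as device 0
ok, frame = cap.read()
cap.release()

if ok:
    print("Captured a frame:", frame.shape)
    cv2.imwrite("test_frame.jpg", frame)
else:
    print("Could not read from the camera")
```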
Yesterday my five additional $6.00 PS3 Eye 60 FPS cameras arrived. I’ve pulled out my RPi machines and my two Ubuntu-based NUC machines and started checking out the cameras and making sure the systems are up to date.
The NUCs generally get more use and thus needed less updating. Using Cheese, I was immediately able to run the cameras on both the Ubuntu NUC machines and the RPi boxes. Even the older RPi-2 machines ran the cameras, though I’m not sure their performance will be sufficient to make them useful. They do both have RPi internal cameras connected, so that may be of interest; I’ve got to look at how to access those cameras from C++ code and see what sort of performance they can deliver.
Here are the cameras (the one sample I bought for testing and the five others that just came in). I may pick up a couple of spares to make sure that I have what I need in the longer run…at $6.00 each that isn’t a big deal. The beer pong balls look like the perfect size for a light diffuser on an LED…I’ll have to rig up a few LED/resistor/battery sets to try that out sometime soon.
On Sunday I’ll look at getting programmatic control of these cameras. If I can acquire images from two cameras at the same time and stream the data out over a socket, I’ll be ready to rock with these things.
Still to be managed is getting the cameras rigged to mount on light stands and getting three more light stands with ball heads to set them on. If the RPi-2 boards look interesting with their integral cameras then I may need some additional mounting options, but initially three to six cameras should be a good start.
I’ve installed the Filmbox (FBX) SDK on my home machines so that I am in a position to play with programmatic manipulation of *.fbx files. This feeds into the motion capture experimentation I’m doing and should be generally useful at times.
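As a first experiment, loading a file and walking its node tree (which is where a skeleton’s bones live) seems like the place to start. A minimal sketch using the SDK’s Python bindings; the file name is a placeholder.

```python
# Minimal sketch: open an .fbx file and print its node hierarchy using the
# Autodesk FBX Python bindings. The file name is a placeholder.
import fbx

manager = fbx.FbxManager.Create()
manager.SetIOSettings(fbx.FbxIOSettings.Create(manager, fbx.IOSROOT))

importer = fbx.FbxImporter.Create(manager, "")
if not importer.Initialize("capture_take.fbx", -1, manager.GetIOSettings()):
    raise RuntimeError("failed to open the file")

scene = fbx.FbxScene.Create(manager, "scene")
importer.Import(scene)
importer.Destroy()

def walk(node, depth=0):
    # The skeleton (and everything else) shows up as nodes in this tree.
    print("  " * depth + node.GetName())
    for i in range(node.GetChildCount()):
        walk(node.GetChild(i), depth + 1)

walk(scene.GetRootNode())
manager.Destroy()
```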
I also dug out my three RPi 3+ machines last night and got one of them booted up and connected to a PS3 Eye camera…without actually taking images, just observing that the power light came on with no other negative effects. I’ll likely move further over the weekend to try getting actual image capture working there.
I did find a github project that claims to be able to run one of these 60 FPS cameras from an RPi. This project appears to be motion sensing related, but I expect it can act as a source of sample code for my longer term purposes…
The Fosmon camera looks too slow and low on image quality to be useful for my purposes here so I’m shelving that one. The PS3 eye camera is challenging to interface to a PC, but at $6.00 per camera, I’m inclined to spend some effort here. There do appear to be Linux drivers out there and I’m going to look at a RPi mediated option with these (I’m willing to invest $30.00 for five more cameras to give this a try).
The other option that was presented as viable is the Logitech C922x Pro Stream webcam. I have two Logitech C930e cameras currently…these won’t run 60 FPS but have comparable image quality and a wider field of view. At $70.00 each, I won’t be picking up six C922x cameras any time soon, but I may purchase one or two if all goes well to see what they are capable of. They’re almost certainly better cameras than the PS3 Eye devices, but rather pricey.
I’m also picking up some CR2032 batteries and battery holders along with a box of LEDs and some 19mm ‘ping pong balls’ to try as diffusers.
This is all an experiment, so it may come to nothing more than an interesting diversion. The commercial motion capture options are far too expensive to be viable as a hobby thing and I’m expecting to have some fun putting this together and seeing where it can go.
I would be particularly happy to see an RPi solution work out, as having one Pi board for each pair of cameras (or even for each camera at a stretch) with an Ethernet back-haul to a full-fledged PC for processing would be a flexible and relatively affordable solution, with pretty good scalability to more cameras as well.
There have been concerns expressed about getting from a point cloud to imported animation data tied to a skeletal model.
I accept that this may not be trivial…particularly tying a set of points in a moving cloud to control points for bones in a skeletal animation.
It does appear that there are defined interchange formats (I’ll have to look and see whether these are supported by Unity and Blender) that may make this more straightforward. Being able to export the skeleton for an animation and then inhale it into a processing application seems helpful. Being able to tag points in a cloud for consumption by Unity seems even more helpful.
I looked here and got a pointer to Collada, which appears to be developed by Khronos at this point. It appears to be a general-purpose, open, XML-based standard for interchange of 3D information. There is also an OpenCollada code archive out there…the original link appears to be dead, but the project lives on GitHub and has a project home page.
Filmbox is a more proprietary option with information at Wikipedia, Blender, More Blender, and Autodesk (who created the format and still owns it). The Autodesk link connects to SDK downloads that support C++ and Python manipulation of these files.
The solutions I see out there tend to use active or passive targets attached to the actor and tracked by multiple cameras. Most seem to favor faster cameras rather than higher quality cameras.
I’m looking at the $8.00-ish Fosmon USB 6 LED 1.2 Megapixel USB PC Webcam and the $7.00-ish Sony PlayStation Eye for fast capture. The Fosmon camera also seems to see in IR and thus may be usable with high-output IR LEDs for tracking points. The PS3 Eye camera is fast, and there are directions on the web for removing its IR filter to make it IR sensitive as well.
I’ve got a couple of Logitech C930e cameras already to cover the slower but higher-resolution (and visible light) part of the spectrum. When the budget recharges I’ll likely pick up a third (they’re also tripod compatible, which is nice) to better cover registration of objects in three-space. I’m also considering the Microsoft LifeCam HD-3000 as a middle-of-the-road option…much cheaper than the high-end Logitech, but likely faster as well (at 720p rather than 1080p resolution).
I’m really thinking that an array of several different cameras might be interesting…with faster and more plentiful cameras providing tracking and disambiguation and higher resolution cameras locking samples more tightly in position when images are available.
I’m also wondering whether a visible timing device might help. I’m thinking of an RPi-driven Gray-code counter facing in several directions and running at a decent clip, so that each camera can pull the timing information for its images straight from the picture taken.
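The nice property of a Gray counter is that adjacent values differ by exactly one bit, so a frame caught mid-transition can only be off by one count. A quick sketch of what the Pi would be doing; the GPIO pin numbers, bit count, and tick rate here are arbitrary placeholders, and the RPi.GPIO library is assumed.

```python
# Sketch of an RPi-driven Gray-code counter displayed on a bank of LEDs.
import time
import RPi.GPIO as GPIO

PINS = [17, 27, 22, 23, 24, 25, 5, 6]     # one LED per bit; placeholder pins
TICK = 1.0 / 120.0                        # counter rate, faster than the cameras

GPIO.setmode(GPIO.BCM)
for pin in PINS:
    GPIO.setup(pin, GPIO.OUT)

def to_gray(n):
    # Binary-reflected Gray code: successive values differ in a single bit.
    return n ^ (n >> 1)

try:
    count = 0
    while True:
        gray = to_gray(count % (1 << len(PINS)))
        for bit, pin in enumerate(PINS):
            GPIO.output(pin, bool(gray & (1 << bit)))
        count += 1
        time.sleep(TICK)
finally:
    GPIO.cleanup()
```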
Add in some IR and/or visible LED targets powered by something small and capable like a CR2032 battery or two and I suspect that interesting things may be possible. Whether it works or not it seems like an interesting challenge to take on and the learning experience alone should be interesting.
If the cheap cameras work decently (I’m ordering one of each to try out) then I’ll probably pick up a group of them to work with. I’ll probably try to see if Malcolm can 3D print some sort of appropriate brackets for the cheap cameras as they do not have screw mounts (and a fixed mount would be nice to preserve calibrations between runs). First pass is likely to involve several cameras and OpenCV acquisition of data to track a single beacon.
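The first pass at beacon tracking doesn’t need to be fancy: threshold the brightest blob in each frame and take its centroid. A rough sketch with OpenCV; the threshold value and camera index are guesses to be tuned once the LED diffusers exist.

```python
# First-pass beacon tracking: centroid of the brightest blob in each frame.
import cv2

cap = cv2.VideoCapture(0)                 # placeholder camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] > 0:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print("beacon at (%.1f, %.1f)" % (cx, cy))
    cv2.imshow("mask", mask)
    if cv2.waitKey(1) == 27:              # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

Per-camera pixel coordinates like these, plus the calibration data, are what would eventually get triangulated into 3D beacon positions.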
If that works out, I’ll likely move to fabricate a timing device that can provide the time information to the frames and see how that goes. I’d like to avoid the frame synchronization game that I saw one demo engage in where all cameras are wired into an electrical sync system. Embedding the timing information into each frame optically and using that to place them in sequence seems much better. I’d rather have a few extra cameras to fill in the gaps than run wires everywhere.
If all of the above goes well, I expect to move to recording the data streams separately and then doing the beacon registration offline. On-the-fly operation would be nice, but handling things separately seems easier and would allow multiple computers to get involved in the recording process to ensure high frame rates and few drop-outs.
This weekend, Malcolm and Sam came over for a day to move our various Unity investigations forward.
Malcolm made some very nice pieces of furniture and architectural bits. I think the end result has much more feeling of place and I’m moving forward from there.
Sam was working on his own project and looking at getting his motion working properly on Daydream with the Daydream controller.
I added some interactivity to my prototype. Attaching a ‘stick’ to the controller and hanging a small sphere with a collider on its end makes it straightforward to select objects by touching them with the stick. It took a surprisingly long time to find the appropriate code to implement haptic feedback when a star is touched. I’m still looking at adding a short positional sound cue on touch as well.
The code lives on my GitHub account here. It supports Vive only (I think) at the moment; it certainly supports Vive best. At some point I’ll likely look at making this work with Daydream and flat screens.
I grabbed some textures from around the house and yard and slapped them on some random spheres and capsules. Those are here, and there are separate branches for Daydream, Vive, and flat screen (master). The Vive version looks much better than the Daydream version, which is to be expected.
I spent some more time chopping up texture tiles. Not a lot of time to clean them up though. I really need to better understand the impacts of the various extra texture functions that are available. I think there are some options missing from the default material such as displacement maps that can genuinely move vertices. Lots to learn along the way and getting there with the Vive is pretty cool.
The basement is largely ready to go and the machines are loaded with pretty much what is needed. Now I think I’ll bodge together a bit of script and scene and make sure everything is good to go with the Vive headset.