Cluster – RESTful APIs

As part of building the front end for the shared web side of the Cluster game, I’m starting to lay out the RESTful interface to the database at the back end of the server side.

I’m looking at several main resource categories, using query parameters to filter results on the aggregate (collection) endpoints. A rough sketch of the players resource follows the list below.

  • players
    Returns a list of player names and ids. Details of a specific player can be retrieved (and changed) by accessing the player by player id as a sub-resource. Permissions do apply here…a normal player can only view or edit the details for their own information. An administrator can view and change information for any player.
    • id (immutable after creation)
    • name – Unique, human readable user name
    • email
    • is email public
    • password (settable only)
    • active
    • date joined
    • date last activity
  • games
    Returns a list of games in the database. With query parameters this can return games containing only a selected player or players, games that are in progress and perhaps other subsets of all stored games. Details of a specific game may be retrieved by game id. For normal players, only the details that their player identity has access to may be retrieved. For developers all details are visible and subject to modification.
    • id
    • player ids
    • turns
      One entry for each turn in the game so far. All entries are open if the game is completed; otherwise only information visible to this user is present.
      • turn number
      • units list
        • unit id
        • unit type
          ship type, planetary resource type, population
        • location type
          space or system or planet
        • location
        • player id
      • technology list
        • players
          • player id
          • technology id
          • level
          • investment
      • moves list
        • players
    • current turn
    • moves submitted
      (may be part of the turns container)
    • active
    • created
    • completed
    • final scores if completed
    • stars
  • options
    Options that relate to new games being started. Global lists (start information, admiral information and such). Generally these will only impact newly created games. Selected items may affect all games.
  • audit
    Audit log recording changes made using administrative privileges.
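
To make the shape of the players resource concrete, here’s a minimal sketch in Python/Flask. The actual server code will be PHP, and the route names, the header-based auth stand-ins, and the exact field set here are my own placeholders rather than settled design.

    # Hypothetical sketch of the players resource; the real back end will be
    # PHP over MySQL. Routes, fields and the auth stand-ins are placeholders.
    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)

    # Stand-in data store; the real data lives in MySQL.
    PLAYERS = {
        1: {"id": 1, "name": "alice", "email": "alice@example.com",
            "email_public": False, "active": True},
    }

    def current_player_id():
        # Placeholder: a real implementation would use the session/auth layer.
        return int(request.headers.get("X-Player-Id", "0"))

    def is_admin():
        return request.headers.get("X-Admin") == "1"

    @app.route("/players")
    def list_players():
        # Collection endpoint: names and ids only.
        return jsonify([{"id": p["id"], "name": p["name"]}
                        for p in PLAYERS.values()])

    @app.route("/players/<int:pid>", methods=["GET", "PUT"])
    def player_detail(pid):
        if pid not in PLAYERS:
            abort(404)
        # Normal players may only view or edit their own record; admins any.
        if pid != current_player_id() and not is_admin():
            abort(403)
        if request.method == "PUT":
            updates = request.get_json() or {}
            updates.pop("id", None)        # id is immutable after creation
            PLAYERS[pid].update(updates)
        record = dict(PLAYERS[pid])
        record.pop("password", None)       # password is settable only
        return jsonify(record)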

Cluster Game Moves

Overview

  • Turn based multi-player
  • Internet based shared state
  • Once all player moves for a turn are complete, the turn is executed (see the sketch after this list)
  • Strategic level
  • Non-interactive combat resolution at the end of the turn
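
The “all moves in, then execute” rule boils down to a simple gate on the set of submitted players; a minimal sketch, with all names my own placeholders rather than anything settled:

    # Sketch of the turn gate: the turn only executes once every player in
    # the game has submitted moves. Field names here are illustrative.
    def resolve_turn(game):
        # Placeholder for movement, production and combat resolution.
        pass

    def try_execute_turn(game):
        if set(game["moves_submitted"]) == set(game["player_ids"]):
            resolve_turn(game)
            game["current_turn"] += 1
            game["moves_submitted"] = []

    game = {"player_ids": [1, 2], "moves_submitted": [1, 2], "current_turn": 3}
    try_execute_turn(game)     # all moves are in, so the turn advances to 4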

Units

  • Scout ships
    Small, unarmed vessels with unlimited range used to gather information.
  • Colony transports
    Large vessels that only exist to transport a group of colonists to a new star. Once the colonists debark, the transport ceases to exist as its parts are used to support the new colony.
  • Escort
    Small interstellar armed vessel
  • Assault ship
    Medium sized interstellar armed vessel
  • Capital ship
    Largest interstellar armed vessel
  • Defense platform
    Small defense system dedicated to the defense of a single planet. Combat equivalent to an escort.
  • System defense boat
    Medium sized system defense vessel. Equivalent to an assault ship in combat. May be used in defense of any planet in a single system.
  • Factory
    Production facility on a given planet. Increases output of one unit of population based on the level of factory technology that has been developed.
  • Robotic Industry
    Production facility that operates without an accompanying population unit.

Basic Turn Mechanics

  • Spend on building ships
  • Spend on building facilities
  • Transfer population to colony transports
  • Form ships into fleets and break ships out of fleets (if at a star)
  • Spend on technology buys

Movement Mechanics

  • Scout ships can move independently and over unlimited distance
  • Unarmed ships that are not escorted may be destroyed on entry into a new system
  • Without hyperwave technology, fleets always continue to their destination system
  • Fleets never encounter each other in open space
  • Fleets move at the current guidance velocity for their side
  • Fleets are destroyed if they move beyond guidance range of all of their side’s beacons
  • Ships that aren’t scouts can only move as part of a fleet
  • Every fleet must have an admiral assigned to it
  • One ship with an admiral can be a fleet
  • When a fleet enters its destination system it ceases movement

Combat Mechanics

  • If fleets from different players are at the same star then combat will occur
  • Combat ends when the system contains only ships from one side
  • Planets with defense platforms may be reduced by attacking and destroying those platforms, or may be interned.
  • Production on planets that aren’t interned becomes available to the controlling player after some delay
  • The owning player may perform planetary bombardment to reduce planetary population. This eliminates one unit of population capacity for every unit of population removed. Likely only relevant if you expect the enemy to reclaim the system soon.

Looking at database structure for Cluster

We’ve framed out the ‘Cluster’ game graphically (at least to a first approximation). Currently the cluster generation is handled in Unity and there are no game rules and no save games.

I’m in the process of roughing out the MySQL database structure that will hold persistent game data and convey it between players, and looking at coding up the PHP code needed to manage this data and implement game-turn logic and playing-field generation.

Once the basics are sketched out, I expect to remove the generation logic from the Unity code-base and switch things over to use the layouts and turn management provided by the site-based PHP code.

I’m expecting to see the database-side layout break out into the following areas (a rough schema sketch follows the Turn Data list below):

  • System Data. Things that control game operation but do not change game to game or player to player.
  • Player Data. Information about the players that is not related to a particular game. Name, picture or icon, other particulars.
  • Game Data. Information that is a base part of a particular game run but does not change turn by turn.
  • Turn Data. Turn and move data for the game.

System Data

  • Star name list
  • Planet types list
  • Configuration such as number of stars per game and other similar items.

Player Data

  • Player ID
  • Player Name
  • Player picture or icon
  • Games played
  • Wins/Losses

Game Data

  • Stars Information
    Table with color, location
  • Planets Information
    Table with type, properties, max pop, orbit

Turn Data

  • User Move
    Fleets from -> to. Scouts from -> to.
  • Resource Production
    Build Ship here, Build planetary resource here, Spend on research. Form and break fleets.
  • Combat Resolution
    Combat results. Remove destroyed ships. Resolve ownership changes.
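
As a first pass at the split above, something like the following. This is a throwaway sketch using sqlite3 so it runs standalone; the real target is MySQL, and every table and column name here is a provisional guess:

    # Rough sketch of the data split, using sqlite3 so it runs anywhere;
    # the real target is MySQL and all names here are provisional.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    -- Player data: not tied to any particular game.
    CREATE TABLE player (
        id      INTEGER PRIMARY KEY,
        name    TEXT UNIQUE NOT NULL,
        icon    BLOB,
        games   INTEGER DEFAULT 0,
        wins    INTEGER DEFAULT 0,
        losses  INTEGER DEFAULT 0
    );
    -- Game data: fixed for the life of one game run.
    CREATE TABLE game (
        id        INTEGER PRIMARY KEY,
        created   TEXT,
        completed TEXT,
        active    INTEGER DEFAULT 1
    );
    CREATE TABLE star (
        id      INTEGER PRIMARY KEY,
        game_id INTEGER NOT NULL REFERENCES game(id),
        color   TEXT,
        x REAL, y REAL, z REAL
    );
    CREATE TABLE planet (
        id      INTEGER PRIMARY KEY,
        star_id INTEGER NOT NULL REFERENCES star(id),
        type    TEXT,
        max_pop INTEGER,
        orbit   INTEGER
    );
    -- Turn data: one row per player action per turn.
    CREATE TABLE move (
        id        INTEGER PRIMARY KEY,
        game_id   INTEGER NOT NULL REFERENCES game(id),
        turn      INTEGER NOT NULL,
        player_id INTEGER NOT NULL REFERENCES player(id),
        kind      TEXT,    -- move fleet, build, research, form/break fleet
        detail    TEXT     -- JSON blob describing the specifics
    );
    """)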

Experimenting with Commercial MoCap

Configuring Eye cameras here.

Playing with the evaluation version of this.

Malcolm and I spent part of Friday experimenting with an evaluation version of a commercial motion capture package. I now have six PS3 Eye cameras and my VR machine has enough USB-3 ports and controllers installed to run them.

Our initial setup had three or four cameras attached and used the room lighting (pretty bright, ceiling mounted LED lights) for illumination. Initial results weren’t great but over the rest of the day we learned a few things.

  • Need a higher-contrast, less cluttered background. The bookcases behind the area we were using were better when covered with a piece of white fabric. The tan rug on the floor was less of a problem when the model put on black socks.
  • Hands really aren’t handled by the package. No big surprise here as hands and fingers are rather small targets for these cameras.
  • More lighting is better. I added two diffused studio lights I have around and a high intensity three light halogen light bar and things became more precise.
  • A larger and more diffuse calibration target seemed to work better.
  • Sliding the calibration beacon along the floor with periodic stops seemed to work better than touching it to the floor. This makes all floor level reference spots about equal (when I was touching it down, it took a little work to make sure we had the lowest spot in each arc).
  • Aligning the reference person with the human figure in the images at the start helped quite a bit. The tool didn’t seem to do a very good job of this without manual help.

By the end of the session, we seemed to be getting a pretty good capture of arms and legs. Feet could still be a bit twitchy.

Malcolm is going to look at 3D printing mounting clips to attach the PS3 Eye cameras to light stands for more stability.

I ordered a couple of spare cameras to ensure that we don’t come up short if any of them fail and a couple of PCIe USB-3 cards to supplement USB controller availability.

Overall things turned out pretty well and I think we learned a bit more about making motion capture without dedicated beacons work decently. The price of the package is high enough that even a short-term license would only make sense if we had a substantial amount of motion capture work to get done.

Onwards to Linux C++ OpenCV and Capturing Some Frames

I’ve proven that the cameras can run on Ubuntu and the RPi. I found a page with classic Unix/Linux-style install instructions. I’ll be working on getting this set up on my biggest RPi machine and taking a look at building code to read from multiple cameras and stream the data to a host. If I can run two cameras on a single RPi then I should be set.
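
The eventual code will likely be C++, but the shape of what I’m after looks roughly like this quick Python/OpenCV sketch; the host address, port, and length-prefixed JPEG wire format are placeholder choices of mine:

    # Sketch: grab frames from two cameras and stream them to a collection
    # host. Host/port and the wire format are placeholders.
    import socket
    import struct

    import cv2

    HOST, PORT = "192.168.1.100", 5000   # placeholder collection host

    cams = [cv2.VideoCapture(i) for i in (0, 1)]
    sock = socket.create_connection((HOST, PORT))

    try:
        while True:
            for cam_id, cam in enumerate(cams):
                ok, frame = cam.read()
                if not ok:
                    continue
                ok, jpg = cv2.imencode(".jpg", frame)
                if not ok:
                    continue
                data = jpg.tobytes()
                # Header: camera id byte plus payload length, then the JPEG.
                sock.sendall(struct.pack("!BI", cam_id, len(data)) + data)
    finally:
        for cam in cams:
            cam.release()
        sock.close()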

I’ll probably also look at doing something similar for the RPi cameras on the RPi-2 machines. That might add a couple of additional cameras to my set.

I’ll then move on to building a simple LED beacon and look at some simple camera calibration code on an appropriate host.
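
For the calibration piece, I expect something along the lines of OpenCV’s standard chessboard routine; a minimal sketch, where the 9x6 pattern, the 25 mm square size, and the calib/ image directory are all assumptions:

    # Sketch of single-camera calibration from saved chessboard images.
    # Assumes images exist in calib/ and a 9x6 inner-corner target.
    import glob

    import cv2
    import numpy as np

    PATTERN = (9, 6)                     # inner corners per row and column
    SQUARE_MM = 25.0                     # physical square size

    # 3D reference points for one view of the flat target.
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

    obj_points, img_points = [], []
    for path in glob.glob("calib/*.jpg"):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print("RMS reprojection error:", rms)
    print("camera matrix:\n", K)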

Trying to build the OpenCV package on one of my RPi 3 machines. I think I’m running into heat issues. I’ve switched to a board that I can keep open and has a heat sink on it. Hoping that may be enough. I’ve also dropped the build scripts onto github to make them broadly available.

Pulling packages on this machine now.

Now I’ve got a fan blowing. I see from some web pages that the RPi is supposed to warn and throttle when the temperature spikes…didn’t see that with my black and silver RPi 3…it seemed to halt completely after a short span. Keeping this one cool up front and we’ll see how this goes.

Interesting…it looks as if the scipy build is eating all available physical memory on the RPi board (882 MB of 923 total). Nothing moving on the machine…not even the mouse cursor.

Ah, a reference here to bumping the swap file size for the RPi (raising CONF_SWAPSIZE in /etc/dphys-swapfile).

Monday morning dawns and it appears that I have a Raspberry Pi that is loaded up with an installed build of OpenCV. No time this morning to test this, but tonight I’ll run through a few simple tests and then probably run the same process on one of my Intel NUC machines (it should go faster and easier) to get a decently powerful system up and working with the same version.

PS3 Eye Cameras Came in and RPi Machines are Going

Yesterday my five additional $6.00 PS3 Eye 60 FPS cameras arrived. I’ve pulled my RPi machines and my two Ubuntu-based NUC machines out and started checking out the cameras and making sure the systems are up-to-date.

RPi 3 machines and various supporting parts.

The NUCs generally get more use and thus needed less updating. I found that using Cheese I was immediately able to run the cameras on both the Ubuntu NUC machines and the RPi boxes. Even the older RPi-2 machines ran the cameras…I’m not sure that the performance on those will be sufficient to make them useful though. They do both have RPi internal cameras connected, so those may be of interest. I’ve got to look at how to access those cameras from C++ code and see what sort of performance they have (and test them out to see what they can do).

The RPi-2 machines with attached cameras…

Here are the cameras (the one sample I bought for testing and the five others that just came in). I may pick up a couple of spares to make sure that I have what I need in the longer run…at $6.00 each that isn’t a big deal. The beer pong balls look like the perfect size for a light diffuser on an LED…I’ll have to rig up a few LED/resistor/battery sets to try that out sometime soon.

Cameras and diffusers waiting to be unpacked.

On Sunday I’ll look at getting programmatic control of these cameras. If I can acquire images from two cameras at the same time and stream the data out over a socket, I’ll be ready to rock with these things.

Still to be managed is getting the cameras rigged to mount on light stands and getting three more light stands with ball heads to set them on. If the RPi-2 boards look interesting with their integral cameras then I may need some additional mounting options, but initially three to six cameras should be a good start.

Got the *.fbx SDK installed and my RPi machines out and booted up…

I’ve installed the filmbox SDK on my home machines so that I am in a position to play with programmatic manipulation of *.fbx files. This feeds into the motion capture experimentation I’m doing and should be generally useful at times.
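
As a first exercise, loading a file and walking the node tree should expose the skeleton. A sketch of how I understand the Python side of the SDK works (the file name is a placeholder, and I haven’t verified this against the installed version yet):

    # Sketch: load an .fbx file with the FBX Python SDK and print the node
    # tree (e.g. to locate skeleton bones). File name is a placeholder.
    import fbx

    manager = fbx.FbxManager.Create()
    manager.SetIOSettings(fbx.FbxIOSettings.Create(manager, fbx.IOSROOT))

    importer = fbx.FbxImporter.Create(manager, "")
    if not importer.Initialize("take01.fbx", -1, manager.GetIOSettings()):
        raise RuntimeError("import failed")

    scene = fbx.FbxScene.Create(manager, "scene")
    importer.Import(scene)
    importer.Destroy()

    def walk(node, depth=0):
        print("  " * depth + node.GetName())
        for i in range(node.GetChildCount()):
            walk(node.GetChild(i), depth + 1)

    walk(scene.GetRootNode())
    manager.Destroy()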

I also dug out my three RPi 3+ machines last night and got one of them booted up and connected to a PS3 Eye camera…without actually taking images…just observing the power light go on and no other negative effects. I’ll likely move further over the weekend to try getting actual image capture working there.

I did find a GitHub project that claims to be able to run one of these 60 FPS cameras from an RPi. This project appears to be motion-sensing related, but I expect it can act as a source of sample code for my longer-term purposes…

Sample Cameras for MoCap Experiments Came In

My sample cameras came in. I ordered two cheap webcams, a Sony PlayStation 3 Eye Camera and a Fosmon USB 6 LED 1.2 Megapixel USB PC Webcam as these were both under ten dollars and looked potentially interesting.

The Fosmon camera looks too slow and low on image quality to be useful for my purposes here so I’m shelving that one. The PS3 Eye camera is challenging to interface to a PC, but at $6.00 per camera, I’m inclined to spend some effort here. There do appear to be Linux drivers out there and I’m going to look at an RPi-mediated option with these (I’m willing to invest $30.00 for five more cameras to give this a try).

The other option that was presented as viable is the Logitech C922x Pro Stream Webcam. I have two Logitech C930e cameras currently…these won’t run 60 FPS but have comparable image quality and a wider field of view. At $70.00 each, I won’t be picking up six C922x cameras any time soon, but I may purchase one or two if all goes well to see what they are capable of. They’re almost certainly better cameras than the PS3 Eye devices, but rather pricey.

I’m also picking up some CR2032 batteries and battery holders along with a box of LEDs and some 19mm ‘ping pong balls’ to try as diffusers.

This is all an experiment, so it may come to nothing more than an interesting diversion. The commercial motion capture options are far too expensive to be viable as a hobby thing and I’m expecting to have some fun putting this together and seeing where it can go.

I would be particularly happy to see an RPi solution work out, as having one Pi board for each pair of cameras (or even for each camera at a stretch) with an Ethernet back-haul to a full-fledged PC for processing would be a flexible and relatively affordable solution with pretty good scalability to more cameras as well.

3D Asset Export and Sharing

There have been concerns expressed about getting from a point cloud to imported animation data tied to a skeletal model.

I accept that this may not be trivial…particularly tying a set of points in a moving cloud to control points for bones in a skeletal animation.

Collada

It does appear that there are defined interchange formats (I’ll have to look and see if these are supported by Unity and Blender) that may make this more straightforward. Being able to export the skeleton for an animation and then inhale it into a processing application seems helpful. Being able to tag points in a cloud for consumption by Unity seems even more helpful.

I looked here and got a pointer to Collada, which appears to be developed by Khronos at this point. This appears to be a general-purpose, open, XML-based standard for interchange of 3D information. There is also an OpenCollada code archive out there…that link appears to be dead…but it is on GitHub and has a project home page.

Wikipedia article here.

FBX

Filmbox is a more proprietary option with information at Wikipedia, Blender, More Blender, and AutoDesk (who created it and still owns the format). The AutoDesk link connects to SDK downloads that support C++ and Python manipulation of these files.

More Motion Capture Thoughts…

Looking at cameras and possible target parts.

The solutions I see out there tend to use active or passive targets attached to the actor and tracked by multiple cameras. Most seem to favor faster cameras rather than higher quality cameras.

I’m looking at the $8.00-ish Fosmon USB 6 LED 1.2 Megapixel USB PC Webcam and the $7.00-ish Sony PlayStation Eye for fast capture. The Fosmon camera also seems to see in IR and thus may be usable with high-output IR LEDs for tracking points. The PS3 Eye camera is fast and there are directions on the web for removing the IR filter to make it IR sensitive as well.

I’ve got a couple of Logitech C930e cameras already to cover the slower but higher-resolution (and visible light) part of the spectrum. When the budget recharges I’ll likely pick up a third (they’re also tripod compatible, which is nice) to better cover registration of objects in three-space. I’m also considering the Microsoft LifeCam HD-3000 as a middle-of-the-road option…much cheaper than the high-end Logitech, but likely faster than it as well (at 720 rather than 1080 resolution).

I’m really thinking that an array of several different cameras might be interesting…with faster and more plentiful cameras providing tracking and disambiguation and higher resolution cameras locking samples more tightly in position when images are available.

I’m also wondering whether a visible timing device might help. Thinking of an RPi-driven Gray-code counter facing in several directions and running at a decent clip, so that each camera can pull the timing information for its images from the picture itself.
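
The counter side of that is simple enough; a sketch driving a bank of LEDs from an RPi, where the pin numbers and tick rate are placeholder choices. The point of the Gray code is that adjacent counts differ in exactly one LED, so a frame caught mid-transition is off by at most one tick:

    # Sketch: drive 8 LEDs as a Gray-code frame counter on an RPi.
    # Pin assignments and tick rate are placeholder choices.
    import time

    import RPi.GPIO as GPIO

    PINS = [4, 17, 27, 22, 5, 6, 13, 19]   # one GPIO pin per LED (BCM numbering)
    TICK = 1.0 / 120                        # counter rate; faster than the cameras

    GPIO.setmode(GPIO.BCM)
    for pin in PINS:
        GPIO.setup(pin, GPIO.OUT)

    n = 0
    try:
        while True:
            gray = n ^ (n >> 1)             # binary-reflected Gray code
            for bit, pin in enumerate(PINS):
                GPIO.output(pin, bool(gray & (1 << bit)))
            n = (n + 1) % (1 << len(PINS))
            time.sleep(TICK)
    finally:
        GPIO.cleanup()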

Add in some IR and/or visible LED targets powered by something small and capable like a CR2032 battery or two and I suspect that interesting things may be possible. Whether it works or not it seems like an interesting challenge to take on and the learning experience alone should be interesting.

If the cheap cameras work decently (I’m ordering one of each to try out) then I’ll probably pick up a group of them to work with. I’ll probably try to see if Malcolm can 3D print some sort of appropriate brackets for the cheap cameras as they do not have screw mounts (and a fixed mount would be nice to preserve calibrations between runs). First pass is likely to involve several cameras and OpenCV acquisition of data to track a single beacon.
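
For that single-beacon first pass, thresholding for the brightest blob should be enough; a sketch (using the OpenCV 4 findContours return convention, and a threshold value that would need tuning):

    # Sketch: track one bright LED beacon per frame by thresholding and
    # taking the centroid of the largest bright blob. Threshold is a guess.
    import cv2

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            blob = max(contours, key=cv2.contourArea)
            m = cv2.moments(blob)
            if m["m00"] > 0:
                x, y = m["m10"] / m["m00"], m["m01"] / m["m00"]
                print(f"beacon at ({x:.1f}, {y:.1f})")
        cv2.imshow("mask", mask)
        if cv2.waitKey(1) == 27:            # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()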

If that works out, I’ll likely move to fabricate a timing device that can provide the time information to the frames and see how that goes. I’d like to avoid the frame synchronization game that I saw one demo engage in where all cameras are wired into an electrical sync system. Embedding the timing information into each frame optically and using that to place them in sequence seems much better. I’d rather have a few extra cameras to fill in the gaps than run wires everywhere.

If all of the above goes well, I expect to move to looking at recording data streams separately and then doing the beacon registration offline. On the fly operation would be nice, but the power of handling things separately seems likely to be easier and allow multiple computers to get involved in the recording process to ensure high frame rates and few drop-outs.