It struck me recently, while diving deep into TLS RFCs, that it would be kind of fun to build a VR random dungeon generator inspired by the random dungeon generator in the old AD&D Dungeon Master’s Guide.
It would be comparatively simple to code, would be a blast from the past, and would give an interesting sense of scale to some of those old ten-foot-wide corridors and massive rooms.
Not sure I’ll ever get around to putting in the necessary effort, but who knows, I might one day be able to walk the halls of an old school random dungeon of my own creation 🙂
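The core of the idea is just a table-driven random walk. A minimal sketch in Python, with an invented feature table standing in for the DMG’s actual charts (entries, weights, and names here are all illustrative):

```python
import random

# A heavily simplified nod to the DMG-style "what comes next" tables:
# starting in a corridor, roll periodically to pick the next feature.
# These entries and weights are illustrative, not the book's tables.
NEXT_FEATURE = [
    ("corridor continues", 6),
    ("door", 4),
    ("side passage", 3),
    ("passage turns", 3),
    ("chamber", 3),
    ("stairs", 1),
]

def roll_feature(rng):
    """Weighted roll against the feature table."""
    total = sum(weight for _, weight in NEXT_FEATURE)
    pick = rng.randint(1, total)
    for name, weight in NEXT_FEATURE:
        pick -= weight
        if pick <= 0:
            return name

def generate_dungeon(steps, seed=None):
    """Produce a sequence of dungeon features to lay out as geometry."""
    rng = random.Random(seed)
    return [roll_feature(rng) for _ in range(steps)]
```

Turning that feature list into actual VR geometry (and handling collisions when passages loop back on themselves) is where the real work would be.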
Continuing to read and watch Blender training information.
Getting a bit distracted by pulling in the Blender 2.8 source code. I’m now a bit curious about the internals and may let myself be distracted by that now and again. Certainly planning to see if I can run a local build on Chaos upstairs, though Chaos has path issues that I’ll need to resolve before I can do much on that front. One of the nicer things about having more than one build-capable machine around.
Looking at a few things in the immediate future
- Get text working. Hover text labels over stars that always face the viewer, showing at a minimum the name of the star.
- Build star gate model(s) in Blender, import them into Unity, and then place gates in the view.
- Build a simple information viewing panel in game and support star selection where the information for the selected star shows on the panel.
- Build a simple mechanism for saving and loading games. Need a persistence format for game data.
- Look into adding simple feedback sounds to the game. At least a noise when a selection is changed in the star display.
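The first item, viewer-facing labels, is mostly a rotation problem; in Unity it usually comes down to `transform.LookAt` on the camera each frame. The underlying math is just a yaw toward the viewer, sketched here in Python (function name and coordinate convention are my own, assuming Y-up):

```python
import math

def billboard_yaw(label_pos, camera_pos):
    """Yaw (radians, about the vertical Y axis) that turns a label at
    label_pos to face a camera at camera_pos. A yaw-only billboard keeps
    the text upright, which usually reads better in VR than a full
    look-at rotation that can tilt the label."""
    dx = camera_pos[0] - label_pos[0]
    dz = camera_pos[2] - label_pos[2]
    return math.atan2(dx, dz)
```

In-engine the same effect is a one-liner per frame, but knowing the math helps when the labels need to stay upright or keep a fixed screen size.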
Saw this here (Lorna shared the link).
This does sound like interesting tech (though tracking larger joints and body position seems even more interesting). I do not see how it can handle some of the core use-cases though.
Pretty much anything that requires fine manipulation (triggers, buttons, throttles and HOTAS rigs) would seem to be out of the picture. Being able to track gross hand motion seems possibly within reach. Being able to pick up a glass in VR with hand tracking and then spin it around to look at the far side seems extremely challenging.
I can see this working in the near term for things like fighting games where you punch an opponent. I can’t see it replacing controllers or similar items for fine motor interactions…hoping they’ll prove me wrong though.
Ok, interesting…Vive also seems to have some early-access hand tracking for the Vive and Vive Pro, mentioned here.
Game of life
Thinking that a VR game of life implementation might be fun and relatively straightforward. Perhaps on a sphere would be even more interesting…closed universe.
Also wondering whether a three dimensional version could be kludged together.
More of a fun toy than a real game, but perhaps fun to play with.
Need to think about how to map the quantized grid from a flat version onto a spherical layout. Might make sense to do something with continuous placement, with volumetric cells that bump aside any that are too close and attract cells near enough to aggregate. Would be similar, but a bit different.
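As a baseline before tackling the sphere, here is a minimal flat-grid generation step. It assumes a toroidal wrap-around as a stand-in for the closed universe; a true spherical layout would need a different cell topology, which is exactly the open question above:

```python
from collections import Counter

def life_step(alive, width, height):
    """One Game of Life generation on a width x height grid with
    toroidal wrap-around. `alive` is a set of (x, y) live cells."""
    counts = Counter()
    for (x, y) in alive:
        # Each live cell contributes one neighbor-count to the
        # eight cells around it (wrapping at the edges).
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    counts[((x + dx) % width, (y + dy) % height)] += 1
    # Standard Conway rules: birth on 3 neighbors, survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}
```

The set-based form only visits live cells and their neighbors, which scales nicely if the VR version ends up with sparse populations on a large surface.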
Playing with a setup where masses could be placed in three dimensional space, then items released and the simulation run forward.
Could be fun to play with the orbital mechanics issues.
Could move forward to a game where orbital combat is run through. Place vessels with real-ish engines and weapons. Implement UI and interaction to manage motion and allow planning and execution of maneuvers.
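The simulation core for this is small. A sketch of a single integration step for a body around one central mass, in Python with an invented function signature (a multi-mass version just sums the acceleration terms):

```python
import math

def step(pos, vel, mu, dt):
    """Semi-implicit Euler step for a body orbiting a central mass at
    the origin. mu is G*M of the central body; pos and vel are 2D
    tuples. Semi-implicit Euler holds orbits together far better than
    plain Euler, though a real game would likely want a higher-order
    symplectic integrator for long runs."""
    r = math.hypot(*pos)
    ax = -mu * pos[0] / r**3   # inverse-square gravity
    ay = -mu * pos[1] / r**3
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)   # update velocity first...
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)  # ...then position
    return pos, vel
```

For maneuver planning, each player burn would just be an impulse added to `vel` between steps, with the planner running the same integrator forward to draw the predicted track.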
Interesting way to play with VR UI approaches if taken moderately seriously.
The player is standing on the ground with a dark sky overhead. Controllers are used to cue weapons fire.
Inbounds are shown with location, trails showing previous track, and various information attached as they move…possibly current velocity and acceleration vectors.
Player points at a desired target and cues in a weapons system on that target (or perhaps targets sensors to gain more information).
Could be a fun thing to play with.
As part of building the front-end to the shared web side of the cluster game, I’m starting to lay out the RESTful interface to the database at the back end of the server side.
I’m looking at several main categories using query parameters to filter results on aggregate areas.
Returns a list of player names and ids. Details of a specific player can be retrieved (and changed) by accessing the player by player id as a sub-resource. Permissions do apply here…a normal player can only view or edit the details for their own information. An administrator can view and change information for any player.
- id (immutable after creation)
- name – Unique, human readable user name
- is email public
- password (settable only)
- date joined
- date last activity
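The access rule above is simple enough to pin down in a sketch. This is illustrative Python, not the eventual PHP; the field and function names are my own, and the settable-only password field is modeled by stripping it from anything returned:

```python
PLAYER_RECORD = {
    # Illustrative record matching the field list above.
    "id": 7,                      # immutable after creation
    "name": "example_player",     # unique, human readable
    "is_email_public": False,
    "password": "(hashed)",       # settable only, never returned
    "date_joined": "2019-07-01",
    "date_last_activity": "2019-08-15",
}

def visible_fields(record):
    """Shape of a GET response: everything except the password."""
    return {k: v for k, v in record.items() if k != "password"}

def may_access(requester_id, is_admin, target_id):
    """A normal player may only view or edit their own record;
    an administrator may access any player."""
    return is_admin or requester_id == target_id
```

Putting the rule in one helper like this keeps the per-endpoint handlers from each reinventing the permission check.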
Returns a list of games in the database. With query parameters this can return games containing only a selected player or players, games that are in progress and perhaps other subsets of all stored games. Details of a specific game may be retrieved by game id. For normal players, only the details that their player identity has access to may be retrieved. For developers all details are visible and subject to modification.
- player ids
One entry for each turn in the game so far. Everything is open if the game is completed; otherwise only information visible to this user is present.
- turn number
- units list
- unit id
- unit type
ship type, planetary resource type, population
- location type
space or system or planet
- player id
- technology list
- player id
- technology id
- moves list
- current turn
- moves submitted
(may be part of the turns container)
- final scores if completed
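Pulling the outline above together, a single game resource might serialize along these lines (a Python literal standing in for the JSON; every key name here is an assumption, not a final wire format):

```python
# Illustrative shape of one game resource, mirroring the outline above.
game = {
    "id": 42,
    "player_ids": [7, 12],
    "turns": [
        {
            "turn_number": 1,
            "units": [
                # unit_type covers ship type, planetary resource
                # type, or population; location_type is space,
                # system, or planet.
                {"unit_id": 1, "unit_type": "scout",
                 "location_type": "system", "player_id": 7},
            ],
            "technology": [
                {"player_id": 7, "technology_id": 3},
            ],
        },
    ],
    "moves": {"current_turn": 2, "moves_submitted": [7]},
    "final_scores": None,  # populated only once the game completes
}
```

For in-progress games the server would filter `turns` and `units` down to what the requesting player can see before returning anything.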
Options that relate to new games being started. Global lists (start information, admiral information and such). Generally these will only impact newly created games. Selected items may affect all games.
Audit log recording changes made using administrative privileges.
We’ve framed out the ‘Cluster’ game graphically (at least to a first approximation). Currently cluster generation is handled in Unity, and there are no game rules and no saved games.
I’m in the process of roughing out the MySQL database structure that will hold persistent game data and convey it between players, and looking at coding up the PHP needed to manage this data and implement game turn logic and playing field generation.
Once the basics are sketched out, I expect to remove the generation logic from the Unity code-base and switch things over to use the layouts and turn management provided by the site-based PHP code.
I’m expecting to see the database side layout break out into:
- System Data. Things that control game operation but do not change game to game or player to player.
- Player Data. Information about the players that is not related to a particular game. Name, picture or icon, other particulars.
- Game Data. Information that is a base part of a particular game run but does not change turn by turn.
- Turn Data. Turn and move data for the game.
- Star name list
- Planet types list
- Configuration such as number of stars per game and other similar items.
- Player ID
- Player Name
- Player picture or icon
- Games played
- Stars Information
Table with color, location
- Planets Information
Table with type, properties, max pop, orbit
- User Move
Fleets from -> to. Scouts from -> to.
- Resource Production
Build Ship here, Build planetary resource here, Spend on research. Form and break fleets.
- Combat Resolution
Combat results. Remove destroyed ships. Resolve ownership changes.
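The three turn-data groupings above imply a resolution order: user moves first, then resource production, then combat. A sketch of that pipeline in Python (the eventual implementation would be the site-side PHP; every function name and the state shape here are placeholders):

```python
def apply_moves(state, moves):
    # User moves: fleets and scouts transfer from -> to.
    for fleet, destination in moves.items():
        state["fleets"][fleet] = destination
    return state

def run_production(state):
    # Resource production: build ships, build planetary resources,
    # spend on research, form and break fleets (placeholder).
    return state

def resolve_combat(state):
    # Combat resolution: remove destroyed ships, resolve
    # ownership changes (placeholder).
    return state

def resolve_turn(state, moves):
    """Run one turn in the order implied by the tables above."""
    state = apply_moves(state, moves)
    state = run_production(state)
    state = resolve_combat(state)
    state["turn"] += 1
    return state
```

Keeping the three phases as separate steps should also make it easier to log an auditable record of what happened each turn.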
Configuring Eye cameras here.
Playing with the evaluation version of this.
Malcolm and I spent part of Friday experimenting with an evaluation version of a commercial motion capture package. I now have six PS3 Eye cameras and my VR machine has enough USB-3 ports and controllers installed to run them.
Our initial setup had three or four cameras attached and used the room lighting (pretty bright, ceiling mounted LED lights) for illumination. Initial results weren’t great but over the rest of the day we learned a few things.
- Need a higher-contrast, less cluttered background. The bookcases behind the area we were using were better when covered with a piece of white fabric. The tan rug on the floor was less of a problem when the model put on black socks.
- Hands really aren’t handled by the package. No big surprise here as hands and fingers are rather small targets for these cameras.
- More lighting is better. I added two diffused studio lights I have around and a high intensity three light halogen light bar and things became more precise.
- A larger and more diffuse calibration target seemed to work better.
- Sliding the calibration beacon along the floor with periodic stops seemed to work better than touching it to the floor. This makes all floor level reference spots about equal (when I was touching it down, it took a little work to make sure we had the lowest spot in each arc).
- Aligning the reference person with the human figure in the images at the start helped quite a bit. The tool didn’t seem to do a very good job of this without manual help.
By the end of the session, we seemed to be getting a pretty good capture of arms and legs. Feet could still be a bit twitchy.
Malcolm is going to look at 3D printing mounting clips to attach the PS3 Eye cameras to light stands for more stability.
I ordered a couple of spare cameras to ensure that we don’t come up short if any of them fail and a couple of PCIe USB-3 cards to supplement USB controller availability.
Overall things turned out pretty well, and I think we learned a bit more about making motion capture without dedicated beacons work decently. The price of the package is high enough that even a short-term license would require a substantial amount of motion capture work queued up to make sense.
I’ve proven that the cameras can run on Ubuntu and the RPi. I found a page with classic Unix/Linux style install instructions. I’ll be working on getting this set up on my biggest RPi machine and taking a look at building code to read from multiple cameras and stream the data to a host. If I can run two cameras on a single RPi then I should be set.
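For the camera-to-host streaming, the simplest thing that could work is length-prefixed frames over TCP. A stdlib-only Python sketch of the framing, leaving the actual capture and socket plumbing aside (the header layout is my own assumption, not any standard protocol):

```python
import struct

# Fixed network-byte-order header in front of each raw frame:
# camera id (1 byte), frame number (4 bytes), payload length (4 bytes).
# The host reads the header, then reads exactly `length` frame bytes.
HEADER = struct.Struct("!BII")

def pack_frame(cam_id, frame_no, payload):
    """Wrap raw frame bytes for transmission to the host."""
    return HEADER.pack(cam_id, frame_no, len(payload)) + payload

def unpack_frame(blob):
    """Recover (cam_id, frame_no, payload) from a received message."""
    cam_id, frame_no, length = HEADER.unpack_from(blob)
    payload = blob[HEADER.size:HEADER.size + length]
    return cam_id, frame_no, payload
```

The camera id and frame number in the header are what would let the host line up frames from two cameras on one RPi (and across several RPis) for triangulation later.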
I’ll probably also look at doing something similar for the RPi cameras on the RPi-2 machines. That might add a couple of additional cameras to my set.
I’ll then move on to building a simple LED beacon and look at some simple camera calibration code on an appropriate host.
Trying to build the OpenCV package on one of my RPi 3 machines. I think I’m running into heat issues. I’ve switched to a board that I can keep open and has a heat sink on it. Hoping that may be enough. I’ve also dropped the build scripts onto github to make them broadly available.
Pulling packages on this machine now.
Now I’ve got a fan blowing. I see from some web pages that the RPi is supposed to warn and throttle when the temperature spikes…didn’t see that with my black and silver RPi 3…it seemed to halt completely after a short span. Keeping this one cool up front and we’ll see how this goes.
Interesting…it looks as if the scipy build is eating all available physical memory on the RPi board (882 MB of 923 total). Nothing moving on the machine…not even the mouse cursor.
Ah, a reference here to bumping the swap file size for the RPi.
Monday morning dawns and it appears that I have a Raspberry Pi loaded up with an installed build of OpenCV. No time this morning to test this, but tonight I’ll run through a few simple tests and then probably run the same process on one of my Intel NUC machines (should go faster and easier) to get a decently powerful system up and working with the same version.