Category Archives: Topics

Posts on technical topics. The sub-category provides the specific area of interest.

Looking at 3D Location of Objects with Multiple Cameras

We tried doing some motion capture with a Kinect 2 today. The results were mixed and, overall, less than ideal.

I am thinking of using OpenCV to acquire images and use them to register locations in three-space. I currently have two high-quality cameras and a lower-quality one to play with… I expect that if things go well I’ll pick up a third higher-quality camera to work with, along with some hard-mounts for them in the basement. Ideally I’d love to be able to code up some decent motion capture functionality using cameras and reference marks on limbs.

A bit of experimentation will be required in order to get there though.

I am wondering whether a faster, lower resolution webcam might do better for this. There are some $20.00-apiece cameras with a USB-2 interface out there (Microsoft Lifecam-3000 or Logitech C270) as alternatives to the higher-end, USB-3, autofocus Logitech C930e. I’m wondering both whether the lower resolution and USB-2 may provide a higher frame rate, and whether fixed focus will stay consistent without periodic refocus hits.
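A quick way to compare candidate cameras would be to time a burst of frame grabs. Here is a minimal sketch of that measurement, assuming the OpenCvSharp wrapper for OpenCV from C#; the device index, resolution, and property names are illustrative and may vary by camera and library version.

```csharp
// Rough frame-rate check for one attached webcam, assuming the OpenCvSharp
// wrapper for OpenCV (device index, resolution and property names are
// illustrative and may differ by camera and library version).
using System;
using System.Diagnostics;
using OpenCvSharp;

class CameraRateCheck
{
    static void Main()
    {
        using var cap = new VideoCapture(0);                // first attached camera
        cap.Set(VideoCaptureProperties.FrameWidth, 640);    // request a modest resolution
        cap.Set(VideoCaptureProperties.FrameHeight, 480);

        using var frame = new Mat();
        const int target = 300;                             // frames to time
        int captured = 0;
        var sw = Stopwatch.StartNew();
        while (captured < target && cap.Read(frame) && !frame.Empty())
            captured++;
        sw.Stop();

        Console.WriteLine(
            $"{captured} frames in {sw.Elapsed.TotalSeconds:F2}s " +
            $"({captured / sw.Elapsed.TotalSeconds:F1} fps)");
    }
}
```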

Need to Learn More About CMake

I’ve been looking at using MongoDB from C++ and trying to build the MongoDB C and C++ driver code. So far, the build files that CMake has generated on my main development box (targeting Visual Studio 2017) have pulled in Cygwin header files and generated various other problems. I don’t really understand why the tool would be getting confused like this, but I want a better grasp of how CMake decides where to look for build files and configuration, and how to control that.

Having Cygwin and Visual Studio on the same machine should certainly be a workable configuration, so I expect there are ways to keep CMake from pulling in the wrong files.

Hoping to get to the point where I can cleanly build these drivers and get them working in some 64-bit Visual C++ code.

Giving up on MongoDB from C++ for now

I’ve tried building this thing using CMake and Visual Studio, but CMake seems to keep picking up Cygwin headers for the Visual Studio builds and I can’t seem to make things better. For now it’s not worth the pain. I’m probably going back to C# with C++/CLI to get this working: C++/CLI for the C APIs and C# for MongoDB access.

I’m certain that I could get this building with enough effort. The code looks reasonable, and I could put the projects together myself. I’m not all that inclined to go there, though, as I’d have to redo that work every time through. I’m not familiar enough with CMake to try modifying that part of the process.

OK… not quite giving up. I did find that CMake appears to be adding a Cygwin header folder to the generated projects; if I manually remove it, the build goes further. The default build configuration appears to be 32-bit, which is also an issue for me. Perhaps I’ll do a little more before I completely give up.

C++ MongoDB Driver Circles within Circles…

Looking at getting a bit of C++ file processing code done for some home sandbox tooling reasons. Grabbed the source code.

The source code asks for Boost as a polyfill for C++17 features (optional and something else) with MSVC 2017. Grabbed Boost. Now I’ve got to build Boost, and I’d really like to build the full kit if I’m going to build it at all. Grabbed zlib and libbz2. Now looking at any other dependencies needed to put together a reasonably complete Boost build locally.

Heading off to see the Captain Marvel movie and will continue with this (along with some presentation prep I need to do this weekend and some overflow work items I need to look into). Should be an interesting if busy weekend…

This is once again reminding me why C# and Java are so much more productive than C++ for many things. In C# I’d NuGet the MongoDB drivers, up-to-date versions of any supporting libraries would be pulled in as needed, and I’d be writing code in short order. I love C++ for its power and flexibility, but as a tool to get higher-level logic in place it is not holding up well…
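For comparison, the C# route really is about this much code. A minimal sketch, assuming the official MongoDB.Driver NuGet package; the connection string, database, and collection names are placeholders.

```csharp
// Minimal C# MongoDB access, assuming the official MongoDB.Driver NuGet
// package; connection string, database and collection names are placeholders.
using MongoDB.Bson;
using MongoDB.Driver;

class MongoQuickStart
{
    static void Main()
    {
        var client = new MongoClient("mongodb://localhost:27017");
        var db = client.GetDatabase("sandbox");
        var items = db.GetCollection<BsonDocument>("test");

        // Insert a document and read it back.
        items.InsertOne(new BsonDocument { { "name", "hello" }, { "value", 42 } });
        var first = items.Find(new BsonDocument()).FirstOrDefault();
        System.Console.WriteLine(first);
    }
}
```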

Add in building bjam to build Boost with, and then grabbing CMake to build the MongoDB drivers with… all while making sure that the build processes find the right compiler (I’ve had at least one run where my build grabbed the g++ compiler out of the path even though the Visual Studio tools were there as well).

Unity Investigations…

I’ve been splitting my experiments with Unity between this blog and pandamallet.com (where I put ‘creative’ things).

More graphical content and thoughts on game design possibilities have lived on the creative side while more technical, C# coding and Unity script coding bits have lived here.

I’m going to move to keeping everything in one place for convenience and accessibility. From here on, I’m going to put all of the game programming related content on the creative side and keep my career blog for more directly coding related items and similar things.

As pandamallet.com does not share with LinkedIn at this time, anyone who was watching these goings-on through LinkedIn notifications will need to pop over to pandamallet to see what’s up going forward. I will look at making sure that all of my sites (pandamallet, my main blog and my career blog) publish to my Twitter feed when something new is added, to make this (perhaps) a bit easier.

Unity Editor Activity and Digging Deeper

More after-hours digging into Unity (slowed down by some necessary after-hours, work-related items… probably going to continue for the balance of the week as we wrap up a tight sprint 🙂).

Cross Platform VR and Flat Display

I am coming to the conclusion that I need to dig deeper into the prefabs provided with the various VR SDKs that matter to me (Daydream, Vive and Cardboard). It appears that the different vendors implemented their own event systems, which aren’t necessarily interoperable. If we want to build code that plugs into any of these systems, we’ll need to address this.

I’m not currently sure how best to deal with this. Trying to build wrappers sounds nice for compatibility but has potential limitations. Changing the prefabs in-place provides more leverage but means potentially having to make the changes repeatedly as new versions appear with other features that we want. This won’t be resolved without some digging into the code and experimentation.
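One way the wrapper idea could look: game code talks to a small interface, and each SDK gets its own adapter behind it. This is just a sketch under that assumption; all of the names here are hypothetical and not any vendor's actual API.

```csharp
// A sketch of the wrapper idea: game code consumes a small interface, and each
// SDK gets its own adapter behind it. All names here are hypothetical.
using UnityEngine;

public interface IVrPointer
{
    // World-space ray for the current controller (or gaze) direction.
    Ray PointerRay { get; }
    // True on the frame the primary select control is pressed.
    bool SelectPressed { get; }
}

// A flat-screen/mouse fallback so the same game code runs without a headset.
// Daydream and Vive adapters would wrap their SDK prefabs the same way.
public class MousePointerAdapter : MonoBehaviour, IVrPointer
{
    public Ray PointerRay => Camera.main.ScreenPointToRay(Input.mousePosition);
    public bool SelectPressed => Input.GetMouseButtonDown(0);
}
```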

Unity Editing Support

I was recently playing with some asset generation scripting. Malcolm had commented that it would be nice to have a way to quickly create a room of any desired size with appropriate wall textures and texture scaling. I whipped up a very quick and simple script to take a wall, floor and ceiling prefab and build a complete room out of the set.

This worked quite nicely (though many details remain to be added, such as the texture management) but had no visibility in the editor. You added an invisible GameObject to the scene, set its parameters, and then when you ran the game you got a magically created room.

I was looking at the editor side options last night and it appears to be relatively straightforward to make the script do its thing in the editor as well as at game run time.
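As a rough illustration of what I have in mind (not the actual script), something along these lines with [ExecuteInEditMode] would build the room while editing as well as at run time; the prefab fields, sizing, and texture handling are placeholders.

```csharp
// Stripped-down sketch of a room-builder script. [ExecuteInEditMode] lets it
// run in the editor as well as in play mode. Prefab references are assigned in
// the inspector; sizes and texture handling are placeholders.
using UnityEngine;

[ExecuteInEditMode]
public class RoomBuilder : MonoBehaviour
{
    public GameObject floorPrefab;
    public GameObject wallPrefab;
    public GameObject ceilingPrefab;
    public float width = 4f;
    public float depth = 4f;
    public float height = 3f;

    // Build once when the component is enabled (in the editor or at run time).
    void OnEnable()
    {
        if (transform.childCount > 0 || floorPrefab == null || wallPrefab == null)
            return;

        Spawn(floorPrefab, Vector3.zero, Quaternion.identity);
        if (ceilingPrefab != null)
            Spawn(ceilingPrefab, Vector3.up * height, Quaternion.Euler(180f, 0f, 0f));

        // Four walls around the perimeter, rotated to face inward.
        Spawn(wallPrefab, new Vector3(0f, height / 2f, depth / 2f), Quaternion.identity);
        Spawn(wallPrefab, new Vector3(0f, height / 2f, -depth / 2f), Quaternion.Euler(0f, 180f, 0f));
        Spawn(wallPrefab, new Vector3(width / 2f, height / 2f, 0f), Quaternion.Euler(0f, 90f, 0f));
        Spawn(wallPrefab, new Vector3(-width / 2f, height / 2f, 0f), Quaternion.Euler(0f, -90f, 0f));
    }

    void Spawn(GameObject prefab, Vector3 localPos, Quaternion localRot)
    {
        var go = Instantiate(prefab, transform);
        go.transform.localPosition = localPos;
        go.transform.localRotation = localRot;
    }
}
```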

I’m going to play with some of this soon to see what I can put together. Being able to build more complicated, programmatically generated items in the editor would be a great capability to have available. Prefabs are fine for simple things but I’d rather have the option of generating a family of possible items from a script. Editor side visibility is a key part of making something like this useful.

Having fun playing with this stuff as work projects at the moment are in stabilization and thus mostly running down defects and back-filling missed requirements rather than building new architecture and the code to support it.

Unity and Multiple Platforms

I tried to get a single Unity project to build for multiple platforms over the weekend. The code I’m writing is probably best suited to a flat screen with mouse and keyboard quite honestly. That said, I’m interested in working with VR systems and thus intend it to run on the platforms that I have available. This gives me:

  • Flat screen with keyboard and mouse.
  • Google Daydream VR with Daydream controller.
  • HTC Vive with two Vive controllers.
  • Possibly Google Cardboard with no controller (perhaps a game controller as a stretch?).

Multiple Platforms from One Project

Sam had run across a couple of projects last week that claimed to work across platforms and sent me a link. Over the weekend I pulled the projects down and ran them on the Daydream without problems. They were a bit awkward to use as neither of them used the Daydream controller effectively (one seemed to be built as a Cardboard app, using the sight axis for selection).

I took the project that had been targeted on the Daydream and re-targeted it to the PC with a Vive headset attached. I got errors (don’t have the exact form) that suggested that I needed to load the Steam VR SDK in order to make things work. In the end I loaded the SDK from the asset store. This did allow the Vive headset to connect but did not display the information panels that the project was designed to show. At that point I shelved the project…I’ll probably return at some point to look deeper into the implementation, but for now I’ve had enough.

Switching Platforms in a Project (the hard way)

Later in the weekend I tried simply creating a project with a few simple assets and adding in the two SDKs. I was able to get the Daydream up and running (tried that first) by making the usual additions to the scene for that SDK and setting up the platform. To get the Vive running I had to delete all of the Daydream items from the scene and add in the usual Vive content. That too worked as expected.

The surprise came when I switched back to Daydream again. I continued to get errors indicating that the Vive input mapper JSON file was missing. It appears that something Vive-related was still attached to the project and looking for Vive-specific assets. This is another area where I may dig deeper at some point. For the moment I’m moving on.

Modular and Layered

I did dig through a good bit of content over the weekend (both video and online blogs/documentation) suggesting that Unity supports linking in external code assemblies. This leads me down the road to a more modular approach.

  • Provide multiple .NET Core assemblies that aren’t Unity-specific to implement the ‘business logic’ of the game (see the sketch after this list). This keeps the complex code (game rules, world generation and storage) and other such items in a place where they can be unit tested and managed as pure code assemblies.
  • Manage all Unity assets that aren’t target specific in free-standing packages that get bolted into a target specific frame at the last moment.
  • Build the SDK specific parts of the game as thin wrappers over these other assets. These will be asset packages and projects that contain only target specific items and user facing elements.
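A minimal sketch of what the first bullet might look like: a plain C# class with no UnityEngine dependency, compiled into its own assembly and then pulled into the Unity project (for example under Assets/Plugins). The type and member names are illustrative, not the actual Cluster code.

```csharp
// "Business logic" in a plain .NET assembly: no UnityEngine reference, so it
// can be unit tested and reused outside Unity. Names are illustrative only.
namespace ClusterCore
{
    public sealed class StarSystem
    {
        public string Name { get; }
        public double X { get; }
        public double Y { get; }
        public double Z { get; }

        public StarSystem(string name, double x, double y, double z)
        {
            Name = name; X = x; Y = y; Z = z;
        }

        // Pure game-rule helper: straight-line transit distance between systems.
        public double DistanceTo(StarSystem other)
        {
            double dx = X - other.X, dy = Y - other.Y, dz = Z - other.Z;
            return System.Math.Sqrt(dx * dx + dy * dy + dz * dz);
        }
    }
}
```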

I’m going to head down this road next and try structuring a new version of my Cluster game to support this approach. I started coding up the game generation part on Sunday and will likely try bolting a version of that into a VR frame as soon as it is complete enough to be usable.

Unity Event System Looks Like the Next Stop

A quick perusal of the documentation suggests that I’m going to need a better grasp of the Unity event system to get my handling of VR controller raycasting working as I’d like.

My stars (and anything else that wants to be responsive) will need to handle input events and thus detect when a ray touches the construct (and stops touching it). This should allow me to do the sort of ‘hover text’ I want. I’ll likely try to work this up tonight and get my inflating stars code working with the Daydream (and perhaps the Vive as well).
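The event-system piece of that is fairly small. A minimal sketch of the hover handling using Unity’s EventSystems interfaces; the raycaster that feeds these events (a PhysicsRaycaster on the camera for the mouse, or the SDK’s laser/gaze raycaster) still has to be wired up separately.

```csharp
// Minimal hover handling via Unity's event system. A raycaster (PhysicsRaycaster
// for mouse, or the VR SDK's pointer raycaster) must feed these events, and the
// object needs a Collider. The scale factor is just an example.
using UnityEngine;
using UnityEngine.EventSystems;

public class StarHover : MonoBehaviour, IPointerEnterHandler, IPointerExitHandler
{
    Vector3 baseScale;

    void Awake() => baseScale = transform.localScale;

    public void OnPointerEnter(PointerEventData eventData)
    {
        // Inflate the star while the pointer ray is over it.
        transform.localScale = baseScale * 2f;
    }

    public void OnPointerExit(PointerEventData eventData)
    {
        transform.localScale = baseScale;
    }
}
```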

I’m still wishing for a decent solution to modularity that would let me keep the common assets between the three platforms I’m interested in separate enough to make sharing them easy.

I’m currently maintaining three divergent versions of the code and assets with each configured for a different system. As the game rules implementation develops, this is going to become hard to support.

Unity flat screen mouse pointing is working

I just pushed an update to GitHub that detects mouse-over of stars in the game screen and temporarily bumps the size of the star up by a factor of eight (to show that you pointed at it). I’ve also got some other minor bits working, such as finding a script on an object given the GameObject.
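Not the code in the repository, but a rough sketch of those two pieces: raycasting from the mouse position, and pulling a script (a hypothetical StarInfo component here) off the hit GameObject.

```csharp
// Sketch of mouse-over detection plus "find a script on a GameObject" via
// GetComponent. StarInfo is a hypothetical component; the factor of eight
// matches the bump described in the post.
using UnityEngine;

public class StarInfo : MonoBehaviour
{
    public string StarName;
}

public class MouseStarPicker : MonoBehaviour
{
    StarInfo current;
    Vector3 originalScale;

    void Update()
    {
        var ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        StarInfo hitStar = null;
        if (Physics.Raycast(ray, out RaycastHit hit))
            hitStar = hit.collider.gameObject.GetComponent<StarInfo>();

        if (hitStar == current) return;

        // Restore the previously highlighted star, then inflate the new one.
        if (current != null) current.transform.localScale = originalScale;
        current = hitStar;
        if (current != null)
        {
            originalScale = current.transform.localScale;
            current.transform.localScale = originalScale * 8f;
        }
    }
}
```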

I need to add code to read in the star names file. I also want to make the DayDream pointer do this same thing to stars you point at. In the longer run, I expect to add a UI panel that displays additional information about the star when you point at it and may very well magnify the size of the star and any associated information at the same time.

Now on to seeing what I can see with the Daydream…

Got the Static Assets in Place…

Now I’m stitching in some game logic and properties related to game play. I still need to get the systems loaded from the static content I’ve added and start building some game play, but it feels like things are moving at a decent clip.

I’m also going to be looking at keeping some of the game logic in external, non-game assemblies. Being able to pull in external code that can be unit tested and perhaps even deployed in other areas is useful.

Hoping to get the basics wired in and then get back to interface work. I need to get the VR controls started. A user should be able to switch perspectives between several fixed camera spots with the controller. They need to be able to get a basic view of the state of the board from a distance and be able to view full details of a given star by pointing their controller at it.

  • View basic star information available to a given player from a distance.
  • Assemble a fleet (if fleet markers are available) from ships present at a star.
  • Task an existing fleet with going to another system.
  • Allocate this turn’s production to construction or research.
  • As a side effect of the above, move population from a planet surface to colony transports.
  • Check transit time from one star system to another.
  • See beacon area of effect for your existing beacons.