Software Process

I watched a few conference sessions talking about software process yesterday evening.

Agile

There was one discussion of agile that I really liked. I’ve seen quite a few parts of agile processes that I think add value, but I’m not convinced that the overall processes map well to the sorts of large, embedded software projects I’ve generally been involved with.

Technical Debt

Another session made some very good points related to dealing with technical debt. The presenter had some very interesting thoughts on using source code control system information to direct refactoring efforts. It makes sense that modules with significant complexity and lots of ‘touches’ are good candidates for clean-up. He also made the excellent point that attacking issue counts will focus everyone’s attention on the small, low-risk, low-reward items that can run down the count quickly. Dealing with a thousand minor naming issues will do little to improve tangible code quality. Addressing a single, large snarl of complex interactions may result in huge improvements. I long ago realized that uncritical use of metrics can derail a team faster than anything else.

Functional Programming

There was also a short functional programming session that seemed decent. It still didn’t address my biggest pain points with functional programming, though.

The presenter made a decent case for the merits and limitations of functional approaches. I tend to buy them for ‘business logic’ type work.

The place I run into trouble with functional (and to a lesser extent the JVM languages in general) relates to numerics work.

I’ve spent a chunk of time processing images and signals. I still don’t see how one can reasonably implement something like noise reduction or sharpening of a reasonable size image (say 5000 x 5000 RGB pixels) in a functional language. Immutable arrays would seem to leave the developer with a lose/lose/lose set of choices:

  • Direct approach – Process each pixel in series and create a new image array with the updated pixel. Repeat this 5000×5000 times, copying the array at each step (see the sketch just after this list).
    This obviously fails even in a garbage-collected environment, as the process of copying megabytes of data for each pixel update will kill you. It does mean that access to adjacent pixels should be reasonably fast, as reads will be no different than in a procedural environment.
  • Partitioned Tree approach – Build your large array as a tree structure under the covers with (say) 32 elements per leaf node. This was the approach suggested by the presenter in the talk.
    This seems to be the worst of all possible worlds. Read access to each pixel requires traversing several levels of pointer indirection. If 32 pixels in a row are updated, we generate 31 ‘dead’ copies of the segment along the way. Locality of reference becomes a mess, as adjacent items in the ‘array’ may be anywhere in memory.
  • Process and Reassemble approach – Run through the entire source image and generate a change list to be applied. Once you’ve finished with the image, generate a new array with all of the specified changes in place.
    This will potentially generate a list of tens of millions of update entries in memory. I’m also not sure how one implements ‘create a new array based on this array with these updates’ in an immutable environment. I guess this is the bottom line of all of the functional coding I’ve run across: the functional environment always seems to assume that there is procedural ‘magic’ to make the ends meet. I suspect that the answer would involve writing the image processing primitives in a ‘real’ language and then exposing them as unitary operations on the functional side. This rather strongly suggests that functional languages will always be a specialty item and not the ‘main course’.
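
To put rough numbers on the direct approach above, here is a hypothetical C++ sketch of my own (not anything from the talk) of the copy-per-update pattern. At one byte per channel, a 5000 x 5000 RGB image is roughly 75 MB, and copying it for each of the 25 million pixel updates implies on the order of a couple of petabytes of memory traffic for a single pass.

    // Illustrative only: the naive 'immutable' update that copies the whole
    // image for every pixel written. Image and with_pixel are made-up names.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    using Image = std::vector<std::uint8_t>;  // 5000*5000*3 bytes, row-major RGB

    // Returns a fresh copy of src with one pixel changed: an O(N) copy
    // (~75 MB here) to perform a 3-byte update, ~25 million times per pass.
    Image with_pixel(const Image& src, std::size_t offset,
                     std::uint8_t r, std::uint8_t g, std::uint8_t b) {
        Image dst = src;  // full copy of the backing store
        dst[offset]     = r;
        dst[offset + 1] = g;
        dst[offset + 2] = b;
        return dst;
    }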

Weekend Update with Blender and Unity

Pushing forward with Blender 2.8 and Unity, with more detailed comments here.

Creative Things

Looking to be a varied and busy winter and spring. I’m hoping to move my Unity and Blender knowledge forward significantly. I want to get Cluster to a point where it is more a game and less a sandbox for VR experimentation.

Work

Work looks like it is going to be a wild ride as well, as I step into a cyber-security role in a big way. I’ve got to finish defining the network and local security design for the product and generate sufficient documentation to convince the FDA that we’ve done our due diligence. It should be doable, but I’m expecting it to keep me very busy.

OpenSSL

I’m looking at getting a Visual Studio 2019 debug build of OpenSSL together locally as well so that I can look into some functionality that I want/need to understand better.

In particular, the ‘envelope’ functionality, which provides encryption at rest with multi-recipient access by encrypting a one-time symmetric key under each recipient’s public key, could solve a variety of interesting problems.
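
If I’m reading the documentation correctly, this lives in OpenSSL’s EVP_Seal*/EVP_Open* calls. Here is a rough sketch of the sealing side as I currently understand it (the helper name and buffer handling are my own, with error checking mostly omitted):

    // Sketch of OpenSSL envelope encryption: the library generates a one-time
    // symmetric key, encrypts the data with it (AES-256-CBC here), and wraps
    // that key under each recipient's public key.
    #include <openssl/evp.h>
    #include <vector>

    bool seal_demo(EVP_PKEY** pubk, int npubk,              // recipient keys
                   const unsigned char* plaintext, int ptlen,
                   std::vector<std::vector<unsigned char>>& wrapped_keys,
                   std::vector<unsigned char>& iv,
                   std::vector<unsigned char>& ciphertext) {
        EVP_CIPHER_CTX* ctx = EVP_CIPHER_CTX_new();
        iv.resize(EVP_MAX_IV_LENGTH);
        std::vector<unsigned char*> ek(npubk);
        std::vector<int> ekl(npubk);
        wrapped_keys.resize(npubk);
        for (int i = 0; i < npubk; ++i) {
            wrapped_keys[i].resize(EVP_PKEY_size(pubk[i]));  // worst-case size
            ek[i] = wrapped_keys[i].data();
        }
        if (!EVP_SealInit(ctx, EVP_aes_256_cbc(), ek.data(), ekl.data(),
                          iv.data(), pubk, npubk))
            return false;
        ciphertext.resize(ptlen + EVP_MAX_BLOCK_LENGTH);
        int len = 0, total = 0;
        EVP_SealUpdate(ctx, ciphertext.data(), &len, plaintext, ptlen);
        total += len;
        EVP_SealFinal(ctx, ciphertext.data() + total, &len);
        total += len;
        ciphertext.resize(total);
        for (int i = 0; i < npubk; ++i)
            wrapped_keys[i].resize(ekl[i]);  // trim to actual wrapped length
        EVP_CIPHER_CTX_free(ctx);
        return true;
    }

The matching EVP_Open* calls on the receiving side take a single recipient’s private key plus that recipient’s wrapped copy of the symmetric key and recover the plaintext.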

I need Windows (and ideally bcryptlib) based versions of this functionality that interoperate with the OpenSSL version if possible. Being able to build some sample code and then step through it with the Visual Studio debugger would help quite a bit.

The End of One Year and the Beginning of Another

The last year has been rather busy: changing jobs, coming up to speed at Dräger, Unity programming, motion capture, and a variety of other pursuits.

Work

The last few months have been largely consumed by the tail end of my time at KMC Systems and coming up to speed at Dräger. I’m now stepping into a cyber-security role on our existing monitoring product. This is an area I’ve had an interest in for some time, but until recently the medical device world largely tried to meet its security needs with an external firewall and little internal security on the protected network.

The customer base and the FDA appear to be rapidly becoming far more aware of computer security issues. I tend to put this down to the rash of recent incidents where medical data is encrypted by criminals and a payment demanded for release of the keys. This hits organizations in the pocketbook and impacts patient care at the same time (making it much more visible to these organizations than leaks of personally identifying information). We have rapidly moved from a universe where computer security was viewed as a nuisance to one where it is seen as a requirement.

I’ve done much reading and research over the last few weeks to back-fill any gaps in my knowledge I’m aware of and to locate tangible, authoritative sources for things I know but hadn’t previously been able to back up. I feel like I’m just about over the steep part of the learning curve now…so the next busy stretch will be writing up what I’ve pulled together.

I miss the people I worked with at KMC, but I’m much happier with the challenges I’m being presented with here than I was helping out with the tail end of Newton development. Now that I’m finding a bit of breathing room, I need to drop an email to some of my friends back at KMC. I know I’m not good at staying in touch, but I intend to work on being better…

VR

Work on VR coding has been pretty well shelved since the beginning of last summer. This is mostly down to the job change and various bits of being busy leading up to that.

I think I’m finally at a point where I can get back to work on that front. I expect to spend some time learning Blender 2.8 first. This should help me make more interesting items to incorporate into my Unity games.

Once I’ve gotten to where I’m happy with an initial level of Blender competency, I’ll switch back to working on the Cluster game (see more details on pandamallet and cluster-1 in my GitHub account). I finally uploaded the latest version of the game code over the holiday…I hadn’t realized that I had done that much work without pushing code, but I expect to keep things more in sync now.

I expect to get back to working with 3D tracking of optical beacons with multiple cameras sometime in the future, but probably not until I’ve got much more done with Cluster.

I am clearing the decks in the basement to free up a larger working area for room-scale VR. I’m hoping to have most of the back half of the finished part of the basement cleared and the working boundaries expanded appropriately.

Lots more to play with on the VR front, but I’m going to try to limit my distractions in order to get Cluster to a playable point before shooting off in another direction.

Cybersecurity and Cryptography

I’ve been getting more deeply involved in the cybersecurity and cryptography end of things in the last few weeks.

Did some serious work looking into current best practices for password management. Found that the bcrypt algorithm I had been familiar with was long ago superseded (no surprise there) and that there is a hash iteration algorithm that can be used to bump up the work involved in computing an off-the-shelf HMAC to levels where it is suited for use as a password hash (PBKDF2 and here).

I’ve been looking at TLS and related technologies. In the past I’ve tended to treat them as black box components. I’m digging a bit deeper on a few fronts now.

I knew that elliptic curve algorithms were available in the TLS cipher suites but had not realized that they were in active use. The last time I looked at elliptic curve algorithms, the community was viewing them with suspicion after the Dual_EC_DRBG fiasco. I think that the reduced computational cost of processing them may have also lent an air of insecurity to them. At this point it sounds as if they’ve passed muster and are in serious use. I picked up a book (Modern Cryptography and Elliptic Curves: A Beginner’s Guide) to get a better handle on the underlying mathematics and will be taking a closer look on a broader scale.

I’m setting up my Raspberry Pi controllers (at least a few of them) as TLS/DTLS test endpoints. I loaded and built OpenSSL on them over the weekend and will be coding up some samples to play with in the evenings this week. I’ve got machines ranging from the Pi 2 to the Pi 4, so they should provide a nice range of performance for testing.

TLS on TCP

I expect to initially put together some simple TLS over TCP code to make sure I’ve got everything working properly and that my certs are set up correctly.
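
Something along these lines is what I have in mind for the first smoke test: a minimal sketch assuming OpenSSL 1.1.x, with a made-up endpoint name and port and most error handling dropped.

    // Minimal blocking TLS-over-TCP client using OpenSSL's connect BIO.
    // "raspberrypi.local:4433" and "ca.pem" are placeholders for the test rig.
    #include <openssl/ssl.h>
    #include <openssl/err.h>
    #include <cstdio>

    int main() {
        SSL_CTX* ctx = SSL_CTX_new(TLS_client_method());
        SSL_CTX_load_verify_locations(ctx, "ca.pem", nullptr);  // my test CA
        SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, nullptr);

        BIO* bio = BIO_new_ssl_connect(ctx);   // socket + handshake in one BIO
        BIO_set_conn_hostname(bio, "raspberrypi.local:4433");
        SSL* ssl = nullptr;
        BIO_get_ssl(bio, &ssl);
        SSL_set_tlsext_host_name(ssl, "raspberrypi.local");  // SNI

        if (BIO_do_connect(bio) <= 0) { ERR_print_errors_fp(stderr); return 1; }
        BIO_puts(bio, "hello over TLS\n");
        char buf[256] = {0};
        int n = BIO_read(bio, buf, sizeof(buf) - 1);
        if (n > 0) printf("got: %s\n", buf);
        BIO_free_all(bio);
        SSL_CTX_free(ctx);
        return 0;
    }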

DTLS on UDP

Once I’ve got TLS working, I’ll likely try to transition to point-to-point DTLS, as that is also a standardized protocol and a good stepping stone to the proposed multicast adaptation.
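
The client side of the DTLS handshake looks like it should be fairly compact. A sketch of what I expect it to look like, assuming a UDP socket that has already been connect()ed to the peer (the helper name is mine, and certificate setup and error paths are skipped):

    // Sketch of a DTLS client handshake over an already-connected UDP socket.
    #include <openssl/ssl.h>
    #include <sys/socket.h>

    SSL* dtls_connect(int fd, sockaddr* peer) {
        SSL_CTX* ctx = SSL_CTX_new(DTLS_client_method());
        SSL* ssl = SSL_new(ctx);
        BIO* bio = BIO_new_dgram(fd, BIO_NOCLOSE);  // wrap the datagram socket
        // Tell the BIO who the peer is so retransmit timers behave sensibly.
        BIO_ctrl(bio, BIO_CTRL_DGRAM_SET_CONNECTED, 0, peer);
        SSL_set_bio(ssl, bio, bio);
        if (SSL_connect(ssl) != 1) return nullptr;  // handshake failed
        return ssl;                                 // use SSL_read/SSL_write
    }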

Multicast on UDP

I haven’t worked with multicast datagram traffic much (pretty much never), so I’ll likely move on to simple, unencrypted multicast traffic from there. If I can get some of the machines to join a multicast group and ping traffic off of them, I’ll count that as a win.
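
The socket-level part of that test, at least, is standard POSIX fare. A sketch of the receive side, with a made-up group address and port:

    // Join a multicast group on the default interface and print one datagram.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(30000);                       // arbitrary test port
        bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

        ip_mreq mreq{};                                      // group membership
        mreq.imr_multiaddr.s_addr = inet_addr("239.1.2.3");  // site-local group
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

        char buf[1500];
        ssize_t n = recvfrom(fd, buf, sizeof(buf) - 1, 0, nullptr, nullptr);
        if (n >= 0) { buf[n] = '\0'; printf("received: %s\n", buf); }
        close(fd);
        return 0;
    }

The sending side just sendto()s datagrams at the group address and port; no join is required to transmit.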

Multicast over DTLS on UDP

The final step of this exercise will involve taking the multicast sample and the DTLS sample and attempting to implement the proposed approach to providing multicast support in DTLS. This isn’t a standards-track proposal, but it seems like the closest thing we’ve got to secure multicast traffic support.

Hoping this comes together. It seems like an interesting exercise. If I can get this work done entirely off hours, I’ll share the resulting code on my GitHub account.

Fun with Password Hashing

I’ve been spending some time looking into password hashing best practices over the last week.

I’ve known about the bcrypt algorithm for a long time as the old BSD-standard ‘high effort’ hashing algorithm designed to make brute-forcing hashes difficult.

I’ve found that there is a newer effort called scrypt intended to provide a modern equivalent for dedicated password hashing, as well as a ‘password expansion’ algorithm that appears to be in wide use called PBKDF2.

The PBKDF2 algorithm applies an HMAC keyed with the password, first to inject the salt and then to chain iterations of the process. It takes a user-selected number of iterations that allows the workload of generating the hash to be tuned to the scope of expected attacks (and to the performance of the target hardware). This allows modern high-performance hash algorithms such as SHA-256 to be applied in a manner that makes the total calculation of the final salted hash resource-intensive enough to reduce the likelihood of a successful brute-force attack.
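
OpenSSL exposes this directly as PKCS5_PBKDF2_HMAC, so a minimal derivation looks something like the following (the salt and iteration count here are placeholders picked purely for illustration):

    // Derive a 256-bit PBKDF2-HMAC-SHA256 hash from a password and salt.
    #include <openssl/evp.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        const char* password = "correct horse battery staple";
        const unsigned char salt[] = "per-user-random-salt";  // use a real RNG
        const int iterations = 200000;  // tune to hardware and threat model
        unsigned char derived[32];

        PKCS5_PBKDF2_HMAC(password, (int)strlen(password),
                          salt, sizeof(salt) - 1, iterations,
                          EVP_sha256(), sizeof(derived), derived);

        for (unsigned char b : derived) printf("%02x", b);
        printf("\n");
        return 0;
    }

Doubling the iteration count roughly doubles the attacker’s cost per guess, which is the whole point of the construction.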

C++ 11, 14, 17 and Later

I’m quite familiar with much of the content of C++ 2011, as it represented a welcome and long-desired step up in C++ language capability.

I’m less clear on the changes that live in the 2014 and 2017 incremental updates (smaller and more tightly focused) and on the upcoming work that will feed into the next release.

Getting on top of this is becoming more important now that I’m back in the C++ world: while almost everything should support C++ 11, support for the later iterations may be missing or fragmentary.
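
For a quick taste of the kinds of incremental additions involved, here is a small, entirely illustrative snippet exercising C++ 14 generic lambdas and return-type deduction alongside C++ 17 structured bindings and if constexpr:

    #include <iostream>
    #include <map>
    #include <string>
    #include <type_traits>

    template <typename T>
    auto describe(const T& value) {           // C++14: deduced return type
        if constexpr (std::is_integral_v<T>)  // C++17: compile-time branch
            return std::to_string(value) + " (integral)";
        else
            return std::string("(non-integral)");
    }

    int main() {
        std::map<std::string, int> versions{{"C++14", 2014}, {"C++17", 2017}};
        for (const auto& [name, year] : versions)  // C++17: structured bindings
            std::cout << name << " -> " << describe(year) << '\n';
        auto twice = [](auto x) { return x + x; }; // C++14: generic lambda
        std::cout << twice(21) << '\n';
    }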

I’m spending a little time this afternoon looking through resources on this front, starting with the C++ 2011, 2014 and 2017 pages on Wikipedia.

I have pulled the draft PDF files for 2011, 2014 and 2017 and grabbed the GitHub source for the standard(s). These are quite useful, but they are seriously deep waters if only the changes are of specific interest. There are interesting pointers on where to buy the official docs here, with the 2017 version from ANSI at just over $100.00. The current working draft appears to be on GitHub here. I may take a shot at building that into a readable PDF at some point…

I am also rather interested to see what is in Boost these days. Back in the Visual Studio 2010 era, the TR1 content that eventually fed into C++ 11 was one of the bigger draws…now that that content is part of the core tools in general, I’m expecting a new range of interesting bits. They seem to have a GitHub repo here.

And here is the C++ 20 page. Interesting that C++ 20 looks to be much more like 11 than like 14 and 17 (which were small tweaks).

Just Ordered the Newish Josuttis Book on C++ 17

Ordered a copy of the new Josuttis book this morning. I’ve found his standard library and templates books to be very much worth reading, and I’m hoping that C++17 – The Complete Guide will provide a useful update to Stroustrup (which is getting a bit old).

I’m back in the world of C++, and the language is undergoing a lot more change these days than it did in the early 2000s. Keeping up with the future trajectory of C++ is very much on my radar.