Writing up an updated version of my C++ coding guidelines.

I’ve written C++ and C coding guidelines for a number of teams I’ve been part of. In general, those documents are owned by my employers at the time I created them. I’m working through a list (just getting started at the moment) of things I think should be included in current-day C++ guidelines and posting them under my ‘information’ tab here on my career blog. This is a work in progress and, depending on how busy I become, may stall for extended periods of time…

If time permits I may also write up some commentary on Java and C# (and perhaps others).

Interesting tools and libraries for Java and the JVM.

I’ve spent a few evenings rummaging around the web, looking for interesting tools and code that are either written in Java or run on the JVM. There is a lot of material out there. I had been discussing other programming languages with a friend a little while ago and Lisp was mentioned. That got me thinking (and Clojure, as a functional-programming-oriented Lisp dialect, particularly got my attention), so this list is currently a bit Lisp heavy.

  • ANTLR is a Java-coded parser generator.
  • Clojure is a Lisp-based functional programming language that generates JVM code and can interoperate with Java proper. Clojure also has ClojureScript, which generates JavaScript code from Clojure inputs.
  • Jython is a Python implementation that generates JVM code.
  • SISC is an interpreter for the Scheme dialect of Lisp that runs on the JVM.
  • Rhino is a JavaScript interpreter written entirely in Java that originates with the Mozilla folks.
  • JRuby implements the Ruby language (which I don’t know much about, but it sounds worth a look) and generates JVM code.
  • Armed Bear Common Lisp was the first Common Lisp implementation I came across that runs on the JVM. There seem to be others out there, but the Common Lisp implementations appear a bit old and ragged at this point (at least the references I could find).
  • Kawa is another Scheme implementation with JVM support.
  • Groovy is in the same category for me as Ruby: sounds interesting, runs on the JVM, and I don’t know much about it.

There’s much more out there and I’ll probably wind up wandering through it as time passes. I’ve grabbed snapshots of these projects to mess with. Depending on time and momentum I’m likely to poke at Clojure and perhaps at some of the other Lisp dialects. I haven’t messed with Lisp in a long time (and most of what I have done was GNU Emacs Lisp), but it seems as if a few interesting things are going on there. A functional programming environment that can be used in conjunction with other methodologies is also very interesting. I can’t see doing the majority of my work in functional land, but having the option of using a functional environment for tasks that are well suited to that approach is quite appealing.

Still coming up to speed on the Java front…

I’ve been doing quite a bit of reading and some talking to folks who know the language and environment. I come into this process with a decent, abstract knowledge of the Java language itself and some basic idea of how the runtime library is laid out. I started with very limited practical experience writing Java code but lots of experience building large, threaded, and distributed systems in C++ and C.

So far, the books I’ve run through (or am in the process of reading) are:

  • Effective Java, Second Edition was another paper purchase. The C++ volumes in this series, Effective C++, Third Edition and More Effective C++, provide excellent examples of how to make the best use of the C++ language. The Java volume appears to provide good advice in the same vein but with a Java focus. I have found it challenging at times to find books like these that offer advice useful to the experienced practitioner. Most programming books cover language basics and how to get things working. These volumes take that to the next level, with advice about how to get the best results and warnings about subtle issues that aren’t immediately obvious.
  • Java Concurrency, Second Edition I bought in paper form. It appears to give very good coverage of Java’s concurrency and locking facilities. Reading it got me looking deeper into threading and interrupt exceptions and places where these were perhaps questionable. Digging further into Apache’s closeQuietly and its possible interactions with (windows of vulnerability in) thread interruption, I wound up in I/O land.
  • Java I/O is the book I’m currently reading. It covers the basics of the core Java I/O library and then gives reasonable (if a bit superficial in places) coverage of the new I/O classes that were added later in Java’s life. I got started down this road when I realized that baseline Java I/O seemed to have serious limitations when used in blocking mode in threaded servers (and it appears that in Java, just about everything needs to be moderately threaded). This led me to NIO as the focal point for dealing with such things, and to various digressions into JVM exception handling and error management.
  • Java NIO is on my reading list and seems to go into more depth on the new I/O classes in Java. It appears that any attempt to build scalable, interruptible server I/O code in Java needs to take these facilities into account. I haven’t started this one yet, as I’m still working through the second edition of Java I/O, which covers NIO functionality but in limited depth. In particular, it looks as if it skips over the SSLEngine class that allows NIO sockets to implement SSL and TLS encryption.
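The interruptibility difference that sent me down this road is concrete enough to sketch. The following is a minimal, hypothetical example (mine, not from any of these books): a thread blocked reading from an NIO channel (a Pipe here, so the sketch is self-contained) is unblocked by Thread.interrupt(), which closes the channel and raises ClosedByInterruptException; a plain blocking java.io read offers no comparable guarantee.

```java
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.Pipe;

public class InterruptibleReadDemo {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        Thread reader = new Thread(() -> {
            ByteBuffer buf = ByteBuffer.allocate(64);
            try {
                // Blocks here: NIO channels implement InterruptibleChannel,
                // so interrupting this thread closes the channel and
                // unblocks the read with ClosedByInterruptException.
                pipe.source().read(buf);
                System.out.println("read completed");
            } catch (ClosedByInterruptException e) {
                System.out.println("read interrupted and channel closed");
            } catch (Exception e) {
                System.out.println("other failure: " + e);
            }
        });
        reader.start();
        Thread.sleep(200);   // give the reader time to block in read()
        reader.interrupt();  // unblocks the NIO read
        reader.join();
    }
}
```

Note that the channel is unusable afterward; any interrupt-driven shutdown design has to treat the close as part of the protocol rather than an error to recover from.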

Starting at Kiva soon, setting up Linux to experiment with…

I’ll be starting work at Kiva Systems this coming Monday. I’m looking forward to getting deep into complex technical problems again and back into development work.

It was sad wrapping up my last day at GE/Oni, but I think the time had come, and as endings go this wasn’t a terribly bad one. It seems that pretty much everyone I know of who was actively looking has found something, and the rest are settling in to other priorities for the summer. We’re still waiting for the final pay-out, but that was clearly noted as taking on the order of a month.

I’ve set up a spare machine here to run CentOS (a free fork of Red Hat Enterprise Linux) and started reading some Linux admin material. Most of the development processes seem to have remained largely unchanged since I last touched Linux. Admin processes have changed (the yum package manager), and various things are interesting now that were not back then (HTTP servers, MySQL, etc.).

I picked up an SSD for my main desktop machine (a Windows 8.1 box) and took some time to get the OS transferred. It was a bit frustrating, as the differences were sufficient that neither drive-cloning tool I had would handle it. I finally succeeded by making a disaster-recovery backup (the command line version worked nicely) and then doing a ‘recovery’ onto the new SSD.

I’ve still got more reading and learning to do on the Java front, but I expect that will be ongoing for the foreseeable future, as there is much to learn.

New topics and books…

I’ve been doing various catch-up and refresher reading lately. I picked up the latest editions of Effective Java, Effective C#, and More Effective C# to update my garbage-collected-environment programming. I’ve got to put together a few toy projects soon to get a bit more refresher time in. I wrote a little Swing UI code some time ago, but I want to do a bit more and get a better look at JavaFX to see what Oracle is putting forth there.

I’ve also been digging a little deeper into Qt, as one opportunity I’ve been presented with would involve some Qt programming.

I try to keep myself broad-based (with areas of intense focus) and I like to do this sort of thing periodically in any case. A job search makes a good opportunity for a series of short but aggressive investigations into new and interesting technologies.

Android is also hanging out there somewhere, but at the moment I’m getting wound up in the details of the base language features and run-time libraries. I’ll almost certainly get around to some Android work once I’ve hit the high points on this current round of stuff.

Motion Control

I have recently had some questions about my knowledge and background in motion control, so I thought I’d summarize here to help get my thoughts in order.

I have largely worked with stepper motors to date. For precision positioning (at least by the standard of things I’ve worked on) they are cheap and effective. They have also been relatively small motors (NEMA 14 to NEMA 17 types) driving a single axis through a lead screw. I have been looking at putting together a RepRap 3D printer recently, but haven’t yet ordered the stepper motors for the project.

Most of my detailed motion control experience came when I was working for Howtek. We were designing, manufacturing, and selling high-end imaging systems for graphic arts and medical film scanning. Our first few designs used a dedicated 80188 microprocessor board for motor control. Precision timing was managed with assembly language coding (this was probably one of the last CPU generations where no-op sequences would provide good timing, as pipelines and caches came into general use shortly after). The entire board was dedicated to controlling one stepper motor, the drum drive, and reading the quadrature encoder that checked carriage motion.

I became directly involved when we needed to significantly cost-reduce our medical film scanner design. We were moving to a single 386EX processor for all control functions (from a system where I/O, motor control, and overall system control ran on independent CPUs). To make this transition we needed to dramatically change the stepper control implementation.

Step-to-step timing is critical if a stepper motor is to run smoothly and reliably, and we could not depend on the processor to provide it directly in the new system. For the new design we chose a CPU timer/counter channel to provide the time base for commutating the motor. A two-stage pipeline for the winding state was implemented using two latches, with the first latch writable by the CPU and its outputs driving the inputs of the second latch. The second latch was clocked by the timer/counter output and fed the drivers that ran the windings (I can’t remember whether this was a unipolar motor or an H-bridge driving a bipolar motor). Each time the timer triggered, an IRQ would also be delivered to the CPU at a rather high priority, allowing the processor to update the winding drive pattern for the next time interval.
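The two-latch pipeline can be sketched in software terms. This is an illustrative Java sketch only (the original firmware ran on the 386EX; the pattern table, names, and step sequence here are my assumptions, not the actual design): a full-step winding table for a two-phase stepper, with the timer-interrupt handler staging the next pattern into the first latch while the second latch drives the windings with the previous one.

```java
// Illustrative sketch of the commutation scheme described above.
public class StepperCommutation {
    // Hypothetical full-step drive table for a two-phase stepper:
    // each entry encodes the A+/A-/B+/B- winding drive bits.
    static final int[] FULL_STEP = { 0b1010, 0b0110, 0b0101, 0b1001 };

    int phase = 0;   // index of the pattern currently staged
    int firstLatch;  // value the CPU has written to the first latch

    // Called from the timer IRQ: the second latch has just clocked in
    // the previous pattern, so stage the next one for the next tick.
    void onTimerInterrupt(int direction) {  // direction = +1 or -1
        phase = Math.floorMod(phase + direction, FULL_STEP.length);
        firstLatch = FULL_STEP[phase];      // CPU write to first latch
    }

    public static void main(String[] args) {
        StepperCommutation m = new StepperCommutation();
        for (int i = 0; i < 4; i++) {
            m.onTimerInterrupt(+1);
            System.out.println("step " + i + ": windings "
                    + String.format("%4s", Integer.toBinaryString(m.firstLatch)));
        }
    }
}
```

The point of the two latches is that the winding transition happens on the hardware timer edge regardless of interrupt latency; the CPU only has to get the next pattern staged before the following tick.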

This was a single-axis system that positioned a heavy aluminum sled holding a rotating acrylic drum with a positional accuracy of approximately 1/4000 of an inch. The stepper motor was coupled to a lead screw that was connected to the sled through an anti-backlash nut. A quadrature encoder on the far end of the lead screw verified that the mechanism was responding to stepper impulses. Flags mounted on the ends of the carriage slotted into optical interrupters at either end of the carriage run to indicate that the end had been reached. The sled was mounted on one side to a circular-cross-section linear rail using a PTFE contact bearing (a cost reduction from previous systems that used linear bearings with a ball bearing race inside). The other side of the sled had a follower (spring loaded, with brass rollers for tracking, I believe) running along a rectangular-cross-section rail. This arrangement tolerated larger out-of-parallel errors between the two rails than previous systems (with two captive bearings, the sled would bind if the rails were even slightly out of true).

On system start-up (I believe this was carried out on the first commanded motion, which would occur immediately if the door was closed and secured), the software would comb the winding pattern forward for one or two steps. This was done to ensure that the rotor and stator relationship was known before making larger motions. Once the motor had been synchronized in this manner, if the parking-side end stop sensor was occluded, a slow move out until the sensor cleared would be performed, followed by a home-to-sensor move to establish the parking position of the sled. If the sensor was not occluded, the system would perform a slow home operation until the sensor was occluded (slow moves were not ramped and ran at the maximum unramped step rate for the system).

Normal move-to-position transits were ramped moves, where the interval between steps decreased from its initial value (accelerating the movement) until the maximum speed was reached, then increased again in a similar manner to ensure a clean stop. We could do this safely because we retained a memory of the current sled position and had feedback from the quadrature encoder if something caused the sled to fail to respond to a step request, and it substantially improved performance. Moves that ran against the normal imaging step direction were set up to overshoot the destination position slightly and then move back a short distance to take up any slack (backlash) in the drive system.
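The ramp profile can be expressed as a table of per-step delays. This is a hypothetical Java sketch (the constants and names are mine, not the scanner's actual parameters): each entry is the interval loaded into the step timer, shrinking during acceleration, flat while cruising, and growing again for a clean stop.

```java
import java.util.Arrays;

// Illustrative trapezoidal ramp: per-step timer delays for a move.
public class RampedMove {
    // steps: total steps in the move; startDelay: slowest interval;
    // minDelay: interval at full speed; delta: change per step.
    // All values are hypothetical timer ticks.
    static int[] rampDelays(int steps, int startDelay, int minDelay, int delta) {
        int[] delays = new int[steps];
        for (int i = 0; i < steps; i++) {
            int accel = startDelay - i * delta;               // ramp-up leg
            int decel = startDelay - (steps - 1 - i) * delta; // ramp-down leg
            // Take whichever leg demands the longer (slower) interval,
            // floored at the full-speed delay: a symmetric trapezoid.
            delays[i] = Math.max(minDelay, Math.max(accel, decel));
        }
        return delays;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(rampDelays(10, 100, 60, 10)));
    }
}
```

A move against the imaging direction would use the same profile but target a point slightly past the destination, followed by a short move back to take up the backlash, so the final approach always comes from the same side of the lead screw threads.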

Once the sled was positioned at the start-of-operation position, the imaging system would be primed for data acquisition, and scan stepping (not ramped, a fixed number of steps per scan line) would be initiated. In this mode, each burst of steps (positioning the sled for the next scan line) was slaved to the needs of the imaging system, to ensure clean image data (no motion during imaging) and repeatability (the host would sometimes fail to keep up, and the scanner would need to pause as internal buffers saturated).
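The slaving of motion to the imaging pipeline amounts to simple flow control. This is an illustrative Java sketch (names, buffer counts, and steps-per-line are invented for the example, not the scanner's real values): a step burst is only issued when the imaging side can accept another line, so motion pauses automatically when the host falls behind.

```java
// Illustrative flow-control sketch for scan stepping.
public class ScanStepper {
    static final int STEPS_PER_LINE = 8; // hypothetical value

    int freeBuffers; // scan-line buffers the imaging side can still fill
    int position;    // sled position in steps

    ScanStepper(int buffers) { freeBuffers = buffers; }

    // Issue one unramped burst only if imaging can take another line;
    // otherwise hold position until the host drains a buffer.
    boolean stepNextLine() {
        if (freeBuffers == 0) return false; // host behind: no motion
        position += STEPS_PER_LINE;         // fixed-size, unramped burst
        freeBuffers--;                      // line now owned by imaging
        return true;
    }

    // Called when the host has read a completed scan line out.
    void hostDrainedLine() { freeBuffers++; }

    public static void main(String[] args) {
        ScanStepper s = new ScanStepper(2);
        System.out.println(s.stepNextLine()); // line 1 accepted
        System.out.println(s.stepNextLine()); // line 2 accepted
        System.out.println(s.stepNextLine()); // buffers saturated: pause
        s.hostDrainedLine();                  // host catches up
        System.out.println(s.stepNextLine()); // motion resumes
        System.out.println("position " + s.position);
    }
}
```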

Source code control systems

I’ve worked with a number of source code control systems and looked into most of the common ones at one time or another.

Those I’ve used for serious work:

  • Visual SourceSafe

    VSS has gotten a lot of mileage, as its per-seat cost is low (and zero with MSDN subscriptions) and for small teams its capabilities are adequate. In a number of cases it was the source code control system that was in place when I started working with a team.

    Its performance is acceptable as long as all team members are on (or can reach via remoting) the local high-performance network. It uses direct file system access to manage its database and thus provides only very limited security. This has not been a huge problem, as your in-house development team is trusted in any case, but it could be a problem if outside developers needed access. Its scalability has been questioned in places, but to date I’ve seen pretty large repositories running without significant issues. Some obvious operations (branching and labeling) don’t work in the obvious way, but there are ways to accomplish everything I’ve needed to do.

    At this point it is getting very old and creaky, but for a small local team that needed a quick and easy way to manage source code control I might still look at it as a readily available and well understood solution.

  • Git

    I have been using Git as my local source code sandbox repository for some time now. It is rugged, simple, easy to use for what I need, and makes mirroring of work easy. When I’m heading home I can push my day’s changes to a flash drive and know that I’m in good shape to keep things going that evening if there is a need. Git stores whole snapshots rather than deltas at the object level (though its pack files do delta-compress), which can make it a bit more of a storage hog than some other choices, but given the size of storage devices and the extra robustness that comes from this choice I’m likely to stick with it for personal projects. The fact that it has supported huge, widely distributed open source projects on the net gives me confidence that it will scale to meet pretty much any need.

    The biggest downside I can see to Git is the command line interface that is its default front end. I am comfortable with it at this point, but I’d be reluctant to introduce most development teams I’ve worked with to Git as their default source code control system. I know there are GUI wrappers for Git, but the ones I’ve seen looked a bit limited (and half-baked in some cases). There may be better options out there now, as I haven’t gone looking in some time. The requirement that you pull the entire repository onto your local machine could be another issue where the code base is very large. I could imagine splitting things into multiple repositories in some cases to let developers work on subsets of the entire code base, though I’d imagine this would exact a price when dealing with branches and such.

  • Old school zip or tar ball

    In my first software job, the group had no source code control system of any sort in place when I started, and as I moved into a lead role I was aware that this was trouble. At the time the only packages I was at all familiar with were RCS and SCCS (and CMS from VMS, which clearly didn’t apply), and neither was a good choice for the small team at Howtek. I ended up instituting a simple system where code was handed to a central location (each engineer had a separate processor board they were coding for, so no merges) and periodically a version number would be assigned to the result and a zip file created containing the source tree at that point. I’d never go back to this, as I know there are better alternatives, but it worked surprisingly well for the five- or six-person team we had.

  • Perforce

    Perforce is a rather nice and rather expensive source code control system. It appears to scale well, supports branch and merge operations as well as anything I’ve worked with, and has an easy-to-use and easy-to-teach GUI. Aside from the ‘expensive’ aspect, I’d be comfortable pitching Perforce to any place I’ve worked. It supports change-lists (something VSS doesn’t) and seems to have a robust back end. I’ve seen it support six million lines of code without showing any stretch marks. If the team wasn’t comfortable with Git, and the team and project were large enough to demand something of this scope, I’d be comfortable going there.

    One issue I have seen with teams that use source code control systems with very expensive per-seat licensing is that the pool of employees with any sort of access tends to get squeezed. I’d rather have engineers who have occasional need of access (read-only or read-write) get it as needed, rather than having a small group of developers acting as gatekeepers (with the associated loss of check-in traceability as check-ins get delegated to those with active seats).

Those I’ve investigated or played with:

  • CVS

    CVS seems to be pretty close to VSS in features and functions, but with an open source implementation and a somewhat fragile back end. It was the first open source source code control system I looked at seriously, as I was seeing VSS working well but was concerned that it was getting old and creaky (and not certain that Microsoft had a long-term commitment to it).

    In the end I concluded that CVS offered little incentive to move from any other option, that there were better open source alternatives out there, and that the number of warnings about back-end fragility and corruption issues left me deeply uncomfortable about serious work with the tool.

  • Subversion

    Subversion seems like the most credible alternative to CVS (and was explicitly created to be one). It appears to be more full-featured and more robust, and functionally not that far from Perforce. I have still seen too many concerns about the back ends to be comfortable with it for things that matter (though I haven’t looked into the situation in several years, as Git has been working for me). The last time I looked, it appeared to be a decent solution, but not worth scrapping an existing system to make the transition.

  • Team Foundation Server

    This is Microsoft’s heavy-duty replacement for VSS in their line-up. Most of the research on this was done by a coworker, so what I have is the distillation of his investigations. It seems like a very powerful and highly integrated source code control solution. The back end is SQL Server, with all of the power and reliability (and cost) that a full-fledged database server brings to the table. We were looking at budgeting for a deployment and getting this in place, but with SMR being end-of-lifed this isn’t going to happen. I believe it would have served our needs, though it is probably a much heavier weight solution than a team with a half dozen developers needed.

  • ClearCase

    I have been close to ClearCase implementations but never yet had to work in that environment. My impression from a close distance is that ClearCase is a poor solution for most modern needs. It appears to require aggressive and skilled support and presents a highly integrated and intrusive presence in the work flow. Having the source code control system present itself as a networked file system seems like unnecessary complexity for the job at hand, and the costs of purchasing seats and hiring dedicated administrators make this a complex and expensive option that might make sense only for very large teams. I’d certainly give it a shot if I were hired by a site that had it already in place (I’m a software engineer, not a release engineer, and so the work is what matters), but I would have deep reservations about deploying ClearCase into a new team or as a replacement for an existing system. There are too many other options available that don’t have the overhead or downsides I’ve seen in this tool.

Updating a few books

I’ve got old editions of Effective C# and Effective Java. Since my work for the last few years has been mostly C++, I haven’t gotten around to buying the second editions. Now that I’m looking for new opportunities, the time has come to refresh my C# and Java a bit. They’re in the mail but not here yet. I’m also getting the sequel to the C# book, More Effective C#, and Java Concurrency in Practice, which I have read and found useful but do not currently own.

I’m hoping to get around to doing some Android programming with a RESTful web back end here on this site. Getting the rust knocked off of my Java should contribute to that effort. Looking forward to reading these and seeing what I can do with the knowledge…

First interview in a while

I’ve been at Oni/GE for almost ten years now. It has been a great time with a team of exceptional people working on some incredible technology. Now that GE has decided to end the product I’m back out in the job market and yesterday I had my first interview this time ’round.

I think the interview went well, and it certainly has me thinking about the future in ways I hadn’t been in the recent past. It was a surprisingly jarring change when the facility closure was made official and I stopped thinking about future directions and possibilities for the scanner system. I’m used to having a mess of work-related material sloshing around in my head, and for the moment there’s none of that, as we’re working on knowledge transfer and wrapping things up for the folks in Wisconsin who will be taking responsibility for ongoing maintenance issues (I’m betting there will be none significant enough to warrant a release, but they have to be cautious about these things).

Talking with the folks at KMC Systems got me thinking about the limited horizons that the slow shrinkage of the Wilmington site has placed on us. I’m looking forward to working with a larger team and a broader slate of issues than we’ve been able to tackle in the last few years. It has been a while since we’ve had a significant range of skill levels on the team, or the broader challenge of moving on multiple objectives in parallel.

Talking with KMC has me revisiting many things that have been on the shelf for a while, whether I end up with them (if they offer and I accept) or somewhere else. I’ve enjoyed managing team interactions, bringing less experienced engineers along, and juggling all of the issues involved with a new and developing product. The focus over the last couple of years at Wilmington has narrowed slowly but inexorably, and I’ve been almost entirely working on bug fixes and feature implementation. Time for some new horizons (not that I have any choice, but I’m ready in any case) and new challenges…