Motion Control

I have recently had some questions about my knowledge and background in motion control and so I thought I’d summarize here to help get my thoughts in order.

I have largely worked with stepper motors to date. For precision positioning (at least by the standard of things I’ve worked on) they are cheap and effective. They have also been relatively small motors (NEMA 14 to NEMA 17 types) driving a single axis through a lead screw. I have been looking at putting together a RepRap 3D printer recently, but haven’t yet ordered the steppers for the project.

Most of my detailed motion control experience came when I was working for Howtek. We were designing, manufacturing and selling high end imaging systems for graphic arts and medical film scanning. Our first few designs used a dedicated 80188 microprocessor board for motor control. Precision timing was managed with assembly language coding (this was probably one of the last CPUs where no-op sequences provided reliable timing, as pipelines and caches came into general use shortly thereafter). The entire board was dedicated to controlling one stepper motor, driving the drum and reading the quadrature encoder that checked carriage motion.

I became directly involved when we needed to significantly cost-reduce our medical film scanner design. We were moving to a single 386EX processor for all control functions (from a system where I/O, motor control and overall system control ran on independent CPUs). To make this transition we needed to dramatically change the stepper control implementation.

Step-to-step timing is critical if a stepper motor is to run smoothly and reliably, and we could not depend on the processor to provide this directly in the new system. For the new design we chose to use a CPU timer/counter channel to provide the time base for commutating the motor. A two-stage pipeline for the winding state was implemented using two latches, with the first latch writable by the CPU and its outputs driving the inputs of the second latch. The second latch was clocked by the timer/counter output and fed the drivers that ran the windings (I can’t remember whether this was a unipolar motor or an H-bridge driving a bipolar motor). Each time the timer triggered, an IRQ was also delivered to the CPU at a rather high priority, allowing the processor to update the winding drive pattern for the next time interval.
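The two-stage pipeline can be sketched in software terms. This is a hypothetical Python model of the idea, not the original 386EX code: the class, method names and the two-phase full-step pattern table are all illustrative assumptions.

```python
# Hypothetical model of the two-latch winding pipeline. The CPU (in the
# IRQ handler) pre-loads the NEXT pattern into the first latch; only the
# timer tick moves it to the second latch, so the windings always change
# state with hardware-exact timing regardless of interrupt latency.

FULL_STEP_SEQUENCE = [0b1010, 0b0110, 0b0101, 0b1001]  # illustrative table

class LatchPipeline:
    def __init__(self):
        self.first_latch = FULL_STEP_SEQUENCE[0]  # CPU-writable stage
        self.second_latch = 0b0000                # drives the windings
        self.index = 0

    def cpu_preload_next(self):
        """IRQ-handler work: queue the next pattern while the current
        one is still on the windings."""
        self.index = (self.index + 1) % len(FULL_STEP_SEQUENCE)
        self.first_latch = FULL_STEP_SEQUENCE[self.index]

    def timer_tick(self):
        """Timer/counter output clocks the second latch: the queued
        pattern reaches the windings."""
        self.second_latch = self.first_latch
```

The key property is that `cpu_preload_next` never disturbs the windings; only the timer tick does.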

This was a single axis system that positioned a heavy aluminum sled holding a rotating acrylic drum with a positional accuracy of approximately 1/4000 of an inch. The stepper motor was coupled to a lead screw that was connected to the sled using an anti-backlash nut. There was a quadrature encoder on the far end of the lead screw that was used to verify that the mechanism was responding to stepper impulses. Flags mounted on the ends of the carriage slotted into optical interrupters at either end of the carriage run to indicate that the end had been reached. The sled was mounted on one side to a circular cross section linear rail using a PTFE contact bearing (this was a cost reduction from previous systems that used a linear bearing with a ball-bearing race inside). The other side of the sled had a follower (spring loaded with brass rollers for tracking, I believe) running along a rectangular cross section bearing. This arrangement allowed larger out-of-parallel tolerances between the two rails in comparison to previous systems (with two captive bearings the sled would bind if the rails were even slightly out of true).

On system start up (I believe this was carried out on the first commanded motion, which would occur immediately if the door was closed and secured) the software would comb the winding pattern forward for one or two steps. This was performed to ensure that the rotor and stator relationship was known before making larger motions. Once the motor had been synchronized in this manner, if the parking-side end stop sensor was occluded, a slow move out until the sensor cleared would be performed, followed by a home-to-sensor move to establish the parking position of the sled. If the sensor was not occluded, the system would simply perform a slow home operation until the sensor was occluded (slow moves were not ramped and were run at the maximum unramped step rate for the system).
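The start-up sequence above reduces to a small state machine. Here is a minimal Python sketch of it; the callback names, the comb step count and the sensor convention are stand-in assumptions, not the real hardware interface.

```python
# Illustrative sketch of the start-up homing sequence: comb the winding
# pattern forward to synchronize rotor and stator, then home onto the
# parking-side end stop sensor from a known direction.

COMB_STEPS = 2  # assumed: one or two forward steps to synchronize

def home(sensor_occluded, step):
    """sensor_occluded() reads the parking-side optical interrupter.
    step(direction) moves one unramped step; +1 is away from the
    parking end, -1 is toward it."""
    for _ in range(COMB_STEPS):       # comb forward: rotor/stator sync
        step(+1)
    if sensor_occluded():             # parked on the sensor already?
        while sensor_occluded():      # slow move out until it clears...
            step(+1)
    while not sensor_occluded():      # ...then home back onto the sensor
        step(-1)
```

Because the final approach to the sensor always happens from the same side, the parking position is established repeatably.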

Normal move-to-position transits were ramped moves where the timing between steps decreased from its initial value (thus accelerating the movement) until the maximum speed was achieved, and then ramped back down in a similar manner to ensure a clean stop. Because we retained a memory of the current position of the sled, and had feedback from the quadrature encoder if something caused the sled to fail to respond to a step request, we could substantially improve transit performance this way. Moves run against the normal imaging step direction were set up to overshoot the destination position slightly and then move back a short distance to take up any slack (backlash) in the drive system.
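Both tricks above are easy to sketch. This is a hypothetical Python illustration: the linear taper shape, the parameter names and the numbers are assumptions for illustration, not the original profile.

```python
# Trapezoidal ramp sketch: per-step delays shrink from an initial value
# down to a top-speed minimum, then grow back for a clean stop.

def ramp_delays(total_steps, start_delay, min_delay, ramp_steps):
    """Return one inter-step delay per step for a ramped move.
    ramp_steps (>= 1) is the length of each acceleration ramp."""
    delays = []
    for i in range(total_steps):
        remaining = total_steps - 1 - i
        # 0.0 at either end of the move, 1.0 once fully up to speed
        taper = min(i, remaining, ramp_steps) / ramp_steps
        delays.append(start_delay - (start_delay - min_delay) * taper)
    return delays

def backlash_targets(current, target, overshoot, scan_direction=+1):
    """Moves against the scan direction overshoot slightly, then come
    back to take up the lead-screw backlash."""
    if (target - current) * scan_direction >= 0:
        return [target]                  # with the grain: direct move
    return [target - overshoot * scan_direction, target]
```

A move against the scan direction thus becomes two sub-moves, with the final short approach always made in the imaging direction so the anti-backlash nut is loaded consistently.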

Once the sled was positioned at the start-of-operation position, the imaging system would be primed for data acquisition and scan stepping (not ramped, a fixed number of steps per scan line) would be initiated. In this mode, each burst of steps (positioning the sled for the next scan line) was slaved to the needs of the imaging system to ensure clean image data (no motion during imaging) and repeatability (the host would sometimes fail to keep up and the scanner would need to stop for a time as internal buffers saturated).
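The scan-stepping handshake can be sketched as a gated loop. This is a hypothetical Python outline of the control flow only; the callback names and the steps-per-line value are illustrative assumptions.

```python
# Scan-stepping sketch: a fixed, unramped burst of steps per scan line,
# gated so the sled never moves while a line is being imaged and holds
# position cleanly when the host falls behind and buffers fill.

STEPS_PER_LINE = 4  # assumed value

def scan(lines, buffer_has_room, acquire_line, step):
    for _ in range(lines):
        while not buffer_has_room():   # host behind: hold position
            pass
        acquire_line()                 # no motion during imaging
        for _ in range(STEPS_PER_LINE):
            step()                     # burst to the next scan line
```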

Source code control systems

I’ve worked with a number of source code control systems and looked into most of the common ones at one time or another.

Those I’ve used for serious work:

  • Visual SourceSafe

    VSS has gotten a lot of mileage as its per-seat cost is low (and zero with MSDN subs) and for small teams its capabilities are adequate. In a number of cases it was the source code control system that was in place when I started working with a team.

    Its performance is acceptable as long as all team members are on (or can reach via remoting) the local high performance network. It uses direct file system access to manage its database and thus provides only very limited security. This has not been a huge problem, as your in-house development team is trusted in any case, but it could be a problem if outside developers needed access. Its scalability has been questioned in places, but to date I’ve seen pretty large repositories running without significant issues. Some obvious operations (branching and labeling) don’t work in the obvious way, but there are ways to accomplish everything that I’ve needed to do.

    At this point it is getting very old and creaky, but for a small local team that needed a quick and easy way to manage source code control I might still look at it as a readily available and well understood solution.

  • GIT

    I have been using GIT as my local source code sandbox repository for some time now. It is rugged, simple, easy to use for what I need and makes mirroring of work easy. When I’m heading home I can push my day’s changes to a flash drive and know that I’m in good shape to keep things going that evening if there is a need. The fact that it doesn’t use deltas makes it a bit more of a storage hog than most other choices, but given the size of storage devices and the extra robustness that comes from this choice I’m likely to stick with it for personal projects. The fact that it has supported huge, widely distributed open source projects on the net gives me confidence that it will scale to meet pretty much any need.

    The biggest down-side I can see to GIT is the command line interface that is its default front end. I am comfortable with it at this point, but I’d be reluctant to try to introduce most development teams I’ve worked with to GIT as their default source code control system. I know that there are GUI wrappers for GIT, but the ones I’ve seen looked a bit limited (and half baked in some cases). There may be better options out there now as I haven’t gone looking in some time. The requirement that you pull the entire repository onto your local machine could be another issue in cases where the code base is very large. I could imagine that it would make sense to split things into multiple repositories in some cases to permit developers to work on subsets of the entire code-base, though I’d imagine this would exact a price when dealing with branches and such.

  • Old school zip or tar ball

    In my first software job the group had no source code control system of any sort in place when I started, and as I moved into a lead role I was aware that this was trouble. At the time the only packages I was at all familiar with were RCS and SCCS (and CMS from VMS, but that clearly didn’t apply) and neither was a good choice for the small team at Howtek. I ended up instituting a simple system where code was handed to a central location (each engineer had a separate processor board that they were coding for, so no merges) and periodically a version number would be assigned to the result and a zip file created containing the source tree at that point. I’d never go back to this as I know there are better alternatives, but it worked surprisingly well for the five or six person team we had.

  • Perforce

    Perforce is a rather nice and rather expensive source code control system. It appears to scale well, supports branch and merge operations as well as anything I’ve worked with and has an easy to use and easy to teach GUI. Aside from the ‘expensive’ aspect I’d be comfortable pitching Perforce to any place I’ve worked. It supports change-lists (one of the limitations of VSS is that it doesn’t) and seems to have a robust back end. I’ve seen it used to support six million lines of code without showing any stretch marks. If the team wasn’t comfortable with GIT and the team and project were large enough to demand something of this scope I’d be comfortable going there.

    One issue I have seen with teams that use source code control systems with very expensive per-seat licensing is that the pool of employees with any sort of access tends to get squeezed. I’d rather have engineers who have occasional need of access (read only or read-write) get access as needed rather than having a small group of developers acting as gate keepers (with the associated lack of check-in traceability as check-ins get delegated to those with active seats).

Those I’ve investigated or played with

  • CVS

    CVS seems to be pretty close to VSS in features and functions, but with an open source implementation and a somewhat fragile back end. It was the first open source source code control system that I looked at seriously, as I was seeing VSS working well but was concerned that it was getting old and creaky (and not certain that Microsoft had a long term commitment).

    In the end I concluded that CVS offered little incentive to move from any other option, that there were better alternatives out there in open source and that the number of warnings about back-end fragility and corruption issues left me deeply uncomfortable about serious work with the tool.

  • Subversion

    Subversion seems like the most credible alternative to CVS (and was explicitly created to be one). It appears to be more full featured and more robust, and seems to be functionally not that far from Perforce. I have still seen too many concerns about the back ends to be comfortable with it for things that matter (though I haven’t looked into the situation in several years as GIT has been working for me). The last time I looked at it, it appeared to be a decent solution, but not worth scrapping an existing system to make a transition.

  • Team Foundation Server

    This is Microsoft’s heavy duty replacement for VSS in their line up. Most of the research on this was done by a coworker, so what I have is the distillation of his investigations. It seems like a very powerful and highly integrated source code control solution. The back end is built on SQL Server, with all of the power and reliability (and cost) that a full fledged database server brings to the table. We were looking at budgeting for a deployment and getting this in place, but with SMR being end-of-lifed this isn’t going to happen. I believe that it would have served our needs, though it is probably a much heavier weight solution than a team with a half dozen developers needed.

  • ClearCase

    I have been close to ClearCase implementations but never yet had to work in that environment. My impression from a close distance is that ClearCase is a poor solution for most modern needs. It appears to require aggressive and skilled support and presents a highly integrated and intrusive presence in the work flow. Having the source code control system present as a networked file system seems like unnecessary complexity for the job at hand, and the costs of purchasing seats and hiring dedicated administrators make this a complex and expensive option that might only make sense for very large teams. I’d certainly give it a shot if I were hired by a site that had it already in place (I’m a software engineer, not a release engineer, and so the work is what matters) but I would have deep reservations about deploying ClearCase into a new team or as a replacement for an existing system. There are too many other options available that don’t have the overhead or downsides that I’ve seen in this tool.

Updating a few books

I’ve got old editions of Effective C# and Effective Java. Since my work for the last few years has been mostly C++, I haven’t gotten around to buying the second editions. Now that I’m looking for new opportunities, the time has come to refresh my C# and Java a bit. They’re in the mail but not here yet. I’m also getting the sequel to the C# book, More Effective C#, and Java Concurrency in Practice, which I have read and found useful but do not currently own a copy of.

Hoping to get around to doing some Android programming with a RESTful web back end here on this site. Getting any rust knocked off of my Java should contribute to that effort. Looking forward to reading these and seeing what I can do with the knowledge…

First interview in a while

I’ve been at Oni/GE for almost ten years now. It has been a great time with a team of exceptional people working on some incredible technology. Now that GE has decided to end the product I’m back out in the job market and yesterday I had my first interview this time ’round.

I think the interview went well and it certainly has me thinking about the future in ways I hadn’t been in the recent past. It was a surprisingly jarring change when the facility closure was made official and I stopped thinking about future directions and possibilities for the scanner system. I’m used to having a mess of work related material sloshing around in my head and for the moment there’s none of that as we’re working on knowledge transfer and wrapping things up for the folks in Wisconsin who will be taking responsibility for ongoing maintenance issues (betting there will be none significant enough to warrant a release, but they have to be cautious about things).

Talking with the folks at KMC Systems did get me thinking about the limited horizons that the slow shrinkage of the Wilmington site has placed on us. I’m looking forward to working with a larger team and with a broader slate of issues than we’ve had, or been able to tackle, in the last few years. It has been a while since we’ve had a significant range of skill levels on the team or faced the broader challenge of moving on multiple objectives in parallel.

Talking with KMC has me revisiting many things that have been on the shelf for a while and I’m looking forward to working with a larger team again and a broader range of challenges (whether with them if they offer and I accept or somewhere else). I’ve enjoyed managing team interactions, bringing less experienced engineers along and juggling all of the issues involved with a new and developing product in the past. The focus over the last couple of years at Wilmington has narrowed down slowly but inexorably and I’ve been almost entirely working on bug fixes and feature implementation. Time for some new horizons (not that I have any choice, but I’m ready in any case) and new challenges…

More items lined up, a few to go

I’ve got two recruiters active and looking for me (and for the moment that should be enough…best to avoid clashes). I’ve touched base with the folks I want as references and have their ok and contact information in hand. Now I’m mostly waiting for GE to lock in my end date. HR has indicated that end dates should be formally handed out by the end of this week (though these things can always slip) so I’m hoping that by next week the process will be moving forward at full speed. Once I have the date I’ll ping the recruiters and see how things are looking.

Android SDK setup

I have set up the required pieces to start playing with Android development on my personal laptop and desktop. This involved updating the JDK (not strictly necessary, but not a bad idea) and the ADT. I did have to mess around a bit (copying the tool to a subfolder and such) to get the SDK Manager to start up (though even when it starts up properly it seems to take some time between the launch click and the UI becoming active). I installed most of the platform versions, skipping only 2.1 and earlier. At this point the Eclipse IDE runs and I can ask it to create an Android project. Now I’m back to my reading to figure out where to go from here.

Side projects

I’m on my second read through of the APress Android programming book I’ve got in my library. First pass gets me general familiarity and the lay of the land. Second pass is when I start trying to use things. The Android dev tools and latest JDK are now installed and working on both my laptop and my main desktop at home. Next step is to get a ‘hello world’ type program in place and running. Once that is done I will likely try putting in place a simple RESTful web service front end in PHP on my hosting and see if I can make a simple multiplayer board game work. Lots to learn and lots to play with here. Knock the rust off my Java and get some useful practical knowledge of Android.

I’m seriously thinking of getting my RepRap project moving forward again. I bought the electronics and read through the wiki a couple of years ago. I was hoping to get my teenage daughter interested in working on it with me, but she decided it wasn’t that cool, work got busy, and the parts I had got shelved. At this point I am likely to have some free time, I know what I need next and money shouldn’t be a serious issue. My next step needs to be the stepper motors and some connectors, wiring and heat sinks (the stepper controller chips I have look pretty good, but I’m uncomfortable running them without some way to pull heat out). There seem to be sources for relatively cheap steppers associated with the RepRap folks. Digikey should be able to supply the other stuff (and I have a few additional bits I need from them anyway). If things don’t get busy again in the near future, I’ll probably start moving this forward.

Things are moving forward…

At this point I have two recruiters working with me and I am waiting to see what they come up with. Plenty of time yet, as I still haven’t received my official end date from GE. Rumor says that end dates will be either June 30, September 30 or December 30. The software documentation is in very good shape (proud of the job we did, and it makes it much easier to prepare for the transfer of knowledge) so I’m currently using the June 30 date as my working end date. HR has said that the formal dates and packages should be done this coming week. Hoping they are, as some certainty on the end date GE is requesting will help me plan better as information comes back from the recruiters.

Wilmington SMR is shutting down

GE has decided to shut down the MRI development group that I work for in Wilmington, MA. The site closing package allows me nearly a year to locate an attractive new opportunity, so I have some time to find a very good match.

It has been a wonderful run and I’ve worked with some great people at SMR (and ONI Medical Systems before the GE acquisition). It is sad to be moving on, but exciting to consider the new possibilities that this opens up.