The things you run into as you’re getting serious with sandbox coding in a new language… I was setting up some pieces for the toy program I’m building (semi-smart de-dup tool with MySQL back end for archive management). After reinstalling Visual Studio 2015 to clear up an issue with creating native DLLs I’ve started framing out the pieces.
I was intending to build this as a console executable with a managed DLL for the bulk of the operational code and a native DLL for things that need interop (currently mostly volume serial number (VSN) access for optical media and external drives). As I was laying out the projects I noticed that there was no option for static library creation in the (long) list of project types.
This surprised me a bit as I’ve found it helpful to package code in static libraries while binding the resulting code into a single unit (DLL or EXE) for distribution. From what I’ve been able to determine with a little Google searching, it looks as if the managed world only supports dynamic library binding. That makes some sense, as the metadata issues could become complex with the same code bound into multiple assemblies.
It is funny that the common suggested solution for cases where code needs to be bound into multiple DLLs or EXEs is to revert to the really bad old days and just copy the source (or link to the source from other projects, which seems problematic in a real environment). It seems the managed environment assumes that non-trivial projects will consist of a relatively large number of DLLs that all get copied from place to place during installs. I’ll probably poke around to see if there’s a .NET equivalent to jar/war packaging…I’d think there ought to be, to simplify deployment.
I am a bit unhappy with what I see of .NET interop, particularly on the COM side. I expect to be able to load an arbitrary COM object that supports a given interface. Since the interface is the contract, I shouldn’t need to know where the object came from (it may even have been created after my code was compiled).
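To be fair, late-bound COM activation by ProgID does work roughly this way in .NET. Here’s a minimal sketch (the class and method names are mine, the ProgID in the usage note is just a commonly registered example, and this only resolves anything on a Windows box with the object registered):

```csharp
using System;

static class ComActivation
{
    // Try to create a COM object purely from its ProgID -- no compile-time
    // knowledge of where the implementation lives. Returns false if the
    // ProgID isn't registered (or COM isn't available on this platform).
    public static bool TryCreate(string progId, out object instance)
    {
        instance = null;
        try
        {
            Type comType = Type.GetTypeFromProgID(progId);
            if (comType == null)
                return false;                 // ProgID not registered
            instance = Activator.CreateInstance(comType);
            return true;
        }
        catch
        {
            return false;                     // COM unavailable / activation failed
        }
    }
}
```

From there you’d cast the object to the interface that forms the contract. `"Scripting.FileSystemObject"` is a ProgID that’s usually registered on Windows if you want something to test against.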
The same should apply to native code loaded at runtime. I expect to be able to pick an arbitrary DLL by name, load that code into my process, and as long as there’s a named entry point and an (often implied) contract about the signature of the function behind that entry point, everything should work.
The most obvious interop mechanisms I’m seeing take the name of the DLL being loaded as part of the binding. There may be a way to opt out of that (aside from the obvious one of writing the load-up code in C++ or COM and handling the runtime load from there), and I will keep looking. So far I’m a bit underwhelmed though.
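One escape hatch I’m aware of (a sketch, not something I’ve wired into the project yet): P/Invoke `LoadLibrary` and `GetProcAddress` yourself, then turn the raw function pointer into a delegate. The DLL name and the `Add` entry point below are hypothetical stand-ins for whatever contract you’ve agreed on:

```csharp
using System;
using System.Runtime.InteropServices;

static class DynamicLoad
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr LoadLibrary(string fileName);

    [DllImport("kernel32.dll", CharSet = CharSet.Ansi, SetLastError = true)]
    static extern IntPtr GetProcAddress(IntPtr module, string procName);

    // Delegate matching the (assumed) signature behind the entry point:
    // extern "C" int __stdcall Add(int, int);
    [UnmanagedFunctionPointer(CallingConvention.StdCall)]
    delegate int AddFn(int a, int b);

    public static int CallAdd(string dllName, int x, int y)
    {
        IntPtr module = LoadLibrary(dllName);        // DLL chosen by name at runtime
        if (module == IntPtr.Zero)
            throw new DllNotFoundException(dllName);

        IntPtr proc = GetProcAddress(module, "Add"); // named entry point
        if (proc == IntPtr.Zero)
            throw new EntryPointNotFoundException("Add");

        AddFn add = Marshal.GetDelegateForFunctionPointer<AddFn>(proc);
        return add(x, y);
    }
}
```

It’s more ceremony than a `DllImport` with a fixed name, but the DLL choice moves to runtime, which is the part I wanted.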
Slightly frustrating moment…but working through it.
Starting to put together some sandbox code to do file de-duplication scans and some archive management. I found that the Visual Studio install on my main dev machine here won’t run the wizard that creates native Win32 DLL projects. This works as expected on another machine, so I’m running a repair to see what happens. Worst case I guess I just reinstall Visual Studio. Frustrating, as this should be one of the simpler things that VS does.
Looking at interop (P/Invoke currently) and finding that I have my very old .NET and COM Interoperability book by Nathan and a few notes in more general volumes that are much newer. It’s particularly disappointing that the Apress book on C# 6.0 doesn’t seem to touch interop at all. Not a big deal but a bit disappointing…I’m guessing that interop has evolved somewhat since 2002…
I’m going to be doing more C# in the near future so I’ve been looking into ADO.NET over the weekend. I’m still not sure whether I’ll use MariaDB, SQLite or SQL Server Express for my current sandbox project but it appears that there are connectors for all three available out there.
Looks like the connection strings can be a bit ‘magic’ but no real big issues. I’m currently leaning towards MariaDB for the moment, as it has the power to meet my rather limited needs and I’ve worked a good bit with MySQL before.
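For reference, the connection-string shapes I’ve run across for the three candidates look roughly like this (server names, file names and credentials are made-up placeholders, and the exact keys vary a bit by connector):

```
MariaDB/MySQL      : Server=localhost;Database=archive_dedup;Uid=devuser;Pwd=devpass;
SQLite             : Data Source=archive_dedup.db;Version=3;
SQL Server Express : Server=.\SQLEXPRESS;Database=archive_dedup;Trusted_Connection=True;
```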
Over the next few days I expect to start pushing some data and defining the layout for my data. Once I start coding things up I’ll likely start blogging about it here.
Home projects are wrapping up (flooring done, rooms back together). Remaining items should be less time consuming.
I should have an M3D Pro 3D printer showing up early in the new year. Time then to get the store-bought device running and print out RepRap vertices to move that project forward (just need rod, various fasteners and a bed after that).
The HTC Vive is sitting in a box at the moment, but the computer to drive it is together and loaded with software. Once the area in the finished part of the basement that will allow the room-scale capabilities to be used is ready, it can go down there…then I just have to get coding on some VR stuff (choices, choices…OpenGL or Unity).
My ‘test mule’ machine is not faring well. It is an old, small Dell desktop we picked up at an estate sale years ago. It has served well for quite a few years as a target when I needed to run code on a separate system across the house network. At this point it is regularly rebooting for no good reason. I’m thinking of replacing it sometime in the next few months with a similarly configured NUC. They’re compact, decently powerful and stable. I’ll probably wind up spending a bit more money, but as a dev test box it should serve very well.
Plenty more fun things to get done. A few more home projects will keep the home front messy, but I’m looking forward to playing with the cool toys sometime soon.
I was talking shop with a colleague yesterday. The topic turned to cryptography, then to the challenges of properly implementing core cryptographic systems, and I reflected back on the Dual_EC_DRBG fiasco.
It was a pretty amazing crash for the NSA. They had been cultivating the trust of the cryptography community ever since the changes to the S-box structure of DES were found to have strengthened it against differential cryptanalysis.
I was talking with a colleague recently about floating point equality comparison. A double was being serialized, de-serialized and then compared, and the loss of precision through the process resulted in the before and after versions being not quite equal.
His initial attempts tried to reconstruct the exact value (from a textual serialization format) after de-serialization by carefully saving all available precision. This failed and I think it was the wrong approach in any case. It is nearly impossible to be sure that every last bit of a fractional floating point value is preserved without preserving the original bit pattern (and this defeats the purpose of saving a human editable form like JSON or XML).
The first discussion centered on floating point epsilon values (the least significant bit of the type in use). In this particular case, though, that wasn’t really the issue. He didn’t need the full precision of the type; his application just needed to test that the values were close enough.
I think the only viable way to implement this sort of comparison robustly is to define the tolerance at the site of the compare. Once you define the required degree of accuracy (generally you’ll be discussing base-10 digits after the decimal point) you need to check that the two values are within 1/2 of the next digit down (below and above).
This allows loss of precision due to manipulations (may be serialization, may be computation) while preserving the important parts of the comparison.
Figure out how much precision you need. You’ll wind up with
fuzz = (1/2) * 10^(-(digits + 1))
as the ‘fuzziness’ in the comparison.
If the first value is less than the second value plus fuzz and greater than the second value minus fuzz then you’ve got a good enough match. In any other case you don’t have a match.
While I wouldn’t recommend using floating point values for this sort of thing in the first place, if you’re stuck with one in your system, this approach should preserve your sanity and keep things from getting too weird…
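The comparison described above can be sketched in a few lines of C# (the class and method names are mine; `digits` means the base-10 digits after the decimal point that you actually care about):

```csharp
using System;

static class FuzzyCompare
{
    // True when a and b agree to `digits` decimal digits, allowing half
    // of the next digit down as slop: fuzz = (1/2) * 10^(-(digits + 1)).
    public static bool AlmostEqual(double a, double b, int digits)
    {
        double fuzz = 0.5 * Math.Pow(10.0, -(digits + 1));
        return a > b - fuzz && a < b + fuzz;
    }
}
```

So `AlmostEqual(0.1 + 0.2, 0.3, 6)` comes back true even though `0.1 + 0.2 == 0.3` is famously false in doubles, while values that differ in a digit you care about fail the test.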
The ‘Effective’ book on C# is being updated for C# 6.0 and is on Amazon for pre-order. I have found the books in this series to be wonderful, concise views into the idioms and best practices of various programming languages. I’m looking forward to seeing what this new volume brings to the table.
Good timing too, as I’m going to be doing a significant amount of C# over the next year or so…