Category Archives: General Technical

Hmm…and Kestrel in .NET Core

Ok…and Kestrel seems to be the .NET Core alternative to the OWIN self-hosting stack under classic .NET.

Yet another option…in my case the OWIN side is likely more interesting as I’m almost certainly going to be doing things that need interop or similar Windows-centric functionality. Kestrel is interesting as a Linux-facing option though, and may also be lower overhead in cases where a particular micro-service doesn’t need access to native capabilities.

I am also expecting to need some sort of SSL certificate to enable TLS on these links (probably not a full commercial cert, as these endpoints are likely to be exposed by IP address and not on the open web). I need to understand what is needed to deploy TLS, ideally with certificate verification on both ends using certs I’ve created myself that don’t correspond to a particular URL.

In this case I’m looking to prevent MITM attacks and to encrypt the traffic, but not to ensure much more than that. I don’t want authentication credentials to leak and I want to protect the connection (for example, for a web UI on a small ‘appliance’ that may at times be exposed to an open internet connection).
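Something like this seems to be the starting point for the home-grown certs — a minimal sketch assuming .NET Core 2.0+ / .NET Framework 4.7.2+ where `CertificateRequest` is available; the CN and lifetime below are placeholders, since the peers would verify by pinned thumbprint rather than hostname:

```csharp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

class SelfSignedCertDemo
{
    static void Main()
    {
        // Create a self-signed certificate not tied to any URL; the CN is
        // arbitrary since each peer pins the other's thumbprint instead of
        // doing normal chain/hostname validation.
        using (var rsa = RSA.Create(2048))
        {
            var req = new CertificateRequest(
                "CN=appliance-local", rsa,
                HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

            X509Certificate2 cert = req.CreateSelfSigned(
                DateTimeOffset.UtcNow.AddDays(-1),
                DateTimeOffset.UtcNow.AddYears(5));

            // Record this thumbprint on the other end of the link.
            Console.WriteLine(cert.Subject);
            Console.WriteLine(cert.Thumbprint);
        }
    }
}
```

The same `X509Certificate2` can then be handed to whatever self-hosting stack ends up terminating TLS, with a custom remote-certificate callback on each side that accepts only the pinned thumbprint.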

Mostly saving these to read/watch in more detail later. Options on top of options here…

C#: Self Hosting Web UI or WPF?

I continue to bounce back and forth between self-hosted web UI and WPF UI for implementing simple user interactions.

WPF is likely to be easier to build the right sort of thing with, but it is a bit less flexible and more ‘backward looking’. Web interfaces require a self-hosting solution if they’re going to be stand-alone, and they cross more lines of language and environment.

On balance the web UI path is better as a learning experience and, aside from a few corner cases (OpenGL, say), likely to result in a better user experience in the long run.

I’m still looking for the right self-hosting solution and UI framework solution. I’m going to have to settle on one shortly so that I can begin experimenting in the ninecrows sandbox and get a feel for the technology.

Current links of interest on the self-hosting front:

There also seem to be some hard choices between versions of the framework…classic and core in particular.

More to learn as things move forward. I expect I’ll wind up doing things with both/all before I’m done as they have different strengths. The sorts of things a WPF/WinForms/MFC UI can do in terms of digging into the system and being screen real-estate aware alone make them of great interest.

C++/CLI

Recently bought a couple of used books on C++/CLI to read. This looks like the most seamless and high performance approach to .NET/Native interop.


Both are a little old (though there isn’t much newer, really). I’m a bit concerned that Microsoft might move away from supporting this language as it must be a bit involved to keep up as new releases happen. I’m going to look at adding some interop samples using this technology to my GitHub in the near future.

Did a Little More Research on Interop During Lunch

I did a bit more research on interop topics during lunch today. I’m going to try to write up some samples using interop to work with the file information and volume information APIs that aren’t available in C# tonight.

If all goes well, I should have something to push up to GitHub as a set of code samples. I don’t expect this to be a finished product, but it should be interesting in any case.
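As a rough sketch of the sort of thing I have in mind — the `GetVolumeInformation` declaration is the usual P/Invoke shape for that API, while the buffer sizes and the sample drive letter are just placeholders:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

class VolumeInfoDemo
{
    // FILE_READ_ONLY_VOLUME flag value from winnt.h.
    const uint FileReadOnlyVolume = 0x00080000;

    // GetVolumeInformation has no direct C# equivalent, so declare it via P/Invoke.
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool GetVolumeInformation(
        string rootPathName,
        StringBuilder volumeNameBuffer, int volumeNameSize,
        out uint serialNumber, out uint maxComponentLength, out uint fileSystemFlags,
        StringBuilder fileSystemNameBuffer, int fileSystemNameSize);

    // Pure helper: decode the read-only bit from the returned flags word.
    public static bool IsReadOnlyVolume(uint flags) =>
        (flags & FileReadOnlyVolume) != 0;

    static void Main()
    {
        // Only attempt the native call when actually running on Windows.
        if (Environment.OSVersion.Platform == PlatformID.Win32NT)
        {
            var volName = new StringBuilder(261);
            var fsName = new StringBuilder(261);
            if (GetVolumeInformation(@"C:\", volName, volName.Capacity,
                    out uint serial, out uint maxLen, out uint flags,
                    fsName, fsName.Capacity))
            {
                Console.WriteLine($"{fsName} read-only: {IsReadOnlyVolume(flags)}");
            }
        }

        // The flag decoding itself is portable.
        Console.WriteLine(IsReadOnlyVolume(0x00080000));
        Console.WriteLine(IsReadOnlyVolume(0x00000000));
    }
}
```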

I also want to fix the WiX issues with my service sample and get that pushed with a working installer. I probably won’t get the guts finished but should be able to install and uninstall the thing without dev tools.

JSON options in C#…too many options

I’m getting back to sandbox projects on the home front (plenty that got shelved when things got busy) and have been looking at JSON handling in C#.

So far a quick scan of what is out there leaves me with the impression that there are many choices and no clearly dominant one.

So far the open-source Json.NET (http://www.newtonsoft.com/json) seems to be the best choice. I think I’m going that way for the time being and will look into other options down the road.

Thinking that MongoDB may be a good back end for the data storage I need for the current work at hand. JSON-centric as well so likely a good overall match.
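A minimal round-trip with Json.NET looks something like this — the `ImageRecord` class here is purely illustrative, not part of any real schema:

```csharp
using System;
using Newtonsoft.Json;

class JsonDemo
{
    // A hypothetical record to round-trip; the names are illustrative only.
    public class ImageRecord
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string[] Keywords { get; set; }
    }

    static void Main()
    {
        var rec = new ImageRecord
        {
            Id = 42,
            Title = "boojum",
            Keywords = new[] { "nuc", "test" }
        };

        // Serialize to JSON and back with Json.NET's static convenience API.
        string json = JsonConvert.SerializeObject(rec);
        var copy = JsonConvert.DeserializeObject<ImageRecord>(json);

        Console.WriteLine(json);
        Console.WriteLine(copy.Id == rec.Id && copy.Title == rec.Title);
    }
}
```

The same serialized form should drop more or less directly into a JSON-centric store like MongoDB, which is part of the appeal.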

 

Pixel-C and Pixel-XL Filesystem Support

I’m finally getting the filesystem support on my Android Pixel devices sorted out. I’ve had Samsung tablets and phones previously (until the Note 7 debacle) and they fully support exFAT. Given that this is the default for almost everything larger than about 4GB, that has been very useful.

When I got my two current Pixel-branded devices I thought that my SD card readers were failing. It is now clear that the issue is a lack of exFAT support. Particularly curious as I appear to be able to read NTFS-formatted SD cards in read-only mode.

I’m now at a point where things are workable but still rather annoying. Cards that I’ll use with my Pixels are either small, crippled, and FAT32-formatted (read/write) or NTFS and read-only. Cards I’ll use with my old Samsung tablet are exFAT-formatted and won’t work with any of the Pixel devices.

I suspect that if I rooted the tablets I could probably find a way to add exFAT filesystem support, but I’d rather not make changes at that level. I really wish Google had paid the extra bit to license exFAT for its flagship devices.

 

One major reason I’m not fond of fork/exec componentizing

In Debian Linux recently the CryptKeeper encryption package fails to properly set passwords due to a bug fix in the command-line handling of a package it runs (see this Register article for more details).

It appears that the code was tested and worked perfectly with previous versions. The tool being used had a minor bug in its command-line processing that resulted in the password being accepted even though it was placed improperly. Once the maintainers of that package fixed their bug, the string sent by CryptKeeper resulted in the letter ‘p’ being taken as the password, with the actual password being ignored and, it would appear, discarded.

It seems to me that Linux could use a universal (or nearly universal) --API switch that converts the command line and stdin/stdout/stderr to something like JSON, with extensive and aggressive validity checking on all inputs along with tightly specified outputs.

Most programmatic usage of command line tools that I’ve encountered to date makes broad assumptions about the details of the input and output formats for human usable tools. In most cases I’ve seen the checks for unexpected data are weak and porous. This generally results in tools that are fragile and often either need manual tweaking or are simply tied to a specific version of a particular distro. The alternative (less common but I’ve encountered it as well) is to force tools whose primary interaction is with humans on the command line to lock down the original output formats in ways that don’t necessarily work well for people in order to avoid breaking programs and scripts.
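A sketch of what the strict machine-facing side of such a tool might look like, here as a request handler — the field names and error strings are all invented for illustration:

```csharp
using System;
using Newtonsoft.Json.Linq;

class ApiModeSketch
{
    // Parse a machine-facing JSON request and reject anything malformed or
    // missing, instead of guessing at positional command-line arguments.
    public static string HandleRequest(string requestJson)
    {
        JObject req;
        try { req = JObject.Parse(requestJson); }
        catch (Exception) { return "{\"error\":\"malformed JSON\"}"; }

        // Require an explicit, named password field; no positional guessing.
        var password = (string)req["password"];
        if (string.IsNullOrEmpty(password))
            return "{\"error\":\"missing password\"}";

        return "{\"status\":\"ok\"}";
    }

    static void Main()
    {
        // A CryptKeeper-style caller would send named fields, so a misplaced
        // argument becomes a hard error rather than a silent 'p' password.
        Console.WriteLine(HandleRequest("{\"password\":\"s3cret\"}"));
        Console.WriteLine(HandleRequest("{\"password\":\"\"}"));
        Console.WriteLine(HandleRequest("p"));
    }
}
```

The point of the sketch is the failure mode: the bare `p` that Cryptkeeper-style argument shuffling silently accepted becomes a hard, reportable error.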

Time to Define Tables

I’ve worked with databases on and off for a long time but in general I’ve been accessing previously defined data tables or had very simple needs (DynamoDB kind of forces that).

I’m at the point in my sandbox work where I’m about to define a substantially more complex schema than I’ve used for anything in the past. I’m finding that getting a command of SQL DDL and of the various bits involved in creating MySQL users and setting permissions is taking some doing.

I still can’t seem to get at a database on a remote system (boojum, my little test machine). I am not getting a ‘no connection’ response but an authorization failure, so I expect that the identit(ies) I created on the NUC aren’t quite right. Always details to be dealt with…

I am finding that tables proliferate. Everything needs a unique ID (and auto-increment columns help enormously here). Any sort of one-to-many relationship (keywords, validation) winds up with a new table containing the source object’s key and the targets. In the past I’ve used the DB as an index for more complex storage (DICOM-3 pretty much forces this with its many 1..n fields; DynamoDB encourages this as well, with bulk information in S3).
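Sketched as MariaDB/MySQL DDL, the one-to-many pattern looks something like this (table and column names are illustrative only, not the actual schema):

```sql
-- Every entity gets a synthetic auto-increment key.
CREATE TABLE image (
    image_id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
    path      VARCHAR(1024) NOT NULL,
    PRIMARY KEY (image_id)
) ENGINE=InnoDB;

-- One-to-many relationships (keywords here) spill into their own table,
-- keyed by the source object's ID.
CREATE TABLE image_keyword (
    image_id  INT UNSIGNED NOT NULL,
    keyword   VARCHAR(255) NOT NULL,
    PRIMARY KEY (image_id, keyword),
    FOREIGN KEY (image_id) REFERENCES image (image_id)
) ENGINE=InnoDB;
```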

In the current case I really want to keep everything in the DB rather than spilling items overboard. If I get to thumbnails I’ll see how the BLOB types work. For now I expect that I’ll be OK with the 64K per-row limit on MySQL InnoDB tables. I expect that BLOB and TEXT values aren’t stored inline in the row, so they should permit ‘stretch’ operations.

Current working notes are a bit chaotic 🙂

DB Initial Sketch
Chaos in the Schema

I am sticking with the MariaDB fork of MySQL to avoid Oracle entanglements. Once the basic database definitions are in place I expect to code the front end in C# as I need to polish up that language and environment a bit at the moment. Not sure whether I’ll go with WPF or some sort of web interface for the UI yet. Got to get the innards a bit better defined first.

Looks like I was just messing up my command line for remote MySQL access. Once I got things straightened out, I could talk to the database instance on boojum from chaos. That should make things a bit cleaner, as I can keep the data in one spot and access it from any system in the house. Now I just have to finish the CREATE TABLE statements and see if they build what I need. Short digression into users and permissions first (and probably a quick look at ALTER TABLE) to see what I should be doing for remote access.
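For the record, the users-and-permissions piece for remote access looks roughly like this in MariaDB/MySQL — the user name, subnet, and schema name below are placeholders:

```sql
-- Create an identity that is allowed to connect from other hosts
-- ('%' matches any host; a subnet pattern such as '192.168.1.%' is tighter).
CREATE USER 'imgapp'@'192.168.1.%' IDENTIFIED BY 'change-me';

-- Grant only what the front end needs, on the one schema.
GRANT SELECT, INSERT, UPDATE, DELETE ON imagedb.* TO 'imgapp'@'192.168.1.%';
FLUSH PRIVILEGES;
```

The authorization failure I was seeing is exactly what a user created only as `'name'@'localhost'` produces when the connection comes in from another machine.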

Nice case for my Raspberry Pi-3

Looking very similar to my NUC (but smaller and much less expensive), this case works very well for the R-Pi that I’ll likely use to drive my RepRap if/when I get around to finishing it.

The one issue I have with this case is the limited access to the main board interfaces. It looks as if there is a dedicated ribbon-cable slot to allow access to the main external interface. The alternate interfaces are much less accessible as they’re on the top of the board, between the board and the case when fully assembled.

The other issue I have (though for my purposes it isn’t a big deal) is that micro-SD card access is limited. The card does not extend beyond the case fairing and while it can be extracted with a fingernail between the case edge and the card, I’d be reluctant to do so if I were expecting to change out cards regularly.

 

The bottom of the case is vented and there’s a thermal pad that could be used to sink the CPU heat to the case. I have skipped this as I expect to be taking the board out now and again for access to the hardware pins.
