Saturday, December 31, 2016

Windows 7 Stinks


Since Microsoft won't let me decide when Windows 10 updates get applied, I've decided to experiment with working from Windows 7 for a while. So far, here are the things that stink in Windows 7.  Windows 7 is terrible, but it's still better than Windows 10.


 1. Fresh Windows 7/SP1 installs won't update. 

 In order to get Windows Update running on a freshly imaged Windows 7 machine, you need to follow some Microsoft workarounds involving sideloading two or more KBxxx updates. Because of the very bugs you're working around, these updates freeze and will not install until after a long delay (up to an hour) from when you start installing them. To work around this insanity, it can help to stop WUAUSERV (net stop wuauserv) from an elevated command prompt, and to disable WiFi and network connections, so that Windows Update can't start a scan for updates, which puts Windows into a stupid mode where it won't install anything. Hat tip to Glen Dufke for this procedure: download the updates and then disable WiFi, install KB3020369 and reboot, then install KB3172605, and read the notes in support article KB3200747.

 2. Powershell 2.0

 I am a daily PowerShell user, it's my primary shell environment, and PowerShell 2.0 is unspeakably lame. I use PoshGIT. To use PoshGIT and most other modern stuff, you should install Windows Management Framework 4.0, which updates PowerShell to a more respectable 4.0.

 3. The old and crappy Console Host (text mode applications in Windows 7) 

 In Windows 10, since the Anniversary Update, the console host is as nice to use as any Linux console host. Most notably, copy and paste works properly between Windows apps and the shell. In the classic Windows 7 console you have to use the horribly clumsy Alt-F10+Cursor-Keys hack to paste into the shell with a keyboard shortcut. Or you can use your mouse. MOUSE. Use the MOUSE to paste, because Microsoft didn't set up a keyboard shortcut for pasting. It's hilariously awful to go back to the Windows 7 command prompt, or PowerShell in Windows 7, when you're used to a sane and useful thing like the Windows 10 console.

Concluding Rant

 So will I stay on this configuration? I believe that in spite of the things I lose when I move back to Windows 7, after almost two years on Windows 10 (and Windows 8.1), the one thing I get back is going to be worth it. I need a work machine that doesn't decide that now would be a wonderful time to install updates. It feels to me like Windows 10 does not belong to me; it belongs to Microsoft. It doesn't even notify me when it decides it wants 100% of my hard disk bandwidth. Windows 7 is not perfect in this respect either: due to bugs and other weird Windows 7 features, sometimes Microsoft's own core services will go insane, performing a local denial of service attack on its own users.

 How can Microsoft remain as user-hostile as it has clearly been in the Windows 10 era, and retain its customer base? If you want a Windows 10 machine to behave according to business-friendly and work-friendly rules, the best you can do is buy their Enterprise features and set up group policies to disable Windows updates. A recent update to Windows 10 lets you set "active hours" and keep updates from happening during those hours, but that has not worked for me. Frequently I will still get to work in the morning and be greeted by a "Windows will now reboot and finish updates" message. This has eaten hours of my time, and each time it happens, I get even more upset. Windows 10 is free, and worth the price.

 Until Microsoft grants people the right to own and fully control their own computers, I think that using Windows 10 for professional work purposes is insane. I used to worry about IT departments locking PCs down on me so that they became useless. Now Microsoft has cut the IT departments of the world out of the loop. If you don't have Windows 10 Enterprise, Microsoft is your IT department, and they've decided you're not to be trusted with something as important as the management of your own PC.

Wednesday, September 21, 2016

Delphi Features I Avoid Using And Believe Need to be Gradually Eliminated from Codebases

My guest post from L.V. didn't seem to have enough Delphi specifics for one commenter, so I thought about it and realized that what L.V. is talking about is Practices (stuff people do), not features.

But there are features in Delphi that I think are either over-used, used inappropriately, used indiscriminately, or which should almost never be used, since better alternatives almost always exist.  Time for that list. No humorous guest-posting persona for this post, sorry, just my straight opinions.

1. WITH statement

It's hardly surprising to find this one on the list, as it's one of the most controversial features in the Delphi language. I believe that it is almost always better to use a local variable with a short name, creating an unambiguous and readable statement, instead of using a WITH.  A double WITH is much more confusing than a single WITH, but all WITH statements in application layer code should be eliminated over time, as you perform other bug fix and feature work on a codebase.
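To illustrate, here is a sketch of the pattern (the class, field, and control names are hypothetical):

   // Harder to read: whose Caption is being assigned inside the WITH?
   with Customer.PrimaryAddress do
   begin
     Caption := City;                    // the form's Caption, or an address field?
     PostalCodeEdit.Text := PostalCode;
   end;

   // Clearer: a short-named local variable keeps every reference unambiguous.
   Addr := Customer.PrimaryAddress;
   Caption := Addr.City;                 // now clearly the form's Caption
   PostalCodeEdit.Text := Addr.PostalCode;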

2. DFM Inheritance

I don't mind having TApplicationForm inherit non-visually from a TApplicationBaseForm that doesn't have a DFM associated with it, but I find that maintenance and ongoing development of forms making use of DFM inheritance is problematic.  There can be baffling DFM merge issues, and it's very difficult to make changes to an upstream form and understand all the potential problems downstream, especially as a set of form inheritances grows larger.  I have even forced non-visual inheritance using an interposer class, and found that IDE stability, and ease of working with the codebase, improved.
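Here is a minimal sketch of the interposer approach I mean (the unit name and the shared behavior are hypothetical):

   unit AppForms;

   interface

   uses
     Vcl.Forms;

   type
     // Interposer: the same class name, so a form picks up the shared
     // behavior just by adding AppForms to its uses clause after Vcl.Forms.
     // No DFM inheritance is involved.
     TForm = class(Vcl.Forms.TForm)
     protected
       procedure DoShow; override;
     end;

   implementation

   procedure TForm.DoShow;
   begin
     inherited;
     // shared behavior goes here: logging, theming, and so on
   end;

   end.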

3. Frames

The problems with Frames overlap with the problems of DFM inheritance, but Frames have the additional troubling property of being hard to make visually fit and look good.  You can't tell whether a change to a control's original position in the base frame will be overridden in a descendant instance or not; you just don't know. Trying to move anything around in a frame is an exercise in frustration.  I prefer to compose parts of complex UIs at runtime instead of at designtime.
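Here is a minimal sketch of what I mean by runtime composition (the frame, panel, and field names are hypothetical):

   procedure TMainForm.FormCreate(Sender: TObject);
   begin
     // Create the frame in code and dock it into a host panel; layout
     // decisions are explicit here instead of hidden in DFM deltas.
     FCustomerFrame := TCustomerFrame.Create(Self);
     FCustomerFrame.Parent := DetailPanel;
     FCustomerFrame.Align := alClient;
   end;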

4. Visual Binding

I have had nothing but trouble with Visual Binding.  It seems that putting complex webs of things into a design-time environment is not a net win for readability, clarity, and maintainability. I would rather read completely readable code, and not deal with bindings.  Probably there are some small uses for visual binding, but I have not found them. My philosophy is to avoid it. It's a cool feature, when it works.  But the end result is as much fun as a mega-form.

5. Untyped Parameters in User Procedures or Functions

The old Pascal way of handling "void *" types (if you know C) is the untyped var parameter syntax; modern Pascal should use PByte instead, which I consider a much more modern way of working.  I believe the two are more or less equivalent in capabilities, and that Delphi still contains untyped var params for historical compatibility reasons, but unless I'm writing a TStream descendant and must overload a method that already has this signature, I prefer not to introduce any more anachronisms like that into my code.
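For illustration, here is the same hypothetical helper written both ways:

   // Old style: an untyped var parameter, where the compiler checks nothing.
   procedure FillWithZeros(var Buffer; Count: Integer);
   begin
     FillChar(Buffer, Count, 0);
   end;

   // Newer style: PByte keeps some type information, and the indexing is
   // explicit (pointer math is enabled for PByte by default since Delphi 2009).
   procedure FillWithZerosTyped(Buffer: PByte; Count: Integer);
   var
     I: Integer;
   begin
     for I := 0 to Count - 1 do
       Buffer[I] := 0;
   end;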

6. Classic Pascal File IO Procedures

Streams should long ago have replaced the use of AssignFile, Reset, Rewrite, and CloseFile.
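A minimal sketch of the difference, reading a text file line by line (the file name is hypothetical; TStreamReader lives in System.Classes):

   var
     F: TextFile;
     Line: string;
     Reader: TStreamReader;

   // Classic Pascal I/O: avoid in new code.
   AssignFile(F, 'data.txt');
   Reset(F);
   try
     while not Eof(F) do
       ReadLn(F, Line);
   finally
     CloseFile(F);
   end;

   // Stream-based equivalent:
   Reader := TStreamReader.Create('data.txt', TEncoding.UTF8);
   try
     while not Reader.EndOfStream do
       Line := Reader.ReadLine;
   finally
     Reader.Free;
   end;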

7. Unnecessary Use of Pointer Types and Operations in Application Layer Code

In low-level component code with unit tests, pointer types and operations may occasionally be justified, for example to implement your own linked list of value types that are not already implicitly by-reference. But in the application layer (form, data module) code where most Delphi shops spend 90% of their time, introducing raw pointer operations is almost always going to make me require a change if I'm doing the code review.  Delphi is a compiled, "somewhat strongly typed" language, and I'm happiest with application layer code that does not peel away the safety that the type system gives me.

8. Use of ShortString Types with Length Delimiters, in or out of Records

Perhaps in the 1980s, a Pascal file of record type, full of packed records with ShortString fields, made sense. These days, it's a defect in your code.  The problem is that once such a pattern is in your code, it's very difficult to remove.  So while an existing legacy application may contain a lot of code like that, I believe a "no more" rule has to be set up, and module by module, the unsafe and unportable stuff has to be retired, replaced, or updated.  The amount of pain this kind of thing causes in real codebases that I have seen use it is hard to overstate.

9. Use of Assignable (Non Constant) Consts

The compiler directive {$J+} in Delphi allows typed constants to be overwritten. It should never be used.
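Here is a sketch of why (the constant name is hypothetical):

   {$J+}  // legacy switch: typed constants become writeable
   const
     RetryCount: Integer = 3;

   procedure Misbehave;
   begin
     RetryCount := 10;  // compiles: the "constant" is really a global variable
   end;
   {$J-}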




Tuesday, September 13, 2016

Delphi Worst Practices, The Path to the Dark Side

Guest Post from L.V.




If you want to do the worst possible job at being a Delphi developer, and go from merely weak, to positively devastating, and you want to give your employer the greatest chance of failing completely, making your users hate your product, and going out of business, while exacting the maximum amount of pain and suffering on all around you, if you wish everyone to fear your approaching footsteps, and to be powerless to cross you, here are some startlingly effective worst practices to consider.

Many require very little effort from you, other than occasionally putting your foot down and insisting that certain things are sacred and can't be changed, or that everything is bad and must be changed immediately, no matter what the cost.   It is important that the team never sense that they have the collective ability to go around you, and reinstate optimizations that undo your careful work to make things worse.  A strict hierarchical authoritarian power structure is key to maintaining steady progress towards pessimization.

No matter how bad things are, you can always find a way to make things a little worse.   I can't claim to have invented any of these, and I believe all of these are extremely popular techniques in Delphi shops around the world, and so it seems there is great interest in doing as bad a job as possible.  If I can contribute something to the art, it will be in synthesizing all the techniques of all the pessimization masters who have come before.

Now that you have considered whether you want to go there or not, I will share my secrets.
Here is the path that leads to the dark side...

1. Ignore Lots of Exception Types in the Delphi IDE

The more exceptions you ignore, the less aware of your actual runtime behaviors you will be.  Encourage other developers to ignore exceptions.  Suppress the desire to know what is going on, and become as detached as possible from reality.  The optimum practice is to ignore only EAbort and exceptions similar to it, like the Indy disconnect exception.  So the pessimum practice is to disable break-on-exception forever, or to add a very large number of classes to the Delphi Exception Ignore list.  Also make very sure that you ignore access violations.

2. Raise Lots of Exceptions, Even for Things Which Didn't Need an Exception

This one is great, because you will annoy all developers and train them to ignore certain exception types.  Old code that uses StrToInt where it could have used StrToIntDef will eventually make developers ignore all manner of exceptions.
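For example (the edit control and the default value here are hypothetical):

   // Raises EConvertError the moment the field is blank or non-numeric:
   Port := StrToInt(PortEdit.Text);

   // Quietly falls back to a default instead, with no exception to ignore:
   Port := StrToIntDef(PortEdit.Text, 8080);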

3. Try...Ignore

This worst-practice (or anti-pattern) can cause you more grief than any other worst practice:

   try
      MaybeDoAllOrPartOfSomeThing;
   except
   end;

To be maximally evil, don't even write a comment. Make every reader guess why you felt that not even logging the exception, and not even restricting your handler to a specific sane exception type (like EAbort), was acceptable.  Make them wonder what kind of evil things lurk down there, and how much memory corruption is being silently hidden.  Dare them to remove this kludge of doom that you have imposed.
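For contrast, a minimal sketch of the sane version of that handler: catch only what you expect, and log and re-raise everything else (LogException is a hypothetical logging routine):

   try
      MaybeDoAllOrPartOfSomeThing;
   except
      on EAbort do
         ; // deliberate: EAbort means "silently cancel" by convention
      on E: Exception do
      begin
         LogException(E); // hypothetical logging routine
         raise;
      end;
   end;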

4. Make Your Debug Builds Unable to Ever Run with Range Checking or Overflow Checking On, Even If a Developer Wants to Use Them for a While


While it can be a best practice to ship your release builds with Range Checking and Overflow Checking off (because you can't predict or prevent the effect on your customer of some relatively benign thing blowing up in a release build), it can be a remarkably effective worst practice to build a giant codebase where you never bother to explicitly turn OFF range checking, overflow checking, and I/O checking in the spots where they are KNOWN to generate false positives.  In codebases where I can turn on Range Checking and Overflow Checking in my developer-machine debug builds, I often find my effectiveness at finding bugs is multiplied.  Those who want to pessimize their entire team's work will want to put such powerful tools, which could be used for good, out of reach.

Note that turning on Range Checking and Overflow Checking in Release builds could itself be a form of pessimization, because it's hard to guarantee that they won't have unknown effects.  Most of all, changing these defaults to anything other than what you've always had them at injects a massive amount of chaos, and good developers will often state that this should be avoided in release builds.  You might be able to inject this kind of random evil chaos without anyone noticing if, for example, you can arrange for builds to be done on your machine instead of on a build server.
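For the record, the sane habit that this worst practice destroys looks something like the following sketch: suppress a known false positive locally, and leave the project-wide setting alone so everyone else can keep the checks turned on.

   // Hash code that relies on deliberate integer wraparound; overflow
   // checking would fire a false positive here, so turn it off locally.
   {$IFOPT Q+}{$DEFINE OVERFLOW_WAS_ON}{$Q-}{$ENDIF}
   Hash := Hash * 31 + Ord(Ch);
   {$IFDEF OVERFLOW_WAS_ON}{$Q+}{$ENDIF}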

5.  Permit Privileged Behavior By Developers with God-Like Egos

Unlike self-organized Agile teams, where the rules apply the same to everybody, make at least one person on your team a God-Like Developer, who can do things that other developers are not allowed to do.  Ugly pointer hackery and evil kludges are okay if you're this guy, and totally unacceptable if it's anybody else.  To really fully pessimize your team and your codebase, let this guy randomly refactor anything he wants without asking anybody else's permission.  These God-Like developers can review other people's code, but don't need their own code reviewed, because they never make mistakes.


6. Don't Document Anything

This is one of the easiest ways to pessimize. It requires basically no effort from you, since all things having to do with software teams and processes will generally tend to rot on their own.  It is consequently one of the most popular forms of pessimization.  Sometimes you will need to quote the Agile Manifesto or people will accuse you of having evil motives. Quoting the Agile Manifesto will get these people to shut up.

7. Argue About Indentation

By now things are bad, and significant developer attention will be focused on improving things, undoing your careful work of Pessimization. Instead of letting the team focus on fixing core engineering mistakes and technical debt, redirect the team to consider more carefully the effects of one indentation style over another, and various formatting issues, or comment block styles.

8. Magical Unicorn Build Process, and the Voldemort Build Process

I call these special non-reproducible builds "Magical Unicorn Builds" because it is entirely possible that the one PC where the builds occur is actually the only place in the universe where the code, as it lives now in version control, will actually build.  The secrets and accidents of the entire project's history live as non-annotated, non-recorded bits of state on that PC: contents of the Registry, and contents of various folders holding component source code that is not kept in source control, which will naturally tend to be slightly different on various machines, so there will be no way to assure that a known and controlled set of input data created a traceable end product.  Lists of the tools required for the product to build will not exist; we don't need no stinking documentation.

For bonus Pessimization points, the build should not be done via a build.cmd batch script or a CI tool like FinalBuilder, but should instead require a bunch of Arcane and Undocumented actions performed Manually by the High Priest of the Dark Art of Building the Product.  In such a build, we may in fact get all the way to the Voldemort Build.  The Voldemort Build is a secret known only to one developer, who we will call Voldemort. Voldemort knows arcane and terrible things that would make you weep, which must never be written down, or shared at all.  Only Voldemort knows the ultimate price of his own power, and he is willing to take any action to protect his own interests.

If you do all of these things, you may be very near being as bad as it is possible to be, and may become a Dark Lord some day.  It will take some hard work, but I'm sure you can do it. Go get 'em, tiger.

Please share your own worst practices in the comment box.  Together, we can rule the Galaxy.




Tuesday, August 30, 2016

Nexus Quality Suite: Why Profiling and Checking Your Application for Leaks is Essential (Part 1 of a review of Nexus Quality Suite 1.60)

I've been using and experimenting with Nexus Quality Suite on and off for the past 9 months, and I've been meaning to write up a blog post about it.  The trouble with reviewing this software suite is that it contains so much stuff that I can only skim the surface.  So I think I'll present it in small, meaningful, task-oriented mini-reviews.  Initially I was running the tools in this suite on an extremely large Delphi system.  While it's definitely useful for very large systems, I found it difficult to explain that usefulness using that large application.

So I've decided to keep my real world focus in reviewing this tool, but I'm picking a bit of my own personal code to profile and test.  I'm going to run Nexus Quality Suite's tools against a little application I first wrote in about 1996, that is in my toolkit of "system admin and developer-operations" tools.   Here's what it looks like:


It can ping any number of hosts, from one to hundreds. When any of those hosts goes offline (does not respond to ICMP ping), or the DNS resolver stops resolving, this little tool can beep (for in-office monitoring) or send an email (which can alert me even when I'm out of the office).  But this tool has always been slow, slow, slow.  Since I add additional configurable sleep time between its runs, I've never worried about its performance, but I recently had a use for this tool again, so I dusted off the source code, added a few little things, and recompiled it in Delphi 10.1 Berlin. I even found a missed "Unicode port" bug where I had forced a cast to AnsiString over a UnicodeString in a way that actually resulted in sending Unicode bytes into an ANSI Windows API. Bad Warren! No cookie for you!  My only excuse is that I wrote the code in question in 1996, in Delphi 2, and simply overlooked it when porting this code to Unicode Delphi.  Now back to my review...

Anyway, back to the performance profiling tools.  The latest version, Nexus Quality Suite 1.60, supports both 32 bit and 64 bit programs, but I would recommend profiling the 32 bit builds of your applications, as they are probably easier to profile.  For those cases where you really need to profile 64 bit code, now you can.  The NQS installer installs a group of items in your Tools menu.  Be aware that certain Delphi versions have a bug, which has a workaround available, and the installer for Nexus Quality Suite actually warns you about it. That is good customer service right there.  Good job, Nexus, and thanks, Andreas Hausladen.

Here's the installer warning. I have XE4, XE8, and 10.1 Berlin on my computer right now, and this is what I saw:


After installation, here are the menu items. There are too many tools in here to cover them all in one review, but I'm going to quickly show one application run through two of the tools.


The first tool in this review is brand new, I think. The Block Timer is a new profiler based on the other profiler tools, but with some new capabilities.  I asked support and was told that more documentation is coming soon. The Block Timer joins its partner, the classic Method Timer, in providing some pretty great time-based profiling capabilities for your Delphi applications.  Here is a summary of the features of the new Block Timer compared to the existing Method Timer and Line Timer profilers:


1. The Block Timer is thread aware, and can break information down into thread-by-thread values, whereas all times are combined across threads in the other profilers.

2. The Block Timer can accurately report information about time spent in recursive methods.

3. All that extra bookkeeping makes the overhead of running this profiler a bit higher.

4. There is no dynamic profiling in this one. You lose the trigger feature from the Line Timer, which is an important feature; it's worth switching to the Line Timer when you need triggers.

So far it seems to me that in smaller applications, with fewer procedures selected for profiling, the most intensive technique (the Block Timer) produces the most interesting results. The larger the application, and the larger the cross-section of the application's methods I want to test, the more useful the classic lower-overhead Method Timer and Line Timer profilers become.

Configuring your application to work with this or other profiler tools is pretty consistent; the same steps are necessary for this tool as for any other sampling profiler or runtime analyzer. Turn on TD32 debug symbols from Project Options (in the Linker tab in older versions, or under Debug Information in the newer ones, according to the docs).

Run the tool from the Tools menu.  Note that on Delphi XE through XE6 it's a good idea to do a full rebuild before you click the Tools menu item, as Delphi doesn't rebuild the target for you on those versions.

You click one tool, and the first time you do, you will probably want to do a bit of configuration. Each tool requires slightly different configuration.  It is NOT a good idea, in my opinion, to profile ALL of any non-trivial application: first, because you're asking a lot of the NQS tool, and second, because even if the tool can successfully gather information on 10 or 20 thousand methods, you probably can't do much with the results.  I recommend doing a little searching and probing to find some routines that matter, and including those.  The user interface is reminiscent of Outlook 2000 for most of the tools.  In the case of the Block Timer and Method Timer, you use the Routines icon, which for some few releases has included a nice Search feature, which I think I requested, and which I'm gratified to see in there.  Because my app is all about the Ping, I'm looking for the Ping methods; I want to know what they're up to...





After searching and selecting the routines, I right click and choose "Enable Tracking for Selected" methods. Then I click the green triangle "play" icon to make my application-under-test start execution.  In a small application you could perhaps select everything, but as I have learned from much experimentation, it's really better to spend a bit of time searching for methods you suspect are relevant and enable a dozen or two dozen of those. Then drill in, and enable further layers of the code as necessary, to get a clear picture of your system's behavior.

After my program has executed long enough to get a reasonable sample, in my case, just over 5 minutes, I shut it down, and then the timing analysis results are shown:


You can also see a bit of a trend of CPU usage by your program, in total, which can be really interesting, because you might want to know "what is the program doing during these bursts of CPU activity?".



A nice built-in feature: if you have configured your source search path in the NQS project options, you can just double click a line of interest and see the code:


If the NQS tools don't show things in the font you wish, you can change the fonts they use; there are individually selectable fonts, and I change ALL of them to Consolas because it's the one true Code Editor font.  If you like the Raize font and you have it around, you could pick that one.  Courier New is more to some other people's taste. If you happen to want Comic Sans, well, you're drunk, go home.



So now I want to jump from Tool to Insight.  The reason tools like this are great is the moment when the insight clicks in your head. Today I just saw this line and realized: ResolveAddress is a function, and because parentheses are not mandatory in Pascal method invocation, the code here looks like a simple variable or property check, but it's actually a very expensive call.  Do I really need to repeat the Resolve on each ping, or could my tool just periodically check that DNS resolution is still working, cache the resolved value, and do multiple ICMP pings to the IP address? In my case, I think I'm wasting a lot of cycles, loading down my company or customer site's DNS service unnecessarily, and generating a bit of wasteful network traffic.  In my next version, beyond making my tool say 10% more CPU efficient and 10% more network efficient, I might also make it a bit more configurable: say, let the user configure how often to check that DNS resolution for my important host is working.


I also think I should rewrite the code above so that it's clear that it is not just a value check but an actual function invocation.  I really think I need to rewrite a lot of the internals of TICMP.
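Here's a hedged sketch of the direction I have in mind; ResolveAddress is the real method from my tool, but the caching fields and interval are hypothetical (SecondsBetween is from System.DateUtils):

   // Only re-resolve DNS periodically; ping the cached address otherwise.
   if SecondsBetween(Now, FLastResolveTime) >= FResolveIntervalSecs then
   begin
     FCachedAddress := ResolveAddress();  // parentheses make the call obvious
     FLastResolveTime := Now;
   end;
   // ... then ping FCachedAddress instead of re-resolving on every ICMP echo.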

But what else could be wrong with my code, other than being wasteful? How about memory leaks?  So I am now going to switch to Code Watch.  It took only a few minutes to try out, and I found that although my background worker thread terminates, it is never freed, and I have a memory leak.  This tool finds the problem and reports the source line. Additionally, it found some API failures that I may or may not have been aware of, and a Win32 resource (a thread handle) that was leaked.  This is awesome.



I'm going to wrap up now. I hope that all the above impresses you, because it sure impresses me.

Before I wrap up, I'll briefly compare this option to your only other real option for this kind of tool.  SmartBear's AQTime suite can do many of the same things that Nexus Quality Suite can do, but Nexus Quality Suite can actually do lots of things that the AQTime suite can't.  AQTime is more expensive, at $599 with a very restrictive named-single-user license, and a nasty activation and intrusive anti-piracy copy protection system that I very much dislike, because it won't let me run with a single user license inside a VM.  The copy protection actually runs a background Windows service, which detects all kinds of things like virtual machine use, and disallows program operation inside a VM.  And the IDE integration of AQTime just crashed on me the last couple of times I used it. I reported these crashes, and over several releases, the crashes never got fixed.  Sayonara, AQTime.

So what's the price for NQS?  At the promotional sale price of $226 USD ($300 AUD), and with no intrusive copy protection that treats me as a thief, I have no problem recommending that EVERY Delphi developer and Delphi-using company buy this suite. There are lots of tools, and they work really well. If I had to complain about something, it's that the documentation needs some further work, but they are working on that.  The product works, and when I find a problem or have a question, the technical support team is great.  The price is going up soon, so I recommend grabbing this while it's on sale.

I am planning to write some further review articles covering this suite in more depth. In particular, I believe the automated GUI testing features in NQS deserve their own separate review, and I think there are many more profiling techniques available to tease out very complex runtime problems in your system: not JUST to get the data to help make your program faster, or stop it leaking memory, but also to understand complex behaviors by gathering runtime data that lets you see your program running.

In the past year, the amount of new stuff that has been delivered in NQS is truly astounding. 64 bit support is new. I think this whole extra set of profiling tools is new.  I tested NQS on an extremely large application where I work; the product is over 5 million lines of Delphi code, including all the in-house and third party component libraries, all the main forms and data modules, and other code.  In an earlier version I was able to find a crash inside one of the NQS tools; I sent reproduction information to Nexus, and in the next release the crash was fixed.  That's good customer service.


NQS is a tool that deserves a spot in your toolbelt too.

Full Disclosure: I received a complimentary review copy of this product, but the opinion above is 100% my own, and I don't write good reviews for every product I receive a license for; in fact, quite the opposite: if I see something I dislike or can't use, I'll say so. I'm a working coder, and I have no time for weak tools.  I have recommended that my boss buy multiple copies of this tool suite at work, where I believe it would be extremely useful.





Thursday, July 7, 2016

How to Hire the Right People? I have NO IDEA!

I have seen a lot of articles on the interwebs from frustrated job-seekers who say over and over that hiring is broken.

Where I work, I am interviewing candidates who have recently graduated from university, for a Junior Software Developer position with a focus on Web/JavaScript/HTML5.  Consequently, I have been thinking a lot about how we in the software industry interview and hire people, because I have been interviewing people and, I think, have moved past the need to haze candidates.

 I was not subjected to hazing rituals when they hired me for my current gig. When I was hired, I did not write any technical exam; the interview was verbal, though the company had a written exam, which it would use when it felt there was some question about a candidate's abilities.  I did bring in some code running on a laptop that I could show, which did some interesting stuff, and which was as close to "proving" I can code as I could think of.  I think ideally, a personal project you have spent two or three weeks on should be enough to demonstrate ability.  But there have to be alternatives, and I will get into those below.  If we're going to get rid of subjectivity, we need to replace it with something objective.

Hiring, like most management decisions, is in the end always going to be fairly subjective, and it's an area of subjective business decision making that I think is very widely done poorly. I consider myself very poor at it, but I believe I'm getting better.  I hope to improve by being both broader in my search for evidence, and more focused on objective, hard-to-fake data.

The short version of this blog post works out to this:

I am in favor of two to four hour take-home coding exercises, and I am against two-week trial projects.  


Peppering Candidates with Random Technical Questions Is Not Working

I agree with the critics of our modern whiteboard and non-whiteboard technical hazing rituals.  

By treating all candidates the same, and asking the same barrage of questions, we hope to map a candidate's knowledge, and some will even claim that this approach is "rational" or "scientific" or "impartial". It's not. Because people are not bots, and technology is not as complex as you think it is; it's far more complex than you think it is.

Here's the problem with technical knowledge: it's not linear but fractal in complexity, like the Koch Curve; the closer you look, the more detail is generated, and there is actually no end to the complexity.  If you don't even know what I mean by that, watch this awesome talk by K Lars Lohn and then come back.  If that talk doesn't give you a reason why you should be going to technical conferences, I don't know what to do to convince you further.  There, now, I'm a thought leader.

Now back to interviews.  If an interviewer is sufficiently intelligent, I think the interviewer should start by determining, from the resume and from any phone screens, the areas where the candidate expresses some interest, experience, and ability, and then talk as openly, and with as much good will and personal charm, as possible.  In recent weeks, I have watched people as their anxiety goes down, and I notice that what you can learn from someone who believes you are not a jerk is much greater than from someone who has their fence up.  This is a poker game where we lose if we keep our poker faces.  This interview game is a game where the best move is to fold, and show your cards.  This is what I'm looking for. I saw some of what I'm looking for in your resume. I see you mention here that you have tried Scrum and Kanban. What did you find worked and didn't work on your teams when you did those things?

Let's talk about how teams work.  Let's talk about how compilers compile, how the JVM runs your code, how a statically typed language helps teams ship.  How a unit test can help you not break things, and is doubly important in a language like JavaScript where there is no compiler, and where consequently useful forms of static analysis may be impossible.  Let's talk about the recent trend towards languages which can be verified to be correct in some respect, like D or Rust.  Let's talk about Functional programming.  Of the Junior programmers I'm interviewing, very few have ever played with Rust or D, or F#, or Scala.  Very few can tell me about interrupt handling inside the Linux kernel, or about safe concurrency models for web-scale transaction processing, or about the differences between two transaction settings in MS SQL.

So fine.  Let's find SOMETHING you love.  Animation? Awesome.  Games? Awesome.     Now we will dig into your own interests, and find out what you've done that we can see evidence of.

Don't I just sound so avant-garde? Trust me, I'm not.  I'm probably going to ask Juniors and Intermediates if a Stack is LIFO or FIFO.  Then I ask them: when you walk into McDonald's and wait in line to order a Big Mac, is that line of customers a Stack, or a Queue?  This question might be a bit too easy in England, where a line-up is actually called a Queue, but in Canada, I find that people who crammed the LIFO/FIFO part of it can't reason about it, and thus some conceptual wiring is missing in their heads, wiring that I can't quite account for.  My mental picture of a Stack is something you might remember from restaurants, if you, like me, are of a certain age:


I ask about stacks and queues not because you need to know them every day when you work on my team, but because I have a distressing feeling that candidates can graduate by simply cramming and collaborating on coding projects, and can manage to retain very little of the knowledge platform that their degree could have given them.  Which data structure would help me reverse the order of items in a list easily: a stack, or a queue?  The important thing about my question isn't whether you could google it; it's how adept you are at thinking about systems built of large amounts of software and hardware.
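For what it's worth, here is a hedged Delphi sketch of the answer, using System.Generics.Collections:

   var
     Stack: TStack<Integer>;
     N: Integer;
   begin
     Stack := TStack<Integer>.Create;
     try
       for N := 1 to 3 do
         Stack.Push(N);            // push 1, 2, 3
       while Stack.Count > 0 do
         Write(Stack.Pop, ' ');    // pops 3 2 1: LIFO reverses the order
     finally
       Stack.Free;
     end;
   end;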

I believe that a working model of a smaller domain contributes to, and correlates well with, the reasoning skill you possess in the larger domain.  The human brain, confronted with systems composed of parts it does not understand, tends to ascribe to others the agency for fixing and changing those systems.  When an engineer who knows how a system works understands the fundamentals, she will, I hope, be able to begin picking complex problems apart, a process I call bisecting, until she can find individual smaller problems which can be solved.  It is these bisectors of complexity that I search for when I interview.  I am looking for the developer who doesn't even know how to do this yet, but who believes she can do it, and who will keep trying until she does.  Possessed of reasoning skills and a strong set of engineering fundamentals, she is apt to succeed.

Even candidates who absorbed everything their school offered them will still need a lot of additional skills and need to learn a lot of tools.   But if you are not a learner, a sponge for knowledge in university, an organizer of systems and ideas, a bisector of problems, what rational evidence do I have that things will be different in your work life?  If you can't tell me how to troubleshoot your mom's internet connection, I'm not going to believe you can understand a Healthcare Information Systems environment.  

I recently interviewed a candidate with a Masters Degree in Computer Engineering, who I hope was simply having trouble because English was a second language.  Several days after the interview, I am wondering if I simply made the candidate so anxious and flustered that I actually caused the interview's dismal result. Whether or not that happened in this case, it's critical that we interviewers turn our dreadful critical gaze upon ourselves, find the sub-par elements of our practices, and fix them.

A good interviewer needs to set candidates at ease.  When I see candidates smiling and laughing, and joking in an interview, I am happy.  I know that I'm talking with the real person, and that we can figure out what will and will not work with this candidate within this team.

I am not going to stop asking semi-random factual questions, but I am going to give candidates fair notice. I happen to like the little thing on Reddit where people ask you to "ELI5": explain it like I'm five.  When you know something cold, you can explain it to a five year old.  This is a new knowledge-sharing phenomenon that originates with millennials.  If you're 21 right now, I'm old enough to be your dad, and then some.  Unlike some people, I think the world is going to be fine when the millennials take over and we're all retired.  I'm cool.

So why do I ask what DNS and DHCP are, when you could google that, and when those seem more like questions for an IT/Network-admin than for a Developer role? The argument that you can google what you don't know falls down at the point where you don't google because you're facing unknown unknowns.   Design decision mistakes are a common after-effect of unknown unknowns.  I make design decision mistakes all the time. We all do.  We do not understand the domain in which we are engineering well enough, and we do not even know what it is that we do not know. This is the unknown unknown I speak of.  I am looking for engineers who are wary, meta-cognitive, who build themselves and others up.  So let's get to my hire/no-hire criteria, and see if you agree or disagree with them.

Cardinal "Hire" Qualities (with profuse thanks to Joel Spolsky)

I want to hire someone who is SMART and CURIOUS, who GUARDS the team that GETS THINGS DONE, and WHO IS NOT A JERK.  I have grouped and expanded things in a way that makes sense to me but I freely admit that I stole almost all of this from Joel Spolsky. Thanks, man.

SMART + CURIOUS:  I am looking for evidence that you are a passionate, intelligent geek who likes to write code.  You have a deep and abiding interest in some (but usually not all) areas of computers, software development, and technology.  If I ask you how a CPU's level one and level two caches work, and you don't know, that's OK, as long as you can answer the question "tell me about something that you built recently on your own time that you didn't have to build", or "tell me about some language or operating system or tool that you're experimenting with".

GUARDS + GETS THINGS DONE:   You're not just a member of a team that shipped, but a member of teams that would not have shipped without you.   Your team didn't know about version control? You taught them.  Your team didn't know about continuous integration? You added it to their practices. Your team didn't understand the zen of decoupling or the zen of test? You taught it. You modeled the practices that made your team get stuff done.  When you saw things that were bullshit, that would sap the motivation of the team to GET THINGS DONE, you faced the boss and spoke up. You, my friend, are the guardian of the customer's happiness, the guardian of the product's marketplace success, and the keeper of the flame.  Sometimes being that guardian means NOT GETTING (the wrong) THINGS DONE especially if it means doing them "wrong" just so they can be done "fast".  Long term trends that slip under the radar and that are under-valued in agile/scrum teams, are things you like to bring up at retrospectives. 

NOT A JERK:  You defuse tense situations. You don't add gasoline to open flame.  You call people out privately, and you praise people publicly.  You absorb blame. You deflect praise.   You admit when you failed to do any of the above, and resolve to do better when you don't live up to your own internal high moral standards.   You believe you can be a great engineer while valuing different people who have different communication styles, cultures, languages, and you think that the team's differences can become sources of strength, and when difficulty and division is spreading, you find ways to unify the team and give it a focus, a technical engineering focus, with a strong shared ethical principle.  You are a curator of good company culture.

But let's be honest about the above. The above is the person I'm trying very hard to be.  I'm trying to hire people who are trying to do some of the things I try to do. 

My questions for you guys:

  • How Do you Find Out Real Stuff about Candidates when you are conducting an interview?
  • What do you want to know when you hire or when you are seeking a job?  
    • As a candidate, do you ask who you would report to?  What do you hope to learn?
    • How do you feel about the number of people in the room? Do you think it's a better sign when you are interviewed by one person, or do you think it's better when you're interviewed by three or four people?
    • Are there any "shibboleth" questions you have as a candidate?  What do you want to find out with them? Even if you don't want to state your question directly, what are you trying to figure out?  I don't have a specific question, but if I see signs of aggression, arrogance, or naked exercise of rank or privilege, I quietly note it to myself, and decline further interactions with a company.  One thing you certainly can't fix in a company is the culture of its leaders.
  • When you are being interviewed, how should people approach you to find out the most accurate picture of your strengths and weaknesses?

I'd like to open the floor to a discussion now, let's keep it civil. Thanks.









Wednesday, June 1, 2016

Survey Results for the First Annual Delphi Code Monkey Survey



There were 373 respondents but the statistics shown here only reflect a portion of that, because SurveyMonkey wants $25 US from me to give me fully detailed results. Given that the final numbers are unlikely to be much different from these, I'm going to leave these as they are.  30% of respondents left the "other tools" question blank, which is interesting.














Some interesting responses from the "other" category:

* Some people were in professional categories other than the ones described, such as "retired".

* Other commonly stated worries included financial/pay level concerns, and concerns about whether they can afford to keep up with buying new versions of Delphi, or if they can compete with cheaper contractors, perhaps offshore.


I was pleased to see that although the experience meter tips towards "old-timers", there are some inexperienced and only slightly experienced readers.  If anyone wants to suggest beginner topics they would like to see me cover, please fire some comments up in the comment box.

The next time I run a survey it won't be on SurveyMonkey; it will be my own homebrew PHP script survey.  Thanks everyone!


Saturday, May 21, 2016

Delphi Programmer Thinks about the Go Programming Language and Mandatory Source Code Organization

If you follow one of the usual tutorials for Go programming, it will start by dumping a load of things you have to do on you.  This is perhaps something that you, as a long-time Delphi geek, have become inured to in your own environment.  Let us imagine a developer who has been given access to a source code repository or server, perhaps a big Subversion server, and has no familiarity with the Delphi codebase at CompanyX, where CompanyX is basically every Delphi-using company ever.  Let's make a quick list of the first tasks our developer would face:

  •  Setting up a working copy of source code so it builds, and so the forms open up without errors due to missing components.

  • Associating package set X required to build version Y of product Z.

  • Setting up library paths that might be completely undocumented.

  • Individual things done at company X, like mapping a fake drive letter with SUBST, or setting up an environment variable COMPANYX to point at some shared folder location.

  • At some companies they will just look at you blankly if you ask "can you build this entire product and every part of it from the command line on every developer's PC"?  Other companies have exactly ONE computer (MysticalUnicorn.yourcompany.com) on which this feat is frequently possible.    Still others (the sane ones) have made the process so unspectacular, and merely reliable that they think the ones who gave you the blank look just haven't realized how insane they are yet.
  • At some companies it might be considered acceptable if the build scripts and projects and sources ASSUME you will always check your code out to C:\COMPANYX. When you want to have a second branch you simply clone and copy a tiny little 120 gigabyte VM and fire that up.

Has any of that ever seemed insane to you? It does to me.   And so when I look at new languages one of the things that I look for is if the problems above have been thought about and resolved in that language and its related tools, including its build system, if it has one, and its module system.

Go has been known to me for some years as a famously opinionated language, characterized by the removal of features that its designers felt were problematic in C++:

  • There are no exceptions in Go, only error returns, and panics.

  • There are no generics in Go.

  • There is no classic object oriented programming with inheritance; there is only composition, and there are only Interfaces. There are no base classes (because there is no inheritance).

  • The module structure is pretty much mandatory.  Here's me starting a brand new Go project from the command line; what is happening should be pretty clear to most geeks:

~/work> export GOPATH=~/work
~/work> export PATH=$PATH:$GOPATH/bin
~/work> mkdir -p $GOPATH/src/github.com/wpostma/hello
~/work> cd $GOPATH/src/github.com/wpostma/hello
~/work/src/github.com/wpostma/hello> vi hello.go 



package main

import "fmt"

func main() {
        fmt.Printf("Hello, world.\n")
}

~/work/src/github.com/> go install github.com/wpostma/hello
~/work/src/github.com/> hello
Hello, world.
 
What is the thinking process that goes into designing a module system with the following structure?





I think the above has the benefit of being about as nice a structure as I could imagine.  The folder names above tell me even where on Github I might find this project.   Source code is now globally unique and mapped by these conventions so that I know where to find any public code I want.  If I want to use gitlab.com or bitbucket, or if it was stored on a private server inside my company named gitlab.mycompany.com, I would move my code into different folders to make that choice clear.  For a language which is intended to be used in large systems, it's an appropriate design choice.  Let's contrast this with Perl or Python where the intended use starts with one to ten line scripts that are basically used for any kind of little task you can imagine, and where this kind of ceremony would be stupid.

I have worked in enough large codebases in Delphi to know that whatever form of organization is accepted, it will inevitably become horribly complicated.

Let's briefly discuss the forms of folder/directory organization one might try in Delphi:

* Ball of Mud: Everything for one application is in one directory, or some small number of directories.  It is extremely common that one directory contains 90% of the files that are not third party components, and that directories are only used to hold files not written here.  No sensible use of directories is made; the source is all in one directory with 10K files in it.  Usually a ball-of-mud folder structure goes nicely with ball-of-mud source code organization inside your code files. That form with 5K lines of untestable business logic mashed into it? That "controller" object that is more of a "god" object, directly and tightly referencing everything else via USES statements? Ball of mud.

* MVC or MVVM: Views in their own folder. Models in their own folder. Controllers in their own folder.  Additional folders for shared areas, and per application area.  I've heard that this is possible, but I've never seen a Delphi codebase organized according to MVC.  Ideally, if you're going to do this, you also don't have your Views reference your Controllers, or even have access to the source folder where the Controllers live.  Your Models ideally don't reference anything; they're pure model classes and don't take dependencies on anything.

* Aspirational:    This is the most common condition of Delphi codebases.  There is some desire to be organized and some effort has been made, but it is fighting an uphill battle that may be unwinnable, because the barn door was already opened, and that cow of accidental complexity is already out and munching happily in your oat field.    You have a desired modular approach and it's expressed in your code, but every unit USES 100 other units, and your dependency graph looks like something a cat coughed up.

So, given that I have seen large systems get like the above, I have a lot of sympathy for languages like C#, where at least you can get your IDE and tools to complain when you break the rules, and even more sympathy for Java, where namespaces are required on classes, and where the classes must live in directories which are named and hierarchically ordered.  In Go we have named modules, and the modules contain functions and can define interfaces, but they're not really Classes as in Java; still, the idea of order and organization has been preserved as important.  In Delphi we have the ability to use unit prefixes, which are weaker than true namespaces but still potentially useful, yet most Delphi code I have seen does not attempt to adopt them.  It seems to me that having a codebase that uses unit prefixes, with source organized into matching folders, is a worthy future goal for a Clean Delphi Codebase, but existing legacy codebases are all we have, so getting there is not something I'm going to hold my breath on.  One has to have practical, achievable plans, and not tilt at windmills.
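A hedged sketch of what a prefixed unit might look like (the company, product, and layer names are hypothetical):

   unit CompanyX.ProductZ.Model.Customer;

   interface

   type
     // A pure model class: the dotted unit prefix tells you which company,
     // product, and layer it belongs to, much like a Java package name or
     // a Go import path.
     TCustomer = class
     end;

   implementation

   end.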

My first reaction to Go's requirement to use a fixed structure was predictably the same reaction I had when I first realized the horrors of the Forced Organization that Java imposed on me when I first tried it in 1996.  Now, twenty years later, I think we can say that Java's designers were right.  Java has proven to be especially useful in constructing extremely large systems.

The Go package dependency fetching system (go get X) works precisely because of this forced organization, and it's all very well thought out.  There's NO reason a clever Delphi programmer couldn't learn the lessons of how GOPATH, go get, and go install work, and use them to fashion a guaranteed well organized, maintainable, and clean Delphi codebase, incrementally, by a phased approach.

You don't gain much if you close the barn door after the cow's got out, and you can't stop everything and rewrite, but if you can build some tools to help you tame accidental complexity gradually, you can restore order, over time, while you work on a system.

What goals might you start with?  I'm not going to tell you.  All I'm going to do is say that if your brain lives in a box that has a begin at the beginning, and an end at the end, and you can't read and think outside that box, you're sadly limiting yourself as a developer.  Becoming a better developer (in any language) requires what the old-timers even older than me called a "Systems Approach": a view of what you build, and of your project and its goals, that is larger than your daily work, longer in scope than whatever form of agile "sprints" you're doing, and which has a sustainable, high quality engineering methodology behind it.

You can't build that kind of mentality in at the language level (in Go or Java or Pascal), but I think it does help to set the bar progressively higher where you can, so that once code becomes cleaner and more maintainable, there is at least the potential to detect when someone has made things worse.

Thus far we have seen many programmers throw up their hands at the 5 million line big-ball-of-mud projects and consider rewriting from the beginning.  My feeling is that the bad patterns in your brain are still there, and whether you rewrite it all in the same language or a different one, you're going to make all those mistakes again, and some new ones, unless you start learning ways to approach system design that promote clean, decoupled programming.  Studying and research phases are required. Do not race to reimplement anything, either in Pascal or any other language.  Spend time and sharpen your sword. And remember Don Quixote.















Saturday, May 14, 2016

Completely Anonymous Delphi Code Monkey Reader Survey 2016

(update: Poll closed! Link removed)

I have put a completely optional, completely anonymous survey on SurveyMonkey. The questions are completely general and will help me get an idea of the readers who visit, and thus what you might enjoy hearing about; it may of course be fun for you to see the answers as well.

I will share all results here on the blog after the survey closes, at the end of this month.
Here is a preview of all six questions on the survey. Please do not enter any email addresses or personal information into the comment boxes where "other" answer boxes exist, so I don't have to spend time deleting it all before sharing the results here on the blog. The poll is completely optional, the categories/answers are fairly general/vague, and everybody will see the total answers.  If any answers to questions 5 and 6 are repeated frequently, I'll publish those as well; otherwise, the published results will only specify the number of people who answered "other" to 5 and 6.  I really don't know what all the possible "#1 worries" might be, and I have a feeling that you all might enjoy seeing these answers. Please be as general and brief as possible; you only get 40 characters.






Saturday, April 30, 2016

Writing an IDE Expert (Toronto Delphi User Group, April 2016)

Meeting notes including links to sample code for writing your own IDE wizards is over on the Toronto Delphi user group site:

http://www.tdug.com/2016/04/april-meeting-follow-up-2/

That article has links to some open source repositories with my starter expert DLL and wizard BPL code samples.

Sample expert and wizard repositories on bitbucket:

https://bitbucket.org/wpostma/helloworld_delphi_expert/src/master/ (git)

https://bitbucket.org/wpostma/helloworld_delphi_wizard/src/master/ (git)

Sunday, April 24, 2016

Patterns in the History of Computing: The Birth, Life and Death of the Tech Empire

As a student of history and a geek deeply interested in computers, Computer History is a personal passion of mine.

I am particularly interested in unlikely upstart success stories, most of which have a predictable arc:


  • A founder has an idea which is considered ridiculous to whatever industry he or she plans to disrupt.
  • A founder executes a minimum viable product, and iterates. 
  • Building an ever growing product line, the company flourishes, expands, and reaches a point I will call the Apogee, the highest point in an orbit.
  • Someone else has an idea which is going to disrupt the founder's business. The founder ignores this disruptive change and continues on the original plan.
  • The company, after realizing too late that a change in the market is afoot, eventually dies or is acquired.
  • We fondly remember the company and its founders, who gave us so many pivotal or important technologies; the company itself is now all but gone.
I think anybody here can list 100 of these, but today I'd like to talk about DEC, and Ken Olsen, and do a brief retrospective on his accomplishments, his brilliance, and his critical mistake.

What do we owe to DEC and Ken Olsen? The original internet machines built by Bolt Beranek and Newman were built around DEC hardware modules. The ultimate success of Ethernet networking was due to collaboration between Xerox and DEC. Xerox could be another example of a failed company, but rather than dying, they're merely a niche imaging company instead of the godfathers of computing. The idea of owning your own computer, and of the computer being used directly by individual operators, a key element of Personal Computing, was first made possible by small DEC machines that were not even called "computers" in the earliest days, because the term was too strongly associated with the priestly caste of IBM mainframe programmers in their glass-walled temples. And yet Olsen's failures of vision were twofold. First, he failed to move DEC towards RISC technology fast enough to realize the architectural benefits of RISC, benefits which have informed subsequent CISC architecture designs; even if pure RISC itself is dead, the improvements in silicon ULSI design and fabrication that RISC permitted have lived on. Second, he famously derided the idea that personal computers, of the kind Microsoft wanted to see proliferate, would eat DEC's entire cake, killing the VAX and the PDP-11, and almost every 1970s mainframe and minicomputer company.

What is ironic to me is that DEC became the very thing it was originally intended to be an alternative to. Today's developers would not see much distinction between an IBM System/360 and a VAX 11/780. Both are dinosaurs, artifacts.

I actually took a whole term course in 1990, not that long ago, on VAX assembly language. What the hell was the University of Western Ontario thinking when it set up my curriculum? VAX assembly language? Yet I'm happy I learned it. The VAX architecture was and is beautiful. The VMS operating system was beautiful. Dave Cutler, the Microsoft alpha geek (ha, did you get that pun?) behind Windows NT, basically rewrote VMS, and it's running on your computer today: first it was called NT OS/2, then Windows NT, later Windows XP, and today it's called Windows 10. It's the same kernel, and its architecture owes a lot to VMS. Like VMS on the VAX, Windows is not the only operating system that runs on the PC architecture. Unlike DEC, Microsoft at one point in its life made a lot of its money selling software. What would a Microsoft that makes most of its money selling cloud and SaaS, plus enterprise platforms and tools, look like? We're about to find out.

Microsoft in 2016 is at the same point that DEC was in 1988. You can see Microsoft hosting huge events like Build 2016. They have money, they have influence, and they have developer mindshare everywhere except on mobile. They have a brilliant CEO who, like Microsoft's founder, is also a competent technologist. And they understand that without internal change, Microsoft in 2016 is where DEC was in 1988: a few years away from irrelevance and death, unless they pivot. IBM pivoted, and is now 90% an IT services and consulting company and maybe 10% a mainframe hardware company. IBM will still be around in ten years.

What does it mean to pivot? Microsoft is executing one right now. Go look. At Microsoft, it's free Visual Studio Community, free Xamarin, Ubuntu bash running unmodified user binaries on Windows 10 desktops, and .NET Core, a radical (and beautiful) rebuild of the .NET platform for the next 30 years of cloud and corporate computing. Will Microsoft break the chain of companies with disruptive ideas ("a computer on every desk") and, unlike DEC, still be around in 20 years? I think it will. Will Blackberry? I don't think so.

What about the things you build? What about your company? Will you and the leadership in your organization recognize disruptive change, and the need to pivot your organization to survive? What if today you are a software vendor, but you need to become a SaaS IT provider to survive, or precisely the reverse? How will you know? More thoughts on that later. Only this in conclusion: the market will shift. Your skills and your current product will become a commodity, or worse, a worthless historical artifact, like buggy whips. How will you adapt and change so that you and your organization will flourish?

Tuesday, April 12, 2016

Linux Essentials for Delphi Developers

There is currently no way to target Linux from Delphi. Long ago there was a thing called Kylix that worked, barely, on one version of Red Hat Linux, back around 2001. But in the community road-map, targeting a fall release, there might be a way to target Linux servers. Here's hoping. If that happens, or even if it's delayed a bit, now is a fantastic time to hone your Linux skills. I'm not going to tutor you. You can probably google almost as well as I can. But I am going to outline a plan of attack for a competent Windows developer to learn the essentials of Unix systems, with a focus on Linux. I recommend this plan be carried out in a virtual machine inside your main Windows PC. You can NOT learn everything there is to know about Linux just by using the Windows Subsystem for Linux: there's no Linux kernel, no Linux networking stack, and no desktop environment in the WSL. Learn on an Ubuntu VM.



My belief is that Linux matters on the Server because:


  • It is currently the right way to deploy for the web in 2016. 
  • It is the right technology for cluster scale technologies.
  • It is currently the right way to build systems that are easily administered remotely, whether in the cloud, or at remote sites, or in large numbers.
  • It is a lighter weight technology and currently has mature support for containers, big data technologies, and about 1000 other things in that vein.
  • It has a better way of upgrading, without requiring as many reboots.
  • It has a mature set of binary dependency management (package installer) tools, plus container and orchestration tools. (A small taste of that tooling is sketched just below.)
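
For a concrete taste of that last bullet, here is a minimal sketch of everyday package management on a Debian or Ubuntu system; the package name is just an example:

sudo apt-get update                # refresh the package index from the configured repositories
sudo apt-get upgrade               # upgrade installed packages in place; no reboot needed unless the kernel changed
sudo apt-get install postgresql    # install a package plus everything it depends on
apt-cache search "http server"     # search the package index by keyword
dpkg -l | less                     # list everything currently installed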

There are several aspects to learning to be a competent Linux server developer:

  • You can install, upgrade, troubleshoot and maintain both client and server Linux systems. You know the 50 most common command line tools and their everyday uses. You can log in, change your password, obtain root access, check what groups a userid belongs to, and install, remove, and upgrade packages. (A few of these are sketched right after this list.)
  • You have installed and learned several different distributions. The entire concept of distributions deserves some study by anyone who wants to know what Linux is. You know not only how to use apt-get (on Debian and Ubuntu) but also the alternatives on Red Hat/CentOS and others, and you know roughly what changes from one major family of related distributions to another. I recommend Ubuntu to every beginner, and Debian to every intermediate and advanced user. In some corporate environments, you may find that Red Hat Enterprise Linux (RHEL) or its relatives CentOS and Fedora are preferred. I recommend you learn Ubuntu first, and a Red Hat variant later.
  • You know how the Linux boot process works, from BIOS or EFI to the boot loader, to the kernel startup, to the init sequence and service startups; you know what runlevels are, what systemd is, and what /etc/init.d is for. You appreciate that unlike Windows, when a system refuses to boot, it's not that hard to repair it.
  • You are comfortable in the Unix-style shells, such as bash, csh, and tcsh. You can write shell scripts, and read and repair shell scripts. (A tiny example script follows this list.)
  • You are familiar with the basics of C development in Linux, including the use of GCC and Clang, build tools, and associated parts. You can download something.tar.gz, unpack it, read the instructions and build it from source. When it breaks you can read the output and figure out what's wrong, and if googling the error doesn't help, you can dig in and fix it yourself. You know what static and shared libraries are, and you can find and install the dependencies (libraries, tools) that some package needs to build.
  • You are comfortable with rebuilding the Linux kernel from source code, you know what kernel modules are and what lsmod and modprobe do, and you know how to reconfigure a kernel, turning options on and off.  You know how to upgrade or add an additional kernel to your system's boot loader.  This is actually really fun.  You may find that having a system you can completely and utterly modify to suit your own needs and requirements becomes a bit of a giddy experience.  I know that I feel like I'm actually in control of my computer when I run on Linux.  On Windows 10, I feel like my machine belongs to Microsoft, and they just let me use it sometimes, when it's not busy doing something for the boys in Redmond.  That being said, I really like Windows 10, and I still primarily enjoy developing for Windows systems.  But knowing both Linux and Windows is a very useful thing to me.
  • You have a decent understanding of system administration core concepts, including the wide set of tools that will be on almost every unix system you use. You can find files using several techniques. You can list processes. You can monitor systems. You know how to troubleshoot networking issues from the command line.
  • You will know you've gotten in deep, when you have taken a side on the vi versus emacs debate, and become extremely proficient in the use of one or the other. (Hint: The correct choice here is vi. Die emacs heretics, die die die.)
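
To make the first bullet above concrete, here is a minimal sketch of a few of those everyday commands on an Ubuntu system; the user name is just an example:

passwd                       # change your own password
sudo -i                      # obtain a root shell, Ubuntu style
groups alice                 # check which groups a userid belongs to
sudo apt-get install tree    # install a package
sudo apt-get remove tree     # remove it again

And since shell scripting is on the list too, a tiny script makes a fine first exercise; this one just reports the five largest files under a directory:

#!/bin/bash
# usage: biggest.sh [directory]   (defaults to the current directory)
dir="${1:-.}"
find "$dir" -type f -printf '%s %p\n' 2>/dev/null | sort -rn | head -5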
The above should give you enough to chew on for a year or two.  What should your first steps be if you know nothing?



  • You will need at least 20 gigs of free space.
  • Download the latest Ubuntu 15.xx as an .ISO file.
  • Install Ubuntu into a virtual machine. I recommend Client Hyper-V, which is included in Windows 10 (Pro and up), or, if you're still using that ancient Windows 7 thingy, download VirtualBox, which is free. If your Linux install worked, the client integration piece that lets the mouse move in and out of the virtual operating system should just work. If it didn't, make sure to learn how to manually "free" your mouse pointer from the VM, in case it becomes locked inside it and you can't figure out how to release it.
  • Play with virtual consoles (Ctrl+Alt+F1 through F8). Learn what they are.  Watch some tutorials on basic Linux stuff like logging in.  Learn a bit about bash shell.  Learn about the structure of unix filesystems, learn the basics of unix file permissions and ownership.
  • Learn about commands like ls, find, locate, grep, ps, pwd, more, less, wget, ssh, ping, chmod, chown, and others. Use the man command to learn about them (man grep).
  • Learn to install and start up the Apache web server, and learn a bit about configuring it. Examine the configuration files in the /etc/apache2 folder. (A worked example follows this list.)
  • Browse from your host (Windows) PC's web browser to the IP address of your virtual machine. Use the /sbin/ifconfig eth0 command in a terminal to display the VM's current IP address.
  • Learn to start and stop the X server. When the X server is stopped, you have a text-mode-only operating system, which is ideal for server deployment. When it's running, you have an opportunity to try some of the available IDEs that run on Linux. (The worked example after this list shows one way to stop and start it.)
  • Optional: Learn some Python and learn to write simple web server applications with it. (I do not recommend bothering with PHP; if you don't like Python, look into Ruby and Go as server-side languages.)
  • Optional: Learn the fundamentals of compiling some small applications from source. Write some small command line applications in C, since that's going to give you a bit of a flavor for the classic Unix environment.  C programming on Unix is easily the single most important skill I have on Linux.  If you can get over your preference for begin/end and learn to work on Unix in C when it's useful to do so, you become a much more well rounded developer.
  • Optional: Install an open source Pascal compiler. Don't expect things to be exactly like Delphi, because they aren't, but you might enjoy messing around with FreePascal (compiler) or Lazarus (IDE).
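
As a worked example of the Apache and X server bullets above, here is a minimal sketch on Ubuntu; the IP address shown is hypothetical and will differ on your VM:

sudo apt-get install apache2    # install the Apache web server
sudo service apache2 start      # start it (the installer usually starts it for you)
/sbin/ifconfig eth0             # look for the "inet addr:" field, e.g. 192.168.1.50
wget -qO- http://localhost/     # sanity-check the server from inside the VM
# now browse to http://192.168.1.50/ from the Windows host

sudo service lightdm stop       # stop the X session (lightdm is Ubuntu's display manager)
sudo service lightdm start      # bring the desktop back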

Come to the dark side. We have fortune cookies...

Monday, April 11, 2016

Ubuntu on Windows is here, first thoughts.

For Windows 10 users who have the Insider preview enabled, with the slider all the way over to the bleeding-edge side (the fast ring is all the way to the right), a new preview Windows build will become visible and ready to install within about 24-48 hours after you switch to the fast ring.

After that, you have to enable the new Windows Subsystem for Linux (beta) feature, and make sure in the system settings that Developer mode is enabled. Then open a command prompt and make sure that the "Use legacy console" checkbox is not checked in your command prompt (conhost) properties.

Now open a console window and type bash. The system will install. If you get an 0x80070057 error and you skipped past the note above about the legacy console, go back, and listen to me next time. If you get a different error, then try googling the error message.


Once you have it installed, you will be in one of several different mental states. If you are like me and you have been using Linux (and other Unix operating systems) since before anyone thought of slicing bread, then you will have lots of fun things you will want to try. If you are familiar with the basics of working in the command-line environment of a Debian or Ubuntu variant of Linux, you will know that it uses apt-get to install packages from repositories, which are configured in /etc/apt/sources.list. If you open that file you will see that this is not some customized set of binaries created by Canonical (the company behind Ubuntu) so that you can pretend to run Linux binaries. These are real Linux binaries, unmodified from their real Ubuntu versions. You are running a Linux userland on Windows. On what does it run? Is there a Linux kernel? No. If you know how POSIX environments work (broadly compatible Unix implementations that claim some level of interoperability and command-line shell compatibility), you know that you type uname to find out about the kernel. Let's do that:

root@localhost:/etc/apt# uname -a
Linux localhost 3.4.0+ #1 PREEMPT Thu Aug 1 17:06:05 CST 2013 x86_64 x86_64 x86_64 GNU/Linux

So right there I'm surprised. I would have expected Microsoft to have their Linux subsystem for Windows report something other than 3.4.0+ as the kernel version. That ought to make you stop and think. It means they implemented all the system calls (syscalls) that things like libc would invoke on a real system. This is zero overhead, extremely efficient, and a relatively large amount of API surface area for the Windows team to take on. This is not Steve Ballmer's Microsoft; this is Satya Nadella's Microsoft, and it's kind of awesome.

The performance here is native. The ABI (binary interface) between userland and kernel is at a 3.4.0 level, though not exactly perfect, because there will be Linux APIs that the Microsoft emulation layer does not emulate perfectly, at least not yet. This should impress you. If it does not impress you, you really don't know what this is doing, and you should remedy that gap in your knowledge of Windows. Subsystems are a powerful concept that has lain dormant in Windows since the death of the Windows NT POSIX subsystem, which Microsoft grudgingly brought about to win some big US government contract, and then let wither and die.
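
If you want to convince yourself that this userland really is stock Ubuntu, here are a couple of quick checks to try from inside bash; exact output will vary with the build:

lsb_release -a     # reports which Ubuntu release the userland came from (if the lsb-release package is present)
which apt-get      # the stock Debian/Ubuntu package tool, in its usual location
dpkg -l | wc -l    # a count of the real, unmodified Ubuntu packages installed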

Now let's talk about those of you who still have your heads in the sand about the importance of Linux. Why is Microsoft putting a pure Ubuntu "userland" experience for developers (not for production server use) into Windows? They've been pretty clear: for Developers, Developers, Developers. If you are a developer and you still have no skills at all on Linux systems, then you have your head firmly in the sand, my friend, and you should fix that. If you have no prior knowledge of Linux at all, I highly recommend installing a full, real Linux environment in a virtual machine and spending some time learning it and using it. If you expect to remain employable as a server side developer, and you don't plan to work only on small desktop/workgroup codebases for the rest of your life, then Linux systems, containers, cloud technologies, cluster scale technologies, and big data technologies are not things you can just ignore. Or go ahead and ignore them: continue to play with your datasets and your data-aware controls, and live in your own tiny 1990s world.

I will write a second post on getting started with the Linux shell in Windows, and on things that might be useful for Delphi developers to learn first. For now, I suggest you create a VM and install the latest Ubuntu. No matter what you do, you will learn more in that than you will from playing with this beta Ubuntu on Windows.

Some things you might like to try:

apt-get install joe
Then run the joe editor:
joe hello.txt

Note that joe (Joe's Own Editor) uses those Ctrl+K Ctrl+B / Ctrl+K Ctrl+K type shortcuts you might remember as a Pascal/Delphi old-timer. This Ctrl+K-based set of shortcuts actually predates Delphi and Pascal, and comes from the 1970s WordStar editor/word-processing system, which first appeared on CP/M. Guess which platform Turbo Pascal supported even before it supported the IBM PC on DOS? That's right! CP/M on the Z80.

Some more nostalgia, anyone?

apt-get install fp-ide
Then run it:
fp

Well, that wasn't really perfect yet. I guess this thing has bugs. (Update 1: The screenshot below was messed up because the command prompt font was set wrong.)



What else has pascal in the description?  Type apt-cache search pascal.


This seems like a great place to be in 2016, with the public road-map for Delphi showing that Linux support is important to them. I would love to be able to build and dev-test, with a local gdb debugger, against a server-side service built in Delphi.

Update: Here's the FP IDE with the font fixed in my command prompt (Lucida Console works!) and rocking out like it's 1992: