Tuesday, August 30, 2016

Nexus Quality Suite: Why Profiling and Checking Your Application for Leaks is Essential (Part 1 of a review of Nexus Quality Suite 1.60)

I've been using and experimenting with Nexus Quality Suite on and off for the past 9 months, and I've been meaning to write up a blog post about it.  The trouble with reviewing this suite is that it contains so much stuff that I can only skim the surface, so I'll present it as a series of small, task-oriented mini reviews.  Initially I was running the tools in this suite on an extremely large Delphi system.  While the suite is definitely useful for very large systems, I found it difficult to explain that usefulness using such a large application.

So I've decided to keep a real-world focus in this review, but to pick a bit of my own personal code to profile and test.  I'm going to run Nexus Quality Suite's tools against a little application I first wrote in about 1996, which lives in my toolkit of "system admin and developer-operations" tools.  Here's what it looks like:


It can ping any number of hosts, from one to hundreds. When any of those hosts goes offline (does not respond to ICMP ping), or the DNS resolver stops resolving, this little tool can beep (for in-office monitoring) or send an email (which can alert me even when I'm out of the office).  But this tool has always been slow, slow, slow.  Since I add configurable sleep time between its runs, I've never worried about its performance, but I recently had a use for this tool again, so I dusted off the source code, added a few little things, and recompiled it in Delphi 10.1 Berlin. I even found a leftover "Unicode port" bug, where I had forced a cast involving AnsiString and UnicodeString in a way that actually resulted in sending Unicode bytes into an ANSI Windows API. Bad Warren! No cookie for you!  My only excuse is that I wrote the code in question in 1996, in Delphi 2, and simply overlooked it when porting this code to Unicode Delphi.
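If you've never hit this class of bug, here's a minimal sketch of its shape. This is illustrative rather than my tool's actual code, and gethostbyname merely stands in for whichever ANSI API was involved:

program AnsiCastBug;
{$APPTYPE CONSOLE}
uses
  Winapi.Winsock;

var
  WSA: TWSAData;
  Host: string;  // string = UnicodeString since Delphi 2009
begin
  WSAStartup($0202, WSA);
  Host := 'example.com';
  // BUG: this cast merely reinterprets UTF-16 bytes as ANSI, so the API
  // sees 'e' followed by a #0 terminator and looks up the wrong name:
  if gethostbyname(PAnsiChar(Pointer(Host))) = nil then
    Writeln('buggy cast: lookup failed');
  // FIX: convert the string explicitly, then take the ANSI pointer:
  if gethostbyname(PAnsiChar(AnsiString(Host))) <> nil then
    Writeln('explicit conversion: lookup succeeded');
  WSACleanup;
end.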

Anyways, back to the performance profiling tools.  The latest version of Nexus Quality Suite, 1.60, supports both 32 bit and 64 bit programs. I would still recommend profiling the 32 bit builds of your applications where you can, since they are probably easier to profile, but for those cases where you really need to profile 64 bit code, now you can.  The NQS installer adds a group of items to your Tools menu.  Be aware that certain Delphi versions have a bug, which has a workaround available, and the installer for Nexus Quality Suite actually warns you about it. That is good customer service right there.  Good job, Nexus, and thanks, Andreas Hausladen.

Here's the installer warning. I have XE4, XE8, and 10.1 Berlin on my computer right now, and this is what I saw:


After installation, here are the menu items. There are too many tools in here to cover them all in one review, so I'm going to quickly show one application run through two of the tools.


The first tool in this review is, I think, brand new. The Block Timer is a new profiler based on the other profiler tools, but with some new capabilities.  I asked support and was told that more documentation is coming soon. The Block Timer joins its partner, the classic Method Timer, in providing some pretty great time-based profiling capabilities for your Delphi applications.  Here is a summary of the features of the new Block Timer compared to the existing Method Timer and Line Timer profilers:


1. The Block Timer is thread aware, and can break information down into per-thread values, whereas the other profilers combine times across all threads.

2. The Block Timer can accurately report time spent in recursive methods.

3. All of that extra bookkeeping makes the overhead of running this profiler a bit higher.

4. There is no dynamic profiling in this one. You lose the trigger feature from the Line Timer (LT) profiler, which is an important feature; it's worth switching to LT when you need triggers.

So far it seems to me that in smaller applications, with fewer procedures selected for profiling, the most intensive technique (BT), despite its overhead, produces the most interesting results. The larger the application, and the larger the cross section of its methods I want to test, the more useful the classic lower-overhead MT and LT profilers become.

Configuring your application to work with this and the other profiler tools is pretty consistent; the same steps are necessary for this tool as for any other sampling profiler or runtime analyzer. Turn on TD32 debug symbols in Project Options, on the Linker tab in older Delphi versions, or under Debug Information in the newer ones, according to the docs.

Run the tool from the Tools menu.  Note that on Delphi XE through XE6 it's a good idea to do a full rebuild before you click the Tools menu item, as Delphi doesn't rebuild the target for you on those versions.

You click one tool, and the first time you do, you will probably want to do a bit of configuration; each tool requires slightly different configuration.  It is NOT a good idea, in my opinion, to profile ALL of any non-trivial application. First, because you're asking a lot of the NQS tool. Second, because even if the tool can successfully gather information on 10 or 20 thousand methods, you probably can't do much with the results.  I recommend doing a little searching and probing to find some routines that matter, and including those.  The user interface is reminiscent of Outlook 2000 for most of the tools.  In the case of the Block Timer and Method Timer, you use the Routines icon, which for a few releases now has included a nice Search feature, which I think I requested, and I'm gratified to see in there.  Because my app is all about the Ping, I'm looking for the Ping methods; I want to know what they're up to...





After searching and selecting the routines, I right-click and "Enable Tracking for Selected" methods. Then I click the green triangle "play" icon to start my application-under-test.  In a small application you could perhaps select everything, but as I have learned from much experimentation, it's really better to spend a bit of time searching for methods you suspect to be relevant and enable a dozen or two of those. Then drill in, and enable further layers of the code as necessary, to get a clear picture of your system's behavior.

After my program has executed long enough to get a reasonable sample, in my case just over 5 minutes, I shut it down, and the timing analysis results are shown:


You can also see a bit of a trend of your program's total CPU usage, which can be really interesting, because you might want to know "what is the program doing during these bursts of CPU activity?".



A nice built-in feature is that if you have configured your source search path in the NQS project options, you can just double-click on a line of interest and see the code:


If the NQS tools don't show things in the font you wish, you can change the fonts they use; there are individually selectable fonts, and I change ALL of them to Consolas because it's the one true Code Editor font.  If you like the Raize font and you have it around, you could pick that one.  Courier New is more to some other people's taste. If you happen to want Comic Sans, well, you're drunk, go home.



So now I want to jump from Tool to Insight.  The reason tools like this are great is the moment when the insight clicks in your head. Today I looked at one line and realized: ResolveAddress is a function, and because parentheses are not mandatory in Pascal method invocation, the code reads like a simple variable or property check, but it's actually a very expensive call.  Do I really need to repeat the Resolve on each ping, or could my tool just periodically check that DNS resolution is still working, cache the resolved value, and do multiple ICMP pings to the IP address? In my case, I think I'm wasting a lot of cycles, loading down my company or customer site's DNS service unnecessarily, and generating a bit of wasteful network traffic.  In my next version, beyond making my tool say 10% more CPU efficient and 10% more network efficient, I might also make it a bit more configurable, say, letting the user configure how often to check that DNS resolution for an important host is still working.
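Here is a minimal sketch of the before-and-after; ResolveAddress below is a hypothetical stand-in for the TICMP method, not its real code:

program ResolveOnce;
{$APPTYPE CONSOLE}

const
  ADDR_NONE = 0;

var
  CachedAddr: Cardinal = ADDR_NONE;

// A parameterless function: in Pascal, calling it requires no parentheses.
function ResolveAddress: Cardinal;
begin
  Writeln('  (expensive DNS lookup happens here)');
  Result := $0100007F; // 127.0.0.1, as a raw address value
end;

begin
  // Reads like a plain value check, but it is a function call, and in the
  // real tool, a DNS round trip on every single ping:
  if ResolveAddress <> ADDR_NONE then
    Writeln('ping the result');

  // The cheaper pattern: resolve once, cache the address, ping the cache:
  if CachedAddr = ADDR_NONE then
    CachedAddr := ResolveAddress;
  Writeln('ping ', CachedAddr);
end.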


I also think I should rewrite the code so that it's clear a function is being invoked, not just a value being checked.  I really think I need to rewrite a lot of the internals of TICMP.

But what else could be wrong with my code, other than being wasteful? How about memory leaks?  So I am now switching to Code Watch.  It took only a few minutes to try it out, and I found that although my background worker thread terminates, it is never freed, so I have a memory leak.  The tool finds the problem and reports the source line. It also found some API failures that I may or may not have been aware of, and a Win32 resource (a thread handle) that was leaked.  This is awesome.
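For anyone curious about the shape of that leak, here's a minimal sketch; TWorker is a hypothetical stand-in for my actual worker thread class:

program ThreadLeak;
{$APPTYPE CONSOLE}
uses
  System.Classes, System.SysUtils;

type
  TWorker = class(TThread)
  protected
    procedure Execute; override;
  end;

procedure TWorker.Execute;
begin
  while not Terminated do
    Sleep(100); // pretend to ping things
end;

var
  Worker: TWorker;
begin
  Worker := TWorker.Create(False); // FreeOnTerminate is False by default
  Sleep(500);                      // let it run a little
  Worker.Terminate;                // the thread exits its Execute loop...
  // ...but without the next two lines, the TThread object and its Win32
  // thread handle are leaked, which is what Code Watch pinpointed for me:
  Worker.WaitFor;
  Worker.Free;
end.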



I'm going to wrap up soon. I hope that all of the above impresses you, because it sure impresses me.

Before I wrap up, I'll briefly compare this option to your only other real option for this kind of tool.  SmartBear's AQTime suite can do many of the same things that Nexus Quality Suite can do, but Nexus Quality Suite can actually do lots of things that AQTime can't.  AQTime is more expensive, at $599 with a very restrictive named-single-user license, and a nasty activation and intrusive anti-piracy copy protection system that I very much dislike, because it won't let me run with a single-user license inside a VM.  The copy protection actually runs a background Windows service, which detects all kinds of things, including virtual machine use, and disallows program operation inside a VM.  And the IDE integration of AQTime crashed on me the last couple of times I used it. I reported these crashes, and over several releases they never got fixed.  Sayonara, AQTime.

So what's the price for NQS?  At the promotional sale price of $226 USD ($300 AUD), and with no intrusive copy protection that treats me as a thief, I have no problem recommending that EVERY Delphi developer and Delphi-using company buy this suite. There are lots of tools, and they work really well. If I had to complain about something, it's that the documentation needs some further work, but they are working on that.  The product works, and when I find a problem or have a question, the technical support team is great.  The price is going up soon, so I recommend grabbing this while it's on sale.

I am planning further review articles to cover this suite. In particular, I believe the automated GUI testing features in NQS deserve their own separate review, and I think there are many more profiling techniques that can tease out very complex runtime problems in your system: not JUST gathering the data that helps make your program faster or stop leaking memory, but also understanding complex behaviors by gathering runtime data that lets you see your program running.

In the past year, the amount of new stuff delivered in NQS is truly astounding. 64 bit support is new. I think this whole extra set of profiling tools is new.  I tested NQS on an extremely large application where I work; the product is over 5 million lines of Delphi code, including all the in-house and third party component libraries, all the main forms and data modules, and other code.  In an earlier version of the tool, I found a crash inside one of the NQS tools, sent Nexus the information to reproduce it, and in the next release it was fixed.  That's good customer service.


NQS is a tool that deserves a spot in your toolbelt too.

Full Disclosure: I received a complimentary review copy of this product, but the opinion above is 100% my own, and I don't write positive reviews for every product I receive a license for. Quite the opposite, in fact: if I see something I dislike or can't use, I'll say so. I'm a working coder, and I have no time for weak tools.  I have recommended that my boss buy multiple copies of this tool suite at work, where I believe it would be extremely useful.





Thursday, July 7, 2016

How to Hire the Right People? I have NO IDEA!

I have seen a lot of articles on the interwebs from frustrated job-seekers who say over and over that hiring is broken.

Where I work, I am interviewing for a Junior Software Developer position with a focus on Web/JavaScript/HTML5.  Consequently, I have been thinking a lot about how we in the software industry interview and hire people, because I have been interviewing people and, I think, I have moved past the need to haze candidates.

I was not subjected to hazing rituals when I was hired for my current gig. I did not write any technical exam; the interview was verbal, though the company had a written exam, which it would use when it felt there was some question about a candidate's abilities.  I did bring in some code running on a laptop that did some interesting stuff, which was as close to "proving" I can code as I could think of.  I think ideally, a personal project you have spent two or three weeks on should be enough to demonstrate ability.  But there have to be alternatives, and I will get into those below.  If we're going to get rid of subjectivity, we need to replace it with something objective.

Hiring, like most management decisions, is in the end always going to be fairly subjective, and it's an area of subjective business decision-making that I think is very widely done poorly. I consider myself very poor at it, but I believe I'm getting better.  I hope to improve by being both broader in my search for evidence, and more focused on objective, hard-to-fake data.

The short version of this blog post works out to this:

I am in favor of two to four hour take-home coding exercises, and I am against two-week trial projects.  


Peppering Candidates with Random Technical Questions Is Not Working

I agree with the critics of our modern whiteboard and non-whiteboard technical hazing rituals.  

By treating all candidates the same, and asking the same barrage of questions, we hope to map a candidate's knowledge, and some will even claim that this approach is "rational" or "scientific" or "impartial". It's not, because people are not bots, and technology is not as complex as you think it is: it's far more complex than you think it is.

Here's the problem with technical knowledge: it's not linear but fractal in complexity, like the Koch curve; the closer you look, the more detail is generated, and there is actually no end to the complexity.  If you don't know what I mean by that, watch this awesome talk by K Lars Lohn and then come back.  If that talk doesn't give you a reason to go to technical conferences, I don't know what else would convince you.  There, now, I'm a thought leader.

Now back to interviews.  A sufficiently intelligent interviewer should start by determining, from the resume and any phone screens, the areas where the candidate expresses some interest, experience, and ability, and then talk as openly, and with as much good will and personal charm, as possible.  In recent weeks, I have watched people as their anxiety goes down, and I notice that you can learn much more about someone who believes you are not a jerk than about someone who has their fence up.  This is a poker game where we lose if we keep our poker faces; the best move is to fold and show your cards.  "This is what I'm looking for. I saw some of what I'm looking for in your resume. I see you mention here that you have tried Scrum and Kanban. What did you find worked and didn't work on your teams when you did those things?"  Let's talk about how teams work.  Let's talk about how compilers compile, how the JVM runs your code, how a statically typed language helps teams ship.  How a unit test can help you not break things, and is doubly important in a language like JavaScript, where there is no compiler and consequently some useful forms of static analysis may be impossible.  Let's talk about the recent trend towards languages which can be verified to be correct in some respect, like D or Rust.  Let's talk about functional programming.  Very few of the junior programmers I'm interviewing have ever played with Rust or D, or F#, or Scala.  Very few can tell me about interrupt handling inside the Linux kernel, or about safe concurrency models for web-scale transaction processing, or about the differences between two transaction settings in MS SQL.

So fine.  Let's find SOMETHING you love.  Animation? Awesome.  Games? Awesome.     Now we will dig into your own interests, and find out what you've done that we can see evidence of.

Don't I just sound so avant-garde? Trust me, I'm not.  I'm probably going to ask Juniors and Intermediates whether a Stack is LIFO or FIFO.  Then I ask them whether, when you walk into McDonald's and wait in line to order a Big Mac, that line of customers is a Stack or a Queue.  This question might be a bit too easy in England, where a line-up is actually called a Queue, but in Canada I find that people who crammed the LIFO/FIFO part of it can't reason about it, and thus some conceptual wiring is missing in their heads, wiring that I can't quite account for.  My mental picture of a Stack is something you might remember from restaurants, if you, like me, are of a certain age:


I ask about stacks and queues not because you need to know that every day when you work on my team, but because I have a distressing feeling that candidates can graduate by cramming and collaborating on coding projects, while retaining very little of the knowledge platform their degree could have given them.  Which data structure would help me easily reverse the order of items in a list: a stack, or a queue?  The important thing about my question isn't whether you could google it; it's how adept you are at thinking about systems built of large amounts of software and hardware.
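For the record, here's the stack answer as a minimal Delphi sketch:

program ReverseWithStack;
{$APPTYPE CONSOLE}
uses
  System.Generics.Collections;

var
  Stack: TStack<Integer>;
  I: Integer;
begin
  Stack := TStack<Integer>.Create;
  try
    for I := 1 to 5 do
      Stack.Push(I);         // push 1 2 3 4 5
    while Stack.Count > 0 do
      Write(Stack.Pop, ' '); // LIFO: prints 5 4 3 2 1
    Writeln;
  finally
    Stack.Free;
  end;
end.

A queue, being FIFO, would hand the items back in their original order, which is exactly why it's the wrong answer here.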

I believe that a working model of a smaller domain contributes to, and correlates well with, the reasoning skill you possess in the larger domain.  The human brain, confronted with systems composed of parts it does not understand, tends to ascribe to others the agency for fixing and changing those systems.  When an engineer who understands the fundamentals of a system confronts a complex problem, she will, I hope, be able to begin picking it apart, a process I call bisecting, until she finds individual smaller problems which can be solved.  It is these bisectors of complexity that I search for when I interview.  I am also looking for the developer who doesn't yet know how to do this, but who believes she can, and who will keep trying until she does.  Possessed of reasoning skills and a strong set of engineering fundamentals, she is apt to succeed.

Even candidates who absorbed everything their school offered will still need a lot of additional skills and will need to learn a lot of tools.  But if you were not a learner, a sponge for knowledge in university, an organizer of systems and ideas, a bisector of problems, what rational evidence do I have that things will be different in your work life?  If you can't tell me how to troubleshoot your mom's internet connection, I'm not going to believe you can understand a Healthcare Information Systems environment.

I recently interviewed a candidate with a Master's degree in Computer Engineering who, I hope, was simply having trouble because English was a second language.  Several days after the interview, I am wondering if I simply made the candidate so anxious and flustered that I actually caused the interview's dismal result. Whether or not that happened in this case, it's critical that we interviewers turn our dreadful critical gaze upon ourselves, find the sub-par elements of our practices, and fix them.

A good interviewer needs to set candidates at ease.  When I see candidates smiling and laughing, and joking in an interview, I am happy.  I know that I'm talking with the real person, and that we can figure out what will and will not work with this candidate within this team.

I am not going to stop asking semi-random factual questions, but I am going to give candidates fair notice. I happen to like the little thing on Reddit where people ask you to "ELI5": explain it like I'm five.  When you know something cold, you can explain it to a five-year-old.  This is a new knowledge-sharing phenomenon that originates with millennials.  If you're 21 right now, I'm old enough to be your dad, and then some.  Unlike some people, I think the world is going to be fine when the millennials take over and we're all retired.  I'm cool.

So why do I ask what DNS and DHCP are, when you could google that, and when those seem more like questions for an IT/network-admin role than for a developer role? The argument that you can google what you don't know falls down at the point where you don't google, because you're facing unknown unknowns.  Design mistakes are a common after-effect of unknown unknowns.  I make design mistakes all the time; we all do.  We do not understand the domain in which we are engineering well enough, and we do not even know what it is that we do not know. This is the unknown unknown I speak of.  I am looking for engineers who are wary and meta-cognitive, who build themselves and others up.  So let's get to my hire/no-hire criteria, and see if you agree or disagree with them.

Cardinal "Hire" Qualities (with profuse thanks to Joel Spolsky)

I want to hire someone who is SMART and CURIOUS, who GUARDS the team that GETS THINGS DONE, and WHO IS NOT A JERK.  I have grouped and expanded things in a way that makes sense to me, but I freely admit that I stole almost all of this from Joel Spolsky. Thanks, man.

SMART + CURIOUS:  I am looking for evidence that you are a passionate, intelligent geek who likes to write code.  You have a deep and abiding interest in some (but usually not all) areas of computers, software development, and technology.  If I ask you how a CPU's level one and level two caches work, and you don't know, that's OK, as long as you can answer the question "tell me about something that you built recently on your own time that you didn't have to build", or "tell me about some language or operating system or tool that you're experimenting with".

GUARDS + GETS THINGS DONE:  You're not just a member of a team that shipped, but a member of teams that would not have shipped without you.  Your team didn't know about version control? You taught them.  Your team didn't know about continuous integration? You added it to their practices. Your team didn't understand the zen of decoupling or the zen of testing? You taught it. You modeled the practices that made your team get stuff done.  When you saw things that were bullshit, that would sap the team's motivation to GET THINGS DONE, you faced the boss and spoke up. You, my friend, are the guardian of the customer's happiness, the guardian of the product's marketplace success, and the keeper of the flame.  Sometimes being that guardian means NOT GETTING (the wrong) THINGS DONE, especially if doing them "wrong" is the price of doing them "fast".  Long-term trends that slip under the radar and are under-valued on agile/scrum teams are things you like to bring up at retrospectives.

NOT A JERK:  You defuse tense situations. You don't add gasoline to open flame.  You call people out privately, and you praise people publicly.  You absorb blame. You deflect praise.  You admit when you failed to do any of the above, and resolve to do better when you don't live up to your own internal high moral standards.  You believe you can be a great engineer while valuing different people who have different communication styles, cultures, and languages. You think a team's differences can become sources of strength, and when difficulty and division are spreading, you find ways to unify the team and give it a focus: a technical engineering focus with a strong shared ethical principle.  You are a curator of good company culture.

But let's be honest about the above. The above is the person I'm trying very hard to be.  I'm trying to hire people who are trying to do some of the things I try to do. 

My questions for you guys:

  • How do you find out real stuff about candidates when you are conducting an interview?
  • What do you want to know when you hire, or when you are seeking a job?
    • As a candidate, do you ask who you would report to? What do you hope to learn?
    • How do you feel about the number of people in the room? Do you think it's a better sign when you are interviewed by one person, or when you're interviewed by three or four people?
    • Are there any "shibboleth" questions you have as a candidate?  What do you want to find out with them? Even if you don't want to state your question directly, what are you trying to figure out?  I don't have a specific question, but if I see signs of aggression, arrogance, or the naked exercise of rank or privilege, I quietly note it to myself and decline further interactions with that company.  One thing you certainly can't fix in a company is the culture of its leaders.
  • When you are being interviewed, how should people approach you to find out the most accurate picture of your strengths and weaknesses?

I'd like to open the floor to a discussion now; let's keep it civil. Thanks.









Wednesday, June 1, 2016

Survey Results for the First Annual Delphi Code Monkey Survey



There were 373 respondents, but the statistics shown here only reflect a portion of that, because SurveyMonkey wants $25 US from me to give me fully detailed results. Given that the final numbers are unlikely to be much different from these, I'm going to leave them as they are.  30% of respondents left the "other tools" question blank, which is interesting.














Some interesting responses from the "other" category:

* Some people were in professional categories other than the ones described, such as "retired".

* Other commonly stated worries included financial/pay-level concerns, concerns about whether they can afford to keep buying new versions of Delphi, and whether they can compete with cheaper contractors, perhaps offshore.


I was pleased to see that although the experience meter tips towards "old-timers", there are some inexperienced and only slightly experienced readers.  If anyone wants to send me comments on what beginner topics they would like to see covered, please fire away in the comment box.

The next time I run a survey it won't be on SurveyMonkey; it will be my own homebrew PHP survey script.  Thanks everyone!


Saturday, May 21, 2016

Delphi Programmer Thinks about the Go Programming Language and Mandatory Source Code Organization

If you follow one of the usual tutorials for Go programming, it will start by dumping a load of required setup on you.  This is perhaps something that you, as a long-time Delphi geek, have become inured to in your own environment.  Let us imagine a developer who has been given access to a source code repository or server, perhaps a big Subversion server, and has no familiarity with the Delphi codebase at CompanyX, where CompanyX is basically every Delphi-using company ever.  Let's make a quick list of the first tasks our developer would face:

  • Setting up a working copy of the source code so it builds, and so the forms open without errors due to missing components.

  • Associating package set X, required to build version Y of product Z.

  • Setting up library paths that might be completely undocumented.

  • Idiosyncratic things done at CompanyX, like mapping a fake drive letter with SUBST, or setting up an environment variable COMPANYX to point at some shared folder location.

  • At some companies they will just look at you blankly if you ask "can you build this entire product and every part of it from the command line on every developer's PC?".  Other companies have exactly ONE computer (MysticalUnicorn.yourcompany.com) on which this feat is frequently possible.  Still others (the sane ones) have made the process so unspectacular and merely reliable that they think the ones who gave you the blank look just haven't realized how insane they are yet.

  • At some companies it might be considered acceptable for the build scripts and projects and sources to ASSUME you will always check your code out to C:\COMPANYX. When you want a second branch, you simply clone and copy a tiny little 120 gigabyte VM and fire that up.

Has any of that ever seemed insane to you? It does to me.  And so when I look at new languages, one of the things I look for is whether the problems above have been thought about and resolved in that language and its related tools, including its build system, if it has one, and its module system.

Go has been known to me for some years as a famously opinionated language, characterized by the removal of features that its designers felt were problematic in C++:

  • There are no exceptions in Go, only error returns, and panics.

  • There are no generics in Go.

  • There is no classic object-oriented programming with inheritance; there is only composition, and there are only interfaces, with no base classes (because there is no inheritance).

  • The module structure is pretty much mandatory.  Here's me starting a brand new Go project from a command line; what is happening should be pretty clear to most geeks:

~/work> export GOPATH=~/work
~/work> export PATH=$PATH:$GOPATH/bin
~/work> mkdir -p $GOPATH/src/github.com/wpostma/hello
~/work> cd $GOPATH/src/github.com/wpostma/hello
~/work/src/github.com/wpostma/hello> vi hello.go 



package main

import "fmt"

func main() {
        fmt.Printf("Hello, world.\n")
}

~/work/src/github.com/> go install github.com/wpostma/hello
~/work/src/github.com/> hello
Hello, world.
 
What is the thinking process that goes into designing a module system with the following structure?





I think the above has the benefit of being about as nice a structure as I could imagine.  The folder names above tell me even where on GitHub I might find this project.  Source code is now globally unique and mapped by these conventions, so that I know where to find any public code I want.  If I wanted to use gitlab.com or Bitbucket, or if the code were stored on a private server inside my company named gitlab.mycompany.com, I would move my code into different folders to make that choice clear.  For a language which is intended to be used in large systems, it's an appropriate design choice.  Contrast this with Perl or Python, where the intended use starts with one-to-ten-line scripts for basically any little task you can imagine, and where this kind of ceremony would be stupid.

I have worked in enough large Delphi codebases to know that when any form of organization is acceptable, the result inevitably becomes horribly complicated.

Let's briefly discuss the forms of folder/directory organization one might attempt in Delphi:

* Ball of Mud: Everything for one application is in one, or some small number of, directories.  It is extremely common that one directory contains 90% of the files that are not third-party components, and that directories are only used to hold files not written in-house.  No sensible use of directories is made; the source is all in one directory with 10K files in it.  Usually ball-of-mud folder structure goes nicely with ball-of-mud source organization inside your code files. That form with 5K lines of untestable business logic mashed into it? That "controller" object that is more of a "god" object, directly and tightly referencing everything else via USES statements? Ball of mud.

* MVC or MVVM: Views in their own folder. Models in their own folder. Controllers in their own folder.  Additional folders for shared areas, and per application area.  I've heard that this is possible, but I've never seen a Delphi codebase organized according to MVC.  Ideally, if you're going to do this, your Views don't reference your controllers, or even have access to the source folder where the controllers live.  Your models ideally don't reference anything; they're pure model classes and take no dependencies.

* Aspirational:    This is the most common condition of Delphi codebases.  There is some desire to be organized and some effort has been made, but it is fighting an uphill battle that may be unwinnable, because the barn door was already opened, and that cow of accidental complexity is already out and munching happily in your oat field.    You have a desired modular approach and it's expressed in your code, but every unit USES 100 other units, and your dependency graph looks like something a cat coughed up.

So, given that I have seen large systems get like this, I have a lot of sympathy for languages like C#, where at least you can get your IDE and tools to complain when you break the rules, and even more sympathy for Java, where namespaces are required on classes, and where the classes must live in directories which are named and hierarchically ordered.  In Go we have named packages, and the packages contain functions and can define interfaces, but they're not really classes as in Java; still, the idea of order and organization has been preserved as important.  In Delphi we have the ability to use unit prefixes, which are weaker than true namespaces but still potentially useful, yet most Delphi code I have seen does not attempt to adopt them (see the sketch below).  It seems to me that a codebase that uses unit prefixes, with source organized into matching folders, is a worthy future goal for a Clean Delphi Codebase, but existing legacy codebases are all we have, so getting there is not something I'm going to hold my breath on.  One has to have practical, achievable plans, and not tilt at windmills.
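To make unit prefixes concrete, here is a minimal hypothetical sketch; the MyCompany.Net.Ping name is invented for illustration:

unit MyCompany.Net.Ping;

// The file on disk is named MyCompany.Net.Ping.pas. The dotted prefix acts
// as a weak namespace: a consumer writes "uses MyCompany.Net.Ping;" and the
// unit's origin and layer are visible at a glance.

interface

function PingHost(const Host: string): Boolean;

implementation

function PingHost(const Host: string): Boolean;
begin
  Result := Host <> ''; // the real ICMP work is elided in this sketch
end;

end.

The prefix enforces nothing by itself, but paired with matching folder names it gives you something a tool can check.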

My first reaction to Go's requirement to use a fixed structure was predictably the same reaction I had when I first realized the horrors of the Forced Organization that Java imposed on me when I first tried it in 1996.  Now, 20 years later, I think we can say that Java's designers were right.  Java has proven to be especially useful in constructing extremely large systems.

The Go package dependency fetching system (go get X) works precisely because of this forced organization, and it's all very well thought out.  There's NO reason that a clever Delphi programmer couldn't learn the lessons of how GOPATH, go get, and go install work, and use them to fashion a guaranteed well organized, maintainable, and clean Delphi codebase, incrementally, by a phased approach.
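For instance, a Delphi shop could borrow the GOPATH convention wholesale. The layout below is my own invention, not something any existing tool enforces:

C:\work\src\
    github.com\
        wpostma\
            pingtool\          (a public repo, checked out where its URL says)
    gitlab.mycompany.com\
        billing\
            core\              (a private repo, same rule)

Each repository lives at the path its URL dictates, so a build script, like go install, could locate any dependency from its name alone.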

You don't gain much by closing the barn door after the cow's got out, and you can't stop everything and rewrite, but if you can build some tools to help you tame accidental complexity gradually, you can restore order over time, while you work on the system.

What goals might you start with?  I'm not going to tell you.  All I'm going to say is that if your brain lives in a box that has a begin at the beginning and an end at the end, and you can't read and think outside that box, you're sadly limiting yourself as a developer.  Becoming a better developer (in any language) requires what the old-timers even older than me called a "Systems Approach": a view of what you build, and of your project and its goals, that is larger than your daily work, longer in scope than whatever form of agile "sprints" you're doing, and which has a sustainable, high-quality engineering methodology behind it.

You can't build that kind of mentality in at the language level (in Go or Java or Pascal), but I think it does help to set the bar progressively higher where you can, so that once code becomes cleaner and more maintainable, there is at least the potential to detect when someone makes things worse.

Thus far we have seen many programmers throw up their hands at the 5-million-line big-ball-of-mud projects and consider rewriting from scratch.  My feeling is that the bad patterns in your brain are still there, and whether you rewrite it all in the same language or a different one, you're going to make all those mistakes again, and some new ones, unless you start learning approaches to system design that promote clean, decoupled programming.  Study and research phases are required. Do not race to reimplement anything, in Pascal or any other language.  Spend time and sharpen your sword. And remember Don Quixote.















Saturday, May 14, 2016

Completely Anonymous Delphi Code Monkey Reader Survey 2016

(update: Poll closed! Link removed)

I have put a completely optional, completely anonymous survey on SurveyMonkey. The questions are quite general and will help me get an idea of the readers who visit, and thus what you might enjoy hearing about; the answers may of course be fun for you to see as well.

I will share all results here on the blog after the survey closes, at the end of this month.
Here is a preview of all six questions on the survey. Please do not enter any email addresses or personal information into the comment boxes where "other" answer boxes exist, so I don't have to spend time deleting it all before sharing the results here on the blog. The poll is completely optional, the categories/answers are fairly general and vague, and everybody will see the totals.  If any answers to questions 5 and 6 are repeated frequently, I'll publish those as well; otherwise, the published results will only specify the number of people who answered "other" to 5 and 6.  I really don't know what all the possible "#1 worries" might be, and I have a feeling you all might enjoy seeing these answers. Please be as general and brief as possible; you only get 40 characters.






Saturday, April 30, 2016

Writing an IDE Expert (Toronto Delphi User Group, April 2016)

Meeting notes, including links to sample code for writing your own IDE wizards, are over on the Toronto Delphi User Group site:

http://www.tdug.com/2016/04/april-meeting-follow-up-2/

That article has links to some open source repositories with my starter expert DLL and wizard BPL code samples.

Sunday, April 24, 2016

Patterns in the History of Computing: The Birth, Life and Death of the Tech Empire

As a student of history and a geek deeply interested in computers, I count computer history among my personal passions.

I am particularly interested in unlikely upstart success stories, most of which have a predictable arc:


  • A founder has an idea which is considered ridiculous to whatever industry he or she plans to disrupt.
  • A founder executes a minimum viable product, and iterates. 
  • Building an ever growing product line, the company flourishes, expands, and reaches a point I will call the Apogee, the highest point in an orbit.
  • Someone else has an idea which is going to disrupt this founder's business. This founder ignores this disruptive change and continues on the original plan.
  • The company, after realizing too late that a change in the market is afoot, eventually dies or is acquired.
  • We fondly remember the company, now all but gone, and its founders, who made so many pivotal or important technologies.

I think anybody here can list 100 of these, but today I'd like to talk about DEC and Ken Olsen, and do a brief retrospective on his accomplishments, his brilliance, and his critical mistake.

What do we owe to DEC and Ken Olsen?  The original internet machines built by Bolt Beranek and Newman were built around DEC hardware modules.  The ultimate success of Ethernet networking was due to collaboration between Xerox and DEC.  (Xerox could be another example of a failed company, except that rather than dying, it is merely a niche imaging company instead of the godfather of computing.)  The idea of owning your own computer, used directly by an individual operator, a key element of Personal Computing, was first made possible by small DEC machines that were not even called "computers" in the earliest days, because the term was too strongly associated with the priestly caste of IBM mainframe programmers in their glass-walled temples.  And yet Olsen's failure of vision was twofold.  He failed to move DEC towards RISC technology fast enough to realize the architectural benefits of RISC, benefits which have informed subsequent CISC designs; while RISC itself is dead, the process improvements in silicon ULSI design and fabrication that RISC permitted have lived on.  And he famously derided the idea that personal computers, of the kind Microsoft wanted to see proliferate, would eat DEC's entire cake, killing the VAX and the PDP-11, and almost every 1970s mainframe and minicomputer company.

What is ironic to me is that DEC became what it was originally intended to be an alternative to.  Today's developers would not see much distinction between an IBM System/360 and a VAX 11/780: both are dinosaurs, artifacts.

I actually took a whole term course in 1990, not that long ago, on VAX assembly language. What the hell was the University of Western Ontario thinking when it set up my curriculum? VAX assembly language?  Yet I'm happy I learned it.  The VAX architecture was and is beautiful. The VMS operating system was beautiful.  Dave Cutler, the Microsoft alpha geek (ha, did you get that pun?) behind Windows NT, basically rewrote VMS, and it's running on your computer today: first it was called NT OS/2, then Windows NT, later Windows XP, and today it's called Windows 10. It's the same kernel, and its architecture owes a lot to VMS.  And like VMS on its hardware, Windows is not the only operating system that runs on PC architectures.  Unlike DEC, Microsoft at one point in its life made a lot of its money selling software. What would a Microsoft look like that makes most of its money selling Cloud and SaaS, plus Enterprise platforms and tools? We're about to find out.

Today, Microsoft in 2016 is at the same point that DEC was at in 1988. You can see Microsoft hosting huge events like Build 2016.  They have money, they have influence, and they have developer mindshare everywhere except on mobile.  They have a brilliant CEO who, like Microsoft's founder, is also a competent technologist.  They understand that a Microsoft without internal change is, in 2016, the same company that DEC was in 1988: a few years away from irrelevance and death, unless they pivot. IBM pivoted, and is now 90% an IT services and consulting company and maybe 10% a mainframe hardware company.  IBM will still be around in ten years.

What does it mean to pivot?  Microsoft is executing one right now. Go look. At Microsoft, it's free Visual Studio Community, free Xamarin, Ubuntu bash running unmodified user binaries on Windows 10 desktops, and .NET Core, a radical (and beautiful) rebuild of the .NET platform for the next 30 years of cloud and corporate computing.  Will Microsoft break the pattern, in which a company built on one disruptive idea (a computer on every desk) is disrupted in its turn, and, unlike DEC, still be around in 20 years? I think it will.  Will BlackBerry? I don't think so.

What about the things you build? What about your company? Will you and the leadership of your organization recognize disruptive change, and the need to pivot your organization to survive? What if today you are a software vendor, but you need to become a SaaS IT provider to survive, or precisely the reverse? How will you know?  More thoughts on that later.  Only this in conclusion: the market will shift. Your skills and your current product will become a commodity, or worse, a worthless historical artifact, like buggy whips.  How will you adapt and change so that you and your organization will flourish?