Saturday, December 19, 2015

When All You Have Is a Hammer (Part Deux) - A Cry For Professionalism

A couple of years ago I wrote a blog post called "When All You Have Is A Hammer, Everything Looks Like a Nail".  I was bemoaning the way that software developers (including me) will sometimes use the one thing they know and avoid learning new things.  I have further come to believe that even though we start out passionate, with a desire to learn more and to keep getting better, many of us still periodically have days or weeks where the quality of our output drops; sometimes we even make a complete hash of things, we who might be expected to know better.  Most often it happens when developers "phone it in" for a day, a week, or a month. Maybe they've been demoralized by micro-management, or continual hectoring.  Let's be kind to people when they are in ruts, because we never know when it will be us.  Let's ask ourselves why we build bad things, instead of hectoring others.  Somehow, our process and its outputs go wrong; later on this is discovered, and someone has to clean it up.  Sometimes I am the person who has to clean up my own mess, weeks or months later, when it becomes clear my changes were a bad idea. Sometimes I am cleaning up someone else's mess.  I ought to do this without grumbling or attacking others; that's part of being a professional.  I have observed this category of problems over and over again, and like any hard problem in computing (naming things, being smart and good at your job without becoming an arrogant jerk), it admits of no final solution.  But there are things we can do, and if the software you make is making the world a better place, then you should care enough to try to do your job better, every week, every month, every year.

First, begin your coding day by asking yourself if the work you're doing today matters. For some of you, that's easy.  Even if curing cancer is out of your reach, at least increasing the effectiveness of breast cancer screening protocols counts, if you work at a place that does that.  But let's say you spend five minutes considering the business you work in and you still haven't thought of a reason why what you will do today (fix a bug, work on a feature) matters.  Here's my last line, if all else fails: if what you do today keeps your business (and your customers' businesses) running, then you're not only earning the dollars that feed you and your children today, you're also helping other businesses that employ people who will hopefully be able to feed and clothe their children, and give those kids somewhere to sleep.  And if you, like me, are a regular contributor to the charities that matter, then every organization that employs the men and women who give to others is not only doing its own work, and helping its own employees and customers, it's also, hopefully, making the world a better place.  So, you're rooted now? You're ready to begin your day? You care. You are checked in fully.  This is step one to avoiding doing bad engineering work. You have to care about quality.  If you're anything like me, you need to connect all the wires up, from the battery to the lightbulb, almost daily, to keep caring.

Second, all day long, when you find something stupid, something that makes no sense, resist the temptation to stop caring and just push on through so you can close your bug-tracker ticket and go home.  Ask the questions. Find the subject-matter experts. Engage in constructive, rational critique of the system. Don't jump into ripping it apart and rewriting it, and don't jump into "must not touch legacy code" fear mode, either. In between those extremes, find a way to make the customer's life better. First, the person who is forced by the conditions of their employment to use your software is your customer. Second, the person who signed the cheque to buy your software from your employer is your customer. That business has to succeed, to keep everybody working. Nobody succeeds until everybody succeeds.  This is the second step in avoiding these mistakes.


Third, keep learning.  If you are stuck on Delphi 5, it's time to move up and onwards. You won't even know all the reasons why until you do; that's just the nature of the game. If you are stuck in a rut, defending the rut won't get you unstuck.  Fine, your code is on a modern Delphi version? Next, if all you know is Delphi, go learn Python. If you do not know any version control tool at all other than zip, then you, my friend, are not a professional developer.  Go learn one now, I'll wait here.  If all you know is Subversion, go learn Mercurial. If you don't know how to automate your build, at least write a batch file that invokes msbuild, and have it build your Delphi, C#, and C++ targets for you.  If you're still using the lame old solution of batch files, then level up, my friend: it's time to learn how to use FinalBuilder, or WANT. If you have a build tool but no experience with CI server software, it's time to learn Jenkins (or Hudson).  If you have never done smoke testing, you had better learn how.  A smoke-test job runs on Jenkins (or Hudson) and makes sure that everything checked into source control still builds, and runs, after every single check-in.  Now that you have arrived at this point, it might be a good time to notice whether the unit tests you write are actually finding any bugs. Does the smoke-test job go red not only for compilation failures, but also for unit test failures?  If you have had zero unit test failures over 60 days, chances are that you actually have zero meaningful unit tests.  Go analyze why the changes that resulted in regressions your QA person found were not caught by the unit tests.
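To make the batch-file step concrete, here is a minimal sketch of the kind of script described above. The install path and project names are placeholders, not real projects; rsvars.bat ships with RAD Studio and sets up the environment variables msbuild needs to build .dproj files.

```bat
@echo off
rem Sketch of a minimal build script -- paths and project names are invented.
rem rsvars.bat (in the RAD Studio bin folder) sets BDS, FrameworkDir, etc.
call "C:\Program Files (x86)\Embarcadero\Studio\17.0\bin\rsvars.bat"

rem Note: Delphi .dproj files use /p:Config, while C# and C++ projects
rem use /p:Configuration.
msbuild MyDelphiApp.dproj /t:Build /p:Config=Release || exit /b 1
msbuild MyCSharpTool.csproj /t:Build /p:Configuration=Release || exit /b 1
msbuild MyCppLib.vcxproj /t:Build /p:Configuration=Release || exit /b 1

echo All targets built.
```

This is exactly the sort of thing a Jenkins smoke-test job can run on every check-in, going red on the first failing target thanks to the `|| exit /b 1` guards.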

The more I think about the problems of "hammer-thinking", the more I think there is no real solution other than the basic patterns I see in all good engineers.  Ask why.  Grow and improve. Learn.  Add new tools to your toolbox.  Build new tools when none exist.  Never stop growing.

If you don't want to do this, you should go find a new line of work.  I'm not kidding.



Postscript:

Agile is Dead. Agile values have been totally lost behind the implementation.
Dave Thomas (of The Pragmatic Programmer)




Saturday, December 12, 2015

How To Find and Fix Problems In Your Component Packages

A very useful technique that all Delphi developers should know is how to debug your own DLLs (IDE experts) or BPLs (binary component packages) when they exhibit problems while running inside Delphi itself.  You could think of it as "debugging Delphi inside of Delphi", except you're not really doing that.  For one thing, the Delphi main IDE executable (BDS.exe) is not your application, and it doesn't have full debug symbols, so you can't see all the names of the procedures and objects inside it. What you can do, and what matters for the purposes of this quick tutorial, is set a breakpoint in a unit that is installed as a designtime component, or as an IDE expert, that you have built with full debug information, and that you have the source code for.

If you have some bug in a commercial library (such as Developer Express, or TeeChart), and you have the source code for it (your purchase included more than just the BPL, DCP, and DCU files, and also included the .PAS files and the .DPK/.DPROJ project files), then you can find and fix problems in it, as well.

When Not To Use This Technique


  • You could just use logging instead.  Inside your design-time component, you could just add some CodeSite logging, to help you figure out if the math or logic in your component is working properly at designtime.
  • You could simulate this within your own code, at runtime.
  • You could just install MadExcept into the Delphi IDE, and catch your exception and get a stack trace, and figure out from the stack trace, what calling sequence is causing the exception, and fix it without actually needing to debug.  If you don't have MadExcept yet, go buy it. I'll wait. You're back? Okay good.   (There's a free non-commercial version, but you are not the kind of limp-brain who refuses to support a pillar of the community like Matthias, are you? You bought it anyways right? Good for you.)

What is the difference between Runtime and Designtime, for your code?

  • Every class that inherits from TComponent has a property called ComponentState.  It is a set of bit-flags, and one of those flags is called csDesigning. When the csDesigning bit is on, we are in designtime mode. A common beginner mistake is to forget to protect code that should not run at designtime with a guard like this:

procedure TMyComponent.SomePropertySetter(Value: Integer);
begin
  FValue := Value; // runs at both designtime and runtime
  if not (csDesigning in ComponentState) then
    DoRuntimeOnlyThing();
end;

  • Your objects are instantiated in designtime mode when they are being loaded by Delphi into a Form, when you are in the Design tab of the IDE.  This means your code can do some pretty amazing things for you.  You could add menus that appear when you right-click on your component, to automatically do some task for you that would be helpful when you're building applications using your component. These designtime actions are known as Verbs. The default one is known as Edit. You could have a dialog box open to help you configure your component.  TeeChart makes excellent use of these designtime actions. If you want to see how to add one to your component, there are many excellent tutorials around on the web, check out Zarko Gajic's excellent series on delphi.about.com.
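As a sketch of how those designtime Verbs are wired up, here is a minimal component editor. The unit and class names (TMyComponent, TMyComponentEditor) are invented for illustration; the package containing a unit like this must be a designtime package that requires designide.

```pascal
unit MyComponentEditorUnit;

// Sketch only: a minimal designtime component editor that adds one
// right-click verb. All names here are placeholders for illustration.

interface

uses
  DesignIntf, DesignEditors;

type
  TMyComponentEditor = class(TComponentEditor)
  public
    function GetVerbCount: Integer; override;
    function GetVerb(Index: Integer): string; override;
    procedure ExecuteVerb(Index: Integer); override;
  end;

procedure Register;

implementation

uses
  Vcl.Dialogs, MyComponentUnit;

function TMyComponentEditor.GetVerbCount: Integer;
begin
  Result := 1; // one entry in the right-click menu
end;

function TMyComponentEditor.GetVerb(Index: Integer): string;
begin
  Result := 'Configure My Component...'; // the menu caption
end;

procedure TMyComponentEditor.ExecuteVerb(Index: Integer);
begin
  // Here you would open your configuration dialog; Component is the
  // instance that was right-clicked on the form.
  ShowMessage('Configuring ' + Component.Name);
  Designer.Modified; // tell the IDE the form has been changed
end;

procedure Register;
begin
  RegisterComponentEditor(TMyComponent, TMyComponentEditor);
end;

end.
```

Double-clicking the component on the form runs the default verb, which is where the "Edit" behaviour mentioned above comes from.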
Examine And Understand Your Current IDE and Project Settings, and Make Sure You Know How to Compile and Link A Package So that the BPL that Is Loaded Into The IDE Contains Full Debug Information


  • Before you start changing anything, be sure you understand your working environment. Spend a minute to review the following aspects of your IDE's configuration, and your project configuration:
    • Look at your IDE Library Path. What folders will be searched when the compiler looks for DCU and DCP files?
    • Look at your IDE default DCP and BPL output paths. What paths are used? Are they the defaults, or a custom setting?
    • Look at your project search path. What folders will be searched when building this project in debug mode, and then again in release mode? Look at both.
    • Look at your project BPL and DCP output paths. Are you using the default (that field is blank in your project), or are you overriding it? Where do you expect the BPL to be output when you build your package? Is that the same place in debug mode as in release mode?
    • Look in the IDE Options at the environment variables, and examine your PATH. What folders currently contain your BPL files? Do you understand which packages you want to load from which locations?
  • You will want your package to be compiled in Debug mode, that is, compiler and linker settings set to include debug information.
  • Somehow we need to get the IDE to load our debug package, and every package it depends on, from the set of folders in your PATH.  This might be as simple as hitting compile once on your package while the debug settings or the debug configuration are active. In ancient Delphi versions, which lack separate Debug/Release build configurations, this is, paradoxically, more straightforward: just check that the compiler debug info and linker debug info checkboxes are on.  On modern versions it gets trickier.  For example, if you are a tidy sort of person, and you have set up your packages so that your debug and release builds output their BPL files into different folders, you may run into some headaches at this point getting everything to load. In the end you have a set of DLLs that have the file extension BPL instead. Now suppose you want to load a mix of debug and release packages, and you have both the debug and release BPL output folders in your PATH. How do you suppose that's going to work out for you? Suppose your BPL depends on another set of BPLs.  The longer I consider the mess I have sometimes made by having multiple BPL output folders, the less I like doing that. I would have only ONE package BPL output folder, but separate DCP and DCU output folders.  Changing from Debug to Release mode when building a package, I would want all my outputs to go to the same binary BPL output folder, so I have only ONE folder to manage in my PATH.  Switching executable/DLL search PATH values in and out is a source of headaches and pain for me, and I suggest you avoid that pain.
  • At this point, before you continue, you should be able to build your package, know that it is built with debug information, and know that it will load into your IDE without errors. To verify that you have accomplished at least this much, click the Component menu, then click Install Packages, and find your package in the installed packages list.  If its checkbox is unchecked, or if you saw some errors when starting your IDE or loading the project which uses your component, you have not yet succeeded in compiling your package with debug information turned on while keeping it in a loadable state.  Make sure the BPL that is in the IDE has the same FULLY QUALIFIED FILENAME INCLUDING ITS ENTIRE PATH, and is not just a package with the same name in a different directory. If the debug version of your BPL is output to a different path than your release BPL, then you shall have pain here.  I said that already above, and I'll say it again, and still you'll plough on past me, and then wonder why you can't set breakpoints inside your BPL, and generally can't debug things.

Click Run → Parameters and Set the Host Application

For most modern Delphi versions, type $(BDS)\bin\bds.exe into the Host application edit box.
If you're a sad-sack using some ancient Delphi 5/6/7 version, put the path to your main Delphi32.exe into the Host application box. Now click OK.  Now click the debugger Run icon, and hold onto your hat, because Kansas is going bye-bye.  Things are about to get weird if this is the first time you've ever done this.



   
If you did this right, you'll start out seeing the Delphi splash screen, and then probably your whole IDE (if it's maximized on a one-monitor system) will be covered by another IDE window.  I recommend you un-maximize, and then move your second Delphi instance over to a different monitor at this point.  Which one is which? Look carefully: the debugging instance has the Debug desktop profile selected, and has [Running] in the caption of its window.  I always keep that one to the left of my workstation (either on a left monitor or on the left side of my main monitor), and I keep the application under test to the right, or on a secondary monitor.



Now I can work with the Delphi form editor, perhaps create a new project, drop the component I'm debugging onto the form, and start setting properties via the property inspector. I could open up the Component Editor (which is a designtime-only bit of code that may contain a wizard to help me configure my component's properties), and debug that.  If I'm debugging an IDE Expert, I could click Tools → My Wizard → Do Something, and watch my Wizard Do Something, or have the IDE stop my wizard for me, on first chance exception, because my wizard is accessing an object which is NIL, so of course, I'll get an access violation. Now I can see where that code is, and examine it.

If you find you cannot debug your package in this situation, then something went wrong in the areas I covered in the first half of this blog post. If you thought I was belaboring my points above, rest assured, I was only trying to be helpful. Are you really aware of every detail of your environment? Of what your Windows environment variable named PATH contains? Of what every project has as its output folders, not only in Debug mode, but also in Release mode? Of what all your compiler and linker settings are, and how they affect your environment? Sure, that's a lot of information. But what are you? Are you a developer, or a whiner?  Woman up, or man up, as the case may be. Dig in. Solve your own problem. Nobody else has the tools in front of them to solve this. You do.  Go to it.

Thursday, December 3, 2015

Thoughts on Delphi 10 Seattle, and Idera

Most people who follow what's going on in the Delphi world are aware that Embarcadero was sold by Thoma Cressey Bravo to Idera.  I had not heard of Idera, and when the acquisition happened, like a lot of people, I googled the name, found the company website, and did a little reading.  I could see the synergy between their flagship MS SQL monitoring product line and a lot of Embarcadero's database tooling, but the development tools market appears to be somewhat new to them.  I'm a member of the "MVP" program, and I've sent my thoughts and ideas along to some of the folks on the new leadership team, and I'm looking forward to seeing some fresh ideas for growing Delphi's market share and getting more people using Delphi.

In my opinion, Delphi "10 Seattle" (once I have gotten used to the change away from the XE naming convention, which I quite liked) is a good release, and worth upgrading to.  The extra memory available inside the IDE, due to the large address aware IDE core, is a welcome improvement. The IDE is also faster, and more stable than ever, partly due to the increased memory available, and also because it seems a lot of work has been done to make startup, shutdown, building, running, debugging, and loading and unloading packages work better, and faster.  

That being said, I have to admit I still get Delphi crashes.  One of the difficult parts of being a Delphi developer is the inability to tell the difference between a Delphi crash caused by a bug in Embarcadero's code, and a crash brought on by the components and experts I've loaded into the IDE. Since I can't work without my components, and really can only choose to turn experts on or off, it's a difficult problem.  One hopes that, if stability is still an issue for people, there might be some way to tell the difference between a designtime package gone amok and a real Delphi bug.  Maybe the designtime packages should be loaded into a different process, one that contains only the form designer, and not the rest of the IDE's code-space.  Then the form designer could say "oh snap! something's gone wrong!" without the rest of the IDE going snap too. Some day, right?

The best technique currently available is to run the IDE inside the IDE.  Delphi can debug itself, which is cool. Unfortunately, unless you already know a likely way to cause a crash, this is not a great help, as the inner IDE instance will run quite slowly.  I have taken to instrumenting my packages, adding OutputDebugString messages in key module initialization and finalization sections, so I can see the packages loading and unloading.  That gets a bit difficult after a while; in my case, with 247 designtime and runtime packages (the designtime packages depend on runtime packages, so in total I have 247 BPL files loading into my IDE), it's too much for me to figure out.  The codebase I work on right now is the largest I've ever worked on. It's impressive, and in Seattle it's working fine, but I have some doubts about the stability of the latest Developer Express packages inside Delphi 10 Seattle. There's something weird going on, and I'm not sure what's up. Once I isolate it, I'll report it both to Developer Express and to Embarcadero, but due to the complexity, that's not easily done.

I have reported some issues on high-DPI systems, and I'm pleased to see some progress on this with the recent "Update 1".  It seems there is more still to be done in supporting the latest hardware, including 4K and 5K monitors, with their lovely crisp 150 to 200 DPI pixel densities.  I'm using Windows 10, which is really quite slick on systems with multiple monitors, where each monitor can have a different DPI scaling factor.  Delphi has some initial support for this, but where I work we already have our own classes to handle it, so we have the interesting problem of deciding how to merge the new VCL DPI-awareness functionality with our own.

I'd like to hear your comments, if you're reading this, and you've tried Delphi 10. How are you finding it so far?








Tuesday, September 22, 2015

Delphi 10 Seattle Introduction at Toronto Delphi User Group

I'm giving a talk tonight at the Toronto Delphi User Group. I will try to record the talk and post a video of it here after it's completed; it will be hosted on YouTube.

Meeting is Tuesday, September 22nd, 2015
North York Central Library – 5120 Yonge Street
6:00PM to 8:00PM

Thursday, August 27, 2015

Webinar : What's coming up in RAD Studio, Delphi, and C++ Builder.

I just found out about an upcoming webinar from Embarcadero, on September 2nd:
Ride the Windows 10 Release Wave See What's Coming in RAD Studio, Delphi and C++Builder

Any improvements in Windows 10 support are of course welcome, but as a guy who works with large projects, this sounds good too: 

Double the IDE Project Power:  The product and build management system has been re-architected which effectively doubles the project size capacity resulting in higher stability and performance when dealing with large projects, particularly multi-platform projects.

Basically, if the IDE can handle very large projects better than it did before, anybody who works with five-million-line codebases, as I certainly do every day, will benefit.

Check it out and click the link above.

Saturday, May 23, 2015

Questions that Reveal the Importance Of Good Tools and Add-Ons for Delphi Users

I'm working on a review of the new Nexus Quality Suite, which I'll be posting as soon as I've finished it.  I just wanted to write a quick post about a very simple topic that I think might be a good set of questions for every Delphi shop's developers, and every single-person Delphi developer, to think about:


Can you afford NOT to answer any of these questions?

1.  What is the slowest function in my application? (Total CPU seconds spent while doing some particular sequence of commonly used features)

2. What function is called most often in my application, perhaps thousands or millions of times, perhaps even unnecessarily?  (Function X calls function Y, in a tight loop, burning CPU, and something calls function X more often than necessary.)

3. Do I have any memory leaks? Are any heap memory areas being written to after they are freed?

4.  Are there errors in my code that the Delphi compiler and its hints and warnings can't show me, that could be found easily by static analysis?

5.  What is your largest module (lines of code)?  What is your longest and most complex, and most potentially buggy function?

If you need answers to #1, and #2, and I think most developers do need answers, then you need a profiling tool.

If you need answers to #3, you need some kind of heap debugging tool, either FastMM in Full Debug Mode, or something equivalent.

If you need answers to #4, you need some kind of "lint" tool, either CodeHealer or Pascal Analyzer.

If you need answers to #5, and you're on XE8, you have Castalia, which will help you find functions that could be the most complex, or largest, or the largest units, in your application.   There are also some answers to related questions in the Audits and Metrics feature built into higher SKUs of Delphi.
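As a sketch of what enabling FastMM in Full Debug Mode (question #3) looks like in practice: FastMM4 must be the very first unit in the program's uses clause so it can install itself before any allocations happen, FullDebugMode is defined in FastMM4Options.inc or in the project's conditional defines, and FastMM_FullDebugMode.dll should sit next to the executable so stack traces can be resolved. The program and unit names below are placeholders.

```pascal
program MyApp;

// Sketch: FastMM4 first in the uses clause, with FullDebugMode defined
// (in FastMM4Options.inc or the project options). In that mode, freed
// blocks are filled with a debug pattern so writes-after-free are caught,
// and leaked blocks are reported with allocation stack traces on shutdown.

uses
  FastMM4,   // must come before every other unit
  Vcl.Forms,
  MainFormUnit in 'MainFormUnit.pas' {MainForm};

{$R *.res}

begin
  Application.Initialize;
  Application.CreateForm(TMainForm, MainForm);
  Application.Run;
end.
```

Because the memory manager must be installed before any other unit allocates, putting FastMM4 anywhere but first in the uses clause defeats the whole exercise.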

What other questions should every Delphi developer ask about her codebase, and what tools do you recommend to find the answers to those important questions?

Sunday, March 8, 2015

20 Years as a Delphi Developer

This is the talk I gave at the TDUG meeting in February, now on YouTube.


Monday, February 23, 2015

Awesome new IDE plugin : Test Insight


This plugin is awesome: Test Insight

Go download it now.  It works with DUnit, DUnit2, and DUnitX.






Do you know what this little image represents?  It represents a whole new level of "flow state" for Delphi developers who want to practice TDD.

You need this. You really need this. Unless you don't ever write tests. In which case,  you should be wearing this:


Wednesday, February 11, 2015

Toronto Delphi User Group Meeting

I'm the speaker at the upcoming February 24th Toronto Delphi User Group meeting.

The group had been a bit idle for some time, and we're re-launching it.  Probably not every month, but we'll see what demand and attendance calls for.

Location : Toronto Northern District Library, 40 Orchard View Blvd.
Time: 6 to 8 PM
Date: Feb 24

Please RSVP at the TDUG blog, here:

The TDUG blog post is here.


Delphi at 20: A Retrospective

My talk will be about 20 years as a Delphi programmer.  It will be a retrospective with a bit of a forward look as well.  I think Delphi is a young and rapidly growing technology, but I also happen to have a rather large amount of experience as a user of the product, and as a passionate advocate of Delphi.
It's time to look back, and time to look forward.  Bring your own retrospective goggles.



Wednesday, January 28, 2015

Odds and Ends: Another CodeSite thing and a thought about the 20th birthday of Delphi

I think I forgot to mention a lot of features in my CodeSite review, but there's one that just made me so happy. I actually smiled. While coding. I know, eh?

Imagine this. I'm persisting some complex structure as XML and writing it to the DB. Writing that big mess out to a plain text log file quickly gets you into "drinking from the firehose" territory.  But what if your viewer just showed a quick summary like "Variable = <?xml version ....."?  Then you click it, and you can inspect that row "zoomed in", using the Inspector.


Here is a sample:


Now, instead of feeling like I'm drinking from the firehose, I feel like the guy who can pick exactly the rows he wants, and find the items inside that he needs.  You really need to try sending giant blobs of text to CodeSite, as ONE CodeSite.Send, and using the Inspector pane, to realize: hey, there is NO reason to use anything else for logging.

It's that good.

I suggest that you define separate, specialized sets of IFDEFs, and use these giant dumps to CodeSite only when you need them, like this:

{$ifdef TROUBLESHOOT_PERSISTENCE_XML}
   CodeSite.Send( 'LoadSomething',  AXMLString  );
{$endif}


Delphi 20th Birthday Party Coming Up

Hey folks, did you realize that Delphi is going to be 20 in a few weeks?  You know what would be a good idea to celebrate? Find your local Delphi user group or meet-up, and join them for a beverage, at some point in February 2015.  I plan to do just that in Toronto.

Update:  Register now for Delphi Week 2015!


Sunday, January 18, 2015

Debugging, Inspection, and Logging: Tools and Techniques (Why I like CodeSite)

I'm going to talk about the importance of logging, and in particular, why I like to use CodeSite in my Delphi applications.  Before I do that, I want to place logging within a context. It is one tool, among the other tools available to you, that gives you insight into your running code, so that you can observe the behaviour of your application, determine its correctness, and even gain some ability to reason about how to fix it, just from the output of the logs.

Fixing performance problems, validation problems, exception mis-handling problems (catching things you shouldn't catch, not catching things you should), detecting and fixing thread deadlocks, problems with control painting, or high-level user interface behaviours like persistence of user customizations to the application's grids or toolbars: these are things I've never found any way to wrap up in unit tests and then fix.  Attempts to do it that way have just wasted time and money.

First, I'll say that I wish I found and fixed all my bugs by wrapping all my application code in unit tests, and that TDD were so wonderful that nobody needed logging, and nobody ever needed the debugger. Ha. What a laugh.  I'll bet TDD people are keen on TDD precisely because they have spent, and still do spend, a lot of precious time in the edit-compile-debug cycle.

Logging can be a way to reduce the amount of time you spend in Edit-Compile-Debug.

So, let's take it for granted that any bug that can be found by TDD practices probably should be found that way, and perhaps have been found that way already, and that leaves behind the bugs that don't get caught that way.  For those bugs, there are three main debugging techniques that I rely on:

1.  Making effective use of the debugger, especially the advanced breakpoint properties.  Some people seem to think this is the only way to find and fix problems, though.  Set a breakpoint. Inspect a variable value.  Step into and out of a complex code path.  Yes, using a debugger is a critical part of being an effective developer, but sometimes you can spend a long time chasing bugs, or even just finding where a problem is.  There are a lot of features in the debugger that, if you use them correctly, can reduce the time it takes to find your problems. One of them is the set of advanced features in the Delphi breakpoint settings window: conditional breakpoints, and even non-breaking breakpoints, which can be used to generate log messages without modifying your code:



2. Code inspection.  Read your code. Without running any code, and without even much searching, it is sometimes possible to locate the code that is malfunctioning and find the necessary repair, without even running the debugger, by a combination of good code reading skills and a good working knowledge of where in the code to go looking.  The question is, how do you find the code that was running right when the problem you experienced was happening?  I find inspection useful only after my code is instrumented with trace logging, or when the failure happens in an area where I have just been working and made changes.  My mental process is usually to enumerate all the cases that I have to handle, and see if all the states are detected properly, and then correctly handled.  Many bugs can be found by determining that the code to handle a particular scenario either doesn't exist, or can't possibly run (even if it exists) because the code that detects the scenario is not correct.  I find in custom-control work, for example, that states like Disabled and Hot (the less common states for a control) are where a lot of the bugs in my custom controls lurk.  Another kind of inspection that I like to do is reviewing the version control logs and looking at what's changed today (especially when I'm part of a team working on the code). Sometimes, by reviewing the version control logs and source code diffs, I can quickly zero in on a change that caused a problem.



3. Trace logging, also known as "debugging by print statements".  There are a wide variety of techniques for logging, and CodeSite, my topic today, is just one of the available ways to do it.  A classic one, which I've used for a long time, is the Win32 API function OutputDebugString.  You can see its messages right in your IDE, or using the free Sysinternals DebugView tool, which you can download from Microsoft. Another way is to roll your own little disk-based logging library.  I think we all believe logging libraries are not very difficult to write properly, until we've written one.  Open a file, append a line of text.  If you don't have to worry about multi-threaded logging and multi-process logging, you might make do with a pretty simple logging system; it probably has a global critical section that blocks every thread in your app for the 300 microseconds it takes to write a log message, and that's probably fine for your needs.  Probably most Delphi developers who have been working for a while have their own favourite logging system.  I have my own:


  • I like one file per day, named in the form YYYY-MM-DD.log and I like logs to auto-clean after some sensible period like 30 days. That way if I turn on logging at a client machine, it's never ever going to fill the hard disk with my logging messages.  
  • I also like to have multiple levels of logging, at least three:  Information, Warning, and Error.   I like the message to have the timestamp first, the severity indicator second, followed by some kind of area or scope name, and then the message.   Something like this:
   2016-01-16 08:03:02.123 [I] [NETLISTENER]  Listening on port 8081

  • Typically my logging library has a simple API that I don't have to think about. My own home-brew logging system is built around a function called Log:
    Log( lvInfo, 'Listening on port %d', [ FServer.Port ] );


  • If I'm in a module, I might have a Log procedure that is private to the implementation section, which adds the facility (section) of the application, the 'NETLISTENER' above, and invokes a global callback function, if it's assigned:

procedure Log(LogLevel: TLogLevel; const FormatMessage: string; const Args: array of const); inline;
begin
  if Assigned(Logger) then
    Logger('NETLISTENER', LogLevel, FormatMessage, Args);
end;

What I wish is that I didn't have to wrap log messages that I only want enabled sometimes in an explicit {$IFDEF}.  I'd love it if I could declare inline functions as conditionally "no-op", so that I could compile out all the logging calls in my application in a non-logging build, just as Assert can be compiled out. If I were using C++, this is a place where Log() would be a macro instead of a function.  Although I don't much like having to do it this way, I often find myself going back and conditionally defining my log messages, like this:

{$IFDEF LOG_NETWORK}Log( lvInfo, 'Listening on port %d', [ FServer.Port ] );{$ENDIF}
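As an illustration of the first bullet above, here is a rough sketch of the daily-file-plus-auto-clean scheme. This is not my production code; the names TodaysLogName and PruneOldLogs are made up for this example, and it assumes a Delphi version with the IOUtils unit (XE or later):

```pascal
uses
  SysUtils, IOUtils, Types;

// Name of today's log file, e.g. C:\Logs\2016-01-16.log
function TodaysLogName(const LogDir: string): string;
begin
  Result := IncludeTrailingPathDelimiter(LogDir) +
    FormatDateTime('yyyy-mm-dd', Date) + '.log';
end;

// Delete any *.log file older than MaxAgeDays, so that logging left
// turned on at a client machine can never fill the hard disk.
procedure PruneOldLogs(const LogDir: string; MaxAgeDays: Integer);
var
  FileName: string;
begin
  for FileName in TDirectory.GetFiles(LogDir, '*.log') do
    if TFile.GetLastWriteTime(FileName) < Now - MaxAgeDays then
      TFile.Delete(FileName);
end;
```

Calling PruneOldLogs once at application startup, before opening today's file, is enough to keep the log directory bounded to the retention window.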

So I have my own logging library, and you have yours.  Is logging like lasagna, something you should always make yourself and never buy ready-made?  No.  I think there's a time and a place for both in your tool-chest.

Sometimes I will use my own home-brew logging system, sometimes CodeSite, and sometimes both, even in one codebase.   Already having a logging system of sorts in place isn't a good reason to ignore CodeSite.  You might find your existing ad-hoc system and CodeSite actually combine pretty seamlessly.

Why CodeSite?

If logging to a file is a few lines of your own code, and doesn't require CodeSite, why do I like CodeSite so much?

Well, the first thing I do in any codebase that has its own logging functions is wire CodeSite up to my own internal logging system's handler function.

Just add CodeSiteLogging to the uses clause and call CodeSite.Send, and you've got the lovely CodeSite Live Viewer application up on your screen.   Live viewing rocks.   But then, OutputDebugString can feed the "live viewer" built into your debugger. So why is CodeSite's Live Viewer better than just adding an OutputDebugString call to your logging handler function?   It's easier to read the Live Viewer window's output, and easier to find the stuff I care about.  The one area where homegrown systems and the IDE's built-in OutputDebugString view tend to fall apart, in my experience, is in finding the messages I care about and ignoring the ones I don't.
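Wiring CodeSite into a home-brew logger can be as small as one handler procedure. The sketch below assumes the Logger callback and TLogLevel type from the home-brew example earlier, and forwards each formatted message to the Live Viewer; the CodeSite method names (Send, SendWarning, SendError) are from my memory of the CodeSiteLogging API, so check them against your installed version:

```pascal
uses
  SysUtils, CodeSiteLogging;

// Handler matching the home-brew Logger callback signature sketched earlier.
// Each message keeps its facility prefix and is routed by severity.
procedure CodeSiteLogHandler(const Facility: string; LogLevel: TLogLevel;
  const FormatMessage: string; const Args: array of const);
var
  Msg: string;
begin
  Msg := '[' + Facility + '] ' + Format(FormatMessage, Args);
  case LogLevel of
    lvError:   CodeSite.SendError(Msg);
    lvWarning: CodeSite.SendWarning(Msg);
  else
    CodeSite.Send(Msg);
  end;
end;

initialization
  // The existing file logger and CodeSite can coexist; the callback
  // simply fans each message out to both destinations if you chain them.
  Logger := CodeSiteLogHandler;
```

One assignment at startup, and every Log() call in the codebase shows up in the Live Viewer without touching the call sites.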

 Another, less common, problem is having to read several logs separately and combine them manually in my head.  CodeSite has a dispatching (routing) system that can combine messages from multiple applications and multiple machines into a single log file.  It also has ways to help me find the needles in the haystacks.  I can use CodeSite to arrange live logging scenarios that help me see what I need to fix a particular problem, in a distributed multi-application, multi-machine scenario.   If I have to manually collate and combine results from multiple log files, that decreases the overall efficiency of the logs as a tool.  I could have a server log and two client-side application logs, and have to go find a message at 11:32:17 AM in several different logs. Then I need to check whether all my computers' clocks are synchronized, so that I know if 11:32:17 is slightly ahead or behind on computer 1 or computer 2.

Another area where CodeSite really shines is as an IDE wizard for adding logging messages quickly to your application, when I need to quickly add a lot of instrumentation (new trace messages) to a huge area of code as an exploratory step, and then quickly take it all away.  The CodeSite Method Tracer will automatically instrument an entire area of your code, including all calling functions in a call tree, and even insert {$IFDEF CODESITE_LOGGING}..{$ENDIF} blocks around the generated CodeSite.TraceMethod calls.  Seeing what functions call what functions, in a structured way, can be very enlightening. Adding the execution times for functions, even more so.

Here goes, a few clicks...



Hey, presto, millisecond time-elapsed values, just what I wanted.




The more complicated the data types you work with, the more a tool like CodeSite helps.  You can send a whole record or object off to CodeSite, and then worry later about finding the values you need in the object.  When I'm working with a graphical control, this kind of time-series inspection is exactly the ticket.  With a debugger you get the current value. With a logging tool of any kind, even the roll-your-own kind, you get a series of points of information over time.  The question for me is: do I want to spend a lot of time implementing logging features, or do I want a tool that adapts quickly to a variety of ad-hoc logging tasks over time?

Here are some of the tasks that I find I use CodeSite for instinctively, due to the efficiency of working with the live-viewer, and the variety of tools and facilities in CodeSite:

1.   Anything I install into the IDE itself.  I prefer adding logging and instrumentation, then visually watching my log output; when this stuff misbehaves, I can see why it's wrong from the logging screen.  Seeing your messages INSTANTLY on screen is essential here.  For IDE wizards or component debug logging, opening a file and searching in it is a waste of my time, so a live-viewer logging solution is the only sensible choice.

2.  Any graphical drawing code, whether part of a component I plug into an IDE, or custom paint code inside an application (whether the custom-draw events of a control, or a control like TPaintBox that is all about owner-draw).  It's just useless to inspect this stuff with a debugger, and OutputDebugString techniques are not good enough for my purposes. The ability to send a TRect or a TPoint to CodeSite and see it visually is very useful.

3. Any time I want an expandable, collapsible tree view of log messages, giving me better organization than a flat log viewed in Notepad. In the screenshot for the previous point, did you notice the little icons before each of the log messages?  There's a little icon for entering a function, and another for exiting a function.  Next there's my Origin = message.   I know that somebody set Origin, and it's AFTER the exit of HasGlyphFor, but it must be inside some other context that doesn't have logging.

4. Any time I have a system that has multiple programs, parts, pieces, or services, or runs on more than one computer.  Honestly, I rarely co-ordinate logging from multiple sources, but when I do, it's like when I need the remote debugger: it can be a life saver on those days.

5. Any time I need filtering, or organization by categories. Particularly nice is the Auto Organize feature, if you make use of the Category field in your log messages, something I urge you to do.

6. Any time I want to quickly instrument (add a lot of logging to) code that I might later revert out of my working copy.  I use the CodeSite Method Tracer for that.

And now, perhaps I have convinced you to try adding some more logging to your application, or to try out CodeSite, which you may already have (at least the Express edition) if you have a modern version of Delphi.  CodeSite Express was first bundled with Delphi XE, and is still bundled today in Delphi XE7.

Now that I've stated why I love logging, I would like to discuss why logging can become less useful than it could be, no matter what tool I use, whether it's my roll-my-own file-based loggers or a tool like CodeSite.  Yes folks, there are more and less effective ways to do logging.


The Cardinal Sins of Logging

Logging is not a panacea.   Making good use of logging tools is harder than it looks.

You can make your life easier by doing the right kinds of logging.  I advocate a restrained, controlled approach, with a retrospective phase where you look at how you log and modify your approach if the approach is not paying rich rewards.

1. Logging too much.   A log that contains 100 messages per second, or 1000 useless messages that convey nothing, with one message in there that might tell me something interesting, is the most common way logging fails.   Curating your log messages is something of a skill, and removing useless log messages is something you should be even more ruthless about than removing dead (commented-out) code or useless and wrong comments.   If you cannot use your logging facilities to see problems because of all the noise, you need to reduce your logging.  In logging, often less is more.  If 99% of your log is noise, you may as well rip it all out and start over.

2. Not having the ability to filter your logging.   Maybe you added 500 log messages to your system's logging last year, chasing a bug you had that day. Now those messages are useless to you. The two ways I filter my logging are by category and by severity.  A category, in my head, is coarser than unit scope and finer than binary-module scope.  In a scientific application that deals with some kind of hardware communications, I might split the scopes into Communications, Analytics, Database, and then various user-interface Category values for parts of my GUI.  The category concept is about grouping your log information to keep irrelevant details apart.   The second axis is severity. Sometimes Information messages should be suppressed, so instead of just turning logging on and off, it could be set to Off, All Messages (Verbose), Normal (Warnings and Errors), or Quiet (Errors only).

3. Logging only to disk and never directly to screen.    I find locating logs on the disk, opening them in an editor, and finding the rows that are recent and relevant a complete waste of my time. In fact, I ignore logs completely if I don't have a live viewer. Even if you don't want to use CodeSite, at least adding an OutputDebugString line to your own in-house file logging code is friendlier than having to stop what you're doing, browse in Windows Explorer to find a log, and then find the content in it.

4. Slowing down your end user's experience, or otherwise degrading the application's primary role, due to logging. Your logging solutions, whatever they are, should be possible to turn off, and should not consume memory or slow the user down when turned off. If they do, and you can't fix it any other way, you should IFDEF the logging OUT of the code.    In some systems, logging is not worth writing to disk until a serious fault occurs.  Then, and only then, perhaps some context needs to be written. In high-performance applications, I have designed a trace logging system that logs only to memory, in a circular buffer. When some fault occurs that justifies the expense of persisting the log content, I save an incident file containing the preceding recent information and warning messages from the circular buffer.   Or I append the recent in-memory records to an exception traceback (which I might generate using JclDebug or MadExcept), for purposes of automated bug reporting.   This approach is more appropriate to some application domains than a simple disk-based logging system.

5.  Not cleaning your logs up, or not having a good log rotation and naming system.  I have seen logging systems that are a pain to leave turned on for any length of time because they fill the user's disk with gigabytes of noise, and they don't neatly switch files per calendar date, making it hard to keep a recent amount of log data around (say, the last 7 days) while wiping out data that is so old it no longer matters.

6.  Omitting log headers.   My favorite practice is to have daily files with the date in the file name, where the first line in the file is the executable name (with full path) and the version. Something like this:

*** C:\Program Files\MyApp\MyApp.Exe Version 1.2.3.4
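To make the circular-buffer idea from point 4 concrete, here is a minimal sketch. The names (TRingLog, TLogRecord) and the design details are illustrative only, not from any real library: records go into a fixed-size in-memory ring with no disk I/O at all, and only a fault triggers a flush of the recent history to an incident file.

```pascal
uses
  SysUtils;

type
  TLogLevel = (lvInfo, lvWarning, lvError);

  TLogRecord = record
    When: TDateTime;
    Level: TLogLevel;
    Text: string;
  end;

  // Fixed-size in-memory ring; nothing touches the disk until Flush.
  TRingLog = class
  private
    FRecords: array of TLogRecord;
    FNext: Integer;   // next slot to overwrite
    FCount: Integer;  // how many slots hold valid records
  public
    constructor Create(Capacity: Integer);
    procedure Add(Level: TLogLevel; const Text: string);
    procedure Flush(const FileName: string); // persist after a fault
  end;

constructor TRingLog.Create(Capacity: Integer);
begin
  inherited Create;
  SetLength(FRecords, Capacity);
end;

procedure TRingLog.Add(Level: TLogLevel; const Text: string);
begin
  FRecords[FNext].When := Now;
  FRecords[FNext].Level := Level;
  FRecords[FNext].Text := Text;
  FNext := (FNext + 1) mod Length(FRecords);
  if FCount < Length(FRecords) then
    Inc(FCount);
end;

procedure TRingLog.Flush(const FileName: string);
var
  F: TextFile;
  i, Start: Integer;
begin
  AssignFile(F, FileName);
  Rewrite(F);
  try
    // Write oldest record first; when the ring is full, FNext points at it.
    Start := (FNext - FCount + Length(FRecords)) mod Length(FRecords);
    for i := 0 to FCount - 1 do
      with FRecords[(Start + i) mod Length(FRecords)] do
        WriteLn(F, FormatDateTime('yyyy-mm-dd hh:nn:ss.zzz', When),
          ' [', Ord(Level), '] ', Text);
  finally
    CloseFile(F);
  end;
end;
```

A real version would need a critical section around Add and Flush for multi-threaded use, but the shape of the idea is this small: logging costs a few string assignments until the day something goes wrong.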


Going to Production - CodeSite works there too.

CodeSite is fully capable of working in a production environment; I just haven't used it much for that yet, as I also have my own (or my employer's) production file-based logging code.  Whatever logging system you use in production, remember, before going to production, to add a way to configure your logging subsystem, and then have some configuration user interface. At a minimum, you will probably want options to turn it off completely, or to set a logging level (Verbose, Normal, Quiet).

Going to production doesn't cost anything either, since the tools, including the dispatcher, live-viewer and file-viewer are freely re-distributable.  That's awesome.



My CodeSite Wish List : Greater Ability to withstand a Drink from the Firehose


"Drinking from the Firehose" - scene from UHF (1989)

Notwithstanding my advice above that logging too much is bad for you, there have been times when I wanted to generate a giant pile of log messages and then filter the data later.  CodeSite is the best tool I have found for that job as well.  I have found it's best to turn off the auto-scrolling feature in that case.    What I wish I had is a technique that lets me see only new errors as they occur: leave auto-scroll turned ON, showing new error messages, and hide all messages of severity less than Error, until I pause the logging and want to "zoom in".   With auto-scroll on, and a high enough message rate, I feel a lot like the kid in the movie clip above.  Michael Richards (and Weird Al) should have got an Oscar for UHF. There is no justice in Hollywood.


 Express versus full CodeSite Studio

CodeSite Express, which comes bundled with the XE through XE7 versions of Delphi, has enough to get you started adding CodeSite.Send() calls to your code, and includes enough live tracing functionality that you really should be using it.

The ability to send much more than just strings, including whole objects or collections in one shot, is worth the upgrade alone.   The high-precision timing features and the TraceMethod technique are only available in the full Studio edition.  There are even a few cool things in the full-featured product that I haven't used yet, like conditional sending (SendIf).



Monday, January 12, 2015

I've been a bit busy lately! Upcoming posts on Delphi Code Monkey

I apologize for the lack of blog activity for the last few months. I've been in Crunch Mode at work, and various things are going on in my family life right now that have made it harder to blog about Delphi.

I remain an active Delphi guy, all day at work, and I'm happily using XE7 in production.  I remain confident in the product's stability curve.  XE6 was better and more stable than XE5, and XE7 has continued to be stable.

Here are some things I'm planning to write about soon:

1. The sad state of DUnit and the plan to resuscitate it.  This plan is currently underway by the current maintainer of DUnit, Jeroen Pluimers.   He plans to move the project from SourceForge to a public Git-based hosting system, abandoning Subversion for Git.  I plan to be one of the contributors to the newly resuscitated DUnit, and I'll have my own branch (which I will use to submit pull requests, using GitFlow).  I'm quite excited to see DUnit come back again.   I will probably also include some notes about WHY I'm not interested in DUnit/X right now, and yet I am quite interested in seeing the changes that went into DUnit2 merged up to the DUnit trunk.

2.  A plan to relaunch the WANT program, also a Juancarlo Anez original, in a new form.  I will probably host the code at Bitbucket, and I plan to use Mercurial.    I need to discuss the plan with my employers, as I want to take some of our in-house customizations, changes, and external tools and build them right into WANT.   Why?  Because CI (continuous integration via build servers), IDE package installation, and unit and integration testing frameworks (like DUnit and DUnit/X) should be integrated.

3. I'm planning to write a blog post on how I use logging tools, especially CodeSite, to help me with my debugging. I think that will be the next one.

Sorry for the quiet months on the blog, but this blog is back, and I've got some stuff to talk about coming up in January 2015.   Today is, by the way, my 45th birthday, so I might write a post later this month about the issue of Getting Old, from the perspective of a software development professional.