Monday, August 12, 2013

... just write *what* down?

In a previous post about the importance of reducing documentation-related technical debt in the near term, I said Just write it down.

But what's the *it* in *write it down*?

If someone vaguely suggests you need to document your code better -- what? How, where, when?

Here are a few ways to answer that question.

Explaining to yourself and others

Six months from now, when you come back to this project, you'll have to spend a morning figuring out your own stuff. Whatever you spend that morning figuring out (Yes, a List<?> argument would have been more elegant here, but a bit downstream we're writing to a third-party API which takes List<Object>; or, The counterintuitive spelling of these log-file entries matches the syntax expected by a log-parsing tool that lives outside this package) is precisely what you need to write down now.

But ... six months from now hasn't happened yet! So maybe leave the code as is, and do the documenting six months from now? No: right now, while you're mid-project, your brain cache is loaded with corporate value. If you wait, that value will expire from the cache.

Instead, maybe think about the previous project, and mimic that. 

Or, ask someone else what's not clear to them. Say you spend half an hour explaining something to them. Whatever you say to them to make it clear ... that's what needs to go into code comments and/or a document.

Build up patterns

After a few projects you'll see the pattern of what kinds of things you end up explaining to yourself and others. Then on project n+1 you can go ahead and write those things down in the first place. 

My personal list of the kinds of things I end up writing down: 

* At the top of source files, a hyperlink to the project's wiki doc; and in the project's wiki doc, a link back to the source-code package. Also, just a few sentences about Why does this code exist? Who was it written for? What does it connect to? Who consumes its data? Nothing fancy or elaborate is needed here, but providing this context is crucial. This is the first light at the beginning of the tunnel your reader is about to walk down -- so tell them which tunnel this is, and where it goes. (A sketch follows this list.)

* For the wiki doc, take the time to make an architecture/flow diagram or two: rough-draft with pencil and paper, then use the figure-draw feature of any office software and save as PNG or take a screenshot. These are worth their weight in gold. Usually diagrams are better off in wiki docs, but occasionally ASCII art is feasible -- and has the benefit of being able to go directly into the code.

* Use nitty-gritty comments for nitty-gritty code: any subtle thing which, upon re-reading, makes me puzzle for five minutes before the light bulb goes on over my head. That light-bulb insight must become a code comment.

Classic example: a subroutine with a complicated regexp for parsing some config-file contents. Or pct = `df -Ph /var/log`.split(/\n/)[1].split(/\s+/)[-2]. (Wait, what?) Simply copy and paste a sample of the input being parsed, saying, "Here's the kind of thing we're parsing". For example:

# Sample input (we're taking the second-to-last field of line 2,
# without first checking for existence of the directory):
# $ df -Ph /var/log
# Filesystem      Size  Used Avail Use% Mounted on
# /dev/sda1       185G  137G   39G  78% /
free_percent = `df -Ph /var/log`.split(/\n/)[1].split(/\s+/)[-2]

This kind of thing takes seconds to paste in, and not much space, but it's a huge time-saver for the next person looking at that scary regexp or split/join/etc. (Especially when the input in question is something off a socket -- something they can't just type up for themselves.)

* During a project, from design through implementation, testing, and ops, I always end up keeping a set of browser bookmarks and a text file into which I copy/paste often-repeated commands -- paths to things, tools I wrote, tools I often use for the project, etc. Those bookmarks and the frequently used items from my cheat sheet must be copy/pasted into a document and/or code comment. These too are worth their weight in gold.
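
For concreteness, here's a minimal sketch of the first bullet above -- the project name, wiki URL, consumers, and the little ASCII flow are all hypothetical, invented for illustration:

/**
 * Normalizes the (hypothetical) vendor market-data feed into our
 * internal schema.
 *
 * Why this code exists: the vendor feed arrives in a one-off
 * fixed-width format which nothing downstream can read directly.
 * Who consumes its data: the end-of-day reporting jobs.
 *
 *   vendor feed --> FeedNormalizer --> internal schema --> reports
 *
 * Big-picture docs (architecture diagrams, config-file formats):
 * http://wiki.example.com/projects/feed-normalizer
 */
public class FeedNormalizer {
    // ...
}

A dozen lines of comment, but they tell the reader what tunnel this is and where it goes before they start walking.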

The key point here is that writing takes time and shouldn't be done for its own sake. Write down something which will save your co-workers more time than the time you're spending to write it. Be frugal with your team's time -- with foresight.

See also previous posts here, here, and here.

On-call support

What will you or your co-worker need at 3 a.m.?

* Ask yourself

* Ask them

* For a not-yet-deployed project, see what kinds of alarms / error conditions are going off, things that would result in a page if they were live. 

What about when it goes stale?

I'm asking you to do a dangerous thing. I like to use a junk-DNA metaphor: DNA has coding sequences (a mutation there is fatal) and junk sequences (a mutation there does nothing). [Disclaimer: apparently junk DNA does more than nothing. But I'm not a geneticist. It's my metaphor and I'm sticking to it.]

Code is like the coding sequences of DNA. Change an a to a q in the code and you've got a compile-time error or run-time bug -- which is actually a good thing since you'll find it right away. Comments are like junk sequences: change an a to a q in a comment (let alone something more significant) and the program runs as before. It's an informational time bomb which can blow up months or years later.

As time goes by, code changes, but comments often don't. Stale comments can be outright dangerous, leading someone to believe something untrue; they can be worse than no comment at all. And keeping comments up to date as code gets significantly refactored can be an awful lot of work (and a pain) -- leading us, perhaps, not to bother in the first place.

This is an important criticism. 

So what do we write down? And how do we insure against future harm? 

* Some things needn't be written down. (Don't turn everything into a novel.) In particular, the living code should be self-describing; this is the coding sequence, and it will stay current. (Name the boolean haveSeenHeader, not just flag.)

* Big-picture comments belong at the top of the file, or in a wiki page, and hopefully they'll stay true, or be easily found.

* Nitty-gritty comments belong right next to the nitty-gritty code. If that code gets changed then the comment should get found and go along with it. 

* When I'm in doubt, I find that a disclaimer of the form As of this writing (July 2013), this is ... is a good compromise: some information is better than none, yet you're giving notice that it may not always be true. (See the sketch after this list.)
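
Putting the first and last bullets together, a minimal sketch -- the vendor-feed scenario here is hypothetical:

public class FeedReader {
    // As of this writing (July 2013), the vendor sends exactly one
    // header line before the data lines.
    private boolean haveSeenHeader = false;  // self-describing; not just "flag"

    public void processLine(String line) {
        if (!haveSeenHeader) {
            haveSeenHeader = true;  // the header carries no data; skip it
            return;
        }
        // ... handle a data line ...
    }
}

The name haveSeenHeader is living code and will stay current; the dated comment is honest about the part that might not.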

Things that deserve special attention

* Interfaces: APIs and config files. File formats you consume or produce.

* Clever algorithms. Clever anything.

Thursday, April 4, 2013

Just write it down


A friend of yours walks into a local software firm at 2:30 on a Tuesday afternoon. Never having been there before, they won't know what's going on. They'll see people working; they'll see people typing. It'll be pretty quiet -- there'll be some people talking over an idea here and there but mainly people are facing their terminals with their hands on their keyboards, and they're working really hard.

The question is, What are they doing? What's really happening? What is it like inside the heads of those developers? What is the stuff that they do all day while quietly typing? I'll tell you (and this has been true everywhere I've worked, the bad and the good): all day, every day, they're stumbling, and they're staggering, and they're swearing. And the reason is that they're trying to build things. They're still building things because those things don't work yet. And the things don't work yet because they're trying to figure out how to put the pieces together.

This is where the time goes. This is where the budget goes.

I saw a magazine ad years ago which had a red sports car on a highway with red lights every fifty feet. I think that's what life is like for us most of the time.

The reason we don't understand how things work is that -- last year, last month, or even yesterday -- we didn't explain our hard-won lessons to one another.

My suggestion to you is to simply write it down. Write it down now, in a way that's just good enough for tomorrow. You can start small -- you don't have to write something as long as Moby Dick and it doesn't have to be perfect. The goal is just to save your co-workers a difficult day tomorrow or next week. Imagine one of the people near you and make their life a little bit easier.

I want to acknowledge barriers as well as solutions. These are two different things. I think the conversation is mainly about barriers. One set of barriers is when we say: I'm not good at writing, I don't like to write, I don't understand how the wiki tool works, I don't have any time, Not right now, I'm busy, I have another project to move on to -- or (worst of all) Perfection is unattainable so why bother.

Another set of barriers is when we say: We're here to build solutions, not documents. My counter-argument is that our solutions baffle us when we have to modify them or debug them not very far down the road. Development time too often is just code archaeology: much of it is not really development in the sense of creating something new, but rather re-discovery. There's this Joseph Heller quote that I like, that in every company there's at least one person going crazy. I would say that most of us are going crazy. We let ourselves get used to this continual re-deciphering of things and we accept it as ordinary. But it doesn't need to be.

What are some solutions? You can start small. You can simply ask yourself, What took me a lot of time this week? What was hard? Where did my time go? What did I wish I hadn't needed to do? What did I wish someone had just come out and told me up front? Those are the kinds of things most worth knowing. In terms of the cost of time, those are some of the most valuable deliverables you can offer. Write down the command you could never remember how to type, the data-update sequence you kept doing out of order, the steps for viewing the database-table schema. Make yourself a little cheat sheet for next time and put it where other people can see it. The deliverable is not only the software solution but also the knowledge of how you got to the solution.

There's a saying that the best time to plant a tree is twenty years ago. Fortunately we don't need twenty years -- weeks and months are just fine. And documents can live. You can cross-link between source code and wiki pages: simply have each one give the name of the other one. People will find them; they will stay alive and they won't become stagnant.

Write it down, and make it just good enough for tomorrow. This afternoon, take fifteen minutes (no more). Take your last project. Write down the purpose of the project and the top three gotchas. Make a few paragraphs and push the save button. Those fifteen minutes today can save more than fifteen minutes on your group's subsequent projects.

There's no single magic solution to all problems in software development. But the way I like to envision these little bits of solution-recording is reaching down the road ahead and, each time, flipping a few more traffic lights from red to green.

Sunday, February 24, 2013

The third half, concluded: kindness in software


In a previous post I wrote about the realization that as of this year I'll have been programming for half my life, and wondering where I'm going to take my career from here. I left a few details for a later post: meta-thinking, and kindness in software. This post is about the latter. It's about kindness to users, as well as other developers.

I should be clear that this is a separate skill from personal kindness. On a Friday afternoon, looking forward to a weekend trip and working with a smile on my face, I can absent-mindedly create a tool which people will utterly hate using on Monday. Likewise, feeling disgruntled with the status quo, I can be motivated to build something delightful.

The value of software for people who use it

I don't have too much to add here beyond the topic of agile software development. An awful lot has changed for the better over the years, and agile is a big part of that.

In summary, though, I'll say the following:

The waterfall model --- wherein I ask you what you want, go away and build it, and deliver you the result --- doesn't work well in today's faster-changing world (if indeed it ever did). For me the short summary of agile is that it's midway between the waterfall model, at one extreme, and Ray Bradbury's Jump off the cliff and build your wings on the way down at the other. Specifically:

* You don't really know what you want when we're writing requirements. But you will know what feels right when you use it with your hands, and you'll tell me about it.
* What you want will change as I'm building a solution for you, and the longer I go without talking to you, the wronger I'll get. So we'd better touch base often.
* Neither of us knows (unless I'm doing something very low-risk such as building a system nearly identical to one I've built before) everything I can actually do for you. So we'd better iterate back and forth, bouncing needs and solutions off each other in a very transparent way.

There are team-internal implementation details in agile, such as the scrum technique (daily 15-minute standing meetings to stay cohesive) --- but for developer/user interaction, one key point is user stories. The technique is useful for far more than software. If I'm doing something for you, I need to know: Why do you do what you do? What do you do? How do you currently do it? What do you hope for?

A quick sanity check I've used is:

* See if I can make a one-to-two-minute summary of what my customers' daily lives are like.

* If I find that I can't, then meet with them, take notes, draw pictures. Doodles help.  Tell them what my summary is and ask them for their opinion on it.

* Remember that some quick notes are better than none at all: get at least an outline written down first. Make it prettier as I go on, if that turns out to be important.

The other key point for agile, besides user stories, is the sprint cycle --- typically two weeks --- the longest stretch of time user and developer work independently. It formalizes the interpersonal feedback loop and makes it part of a scheduled process.

I've recently used these techniques on the job as a developer, and I've benefitted as a user from software built using agile. I have a lot more to learn about it, but I already know it's a better way to live.

The value of software for people who build it

Programming is the art of telling another human being what one wants the computer to do. -- Donald Knuth

Programs have two kinds of value: what they can do for you today, and what they can do for you tomorrow. -- Martin Fowler, Kent Beck, and John Brant, in Refactoring: Improving the Design of Existing Code

Code will be read and modified more times than it will be written. -- Andrew Hunt and David Thomas, in The Pragmatic Programmer

It’s harder to read code than to write it. -- Joel Spolsky's Fundamental Law of Programming

If your users are happy, is that all that matters? More and more of our world involves software. Moreover --- as systems get bigger, as open-source libraries continue to amaze, and as we all finally do more reusing than reinventing the wheel --- we're spending more and more time working with each other's stuff.

We had better be nice to each other.

Writing to be read

It's tempting to think that code only matters to the compiler and the platform executing it: that code is for the machine, that all we have to do is get the program working and move on. But in today's world, CPU, RAM, and disk are all cheaper than ever, while our time and our happiness are as valuable as ever. I believe we should design with concern for the resources which are truly scarce. This is not a zero-sum equation: coding for people is not mutually exclusive with coding for performance. In fact, I claim that it's easier to optimize a program you understand than one you don't --- and likewise for a program which has been designed to be modified.

So where do we start?  The thing I hate about Spolsky's Law is that it's true. It takes time and effort to up-end it, every time, to turn a confusing program into a joy to read.  The good news is that a few small steps can make a big difference. 

We spend a lot of time on things like "Where is the code that does task X? I'm looking at code C ... What does it do? How does it work? Who uses it? Who will be affected by my change to it?" Time goes by --- maybe a lot of time. I think the best thing to do for readability is to tackle each of those questions head-on: 

* Every significant project should have some kind of wiki page which makes a mental map for its source files.
* Source files should contain a simple hyperlink back to their wiki pages.
* Code can tell the reader not only how it does what it does, but also why it exists and who uses it.
* If I think I'm writing general-purpose code and I don't know to how many uses it will be put, that's great -- but I can at least list out the nominal client(s) as of when I wrote it.

Good enough for tomorrow

In a previous post I wrote about (among other things) technical surplus and multiplicativity. This is a specific instance of both. A trap too easy to fall into is to build things which are good enough for today --- and to tell ourselves that anything more is a waste of effort, or a waste of the customer's money.

In between perfection (unobtainable, and harmful to aspire to) and good enough for today (which leads to rigid legacy code that holds us back) is good enough for tomorrow.

Working today, we don't know exactly what will happen next ...

... and so we might think it's a waste of time to even try to guess ...

... but we can think about things likely to happen. In particular, if we create a useful data stream, a reasonable next step for someone to take is periodic reporting of that data stream. Or integration into a new display environment. Or porting it over to another, yet similar, project or line of business.

In summary, in addition to designing for present-day function and performance, I like to also:

* design for understanding;
* design for verifiability;
* design for re-use.

A question I can ask myself: am I building exactly a solution to the current problem, or thinking ahead by splitting out clear (preferably standards-based) modularities between reusable components? 

A very, very common pattern I see is programs mixing computation and display in the same routine. These programs work, but they're hard to change. By separating the computation and display routines, we make it easier to re-task the computation for command-line, report, and dashboard environments.
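
A minimal sketch of that separation (the disk-usage domain echoes an earlier post; the class and method names are invented for illustration):

public class DiskUsageReport {

    // Computation: returns data; knows nothing about presentation.
    public static double usedFraction(long usedBytes, long totalBytes) {
        return (double) usedBytes / totalBytes;
    }

    // Display: one of possibly many front ends for the same computation
    // (command line today; report or dashboard tomorrow).
    public static String formatForCommandLine(double usedFraction) {
        return String.format("%.0f%% used", 100.0 * usedFraction);
    }

    public static void main(String[] args) {
        double f = usedFraction(137L << 30, 185L << 30);  // 137 GiB of 185 GiB
        System.out.println(formatForCommandLine(f));      // prints "74% used"
    }
}

Re-tasking this for a dashboard means writing one new display routine, not untangling arithmetic from println calls.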

Another question I can ask myself: looking at this current solution I'm creating, what's likely to be the effort to (a) bug-fix it? (b) add features? (c) integrate it into a larger flow? (d) adapt it to other, similar uses? I can strike a balance between spending too little time (creating technical debt) and spending too much (which is time-wasting perfectionism) by the following: envision reasonable scenarios for the future of the software, then simply keep the extra time I spend now less than the total future time cost to others that would otherwise occur.

The third half, continued: Meta-thinking


In a previous post I wrote about the realization that as of this year I'll have been programming for half my life, and wondering where I'm going to take my career from here. I left a few details for a later post: meta-thinking, and kindness in software. This post is about the former.

Craftsmanship has a subtopic worth its own discussion, namely, thinking about how we think. This is something that has always excited me.

It excites me in large part not only because of its usefulness, but because of its strongly counterintuitive nature. We've all had those moments when the door is labeled, plain as day, Push. And yet we pull it. This doesn't work. In our industry it turns out that stepping back from the guns and smoke and details of the impending deadlines and ringing phones to iron out some kinks in the process can result in quicker progress toward those deadlines. It's as though pulling the push-door actually worked.

This isn't new to software. One of my favorite quotes is attributed to Abraham Lincoln: Give me six hours to chop down a tree and I will spend the first four sharpening the axe.

I should also disclaim that as much as I enjoy meta-thinking, I'm not always good at it, especially when I most need to be: those times when all I can think is I have this pile of logs to chop and tick tick tick!

Topics in this post:
* How do we get there from here?
* Exponential expense
* Technical debt
* Technical surplus
* Multiplicativity
* Finding root problems
* Disruption

How do we get there from here?

Back in 1992 or so, shortly after I'd first taught myself C, I read an article in Scientific American about the Mandelbrot set and I was struck by how such a simple algorithm (literally, just a few dozen lines of code) could lead to such amazing complexity. My first version, on a 386, painted a few square inches of screen in a few minutes. The results were pretty. Then one day at the campus computer lab I happened to sit down at a machine with a math co-processor. The same pretty results --- in seconds. The hardware innovation made all the difference.

What are the barriers now? We have gigascale networking, CPUs, and RAM; terascale disks; places like Google run programs on thousands of machines, and you can rent the use of similar resources on Amazon's AWS. The biggest barrier isn't hardware anymore. I have access to ten thousand times the computing resources I did twenty years ago. Am I ten thousand times more productive? Why not, and is there a future in which I could be?

I often wonder, How do we get there from here? Or: Why aren't we there already?

That's a big question, and it's too much to answer in full. But I can say that as a developer, on a day-to-day basis, I know that we all have demands from multiple customers, who in turn have changing needs. Meanwhile the platforms we're building on top of are also changing. Most importantly, we're all busy, we don't have enough time in our days, and emergencies keep happening -- emergencies that we just can't ignore. Something is always in the way.

And so I ask myself about getting some of those things out of the way:

* How can we find more time in our day?
* How can we have more fun doing our work?
* How can we bring about the change we want to see in the world?
* In short, what stands between us and everything we dream of?

My current best partial answers follow.  They all involve stepping away for a moment from the too-few-hours-in-a-day kinds of too-hard thinking, and instead thinking about how I think --- treating thought itself as a scarce commodity to be carefully planned for. They involve technical surplus, multiplicativity, finding root problems, and disruption.

Exponential expense

In my first post-college job, at a small software company in Arizona, I learned firsthand about exponential bug cost and what I started calling the needle-in-a-haystack problem. Namely: we make mistakes (bugs) all the time. And the more time that passes between the error and its discovery --- equivalently, the more people involved between the oopser and the oucher --- the higher the cost of finding and fixing it. The growth is exponential.

For example:
* If I make a mistake and catch it right there at my desk, no one even needs to know.
* Suppose I don't catch it, and suppose I say there's a good reason: We have an entire department devoted to software testing, and it's a good division of labor to let them do the testing.
* If, after I check in my code along with the other half-dozen people on my team, the integrator sees a symptom, it'll take a few minutes, or maybe an hour, for them to comb through the merge report and find out I'm the one with today's mistake.
* If the bug survives to system test, there we might have a domain-knowledgeable user: for example, a medical-software company employing nurses and respiratory techs to use the software in a (hopefully) real-world way. They'll know a lot about their work but not much about mine --- they'll see a symptom at the user-interface level (if they see it at all), know something's wrong somewhere, and write up a bug report --- and after that someone will dig into it, and eventually my phone will ring.
* If the bug makes it past system test and isn't triggered until the real real world, then a display crashes on a nurse doing bedside rounds at 3 a.m. in some time zone somewhere, the sysadmin's pager goes off, our nighttime software-support person looks up the symptom in our database of known bugs (finding nothing since this one is new) ... and so on. Eventually my phone rings and I get to fix my bug.

Finding bugs in software is like finding needles in a haystack. At my desk, I've got one bale of hay. If I comb through it --- and that takes time I might not think I have, since tick tick tick! --- I'll find it in minutes. If I don't comb through it, I save those minutes --- and I'm dumping my bale of hay into a cartful, from there into a barnful, and so on. Debugging in a large-scale deployed software system is like looking for a needle in a barnful of hay and it is no fun.

Technical debt

The needles-in-haystacks metaphor I found out about early in my career is just one example of a more general concept, known as technical debt. Here's a very nice write-up: http://en.wikipedia.org/wiki/Technical_debt. If you didn't read that, the basic idea is that the minutes we save today can end up costing us more tomorrow --- and worse, borrowing time from the future comes at a cost of compound interest. A team can get to the point where all they're doing is servicing interest --- receiving piled-up bug reports, debugging, fixing, patching, working very, very hard --- and making more mistakes along the way.

These are difficult problems, and we borrow from the future for good reason: someone is in need today. And just like taking out a small-business loan to start a new business, technical debt does have its place. We borrow time from the future when we need to meet a deadline today, and this isn't always reckless: customers want their deliverables on time. The trick is knowing what to do early on, when the debt is just beginning to accrue, before it gets out of hand --- doing a few key extra things at the point when they don't even seem to be necessary.

Technical surplus

Technical debt has its flip side: technical surplus. This is when we do just a little more today than we need to, and it makes life a little better and gives us time to do one more little thing --- and so on. One example is a code comment around a particularly clever algorithm implementation which takes a minute to write, saving others down the road an hour of head-scratching. Another example is a few pages of big-picture documentation helping to organize in the reader's head what would otherwise be two hundred source files in a directory tree. A third is automated unit-test cases which help future maintainers of the code know when they've broken something.
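
As a sketch of that third example --- assuming JUnit 4 on the classpath, with a hypothetical class under test --- a unit test tells the future maintainer, right at their own desk, the moment they break something:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PercentParserTest {
    // Hypothetical class under test: parses "78%" into 78.
    static class PercentParser {
        static int parse(String s) {
            return Integer.parseInt(s.replace("%", ""));
        }
    }

    @Test
    public void stripsPercentSign() {
        // If a refactoring breaks the "%"-stripping, this fails right
        // away --- one bale of hay, not a barnful.
        assertEquals(78, PercentParser.parse("78%"));
    }
}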

When we produce needle-free bales of hay, we can scale to systems of arbitrary complexity. 

Again, the art is in doing a bit more than needed, before it appears to be necessary --- that is, when it appears gratuitous --- and in knowing what to pick. Eternally polishing one rock to the neglect of others is unconstructive perfectionism; polishing each of them is good craftsmanship.

Getting this right is an ongoing art. People tell me I'm good at writing software for the long haul --- but (unless my axe is sharp) I'm not fast. For me, there are a few rules of thumb:

* I try to avoid the not-today-because-having-a-bad-day excuse. I like the Dr. Seuss quote: "Today I shall behave as if this is the day I will be remembered."

* Do it now. That means either go ahead and write that comment or that wiki page or that test scenario while it's fresh in my head --- or, make an entry in the team's problem-tracking system which will make it an accountable task for me to catch up with.

* Too much documentation can be as bad as too little. The rule of thumb I use is, I try to write the simple reference I wished I had had. I do this by thinking about the things that confused me most when I started. (Since time is money, anything that caused me an hour of head-scratching to figure out is --- at the corporate bottom-line level --- worth writing down for the next person.) I write down the things I worry I'll otherwise forget in six months. I do it now, before I forget that understanding I have in the present. Then I put that where other people can find it (with hyperlinks in from parent pages, and out to other pages), and I move on.

There's a lot more good news --- our industry has come a long way since I first got started. Unit-testing is now widely recognized as a best practice. Agile development encodes technical surplus into a clearly articulated lifestyle. It's a good time to be a software developer.

Multiplicativity

One of the things I wish I'd learned about sooner is multiplication of effort. This enables crowdsourcing, or concurrent programming, to use current buzzphrases.

There are at least five levels of ways to change software, each providing a productivity multiplier over the one before:

* Level 0: change nothing.

* Level 1: Fix a specific bug, add a specific feature. There is a one-to-one correspondence between question and answer.

* Level 2: While fixing a bug, find the missing document, misunderstanding, missing unit-test case, etc. and fix that. For feature addition, create a solution which other people (not just the original requestor) can use. Here the answer-to-question ratio is greater than one. It's necessary to work at least at this level if we're ever to dig out of the tar pit of too-many-bugs-to-fix-too-little-time-to-fix-them.

* Level 3: Create tools which people can use to glue together to develop their own features. They're no longer waiting on a programmer to create features --- the power is in their own hands. It's important to make those tools documented, visible, and debuggable enough that they can find their own mistakes. This requires some level of customizability, whether in a GUI, a configuration language, an embedded language, or what have you. I've seen significant implementation of this paradigm at my current employer, and it's absolutely jaw-dropping to see what a hundred smart, motivated people can do when someone gives them a rich, powerful, extensible tool and gets out of their way.

* Level 4: Create tools which allow people to develop their own level-3 tools. The design of programming languages is an example of this. I have a programmer-centric point of view, but I think language designers engage in one of the highest forms of human thought.

Finding root problems

The difference between levels 1 and 2 above is a big one. Too many times over the years I've seen a user file a bug report, with a developer replying You had an error at line 52 of your config file or That method doesn't accept null pointers. User fixes their config file, and developer adds their own null-pointer check --- problem solved? Absolutely not.

The challenge here is to keep asking why?

* What was the user trying to do? What did they believe the config-file syntax did?
* Did we ever give them examples of valid config-file syntax?
* Why was the pointer null in the first place?
* If the report was missing because the report-generator didn't run, why was that? Is the job scheduler supposed to retry failed jobs? Does the user think it is?
* And so on.

Just as technical surplus requires doing a little more than seems necessary --- but not so much as to miss deadlines --- here too there's a challenge. We've all been the child who kept asking a follow-on Why? to our parent's previous Because ... . From the parent's end, it's cute at first but it gets maddening after a while. And there are always other rocks to polish, too many why-chains to push down all at once.

My current rules of thumb are:

* Keep a list --- in my head, or preferably written down --- of frequently encountered themes. If after a few weeks or months the same kinds of things keep happening to different people in the same particular area, then that's the rock to polish, the chain of whys that it will pay off to descend into.

* At the very least, leave things better than I found them. I might get just a level or two into the recursive why chain, but as long as I keep my multiplicative ratio greater than one (fixing more than one problem instance for each reported problem) I feel like I've come out ahead.

Disruption

One of the challenges in my career --- and one I've often handled poorly --- is where in the complexity spectrum to make a change. Fixing a one-time bug is quick. Fixing the erroneous documentation which led the programmer to have the mistaken understanding which led him or her to create that bug in the first place is also quick. Taking a subsystem, preserving its API, and doing a gut-level rebuild of its internals may not be quick, but as long as the API is preserved it won't break anybody else, and no one else needs to be involved. I can do all those things, and have done so time and again. I've fixed a lot of problems along the way ...

... but what if the root problems lie deeper? What about making changes across systems, causing other people's code or workflow to need to be changed? This is called disruption, and being disruptive can be a virtue.

This takes courage and it takes communication. It requires taking time out of other people's day and sitting down with them and listening to them explain why my idea is a bad one. But it's the only way to make meaningful progress in large-scale legacy systems. A single developer can fix a problem here and there, and add significant value, but it takes motivating an entire team to really move a culture and fix root problems.

Why I Do What I Do: or, The Third Half Of My Life

As of this year I will have been programming for half my life. It's my passion, my vocation, and much of what I have to offer the world. Twenty-two years of software development have given me time to make (and observe) thousands of mistakes and missteps, and I have (as I well should by now) plenty of opinions about our craft. At this point I have to ask myself some fundamental questions: What will the twenty-third year do for me that the first twenty-two have not? What's next for me? In fact: Why do I do what I do? What do I have to offer the world and where do I want to take it?

I say this not only with local interest --- asking myself how to keep myself happy and support my family --- but also with global interest. Namely, what are the greater impacts of the kind of work I do? How can we in our industry make our customers and ourselves happier, especially now that software is defining more and more of our lives? Over the years I've found myself motivated in particular by the specific things we can do in the short term --- at times counterintuitive and a bit inconvenient --- which can significantly increase our happiness in the long term.

Here are my current answers to those questions. Another question is: what do I have to offer you in this article? I'm not the only mid-life software developer out there, not the only one to ask where we're going. Hopefully there's some food for thought here. I'd like to hear what you think.

The short list

* The power of symbols
* Craftsmanship
* Meta-thinking
* Kindness

The power of symbols

Back in 1991 or so, I was taking a linear-algebra course and learned that if you take position coordinates and multiply them by a simple matrix made up of sines and cosines, you can rotate that position around a fixed point. Nice idea --- but what did it look like? I had access to a 286 and I knew some BASIC, so that afternoon at the campus computer lab I wrote a program to draw a cube in wireframe on the screen, then nudge it by ten-degree increments in the pitch, roll, or yaw axes driven by keystrokes: for example, L to rotate left by ten degrees. I sat down to a blank screen and a blinking cursor; three hours later there was this wireframe cube floating and turning in space doing just what I had told it to. I was bowled over: if you carve the runes in the right way, you can make a computer do anything you want.

I've lost count of the number of programming languages I've learned since then (about a dozen), and I've lost count of how many hundreds of thousands of lines of code I've written. What remains is this: Coding is good clean fun. It was and it still is.

There's more to it, though. I've always been fascinated by human languages: despite all their differences in word order, pronunciation, and so on, ultimately they all come to make sense when they are studied for a while. Ultimately they are all just superficially different ways to speak human. Programming languages vary more in their capabilities: while most are good for many tasks --- any of them can print "Hello, world" or factor integers --- the differences run deep, from learning-curve time, to runtime performance, to the way they encourage project-busting worst practices or project-saving best practices. Nonetheless, what programming languages have in common is that (along with human languages, mathematics, and so on) they are all ways to systematically encode human thought. And what programming languages have that natural languages and math don't, moreover, is the ability to automate human thought.

This is powerful stuff.

Craftsmanship

It's one thing to make a computer do something; it's another to do it well, to make it easy on the people who use it and the people who'll have to deal with it next. Careful coding gives me pleasure while doing it --- just like woodworking, say. The social practice of craftsmanship is also a valuable commodity. As long as we keep making mistakes and learning from them, every additional year practicing our craft increases our value.

Here's my favorite blog on the subject: http://blog.8thlight.com. Uncle Bob et al. have a lot to say, and they say it well.

Meta-thinking

Craftsmanship has a subtopic worth its own discussion, namely, thinking about how we think. This is something that has always excited me.

It excites me in large part not only because of its usefulness, but because of its strongly counterintuitive nature. We've all had those moments when the door is labeled, plain as day, Push. And yet we pull it. This never works. In our industry it turns out that stepping back from the guns and smoke and details of the impending deadlines and ringing phones to iron out some kinks in the process can result in quicker progress toward those deadlines. It's as though pulling the push-door actually worked.

This isn't new to software. One of my favorite quotes is attributed to Abraham Lincoln: Give me six hours to chop down a tree and I will spend the first four sharpening the axe.

I should also disclaim that as much as I enjoy meta-thinking, I'm not always good at it, especially when I most need to be: those times when all I can think is I have this pile of logs to chop and tick tick tick!

There's more to say here --- technical debt, root problems, and multiplicativity --- which I'll leave to a separate post.

Kindness

This is a simple word which I'm using in two specific ways. Namely: okay, so I enjoy coding and I enjoy doing it carefully. But (except inasmuch as the journey is the destination) that's more about the means than the end. Software does a lot for us these days. It produces images for cancer scans; it processes inventory when we check out at the grocery store; it guides missiles to their targets.

To what ends should I direct my passion?

For me there are two answers: again, one local and one global. Locally, my work should make its immediate users happy. At my current employer, we make software for the people around us, without putting the software itself on the market (we make a living instead from the results our tools deliver for our customers). So here it's easy enough to see the furrowed eyebrows or the smiling face of the user.

Globally, I want to make the world a better place and I'm not too picky about how. Where I work, we're solving the retirement problem: I can be proud of that. There is work to be had in greentech, life sciences, communications, and so on. Wherever I am, on those days when I get mired in the details and then suddenly stop to think Why?, I'm happy if I can honestly remind myself that my work is being kind to someone.

There's more to say here, too --- the value of software for the people who use it, and for the people who build it --- which I'll again leave to a separate post.

What's next?

I love coding, it's what I do best, and it's how I give value back to the world. But do I keep calling myself a programmer? Maybe not. As I'll write about in the posts on meta-thinking and kindness, I've got a lot to learn about a lot of things.  But this much is clear:

* Already, I think of myself as a data engineer. Software developer per se is just too broad a title these days --- there are so many of us now that the general title has little meaning.
* Time and wisdom, suitably curated, enable me to use craftsmanship as a commodity.
* My fascination with multiple languages already makes me a polyglot.
* Agile and other workflows create the opportunity for one to define oneself as a software methodologist.
* What I'll call "level-3 work"  in the post on meta-thinking permits a view of certain kinds of software development as productivity multiplication. This is worth thinking of as an endeavor valuable in its own right.
* My biggest growth opportunity right now is in creating more opportunities for disruption at the cross-system level. This requires moving away from the keyboard and into the meeting room more often --- a move worth making.

Thursday, January 24, 2013

Java 7, ulimit, and non-heap memory


A couple of co-workers of mine found the following problem:

Error occurred during initialization of VM
  Could not reserve enough space for object heap
  Could not create the Java virtual machine.

It happened in a batch-job environment where we use ulimit (for all programs, Java or not), in addition to Java's heap specification with -Xms and -Xmx. And it only started happening on a test switch from Java 6 to Java 7.

Now, Java requires memory for more than just heap --- native C malloc in JNI, mmap, and so on. But it also (more so in Java 7, which is the point of this post) needs some non-heap memory to keep track of the heap itself.

A way to measure this is the following script called trymem:

#!/bin/bash
# Run a trivial Java program with the given heap size, capping total
# virtual memory at heap + extra; the exit status tells the caller
# whether the JVM could start.
if [ $# -ne 3 ]; then
    echo "Usage: $0 {javadir} {heap MB} {extra MB}"
    exit 1
fi
javadir=$1
heap_mb=$2
xtra_mb=$3

totl_mb=$[heap_mb+xtra_mb]
totl_kb=$[totl_mb*1024]

# Cap this shell's virtual memory (ulimit -v takes KB); the java
# process inherits the cap.
ulimit -v $totl_kb

$javadir/bin/java -cp . -Xmx${heap_mb}m -Xms${heap_mb}m MyProgram
status=$?
echo ${heap_mb}+${xtra_mb}:${status}
exit $status

where MyProgram.java is simply

public class MyProgram {
    public static void main(String[] args) {
    }
}

along with a second script called searchmem:

#!/bin/bash
# For each heap size, find (in 100 MB steps) the smallest extra non-heap
# allowance with which the JVM can start under ulimit.

if [ $# -ne 1 ]; then
    echo "Usage: $0 {Java dir}"
    exit 1
fi
javadir=$1

heap_mb=10000
while [ $heap_mb -le 60000 ]; do
    echo -n "$heap_mb "
    xtra_mb=100
    while true; do
        ./trymem $javadir $heap_mb $xtra_mb 1> /dev/null 2> /dev/null
        status=$?
        echo -n .    # one dot per attempt
        if [ $status -eq 0 ]; then
            echo $xtra_mb
            break
        elif [ $xtra_mb -gt $heap_mb ]; then
            echo "> $heap_mb"
            break
        fi
        xtra_mb=$[xtra_mb+100]
    done
    heap_mb=$[heap_mb+5000]
done

The idea is to cap virtual memory at the heap size plus a small extra allowance, then keep increasing the extra until the test program runs without error. The difference between Java 6 and Java 7 is significant:

$ ./searchmem /usr/local/jdk/x86_64/jdk1.6.0_35

10000 ....400
15000 ....400
20000 ....400
25000 ....400
30000 .....500
35000 .....500
40000 .....500
45000 .....500
50000 .....500
55000 ......600
60000 ......600

$ ./searchmem /usr/local/jdk/x86_64/jdk1.7.0_9

10000 ........800
15000 ..........1000
20000 .............1300
25000 ..............1400
30000 ................1600
35000 ...................1900
40000 .....................2100
45000 .......................2300
50000 .........................2500
55000 ............................2800
60000 .............................2900

We couldn't find a pronouncement from Oracle regarding minimum ulimit as a function of heap size. But a plot of the above numbers suggests a linear relationship, and a regression (with more densely sampled data in searchmem, stepping heap_mb by 1000 rather than 5000) shows slope 1/24 and intercept 400 MB. That is, given a Java heap size in MB, divide it by 24 and add 400 to estimate Java 7's heap-management overhead. For example, a 60000 MB heap needs about 60000/24 + 400 = 2900 MB of overhead --- matching the last line of the measured output above.

P.S. Thanks to David Craft for syntax highlighting in Blogspot:
http://www.craftyfella.com/2010/01/syntax-highlighting-with-blogger-engine.html

Thursday, January 17, 2013

Threw an exception from a method that throws no exceptions -- wait, what?


Here's a fun little false positive I ran into recently.

Application (ClassBeingTested) believes Library (ClassBeingMocked) throws an exception, but Library really does not. Yet Application's unit-test case passes! This takes an interplay of JUnit, jMock, and their use of reflection.

Library:
public interface ClassBeingMocked {
    public void neverThrowsAnException();
}

public class ClassBeingMockedImpl implements ClassBeingMocked {
    public void neverThrowsAnException() {
        System.out.println("ClassBeingMocked: in the method which does not throw an exception");
    }
}

Application:

public class ClassBeingTested {

    private final ClassBeingMocked _them;

    public ClassBeingTested(ClassBeingMocked them) {
        _them = them;
    }

    public void neverThrowsAnExceptionEither() {
        try {
            _them.neverThrowsAnException();
        }
        catch (Exception e) {
            System.out.println("ClassBeingTested#neverThrowsAnExceptionEither: " +
                "caught \"" + e.toString() + "\"");
        }
    }

}

Test code:

public class MyTest {
    private Mockery _mockery;
    private ClassBeingMocked _mockedInstance;
    private ClassBeingTested _testedInstance;

    @Before
    public void setup() {
        _mockery = new Mockery();
        _mockedInstance = _mockery.mock(ClassBeingMocked.class);
        _testedInstance = new ClassBeingTested(_mockedInstance);
    }

    @Test
    public void testMethod() {
        _mockery.checking(new Expectations() {{
            one(_mockedInstance).neverThrowsAnException();
            will(throwException(new Exception("we expect an exception on timeout")));
        }});
        _testedInstance.neverThrowsAnExceptionEither();
        _mockery.assertIsSatisfied();
    }

}

What prints is:
JUnit version 4.10
..ClassBeingTested#neverThrowsAnExceptionEither: caught "java.lang.IllegalStateException: tried to throw a java.lang.Exception from a method that throws no exceptions"

Time: 0.021

OK (4 tests)

What's happening: jMock cannot make the mocked method throw a checked exception its signature doesn't declare, so at run time it throws an unchecked IllegalStateException instead --- which the application's broad catch (Exception) happily swallows. The mocked method was still invoked, so assertIsSatisfied() passes. One handy item is the print to stdout in the application's exception handler; without that, the misunderstanding proceeds entirely silently --- and even with it, in a logging context the message can easily go unnoticed.

One solution is a narrower catch: if the application instead does

try {
    _them.neverThrowsAnException();
}
catch (MoreSpecificExceptionType e) {
    // ...
}

then the mismatch is caught at compile time:

"exception MoreSpecificExceptionType is never thrown in body of corresponding try statement"

This must be weighed carefully, though, in a server-thread context where catching all exceptions is the point.