Pennington: Professional corner-cutting
Software remains a craft rather than a science, relying on the experience of the craftsperson. Like cabinetmakers, we proceed one step at a time, making judgments about what’s important and what isn’t at each step. A professional developer does thorough work when it matters, and cuts irrelevant corners that aren’t worth wasting time on. Extremely productive developers don’t have supernatural coding skills; their secret is to write only the code that matters. How can we do a better job cutting corners? I think we can learn a lot from people building tables and dressers.
Posted May 6, 2016 3:16 UTC (Fri)
by torquay (guest, #92428)
[Link] (31 responses)
Umm.. really? Technical debt arises over time and cannot be clearly envisioned in advance. It will always be an ongoing problem, and dismissing the time necessary to work on it is a recipe for eventual disaster.
I'd love to see someone applying Pennington's advice about cutting corners to something serious, say a controller for a nuclear power plant, or an autopilot. "That intermittent radiation leak is a small byproduct of our code, but the end user can't really see it, so don't worry about it".
Perhaps Pennington should retire to be a part-time cabinet maker and let the professionals do the software work.
Posted May 6, 2016 3:36 UTC (Fri)
by dlang (guest, #313)
[Link] (9 responses)
Linux vs *BSD vs HURD is an example. Linux was a horrible OS to start with, and has only ever tried to be "good enough", but it was readily available and willing to accept fixes from anyone. HURD tried for perfection, some of the *BSD flavors also try for perfection.
End result: Linux's "good enough" has kept moving the bar higher to where it has passed the BSDs who had a significant head start, arguably passed most of the commercial *nix systems to the point where they have been abandoned or are in life support mode for legacy customers, and HURD is still pending it's first real release.
Posted May 6, 2016 3:56 UTC (Fri)
by pabs (subscriber, #43278)
[Link] (1 responses)
Do you consider the Debian GNU/Hurd release around the time of the Debian jessie release to be a "real release"?
Posted May 6, 2016 6:12 UTC (Fri)
by rsidd (subscriber, #2582)
[Link]
Is Debian GNU/Hurd a "real release"? Depends what you mean by "real": it exists, it isn't vapourware. But it's not comparable to other systems out there. OP's point is totally valid.
Posted May 9, 2016 11:53 UTC (Mon)
by teythoon (guest, #108658)
[Link] (6 responses)
Posted May 9, 2016 13:56 UTC (Mon)
by johannbg (guest, #65743)
[Link] (5 responses)

( GNU Hurd/Mach/Mig only had two releases last year, this year nothing and before that as in 2014 nothing )
Posted May 9, 2016 14:05 UTC (Mon)
by teythoon (guest, #108658)
[Link] (4 responses)
One of the worst problems of the Hurd project is clueless people spreading FUD scaring away potential contributors. Be a sport and stop.
Posted May 9, 2016 14:15 UTC (Mon)
by johannbg (guest, #65743)
[Link] (3 responses)
Posted May 9, 2016 14:32 UTC (Mon)
by teythoon (guest, #108658)
[Link] (2 responses)
dlang wrote "HURD is still pending it's first real release" which is clearly wrong and discrediting the work put into it. Then again, what do you expect from someone who cannot even spell the Hurd right...
> Even less so if there was no public announcement/community agreement, discussion on list publicly available where it's stated that as of 2015 there will be bi-yearly releases of those three projects.
More FUD. This was publicly discussed and is documented in the mailing list archive.
Posted May 9, 2016 15:12 UTC (Mon)
by micka (subscriber, #38720)
[Link]
Posted May 9, 2016 15:40 UTC (Mon)
by johannbg (guest, #65743)
[Link]
You seem to be under some misconception that the lack of participation in those projects is due to "FUD", but the fact is that people who are interested and want to participate will participate regardless of any FUD against any given project.
1. https://www.gnu.org/software/hurd/news/2015-10-31-release...
2. http://git.savannah.gnu.org/cgit/hurd/hurd.git/tree/NEWS?...
3. http://git.savannah.gnu.org/cgit/hurd/gnumach.git/tree/NE...
4. http://git.savannah.gnu.org/cgit/hurd/mig.git/tree/NEWS?i...
Posted May 6, 2016 8:03 UTC (Fri)
by jiiksteri (subscriber, #75247)
[Link] (8 responses)
You might be deliberately misinterpreting the analogy :)
For a nuclear power plant controller an "intermittent radiation leak" would be something comparable to a cabinet's structural soundness, something that obviously matters to the customer. Not a corner you'd cut there.
Now, as far as the rounded corners and golden ratio bevelling on the 'abort' button goes, that's a different story...
Posted May 6, 2016 8:58 UTC (Fri)
by torquay (guest, #92428)
[Link] (6 responses)
The point is that selecting something to cut is inherently subjective. My idea of something "unimportant" might very well be different from yours.
When it comes to important functionality (a nuclear power plant, or an online banking site), "cutting corners" is the last thing one wants to hear. Instead, the approach should be based on Occam's razor coupled with paranoia: implement only what is needed for the exact required functionality, add 100% unit-test coverage, and finally use wide fuzz testing on multiple levels (in an attempt to catch 2nd and 3rd order effects).

> Now, as far as the rounded corners and golden ratio bevelling on the 'abort' button goes, that's a different story...

:-) Even that might be debatable: you want the button to be easily accessible, highly visible, not too large, not too small, with the right tactile feedback, texture, etc. As the saying goes, programmers are notoriously bad UX designers.
Posted May 6, 2016 12:51 UTC (Fri)
by hp (guest, #5220)
[Link] (1 responses)
There is no way to not make a judgment. We wake up every day and decide what is most important to work on today. It matters a lot what percentage of the time we are getting that choice right.
These judgments are always always highly contextual. Nuke plants aren't the same situation as a website for a local restaurant. But again, "contextual" isn't the same as "all judgments are valid."
Posted May 12, 2016 13:12 UTC (Thu)
by ksandstr (guest, #60862)
[Link]
I'd go so far as to say that the latter is a sign of ignorance: not knowing enough and/or not having done the legwork (as above) to distinguish good answers from ones that, slightly mutated, would leave us equally thinking the moon were made of cheese.
Posted May 6, 2016 14:33 UTC (Fri)
by dskoll (subscriber, #1630)
[Link]
The point is that selecting something to cut is inherently subjective.
Yes. Yes it is. And something that's essential to being a good software developer is to have good taste. Up to a point, software development is a science, but at the higher levels, it truly is an art and it relies very much on subjective developer decisions.
Posted May 9, 2016 6:50 UTC (Mon)
by niner (subscriber, #26151)
[Link] (2 responses)
This common "wisdom" I would very much like to dispute. I think programmers can actually be good UX designers. Most of them just don't take the time and don't invest the effort to build a good UX, but they could. You know who are notoriously bad UX designers? Designers! Just think of the instances where you found yourself thinking "looks cool but is a pain to use and I'm getting nowhere" vs. "ok, looks like crap but at least gets the job done".
When a programmer wants to build a really good UX, she will look at the use cases and common usage patterns and find some simple, elegant and efficient way to support those. Finding simple, elegant and efficient solutions is a programmer's bread and butter after all.
Posted May 9, 2016 16:46 UTC (Mon)
by raven667 (subscriber, #5198)
[Link] (1 responses)
That's not exactly disagreeing with the point; you are just restating, in other words, that developers are usually bad UX designers. Of course a developer could pick up the skill set of UX design and do the legwork to talk to users and find the common usage patterns. The point is not that "Designers" are some magic breed who sprinkle pixie dust on your software; it's that to make good, usable software, it has to be *someone's* job to talk to the users and think about the UX. If it's no one's job, then any usable software you get is by chance and not by design.
Posted May 9, 2016 18:41 UTC (Mon)
by Wol (subscriber, #4433)
[Link]
At the end of the day a good manager knows his team's strengths, and if he can get them working together each doing the bit they do well, then the result should be a good program. Programmers - indeed pretty much everyone - are NOT "jacks of all trades", and by playing up to strengths you get a good solid product.
I know I had some managers who got exasperated with me because I'm a poor finisher. I had other managers who knew my strengths and always put me on a team doing something new - it was me who saw all the problems coming down the line, me who saw how to future-proof things, me who would expand the scope of a simple request so it fixed a complex problem. And then when I'd done the leg work of putting the skeleton of the solution in place, someone else would finish it off.

If I were a cabinet maker I'd be the guy who'd know what was in the woodyard, what bits of scrap could be utilised in the hidden areas, where to source the nice stuff. So when an order came in they'd give it to me, and maybe I'd make the shell, maybe not, but the master craftsman would come in to find all the wood laid out, the rough cutting all done, and the cabinet sitting there (probably in pieces :-) waiting for him to do all the fine work to finish it off.
Cheers,
Wol
Posted May 6, 2016 22:22 UTC (Fri)
by pr1268 (guest, #24648)
[Link]
> Now, as far as the rounded corners and golden ratio bevelling on the 'abort' button goes, that's a different story...

I believe they (in the nuclear engineering field) call it a Scram Button. But I'm nitpicking here... ;-)

A general comment, though: I think that "cutting corners" is such a pejorative term that it elicits thoughts of any number of other disdainful terms like "sloppiness", "shoddiness", or "laziness". But I'm not sure of any other term appropriate for Pennington's article. Engineers (no matter which discipline) are continuously faced with having to decide what's the best solution given the constraints. Seeing how they're up against a budget, this often means... cutting corners where appropriate.
Posted May 6, 2016 8:10 UTC (Fri)
by gdt (subscriber, #6284)
[Link]
> That intermittent radiation leak is a small byproduct of our code, but the end user can't really see it…

There is a distinction between ‘user’ and ‘end-user’. The end-user isn't always the person sitting in front of the computer, and your example is a fine illustration of the point. The end-users of a nuclear power plant control system include the owners of the plant and those whose health is risked by a plant failure.
Posted May 6, 2016 9:15 UTC (Fri)
by tcolgate (guest, #76945)
[Link]

The danger comes when people introduce tools and services lightly, with the assumption that they are somehow too trivial to care about. Queue servers, caches, and DBs get thrown into systems in the belief that, e.g., adding a queue server is easier than implementing your own in your code base. The question is really, "easier for whom?". A deployment of a queue server that is trivial for a developer can become the entire focus of an Ops team down the line.

Software frameworks transition to "technical debt" and "legacy code" with alarming ease.

Planned obsolescence seems to be the only way to go. Just accept that in two years you'll hate the thing you are working on now, and plan to kill it off.
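One comment in this thread notes that adding a queue server can seem easier than implementing your own queue in the code base, yet can become an Ops burden later. For contrast, the in-process alternative is often small. This is a minimal sketch using only Python's standard library, with illustrative names; a real system would also need error handling, persistence, and backpressure.

```python
import queue
import threading

def start_worker(task_queue, results):
    """Start a thread that consumes callables from task_queue.

    A None sentinel on the queue shuts the worker down cleanly.
    """
    def worker():
        while True:
            task = task_queue.get()
            if task is None:            # sentinel: shut down cleanly
                task_queue.task_done()
                break
            results.append(task())      # run the task, keep its result
            task_queue.task_done()
    t = threading.Thread(target=worker)
    t.start()
    return t

# Usage: enqueue a few closures, then shut the worker down.
tasks = queue.Queue()
results = []
thread = start_worker(tasks, results)
for n in (1, 2, 3):
    tasks.put(lambda n=n: n * n)
tasks.put(None)        # sentinel tells the worker to stop
tasks.join()           # block until every task_done() has been called
thread.join()
# results now holds [1, 4, 9]
```

The `task_done()`/`join()` pairing is what a standalone queue server would otherwise provide: a way to know that all submitted work has actually been processed before shutting down.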
Posted May 6, 2016 12:40 UTC (Fri)
by hp (guest, #5220)
[Link] (8 responses)

> Perhaps they complain to management about "technical debt" and being "given time to work on it." This is a sign that we aren’t owning our decisions. If the technical debt is a problem, 1) we shouldn’t have put it in there
In saying "you shouldn't have put it in there," I'm not implying that any of us are perfect and always get it right. I'm implying that when we fuck up we have to fix it - if it matters, of course. And that if the fuckup was out of laziness rather than the inherent difficulty of software, we should try to do better next time.
I'm aware that some management makes this harder than other management. We might choose jobs accordingly, if we can. But I've seen plenty of devs complaining that "management won't listen" when management has literally no idea what the devs are even trying to ask. I've also seen plenty of technical debt that was completely avoidable by being non-lazy and doing things like writing tests.
Posted May 7, 2016 5:54 UTC (Sat)
by douglascodes (guest, #105468)
[Link] (6 responses)
I thought the article was well founded. And there are good and bad professionals in every field.
Posted May 7, 2016 6:22 UTC (Sat)
by dlang (guest, #313)
[Link] (5 responses)
Technical debt is just something that happens when you don't have unlimited time, money, and manpower.
You don't "stand strong" against technical debt; you figure out how to manage it sanely. This means planning for it rather than letting it 'just happen'. It means having each iteration of improvements incorporate fixes to existing technical debt, not just add new features.
If you are looking ahead a bit, you will see areas that you know you are going to replace in the near future. Those are the places to let technical debt pile up, so that it can be cleared by the already planned replacement.
A woodworker who only used the finest wood and hand tools for every aspect of a dresser, and makes every surface perfect is going to go out of business.
A woodworker who just grabs whatever wood is handy/cheap and uses it is not going to build anything memorable.
But consider a woodworker who uses a variety of woods: some very expensive, for the parts of the dresser that are visible, and other very cheap (but strong) woods for inside components that will never be seen. This woodworker will spend a lot of time perfecting the surfaces that are visible, but not waste time on the inside and bottom surfaces, and will make appropriate use of power tools but then perfect the joints and surfaces by hand. The result will be available far sooner, and far cheaper, than the first woodworker's, and will be far more memorable because it's actually able to be used (and the woodworker can then turn around and do it again, probably several times before the first one ever produces anything).
You can call that 'cutting corners', or you can call it 'making the best use of available resources'.
Posted May 8, 2016 6:51 UTC (Sun)
by epa (subscriber, #39769)
[Link] (1 responses)
> If you are looking ahead a bit, you will see areas that you know you are going to replace in the near future.

I think this is often unwise. Firstly, because such code has a habit of staying around longer than expected. A component can be 'deprecated' or 'obsolete' or 'on the way out' and stay that way for a decade if it is in production. In fact, getting rid of existing code is even more likely to slip behind schedule than other programming tasks.

The second reason is that letting something cruft up and become essentially unmaintained will make it more difficult to replace. The replacement code has to do the same job and may even have to produce the same result given the same input. If the old code is buggy you may have to carefully implement bug-compatibility in its replacement; better to first fix a bug in the existing code (as a small, narrowly scoped change) and then later switch out the implementation without changing the outwardly visible behaviour. Trying to do both at once in a production system is riskier.

Finally, in maintaining the older component properly and cleaning it up where possible, you gain the necessary familiarity with the job it needs to do and any 'gotchas' the code has evolved to work around. This is particularly true where the older version was written by somebody else. It is the easiest thing in the world to look at crufty old code you inherited, go 'blech' and set out to rewrite it from scratch... but this is usually a mistake. Fix it first; when you have gone through that formative experience you will be a better and wiser programmer, and you can decide whether to rewrite it.
For these reasons I think that 'areas you know you are going to replace' is a poor metric for deciding where to let technical debt pile up. Sadly, even old and obsolete code needs the same standard of maintenance, until it is finally turned off in production and deleted from the codebase.
Posted May 8, 2016 11:49 UTC (Sun)
by dlang (guest, #313)
[Link]
I'm not saying that you should just ignore an area because you plan to work on it soon. Rather, instead of leaving technical debt scattered throughout your system, you try to eliminate it in the areas that you don't plan on working on soon, which concentrates it in the areas that you plan to work on soon anyway.
Yes, sometimes plans change, but even then I think you are better off with a known area of problems than with your problems scattered throughout the system.
Posted May 8, 2016 12:10 UTC (Sun)
by hp (guest, #5220)
[Link]
Posted May 9, 2016 7:03 UTC (Mon)
by niner (subscriber, #26151)
[Link] (1 responses)
A better way to think about it, which I've come across, is to see it as insurance. You don't buy insurance against any and all risks; that would be prohibitively expensive. But you do want insurance where it matters. The tests you write today are your insurance policy against breakage and overtime tomorrow. Yet sometimes it's just not worth investing too much: spending lots of time to perfectly engineer a one-off script usually will not pay off, while spending extra time to make your core business component future-proof hopefully will. It's really a risk assessment and mitigation exercise.
Posted May 9, 2016 8:52 UTC (Mon)
by dlang (guest, #313)
[Link]
But I must disagree with you, because I have never seen any program, open source or proprietary, whose first version did not have problems that clearly qualify as 'debt' rather than efficient design.
In spite of the mantra, nobody ever waits to release something "until it's ready". At best they wait until it's "good enough"; if they waited until it was "ready", the release would never happen.
And even if the development team _thinks_ the software is perfect, when it hits the real world and users stress it in ways the developers never imagined, you will find that areas they thought were solid are really places where technical debt is hiding.
Posted May 8, 2016 6:50 UTC (Sun)
by hitmark (guest, #34609)
[Link]
Posted May 8, 2016 6:47 UTC (Sun)
by hitmark (guest, #34609)
[Link]