Coders: Who They Are, What They Think and How They Are Changing Our World
I’m the kind of person who, when finding a new interest, gets sucked in: all-in, hook, line and sinker. I discovered the Internet when I was 14, and was enthralled. Exploring the web was great, but it quickly wasn’t enough: I wanted to make my own web pages.
I taught myself HTML by viewing the source code of other pages. Soon enough I discovered JavaScript, a way to animate and add interactivity to otherwise static pages. I copied snippets of code and experimented with them, blindly tweaking bits and seeing how this changed the code’s behaviour.
This is how I stumbled into my career in software engineering.
Besides loving to code, I was fascinated by the history of programming and computing. I love picking up old programming books from the 70s and 80s, on languages that you don’t hear much about these days. As an industry, software development suffers from a massive case of novelty bias: we constantly worry about falling behind, and there’s an incessant clamor about the latest trendy tools and techniques. Web developers used to joke that every six months, a new JavaScript framework comes out, which makes everything that came before it obsolete. (React’s enduring popularity seems to have obsoleted this joke, fortunately.)
But what I found, from reading these old books, is that programming styles and approaches come into and go out of style just like fashion. Patterns that are in vogue today are rarely new discoveries; they’re rediscovered ideas from previous generations.
My interest in coding isn’t just limited to its craft and history: I’ve often tried — and failed — to explain how it feels to write code. I don’t know how it feels when an artist paints or a musician composes, but writing code, while an act of creation, seems… different. Maybe these are the wrong things to compare it to (we’re not called software artists, after all), but other fields of engineering or architecture or design don’t quite seem to be a match, either.
Software is full of borrowed terminology and analogies from other industries, such as architecture and construction. These analogies can be helpful, but they never seem to hold together when examined from all angles. It’s like this, except when it’s not. Another confounding aspect of software is that the laws of the physical world can mislead us when trying to understand the digital.
This is why I was so excited to read Clive Thompson’s Coders. The prospect of a journalist — and one with tech bona fides — examining the essence of code and the people who write it sounded like it’d be right up my street. Software engineering as a field doesn’t yet have an abundance of historians.
Writing in a conversational style and drawing on interviews, Thompson covers topics such as the origins of the industry’s gender disparity, what it feels like to code (and why coding is singularly suited to being a solitary activity), and the effects of the rise of software on society at large — the industry’s loss of innocence, if you will.
Coders covers a lot of ground, and does a good job of it. My favourite parts delved into how coding feels and how computers shape the behaviours of coders. I did wish, though, that there had been more philosophical meditation on the nature of software and code.
—
A lot has been said about how immersive computers can be, whether you are coding or gaming or surfing the internet. It’s as if you are in another world, a seductive place in which you might find yourself lost for hours on end. (Locationlessness appears to be troublesome for our brains: we seem to need spatial cues to give us a sense of time passing.)
And when you are coding, you are the one making and tweaking the rules of the world you’re working in. It is immensely engrossing.
There are two aspects that make coding uniquely learnable as a hobby.
First, you can do it on your own. As long as you have the time and the interest, trial and error is an eminently viable strategy to learn how to code. The computer has all the time in the world for you. As Sarah Connor put it: “[It] would never stop. It would never leave him, and it would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him. It would always be there.”
Second, while coding is a creative act — comparable maybe to writing or drawing — it’s also an act of problem-solving. Tasks often take the shape of a puzzle that can be solved, like a game of Sudoku. Because your code will either run or it won’t, returning the desired output or not, you get constant feedback from the computer. It’s this that differentiates coding from creative endeavours that have no external arbiter of whether your creation works.
One front-end coder I know used to do a lot of visual art but found she did it less now that she was programming for a living. Why? She so deeply enjoyed the binary jolt of joy when a program started working. “I think part of the reason I’m not doing just art anymore is it just doesn’t satisfy me the same way,” she said. “There is no ‘aha’ moment with a painting,” no instant when it suddenly begins working.
I noticed this effect myself. I’m in my late 40s but haven’t done much programming since I was [a] kid. But when I started researching this book, I began teaching myself a bit more and quickly discovered something dangerous: It was much more satisfying to program than to write. One week, for example, I decided to write a Python program that would help archive links I’d posted on Twitter. I often tweet about scientific or high-tech news and usually regret, months later, not having saved all that stuff in one easy-to-find place. So I decided to create a script that would log into my Twitter account every morning at 8:30 a.m., scrape out all my tweets of the last 24 hours, and analyze them: It would make a list of any tweets that had links in them, and ignore all the others, where I was just @-replying to people. Then it would email me a nicely formatted list of all the links, with the text of my tweet alongside. (page 81)
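The core of the script Thompson describes (keep only tweets that contain links, skip the @-replies, format the rest into a digest) fits in a few lines of Python. What follows is a sketch of that filtering logic over hypothetical tweet data, not Thompson's actual code; the real script would also fetch the tweets via the Twitter API and send the result as an email.

```python
import re

# Sketch of the core of the tweet-archiving script Thompson describes:
# keep tweets that contain links, ignore plain @-replies, and build a
# digest. The tweet data below is hypothetical; fetching tweets from
# Twitter and emailing the result are omitted.

URL_PATTERN = re.compile(r"https?://\S+")

def build_link_digest(tweets):
    """Return a newline-separated digest of tweets that contain a URL."""
    lines = []
    for tweet in tweets:
        text = tweet["text"]
        if text.startswith("@"):  # a plain @-reply: ignore it
            continue
        for link in URL_PATTERN.findall(text):
            # List each link alongside the text of the tweet it came from.
            lines.append(f"{link}\n    {text}")
    return "\n".join(lines)

yesterdays_tweets = [
    {"text": "Fascinating read on gene editing https://example.com/crispr"},
    {"text": "@friend totally agree!"},
    {"text": "Nothing to link to today."},
]
print(build_link_digest(yesterdays_tweets))
```

The satisfaction Thompson describes comes from exactly this kind of task: small, self-contained, and with an unambiguous signal of whether it works.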
Giving up these constant little wins — and the dopamine hits that came with them — was one of the hardest parts of adjusting to the role of engineering manager, which didn’t involve daily coding.
But the computer’s persnickety particularity also affects programmers’ behaviour:
Why can coders be so snippy? [Jeff] Atwood asks, rhetorically. He thinks it’s because working with computers all day long is like being forced to share an office with a toxic, abusive colleague. “If you go to work and everyone around you is an asshole, you’re going to become like that,” Atwood says. “And the computer is the ultimate asshole. It will fail completely and spectacularly if you make the tiniest errors. ‘I forgot a semicolon.’ Well guess what, your spaceship is a complete ball of fire because you forgot a fucking semicolon.” (He’s not speaking metaphorically here: In one of the most famous bugs in history, NASA was forced to blow up its Mariner 1 spacecraft only minutes after launch when it became clear that a bug was causing it to veer off course and possibly crash in a populated area. The bug itself was caused by a single incorrect character.)
“The computer is a complete asshole. It doesn’t help you out at all,” Atwood explains. Sure, your compiler will spit out an error message when things go wrong, but such messages can be Delphic in their inscrutability. When wrestling with a bug, you are brutally on your own; the computer sits there coolly, paring its nails, waiting for you to express what you want with greater clarity. “The reason a programmer is pedantic,” Atwood says, “is because they work with the ultimate pedant. All this libertarianism, all this ‘meritocracy,’ it comes from the computer. I don’t think it’s actually healthy for people to have that mind-set. It’s an occupational hazard!” … (page 68)
As technology permeates our lives, this literality might be spreading beyond programmers. I’ve heard parents worry that Alexas and Siris are teaching children to be demanding and rude.
—
Another aspect of code that seems difficult to translate to other walks of life is bugs. A bug is when code does something unintended and undesirable. The term “bug” came from a real-life insect: a moth that lodged itself in an early computer and caused a malfunction. Debugging, then, is the process of locating and correcting errors in code.
Dealing with bugs can be infuriating. At a previous job at an online pharmacy, we occasionally saw customer orders being picked twice in the warehouse. The duplicated order, when it came to be labelled, would error — which was as it should be, as the prescription items had already been dispensed — but it caused unnecessary work for the pickers, for the troubleshooting team who dealt with the error, and for the staff who had to reshelve the medicines. We called this the “Phantom Picking Bug” and, over almost three years, I spent hours upon hours trying to pin down what conditions were triggering it.
When the Phantom Picking Bug struck, I’d pore over the logs and the code in the hope of identifying the commonalities that triggered the bug. While it wasn’t a particularly dangerous bug, as we knew that we had safeguards in place to prevent the medicine being sent out more than once, it irked me. Annoyed that I’d yet again failed to find the cause of the bug, I’d swear it off, promising myself that I wouldn’t waste any more time trying to fix it. Yet time and again, I found myself tilting at this elusive bug. I’d rope in different colleagues, hoping that maybe they’d see something previous bug-hunters hadn’t seen.
When we finally did quash the Phantom Picking Bug, it was with a whimper, not a triumphant bang. The bug was eradicated not by the heroics of a debugger or the keen insight of a developer, but by altering the behaviour of another part of the system for completely unrelated reasons.
On the flip side, finding and fixing a bug can be exhilarating.
My friend Max Whitney has been a programmer for over two decades, but she still remembers the first time she fixed a truly fiendish bug. She was working as a programmer for New York University, and students were reporting some trouble logging in to the university’s main web portal.
Specifically, they would sometimes discover that they were logged into someone else’s account. Whatever was going on?
At first, they noticed that a large number of complaints came from students who were logging into NYU’s portal while using computers at the Kinko’s copy center around the corner. Maybe Kinko’s was somehow to blame? But then Whitney saw reports of the same login bug from computers located on-campus. It became clear that the culprit was the university’s login system itself. Unfortunately, that login code had been written years earlier by a staff programmer who no longer worked for NYU. Since Whitney couldn’t ask him to help debug his code, she sat down to scrutinize it line by line with the help of another expert programmer.
Reading someone else’s code can be a baffling task. That’s because there’s rarely a simple, obvious way to write a piece of code. Idiosyncrasies abound. Different coders have very different styles. If you asked four different programmers to write a pretty basic algorithm — say, one that figures out and prints the first 10,000 prime numbers — you’d likely get four approaches that are structured and look a bit different. Even something as simple as picking the names for one’s variables can be a source of bitter argument between coders. Some prefer to use extremely short, one-letter variables (x = "Hello, World!"), arguing it keeps their code more compact and thus easier to glance at. Others prefer to use more descriptive variable names (greetingToUser = "Hello, World!"), pointing out that it’ll be easier, a year later when the code is crashing, to look at a variable like greetingToUser and know what it means. When code gets particularly lengthy, or if something is particularly dense, coders are usually encouraged to leave little comments in their code that explain exactly what the heck is going on, so that some poor soul years hence will have guidance in sifting through the thicket. But often when coders are working fast, or under pressure, they don’t “document” their code very much; and even with comments, frankly, figuring out the flow and logic of a piece of code can still be a brow-furrowing affair. (In well-functioning firms, no code is put into production until it’s undergone “code review,” with colleagues looking it over — not just to make sure it works, but that it’s sufficiently readable by others.) One estimate suggests that coders spend 10 times the amount of time parsing lines of software than they do writing them. This is another reason coders can be so snippish and judgy about the style of their colleagues’ code. They know they may eventually need to read it.
This is the situation in which Whitney found herself. She and her colleague pored over the login code for hours, slowly figuring out how it worked, like an electrician patiently following the tangled wires that someone else had laid down in an apartment. Hmmm, this section triggers that chunk of code, which would get that other function to start up…
Then suddenly, they saw it. When they finally had enough of the code’s structure loaded into their minds, they could see the bug.
The problem began the moment someone connected to NYU’s network. When students logged in, the system gave them a random temporary ID number for that session. To generate the random ID number, the program would “seed” its random-number generator using the timestamp, the exact instant that the student logged in. But what if two students just coincidentally logged in at precisely the same second? They’d be issued the same quasi-random number; oops! To prevent this, the programmer added another “seed” number, the IP address of the computer that the student was using. NYU had tons of IP addresses, so the programmer figured there was no chance any two students logging in would have precisely the same timestamp and precisely the same IP address. Right?
Nope. Years later, NYU and Kinko’s switched over to a new technology that funneled lots of computers through just one or two IP addresses. They did this to handle the explosive growth of internet use on campus, but nobody realized it might interfere with the old login system, written so many years previously. But it did. Suddenly it became possible for two people to log in at the same IP address. The two users would be assigned the exact same session ID, and presto: One of them would be logged into the other person’s account, able to see the other person’s email and notes.
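The mechanics of that bug are easy to reproduce in miniature. The sketch below is not NYU’s actual login code (and is in Python, whatever the portal was really written in); it simply seeds a random-number generator with the login timestamp plus the client’s IP address, as the passage describes, and shows that two logins sharing both values receive the same “random” session ID.

```python
import random

def make_session_id(login_timestamp, ip_address):
    """Generate a session ID the way the NYU code reportedly did:
    from a random-number generator seeded with timestamp + IP address."""
    rng = random.Random(f"{login_timestamp}:{ip_address}")
    return rng.randrange(10**9)

# Before NAT: two students logging in during the same second still
# come from distinct IP addresses, so their seeds (and IDs) differ.
alice = make_session_id(1_090_000_000, "128.122.4.10")
bob = make_session_id(1_090_000_000, "128.122.4.11")

# After the switch: both students appear to come from one shared IP,
# so same second + same address produces the exact same session ID.
carol = make_session_id(1_090_000_000, "192.0.2.1")
dave = make_session_id(1_090_000_000, "192.0.2.1")
print(carol == dave)  # True: one student lands in the other's session
```

The original programmer’s assumption held for years; it was the invisible change of a system around the code, not the code itself, that turned a reasonable design into a privacy bug.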
In a flurry, Whitney wrote some code to test to see if their diagnosis was correct. It was. They’d figured it out. It’d take more weeks of slogging to actually fix the bug, but at least the mystery was solved.
And she was suffused with a drug-like euphoria, a feeling of mastery and accomplishment that rendered her aglow. “It was wonderful,” she recalls. “I walked the halls of Warren Weaver Hall, up and down the little H-shaped hall, just going, I am a golden god! I am a golden god!”
She wanted to savor the moment, because she knew it wouldn’t last.
“I knew that the moment I sat down again, I was gonna find the next thing that was broken,” she says, and sighs. Sure, lots of the code at NYU worked fine. Most of it did, probably, including some she had written herself. But you didn’t spend much time pondering the stuff that worked. Indeed, by definition, if it’s working, you’re usually ignoring it. “The actual thing that a programmer spends their time on is all the shit that’s broken. The entire activity of programming is an exercise in continual failure.
“The programmer personality is someone who has the ability to derive a tremendous sense of joy from an incredibly small moment of success.”
Part of what’s so thrilling about a programming “win” is how abruptly it can emerge. “Code can quickly change states; it goes from not working at all to working, in a flash,” as Cal Henderson, the CTO and cofounder of Slack, once told me. This provides a narcotic jolt of pleasure, one so intense that coders will endure almost any grinding frustration just to taste it again […] (page 69)
The greatest bug-hunt narrative of all time has to be Ellen Ullman’s The Bug, a novel that depicts the obsession of a software developer who is driven over the edge by the elusiveness of a bug.
—
Software engineering is, unfortunately, still male-dominated, but it wasn’t always this way.
Coding jobs exploded in the ’50s and ’60s, and for women, this weird new field was quite receptive to them: Since almost nobody knew how to code, in the very early days, men had no special advantage. Indeed, firms were struggling to figure out what type of person would be good at coding. You needed to be logic-minded, good at math, and meticulous, they figured. In this respect, gender stereotypes could work in women’s favor. Some executives argued that women’s traditional expertise at fastidious pastimes like knitting and weaving imparted precisely this mind-set. (The 1968 book Your Career in Computers argued that people who liked “cooking from a cookbook” would make good programmers.) Mostly, firms gave potential coders a simple pattern-recognition test, which many women readily passed. Most hires were then trained on the job, which made the field particularly receptive to neophytes of any gender. (“Know Nothing about Computers? Then We’ll Teach You (and pay you while doing so),” as one ad enthused.) Eager to recruit women, IBM even crafted a brochure entitled My Fair Ladies. Another ad by the firm English Electric showed a bob-haired woman chewing a pen, noting that “Some of English Electric Leo’s best computer programmers are as female as anything.”
Even some black women could find a toehold, so hungry was the field for talent. In Toronto, a young black woman named Gwen Braithwaite had married a white man, and they discovered that given the racism of the time, nobody would rent to them. That meant they had to buy a house, which meant she needed a job. After seeing an ad for “data processing” jobs, Braithwaite showed up and convinced the all-white employers to let her take the coding-aptitude test. When she placed in the 99th percentile, the supervisors thought she’d pulled a prank and grilled her with verbal questions. When she passed those with flying colors, they realized she was the real thing and hired her. She became one of the first female coders in Canada, and she led several big projects to computerize insurance companies. “I had it easy,” she later told her son. “The computer didn’t care that I was a woman or that I was black. Most women had it much harder.”
By 1967, there were so many women programming that Cosmopolitan magazine commissioned a feature on “The Computer Girls.” Illustrated with pictures of beehived women piloting computers that looked like the control deck of the Starship Enterprise, the story described this crazy, new Wild West that was paying women $20,000 a year—or over $140,000 in today’s money. Coding had quickly become a rare white-collar professional field in which women could thrive. For an ambitious woman in the ’60s, the traditional elite professional fields—surgery, law, mechanical engineering—accepted almost no women. Programming was an outlier; the proportion of female coders was fully one in four, a stunningly high figure, given the period. The options for women with math degrees were limited—teaching high school math or doing rote calculations at insurance firms, for example—so “women back then would basically go, ‘Well, if I don’t do programming, what else will I do?’” notes Janet Abbate, a professor in the department of Science, Technology, and Society at Virginia Tech, who has closely studied the era. “At the time back then, the situation was very grim for women’s opportunities.” (page 192)
How differently the demographics of the software industry turned out.
I believe that as a craft, programming is something that anyone can enjoy, excel at, and use to make new things, for fun or for profit. But while computers may not care about race or gender, our industry has not been welcoming to people who don’t fit the stereotype of what a developer looks like. And now, with AI systems trained on data that reflects our existing biases, we are at risk of baking these prejudices into the systems of tomorrow.
For this reason, it’s more important than ever that a diverse range of people participate in creating a future that works for all.
