
Are programmers inherently bad?

Rhythm 0 (1974) was a six-hour work of performance art by Yugoslav artist Marina Abramović in Studio Morra, Naples. The work involved Abramović standing still while the audience was invited to do to her whatever they wished, using one of the seventy-two objects she had placed on a table. Some of the items were benign: a feather boa, some olive oil, honey, perfume, grapes, roses. Others were not.

I had a pistol with bullets in it, my dear. I was ready to die. (Marina Abramović)
A shot of Marina Abramović performing Rhythm 0.

Her instructions regarding the performance were placed on the table. They read:

Instructions.
There are 72 objects on the table that one can use on me as desired.
Performance.
I am the object.
During this period I take full responsibility.
Duration: 6 hours (8 pm – 2 am).

While the psychological layer of the experiment (if any) was not its stated objective, Marina's performance shed light on the inherent nature of humans and how they behave when absolved of consequences: the artist knowingly forfeited her right to act against, or complain about, anything the participants did to her.

As she later said:

What I learned was that ... if you leave it up to the audience, they can kill you. (Marina Abramović)

Art critic Thomas McEvilley, who was present during the entire performance, wrote:

It began tamely. Someone turned her around. Someone thrust her arms into the air. Someone touched her somewhat intimately. [...] In the third hour all her clothes were cut from her with razor blades. In the fourth hour, the same blades began to explore her skin. Her throat was slashed so someone could suck her blood. Various minor sexual assaults were carried out on her body. [...] Faced with her abdication of will, with its implied collapse of human psychology, a protective group began to define itself in the audience. When a loaded gun was thrust to Marina's head and her own finger was being worked around the trigger, a fight broke out between the audience factions. (Thomas McEvilley)

Marina's act surfaced a darker side of humanity, one that we might not even know is there until we are presented with the right environment to exercise it. The fact that the artist took full responsibility for all possible actions and consequences seems to have triggered the participants' inner demons. Immediate guilt was no longer an issue.

The Stanford Prison Experiment pointed to a similar conclusion. It was a social psychology experiment that attempted to investigate the psychological effects of perceived power, focusing on the struggle between prisoners and prison officers. It was conducted at Stanford University between August 14–20, 1971, by a research group led by psychology professor Philip Zimbardo, using college students. In the study, volunteers were randomly assigned to be either guards or prisoners in a mock prison and were asked to act accordingly.

Long story short, the experiment was abandoned after six days (out of the two weeks planned) due to the abnormal cruelty and the physical and psychological torture that the guards inflicted on the prisoners. Unlike Marina's performance, which enabled extreme behaviour through full a priori consent, the Stanford prison experiment demonstrated, as per Zimbardo's conclusions, that the simulated-prison situation, rather than individual personality traits, caused the participants' behaviour. It seems that the thirst for power, and the feeling of having it, can cause extreme effects after all.

Both experiments reached similar conclusions: given the right conditions, it is in human nature not to be afraid to act irresponsibly, or even to harm ourselves or others. Somehow, it seems like there is evil inside every one of us, just waiting for the right time to surface.

So this raises the question: are humans inherently bad?


One way of asking about our most fundamental characteristics is to look at babies. Babies' minds are a wonderful showcase for human nature. Babies are humans with the absolute minimum of cultural influence – they don't have many friends, have never been to school and haven't read any books. They can't even control their own reflexes, let alone speak the language, so their minds are as close to innocent as a human mind can get.


Ingenious experiments carried out at Yale University in the U.S. concluded that even the youngest humans have a sense of right and wrong, and, furthermore, an instinct to prefer good over evil.

But doesn't that suggest that evil is present within us and that we're capable of recognising it (even from a very young age)? It's beyond my expertise to answer why that happens, or to discuss the evolutionary aspects that led to such innate behaviour and conscience. At a minimum though, the study showed that tightly bound into the nature of our developing minds is the ability to make sense of the world in terms of motivations, and a basic instinct to prefer friendly intentions over malicious ones. It is on this foundation that adult morality is built.

Speaking of morality ...


Humans possess the ability to reason about things. We have something we call a conscience, which acts as a guiding force, a trusted advisor in our day-to-day life. Historically speaking, in order for the human species to thrive and form the societal hierarchies we know today, rules had to be established, and thus morality was born. Nowadays, apart from the innate ability to somehow distinguish right from wrong, morality is taught to us by our parents from the youngest age.

The point here is, morality stops us from doing bad things. We are aware of the practical consequences of evil (prison, loss of status, societal disconnect) and the emotional ones (guilt, which one could argue is sometimes even more powerful).

We need morality to act as a guard and prevent us from slipping to the dark side. But morality is something we worked out and built over time, as a direct consequence of evolution and the need to survive. It is not something we are born with. It is something that is taught. It strikes me as funny that we needed to invent something just to keep our potentially destructive inner urges in check.

Maybe a better question (but not one I am capable of answering) might be: why do we feel like this (evil at times) at all?


I'm a firm believer that the tendency to slip towards the dark side exists in each one of us, and it manifests itself in almost every aspect of our daily lives. To get more specific, when applied to programming, I'm of the opinion that if you can do something wrong (a.k.a. write bad code), then most likely you will, at some point.


This sounds like an adaptation (or another facet, if you will) of the maxim proposed by Jeff Atwood: any application that can be written in JavaScript will eventually be written in JavaScript.

The ability to write bad code comes in many forms. Sometimes, it's embedded at the language level. Think global variables in JS, or the confusing prototypal inheritance system. Or the cascading part of CSS, which is both a blessing and a curse. Or the invention of !important (which had legitimate use cases at its roots), later abused as an escape hatch for when you can no longer make sense of your CSS codebase. Other times, it's embedded at the framework level. Think about having two-way data binding as the de facto approach to updating your UIs, which sounds good in theory but scales rather poorly. Or having too much flexibility due to a very unopinionated implementation, which only causes churn and confusion because there's no right way to do things. And sometimes (honestly, a lot of the time) it's just embedded at the human level: we abuse the faults of the programming language or the paradigm we use, out of laziness, time pressure or genuine disinterest.
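To make the language-level point concrete, here is a tiny, hypothetical JavaScript sketch (not tied to any real codebase) of how a missing declaration silently leaks a global when the code runs in sloppy mode:

```js
// Run as a classic (non-strict) script. Assigning to an undeclared
// variable doesn't fail — it silently creates a global property.
function incrementCounter() {
  counter = 1; // oops: no `let`/`const`/`var`
  return counter;
}

incrementCounter();
console.log(globalThis.counter); // 1 — the "local" now lives on the global object
```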

In my opinion, the last one is the craziest, and probably the hardest to solve. It deserves an article of its own (which will, however, be left for another time). But maybe the way to think about taming it is to draw inspiration from how we thought about solving the other two (the language and framework levels).

Much like we slowly invented morality to act as our day-to-day guardian, in the software world we invented other safety nets: we pour effort into complex testing suites, or we come up with constraints built into the code we use. Constraints are extremely valuable. They limit the ways in which someone can screw things up and, in the long run (that is, after assimilating them), they reduce the programmer's cognitive overhead, because there isn't much to decide about how to do a certain thing (there aren't a lot of options, right?).
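As a small illustration of such a built-in constraint (assuming a modern JavaScript environment), strict mode turns the silent leak from the earlier sketch into a hard error, and a linter rule like ESLint's no-undef catches it before the code even runs:

```js
"use strict";

function incrementCounter() {
  counter = 1; // ReferenceError: counter is not defined — the constraint refuses the leak
  return counter;
}

incrementCounter();
```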

In this regard, in the JavaScript frameworks space, Ember shines, at least when compared to the other solutions. The mantra is convention over configuration, which provides the programmer with a mental model (backed by the software as much as possible) that tells them how to do things.
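As a rough sketch of what that convention looks like (assuming a recent Ember app named my-app; exact file names and syntax vary by version), declaring a route is the only explicit wiring, and the framework finds everything else by convention:

```js
// app/router.js
import EmberRouter from '@ember/routing/router';
import config from 'my-app/config/environment';

export default class Router extends EmberRouter {
  location = config.locationType;
  rootURL = config.rootURL;
}

Router.map(function () {
  this.route('posts');
});

// By convention, Ember then looks for roughly:
//   app/routes/posts.js      — data loading for the route
//   app/templates/posts.hbs  — the template it renders
// One "right" place for each concern means fewer decisions to argue about.
```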

The driving point here is that we use ideas, and we invent forms of limitation, to keep ourselves from slipping to the dark side when coding. Who knows, maybe the answer to our own laziness, or to external pressure affecting the code, lies in some sort of framework or set of rules we haven't really emphasised well enough as an industry so far. Maybe we need something standardised at a more global scale? I don't know. What I do know is that such a framework does not exist in the IT world. This is in contrast with other domains, though. Think about doctors or lawyers, who are held responsible (malpractice) if they do not perform at their best and act in accordance with the best interests of their patient or client.


I'm of the opinion that humans are inherently bad (there is a slice of evil lurking in each of us) and will act as such if given the chance. Thus, programmers are inherently bad and will write bad code if presented with the opportunity. To deal with these things, we had to come up with our own safety nets: morality, guilt or software constraints. In a way, I find it ironic that the most dominant species on Earth has to come up with ways to protect its own members from themselves. But it is what it is.

If I can leave you with one piece of advice, it's this: embrace constraints. Come up with your own set of project rules (from coding style to architecture), and make sure you and your team adhere to them as closely as possible. Enforce and verify them via code reviews and peer-to-peer feedback. Only then will you be able to sleep better, knowing that you're performing at your absolute best. And even if you think of yourself (or others) as diligent, responsible programmers, I think it's better to plan for the worst and hope for the best, right?
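One lightweight way to make such project rules stick (assuming an ESLint-based JavaScript setup; the specific rules below are just examples) is to encode them as configuration, so the machine enforces them on every change and code review can focus on design instead:

```js
// .eslintrc.js — a few example project rules, checked locally and in CI
module.exports = {
  parserOptions: { ecmaVersion: 2022, sourceType: 'module' },
  env: { es2022: true, browser: true },
  rules: {
    'no-undef': 'error',       // catches the accidental-global leak shown earlier
    'no-unused-vars': 'error', // dead code tends to hide bugs
    eqeqeq: 'error',           // avoid JS's loosest comparison semantics
    'prefer-const': 'warn',    // default to not reassigning where possible
  },
};
```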

Bringing AI into the discussion

I want to end with a thought I've had for a while now. Disclaimer: I'm not an expert on the subject, yet somehow I can't seem to shake this slightly apocalyptic scenario from my brain.


We know that humans possess consciousness and that their actions are driven and shaped by it. But what about Artificial Intelligence? With the advent of computers and their computational power, many human actions (or entire jobs) have been handed over to AI. Now, I think we're far from creating a superintelligence capable of mimicking the entirety of what a human represents. I'm not saying it's impossible, though.

But are we sure we can create something capable of reasoning and empathy? Something that will, in turn, understand and employ the concept of morality in its actions? It seems, at least for now, a daunting task, one that we have no immediate answer to. What's clear, though, is that we're not there yet. We already have examples of AI gone wrong: the Tay bot built by Microsoft, or the paperclip maximizer thought experiment (also known, more generally, as the value alignment problem).

Ethics, and a morality-guided evolution of algorithms, seem to be tough nuts to crack when it comes to AI. It's our job to act responsibly and make sure we employ (or maybe better, teach?) constraints when building autonomous software.

Otherwise, it can turn ugly pretty quickly ...

Vlad Zelinschi

Human. Entrepreneur. Speaker. CTO. Google Developer Expert. Advisor for https://codecamp.ro and https://ndrconf.ai.
