I was supposed to do an Animorphs post today, but then I realised that I don’t feel like reading a book where the Animorphs travel through time, cross the Delaware and meet Hitler. So instead you’re getting another entry in my
~~wildly popular~~ ~~widely read~~ ~~well-known~~ currently extant ‘Your Science Might Be Terrible If’ series, in which I complain about topics I am wholly unqualified to talk about.
Today we’re doing computers.
Before I get started, I should point out that today’s post has a background reading requirement. (This is the Intergalactic Academy, remember? And yes, this will be on the final exam.) Before progressing any further, read 8 Scenes That Prove Hollywood Doesn’t Get Technology: Towards a Hermeneutics of the Pop-Culture Singularity by esteemed scholar Dr. Cracked Dot Com. It’s an exhaustive, dauntingly erudite piece of work, but I think you’ll agree that your understanding of the field will be greatly increased by reading every word of it.
Dr. Com covers some of the most egregious examples of compu-ignorance in film and television, but we’re here to point and laugh at screw-ups in science fiction novels. Do your characters use computers? Do they possess l33t skills? Are they – god help us – ‘cowboys on the frontier of the information superhighway’?
Then buckle up, because Your Computer Science Might Be Terrible If…
…you have a fictional operating system that would never work in real life.
The following is a screenshot of Mac OS X’s Terminal program, which is basically an environment for using UNIX shells:
That’s a list of all the files in a particular folder on my computer. Even if you’ve never used a UNIX-like OS before, you can probably work out what you’re looking at here. There’s the name of the owner of each file (i.e. me), its size in bytes, the date it was last modified, and its name. The letters and dashes on the left-hand side denote file permissions, and are probably the only part of the screenshot that will be entirely unfamiliar to anyone coming from a different OS.
Now here’s the same folder in Mac OS X’s Finder:
You may notice that both interfaces present almost exactly the same information, even though UNIX was created in the late sixties and the version of Mac OS X I’m running debuted in 2011. Obviously the first UNIX systems would have looked different to my screenshot up there, but that basic method of organizing information about files has been around for ages.
There’s a reason for that: it works. Any operating system that comes out within the next twenty or thirty years will, I guarantee you, still organize data into discrete files with certain attributes (date of creation, name, type etc.) recorded by the OS. New-fangled devices like smartphones also do this, even if they don’t always display the same quantity of information to the user.
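If you want to see just how stable this model is, most languages will hand you those same attributes in a few lines. Here’s a minimal Python sketch (pointed at the current directory purely for illustration) that prints roughly the same information Terminal and Finder were showing:

```python
import os
from datetime import datetime

# "." is just the current directory; point it anywhere you like.
for entry in os.scandir("."):
    info = entry.stat()
    modified = datetime.fromtimestamp(info.st_mtime)
    # Name, size in bytes, last-modified date: the same basics UNIX
    # has been recording since the late sixties.
    print(f"{entry.name}\t{info.st_size} bytes\t{modified:%Y-%m-%d}")
```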
If you make up a fictional operating system, it must seem usable to the reader. You want them to come away thinking ‘Yes, that sounds like something I would actually be able to use in real life’, even if the characters in your book are using an outlandish SF input method. This is why I can’t take CSI: Miami seriously – none of the computers in the show look even remotely usable. (Well, there are plenty of other reasons to not take CSI: Miami seriously, but that’s one of the big ones for me.)
The worst case of an author getting this wrong is probably in the Net Force books. Here, Tom Clancy takes a fairly standard SF conceit – futuristic VR! – and uses it to instantly reveal that he doesn’t have the first idea how computers work. There’s a particularly bonkers scene where a pair of Net Force Operatives™ remove bugs in a proprietary system using what is essentially a video game. They put on VR goggles, then shoot virtual rats in a simulation. Every time they shoot a rat, one of the bugs in the system gets fixed.
Stop and think about what’s actually happening here. The user shoots a virtual rat, and the computer…what, fixes a bug on its own? There is no way to translate the high-level action (shooting virtual rats) into anything that would make sense on the level of the program’s source code. It would be like having a program that deleted viruses every time you clicked a button. Why not just remove the user input and make the program delete every virus automatically?
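To see how little sense this makes, here’s a deliberately silly (but runnable) Python sketch of what the rat-game would have to be doing under the hood. Everything in it is invented by me, which is the point – no real system has a magic auto-fix:

```python
# A parody of the Net Force 'debugger'. The Bug class and its fix()
# method are pure invention: if a computer could repair a bug on
# command like this, the rats were never necessary.

class Bug:
    def __init__(self, name):
        self.name = name

    def fix(self):  # magic auto-repair; no real system has this
        print(f"fixed {self.name}")

bugs = [Bug("off-by-one"), Bug("null pointer"), Bug("race condition")]

# The Net Force version: one rat per bug, one trigger pull per fix.
for bug in bugs:
    input(f"A rat scurries past ({bug.name}). Press Enter to shoot: ")
    bug.fix()

# The sane version, identical result, zero rodents:
# for bug in bugs:
#     bug.fix()
```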
…you think anyone can hack anything.
Below is a list of electrical items. Identify the ones that you would NOT be able to ‘hack into’ using a regular laptop with no specialized hardware. You have thirty seconds:
- A computer.
- A 10-year-old car.
- An ATM.
- A cash register.
- A microwave.
- The lighting or heating system of an average office building.
- A nuclear missile.
All done? Here’s the answer sheet:
- Can hack.
- Can’t hack.
- (Probably) can’t hack.
- Can’t hack.
- Can’t hack.
- Can’t hack.
- Can’t hack.
I think we just introduced plot holes into every action movie of the last decade where anyone uses a computer.
I should point out that every item on the list uses ‘a computer’ of some sort or another. Most of them will be small embedded systems (like the microwave), but they’re still computers. So why can’t you hack them?
Simple: they’re not connected to any sort of network. Remember, we’re talking about hacking things without using specialized hardware. Your microwave probably does not have an ethernet port hidden behind a secret panel on its side. An automatic heating system in an office building has no good reason to be connected to a wifi network.
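Here’s the whole problem in miniature: ‘hacking into’ a device over a network starts with opening a connection to it, and a device with no network has no address to connect to. A minimal Python sketch – the address and port below are made up for illustration:

```python
import socket

# A made-up address for some hypothetical office gadget. If the device
# isn't on a network, there is no address to put here at all, and the
# heist movie ends before it starts.
target = ("192.168.1.50", 22)

try:
    with socket.create_connection(target, timeout=3):
        print("Reachable - now the hard work would actually begin.")
except OSError:
    print("No route, no open port, no 'in'. Your microwave is safe.")
```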
But what about the ATM? They must be part of a network, right? Sure, but that doesn’t mean you just need to position yourself close enough to one and black-hat it up until your bank account is filled to capacity. Machines like ATMs will almost always use remote systems and tools specifically designed for them, which means you won’t have much luck ‘hacking’ them unless you do a good deal of background research and maybe get your hands on some software developed by the company that made them. (Having said that, people have had some luck getting into ATMs thanks to shoddy security measures.)
So that’s most of the list rendered unhackable without specialized software and/or hardware and/or knowledge. But what about the nuclear missile? Hasn’t there been something in the news lately about Iranian nuclear power plants getting themselves all hacked?
Well, yes, but that was a uranium enrichment facility, not a missile, and it involved a virus called Stuxnet, not a lone hacker. So no, you can’t force nuclear missiles to launch from your local Starbucks. Sorry.
(Incidentally, Stuxnet was reportedly developed jointly by the US and Israeli governments, which means your fictional super-virus doesn’t have to be developed by Russia and/or China. It turns out the bad guy is us!)
…you include any sort of computer specifications whatsoever.
I recently decommissioned an eight-year-old computer with a 3GHz processor. My current, far more powerful laptop has a 2.3GHz processor. But bigger numbers are always better when it comes to computers, right?
Well, yes, but people make the mistake of assuming that one computer-number takes precedence over all other computer-numbers. That old computer might have had a 3GHz processor, but that processor only had a single ‘core’. My laptop processor has two, which is better (bigger numbers!). The processor also has a whole boatload of other improvements, but we won’t go into that now.
The take-home point is that computer hardware doesn’t progress by just upping the numbers year-on-year. Even in the realm of storage, it isn’t as simple as hard drives gradually getting bigger. We now have 3TB disks, but we also have 512GB SSDs. Which is better? That depends: the 3TB drive can hold more cat pictures than you can shake a stick at, but the SSD will access those cat pictures at speeds that would have been the stuff of fiction a decade ago. The ‘future’ of computer storage is arguably not increasingly-huge hard drives (how much space do most people even need?), but cheap, decently-sized and lightning-fast SSDs. You probably wouldn’t have been able to predict that before commercial SSDs hit the market, though.
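To put very rough numbers on that trade-off – these are ballpark speeds I’m assuming for illustration, not benchmarks – here’s the back-of-the-envelope arithmetic in Python:

```python
# Ballpark figures only: rough sequential read speeds, in MB/s.
HDD_SPEED = 120   # a typical spinning disk
SSD_SPEED = 500   # a decent SATA SSD

hdd_size_mb = 3 * 1024 * 1024   # 3 TB expressed in MB
ssd_size_mb = 512 * 1024        # 512 GB expressed in MB

print(f"Reading the whole 3TB HDD: {hdd_size_mb / HDD_SPEED / 3600:.1f} hours")
print(f"Reading the whole 512GB SSD: {ssd_size_mb / SSD_SPEED / 3600:.1f} hours")
# And for random access (thousands of small cat pictures), the gap is
# wider still: HDD seeks cost milliseconds, SSD reads cost microseconds.
```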
If you wrote a near-future SF novel five years ago and had your characters boasting about their 12GHz processors, congratulations, your book now looks ridiculously out of date. We don’t have 12GHz processors, we have 2.2GHz+ processors with four (or more) discrete cores. Your best bet is to just not mention numbers at all, because you have no way to predict what kind of metrics computer nerds will be bragging about ten years from now. If you’re writing SF, your readers will just assume the computers are ridiculously powerful by today’s standards unless told otherwise.
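If you want to see why core count became the bragging metric, here’s a minimal Python sketch of splitting a CPU-heavy job across every core. The summing-of-squares work function is just a stand-in for anything computationally expensive:

```python
import multiprocessing as mp

def crunch(chunk):
    # Stand-in for any CPU-heavy job: here, summing squares.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    cores = mp.cpu_count()   # the number that actually grew, not the GHz
    data = list(range(4_000_000))
    # Deal the work out round-robin, one slice per core.
    chunks = [data[i::cores] for i in range(cores)]
    with mp.Pool(cores) as pool:
        total = sum(pool.map(crunch, chunks))
    print(f"Work split across {cores} cores: {total}")
```

Four slowish cores finish work like this roughly four times faster than one of them alone, which is more or less why the GHz race stalled.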
…you perpetuate the myth of the lone genius.
The novel Ready Player One is about a massive virtual system designed and built by, apparently, one dude. Presumably other people were involved in its creation at some point, but we don’t hear a whole lot about them.
In a way, this isn’t entirely out of the ordinary. Nobel Prizes in chemistry, physics and medicine are given to a few named individuals for work carried out by teams of people, and it’s true that a single visionary designer can come up with ideas that never could have emerged from a design-by-committee environment. But in computer hardware and software in particular, almost everything is done by huge groups of people. Most big-budget videogames are created by hundreds of programmers, artists, and designers. Photoshop has a credits page longer than what you’d find at the end of most movies. The iPhone was not developed by Steve Jobs alone.
And so on. Basically, major computer software and hardware is produced by huge teams of engineers at international companies for the same reason that drugs are created by massive pharmaceutical juggernauts. A lone genius, no matter how ‘visionary’, isn’t going to develop the next iPhone without a whole lot of money and people.
…your characters can reprogram anything.
Challenge time: pick any random program installed on your computer and send me a screenshot of its source code. It’s okay, I’ll wait.
You may never have thought about this before (I didn’t for years), but you can’t just crack open a program and start messing around in its internals. Oh, it’s possible, but it requires a whole lot of specialized tools that most people simply won’t have at hand. This is why I roll my eyes every time I read about a character altering some commercial piece of software the same way you’d crop a photograph in MS Paint. It’s just not that easy.
(The reason why is aggressively nerdy, but the short version is that the languages commonly used for programming – C or C++ or Java or what have you – aren’t the same as the languages a computer understands. Compiling a program from a high-level language readable to humans involves translating it into a low-level language readable by computers. Most people who can understand the former cannot necessarily understand the latter, because it’s a hell of a lot more complicated, and even if you can, companies take measures to ensure nobody easily reverse-engineers their products.)
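Python actually makes this gap easy to demo, because it ships with a disassembler in the standard library. Even the bytecode for a trivial function is a jolt compared to the source:

```python
import dis

def greet(name):
    return "Hello, " + name

# Print the bytecode the Python interpreter actually executes.
# A compiled C or C++ program won't even give you this much;
# all you'd get back from it is raw machine code.
dis.dis(greet)
```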
So those are my tips for Writing About Computers. Got any more to add? Noticed a glaring mistake? Want to suggest future topics for this illustrious series? Vent your spleen in the comments!
(Particularly if you have ideas for future posts. Seriously, I’m all out.)