A Song of Flame and Snow

Where I lived growing up in Australia, there was no snow. There was barely rain, and if it ever got cold enough for water to freeze, the entire city lost its mind. The closest experience we ever had to snow was a massive hailstorm. Shops closed, preachers proclaimed the end times were upon us, there was mass panic, and people used surfboards (remember, this is Australia) to try and recreate sledding as seen on TV, using the layer of ice on the streets instead of snow.

It wasn’t that we lacked our own weird weather. It’s just that we didn’t get a whole lot of storms. We did get droughts. It got hot and dry enough that if one wasn’t careful, a backyard pool that existed at the beginning of the week could easily be a dry hole in the ground by week’s end. Water rationing made keeping it full even harder.

The storms we did have were more often firestorms. With the land as dry as it was, the threat of fire loomed over everyone. A single match, or a cigarette dropped accidentally or negligently, could kindle a fire that would consume the country. A single bolt of lightning striking the wrong tree could ignite a blaze that would render the most flammable parts of the country, which were largely the areas surrounding population centers, uninhabitable.

Through the summer months, public information adverts warned citizens of hazards that could create the spark that would burn down our civilization. School projects demonstrated how a glass bottle discarded in the open could, if left at the wrong angle, magnify the sun’s rays and start a fire. Poster campaigns reminiscent of WWII and Cold War civil defence efforts lined walls in public places, warning of the danger. Overhead, helicopters scouted and patrolled daily, checking for any signs of smoke, and marking off which pools, ponds, and lakes still had water that could be used if needed. Meanwhile, ground vehicles checked water meters and sprinkler setups, and issued stiff fines to those who used more than their fair share.

At school, we conducted fire drills, not only for evacuation, but for prevention, and active firefighting. We were quizzed on which tools and tactics worked best against which type of fire. We drilled on how to aim and hold a hose while battling flames, how to clear a fire break, how to fortify a residential structure against an oncoming firestorm, how to improvise masks to prevent smoke inhalation, and other first aid. We were told that if the fire reached our homes, the professionals would likely already be overwhelmed, and it might well be up to us to do what we could, either to shelter in place, or to eliminate any fuel from our homes before retreating.

When the fires did come, as they did almost every summer, we followed along, marking on maps the deployments of the fire brigades, the areas being cleared to create a break, the areas marked for evacuation, and so on. We noted the location of our local teams, and the presence of airborne units that we saw on television and publicity material. The campaigns often lasted for weeks, and could span the length of the continent as new theaters flared up and were pacified.

While there are some similarities, a blizzard is something quite different. Whereas the Australian bushfires are perhaps best understood as a kind of siege, blizzards of the sort that exist in the American northeast are closer to a single pitched battle. Blizzards are mostly contained within the space of a day or two, while bushfires can rage for an entire season.

A blizzard is almost quaint in this way. It forces people to break routines, and limits their actions in a way that creates the kind of contrived circumstances that make interesting stories. Yet the inconvenience is always temporary. Snow melts, or is removed, power lines get repaired, and schools go back into session.

Bushfires are only quaint if one is sufficiently far away from them, and only then in the way that people call the Blitz, with its Anderson Shelters, blackouts, evacuations, and “we’ll all go together when we go” spirit, quaint, in retrospect. A blizzard may impact a larger swath of land in a shorter period, dumping snow to knock out electricity to large numbers of people, and closing roads. But bushfires will scour towns off the map, suffocating or burning those who stand in their way. And while snowstorms are mostly bound by the laws of meteorology, a sufficiently large bushfire will spawn its own weather system.

It is an interesting contrast to consider while my house and the landscape around it is buried in snow.

Not Dead Again

So last night, as of writing, I very nearly died. This comes off as somewhat melodramatic, I know, but I regard it as a simple fact. Last night, the part of me that makes me a cyborg and prevents me from being dead suffered a catastrophic failure. Possibly multiple catastrophic failures, depending on how you count, and how much of the blame you assign to the hardware versus to me for trusting it.

That last sentence doesn’t make a whole lot of sense out of context, so here’s an illustrative example: take your average AI-goes-rogue-and-starts-hurting-humans plot. In fact, it doesn’t even have to be that extreme: take the plot of WarGames. Obviously, McKittrick didn’t intend to start an accidental nuclear war, and even tried to prevent it. But he did advocate for trusting a machine to oversee the process, and the rest of the film makes it pretty clear that, even if he’s not a villain, he’s at least partly at fault. The machine, inasmuch as a (at least probably) non-sentient machine can take blame, was responsible for the film’s main crisis, but it was literally just following its instructions.

Last night wasn’t quite as bad as that example. My life support didn’t go rogue, so much as the alarm that’s supposed to go off and warn me and others that my blood sugar is dropping critically low didn’t go off, at least not at first. The secondary alarm that is hardcoded and can’t be silenced (normally a fact I loathe) did go off, but only on my receiver, and not on my mother’s.

By the time I was woken up, only half-conscious at this point, I was so far gone that I couldn’t move my legs. I’m not sure if the problem was that my legs wouldn’t respond, or that my brain was so scrambled that it couldn’t issue commands to them. I picked up my phone and immediately texted my mother for help. In retrospect, I could and probably should have called her, either on the phone, or by screaming bloody murder until everyone in the house was awake. The fact that neither of these things occurred to me speaks to my mental state.

I felt like I was drowning. It didn’t help that my body was dumping all of its heat into my surrounding linens, creating a personal oven, sweating up a small lake, and shivering all at the same time. I don’t know why my autonomic nervous system decided this was a good idea; I suspect the part of my brain that controls temperature was simply out of commission, and so was doing everything it knew how to do, simultaneously and at maximum capacity.

I was drowning in my mind as much as on land. I struggled to pull together coherent thoughts, or even incoherent ones. I fought against the current of panic. I couldn’t find the emergency sugar stash that I normally kept on my nightstand, and I couldn’t move to reach the one in the hall. I looked around in the darkness of my home at night, trying to find something that might save me.

And that was when I felt it. The pull of darkness. It was much the same tug as being sleepy, but stronger, and darker in a way that I can’t quite put words to. It called to me to simply lie down and stop moving. I had woken up because of the alarm, and because I had felt like I was baking in my own juices, but these things wouldn’t keep me awake if I let go of them. My vision darkened and lost its color, inviting me to close my eyes. Except I knew that if I fell asleep, there was a very good chance I wouldn’t wake up again. This was, after all, how people died from hypoglycemia. In their sleep. “Peacefully”.

I didn’t make a choice so much as I ignored the only choice given. In desperation, I began tearing open the drawers on my nightstand that I could reach. I rifled through the treasured mementos and documents like a stranger would; a looter in my own home. At last I found a couple of spearmints, which I presumably acquired long ago at a restaurant and left in my drawer when emptying my pockets. I frantically tore off the wrappers and shoved the mints into my mouth, crunching them between my teeth. I could feel the desperately needed sugar leach into my mouth. It wasn’t enough, but it was a step in the right direction. I found some throat lozenges, and similarly swallowed them.

I kept pillaging my nightstand with shaking hands, until I hit upon what I needed. A rice krispy treat. I spent several seconds searching for an expiration date, though I’m not sure why. Even if it were expired, it wouldn’t have changed my options. So long as fending off death was the goal, it was better to be hospitalized for food poisoning than dead from low blood sugar. I fumbled with the wrapping, mangling the food inside, until I managed to get it open. I gnashed my teeth into the ancient snack, swallowing before I had even finished chewing. I continued to rifle through the drawers while I waited for the glucose to absorb into my bloodstream.

I texted my mother again, hoping she might wake up and come to my aid. At the same time, I listened to music. The goal of this was twofold. First, it helped keep the panic at bay and focus my thoughts. Second, and more importantly, it helped anchor me; to keep me awake, and away from the darkness.

Whether it was the music, or the sugar, or both, the dark, sleepy sensation that pulled me towards eternity started to ebb. More of the color came back to my vision. The trend indicator on my sensor, though still dangerously low and dropping, was slowing in its descent.

It was now or never. I yanked my uncooperative legs over the side of the bed, testing their compliance and trying to will them to work with me. With trepidation, but without the luxury of hesitation, I forced myself to stand up, wobbling violently and very narrowly avoiding a face-first collision as the floor leapt up to meet me. Without time to steady myself, I shifted the momentum of falling into forward motion, knocking over my rubbish bin and a few articles and pieces of bric-a-brac that lay on my winding path from bed to doorway. Serendipitously, I avoided destroying anything; my lamp was knocked over, hit the wall, and harmlessly bounced off it back into a standing position.

I staggered towards the IKEA bookshelf where we kept my emergency sugar stash. I braced myself against the walls and sides of the bookshelf as I took fistfuls of this and that and stuffed them into my pajama pockets, knocking over containers and wrecking the organizational system. So be it. This was a live-to-clean-up-another-day situation. With the same graceless form of loosely-controlled falling over my own feet, I tripped, stumbled, and staggered back to my bed to digest my loot. I downed juice boxes and scarfed peppermint puffs stockpiled from post-holiday sales.

By this point, the hunger had kicked in. My brain had started to function well enough to realize that it had been starving. The way the human brain responds to this is to induce a ravenous hunger that is more compulsion than sensation. And so I devoured with an unnatural zeal. About this time, my mother did show up, woken by some combination of my text messages, the noise I had stirred up, and the continued bleating of my life support sensors. She asked me what I needed, and I told her I needed more food, which was true both in the sense that my blood sugar was still low, and in the sense that a compulsive hunger was quickly overrunning my brain and needed to be appeased.

My blood sugar came up quickly after that, and it took another fifteen minutes or so before the hunger faded. By that time, the darkness had receded. I was still sleepy, but I felt confident that this was a function of having been rudely awoken at an ungodly hour rather than the call of the reaper. I felt confident that I would wake up again if I closed my eyes. I didn’t feel safe; I hardly ever feel safe these days, especially after so harrowing an incident; but I no longer felt in imminent danger.

I woke up this morning slightly worse for wear. Yet I am alive, and that is never nothing. It had been a while since I last had a similar experience of nearly-dying. Of course, I evade death in a fashion every day. That’s what living with a chronic disease is. But it had been a while since I had last faced death as such, where I had felt I was acutely dying; where I had been dying, and had to take steps to avert that course. After so many similar incidents over so many years, naturally, they all start to blur together and bleed over in memory, but I reckon it has been a few months since the last incident.

I am slightly at a loss as to what tone I ought take here. Obviously, nearly dying is awful and terrifying, and would be even more so if this wasn’t a semi-regular occurrence. Or perhaps the regularity makes it worse, because of the knowledge that there will be a next time. On the other hand, I am glad to not have died, and if there is going to be a next time, I may as well not waste what time I do have moping about it. As the old song goes: What’s the use of worrying? It never was worth-while. Oh, pack all your troubles in your old kit bag and smile, smile, smile!

It is difficult to find a balance between celebrating small victories like not dying when I very well might have, and letting myself become complacent. Between acknowledging my handicaps and circumstances in a way that is sound, and letting them override my ambitions and sabotage myself. Of course, I am neither the first, nor the only person to face these questions. But as the answers necessarily vary from person to person, I cannot draw upon the knowledge of others in the same way that I would for a more academic matter. I wish that I could put this debate to bed, nearly as much as I wish that it wasn’t so relevant.

Notes on Descriptivism

There is an xkcd comic which deals with linguistic prescriptivism. For those not invested in the ongoing culture war surrounding grammar and linguistics, prescriptivism is the idea that there is a singular, ideal, correct version of language to which everyone ought adhere. This is distinct from linguistic descriptivism, which maintains that language is better thought of not as a set of rules, but as a set of norms; and that to try and enforce any kind of order on language is doomed to failure. In short, prescriptivism prescribes idealized rules, while descriptivism describes existing norms.

The comic presents a decidedly descriptivist worldview, tapping into the philosophical question of individual perception to make the point that language is inherently up to subjective interpretation, and therefore must vary from individual to individual. The comic also pokes fun at a particular type of behavior which has evolved into an Internet Troll archetype of sorts: the infamous Grammar Nazi. This is mostly an ad hominem, though it hints at another argument frequently used against prescriptivism: that attempts to enforce a universal language generally cause, or at least often seem to cause, more contention, distress, and alienation than they prevent.

I am sympathetic to both of these arguments. I acknowledge that individual perceptions and biases create significant obstacles to improved communication, and I will agree, albeit with some reluctance and qualifications, that oftentimes, perhaps even in most cases, the subtle errors and differences in grammar (NB: I use the term “grammar” here in the broad, colloquial sense, to include other similar items such as spelling, syntax, and the like) which one is liable to find among native speakers of a similar background do not cause enough confusion or discord to warrant the often contentious process of correction.

Nevertheless, I cannot accept the conclusion that these minor dissensions must necessarily cause us to abandon the idea of universal understanding. For that is my end goal in my prescriptivist tendencies: to see a language which is consistent and stable enough to be maximally accessible, not only to foreigners, but more importantly, to those who struggle in grappling with language to express themselves. This is where my own personal experience comes into the story. For, despite my reputation for sesquipedalian verbosity, I have often struggled with language, in both acute and chronic terms.

In acute terms, I have struggled with even basic speech during times of medical trauma. To this end, ensuring that communication is precise and unambiguous has proven enormously helpful, as a specific and unambiguous question, such as “On a scale of zero to ten, how much pain would you say you are currently experiencing?” is vastly easier to process and respond to than one that requires me to contextualize an answer, such as “How are you?”.

In chronic terms, the need to describe subjective experiences relies on keen use of precise vocabulary, which, for success, requires a strong command of language on the part of all parties involved. For example, the differences between feeling shaky, dizzy, lightheaded, nauseated, faint, or experiencing vertigo are subtle, but carry vastly different implications in a medical context. Shaky is a buzzword for endocrinology; dizzy is a catch-all, but most readily associated with neurology; lightheadedness is most often tied to respiratory issues; nausea has a close connection with gastroenterology; vertigo refers specifically to balance, which may be an issue for neurology, ophthalmology, or an ENT specialist; and faintness is usually tied to circulatory problems.

In such contexts, these subtleties are not only relevant, but critical, and the casual disregard of these distinctions will cause material problems. The precise word choice used may, to use an example from my own experience, determine whether a patient in the ER is triaged as urgent, which in such situations may mean the difference between life and death. This is an extreme, albeit real, example, but the same dynamic can and will play out in other contexts. In order to prevent and mitigate such issues, there must be an accepted standard common to all for the meaning and use of language.

I should perhaps clarify that this is not a manifesto for hardcore prescriptivism. Such a standard is only useful insofar as it is used and accepted, and insofar as it continues to be common and accessible. Just as laws must from time to time be updated to reflect changes in society, and to address new concerns which were not previously foreseen, so too will new words, usages, and grammar inevitably need to be added, and obsolete forms simplified. But this does not negate the need for a standard. Descriptivism, labeling language as inherently chaotic and abandoning attempts to further understanding through improved communication, is a step backwards.

Byronic Major

I’ve tried to write some version of this post three times now, starting from a broad perspective and slowly focusing in on my personal complaint, bringing in different views and sides of the story. Unfortunately, I haven’t managed to finish any of those. It seems the peculiar nature of my grievance on this occasion lends itself more easily to a sort of gloomy malaise liable to cause antipathy and writer’s block than the kind of righteous indignation that propels good essays.

Still, I need to get these points off my chest somehow. So I’m opting for a more direct approach: I’m upset. There are many reasons why I’m upset, but the main ones pertain to trying to apply to college. I get the impression from my friends who have had to go through the same that college applications may just be a naturally upsetting process. In a best case scenario, you wait in suspense for several weeks for a group of strangers to pass judgement on your carefully-laid life plans; indeed, on your moral character.

Or, if you’re me, you’ve had enough curveballs in your life so far that the pretense of knowing what state you’ll be in and what to do a year from now, let alone four years from now and for the rest of your life, seems ridiculous to the point of lunacy. So you pull your hair and grit your teeth, and flip coins to choose majors because the application is due in two hours and you can’t pick undecided. So you write post-hoc justifications for why you chose that major, hoping that you’re a good enough writer that whoever reads it doesn’t see through your bluff.

Although certainly anxiety-inducing, this isn’t the main reason why I’m upset. I just felt it needed to be included here for context. While I was researching majors to possibly pick, I came across nursing. This is a field in which I have a fair amount of experience. After all, I spent more time in school in the nurse’s office than in a classroom. I happen to know that there is a global shortage of nurses; more pronounced, indeed, than the shortage of doctors. As a result, not only are there plenty of open jobs with increasing wages and benefits, but there are a growing number of scholarship opportunities and incentive programs for training.

Moreover, I also know that there is an ongoing concerted effort in the nursing field to attempt to correct the staggering gender imbalance, which came about as a result of Florence Nightingale’s characterization of nursing as a stereotypically feminine activity; a characterization which in recent years has become acutely harmful to the field. Not only has this characterization discouraged young men who might be talented in the field, and created harmful stereotypes, but it has also begun to have an effect on women who seek to establish themselves as independent professionals. It seems the “nursing is for good girls” mentality has caused fewer “good girls”, that is, bright, driven, professional women, to apply to the field, exacerbating the global shortage.

In other words, there is a major opportunity for people such as myself to do some serious good. It’s not as competitive or high pressure as med school, and there are plenty of nursing roles that aren’t exposed to contagion, and so wouldn’t be a problem for my disability. The world is in dire need of nurses, and gender is no longer a barrier. Nursing is a field that I could see myself in, and would be willing to explore.

There’s just one problem: I’m not allowed into the program. My local university, or more specifically, the third-party group they contract with to administer the program, has certain health requirements in order to minimize liability. Specifically, they want immune titers (which I’ve had done before, and which have never come back anything but deficient).

I understand the rationale behind these restrictions, even if I disagree with them for personal reasons. It’s not a bad policy. Though cliched to say, I’m not angry so much as disappointed. And even then, I’m not sure precisely with whom it is that I find myself disappointed.

Am I disappointed with the third-party contractor for setting workplace safety standards to protect both patients and students, and to adhere to the law in our litigious society? With the university, for contracting with a third party in the aim of giving its students hands-on experience? With the law, for having such high standards of practice for medical professionals? I find it hard to place fault, even accidental fault, with any of these entities. So what, then? Am I upset with myself for being disabled, and for wanting to help others as I have been helped? Maybe; probably, at least a little bit. With the universe, for being arranged such that bad outcomes happen simply as a result of circumstance? Certainly. But raging at the heavens doesn’t get me anywhere.

I know that I’m justified in being upset. My disability is preventing me from helping others and doing good: that is righteous anger if ever there was a right reason to be angry. A substantial part of me wants to be upset; to refuse to allow anyone or anything to stand in the way of my doing what I think is right, or to dictate the limits of my abilities. I want to be a hero, to overcome the obstacles in my path, to do the right thing no matter the cost. But I’m not sure in this instance the obstacles need to be overcome.

I don’t know where that leaves me. Probably something about a tragic hero.

Automatism

I’m not sure what exactly the nightmare was that stuck with me for a solid twenty minutes after I got out of bed and before I woke up. Whatever it was, it had me utterly convinced that I was in mortal peril from my bed linens. And so I spent those twenty minutes trying desperately to remove them from me, before the cold woke me up enough to realize what I was doing, and I had the presence of mind to stop.

This isn’t the first time I’ve woken up in the middle of doing something in an absentminded panic. Most of those times, however, I was either in a hospital, or would be soon. There have been a handful of isolated incidents in which I have woken up, so to speak, at the tail end of a random black-out. That is, I will suddenly realize that I’m most of the way through the day, without any memory of events for some indeterminate time prior. But this isn’t waking up per se; more like my memory is suddenly snapping back into function, like a recording skipping and resuming at a random point later on.

I suppose it is strictly preferable to learn that my brain has evidently delegated its powers to operate my body, such that I need not be conscious to perform tasks, as opposed to being caught unawares by whatever danger my linens posed to me that required me to get up and dismantle them from my bed with such urgency that I could not wake up first. Nevertheless, I am forced to question the judgement of whatever fragment of my unconscious mind took it upon itself to operate my body without following the usual channels and getting my conscious consent.

The terminology, I recognize, is somewhat vague and confusing, as I have difficulty summoning words to express what has happened and the state it has left me in.

These episodes, both these more recent ones, and my longer history of breaks in consciousness, are a reminder of a fact that I try to put out of mind on a day-to-day basis, and yet which I forget at my own peril. Namely, the acuteness of my own mortality and the fragility of my self.

After all, who, or perhaps what, am I outside of the mind which commands me? What, or who, gives orders in my absence? Are they still orders if given by a what rather than a who, or am I projecting personhood onto a collection of patterns executed by the simple physics of my anatomy? Whatever my (his? its?) goal was in disassembling my bed, I did a thorough job of it, stripping the bed far more efficiently and thoroughly than I could have by accident.

I might not ever find serious reason to ask these questions, except that every time so far, it has been me that has succeeded it. That is, whatever it does, it is I who has to contend with the results when I come back to full consciousness. I have to re-make the bed so that both of us can sleep. I have to explain why one of us saw fit to make a huge scuffle in the middle of the night, waking others up.

I am lucky that I live with my family, who are willing to tolerate the answer of “actually, I have no idea why I did that, because, in point of fact, it wasn’t me who did that, but rather some other being by whom I was possessed, or possibly I have started to exhibit symptoms of sleepwalking.” Or at least, they accept this answer now, for this situation, and dismiss it as harmless, because it is, at least so far. Yet I am moved to wonder where the line is.

After all, actions will have consequences, even if those actions aren’t mine. Let’s suppose for the sake of simplicity that these latest episodes are sleepwalking. If I sleepwalk, and knock over a lamp, that lamp is no more or less broken than if I’d thrown it to the ground in a rage. Moreover, the lamp has to be dealt with; its shards have to be cleaned up and disposed of, and the lamp will have to be replaced, so someone will have to pay for it. I might get away with saying I was sleepwalking, but more likely I would be compelled to help in the cleanup and replacement of the lamp.

But what if there had been witnesses who had seen me at the time, and said that they saw my eyes were open? It is certainly possible for a sleepwalker to have their eyes open, even to speak. And what if this witness believes that I was in fact awake, and fully conscious when I tipped over the lamp?

There is a relevant legal term and concept here: Automatism. It pertains to a debate surrounding medical conditions and culpability that is still ongoing and is unlikely to end any time soon. Courts and juries go back and forth on what precisely constitutes automatism, and to what degree it constitutes a legal defence, an excuse, or merely an avenue to plead down charges (e.g. manslaughter instead of murder). As near as I can tell, and without delving too deeply into the tangled web of case law, automatism is when a person is not acting as their own person, but rather like an automaton. Or, to quote Arlo Guthrie: “I’m not even here right now, man.”

This is different from insanity, even temporary insanity, or unconsciousness, for reasons that are complex and contested, and have more to do with the minutiae of law than I care to get into. But to summarize: unconsciousness and insanity have to do with denying criminal intent, which is required in most, though not all, crimes. Automatism, by subtle contrast, denies the criminal act itself, by arguing that there is not an actor by whom an act can be committed.

As an illustration, suppose an anvil falls out of the sky, cartoon style, and clobbers innocent bystander George Madison as he is standing on the corner, minding his own business, killing him instantly. Even though something pretty much objectively bad has happened, something which the law would generally seek to prevent, no criminal act per se has occurred. After all, who would be charged? The anvil? Gravity? God?

Now, if there is a human actor somewhere down the chain of causality; if the reason the anvil had been airborne was because village idiot Jay Quincy had pushed a big red button, which happened to be connected to an anvil-railgun being prepared by a group of local high schoolers for the Google science fair; then maybe there is a crime. Whether or not Jay Quincy acted with malice aforethought, or was merely negligent, or reckless, or really couldn’t have been expected to know better, would be a matter for a court to decide. But there is an actor, so there is an act, so there might be a crime.

There are many caveats to this defence. The most obvious is that automatism, like (most) insanity, is something that has to be proven by the defence, rather than the prosecution. So, to go back to our earlier example of the lamp, I would have to prove that, during the episode, I was sleepwalking. Merely saying that I don’t recall being myself at the time is not enough. For automatism to stick, it has to be proven, with hard evidence. Having a medical diagnosis of somnambulism and a history of sleepwalking episodes might be useful here, although it could also be used as evidence that I should have known to prevent this in the first place (I’ll get to this point in a minute).

Whether or not this setup is fair, forcing the defence to prove that they weren’t responsible and assuming guilt otherwise, this is the only way that the system can work. The human sense of justice demands that crimes be committed, to some degree or another, voluntarily and of free will. Either there must be an act committed that oughtn’t have been, or something that ought have been prevented that wasn’t. Both of these, however, imply choices, and some degree of conscious free will.

Humans might have a special kind of free will, at least on our good days, that grants us these rights and responsibilities, but science has yet to show how this mechanism operates discretely from the physical (automatic) processes that make up our bodies. Without assuming free will, prosecutors would have to contend with proving, in each and every case, something that has never even been proven in the abstract. So the justice system makes a perhaps unreasonable assumption that people have free will unless there is something really obvious (and easily provable) that impedes it, like a gun to one’s head, or a provable case of sleepwalking.

There is a second caveat here that’s also been hinted at: while a person may not be responsible for their actions while in a state of automatism, they can still be responsible for putting themselves into such a state, either intentionally or negligently, which discounts the defence of automatism. So, while whatever happens after you fall asleep behind the wheel might happen in an automatic state, the law takes the view that you should have known better than to allow yourself to be behind the wheel if you were at risk of falling asleep, and therefore you can still be charged. Sleepwalking does not work as a defence if, say, there was an unsecured weapon that one should’ve stowed away while conscious. Intoxication, even involuntary intoxication, whether from alcohol or some other drug, is almost never accepted.

This makes a kind of sense, after all. You don’t want to let people orchestrate crimes beforehand and then disclaim responsibility because they were asleep, or what have you, when the act occurred. On the other hand, this creates a strange kind of paradox for people with medical conditions that might result in a state of automatism at some point, and who are concerned about being liable for their actions and avoiding harm to others. After all, taking action beforehand shows that you knew something might happen and should have been prepared for it, and are therefore liable. And not taking action is obviously negligent, and makes it difficult to prove that you weren’t acting under your own volition in the first place.

Incidentally, this notion of being held responsible; in some sense, of being responsible; for actions taken by a force other than my own free will, is one of my greatest fears. The idea that I might hurt someone, not even accidentally, but as an involuntary consequence of my medical situation; that is to say, the condition of the same body that makes me myself; I find absolutely petrifying. This has already happened before, as I have accidentally hurt people while flailing as I passed in and out of a coma, and there is no reason to believe that the same thing couldn’t happen again.

So, what to do? I was hoping that delving into the law might find me some solace from this fear; that I might encounter some landmark argument that would satisfy not just some legal liability, but which I would be able to use as a means of self-assurance. Instead it has done the opposite, and I am less confident now than when I started.

Thoughts on Steam

After much back and forth, I finally have a Steam account. I caved eventually because I wanted to be able to actually play my brother’s birthday present to me; the game Cities: Skylines and all of its additional downloadable content packs. I had resisted what has for some time felt inevitable, downloading Steam, for a couple of reasons. The first was practical. Our family’s main computer is now close to a decade old, and in its age does not handle all new things gracefully, or at least, does not do so consistently. Some days it will have no problem running multiple CPU-intensive games at once. Other days it promptly keels over when I so much as try to open a document.

Moreover, our internet is terrible. So terrible, in fact, that its latest speed test results mean it does not qualify as broadband under any statutory or technical definition, despite the fact that we pay not only for broadband, but for the highest available tier of it. Allegedly this problem has to do with the geography of our neighborhood and the construction of our house. Apparently, according to our ISP, the same walls which cannot help but share our heating and air conditioning with the outside, and which allow me to hear a whisper on the far side of the house, are totally impermeable to WiFi signals.

This fear was initially confirmed when my download told me that it would only be complete in an estimated two hundred and sixty-one days. That is to say, it would take several times longer to download than it would for me to fly to the game developer’s headquarters in Sweden and get a copy on a flash drive. Or even to take a leisurely sea voyage.

This prediction turned out, thankfully, to be wrong. The download took a mere five hours; the vast majority of the progress was made during the last half hour when I was alone in the house. This is still far longer than the fifteen minutes or less that I’m accustomed to when installing from a CD. I suppose I ought to give some slack here, given that I didn’t have to physically go somewhere to purchase the CD.

My other point of contention with Steam is philosophical. Steam makes it abundantly clear in their terms and conditions (which, yes, I do read, or at least skim, as a general habit) that when you are paying them money to play games, you aren’t actually buying anything. At no point do you actually own the game that you are nominally purchasing. The legal setup here is terribly complicated, and given its novelty, not crystal clear in its definition and precedent, especially with the variations in jurisdictions that come with operating on the Internet. But while it isn’t clear what Steam is, Steam has made it quite clear what it isn’t. It isn’t selling games.

The idea of not owning the things that one buys isn’t strictly new. Software has never really been for sale in the old sense. You don’t buy Microsoft Word; you buy a license to use a copy of it, even if you receive it on a disk that is yours to own. Going back further, while you might own the physical token of a book, you don’t own the words in it, inasmuch as they are not yours to copy and sell. This is a consequence of copyright and related concepts of intellectual property, which are intended to assist creators by granting them a temporary monopoly on their creations’ manufacture and sale, so as to incentivize more good creative work.

Yet this last example pulls at a loose thread: I may not own the story, but I do own the book. I may not be allowed to manufacture and sell new copies, but I can dispose of my current copy as I see fit. I can mark it, alter it, even destroy it if I so choose. I can take notes and excerpts from it so long as I am not copying the book wholesale, and I can sell my single copy of the book to another person for whatever price the two of us may agree upon, the same as any other piece of property. Software is not like this, though a strong argument can be made that it is only very recently that this new status quo has become practically enforceable.

Indeed, for as long as software has been sold in stores by means of disks and flash drives, it has been closer to the example of the classic book. For, as long as I have my CD, and whatever authentication key might come with it, I can install the contents wherever I see fit. Without Internet connectivity to report back on my usage, there is no way for the publisher to even know whether or not I am using their product, let alone whether I am using it in their intended manner. Microsoft can issue updates and changes, but with my CD and non-connected computer, I can keep my version of their software running how I like it forever.

Steam, however, takes this mindset that has existed in theory to its practical conclusion. You do not own the games that you pay for. This is roughly equivalent to the difference between buying a car and chartering a limo service. Now, there’s nothing inherently wrong with this approach, but it is a major shift. There is of course the shift in power from consumers to providers: rather than you getting to dispose of your games as you see fit, you can have them revoked by Steam if you misbehave or cheat. This is unnerving, especially to one such as myself who is accustomed to having more freedom with the things I buy (that’s why I buy them: to do with as I please), but not as interesting as the larger implications for the notion of property as a whole.

I don’t think the average layman knows or even cares about the particulars of license transfers. Ask such a layman what Steam does, and they’ll probably answer that they sell video games, in the same way that iTunes sells music. The actual minutiae of ownership are a distant second to the point of use. I call my games, and digital music, and the information on my Facebook feed mine, even though I don’t own them by any stretch of the imagination.

This use need not be exclusive either, so long as it never infringes on my own plans. After all, if there were a hypothetical person listening to my music and playing my games only precisely when I’m not, I might never notice.

So far I have referred mostly to digital goods, and sharing as it pertains to intellectual property. But this need not be the case. Ridesharing, for example, is already transforming the idea of owning and chartering a vehicle. On a more technical level, this is how mortgages, banknotes, and savings accounts have worked for centuries, in order to increase the money supply and expand the economy. Modern fiat currency, it will be seen, is not so much a commodity that is discretely owned as one whose value is shared and assigned between its holder, society, and the government backing it. This quantum state is what allows credit and debt, which permit modern economies to function and flourish.

This shift in thinking about ownership certainly has the potential to be revolutionary, changing prices and attitudes around these new goods. Whether or not it will remains to be seen. Certainly it remains to be seen whether this change will be a net positive for consumers as well as the economy as a whole.

Cities: Skylines seems to be a fun game that our family computer can just barely manage to play. At the moment, this is all that is important to me. Yet I will be keeping an eye on how, if at all, getting games through Steam influences my enjoyment, for good or for ill.

Thanksgivings

So Australia, where I did most of my growing up, doesn’t have a Thanksgiving holiday. Not even like Canada, where it’s on a different day. Arbor Day was a bigger deal at my school than American Thanksgiving. My family tried to celebrate, but between school schedules that didn’t recognize our traditions, time differences that made watching the Macy’s parade and football game on the day impossible, and a general lack of turkey and pumpkin pie in stores, the effect was that we didn’t really have Thanksgiving in the same way it is portrayed.

This is also at least part of the reason that I have none of the compunctions of my neighbors about putting up Christmas decorations, or wearing holiday apparel, as soon as the leaves start to change in September. Thanksgiving is barely a real holiday, and Halloween was something people barely decorated for, so neither of those things acted as boundaries for the celebration of Christmas, which, in contrast to the other two, was heavily celebrated and became an integral part of my cultural identity.

As a result, I don’t trace our Thanksgiving traditions back hundreds of years, up the family tree through my mother’s side to our ancestor who signed the Mayflower Compact, and whose name has been passed down through the ages to my brother. Rather, I trace our traditions back less than a decade to my first year in American public school, when my teacher made our class go through a number of stereotypical traditions like making paper turkeys by tracing our hands, and writing down things we were thankful for. Hence: what I’m thankful for this year.

First, as always, I am thankful to be alive. This sounds tacky and cheap, I know, so let me clarify. I am thankful to be alive despite my body which does not keep itself alive. I am thankful to have been lucky enough to have beaten the odds for another year. I am acutely aware that things could have quite easily gone the other way.

Perhaps it is a sad reflection that my greatest joy of this year is to have merely gotten through it. Maybe. But I cannot change the facts of my situation. I cannot change the odds I face. I can only celebrate overcoming them. This victory of staying alive is the one on which all others depend. I could not have other triumphs, let alone celebrate and be thankful for them without first being sufficiently not-dead to achieve and enjoy them.

I’m thankful to be done with school. I’m glad to have it behind me. While it would be disingenuous to say that high school represented the darkest period in my life; partly because it is too soon to say, but mostly because those top few spots are generally dominated by the times I nearly died, was in the ICU, etcetera; there can be no denying that I hated high school. Not just the actual building, or having to go there; I hated my life as a high school student. I didn’t quite realize the depths of my unhappiness until I was done, and realized that I didn’t actually hate my life by default. So I am thankful to be done and over with that.

I am thankful that I have the resources to write and take care of myself without also having to struggle to pay for the things I need to live. I am immensely thankful that I am able to sequester myself and treat my illnesses without having to think about what I am missing. In other words, I am thankful for being able to be unable to work. I am thankful that I have enough money, power, and privilege to stand up for myself, and to have others stand up for me. I am aware that I am lucky not only to be alive, but also to have access to a standard of care that makes my life worth living. I know that this is an advantage that is far from universal, even in my own country. I cannot really apologize for this, as, without these advantages, it is quite likely that I would be dead, or in such constant agony and anguish that I would wish I was. I am thankful that I am neither of those things.

I am thankful that these days, I am mostly on the giving end of the charitable endeavors that I have recently been involved in. For I have been on the receiving end before. I have been the simultaneously heartbreaking and heartwarming image of the poor, pitiful child, smiling despite barely clinging to life, surrounded by the prayer blankets, get well cards, books, and other care package staples that my friends and relations were able to muster, rush-shipped because it was unclear whether they would arrive “in time” otherwise. I defied the stereotype only insofar as I got better. I am doubly thankful, first that I am no longer in that unenviable position, and second, that I am well enough to begin to pay back that debt.

The Lego Census

So the other day I was wondering about the demographics of Lego minifigures. I’m sure we’re all at least vaguely aware of the fact that Lego minifigs tend to be, by default, adult, male, and yellow-skinned. This wasn’t terribly worthy of serious thought back when only a handful of different minifigure designs existed. Yet nowadays Lego has thousands, if not millions, of different minifigure permutations. Moreover, the total number of minifigures in circulation is set to eclipse the number of living humans within a few years.

Obviously, even with a shift towards trying to be more representative, the demographics of Lego minifigures are not an accurate reflection of the demographics of humankind. But just how out of alignment are they? Or, to ask it another way, could the population of a standard Lego city exist in real life without causing an immediate demographic crisis?

This question has bugged me enough that I decided to conduct an informal study based on a portion of my Lego collection; one that I reckon is large enough to be vaguely representative of a population. I have chosen to conduct my counts based on the central district of the Lego city that exists in our family basement, on the grounds that it includes a sizable population from across a variety of different sets.

With that background in mind, I have counted roughly 154 minifigures. The area of survey is the city’s central district, which for our purposes means the largest tables with the greatest number of buildings and skyscrapers, and so presumably the highest population density.

Because Lego minifigures don’t have numerical ages attached to them, I counted ages by dividing minifigures into four categories: Children, Young Adults, Middle Aged, and Elderly. Obviously these categories are qualitative and subject to some interpretation. Children are fairly obvious, given their distinct, smaller figures. An example of the adult categories follows.

The figure on the left would be a young adult. The one in the middle would be classified as middle aged, and the one on the right, elderly.

Breakdown by age

Children (14)
Lego children are the most distinct category because, in addition to childish facial features and clothes, they are given shorter leg pieces. This is the youngest category, as Lego doesn’t include infant minifigures in its sets. I would guess that this category covers roughly ages 5-12.

Young Adults (75)
The young adult category encompasses a fairly wide range, from puberty to early middle age. This group is the largest, partially because it includes the large contingent of conscripts serving in the city. An age range would be roughly 12-32.

Middle Aged (52)
Includes visibly older adults who do not meet the criteria for elderly. This group encompasses most of the city’s administration and professionals.

Elderly (13)
The elderly are those who stand out for being old, with features such as beards, wrinkled skin, or grey and white hair.

Breakdown by industry

Second, occupations. Again, since minifigures can’t exactly give their own occupations, and since most jobs happen indoors where I can’t see, I was forced to make some guesses based on outfits and group them into loose collections.

27 Military
15 Government administration
11 Entertainment
9 Law enforcement
9 Transport / Shipping
9 Aerospace industries
8 Heavy industry
6 Retail / services
5 Healthcare
5 Light Industry

An unemployment rate would be hard to gauge, because most of the time the unemployment rate is adjusted to omit those who aren’t actively seeking work, such as students, retired persons, disabled persons, homemakers, and the like. Unfortunately for our purposes, a minifigure who is transitionally unemployed looks pretty much identical to one who has decided to take an early retirement.

What we can take a stab at is a workforce participation rate. This is a measure of what percentage of the people eligible to work are actually doing so. So, for our purposes, this means tallying the total number of people assigned jobs and dividing by the total number of people capable of working, which we will assume means everyone except children. This gives us a ballpark of about 74%, decreasing to 68% if we exclude the military from both counts to look only at the civilian economy. Either of these numbers would be somewhat high, but not unexplainably so.
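For anyone who wants to check the arithmetic, here is a minimal sketch in Python, using the counts tallied above (the occupation list sums to 104 employed minifigures):

    # Workforce participation, using the census counts from this post.
    total = 154
    children = 14
    military = 27
    employed = 27 + 15 + 11 + 9 + 9 + 9 + 8 + 6 + 5 + 5  # occupation tallies, 104 in total

    working_age = total - children                                # 140 minifigures old enough to work
    overall = employed / working_age                              # about 0.74
    civilian = (employed - military) / (working_age - military)   # about 0.68

    print(f"overall participation: {overall:.0%}")                # 74%
    print(f"civilian participation: {civilian:.0%}")              # 68%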

Breakdown by sex

With no distinction between the physical forms of Lego bodies, the differences between the sexes in minifigures are based purely on cosmetic details such as hair type, the presence of eyelashes, makeup, or lipstick on a face, and dresses. This is obviously based on stereotypes, and makes it tricky to tease apart edge cases. Is the figure with poorly-detailed facial features male or female? What about that faceless conscript marching in formation with their helmet and combat armor? Does dwelling on this topic at length make me some kind of weirdo?

The fact that Lego seems to embellish characters that are female with stereotypical traits suggests that the default is male. Operating on this assumption gives you somewhere between 50 and 70 minifigures with at least one distinguishing female trait depending on how particular you get with freckles and other minute facial details.

That’s a male to female ratio somewhere between 2.08:1 and 1.2:1. The latter would be barely within the realm of ordinary populations, and even then would be highly suggestive of some kind of artificial pressure such as sex selective abortion, infanticide, widespread gender violence, a lower standard of medical care for girls, or some kind of widespread exposure, whether to pathogens or pollutants, that causes a far higher childhood fatality rate for girls than would be expected. And here you were thinking that a post about Lego minifigures was going to be a light and gentle read.
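As a quick sketch of where those bounds come from, assuming the 154 total and the 50-to-70 range above, with everyone lacking a distinguishing female trait counted as male:

    # Sex ratio bounds from the census counts.
    total = 154
    for female in (50, 70):
        male = total - female
        print(f"{female} female -> {male / female:.2f} males per female")
    # 50 female -> 2.08 males per female
    # 70 female -> 1.20 males per female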

The former ratio is completely unnatural, though not unheard of in real life under certain contrived circumstances: certain South Asian and Middle Eastern countries have at times had male to female ratios as high as two to one, owing to the presence of large numbers of guest workers. In such societies, female breadwinners, let alone women traveling alone to foreign countries to send money home, are almost unheard of.

Such an explanation might be conceivable given a look at the lore of the city. The city is indeed a major trade port and center of commerce, with a non-negligible transient population, and it also hosts a sizable military presence. By a similar token, I could simply say that there are more people I’m not counting hiding inside all those skyscrapers, and that they make everything come out even. Except this kind of narrative explanation dodges the question.

The straight answer is that, no, Lego cities are not particularly accurate reflections of our real life cities. This lack of absolute realism does not make Lego bad toys. Nor does it detract from their value as an artistic and storytelling medium, or from their benefits in play therapy for patients with neuro-cognitive symptoms, which was my original reason for starting my Lego collection.

 

The War on Kale

I have historically been anti-kale. Not that I disapprove of the taste of kale. I eat kale in what I would consider fairly normal amounts, and have done so even while denouncing kale. My enmity towards kale is not towards the species Brassica oleracea (Acephala cultivar group). Rather, my hostility is towards that set of notions and ideas for which kale has become a symbol and shorthand in recent years.

In the circles which I frequent, at least insofar as kale is known of, it is known as a “superfood”, which, I am to understand, means that it is exceptionally healthy. It is touted, by those who are inclined to tout their choices in vegetables, as an exemplar of the kinds of foods that one ought to eat constantly. That is to say, it is touted as a staple for diets.

Now, just as I have nothing against kale, I also have nothing against diets in the abstract. I recognize that one’s diet is a major factor in one’s long term health, and I appreciate the value of a carefully tailored, personalized diet plan for certain medical situations as a means to an end.

In point of fact, I am on one such plan. My diet plan reflects my medical situation, which seems to have the effect of keeping me always on the brink of being clinically underweight, and far below the minimum weight which my doctors believe is healthy for me. My medically-mandated diet plan calls for me to eat more wherever possible; more food, more calories, more fats, proteins, and especially carbohydrates. My diet does not restrict me from eating more, but prohibits me from eating less.

Additionally, because my metabolism and gastrointestinal system are so capricious as to prevent me from simply eating more of everything without becoming ill and losing more weight, my diet focuses on having me eat the highest density of calories that I can get away with. A perfect meal, according to my dietician, nutritionist, endocrinologist, and gastroenterologist, would be something along the lines of a massive double burger (well done, per my immunologist’s request), packed with extra cheese, tomatoes, onions, lots of bacon, and a liberal helping of sauce, with a sizable portion of fries, and a thick chocolate malted milkshake. Ideally, I would have this at least three times a day, and preferably a couple more snacks throughout the day.

Here’s the thing: out of all the people who will eventually read this post, only a very small proportion will ever need to be on such a diet. An even smaller proportion will need to stay on this diet beyond a limited timeframe aimed at a specific end, such as recovering from an acute medical issue or bulking up for some manner of physical challenge. This is fine. I wouldn’t expect many other people to be on a diet tailored by a team of medical specialists precisely for me. Despite the overly simplistic terms used in public school health and anatomy classes, every body is subtly (or in my case, not so subtly) different, and has correspondingly different needs.

Some people, such as myself, can scarf down 10,000 calories a day for a week with no discernible difference in weight from if they had eaten 2,000. Other people can scarcely eat an entire candy bar without having to answer for it at the doctor’s office six months later. Our diets will, and should, be different to reflect this fact. Moreover, neither the composition of our respective diets, nor particularly their effectiveness, is at all indicative of some kind of moral character.

This brings me back to kale. I probably couldn’t have told you what kale was before I had fellow high schoolers getting in my face about how kale was the next great superfood, and how if only I were eating more of it, maybe I wouldn’t have so many health problems. Because obviously turning from the diet plan designed specifically by my team of accredited physicians in favor of the one tweeted out by a celebrity is the cure that centuries of research and billions in funding have failed to unlock.

What? How dare I doubt its efficacy? Well, obviously it’s not going to “suppress autoimmune activation”, whatever that means, with my kind of attitude. No, of course you know what I’m talking about. Of course you know my disease better than I do. How dare I question your nonexistent credentials? Why, just last night you watched a five-minute YouTube video with clip-art graphics showing how this diet = good and others = bad. Certainly that trumps my meager experience of a combined several months of direct instruction and training from the best healthcare experts in their respective fields, followed by a decade of firsthand self-management, hundreds of hours of volunteer work, and more participation in clinical research than most graduate students. Clearly I know nothing. Besides, those doctors are in the pockets of big pharma; the ones that make those evil vaccines and mind-control nanobots.

I do not begrudge those who seek to improve themselves, nor even those who wish to help others by the same means through which they have achieved success themselves. However, I cannot abide those who take their particular diet as the new gospel and try to see it implemented as a universal morality. Nor can I stand the insistence of those with no medical qualifications telling me that the things I do to stay alive, including my diet (things in which they have the distinct privilege of choice), are not right for me.

I try to appreciate the honest intentions where they exist, but frankly I cannot put up with someone who has never walked in my shoes criticizing my life support routine. My medical regimen is not a lifestyle choice any more than breathing is, and I am not going to change either of those things on second-hand advice received in a yoga lesson, or a TED talk, or even a public school health class. I cannot support a movement that calls for the categorical elimination of entire food groups, nor a propaganda campaign against the type of restaurant that helps me stick to my diet, nor the taxation of precisely the kind of foodstuffs which I have been prescribed by my medical team.

With no other option, I can do nothing but vehemently oppose this set of notions pertaining to the new cult of the diet, as I have sometimes referred to it, and its most prominent and recognizable symbol: kale. Indeed, in collages and creative projects in which others have encouraged me to express myself, the phrases “down with kale” and “death to kale”, with accompanying images of scratched-out pictures of kale and other vegetables, have featured prominently. I have one such collage framed and mounted in my bedroom as a reminder of all the wrongs which I seek to right.

This is, I will concede, something of a personal prejudice. Possibly even a stereotype. The kind of people who seem most liable to pierce my bubble and confront me over my diet tend to be the self-assured, zealous sort, and so it seems quite conceivable that I am experiencing some kind of selection bias that causes me to see only the absolute worst in my interlocutors. It is possible that, in my ideo-intellectual intifada against kale, I have thrown the baby out with the bathwater. In honesty, even if this were true, I probably wouldn’t apologize, on the grounds that what I have had to endure has been so upsetting, and the stakes of my own life and death so high, that my reaction has been not only justified but correct.

As a brief aside, there is, I am sure, a great analogy to be drawn here, and an even greater deal of commentary to be made on this last train of thought as a reflection of the larger modern socio-political situation: refusing to acknowledge wrongdoing despite being demonstrably in the wrong. Such commentary might even be more interesting and relevant than the post I am currently writing. Nevertheless, such musings are outside the scope of this particular post, though I may return to them in the future.

So my position has not changed. I remain convinced that all of my actions have been completely correct. I have not renounced my views, and do not plan to until such time as I feel I have been conclusively proven wrong, which I do not feel has happened. What has changed is that I have been given a glimpse of a different perspective.

What happened is that someone close to me received a new diagnosis of a disease, close in pathology to one that I have and for which I am also at higher risk, that prevents her from eating gluten. This person, who will remain nameless for the purposes of this post, is as good as a sister to me, and the rest of her immediate family are like my own. We see each other at least as often as I see other friends or relations. Our families have gone on vacation together. We visit and dine together regularly enough that any medical issue that affects their kitchen also affects our own.

Now, I try to be an informed person, and prior to my friend’s diagnosis I was at least peripherally aware of the condition with which she now has to deal. I could have explained the disease’s pathology, symptoms, and treatment, and I probably could have listed a few items that did and did not contain gluten, although this last is more a consequence of gazing forlornly at the shorter lines at gluten-free buffets at the conferences I attended than of any genuine intent to learn.

What I had not come to appreciate was how difficult it is to find food that is not only free from gluten in itself, but completely safe from any trace of cross-contamination, which, I have learned, makes a critical difference. Many brands and restaurants offer items that are labeled as gluten-free in large print, but then in smaller print immediately below disclaim all responsibility for the results of the actual assembly and preparation of the food, and indeed for the integrity of the ingredients received from elsewhere. This is, of course, utterly useless.

Where I have found the needed assurances, however, is from those for whom this purity is a point of pride. These are the suppliers that also proudly advertise that they do not stock items containing genetically modified foodstuffs, or any produce that has been exposed to chemicals. These are the people who proclaim the supremacy of organic food and vegan diets. They are scrupulous about making sure their food is free of gluten not just because it is necessary for people with certain medical conditions, but as a matter of moral integrity. To them these matters are not only practical but ethical. In short, these are kale supporters.

This puts me in an awkward position intellectually. On the one hand, the smug superiority with which these kale supporters denounce, on the basis of pseudoscience and the dietary pickiness outlined above, technologies that have great potential to decrease human hardship is grating at best. On the other hand, they are among the only people who seem to be invested in providing decent-quality gluten-free products that they are willing to stand behind, and though I would trust them on few other things, I am at least willing to trust that they have been thorough in their compulsiveness.

Seeing the results of this attitude I still detest from this new angle has forced me to reconsider my continued denouncements. The emergence of a niche gluten-free market, undoubtedly a recent development, has alas not been driven by increased sensitivity to those with specific medical dietary restrictions, but by the fact that my friend’s medical treatment happens to align with a subcategory of fad diet. That this niche market exists is a good thing, and it could not exist without kale supporters. The very pickiness that I malign has paved the way for a better quality of life for my comrades who cannot afford to be otherwise. The evangelical attitude that I rage against has also successfully demanded that the food I buy for my friend be safe for her to eat.

I do not yet think that I have horribly misjudged kale and its supporters. But regardless, I can appreciate that in this matter they have a point. And I consider it more likely now that I may have misjudged kale supporters on a wider front, or at least that my impression of them has been biased by my own experiences. I can appreciate that in demanding a market for their fad diets, they have also created real value.

I am a stubborn person by nature once I have made up my mind, and so even these minor and measured concessions are rather painful. But fair is fair. Kale has proven that it does have a purpose. And to that end I think it is only fitting that I wind down my war on kale. This is not a total cessation of all military action. There are still plenty of nutritional misconceptions to dispel and bad policies to refute, and besides that I am far too stubborn to even promise with a straight face that I’m not going to get into arguments about a topic that is necessarily close to my heart. But the stereotype which I drew up several years ago as a common thread between the people who would pester me about fad diets and misconceptions about my health has become outdated and unhelpful. It is, then, perhaps time to rethink it.

Technological Milestones and the Power of Mundanity

When I was fairly little, probably seven or so, I devised a short list of technologies, based on what I had seen on television, that I reckoned were at least plausible, and which I earmarked as milestones of sorts to measure how far human technology would progress during my lifetime. I estimated that if I were lucky, I would be able to get my hands on half of them by the time I retired. Delightfully, almost all of them have in fact already been achieved, less than fifteen years later.

Admittedly, all of the technologies I picked were far closer than I had envisioned at the time. Living in Australia, which seemed to be the opposite side of the world from where everything happened, and living outside of the truly urban areas of Sydney, which were kept up to date as a consequence of international business, it often seems that even though I technically grew up after the turn of the millennium, I was raised in a place and culture closer to the 90s.

For example, as late as 2009, even among adults, not everyone I knew had a mobile phone. Text messaging was still “SMS”, and was generally regarded with suspicion and disdain, not least because not all phones were equipped to handle messages, and not all phone plans included provisions for receiving them. “Smart” phones (still two words) did exist on the fringes; at that time I knew exactly one person who owned an iPhone, and two who owned BlackBerrys. But having one was still an oddity. Our public school curriculum was also notably skeptical, bordering on technophobic, about the rapid shift towards broadband and constant connectivity, devoting much class time to decrying the evils of email and chat rooms.

These were the days when it was a moral imperative to turn off your modem at night, lest the hacker-perverts on the godless web wardial a backdoor into your computer, which weighed as much as the desk it was parked on, or lest the machine overheat from being left on and catch fire (this happened to a friend of mine). Mice were wired and had little balls inside them that you could remove in order to sabotage them for the next user. Touch screens might have existed on some newer PDA models, and on some gimmicky machines in the inner city, but no one believed that they were going to replace the workstation PC.

I chose my technological milestones based on my experiences in this environment, and on television. Actually, since most of our television consisted of the same shows that played in the United States, only a few months behind their stateside premieres, they tended to be more up to date with the actual state of technology, and depictions of the near future that seemed obvious to an American audience seemed terribly optimistic, even outlandish, to me at the time. So, in retrospect, it is not surprising that after I moved back to the US, I saw nearly all of my milestones become commercially available within half a decade.

Tablet Computers
The idea of a single-surface interface for a computer has existed in the popular consciousness almost as long as futuristic depictions of technology themselves. It was an obvious technological niche that, despite numerous attempts, some semi-successful, was never truly cracked until the iPad. True, plenty of tablet computers existed before the iPad. But these were either clunky beyond use, fragile to the point of being unusable in practical circumstances, or horrifically expensive.

None of them were practical for, say, completing homework for school on, which at seven years old was kind of my litmus test for whether something was useful. I imagined that if I were lucky, I might get to go tablet shopping when it was time for me to enroll my own children. I could not imagine that affordable tablet computers would be widely available in time for me to use them for school myself. I still get a small joy every time I get to pull out my tablet in a productive niche.

Video Calling
Again, this was not a bolt from the blue. Orwell wrote about his telescreens, which amounted to two-way television, in the 1940s. By the 70s, NORAD had developed a fiber-optic system whereby commanders could conduct video conferences during a crisis. By the time I was growing up, expensive and clunky video teleconferences were possible, but they had to be arranged and planned, and often required special equipment. Even once webcams started to appear, lessening the equipment burden, you were still often better off calling someone.

Skype and FaceTime changed that, spurred on largely by the appearance of smartphones, and later tablets, with front-facing cameras, which were designed largely for this exact purpose. Suddenly, a video call was as easy as a phone call; in some cases easier, because video calls are delivered over the Internet rather than requiring a phone line and number (something which I did not foresee).

Wearable Technology (in particular smartwatches)
This was the one I was most skeptical of, as I got it mostly from The Jetsons, a show which isn’t exactly renowned for realism or accuracy. An argument can be made that this threshold hasn’t been fully crossed yet, since smartwatches are still niche products that haven’t caught on to the same extent as either of the previous items, and insofar as they can be used for communication as in The Jetsons, they rely on a smartphone or other device as a relay. This is a solid point, to which I have two counterarguments.

First, these are self-centered milestones. The test is not whether an average Joe can afford and use the technology, but whether it has an impact on my life. And indeed my smartwatch, which was affordable enough and functional enough for me to use in an everyday role, has had a noticeable positive impact. Second, while smartwatches may not be as ubiquitous as once portrayed, they do exist, and are commonplace enough to be largely unremarkable. The technology exists and is widely available, whether or not consumers choose to use it.

These were my three main pillars of the future. Other things which I marked down include such milestones as:

Commercial Space Travel
Sure, SpaceX and its ilk aren’t exactly the same as having shuttles to the ISS departing regularly from every major airport, with connecting service to the moon. You can’t have a romantic dinner rendezvous in orbit, gazing at the unclouded stars on one side, and the fragile planet earth on the other. But we’re remarkably close. Private sector delivery to orbit is now cheaper and more ubiquitous than public sector delivery (admittedly this has more to do with government austerity than an unexpected boom in the aerospace sector).

Large-Scale Remotely Controlled or Autonomous Vehicles
This one came from Kim Possible, and a particular episode in which our intrepid heroes got to their remote destination in a borrowed military helicopter flown remotely from a home computer. Today, we have remotely piloted military drones and early self-driving vehicles. This one hasn’t been fully met yet, since I’ve never ridden in a self-driving vehicle myself, but it is on the horizon, and I eagerly await it.

Cyborgs
I did guess that we’d have technologically altered humans, both for medical purposes and as part of the road to the enhanced super-humans that rule in movies and television. I never guessed at seven that in less than a decade I would be one of them, relying on networked machines and computer chips to keep my biological self functioning, plugging into the wall to charge my batteries when they run low, and studiously avoiding magnets, EMPs, and water unless I have planned ahead and am wearing the correct configuration and armor.

This last one highlights an important factor. All of these technologies were, or at least seemed, revolutionary. And yet today they are mundane. My tablet today is only remarkable to me because I once pegged it as a keystone of the future that I hoped would see the eradication of my then-present woes. This turned out to be overly optimistic, for two reasons.

First, it assumed that I would be happy as soon as the things that bothered me then no longer did, which is a fundamental misunderstanding of human nature. Humans do not remain happy the way an object in motion remains in motion until acted upon. Or perhaps it is that, as creatures of constant change and recontextualization, we are always undergoing so much change that remaining happy without constant effort is exceedingly rare. Humans always find more problems that need to be solved. On balance, this is a good thing, as it drives innovation and advancement. But it makes living life as a human rather, well, wanting.

Which lays the groundwork nicely for the second reason: novelty is necessarily fleeting. The advanced technology that today marks the boundary of magic will tomorrow be a mere gimmick, and after that, a mere fact of life. Computers hundreds of millions of times more powerful than those used to wage World War II and send men to the moon are so ubiquitous that they are considered a basic necessity of modern life, like clothes or literacy, both of which have millennia of incremental refinement and scientific striving behind them in their own right.

My picture of the glorious shining future assumed that the things which seemed amazing at the time would continue to amaze once they had become commonplace. This isn’t a wholly unreasonable extrapolation from the available data, even if it is childishly optimistic. Yet it is self-contradictory. The only way such technologies could be harnessed to their full capacity would be for them to become so widely available and commonplace that product developers could integrate them into every possible facet of life. This both requires and establishes a certain level of mundanity about the technology, which will eventually break the spell of novelty.

In this light, the mundanity of the technological breakthroughs that define my present life, relative to the imagined future of my past self, is not a bad thing. Disappointing, yes; and certainly it is a sobering reflection on the ungrateful character of human nature. But this very mundanity that breaks our predictions of the future (or at least, our optimistic predictions) is an integral part of the process of progress. Not only does this mundanity constantly drive us to reach for ever greater heights by making us utterly irreverent of those we have already achieved, but it allows us to keep evolving our current technologies to new applications.

Take, for example, wireless internet. I remember a time, or at least a place, when wireless internet did not exist for practical purposes. “Wi-Fi” as a term hadn’t caught on yet; in fact, I remember the publicity campaign undertaken to educate our technologically backwards selves about what the term meant, about how it wasn’t dangerous, and about how it would make all of our lives better, since we could connect to everything. Of course, at that time I didn’t know anyone outside of my father’s office who owned a device capable of connecting to Wi-Fi. But that was beside the point. It was the new thing. It was a shiny, exciting novelty.

And then, for a while, it was a gimmick. Newer computers began to advertise their Wi-Fi antennae, boasting that wireless was as good as being connected by cable. Hotels and other establishments began to advertise Wi-Fi connectivity. Phones began to connect to Wi-Fi networks, which allowed them to truly connect to the internet even without a data plan.

Soon, Wi-Fi became not just a gimmick, but a standard. First computers, and then phones, without internet connectivity began to become obsolete. Customers began to expect Wi-Fi as a standard accommodation wherever they went, free of charge even. Employers, teachers, and organizations began to assume that the people they were dealing with would have Wi-Fi, and therefore that everyone in the house would have internet access. In ten years, the prevailing attitude around me went from “I wouldn’t feel safe having my kid playing in a building with that new Wi-Fi stuff” to “I need to make sure my kid has Wi-Fi so they can do their schoolwork”. Like television, telephones, and electricity, Wi-Fi became just another thing that needed to be had in a modern home. A mundanity.

Now, that very mundanity is driving a second wave of revolution. The “Internet of Things”, as it is being called, is using the Wi-Fi networks already in place in every modern home to add more niche devices and appliances. We are told to expect that soon every major device in our house will be connected to our personal network, controllable either from our mobile devices or even by voice, and soon gesture, if not through the devices themselves, then through artificially intelligent home assistants (Amazon Echo, Google Home, and the like).

It is important to realize that this second revolution could not take place while Wi-Fi was still a novelty. No one who wouldn’t otherwise buy into Wi-Fi at the beginning would have bought it because it could also control the sprinklers, or the washing machine, or what have you. Wi-Fi had to become established as a mundane building block in order to be used as the cornerstone of this latest innovation.

Research and development may be focused on the shiny and novel, but technological progress on a species-wide scale depends just as much on this mundanity. Breakthroughs have to be not only helpful and exciting, but useful in everyday life, and cheap enough to be usable by everyday consumers. It is easy to get swept up in the exuberance of what is new, but the revolutionary changes happen when those new things are allowed to become mundane.