Automatism

I’m not sure what exactly the nightmare was that stuck with me for a solid twenty minutes after I got out of bed and before I woke up. Whatever it was, it had me utterly convinced that I was in mortal peril from my bed linens. And so I spent those twenty minutes trying desperately to remove them from me, before the cold woke me up enough to realize what I was doing and have the presence of mind to stop.

This isn’t the first time I’ve woken up in the middle of doing something in an absentminded panic. Most of those times, however, I was either in a hospital, or would be soon. There have been a handful of isolated incidents in which I have woken up, so to speak, at the tail end of a random black-out. That is, I will suddenly realize that I’m most of the way through the day, without any memory of events for some indeterminate time prior. But this isn’t waking up per se; more like my memory is suddenly snapping back into function, like a recording skipping and resuming at a random point later on.

I suppose it is strictly preferable to learn that my brain has evidently delegated its powers to operate my body such that I need not be conscious to perform tasks, as opposed to being caught unawares by whatever danger my linens posed to me that required me to get up and dismantle them from my bed with such urgency that I could not wake up first. Nevertheless I am forced to question the judgement of whatever fragment of my unconscious mind took it upon its own initiative to operate my body without following the usual channels and getting my conscious consent.

The terminology, I recognize, is somewhat vague and confusing, as I have difficulty summoning words to express what has happened and the state it has left me in.

These episodes, both these more recent ones and my longer history of breaks in consciousness, are a reminder of a fact that I try to put out of mind on a day to day basis, and yet which I forget at my own peril: namely, the acuteness of my own mortality and the fragility of my self.

After all, who, or perhaps what, am I outside of the mind which commands me? What, or who, gives orders in my absence? Are they still orders if given by a what rather than a who, or am I projecting personhood onto a collection of patterns executed by the simple physics of my anatomy? Whatever my (his? its?) goal was in disassembling my bed, I did a thorough job of it, stripping the bed far more efficiently than I could have by accident.

I might not ever find serious reason to ask these questions, except that every time so far, it has been me that has succeeded it. That is, whatever it does, it is I who has to contend with the results when I come back to full consciousness. I have to re-make the bed so that both of us can sleep. I have to explain why one of us saw fit to make a huge scuffle in the middle of the night, waking others up.

I am lucky that I live with my family, who are willing to tolerate the answer of “actually, I have no idea why I did that, because, in point of fact, it wasn’t me who did that, but rather some other being by whom I was possessed, or possibly I have started to exhibit symptoms of sleepwalking.” Or at least, they accept this answer now, for this situation, and dismiss it as harmless, because it is, at least so far. Yet I am moved to wonder where the line is.

After all, actions will have consequences, even if those actions aren’t mine. Let’s suppose for the sake of simplicity that these latest episodes are sleepwalking. If I sleepwalk, and knock over a lamp, that lamp is no more or less broken than if I’d thrown it to the ground in a rage. Moreover, the lamp has to be dealt with; its shards have to be cleaned up and disposed of, and the lamp will have to be replaced, so someone will have to pay for it. I might get away with saying I was sleepwalking, but more likely I would be compelled to help in the cleanup and replacement of the lamp.

But what if there had been witnesses who had seen me at the time, and said that they saw my eyes were open? It is certainly possible for a sleepwalker to have their eyes open, even to speak. And what if this witness believes that I was in fact awake, and fully conscious when I tipped over the lamp?

There is a relevant legal term and concept here: Automatism. It pertains to a debate surrounding medical conditions and culpability that is still ongoing and is unlikely to end any time soon. Courts and juries go back and forth on what precisely constitutes automatism, and to what degree it constitutes a legal defence, an excuse, or merely an avenue to plead down charges (e.g. manslaughter instead of murder). As near as I can tell, and without delving too deeply into the tangled web of case law, automatism is when a person is not acting as their own person, but rather like an automaton. Or, to quote Arlo Guthrie: “I’m not even here right now, man.”

This is different from insanity, even temporary insanity, or unconsciousness, for reasons that are complex and contested, and have more to do with the minutiae of law than I care to get into. But to summarize: unconsciousness and insanity have to do with denying criminal intent, which is required in most, though not all, crimes. Automatism, by subtle contrast, denies the criminal act itself, by arguing that there is not an actor by whom an act can be committed.

As an illustration, suppose an anvil falls out of the sky, cartoon style, and clobbers innocent bystander George Madison as he is standing on the corner, minding his own business, killing him instantly. Even though something pretty much objectively bad has happened, something which the law would generally seek to prevent, no criminal act per se has occurred. After all, who would be charged? The anvil? Gravity? God?

Now, if there is a human actor somewhere down the chain of causality; if the reason the anvil had been airborne was because village idiot Jay Quincy had pushed a big red button, which happened to be connected to an anvil-railgun being prepared by a group of local high schoolers for the Google science fair; then maybe there is a crime. Whether or not Jay Quincy acted with malice aforethought, or was merely negligent, or reckless, or really couldn’t have been expected to know better, would be a matter for a court to decide. But there is an actor, so there is an act, so there might be a crime.

There are many caveats to this defence. The most obvious is that automatism, like (most) insanity, is something that has to be proven by the defence, rather than the prosecution. So, to go back to our earlier example of the lamp, I would have to prove that during the episode I was sleepwalking. Merely saying that I don’t recall being myself at the time is not enough. For automatism to stick, it has to be proven, with hard evidence. Having a medical diagnosis of somnambulism and a history of sleepwalking episodes might be useful here, although it could also be used as evidence that I should have known better and taken precautions in the first place (I’ll get to this point in a minute).

Whether or not this setup is fair, forcing the defence to prove that they weren’t responsible and assuming guilt otherwise, this is the only way that the system can work. The human sense of justice demands that crimes be committed, to some degree or another, voluntarily and of free will. Either there must be an act committed that oughtn’t have been, or something that ought to have been prevented that wasn’t. Both of these, however, imply choices, and some degree of conscious free will.

Humans might have a special kind of free will, at least on our good days, that confers these rights and responsibilities on us, but science has yet to prove how this mechanism operates discretely from the physical (automatic) processes that make up our bodies. Without assuming free will, prosecutors would have to contend with proving something that has never even been proven in the abstract for each and every case. So the justice system makes a perhaps unreasonable assumption that people have free will unless there is something really obvious (and easily provable) that impedes it, like a gun to one’s head, or a provable case of sleepwalking.

There is a second caveat here that’s also been hinted at: while a person may not be responsible for their actions while in a state of automatism, they can still be responsible for putting themselves into such a state, either intentionally or negligently, which discounts the defence of automatism. So, while falling asleep behind the wheel might happen in an automatic state, the law takes the view that you should have known better than to be behind the wheel if you were at risk of falling asleep, and therefore you can still be charged. Sleepwalking does not work as a defence if, say, there was an unsecured weapon that one should’ve stowed away while conscious. Intoxication, at least when voluntary, whether from alcohol or some other drug, is almost never accepted.

This makes a kind of sense, after all. You don’t want to let people orchestrate crimes beforehand and then disclaim responsibility because they were asleep or what have you when the act occurred. On the other hand, this creates a strange kind of paradox for people with medical conditions that might result in a state of automatism at some point, and who are concerned about being liable for their actions and avoiding harm to others. After all, taking action beforehand shows that you knew something might happen and should have been prepared for it, and are therefore liable. And not taking action is obviously negligent, and makes it difficult to prove that you weren’t acting under your own volition in the first place.

Incidentally, this notion of being held responsible; in some sense, of being responsible; for actions taken by a force other than my own free will, is one of my greatest fears. The idea that I might hurt someone, not even accidentally, but as an involuntary consequence of my medical situation; that is to say, the condition of the same body that makes me myself; I find absolutely petrifying. This has already happened before, as I have accidentally hurt people while flailing as I passed in and out of a coma, and there is no reason to believe that the same thing couldn’t happen again.

So, what to do? I was hoping that delving into the law might find me some solace from this fear; that I might encounter some landmark argument that would satisfy not just some legal liability, but which I would be able to use as a means of self-assurance. Instead it has done the opposite, and I am less confident now than when I started.

The Story of Revival

Okay, I’ll admit it. Rather than writing as I normally do, the last week has been mostly dominated by me playing Cities: Skylines. It is a game which I find distinctly easy to sink many hours into. But I do want to post this week, and so I thought I would tell the story thus far of one of the cities I’ve been working on.

Twenty-odd years ago, a group of plucky, enterprising pioneers ventured forth to settle the pristine stretch of land just beside the highway and turn it into a shining city on the hill. The totalitarian government which was backing the project to build a number of planned cities had agreed to open up the land to development, and, apparently eager to prove something, granted the project effectively unlimited funds, and offered to resettle workers as soon as buildings could be constructed. Concerned that they would be punished personally for the failure of this city, settlers took to calling the city “New Roanoke”. The name stuck.

A cloverleaf interchange was built to guide supplies and new settlers towards the settlement, with a roundabout in the center of town. The roundabout in turn fed traffic down the main streets: Karl Marx Avenue, Guy Debord Boulevard, and Internationale Drive. Within a year of its establishment, New Roanoke began making strides towards its mandate to build a utopia by imposing strict sustainability guidelines on all new construction. With an infinite budget, the city government established large scale projects to entice new settlers.

With its zeppelins for transport, its high tech sustainable housing initiatives, and its massive investment in education and science, the city gained a reputation as a research haven, and began to attract eccentric futurist types that had been shunned elsewhere. New Roanoke became known as a city that was open to new ideas. A diverse populace flocked to New Roanoke, leading it through a massive boom.

Then, disaster struck, first in the form of a tornado that ripped through the industrial district, trashing the rail network that connected the city to the outside world and linked its districts to one another. The citizens responded by building a glittering new monorail system to replace it, and with renewed investment in emergency warnings and shelters. This system was put to the test when an asteroid impacted just outside the rapidly expanding suburbs of the city.

Although none were hurt, the impact was taken by the population as an ill omen. Soon enough the government had walled off the impact site, and redirected the expansion of the city to new areas. Observant citizens noticed several government agents and scientists loitering around the exclusion zone, and photographs quickly circulated on conspiracy websites detailing the construction of new secret research facilities just beyond the wall.

This story was quickly buried, however, by a wave of mysterious illness. At first it was a small thing; local hospitals reported an uptick in the number of deaths among traditionally vulnerable populations such as children, the elderly, and the disabled. Soon, however, reports began to appear of otherwise healthy individuals collapsing in the middle of their routines. The city’s healthcare network became overloaded within days.

The government clung to the notion that this massive wave of deaths was because of an infection, despite few, if any, symptoms in those who had died, and so acted to try and stop the spread of infection, closing public spaces and discouraging the use of public transport. Ports of entry, including the city’s air, sea, and rail terminals, were closed to contain the spread. Places of employment also closed, though whether from a desire to assist the government or a desire to flee the city, none can say. These measures may or may not have helped, but the one thing they did do was create traffic so horrendous that emergency vehicles, and increasingly commonly hearses, could not navigate the city.

With a mounting body count, the government tore up what open space it could find in the city to build graveyards. When these were filled, the city built crematoria to process the tens of thousands of dead. When these were overloaded, people turned to piling bodies in abandoned skyscrapers, which the government dutifully demolished when they were full.

By the time the mortality rate fell back to normal levels, between a third and a half of the population had died, and tensions in New Roanoke sat on a knife’s edge. The city government built a monument to honor those who had died in what was being called “the Great Mortality”. The opening ceremony brought visiting dignitaries from the national government, and naturally, inspired protests. These protests were initially small, but a heavy-handed police response caused them to escalate, until soon full-scale riots erupted. The city was once again paralyzed by fear and panic, as all of the tension that had bubbled under the surface during the Great Mortality boiled over.

Local police called in outside reinforcements, including the feared and hated secret police, who had so far been content to allow the city to function mostly autonomously to encourage research. The authorities forced the rioters to surrender by declaring martial law and shutting down water and power to rebellious parts of the city. With public services suspended, looters and rioters burned themselves out. When the violence began to subside, security forces marched in to restore order by force. Ad-hoc drumhead courts-martial sentenced the guilty to cruel and unusual punishments.

The secret police established a permanent office adjacent to the new courthouse, which was built in the newly-reconstructed historic district. The city was divided into districts for the purposes of administration. Several districts, mainly those in the older, richer sections of the city, and those by the river, cruise terminals, and airports, were given special status as tourist and leisure districts. The bulk of rebuilding aid was directed to these areas.

New suburbs were established outside of the main metropolis, as the national government sought to rekindle the utopian vision and spirit that had once propelled the city to great heights. The government backed the establishment of a spaceport to bring in tourists, and new research initiatives such as a medical research center, a compact particle accelerator, and an experimental fusion power plant. Life remained tightly controlled by the new government, but after a time, settled into a familiar rhythm. Although tensions remained, an influx of new citizens helped bury the memory of the troubled past.

With the completion of its last great monument, the Eden Project, the city government took the opportunity to finally settle on a name more befitting the city that had grown. The metropolis was officially re-christened as “Revival” on the thirtieth anniversary of its founding. Life in Revival is not, despite its billing, a utopia, but it is a far cry from its dystopic past. Revival is not exceptionally rich, despite being reasonably well developed and having high land values, though solvency has never been a priority for the city government.

I cannot say whether or not I would prefer to live in Revival myself. The idea of living in such a glittering antiseptic world of glass and steel and snow-white concrete, with monorails and zeppelins providing transport between particle colliders, science parks, and state of the art medical centers, where energy is clean and all waste is recycled, or treated in such a way as to have no discernible environmental impact, sounds attractive, though it also leaves me skeptical.

Thoughts on Steam

After much back and forth, I finally have a Steam account. I caved eventually because I wanted to be able to actually play my brother’s birthday present to me: the game Cities: Skylines and all of its additional downloadable content packs. I had resisted downloading Steam, which has for some time felt inevitable, for a couple of reasons. The first was practical. Our family’s main computer is now close to a decade old, and in its age does not handle all new things gracefully, or at least, does not do so consistently. Some days it will have no problem running multiple CPU-intensive games at once. Other days it promptly keels over when I so much as try to open a document.

Moreover, our internet is terrible. So terrible, in fact, that its latest speed test results mean that it does not qualify as broadband under any statutory or technical definition, despite our paying not only for broadband, but for the highest available tier of it. Allegedly this problem has to do with the geography of our neighborhood and the construction of our house. Apparently, according to our ISP, the same walls which cannot help but share our heating and air conditioning with the outside, and which allow me to hear a whisper on the far side of the house, are totally impermeable to WiFi signals.

This fear was initially confirmed when my download told me that it would not be complete for an estimated two hundred and sixty-one days. That is to say, it would take several times longer to download than it would for me to fly to the game developer’s headquarters in Sweden and get a copy on a flash drive. Or even to take a leisurely sea voyage.

This prediction turned out, thankfully, to be wrong. The download took a mere five hours; the vast majority of the progress was made during the last half hour, when I was alone in the house. This is still far longer than the fifteen minutes or less that I’m accustomed to when installing from a CD. I suppose I ought to cut it some slack here, given that I didn’t have to physically go somewhere to purchase the CD.

My other point of contention with Steam is philosophical. Steam makes it abundantly clear in its terms and conditions (which, yes, I do read, or at least skim, as a general habit) that when you are paying them money to play games, you aren’t actually buying anything. At no point do you actually own the game that you are nominally purchasing. The legal setup here is terribly complicated, and given its novelty, not crystal clear in its definitions and precedents, especially with the variations in jurisdictions that come with operating on the Internet. But while it isn’t clear what Steam is, Steam has made it quite clear what it isn’t. It isn’t selling games.

The idea of not owning the things that one buys isn’t strictly new. Software has never really been for sale in the old sense. You don’t buy Microsoft Word; you buy a license to use a copy of it, even if you were receiving it on a disk that was yours to own. Going back further, while you might own the physical token of a book, you don’t own the words in it, inasmuch as they are not yours to copy and sell. This is a consequence of copyright and related concepts of intellectual property, which are intended to assist creators by granting them a temporary monopoly on their creations’ manufacture and sale, so as to incentivize more good creative work.

Yet this last example pulls at a loose thread: I may not own the story, but I do own the book. I may not be allowed to manufacture and sell new copies, but I can dispose of my current copy as I see fit. I can mark it, alter it, even destroy it if I so choose. I can take notes and excerpts from it so long as I am not copying the book wholesale, and I can sell my single copy of the book to another person for whatever price the two of us may agree upon, the same as any other piece of property. Software is not like this, though a strong argument can be made that it is only very recently that this new status quo has become practically enforceable.

Indeed, for as long as software has been sold in stores by means of disks and flash drives, it has been closer to the example of the classic book. For, as long as I have my CD, and whatever authentication key might come with it, I can install the contents wherever I might see fit. Without Internet connectivity to report back on my usage, there is no way for the publisher even to know whether or not I am using their product, let alone whether I am using it in their intended manner. Microsoft can issue updates and changes, but with my CD and non-connected computer, I can keep my version of their software running how I like it forever.

Steam, however, takes this mindset that has existed in theory to its practical conclusion. You do not own the games that you pay for. This is roughly equivalent to the difference between buying a car and chartering a limo service. Now, there’s nothing inherently wrong with this approach, but it is a major shift. There is of course the shift in power from consumers to providers: rather than you getting to dispose of your games as you see fit, you can have them revoked by Steam if you misbehave or cheat. This is unnerving, especially to one such as myself who is accustomed to having more freedom with the things I buy (that’s why I buy them: to do with as I please), but not as interesting as the larger implications on the notion of property as a whole.

I don’t think the average layman knows or even cares about the particulars of license transfers. Ask such a layman what Steam does, and they’ll probably answer that they sell video games, in the same way that iTunes sells music. The actual minutiae of ownership are a distant second to the point of use. I call my games, and digital music, and the information on my Facebook feed mine, even though I don’t own them by any stretch of the imagination.

This use need not be exclusive either, so long as it never infringes on my own plans. After all, if there were a hypothetical person listening to my music and playing my games only precisely when I’m not, I might never notice.

So far I have referred mostly to digital goods, and sharing as it pertains to intellectual property. But this need not be the case. Ridesharing, for example, is already transforming the idea of owning and chartering a vehicle. On a more technical level, this is how mortgages, banknotes, and savings accounts have worked for centuries, in order to increase the money supply and expand the economy. Modern fiat currency, it will be seen, is not so much a commodity that is discretely owned as one whose value is shared and assigned between its holder, society, and the government backing it. This quantum state is what allows credit and debt, which permit modern economies to function and flourish.

This shift in thinking around ownership certainly has the capability to be revolutionary, shifting prices and thinking around these new goods. Whether or not it will remains to be seen. Certainly it remains to be seen whether this change will be a net positive for consumers as well as the economy as a whole.

Cities: Skylines seems to be a fun game that our family computer can just barely manage to play. At the moment, this is all that is important to me. Yet I will be keeping an eye on how, if at all, getting games through Steam influences my enjoyment, for good or for ill.

Thanksgivings

So Australia, where I did most of my growing up, doesn’t have a Thanksgiving holiday. Not even like Canada, where it’s on a different day. Arbor Day was a bigger deal at my school than American Thanksgiving. My family tried to celebrate, but between school schedules that didn’t recognize our traditions, time differences that made watching the Macy’s parade and football game on the day impossible, and a general lack of turkey and pumpkin pie in stores, the effect was that we didn’t really have Thanksgiving in the same way it is portrayed.

This is also at least part of the reason that I have none of the compunctions of my neighbors about commencing Christmas decorations, nor wearing holiday apparel, as soon as the leaves start to change in September. Thanksgiving is barely a real holiday, and Halloween was something people barely decorated for, so neither of those things acted as boundaries for the celebration of Christmas, which, in contrast to the other two, was heavily celebrated and became an integral part of my cultural identity.

As a result, I don’t trace our Thanksgiving traditions back hundreds of years, up the family tree through my mother’s side to our ancestor who signed the Mayflower Compact, and whose name has been passed down through the ages to my brother. Rather, I trace our traditions back less than a decade to my first year in American public school, when my teacher made our class go through a number of stereotypical traditions like making paper turkeys by tracing our hands, and writing down things we were thankful for. Hence: what I’m thankful for this year.

First, as always, I am thankful to be alive. This sounds tacky and cheap, I know, so let me clarify. I am thankful to be alive despite my body which does not keep itself alive. I am thankful to have been lucky enough to have beaten the odds for another year. I am acutely aware that things could have quite easily gone the other way.

Perhaps it is a sad reflection that my greatest joy of this year is to have merely gotten through it. Maybe. But I cannot change the facts of my situation. I cannot change the odds I face. I can only celebrate overcoming them. This victory of staying alive is the one on which all others depend. I could not have other triumphs, let alone celebrate and be thankful for them without first being sufficiently not-dead to achieve and enjoy them.

I’m thankful to be done with school. I’m glad to have it behind me. While it would be disingenuous to say that high school represented the darkest period in my life; partly because it is too soon to say, but mostly because those top few spots are generally dominated by the times I nearly died, was in the ICU, etcetera; there can be no denying that I hated high school. Not just the actual building, or having to go there; I hated my life as a high school student. I didn’t quite realize the depths of my unhappiness until I was done, and realized that I actually didn’t hate my life as a default. So I am thankful to be done and over with that.

I am thankful that I have the resources to write and take care of myself without also having to struggle to pay for the things I need to live. I am immensely thankful that I am able to sequester myself and treat my illnesses without having to think about what I am missing. In other words, I am thankful for being able to be unable to work. I am thankful that I have enough money, power, and privilege to stand up for myself, and to have others stand up for me. I am aware that I am lucky not only to be alive, but to have access to a standard of care that makes my life worth living. I know that this is an advantage that is far from universal, even in my own country. I cannot really apologize for this, as, without these advantages, it is quite likely that I would be dead, or in such constant agony and anguish that I would wish I was. I am thankful that I am neither of those things.

I am thankful that these days, I am mostly on the giving end of the charitable endeavors that I have recently been involved in. For I have been on the receiving end before. I have been the simultaneously heartbreaking and heartwarming image of the poor, pitiful child, smiling despite barely clinging to life, surrounded by the prayer blankets, get well cards, books, and other care package staples that my friends and relations were able to muster, rush-shipped because it was unclear whether they would arrive “in time” otherwise. I defied the stereotype only insofar as I got better. I am doubly thankful, first that I am no longer in that unenviable position, and second, that I am well enough to begin to pay back that debt.

The Lego Census

So the other day I was wondering about the demographics of Lego minifigures. I’m sure we’re all at least vaguely aware of the fact that Lego minifigs tend to be, by default, adult, male, and yellow-skinned. This wasn’t terribly worthy of serious thought back when only a handful of different minifigure designs existed. Yet nowadays Lego has thousands, if not millions, of different minifigure permutations. Moreover, the total number of minifigures in circulation is set to eclipse the number of living humans within a few years.

Obviously, even with a shift towards trying to be more representative, the demographics of Lego minifigures are not an accurate reflection of the demographics of humankind. But just how out of alignment are they? Or, to ask it another way, could the population of a standard Lego city exist in real life without causing an immediate demographic crisis?

This question has bugged me enough that I decided to conduct an informal study based on a portion of my Lego collection, or rather, a portion of it that I reckon is large enough to be vaguely representative of a population. I have chosen to conduct my counts based on the central district of the Lego city that exists in our family basement, on the grounds that it includes a sizable population from across a variety of different sets.

With that background in mind, I have counted roughly 154 minifigures. The area of survey is the city central district, which for our purposes means the largest tables with the greatest number of buildings and skyscrapers, and so presumably the highest population density.

Because Lego minifigures don’t have numerical ages attached to them, I counted ages by dividing minifigures into four categories. The categories are: Children, Young Adults, Middle Aged, and Elderly. Obviously these categories are qualitative and subject to some interpretation. Children are fairly obvious because of their differently sized figures. An example of each adult category follows.

The figure on the left would be a young adult. The one in the middle would be classified as middle aged, and the one on the right, elderly.

Breakdown by age

Children (14)
Lego children are the most distinct category because, in addition to childish facial features and clothes, they are given shorter leg pieces. This is the youngest category, as Lego doesn’t include infant Lego minifigures in their sets. I would guess that this age includes years 5-12.

Young Adults (75)
The young adult category encompasses a fairly wide range, from puberty to early middle age. This group is the largest, partially because it includes the large contingent of conscripts serving in the city. An age range would be roughly 12-32.

Middle Aged (52)
Includes visibly older adults that do not meet the criteria for elderly. This group encompasses most of the city’s administration and professionals.

Elderly (13)
The elderly are those that stand out for being old, including those with features such as beards, wrinkled skin, or gray or white hair.

Breakdown by industry

Second is occupations. Again, since minifigures can’t exactly give their own occupations, and since most jobs happen indoors where I can’t see, I was forced to make some guesses based on outfits and group them into loose collections.

27 Military
15 Government administration
11 Entertainment
9 Law enforcement
9 Transport / Shipping
9 Aerospace industries
8 Heavy industry
6 Retail / services
5 Healthcare
5 Light Industry

An unemployment rate would be hard to gauge, because most of the time the unemployment rate is adjusted to omit those who aren’t actively seeking work, such as students, retired persons, disabled persons, homemakers, and the like. Unfortunately for our purposes, a minifigure who is transitionally unemployed looks pretty much identical to one who has decided to take an early retirement.

What we can take a stab at is a workforce participation rate. This is a measure of what percentage of the total number of people eligible to be working are doing so. So, for our purposes, this means tallying the total number of people assigned jobs and dividing by the total number of people capable of working, which we will assume means everyone except children. This gives us a ballpark of about 74%, decreasing to 68% if we exclude the military to look only at the civilian economy. Either of these numbers would be somewhat high, but not inexplicably so.
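For anyone who wants to check my arithmetic, here is a rough back-of-the-envelope sketch in Python. The counts are simply the ones from my survey above; the assumption that everyone except children is eligible to work is my own, not anything official.

```python
# Back-of-the-envelope participation arithmetic using the survey counts above.
total_minifigures = 154
children = 14

# Occupation tallies from the "Breakdown by industry" list.
occupations = {
    "Military": 27,
    "Government administration": 15,
    "Entertainment": 11,
    "Law enforcement": 9,
    "Transport / Shipping": 9,
    "Aerospace industries": 9,
    "Heavy industry": 8,
    "Retail / services": 6,
    "Healthcare": 5,
    "Light Industry": 5,
}

employed = sum(occupations.values())       # 104 figures with an identifiable job
eligible = total_minifigures - children    # assume everyone except children could work

participation = employed / eligible
civilian_participation = (employed - occupations["Military"]) / (eligible - occupations["Military"])

print(f"Overall participation:  {participation:.0%}")          # ~74%
print(f"Civilian participation: {civilian_participation:.0%}")  # ~68%
```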

Breakdown by sex

With no distinction between the physical forms of Lego bodies, the differences between the sexes in minifigures are based purely on cosmetic details such as hair type, the presence of eyelashes, makeup, or lipstick on a face, and dresses. This is obviously based on stereotypes, and makes it tricky to tease apart edge cases. Is the figure with poorly-detailed facial features male or female? What about that faceless conscript marching in formation with their helmet and combat armor? Does dwelling on this topic at length make me some kind of weirdo?

The fact that Lego seems to embellish characters that are female with stereotypical traits suggests that the default is male. Operating on this assumption gives you somewhere between 50 and 70 minifigures with at least one distinguishing female trait depending on how particular you get with freckles and other minute facial details.
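Spelled out, the ratio arithmetic is simple enough; this is just a sketch using my own tallies and the default-male assumption described above.

```python
# Sex-ratio bounds implied by the counts above: 154 total figures, of which
# somewhere between 50 and 70 show at least one stereotypically female trait.
# Every figure without such a trait is assumed male by default.
total_figures = 154

for female in (50, 70):
    male = total_figures - female
    print(f"{female} female figures -> male-to-female ratio of {male / female:.2f}:1")

# 50 female figures -> male-to-female ratio of 2.08:1
# 70 female figures -> male-to-female ratio of 1.20:1
```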

That’s a male to female ratio somewhere between 2.08:1 and 1.2:1. The latter would be barely within the realm of ordinary populations, and even then would be highly suggestive of some kind of artificial pressure such as sex selective abortion, infanticide, widespread gender violence, a lower standard of medical care for girls, or some kind of widespread exposure, whether to pathogens or pollutants, that causes a far higher childhood fatality rate for girls than would be expected. And here you were thinking that a post about Lego minifigures was going to be a light and gentle read.

The former ratio is completely unnatural, though not unheard of in real life under certain contrived circumstances: certain South Asian and Middle Eastern countries have at times had male-to-female ratios as high as two owing to the presence of large numbers of guest workers. In such societies, female breadwinners, let alone women traveling alone to foreign countries to send money home, are all but unheard of.

Such an explanation might be conceivable given a look at the lore of the city. The city is indeed a major trade port and center of commerce, with a non-negligible transient population, and it also hosts a sizable military presence. By a similar token, I could simply say that there are more people I’m not counting, hiding inside all those skyscrapers, who make everything come out even. Except this kind of narrative explanation dodges the question.

The straight answer is that, no, Lego cities are not particularly accurate reflections of our real life cities. This lack of absolute realism does not make Lego bad toys. Nor does it detract from their value as an artistic and storytelling medium, nor from the benefits of play therapy for patients with neuro-cognitive symptoms, my original reason for starting my Lego collection.

 

The War on Kale

I have historically been anti-kale. Not that I don’t approve of the taste of kale. I eat kale in what I would consider fairly normal amounts, and have done so even while denouncing kale. My enmity towards kale is not towards the species Brassica oleracea (Acephala cultivar group). Rather, my hostility is towards that set of notions and ideas for which kale has become a symbol and shorthand in recent years.

In the circles which I frequent, at least, insofar as kale is known of, it is known as a “superfood”, which, I am to understand, means that it is exceptionally healthy. It is touted, by those who are inclined to tout their choices in vegetables, as being an exemplar of the kinds of foods that one ought to eat constantly. That is to say, it is touted as a staple for diets.

Now, just as I have nothing against kale, I also have nothing against diets in the abstract. I recognize that one’s diet is a major factor in one’s long term health, and I appreciate the value of a carefully tailored, personalized diet plan for certain medical situations as a means to an end.

In point of fact, I am on one such plan. My diet plan reflects my medical situation, which seems to have the effect of keeping me always on the brink of being clinically underweight, and far below the minimum weight which my doctors believe is healthy for me. My medically-mandated diet plan calls for me to eat more wherever possible; more food, more calories, more fats, proteins, and especially carbohydrates. My diet does not restrict me from eating more, but prohibits me from eating less.

Additionally, because my metabolism and gastrointestinal system are so capricious as to prevent me from simply eating more of everything without becoming ill and losing more weight, my diet focuses on having me eat the highest density of calories that I can get away with. A perfect meal, according to my dietician, nutritionist, endocrinologist, and gastroenterologist, would be something along the lines of a massive double burger (well done, per my immunologist’s request), packed with extra cheese, tomatoes, onions, lots of bacon, and a liberal helping of sauce, with a sizable portion of fries, and a thick chocolate malted milkshake. Ideally, I would have this at least three times a day, and preferably a couple more snacks throughout the day.

Here’s the thing: out of all the people who will eventually read this post, only a very small proportion will ever need to be on such a diet. An even smaller proportion will need to stay on this diet outside of a limited timeframe to reach a specific end, such as recovering from an acute medical issue, or bulking up for some manner of physical challenge. This is fine. I wouldn’t expect many other people to be on a diet tailored by a team of medical specialists precisely for me. Despite the overly simplistic terms used in public school health and anatomy classes, every body is subtly (or in my case, not so subtly) different, and has accordingly different needs.

Some people, such as myself, can scarf 10,000 calories a day for a week with no discernible difference in weight from what it would have been on 2,000. Other people can scarcely eat an entire candy bar without having to answer for it at the doctor’s office six months later. Our diets will, and should, be different to reflect this fact. Moreover, neither the composition of our respective diets, nor particularly their effectiveness, is at all indicative of some kind of moral character.

This brings me back to kale. I probably couldn’t have told you what kale was before I had fellow high schoolers getting in my face about how kale was the next great superfood, and how if only I were eating more of it, maybe I wouldn’t have so many health problems. Because obviously turning from the diet plan specifically designed by my team of accredited physicians in favor of the one tweeted out by a celebrity is the cure that centuries of research and billions in funding have failed to unlock.

What? How dare I doubt its efficacy? Well obviously it’s not going to “suppress autoimmune activation”, whatever that means, with my kind of attitude. No, of course you know what I’m talking about. Of course you know my disease better than I do. How dare I question your nonexistent credentials? Why, just last night you watched a five minute YouTube video with clip-art graphics showing how this diet = good and others = bad. Certainly that trumps my meager experience of a combined several months of direct instruction and training from the best healthcare experts in their respective fields, followed by a decade of firsthand self-management, hundreds of hours of volunteer work, and more participation in clinical research than most graduate students. Clearly I know nothing. Besides, those doctors are in the pockets of big pharma; the ones that make those evil vaccines and mind control nanobots.

I do not begrudge those who seek to improve themselves, nor even those who wish to help others by the same means through which they have achieved success themselves. However, I cannot abide those who take their particular diet as the new gospel, and try to see it implemented as a universal morality. Nor can I stand the insistence of those with no medical qualifications telling me that the things I do to stay alive, including my diet; the things that they have the distinct privilege of choice in; are not right for me.

I try to appreciate the honest intentions here where they exist, but frankly I cannot put up with someone who has never walked in my shoes criticizing my life support routine. My medical regimen is not a lifestyle choice any more than breathing is, and I am not going to change either of those things on second-hand advice received in a yoga lesson, or a TED talk, or even a public school health class. I cannot support a movement that calls for the categorical elimination of entire food groups, nor a propaganda campaign against the type of restaurant that helps me stick to my diet, nor the taxation of precisely the kind of foodstuffs which I have been prescribed by my medical team.

With no other option, I can do nothing but vehemently oppose this set of notions pertaining to the new cult of the diet, as I have sometimes referred to it, and its most prominent and recognizable symbol: kale. Indeed, in collages and creative projects in which others have encouraged me to express myself, the phrases “down with kale” and “death to kale”, with accompanying images of scratched-out pictures of kale and other vegetables, have featured prominently. I have one such collage framed and mounted in my bedroom as a reminder of all the wrongs which I seek to right.

This is, I will concede, something of a personal prejudice. Possibly even a stereotype. The kind of people that seem most liable to pierce my bubble and confront me over my diet tend to be the self-assured, zealous sort, and so it seems quite conceivable that I may be experiencing some kind of selection bias that causes me to see only the absolute worst in my interlocutors. It is possible that, in my ideo-intellectual intifada against kale, I have thrown the baby out with the bathwater. In honesty, even if this were true, I probably wouldn’t apologize, on the grounds that what I have had to endure has been so upsetting that, with the stakes being my own life and death as they are, my reaction has been not only justified, but correct.

As a brief aside, there is, I am sure, a great analogy to be drawn here, and an even greater deal of commentary to be made on this last train of thought as a reflection of the larger modern socio-political situation; refusing to acknowledge wrongdoing despite being demonstrably in the wrong. Such commentary might even be more interesting and relevant than the post I am currently writing. Nevertheless such musings are outside the scope of this particular post, though I may return to them in the future.

So my position has not changed. I remain convinced that all of my actions have been completely correct. I have not, and do not plan, to renounce my views until such time as I feel I have been conclusively proven wrong, which I do not feel has happened. What has changed is I have been given a glimpse at a different perspective.

What happened is that someone close to me received a new diagnosis of a disease close in pathology to one that I have, and which I am also at higher risk for, which prevents her from eating gluten. This person, who will remain nameless for the purposes of this post, is as good as a sister to me, and the rest of her immediate family are like my own. We see each other at least as often as I see other friends or relations. Our families have gone on vacation together. We visit and dine together regularly enough that any medical issue that affects their kitchen also affects our own.

Now, I try to be an informed person, and prior to my friend’s diagnosis, I was at least peripherally aware of the condition with which she now has to deal. I could have explained the disease’s pathology, symptoms, and treatment, and I probably could have listed a few items that did and did not contain gluten, although this last one is more a consequence of gazing forlornly at the shorter lines at gluten-free buffets at the conferences which I attended than a genuine intent to learn.

What I had not come to appreciate was how difficult it was to find food that was not only free from gluten in itself, but completely safe from any trace of cross contamination, which, I have learned, does make a critical difference. Many brands and restaurants offer items that are labeled as gluten free in large print, but then in smaller print immediately below disclaim all responsibility for the results of the actual assembly and preparation of the food, and indeed, for the integrity of the ingredients received from elsewhere. This is, of course, utterly useless.

Where I have found such needed assurances, however, is from those for whom this purity is a point of pride. These are the suppliers that also proudly advertise that they do not stock items containing genetically modified foodstuffs, or any produce that has been exposed to chemicals. These are the people who proclaim the supremacy of organic food and vegan diets. They are scrupulous about making sure their food is free of gluten not just because it is necessary for people with certain medical conditions, but as a matter of moral integrity. To them these matters are not only practical but ethical. In short, these are kale supporters.

This puts me in an awkward position intellectually. On the one hand, the smug superiority with which these kale supporters denounce technologies that have great potential to decrease human hardship based on pseudoscience, and out of dietary pickiness as outlined above, is grating at best. On the other hand, they are among the only people who seem to be invested in providing decent quality gluten free produce which they are willing to stand behind, and though I would trust them on few other things, I am at least willing to trust that they have been thorough in their compulsiveness.

Seeing from this new angle the results of this attitude I still detest has forced me to reconsider my continued denouncements. The presence of a niche gluten-free market, which is undoubtedly a recent development, has, alas, not been driven by increased sensitivity to those with specific medical dietary restrictions, but by the fact that in this case my friend’s medical treatment just so happens to align with a subcategory of fad diet. That this niche market exists is a good thing, and it could not exist without kale supporters. The very pickiness that I malign has paved the way for a better quality of life for my comrades who cannot afford to be otherwise. The evangelical attitude that I rage against has also successfully demanded that the food I am buying for my friend is safe for them to eat.

I do not yet think that I have horribly misjudged kale and its supporters. But regardless, I can appreciate that in this matter, they have a point. And I consider it more likely now that I may have misjudged kale supporters on a wider front, or at least, that my impression of them has been biased by my own experiences. I can appreciate that in demanding a market for their fad diets, that they have also created real value.

I am a stubborn person by nature once I have made up my mind, and so even these minor and measured concessions are rather painful. But fair is fair. Kale has proven that it does have a purpose. And to that end I think it is only fitting that I wind down my war on kale. This is not a total cessation of all military action. There are still plenty of nutritional misconceptions to dispel, and bad policies to be refuted, and besides that I am far too stubborn to even promise with a straight face that I’m not going to get into arguments about a topic that is necessarily close to my heart. But the stereotype which I drew up several years ago as a common thread between the people who would pester me about fad diets and misconceptions about my health has become outdated and unhelpful. It is, then, perhaps time to rethink it.

Technological Milestones and the Power of Mundanity

When I was fairly little, probably seven or so, I devised a short list of technologies based on what I had seen on television that I reckoned were at least plausible, and which I earmarked as milestones of sorts to measure how far human technology would progress during my lifetime. I estimated that if I was lucky, I would be able to have my hands on half of them by the time I retired. Delightfully, almost all of these have in fact already been achieved, less than fifteen years later.

Admittedly, all of the technologies that I picked were far closer than I had envisioned at the time. Living in Australia, which seemed to be the opposite side of the world from where everything happened, and living outside of the truly urban areas of Sydney which, as a consequence of international business, were kept up to date, it often seems that even though I technically grew up after the turn of the millennium, I was raised in a place and culture that was closer to the 90s.

For example, as late as 2009, even among adults, not everyone I knew had a mobile phone. Text messaging was still “SMS”, and was generally regarded with suspicion and disdain, not least of all because not all phones were equipped to handle it, and not all phone plans included provisions for receiving it. “Smart” phones (still two words) did exist on the fringes; I knew exactly one person who owned an iPhone, and two who owned a BlackBerry, at that time. But having one was still an oddity. Our public school curriculum was also notably skeptical, bordering on technophobic, about the rapid shift towards broadband and constant connectivity, diverting much class time to decrying the evils of email and chat rooms.

These were the days when it was a moral imperative to turn off your modem at night, lest the hacker-perverts on the godless web wardial a backdoor into your computer, which weighed as much as the desk it was parked on, or your computer overheat from being left on, and catch fire (this happened to a friend of mine). Mice were wired and had little balls inside them that you could remove in order to sabotage them for the next user. Touch screens might have existed on some newer PDA models, and on some gimmicky machines in the inner city, but no one believed that they were going to replace the workstation PC.

I chose my technological milestones based on my experiences in this environment, and on television. Actually, since most of our television was the same shows that played in the United States, only a few months behind their stateside premiere, they tended to be more up to date with the actual state of technology, and depictions of the near future which seemed obvious to an American audience seemed terribly optimistic and even outlandish to me at the time. So, in retrospect, it is not surprising that after I moved back to the US, I saw nearly all of my milestones commercially available within half a decade.

Tablet Computers
The idea of a single surface interface for a computer in the popular consciousness dates back almost as far as futuristic depictions of technology itself. It was an obvious technological niche that, despite numerous attempts, some semi-successful, was never truly cracked until the iPad. True, plenty of tablet computers existed before the iPad. But these were either clunky beyond use, incredibly fragile to the point of being unusable in practical circumstances, or horrifically expensive.

None of them were practical for, say, completing homework for school on, which at seven years old was kind of my litmus test for whether something was useful. I imagined that if I were lucky, I might get to go tablet shopping when it was time for me to enroll my own children. I could not imagine that affordable tablet computers would be widely available in time for me to use them for school myself. I still get a small joy every time I get to pull out my tablet in a productive niche.

Video Calling
Again, this was not a bolt from the blue. Orwell wrote about his telescreens, which amounted to two-way television, in the 1940s. By the 70s, NORAD had developed a fiber-optic based system whereby commanders could conduct video conferences during a crisis. By the time I was growing up, expensive and clunky video teleconferences were possible. But they had to be arranged and planned, and often required special equipment. Even once webcams started to appear, lessening the equipment burden, you were still often better off calling someone.

Skype and FaceTime changed that, spurred on largely by the appearance of smartphones, and later tablets, with front-facing cameras, which were designed largely for this exact purpose. Suddenly, a video call was as easy as a phone call; in some cases easier, because video calls are delivered over the Internet rather than requiring a phone line and number (something which I did not foresee).

Wearable Technology (in particular smartwatches)
This was the one that I was most skeptical of, as I got this mostly from the Jetsons, a show which isn’t exactly renowned for realism or accuracy. An argument can be made that this threshold hasn’t been fully crossed yet, since smartwatches are still niche products that haven’t caught on to the same extent as either of the previous items, and insofar as they can be used for communication like in The Jetsons, they rely on a smartphone or other device as a relay. This is a solid point, to which I have two counterarguments.

First, these are self-centered milestones. The test is not whether an average Joe can afford and use the technology, but whether it has an impact on my life. And indeed, my smartwatch, which was affordable enough and functional enough for me to use in an everyday role, does have a noticeable positive impact. Second, while smartwatches may not be as ubiquitous as once portrayed, they do exist, and are commonplace enough to be largely unremarkable. The technology exists and is widely available, whether or not consumers choose to use it.

These were my three main pillars of the future. Other things which I marked down include such milestones as:

Commercial Space Travel
Sure, SpaceX and its ilk aren’t exactly the same as having shuttles to the ISS departing regularly from every major airport, with connecting service to the moon. You can’t have a romantic dinner rendezvous in orbit, gazing at the unclouded stars on one side and the fragile planet Earth on the other. But we’re remarkably close. Private-sector delivery to orbit is now cheaper and more ubiquitous than public-sector delivery (admittedly this has more to do with government austerity than with an unexpected boom in the aerospace sector).

Large-Scale Remotely Controlled or Autonomous Vehicles
This one came from Kim Possible, specifically an episode in which our intrepid heroes reached their remote destination in a borrowed military helicopter flown remotely from a home computer. Today, we have remotely piloted military drones and early self-driving vehicles. This milestone hasn’t been fully met yet, since I’ve never ridden in a self-driving vehicle myself, but it is on the horizon, and I eagerly await it.

Cyborgs
I did guess that we’d have technologically altered humans, both for medical purposes and as part of the road to the enhanced super-humans that rule in movies and television. I never guessed at seven that, in less than a decade, I would be one of them, relying on networked machines and computer chips to keep my biological self functioning, plugging into the wall to charge my batteries when they run low, and studiously avoiding magnets, EMPs, and water unless I have planned ahead and am wearing the correct configuration and armor.

This last one highlights an important factor. All of these technologies were, or at least, seemed, revolutionary. And yet today they are mundane. My tablet today is only remarkable to me because I once pegged it as a keystone of the future that I hoped would see the eradication of my then-present woes. This turned out to be overly optimistic, for two reasons.

First, it assumed that I would be happy as soon as the things that bothered me then no longer did, which is a fundamental misunderstanding of human nature. Humans do not remain happy the way that an object in motion remains in motion until acted upon. Or perhaps it is that, as creatures of constant change and recontextualization, we are always undergoing so much change that remaining happy without constant effort is exceedingly rare. Humans always find more problems that need to be solved. On balance, this is a good thing, as it drives innovation and advancement. But it makes living life as a human rather, well, wanting.

Which lays the groundwork nicely for the second reason: novelty is necessarily fleeting. The advanced technology that today marks the boundary of magic will tomorrow be a mere gimmick, and after that, a mere fact of life. Computers hundreds of millions of times more powerful than those used to wage World War II and send men to the moon are so ubiquitous that they are considered a basic necessity of modern life, like clothes or literacy, both of which have millennia of incremental refinement and scientific striving behind them.

My picture of the glorious shining future assumed that the things which seemed amazing at the time would continue to amaze once they had become commonplace. This isn’t a wholly unreasonable extrapolation from the available data, even if it is childishly optimistic. Yet it is self-contradictory. The only way that such technologies could be harnessed to their full capacity would be for them to become so widely available and commonplace that product developers could conceivably integrate them into every possible facet of life. This both requires and establishes a certain level of mundanity about the technology that will eventually break the spell of novelty.

In this light, the mundanity of the technological breakthroughs that define my present life, relative to the imagined future of my past self, is not a bad thing. Disappointing, yes; and certainly it is a sobering reflection on the ungrateful character of human nature. But this very mundanity that breaks our predictions of the future (or at least, our optimistic predictions) is an integral part of the process of progress. Not only does this mundanity constantly drive us to reach for ever greater heights by making us utterly irreverent of those we have already achieved, but it allows us to keep evolving our current technologies to new applications.

Take, for example, wireless internet. I remember a time, or at least a place, when wireless internet did not exist for practical purposes. “Wi-Fi” as a term hadn’t caught on yet; in fact, I remember the publicity campaign that was undertaken to educate our technologically backwards selves about what the term meant, about how it wasn’t dangerous, and about how it would make all of our lives better, since we could connect to everything. Of course, at that time I didn’t know anyone outside of my father’s office who owned a device capable of connecting to Wi-Fi. But that was beside the point. It was the new thing. It was a shiny, exciting novelty.

And then, for a while, it was a gimmick. Newer computers began to advertise their Wi-Fi antennae, boasting that it was as good as being connected by cable. Hotels and other establishments began to advertise Wi-Fi connectivity. Phones began to connect to Wi-Fi networks, which allowed phones to truly connect to the internet even without a data plan.

Soon, Wi-Fi became not just a gimmick, but a standard. Computers, and later phones, that lacked internet access began to become obsolete. Customers began to expect Wi-Fi as a standard accommodation wherever they went, for free even. Employers, teachers, and organizations began to assume that the people they were dealing with would have Wi-Fi, and therefore that everyone in the house would have internet access. In ten years, the prevailing attitude around me went from “I wouldn’t feel safe having my kid playing in a building with that new Wi-Fi stuff” to “I need to make sure my kid has Wi-Fi so they can do their schoolwork”. Like television, telephones, and electricity, Wi-Fi became just another thing that needed to be had in a modern home. A mundanity.

Now, that very mundanity is driving a second wave of revolution. The “Internet of Things”, as it is being called, uses the Wi-Fi networks that are already in place in every modern home to add more niche devices and appliances. We are told to expect that soon every major device in our house will be connected to our personal network, controllable either from our mobile devices, or even by voice, and soon by gesture, if not through the devices themselves, then through artificially intelligent home assistants (Amazon Echo, Google Home, and the like).

It is important to realize that this second revolution could not take place while Wi-Fi was still a novelty. No one who wasn’t already sold on Wi-Fi would have bought into it at the beginning just because it could also control the sprinklers, or the washing machine, or what have you. Wi-Fi had to become established as a mundane building block in order to be used as the cornerstone of this latest innovation.

Research and development may be focused on the shiny and novel, but technological progress on a species-wide scale depends just as much on this mundanity. Breakthroughs have to be not only helpful and exciting, but useful in everyday life, and cheap enough to be usable by everyday consumers. It is easy to get swept up in the exuberance of what is new, but the revolutionary changes happen when those new things are allowed to become mundane.

On Horror Films

Recently, I was confronted with a poll regarding my favorite horror film. This was only slightly awkward, as, of the films listed as options, I had seen… none.

Broadly speaking, I do not see fit to use my personal time to make myself experience negative emotions. Also, since the majority of horror films tend to focus on narrow, contrived circumstances and to be driven by a supernatural, usually vaguely biblical demon, I find it difficult to suspend disbelief and buy into the premise. To me, the far better horror experiences have been disaster films, in particular those like Threads or By Dawn’s Early Light, as well as certain alternate history films, in particular the HBO film Fatherland, which did more to get across the real horror of the Holocaust and genocide to thirteen-year-old me than six months of social studies lessons.

To wit, the only bona fide horror film I’ve seen was something about Satan coming to haunt elevator-goers for their sins. Honestly, I thought it was exceedingly mediocre at best. However, I saw this film at a birthday party for a friend of mine, the confidant of a previous crush. I had come to know this girl after she transferred to our public middle school from the local Catholic school. We saw this film at her birthday party, which was, in the manner of things, perceived as the very height of society, in the presence of an overwhelmingly female audience, most of whom my friend had known from St. Mary’s. Apparently to them the film was excellent, as many professed to be quite scared, and it remained the subject of conversation for some months afterward.

I have come to develop three alternative hypotheses for why everyone but myself seemed to enjoy this distinctly mediocre film. The first is that I am simply not a movie person and was oblivious to the apparent artistic merit of this film. This would fit existing data, as I have similarly ambiguous feelings towards many types of media my friends generally seem to laud. This is the simplest explanation, and thus the null hypothesis which I have broadly accepted for the past half-decade or so.

The second possible explanation is that, since the majority of the audience except for myself was Catholic, attended Catholic Church, and had gone to the Catholic primary school in our neighborhood, and because the film made several references to Catholic doctrine and literature (to the point that several times my friend had to lean over and whisper the names and significance of certain prayers or incantations), the film carried extra weight for everyone besides me. Perhaps I lacked the necessary background context to understand what the creators were trying to reach for. Perhaps my relatively secular and avowedly skeptical upbringing had desensitized me to this specific subset of supernatural horror, while the far more mundane terrors of war, genocide, and plague fill much the same role in my psyche.

The third alternative was suggested to me by a male compatriot, who was not in attendance but was familiar with all of the attendees, several years after the fact, and subsequently corroborated by testimony from both male and female attendees. The third possibility is that my artistic assessment at the time was not only entirely on point, but was the silent majority opinion, yet that this opinion was suppressed consciously or unconsciously for social reasons. Perhaps, it has been posited to me, the appearance of being scared was for my own benefit? Going deeper, perhaps some or all of the motivation to see a horror film at a party of both sexes was not entirely platonic?

It is worth distinguishing, at this point, the relative numbers and attitudes of the various sexes. At this party, there were a total of about twenty teenagers. Of this number, there were three or four boys (my memory fails me as to exact figures), including myself. I was on the guest list from the beginning as a matter of course; I had been one of the birthday girl’s closest friends since she arrived in public school, and perhaps more importantly, her parents had met and emphatically approved of me. In fact I will go so far as to suggest that the main reason this girl’s staunchly traditionalist, conservative parents permitted their rebellious teenage daughter to invite boys over to a birthday party was because they trusted me, and believed my presence would be a moderating influence.

Also among the males in attendance were the brother of one of the popular socialite attendees, whose love of soap operas and celebrity gossip, and whose general stylistic flamboyance, had convinced everyone concerned that he was not exactly straight; my closest friend, who was as passive and agreeable a teenager as you will ever have the pleasure to know; and a young man whose politics I staunchly disagreed with, and who would later go on to have an eighteen-month on-and-off relationship with the birthday girl, though he did not know it at the time.

Although I noticed this numerical gender discrepancy effectively immediately, at no point did it occur to me that, were I so motivated, I could probably have leveraged these odds into some manner of romantic affair. This, despite what could probably be reasonably interpreted as numerous hints to the effect of “Oh look how big the house is. Wouldn’t it be so easy for two people to get lost in one of these several secluded bedrooms?”

Although I credit this obliviousness largely to the immense respect I maintained for the host’s parents and the sanctity of their home, I must acknowledge a certain level of personal ignorance owing mainly to a lack of similar socialization, and also to childhood brain damage. This acute awareness of my own past, and in all likelihood, present, obliviousness to social subtleties is part of why I am so readily willing to accept that I might have easily missed whatever aspect of this film made it so worthwhile.

In any case, as the hypothesis goes, this particular film was in fact mediocre, just as I believed at the time. However, unlike myself, with my single-minded judgement based solely on the artistic merits, or lack thereof, of the film, it is possible that my female comrades, while agreeing in the abstract with my assessment, opted instead to be somewhat more holistic in their presentation of opinions. Or to put it another way, they opted to be socially opportunistic in how they signaled their emotional state. As it was described to me, my reaction would then, at least in theory, be to attempt to comfort and reassure them. I would assume the stereotypical role of male defender, and the implications therewith, which would somehow transmogrify into a similarly structured relationship.

Despite the emphatic insistence of most of the involved parties, and absent any conclusive confession, I remain particularly skeptical of this hypothesis, though admittedly it does correlate with existing psychological and sociological research on terror-induced pair-bonding. I doubt I shall ever truly understand the horror genre. It would be easy to state categorically that there is no merit to trying to induce negative emotions without cause, and that those who wish to use such experiences as a cover for other overtures ought simply to get over themselves, but given that, as things go, this is an apparently victimless crime, and seems to bring a great deal of joy to some people, it is more likely that the issue lies in myself rather than in the rest of the world.

To a person who seeks to understand the whole truth in its entirety, the notion that there are some things that I simply do not have the capacity to understand is frustrating. Knowing that there are things which other people can comprehend, yet I cannot, is extremely frustrating. More than frustrating; it is horrifying. To know that there is an entire world of subtext and communication that is lost to me; that my brain is damaged in such a way that I am oblivious to things that are supposed to be obvious, is disconcerting to the point of terrifying.

I will probably never know the answer to these questions, as at this point I am probably the only one who yet bothers to dwell on that one evening many moons ago. It will remain in my memory an unsolved mystery, and a reminder that my perception is faulty in ways imperceptible to me, but obvious to others. It might even be accurate to say that I will remain haunted by this episode.

Happy Halloween.

My Superpowers

So, I don’t know if I mentioned this, but I have a minor superpower. Not the cyborg stuff. That exists, but isn’t really a power so much as a bunch of gadgets I wear to keep me alive. Nor any of the intellectual or creative abilities it has been alleged that I possess, for those are both ordinary in the scope of things, and also subjective. Rather I refer to my slight clairvoyance. I can sense changes in the weather. I have had this ability referred to as “my personal barometer”, but in truth it often functions more like a “personal air-raid siren”; specifically one that can’t be shut up.

Near as I can tell, this is related to pressure changes, and happens because something, somewhere inside me, is wired wrong. I have been told that my sinuses are out of order in such a way as to make me vulnerable to comparatively minor changes in pressure, and strong circumstantial evidence suggests damage somewhere in my nervous system, caused by childhood encephalitis, which creates the microscopic, undetectable vulnerability that manifests in my seizures and migraines, and which could plausibly be exploited by other factors.

This has the effect of allowing me to feel major weather changes somewhere between six hours and a week before they arrive where I am, depending on the size and speed of the shift. It starts as a mild bout of light-headedness, the same as the rush of blood flowing away from my head when I stand up after not moving for some time. If it is a relatively minor shift, this may be all that I feel.

It then grows into a more general feeling of flu-like malaise; the same feeling that normally tells you that you are sick even when there are no active symptoms. At this point, my cognitive function begins to seriously degrade. I start to stutter and stumble, and struggle for words that are on the tip of my tongue. I forget things and lose track of time. I will struggle both to get to sleep and to wake up.

Depending on the severity and duration, these symptoms may be scarcely visible, or they may have me appearing to be on death’s door. It is difficult to tell these symptoms apart from those of allergies, migraines, or an infection, especially once I begin to experience chills and aches. This is compounded by my immune system’s proclivity to give false negatives for pathology due to my immunodeficiency, and false positives due to my autoimmune responses. Fortunately, the end result is mostly the same: I am advised to stay home, rest, make sure I eat and drink plenty, redouble our protective quarantine procedures, etcetera.

At its worst, these symptoms also induce a cluster migraine, which confines me to bed and limits my ability to process and respond to stimuli to a level only slightly better than comatose. At this point, my symptoms are a storm unto themselves, and, short of a hurricane, I’m probably not going to be much concerned with whatever is happening outside the confines of my room, as I’ve already effectively sealed myself off from the outside world. I will remain so confined for however long it takes until my symptoms pass. This may be a few hours, or a few weeks. During these days, my cognitive ability is limited to producing a couple hundred words, only forty or so of which are unique.

If I am lucky, I will still have the mental faculties to passively watch videos, listen to music with words, and occasionally write a handful of sentences. I generally cannot read long tracts, as reading requires several skills simultaneously – visual focus, language processing, inner narration, and imagination of the plot – which is usually beyond my limits. I can sometimes get by with audiobooks, provided the narration is slow enough and the plot not overly complex. If I am not able to deal with words, then I am limited to passing my waking hours listening to primarily classical music. Fortunately, I also tend to sleep a great deal more in this state.

Once I have entered this state, my superpower (or perhaps it is an unsung quirk of human perception) means that I don’t really consciously recognize time passing in the normal way. Without discrete events, sensations, or thoughts to mark time, the days all kind of meld together. With my shades closed, my light permanently off, and my sleep cycle shattered, days and nights lose their meaning. Every moment is the same as every other moment.

Thus, if it takes two weeks by calendar until I am well enough to return to normal function, I may wake up with only two or three days worth of discrete memories. And so in retrospect, the time that took other people two weeks to pass took me only three days. It therefore emerges that in addition to my limited form of clairvoyance, I also possess a limited form of time travel.

Admittedly, I am not great at controlling these powers. I have virtually no control over them, except some limited ability to treat the worst of the symptoms as they come up. So perhaps it is that they are not so much my powers as powers that affect me. They do not control me, as I still exist, albeit diminished, independently of and regardless of them. They do affect others, but only through how they affect me.

All of this to say, the storms that are presently approaching the northeastern United States are having a rather large impact on my life at present. If I were of a more superstitious bent, I might suggest that this is meant as a way to sabotage my plans to get organized, and generally to rain on my parade (cue canned laughter).

There isn’t a great deal that I can do to work around this, any more than a blind man can work around a print book. The best I can hope for is that this is a “two steps forward, one step back” situation, which will also depend on how quickly this storm clears up, and on me being able to hit the ground running afterwards.