On Horror Films

Recently, I was confronted with a poll regarding my favorite horror film. This was only slightly awkward, as, of the films listed as options, I had seen… none.


Broadly speaking, I do not see fit to use my personal time to make myself experience negative emotions. Moreover, since the majority of horror films focus on narrow, contrived circumstances and are driven by a supernatural, usually vaguely biblical demon, I find it difficult to suspend disbelief and buy into the premise. To me, the far better horror experiences have been disaster films, in particular the likes of Threads or By Dawn’s Early Light, along with certain alternate history films, in particular the HBO film Fatherland, which did more to get across the real horror of the Holocaust and genocide to thirteen-year-old me than six months of social studies lessons.

To wit, the only bona fide horror film I’ve seen was something about Satan coming to haunt elevator-goers for their sins. Honestly, I thought it was exceedingly mediocre at best. However, I saw this film at a birthday party for a friend of mine, the confidant of a previous crush. I had come to know this girl after she transferred to our public middle school from the local Catholic school. We saw this film at her birthday party, which was, in the manner of things, perceived as the very height of society, in the presence of an overwhelmingly female audience, most of whom my friend had known from St. Mary’s. Apparently to them the film was excellent, as many professed to be quite scared, and it remained the subject of conversation for some months afterward.

I have come to develop three alternative hypotheses for why everyone but myself seemed to enjoy this distinctly mediocre film. The first is that I am simply not a movie person and was oblivious to the apparent artistic merit of this film. This would fit existing data, as I have similarly ambiguous feelings towards many types of media my friends generally seem to laud. This is the simplest explanation, and thus the null hypothesis which I have broadly accepted for the past half-decade or so.

The second possible explanation is that, since the majority of the audience, myself excepted, was Catholic, attended Catholic church, and had gone to the Catholic primary school in our neighborhood, and because the film made several references to Catholic doctrine and literature, to the point that several times my friend had to lean over and whisper the names and significance of certain prayers or incantations, the film carried extra weight for everyone besides myself. Perhaps I lacked the necessary background context to understand what the creators were trying to reach for. Perhaps my relatively secular and avowedly skeptical upbringing had desensitized me to this specific subset of supernatural horror, while the far more mundane terrors of war, genocide, and plague fill much the same role in my psyche.

The third alternative was suggested to me, several years after the fact, by a male compatriot who was not in attendance but was familiar with all of the attendees, and it was subsequently corroborated by testimony from both male and female attendees. The third possibility is that my artistic assessment at the time was not only entirely on point, but was the silent majority opinion, and that this opinion was suppressed, consciously or unconsciously, for social reasons. Perhaps, it has been posited to me, the appearance of being scared was for my own benefit? Going deeper, perhaps some or all of the motivation to see a horror film at a party of both sexes was not entirely platonic?

It is worth distinguishing, at this point, the relative numbers and attitudes of the various sexes. At this party, there were a total of about twenty teenagers. Of this number, there were three or four boys (my memory fails me as to exact figures), including myself. I was on the guest list from the beginning as a matter of course; I had been one of the birthday girl’s closest friends since she arrived in public school, and perhaps more importantly, her parents had met and emphatically approved of me. In fact I will go so far as to suggest that the main reason this girl’s staunchly traditionalist, conservative parents permitted their rebellious teenage daughter to invite boys over to a birthday party was because they trusted me, and believed my presence would be a moderating influence.

Also among the males in attendance were the brother of one of the popular socialite attendees, whose love of soap operas and celebrity gossip, and whose general stylistic flamboyance, had convinced everyone concerned that he was not exactly straight; my closest friend, who was as passive and agreeable a teenager as you will ever have the pleasure to know; and a young man whose politics I staunchly disagreed with and who would later go on to have an eighteen-month, on-and-off relationship with the birthday girl, though he did not know it at the time.

Although I noticed this numerical gender discrepancy effectively immediately, at no point did it occur to me that, were I so motivated, I could probably have leveraged these odds into some manner of romantic affair. This, despite what could probably be reasonably interpreted as numerous hints to the effect of “Oh look how big the house is. Wouldn’t it be so easy for two people to get lost in one of these several secluded bedrooms?”

Although I credit this obliviousness largely to the immense respect I maintained for the host’s parents and the sanctity of their home, I must acknowledge a certain level of personal ignorance owing mainly to a lack of similar socialization, and also to childhood brain damage. This acute awareness of my own past, and in all likelihood, present, obliviousness to social subtleties is part of why I am so readily willing to accept that I might have easily missed whatever aspect of this film made it so worthwhile.

In any case, as the hypothesis goes, this particular film was in fact mediocre, just as I believed at the time. However, unlike myself with my single-minded judgement based solely on the artistic merits, or lack thereof, of the film, it is possible that my female comrades, while agreeing in the abstract with my assessment, opted instead to be somewhat more holistic in the presentation of their opinions. Or to put it another way, they opted to be socially opportunistic in how they signaled their emotional state. As it was described to me, my reaction would then, at least in theory, be to attempt to comfort and reassure them. I would assume the stereotypical role of male defender, with all the implications therewith, which would somehow transmogrify into a similarly structured relationship.

Despite the emphatic insistence of most involved parties, and with no conclusive confession, I remain particularly skeptical of this hypothesis, though admittedly it does correlate with existing psychological and sociological research on terror-induced pair-bonding. I doubt I shall ever truly understand the horror genre. It would be easy to state categorically that there is no merit in trying to induce negative emotions without cause, and that those who wish to use such experiences as a cover for other overtures ought simply to get over themselves, but given that, as things go, this is an apparently victimless crime, and seems to bring a great deal of joy to some people, it is more likely that the issue lies in myself than in the rest of the world.

To a person who seeks to understand the whole truth in its entirety, the notion that there are some things that I simply do not have the capacity to understand is frustrating. Knowing that there are things which other people can comprehend, yet I cannot, is extremely frustrating. More than frustrating; it is horrifying. To know that there is an entire world of subtext and communication that is lost to me; that my brain is damaged in such a way that I am oblivious to things that are supposed to be obvious, is disconcerting to the point of terrifying.

I will probably never know the answer to these questions, as at this point I am probably the only one who yet bothers to dwell on that one evening many moons ago. It will remain in my memory an unsolved mystery, and a reminder that my perception is faulty in ways imperceptible to me, but obvious to others. It might even be accurate to say that I will remain haunted by this episode.

Happy Halloween.

My Superpowers

So, I don’t know if I mentioned this, but I have a minor superpower. Not the cyborg stuff. That exists, but isn’t really a power so much as a bunch of gadgets I wear to keep me alive. Nor any of the intellectual or creative abilities I am alleged to possess, for those are both ordinary in the scheme of things, and also subjective. Rather, I refer to my slight clairvoyance. I can sense changes in the weather. I have had this ability referred to as “my personal barometer”, but in truth it often functions more like a “personal air-raid siren”, specifically one that can’t be shut up.

Near as I can tell, this is related to pressure changes, and happens because something, somewhere inside me, is wired wrong. I have been told that my sinuses are out of order in such a way as to make me vulnerable to comparatively minor changes in pressure, and strong circumstantial evidence suggests damage somewhere in my nervous system, caused by childhood encephalitis, which creates the microscopic, undetectable vulnerability that manifests in my seizures and migraines, and could plausibly be exploited by other factors.

This has the effect of allowing me to feel major weather changes somewhere between six hours and a week before they arrive where I am, depending on the size and speed of the shift. It starts as a mild bout of light-headedness, the same as the rush of blood flowing away from my head when standing up after not moving for some time. If it is a relatively minor dislocation, this may be all that I feel.

It then grows into a more general feeling of flu-like malaise; the same feeling that normally tells you that you are sick even if there are not yet any active symptoms. At this point, my cognitive function begins to seriously degrade. I start to stutter and stumble, and struggle for the words that are on the tip of my tongue. I forget things and lose track of time. I will struggle both to get to sleep and to wake up.

Depending on the severity and duration, these symptoms may be scarcely visible, or they may have me appearing on death’s door. It is difficult to tell these symptoms apart from those of allergies, migraines, or an infection, especially once I begin to experience chills and aches. This is compounded by my immune system’s proclivity to give false negatives for pathology due to my immunodeficiency, and false positives due to my autoimmune responses. Fortunately, the end result is mostly the same: I am advised to stay home, rest, make sure I eat and drink plenty, redouble our protective quarantine procedures, etcetera.

At its worst, these symptoms also induce a cluster migraine, which confines me to bed and limits my ability to process and respond to stimuli to a level only slightly better than comatose. At this point, my symptoms are a storm unto themselves, and, short of a hurricane, I’m probably not going to be much concerned with whatever is happening outside the confines of my room, as I’ve already effectively sealed myself off from the outside world. I will remain so confined for however long it takes until my symptoms pass. This may be a few hours, or a few weeks. During these days, my cognitive ability is limited to a couple hundred words, only forty or so of which are unique.

If I am lucky, I will still have the mental faculties to passively watch videos, listen to music with words, and occasionally write a handful of sentences. I generally cannot read long tracts, as reading requires several skills simultaneously – visual focus, language processing, inner narration, and imagination of the plot – which is usually beyond my limits. I can sometimes get by with audiobooks, provided the narration is slow enough and the plot not overly complex. If I am not able to deal with words, then I am limited to passing my waking hours listening to primarily classical music. Fortunately, I also tend to sleep a great deal more in this state.

Once I have entered this state, my superpower (or perhaps it is an unsung quirk of human perception) means that I don’t really consciously recognize time passing in the normal way. Without discrete events, sensations, or thoughts to mark time, the days all kind of meld together. With my shades closed, my light permanently off, and my sleep cycle shattered, days and nights lose their meaning. Every moment is the same as every other moment.

Thus, if it takes two weeks by calendar until I am well enough to return to normal function, I may wake up with only two or three days’ worth of discrete memories. And so, in retrospect, the time that took other people two weeks to pass took me only three days. It therefore emerges that in addition to my limited form of clairvoyance, I also possess a limited form of time travel.

Admittedly, I am not great at controlling these powers. I have virtually no control over them, except some limited ability to treat the worst of the symptoms as they come up. So perhaps it is that they are not so much my powers as they are powers that affect me. They do not control me, as I still exist, albeit diminished, independently of and regardless of them. They do affect others, but only through how they affect me.

All of this is to say that the storms presently approaching the northeastern United States are having a rather large impact on my life. If I were of a more superstitious bent, I might suggest that this is meant as a way to sabotage my plans to get organized, and generally to rain on my parade (cue canned laughter).

There isn’t a great deal that I can do to work around this, any more than a blind man can work around a print book. The best I can hope for is that this is a “two steps forward, one step back” situation, which will also depend on how quickly this storm clears up, and on me being able to hit the ground running afterwards.

Television Bubbles

So there’s a new show on Disney that allegedly follows the cast of That’s So Raven some decade after the show itself ended. This isn’t news per se, considering the show launched in July.

This is news to me, however. For some reason, the existence of this show, its premiere, any hype and marketing that may have surrounded it, and generally anything about it, managed to come and go completely unnoticed by me. I learned about this by accident; I happened to recognize the characters on a screen in the back of a burrito restaurant. At first I thought I was watching a very old rerun. But I was informed by other members of my party that, no, that’s part of the new show. Didn’t I know about it?

I have been wracking my brain trying to recall whether I ever heard anything about this. The closest I can come up with is a very vague recollection of someone making an offhanded remark in passing that such a concept was under consideration. This would have been in February or March, probably. Thing is, I don’t actually remember this as a conversation. It’s just as possible that, in trying to convince myself that I must have heard of this at some point, part of my brain has fabricated that vague sense.

In retrospect, if I were going to miss an entire television series, the chronology makes sense. May through early July, I was buried in schoolwork. In late April, I began Project Crimson, which by my count eliminated about half of all the advertising I see. By July, my whirlwind travel schedule had begun. I stayed more or less up to date on the news, because there were plenty of television screens blaring cable news headlines wherever I went, and because, when it is likely that I will meet new people, I make an effort to brush up on current events so as to have the relevant discussion points of the day, but this really only applies to news headlines.

So it is possible to imagine that this series premiere happened somewhere further down in my news feed, or in a news podcast episode that got downloaded to my phone but never listened to. I find it slightly odd that I was at, of all places, Disney World, and had no exposure whatsoever to the latest Disney show. But then again, their parks tend to focus on the more classic aspects of the Disney culture. And who knows; perhaps they did have posters and adverts up, or were putting them up while my back was turned, or whatever. Clearly, it’s possible, because it happened.

Here are my two big problems with this whole fiasco. First, this is something I would have liked to know. I would understand if some story about, say, sports or celebrity gossip slipped under my radar in such a way. I don’t watch a whole lot of TV in general, and I don’t really watch anything related to sports or celebrity news. My online news feeds respond to what I engage with, giving me more stories I am likely to digest, and quietly axing pieces that my eyes would otherwise just glide over. Though this makes me uncomfortable, and I have criticized it in the past, I accept this as a price of having my news conveniently aggregated.

Except that here, I honestly would have liked to know that there was a new That’s So Raven series in the pipeline. I would wager that I’m actually part of their target audience, which is part of why I’m so surprised that I wasn’t aware of this. That’s So Raven ran, at least where I lived in Australia, right around the time I was first old enough to follow and appreciate the slightly more complicated “all ages” programming. And while I wouldn’t rank it as my favorite, its stories did stick with me. Raven’s struggles against racism, sexism, and discrimination introduced me to these concepts before I had been diagnosed with all of my medical issues and experienced discrimination firsthand. Raven’s father’s quest to build his own small business, and Cory’s dogged (some might say relentless) entrepreneurial spirit, inspired me.

Moreover, the spinoff show Cory in the House, while often cringeworthy at the best of times, even more so than its predecessor, was the first exposure that I had to, if not the structure and dynamics, then at least the imagery and phraseology, of US politics. This was at a time when I was forbidden to watch cable news (all that was on was the war on terror) and many of my schoolmates and their parents would routinely denounce the United States and its President, as the Australian components of coalition forces in the Middle East began to suffer losses. Naturally, as the token American, I was expected to answer for all of my president’s crimes. Having a TV show that gave me a modicum of a clue as to what people were talking about, but that also taught that America and American ideals, while they might not be perfect, were still at least good in an idealistic sense, was immensely comforting.

All of that is to say that I hold some nostalgia for the original series and the stories they told. Now, I have not seen this new show. I don’t know how close it is to the original. But I have to imagine that such nostalgia was a factor in the decision to approve this new series, which would suggest that it is aimed at least partly at my demographic. Given that there are trillions of dollars involved in making sure that targeted demographics are aware of the products they ought to consume, and that I haven’t been living particularly under a rock, it seems strange that this passed me by.

Furthermore, even if a series of unusual events caused me to miss it this time, I am quite sure that I would have picked up on it five years ago. Even three years ago, I would have, within a few weeks of launch, seen some advert or comment and investigated. In all probability, I would have watched this show from day one, or shortly thereafter. However, the person I am now, and my media habits, have diverged so much from the person I was then that we no longer have this in common. This rattles me. Even though I understand and accept that selves are not so much constant as changing so slowly that one does not notice most days, this is still a shock.

Which brings me nicely to my second problem in all of this. This new series, in many respects, represents a best-case scenario for something that is likely to cross my path. Yes, there are confounding variables at play: I was traveling, I have cut down how much advertising I tolerate, and I had been mostly skimming the headlines. But these aren’t once-in-a-blue-moon problems. There was a massive, concerted publicity effort, on behalf of one of the largest media and marketing machines on the planet, to promote a story that I would have embraced if it ever came across my radar, while I was at one of their theme parks, and while I was making a conscious effort to pay attention to headlines. And yet I still missed this.

This raises an important, terrifying question: what else have I missed? The fact that I missed this one event, while mildly disappointing, will likely not materially impact my life in the foreseeable future. The fact that I could have missed it in the first place, on the other hand, shows that there is a very large blind spot in my awareness of current happenings. It is at least large enough to fly an entire TV series through, and probably quite a bit larger.

I am vaguely aware, even as a teenager, that I do not know all things. But I do take some pride in being at least somewhat well informed, and ready to learn. I like to believe that I have some grasp on the big picture, and that I have at least some concept of the things that I am not paying attention to; to repeat an earlier example, sports and celebrity news. I can accept that there are plenty of facts and factoids that I do not know, since I am not, despite protestations, a walking encyclopedia, and I recognize that, in our new age of interconnectedness and fractally-nested cultural rabbit holes, there are plenty of niche interests with which I am not familiar. But this, I would have thought, is in my wheelhouse.

It is still possible, and I do still hope, that this is a fluke. But what if it isn’t? What if this is simply one more product of how I currently organize my life, and of how the internet and my means of connectivity fit into that? Suppose this latest scandal is just one more item that I have missed because of the particular filtering strategies I use to avoid being overloaded. If this best-case scenario didn’t get my attention, what are the odds that something without all of these natural advantages will get to me?

How likely is it that I am going to hear about the obscure piece of legislation being voted on today, or the local budget referendum, both of which affect me, but not directly or immediately enough that I’m liable to see people marching in the streets or calling me up personally? How often will I hear about the problems facing my old friends in Australia, now that I am living on a different continent, in a different time zone, and with a totally different political landscape to contend with?

For all of my fretting, I can’t conceive of a realistic path out of this. The internet is too large and noisy a place to cover all, or even a substantial number of, the bases. More content is uploaded every second than a human could digest in a lifetime. Getting news online requires either committing to one or two sources, or trusting an aggregation service, whether that be a bot, like those run by Facebook, Google, Yahoo, and the like, or a human somewhere along the line paid to curate stories.

Going old-fashioned, as I have heard proposed in a few different places, and sticking to a handful of print newspapers with paid subscriptions and a set number of pages to contend with, is either too broad, and hence has the same problem as relying on the internet at large, or too specific and cut down. TV news tends to fall somewhere between newspapers and social media. And crucially, none of these old-fashioned services are good at giving me the news that I require. I want to hear about the scandal in the White House, and the one in my local Town Hall, and hear about the new series based on the one that aired when I was young, and what the World Health Organization says about the outbreak in Hong Kong, without hearing about sports, or celebrity gossip, or that scandal in Belgrade that I don’t know enough about to comment on.

Figuring out how to reconcile this discrepancy in a way that satisfies both consumers and society’s need for a well-informed populace may well be one of the key challenges of this time in history, especially for my generation. For my part, the best I can figure is that I’m going to have to try and be a little more cognizant of things that might be happening outside of my bubble. This isn’t really a solution, any more than ‘being aware of other drivers’ is a solution for car accidents. Media bubbles are the price of casual participation in current events, and from where I stand today, non-participation is not an option.

There is Power in a Wristband


This post is part of the series: The Debriefing. Click to read all posts in this series.


Quick note: this post contains stuff that deals with issues of law and medical advice. While I always try to get things right, I am neither a doctor nor a lawyer, and my blog posts are not to be taken as such advice.

Among people I know for whom it is a going concern, medical identification is a controversial subject. For those not in the know, medical identification is a simple concept: the idea is to have some sort of preestablished method to convey to first responders and medical personnel, in the event that the patient is incapacitated, the presence of a condition which may either require immediate, specific treatment (say, a neurological issue that requires the immediate application of a specific rescue medication) or impact normal treatment (say, an allergy to a common drug).

The utilitarian benefits are obvious. In an emergency situation, where seconds count, making sure that this information is discovered and conveyed can, and often does, make the difference between life and death, and can prevent delays and diversions that are costly in time, money, and future health outcomes. The importance of this element cannot be overstated. There are also purported legal benefits to having pertinent medical information easily visible for law enforcement and security to see. On the other hand, some will tell you that this is a very bad idea, since it gives legal adversaries free evidence about your medical conditions, which is something they’d otherwise have to prove.

The arguments against are equally apparent. There are obvious ethical quandaries in compelling a group of people to identify themselves in public, especially as, in this case, it pertains to normally confidential information about medical and disability status. And even where the macro-scale political considerations do not enter into it, there are the personal considerations. Being forced to make a certain statement in the way one dresses is never pleasant, and retaining that mode of personal choice and self-expression can make the risk of exacerbated medical problems down the line seem like a fair trade-off.

I can see both sides of the debate here. Personally, I do wear some medical identification at all times – a small bracelet around my left wrist – and have more or less continuously for the last decade. It is not so flamboyantly visible as some people would advise. I have no medical alert tattoos, nor embroidered jacket patches. My disability is not a point of pride. But it is easily discoverable should circumstances require it.

Obviously, I think that what I have done and continue to do is fundamentally correct and right, or at least, is right for me. To do less seems to me foolhardy, and to do more seems not worth the pains required. Those pains are not particularly logistical; rather, they are the social cost of my disability always being the first impression and first topic of conversation.

It bears repeating that, though I am an introvert in general, I am not particularly bashful about my medical situation. Provided I feel sociable, I am perfectly content to speak at length about all the nitty gritty details of the latest chapter in my medical saga. Yet even I have a point at which I am uncomfortable advertising that I have a disability. While I am not averse to inviting empathy, I do not desire others to see me as a burden, nor for my disability to define every aspect of our interactions any more than the fact that I am left-handed, or brown-eyed, or a writer. I am perfectly content to mention my medical situation when it comes up in conversation. I do not think it appropriate to announce it every time I enter a room.

Since even I, who am literally a spokesman and disability advocate, feel this way, it is easy to understand that there are many who do not feel it is appropriate for them to say even as much as I do. Some dislike the spotlight in general. Others are simply uncomfortable talking about a very personal struggle. Still others fear the stigma and backlash associated with any kind of imperfection and vulnerability, let alone one as significant as a bona fide disability. These fears are not unreasonable. The decision to wear medical identification, though undoubtedly beneficial to health and safety, is not without a tradeoff. Some perceive that tradeoff, rightly or wrongly, as not worth the cost.

Even though this position is certainly against standard medical advice, and I would never advocate going against medical advice, I cannot bring myself to condemn those who reject this kind of advice with the same definitiveness with which I condemn, say, refusing to vaccinate for non-medical reasons, or insurance companies compelling patients toward certain medical decisions for economic reasons. The personal reasons, even though they are personal and not medical, are too close to home. I have trouble finding fault with a child who doesn’t want to wear an itchy wristband, or a teenager who just wants to fit in and make their own decisions about appearance. I cannot fault them for wanting what by all rights should be theirs.

Yet the problem remains. Without proper identification, it is impossible for first responders to identify those who have specific, urgent needs. Without having these identifiers be sufficiently obvious and present at all times, the ability of security and law enforcement to react appropriately to those with special needs relies solely on their training beforehand, and on their trusting the people they have just detained.

In a perfect world, this problem would be completely moot. Even in a slightly less than perfect world, where all these diseases and conditions still existed, but police and first responder training was perfectly robust and effective, medical identification would not be needed. Likewise, in such a world, the stigma of medical identification would not exist; patients would feel perfectly safe announcing their condition to the world, and there would be no controversy in adhering to the standard medical advice.

In our world, it is a chicken-egg problem, brought on by understandable, if frustrating, human failings at every level. Trying to determine fault and blame ultimately comes down to questioning the nitty gritty of morality, ethics, and human nature, and as such, is more suited to an exercise in navel gazing than an earnest attempt to find solutions to the problems presently faced by modern patients. We can complain, justifiably and with merit, that the system is biased against us. However such complaints, cathartic though they may be, will not accomplish much.

This vicious cycle, however, can be broken. Indeed, it has been broken before, and recently. Historical examples abound of oppressed groups coming to break the stigma of an identifying symbol and claiming it as a mark of pride. The example that comes most immediately to mind is the recent progress that has been made by LGBT+ groups in eroding the stigma of terms which quite recently were used as slurs, and in reclaiming symbols such as the pink triangle as symbols of pride. In a related vein, the Star of David, once known as a symbol of oppression and exclusion, has come to be used by the Jewish community in general, and Israel in particular, as a symbol of unity and commonality.

In contrast to such groups, the road for those requiring medical identification is comparatively straightforward. The disabled and sick are already widely regarded as sympathetic, if pitiful. Our symbols, though they may be stigmatized, are not generally reviled. When we face insensitivity, it is usually not because those we face are actively conspiring to deny us our needs, but simply because we may well be the first people they have encountered with these specific needs. As noted above, this is a chicken-egg problem, as the less sensitive the average person is, the more likely a given person with a disability that is easily hidden is to try and fly under the radar.

Imagine, then, if you can, such a world, where a medical identification necklace is as commonplace and unremarkable as a necklace with a religious symbol. Imagine walking through a parking lot and seeing stickers announcing the medical condition of a driver or passenger with the same regularity as you see advertisements for a political cause or a vacation destination. Try to picture a world where people are as unconcerned about seeing durable medical equipment as they are about seeing American flag apparel. It is not difficult to imagine. We are still a ways away from it, but it is within reach.

I know that this world is within reach, partially because I myself have seen the first inklings of it. I have spent time in this world, at conferences and meetings. At several of these conferences, wearing a colored wristband corresponding to one’s medical conditions is a requirement for entry, and there it is seen not as a symbol of stigma, but as one of empowerment. Wristbands are worn in proud declaration, amid the short-sleeved shirts of walkathon teams that leave medical devices bare for all the world to see.

Indeed, in this world, the medical ID bracelet is a symbol of pride. It is shown off amid pictures of fists clenched high in triumph and empowerment. It is shown off in images of gentle hands held in friendship and solidarity.

It is worth mentioning, with regard to this last point, that the system of wristbands is truly universal. That is to say, even those who have no medical afflictions whatsoever are issued wristbands, albeit in a different color. To those who are not directly afflicted, they are a symbol of solidarity with those who are. But it remains a positive symbol regardless.

The difference between these wristbands, which are positive symbols, and ordinary medical identification, which is at best inconvenient and at worst oppressive, has nothing to do with the physical discrepancies between them, and everything to do with the attitudes that are attached by both internal and external pressure. The wristbands, it will be seen, are a mere symbol, albeit a powerful one, onto which we project society’s collective feelings towards chronic disease and disability.

Medical identification is in itself amoral, but in its capacity as a symbol, it acts as a conduit that amplifies our existing feelings and anxieties about our condition. In a world where disabled people are discriminated against, left to go bankrupt buying the medication they need to survive, and even targeted by extremist groups, it is not hard to find legitimate anxieties to amplify in this manner. By contrast, in an environment in which the collective attitude towards these issues is one of acceptance and empowerment, the projected feelings can be equally positive.

The Moral Hazard of Hope


This post is part of the series: The Debriefing. Click to read all posts in this series.


Suppose that five years from today, you would receive an extremely large windfall. The exact number isn’t important, but let’s just say it’s large enough that you’ll never have to budget again. Not technically infinite, because that would break everything, but for the purposes of one person, basically undepletable. Let’s also assume that this money becomes yours in such a way that it can’t be taxed or swindled away from you. This is also an alternate universe where inheritance and estates don’t exist, so there’s no scheming among family, and no point in considering them in your plans. Just roll with it.

No one else knows about it, so you can’t borrow against it, nor is anyone going to treat you differently until you have the money. You still have to be alive in five years to collect and enjoy your fortune. Freak accidents can still happen, and you can still go bankrupt in the interim, or get thrown in prison, or whatever, but as long as you’re around to cash the check five years from today, you’re in the money.

How would this change your behavior in the interim? How would your priorities change from what they are?

Well, first of all, you’re probably not going to invest in retirement, or long term savings in general. After all, you won’t need to. In fact, further saving would be foolish. You’re not going to need that extra drop in the bucket, which means saving it would be wasting it. You’re legitimately economically better off living the high life and enjoying yourself as much as possible without putting yourself in such severe financial jeopardy that you would be increasing your chances of being unable to collect your money.

If this seems insane, it’s important to remember that your lifestyle and enjoyment are quantifiable economic factors (the keyword is “utility”) that weigh against the (relative and ultimately arbitrary) value of your money. This is the whole reason why people buy stuff they don’t strictly need to survive, and why rich people spend more money than poor people, despite not being physiologically different. Because any money you save is basically worthless, and your happiness still has value, buying happiness, expensive and temporary though it may be, is always the economically rational choice.

This is tied to an important economic concept known as moral hazard, a condition where the normal risks and costs involved in a decision fail to apply, encouraging riskier behavior. I’m stretching the idea a little bit here, since it usually refers to more direct situations. For example, if I have a credit card that my parents pay for, to be used “for emergencies”, and I know I’m never going to see the bill, because my parents care more about our family’s credit score than about most anything I would think to buy, then that’s a moral hazard. I have very little incentive to do the “right” thing, and a lot of incentive to do whatever I please.

There are examples in macroeconomics as well. For example, many say that large corporations in the United States are caught in a moral hazard problem, because they know that they are “too big to fail”, and will be bailed out by the government if they get into serious trouble. As a result, these companies may be encouraged to make riskier decisions, knowing that any profits will be massive, and any losses will be passed along.

In any case, the idea is there. When the consequences of a risky decision become uncoupled from the reward, it can be no surprise when rational actors take riskier decisions. If you know that in five years you’re going to be basically immune to any hardship, you’re probably not going to prepare for the long term.

Now let’s take a different example. Suppose you’re rushed to the hospital after a heart attack, and diagnosed with a heart condition. The condition is minor for now, but could get worse without treatment, and will get worse as you age regardless.

The bad news is, in order to avoid having more heart attacks, and possible secondary circulatory and organ problems, you’re going to need to follow a very strict regimen, including a draconian diet, a daily exercise routine, and a series of regular injections and blood tests.

The good news, your doctor informs you, is that the scientists, who have been tucked away in their labs and getting millions in yearly funding, are closing in on a cure. In fact, there’s already a new drug that’s worked really well in mice. A researcher giving a talk at a major conference recently showed a slide of a timeline that estimated FDA approval in no more than five years. Once you’re cured, assuming everything works as advertised, you won’t have to go through the laborious process of treatment.

The cure drug won’t help if you die of a heart attack before then, and it won’t fix any problems with your other organs if your heart gets bad enough that it can’t supply them with blood, but otherwise it will be a complete cure, as though you were never diagnosed in the first place. The nurse discharging you tells you that since most organ failure doesn’t appear until patients have had the condition for at least a decade, so long as you can avoid dying for half that long, you’ll be fine.

So, how are you going to treat this new chronic and life-threatening disease? Maybe you will be the diligent, model patient, always deferring to the most conservative and risk-averse advice in the medical literature, certainly hopeful for a cure, but not willing to bet your life on a grad student’s hypothesis. Or maybe, knowing nothing else on the subject, you will trust what your doctor told you, and your first impression of the disease, getting by with only as much invasive treatment as is needed to avoid dying and being called out by your medical team for being “noncompliant” (referred to in chronic illness circles in hushed tones as “the n-word”).

If the cure does come in five years, as happens only in stories and fantasies, then either way, you’ll be set. The second version of you might be a bit happier from having more fully sucked the marrow out of life. It’s also possible that the second version would have also had to endure another (probably non-fatal) heart attack or two, and dealt with more day to day symptoms like fatigue, pains, and poor circulation. But you never would have really lost anything for being the n-word.

On the other hand, if by the time five years have elapsed, the drug hasn’t gotten approval, or, quite possibly, hasn’t gotten close after the researchers discovered that curing a disease in mice didn’t also cure it in humans, then the differences between the two versions of you are going to start to compound. It may not even be noticeable after five years. But after ten, twenty, thirty years, the second version of you is going to be worse for wear. You might not be dead. But there’s a much higher chance you’re going to have had several more heart attacks, and possibly other problems as well.

This is a case of moral hazard, plain and simple, and it does appear in the attitudes of patients with chronic conditions that require constant treatment. The fact that, in this case, the perception of a lack of risk and consequences is a complete fantasy is not relevant. All risk analyses depend on the information that is given and available, not on whatever the actual facts may be. We know that the patient’s decision is ultimately misguided because we know the information they are being given is false, or at least, misleading, and because our detached perspective allows us to take a dispassionate view of the situation.

The patient does not have this information or perspective. In all probability, they are starting out scared and confused, and want nothing more than to return to their previous normal life with as few interruptions as possible. The information and advice they were given, from a medical team that they trust, and possibly have no practical way of fact checking, has led them to believe that they do not particularly need to be strict about their new regimen, because there will not be time for long term consequences to catch up.

The medical team may earnestly believe this. It is the same problem one level up; the only difference is, their information comes from pharmaceutical manufacturers, who have a marketing interest in keeping patients and doctors optimistic about upcoming products, and researchers, who may be unfamiliar with the hurdles in getting a breakthrough from the early lab discoveries to a consumer-available product, and whose funding is dependent on drumming up public support through hype.

The patient is also complicit in this system that lies to them. Nobody wants to be told that their condition is incurable, and that they will be chronically sick until they die. No one wants to hear that their new diagnosis will either cause them to die early, or live long enough for their organs to fail, because even by adhering to the most rigid medical plan, the tools available simply cannot completely mimic the human body’s natural functions. Indeed, telling a patient that they will still suffer long term complications, whether in ten, twenty, or thirty years, almost regardless of their actions today, it can be argued, will have much the same effect as telling them that they will be healthy regardless.

Given the choice between two extremes, optimism is obviously the better policy. But this policy does have a tradeoff. It creates a moral hazard of hope. Ideally, we would be able to convey an optimistic perspective that also maintains an accurate view of the medical prognosis, and balances the need for bedside manner with incentivizing patients to take the best possible care of themselves. Obviously this is not an easy balance to strike, and the balance will vary from patient to patient. The happy-go-lucky might need to be brought down a peg or two with a reality check, while the nihilistic might need a spoonful of sugar to help the medicine go down. Finding this middle ground is not a task to be accomplished by a practitioner at a single visit, but a process to be achieved over the entire course of treatment, ideally with a diverse and well experienced team including mental health specialists.

In an effort to finish on a positive note, I will point out that this is already happening, or at least, is already starting to happen. As interdisciplinary medicine gains traction, as patient mental health becomes more of a focus, and as patients with chronic conditions begin to live longer, more hospitals and practices are working harder to ensure that a positive and constructive mindset for self-care is a priority, alongside educating patients on the actual logistics of self-care. Support is easier to find than ever, especially with organized patient conferences and events. This problem, much like the conditions that cause it, is chronic, but is manageable with effort.

 

Reflections on Contentedness

Contentedness is an underrated emotion. True, it doesn’t have the same electricity as joy, or the righteousness of anger. But it has the capability to be every bit as sublime. As an added bonus, contentedness seems to lean towards a more measured, reflective action as a result, rather than the rash impulsiveness of the ecstatic excitement of unadulterated joy, or the burning rage of properly kindled anger.

One of the most valuable lessons I have learned in the past decade has been how to appreciate being merely content instead of requiring utter and complete bliss. It is enough to sit in the park on a nice and sunny day, without having to frolic and chase the specter of absolute happiness. Because in truth, happiness is seldom something that can be chased.

Of course, contentedness also has its more vicious form if left unmoderated. Just as anger can beget wrath, and joy beget gluttony, greed, and lust, too much contentedness can bring about a state of sloth, or perhaps better put, complacency. Avoiding complacency has been a topic on my mind a great deal of late, as I have suddenly found myself with free time and energy, and wish to avoid squandering it as much as possible.

This last week saw a few different events of note in my life, which I will quickly recount here:

I received notification of the death of an acquaintance and comrade of mine. While not out of the blue, or even particularly surprising, it did nevertheless catch me off guard. This news shook me, and indeed, if this latest post seems to contain an excess of navel-gazing pondering without much actual insight to match, that is why. I do have more thoughts and words on the subject, but am waiting for permission from the family before posting anything further.

The annual (insofar as having something two years in a row makes an annual tradition) company barbecue hosted at our house by my father took place. Such events are inevitably stressful for me, as they require me to exert myself physically in preparation for houseguests, and then to be present and sociable. Nevertheless, the event went on without major incident, which I suppose is a victory.

After much consternation, I finally picked up my diploma and finalized transcript from the high school, marking an anticlimactic end to the more than half-decade long struggle with my local public school to get me what is mine by legal right. In the end, it wasn’t that the school ever shaped up, decided to start following the law, and started helping me. Instead, I learned how to learn and work around them.

I made a quip or two about how, now that I can no longer be blackmailed with grades, I could publish my tell-all book. In truth, such a book will probably have to wait until after I am accepted into higher education, given that I will still have to work with the school administration through the application process.

In that respect, very little is changed by the receipt of my diploma. There was no great ceremony, nor parade, nor party in my honor. I am assured that I could yet have all such things if I were so motivated, but it seems duplicitous to compel others to celebrate me and my specific struggle, outside of the normal milestones and ceremonies which I have failed to qualify for, under the pretense that it is part of that same framework. Moreover, I hesitate to celebrate at all. This is a bittersweet occasion, and a large part of me wants nothing more than for this period of my life to be forgotten as quickly as possible.

Of course, that is impossible, for a variety of reasons. And even if it were possible, I’m not totally convinced it would be the right choice. It is not that I feel strongly that my unnecessary adversity has made me more resilient, or has become an integral part of my identity. It has, but this is a silver lining at best. Rather, it is because as much as I wish to forget the pains of the past, I wish even more strongly to avoid such pains in future. It is therefore necessary that I remember what happened, and bear it constantly in mind.

The events of this week, and the scattershot mix of implications they have for me, make it impossible for me to be unreservedly happy. Even so, being able to sit on my favorite park bench, loosen my metaphorical belt, and enjoy the nice, if unmemorable, weather, secure in the knowledge that the largest concerns of recent memory and foreseeable future are firmly behind me, does lend itself to a sort of contentedness. Given the turmoil and anguish of the last few weeks of scrambling to get schoolwork done, this is certainly a step up.

In other news, my gallery page is now operational, albeit incomplete, as I have yet to go through the full album of photographs that were taken but not posted, nor have I had the time to properly copy the relevant pages from my sketchbook. The fictional story which I continue to write is close to being available. In fact, it is technically online while I continue to preemptively hunt down bugs; it just doesn’t have anything linking to it. This coming weekend is slated to be quite busy, with me going to a conference in Virginia, followed by the Turtles All the Way Down book release party in New York City.

The Professional Sick Person

This last week, I spent a bit of time keeping up my title as a professional sick person. I achieved this, luckily, without having to be in any serious danger, because the cause of my temporary indisposition was the series of vaccines I received. I tend to be especially prone to the minor side effects of vaccination (the symptoms that make one feel vaguely under the weather without making one feel seriously at risk of death), which isn’t surprising given my immune pathology.

Enduring a couple of days at most of foggy-headedness, low grade fevers and chills, and frustrating but bearable aches is, if still unpleasant, then at least better than most any other illness I have dealt with in the last decade.

What struck me was being told, contrary to my own experience and subsequent expectations, that a couple of days would in itself be an average amount of time for a “normal” person to recover fully from an ordinary illness. That, for someone who has a healthy and properly functioning immune system, it is entirely possible to get through the whole cycle of infection for a garden-variety cold in a weekend.

This is rather shocking news to me. I had always assumed that when the protagonist of some television show called in sick for a single day, and returned to work/school the next, that this was just one of those idiosyncrasies of the TV universe, the same way characters always wear designer brands and are perfectly made up.

I had always assumed that in reality, of course people who caught a cold would take at least a week to recover, since it usually takes me closer to two, assuming it doesn’t develop into some more severe infection. Of course people who have the flu spend between three and five weeks at home (still optimistic, if you’re asking me), that is, if they can get by without having to be hospitalized.

This probably shouldn’t surprise me. I know, consciously, that I spend more time confined to quarantine by illness than almost anyone I know, and certainly more than anyone I’m friends with for reasons other than sharing a medical diagnosis or hospital ward. Still, it’s easy to forget this. It’s extremely easy to assume, as I find myself often doing even without thinking, that barring obvious differences, other people are fundamentally not unlike myself, and share most of my perspectives, values, and challenges. Even when I am able to avoid doing this consciously, I find that my unconscious mind often does this for me.

It’s jarring to be suddenly reminded, then, of exactly how much my health truly does, and I don’t use this phrase lightly, screw me over; apparently it does so so often and so thoroughly that I have to a large degree ceased to notice, except when it causes a jarring contrast against my peers.

Feeling slightly terrible as a side effect of getting vaccines has, on an intellectual and intuitive level, ceased to be an annoyance in itself. It is only problematic insofar as it prevents me from going about my business otherwise: my mental fog makes writing difficult, my fevers and chills compel me to swaddle my shivering body to offset its failure to maintain temperature, and my omnipresent myalgia gives me a constant nagging reminder of the frailty of my mortal coil, but these are mere physical inconveniences. Of course, this does not negate the direct physical impact of my current disposition; it merely contextualizes it.

Having long ago grown used to the mental feeling of illness, and without feeling poor enough physically to garner any genuine concern for serious danger to my long term health and survival, the fact that I am sick rather than well is reduced to a mere footnote: a status. In the day to day story that I narrate to myself and others, the symptoms I have described are mere observations of the setting, without any lasting impact on the plot, nor on the essence of the story itself.

I often call myself a professional sick person, a phrase which I learnt from John Green via Hazel Grace Lancaster. The more time I spend thinking about my health, the more I find this metaphor apt. After all, in the past decade of being enrolled in and nominally attending public school, I have spent more time in hospitals than in a classroom. My health occupies a majority of my time, and the consequences for ignoring it are both immediate and dire. I regard my health as a fundamental part of my routine and identity, the way most people do their jobs. Perhaps most compelling: my focus on it, like that of a professional on their trade, has warped my perspective.

We all know the story of the IT expert incapable of explaining things in human terms, or of the engineer so preoccupied with interesting solutions as to be blind to the obvious ones, or of the artist unable to accept a design that is less than perfect. In my case, I have spent so much time dealing with my own medical situation that I find it exceedingly difficult to understand the relative simplicity of everyone else’s.

The Social Media Embargo

I have previously mentioned that I do not frequently indulge in social media. I thought it might be worthwhile to explore this in a bit more detail.

The Geopolitics of Social Media

Late middle school and early high school are a perpetual arms race for popularity and social power. This is a well-known and widely accepted thesis, and my experience of adolescence, along with my study of the high schools of past ages and of other countries and cultures, has led me to treat it as a given. Social media hasn’t changed this. It has amplified it, however, in the same manner that improved intercontinental rocketry and the invention of the nuclear ballistic missile submarine intensified the dangers of the Cold War.

To illustrate: in the late 1940s and into the 1950s, before ICBMs were accurate or widely deployed enough to pose a credible threat of annihilation, the minimum amount of warning of impending doom, and the maximum amount of damage that could be inflicted, were limited by the size and capability of each side’s bomber fleet. Accordingly, a war could only be waged, and hence could only escalate, as quickly as bombers could reach enemy territory. This both imposed an inherent limit on each side’s destructive capability and acted as a safeguard against accidental escalation, by providing a time delay in which snap diplomacy could take place.

The invention of long-range ballistic missiles changed this calculus by massively decreasing the time from launch order to annihilation, and the ballistic missile submarine carried it further by putting both powers perpetually in range of a decapitation strike: a disabling blow that would wipe out enemy command and launch capability.

This new strategic situation has two primary effects, both of which increase the possibility of accident and the cost to both players. First, both powers must adopt a policy of “Launch on Warning”: moving immediately to full retaliation based only on early warning, or even acting preemptively when one believes an attack is or may be imminent. Second, both powers must accelerate their own armament programs, both to preserve their own decapitation-strike capability, and to ensure that they retain enough capacity to retaliate even after absorbing an enemy decapitation strike.

It is a prisoner’s dilemma, plain and simple: both sides would be better off if neither armed, but whatever the other side does, each is individually better off arming, and so both end up paying for arsenals that leave neither more secure. And indeed, with each technological iteration, the differences in payoffs and punishments become larger and more pronounced. At some point the cost of the continuous arms race becomes overwhelming, but whichever player yields first also forfeits their status as a superpower.

The same is, at least in my experience, true of social media use. Regular checking and posting is generally distracting and appears to have serious mental health costs, but so long as the cycle continues, it also serves as the foremost means of social power projection. And indeed, as Mean Girls teaches us, in adolescence as in nuclear politics, the only way to protect against an adversary is to maintain the means to retaliate at the slightest provocation.

This trend is not new. Mean Girls, which codified much of what we think of as modern adolescent politics and social dynamics, was made in 2004. Technology has not changed the underlying nature of adolescence, though it has accelerated and amplified its effects and costs. Nor is the dynamic limited to adolescents: the same kinds of power structures and popularity contests that dominated high school recur throughout the wider world, especially as social media and the internet at large play a greater role in organizing our lives.

This is not inherently a bad thing if one is adept at social media. If you have the energy to post, curate, and respond on a continuous schedule, more power to you. I, however, cannot. I blame most of this on my disability, which limits my ability to handle large amounts of stimuli without becoming both physiologically and psychologically overwhelmed. The rest I blame on my perfectionist tendencies, which demand that my responses be complete and precise, and that I see every interaction through until I am sure I have proven my point. While this is a decent enough mindset for academic debate, it is actively counterproductive on the social internet.

Moreover, continuous exposure to the actions of my peers reminded me of a depressing fact that I often tried to forget: that I was not with them. My disability is not so much a handicap in that it prevents me from doing things when I am with my peers, as in that it prevents me from being present with them in the first place. I become sick, which prevents me from attending school, which keeps me out of conversations, which means I’m not included in plans, which means I can’t attend gatherings, and so forth. Social media reminds me of this by showing me all the exciting things my friends are doing while I am confined to bed rest.

It is difficult to remedy this kind of depression and anxiety. Stray depressive thoughts that have no basis in reality can, at least sometimes, and for me often, be talked apart once they are shown to be baseless, and it is relatively simple to dismiss them when they pop up later. But these factual reminders that I am objectively left out, that I alone among my peers am missing from those smiling faces, that my existence seems objectively sadder and less interesting, are far harder to argue with.

The History of the Embargo

I first got a Facebook account a little less than six years ago, on my fourteenth birthday. It was my first real social media presence to speak of, and marked both the beginning of the end of parental restrictions on my internet consumption and the beginning of a very specific window of my adolescence that I have since come to particularly loathe.

Facebook wasn’t technically new at this point, but it also wasn’t the immutable giant that it is today. It was still viewed as a game for the young, and it was entirely possible to find someone who wasn’t familiar with the concept of social media without being a total Luddite. Perhaps more relevantly, mine was the first wave of people who had grown up with the internet as a lower-case entity and who were now of age to join social media. That is, we had grown up never knowing a world where it was necessary to go to a library for information, or where information was something stored physically, or even where past stories were something held in one’s memory rather than on hard drives.

In this respect, I consider myself lucky that the official line of the New South Wales Department of Education and Training’s computer curriculum was, at the time I went through it, almost technophobic by modern standards: vehemently denouncing the evils of “chatrooms” and regarding the use of this newfangled “email” with the darkest suspicion. It didn’t give me real skills to equip me for the revolution that was coming, and that I would live through firsthand, but it did, I think, give me a sense of perspective.

Even if that curriculum was already outdated by the time it got to me, it underscored how quickly things had changed in the few years before I enrolled. This knowledge, even if I didn’t fully understand it at the time, helped to calibrate a sense of proportion and reasonableness that has been a moderating influence on my technological habits.

During my first two years or so of having a Facebook account, I fell down the rabbit hole of social media. If I had an announcement, I posted it. If I found a curious photo, I posted it. If I had a funny joke or a stray thought, I posted it. Facebook didn’t take over my life, but it did become a major theatre of it. What was recorded and broadcast there seemed, for a time, just as important as the actual conversations and interactions I had at school.

This same period, perhaps unsurprisingly, also saw a decline in my mental wellbeing. It’s difficult to tease apart a direct cause, as a number of different things all happened at roughly the same time: my physiological health deteriorated, some of my earlier friends began to grow distant from me, and I started attending the school that would continually throw obstacles in my path and refuse to accommodate my disability. But I do think my use of social media amplified the psychological effects of these events, especially inasmuch as it acted as a focusing lens on all the things that set me apart from my peers.

At the behest of those closest to me, I began to take breaks from social media. These helped, but given that they were always circumstantial or limited in time, their effects were accordingly temporary. Moreover, the fact that these breaks were an exception rather than a standing rule meant that I always returned to social media, and when I did, the chaos of catching up often undid whatever progress I might have made in the interim.

After I came to the conclusion that my use of social media was causing me more personal harm than good, I decided that the only way to remove its influence was total prohibition. Others, perhaps, might find that they have the willpower to deal with shades of gray in their personal policies. And indeed, in my better hours, so do I. The problem is that social media does its damage precisely when I am not in one of my better hours, but have been worn down by circumstance. It is therefore not enough for me to resolve to spend less time on social media, or to log off when I feel it is becoming detrimental. I require strict rules that can only be overridden in the most exceedingly extenuating circumstances.

My solution was to write down the rules which I planned to enact. The idea was that those would be the rules, and if I could justify an exception in writing, I could amend them as necessary. Having this as a step helped to decouple the utilitarian action of checking social media from the compulsive cycle of escalation. If I had a genuine reason to use social media, such as using it to provide announcements to far flung relatives during a crisis, I could write a temporary amendment to my rules. If I merely felt compelled to log on for reasons that I could not express coherently in a written amendment, then that was not a good enough reason.

This decision hasn’t been without its drawbacks. Without social media I am undoubtedly less connected to my peers than I might otherwise have been, and the existing trend of my being the last person to know about anything has continued to intensify; but crucially, I am no longer so acutely aware of this trend that it has a serious impact, one way or another, on my day-to-day psyche. Perhaps some months hence I shall, upon further reflection, conclude that my current regime is beginning to inflict more damage than that which it originally remedied, and once again amend my embargo.

Arguments Against the Embargo

My reflections on my social media embargo have led me to stumble upon two relevant moral quandaries. The first is whether ignorance can truly be bliss, and whether there is an appreciable distinction between genuine experience and hedonistic simulation. In walling myself off from the world I have achieved a measure of peace and contentment, at the possible cost of disconnecting myself from my peers, and to a lesser degree from the outside world. In philosophical terms, I have alienated myself, both from my fellow man and from my species-essence. Of course, whether social media is a genuine solution to alienation or merely a vehicle of it is a debate unto itself, particularly given my situation.

It is unlikely, though not impossible, that my health would have allowed me to participate in any kind of physical activity to which I could foreseeably have been invited as a direct result of an increased social media presence. Particularly given my deteriorating mental health at the time, it seems far more reasonable to assume that my presence would have been a one-sided affair: I would have sat, and scrolled, and become too self-conscious and anxious about the things that I saw to contribute in a way that would be noticed by others. With these considerations in mind, the question of authenticity of experience appears to be academic at best, and nothing for me to lose sleep over.

The second question regards the duty of expression. It has oft been posited, particularly amid the socio-political turmoils of late, that every citizen has a duty to be informed and to make their voice heard; and that furthermore, in declining to take a position, we are, if not tacitly endorsing the greater evil, then at least tacitly declaring that all available positions are morally equivalent in our apathy. Indeed, I myself have made such arguments in the past as they pertain to voting, and to a lesser extent to advocacy in general.

The argument goes that social media is the modern equivalent of the colonial town square, or the classical forum, and that as the default venue for socio-political discussion, our abstract duty to be informed participants is thus transmogrified into a specific duty to participate on social media. This argument, combined with the vague Templar-esque compulsion to correct wrongs that also drives me to rearrange objects on the table, acknowledge others’ sneezes, and correct spelling, is not lost on me.

In practice, I have found that these discussions are, at best, pyrrhic victories, and more often entirely fruitless: they cause positions to become ever more entrenched, poison relationships, and convert no one, all the while creating a blight in what is supposed to be a shared social space. And since internet shouting matches tend to be decided primarily by who blinks first, they create a situation in which any withdrawal, even for perfectly valid reasons such as, say, having more pressing matters than trading insults over tax policy, is viewed as concession.

While this doesn’t directly address the dilemma as posited, it does make its proposal untenable. Taking to social media to agitate is no more effective than conducting a hunger strike against North Korea, and given my health situation, it is not really a workable strategy anyway. Given that ought implies can, I feel satisfied in dismissing any lingering doubts about my present course.

Song of Myself

Music has always played an important role in my life, and I have found comfort in it during some of my darkest hours. In particular, I have often listened, during times of crisis, to songs that I feel reflect me as a person, regardless of whether I especially like them as songs, as a means of reminding myself who I am and what I fight for. This has led me to what I think is an interesting artistic experiment: putting together a playlist that represents, not necessarily my tastes for listening today, but me as a person, through my personal history.

To put it another way: if I were hosting an Olympics, what would the opening ceremony look, and more importantly sound, like? Or, if I were designing a Voyager probe record to give a person I’ve never met a taste of what “me” means, what would it focus on?

I could easily spend a great deal of time compiling, editing, and rearranging a truly epic playlist lasting several hours. But that would miss the point of the exercise. While my interest in listening to my own soundtrack might be effectively infinite, that of other people is not. The goal here is not to compile a full soundtrack, but to gather a few selections that convey the zeitgeist of my past.

This is my first attempt. I have chosen four songs, each representing roughly five years of my life, and compiled them into a playlist available for listening here. (Yes, I have a YouTube account/channel; I use it to make my own playlists for listening. Nothing special.) The songs, and my takeaway from each, are described below.


1997-2002: Rhapsody in Blue

If I had to pick a single piece to represent my life, it would probably have to be Rhapsody in Blue, by George Gershwin. It was my favorite piece of music for a long time, and I remember, during visits with my grandparents, how thrilled I would be when my grandfather put on his classical records and this piece came on.

Rhapsody in Blue is perhaps best known as the United Airlines jingle, which is part of why I loved it so much. It represented flying, travel, adventure, and being treated like a member of high society as we flew in business class. I also reveled in knowing the name of a song that everyone else knew merely as a jingle. The energy and strong melody of the piece still captivate me to this day, and remind me of that childhood delight at each new adventure and horizon.

2002-2007: Pack Up Your Troubles, arr. Mark Northfield

Aside from being one of my favorite arrangements of any song, this one captures many of the conflicting feelings I have towards the first part of my schooling. I was indeed happy to be in a learning environment where I could soak up knowledge, but at the same time I often found the classes themselves dreadfully dull. And while I was initially quite happy with my social group, within a couple of years I had gone from being at the center of all playground affairs to being a frequently bullied pariah.

This song juxtaposes the cheerful, upbeat World War I song with a musical soundscape of a battlefield of the same time period, becoming more chaotic and pessimistic as time goes on. This also reflects my general experience in primary school, as my social life, my overall happiness, and my physical health all deteriorated over this time from a point of relative contentment to a point of absolute crisis. (2007 was the first year in which I genuinely remember nearly dying, and the first time I was confronted with a bona-fide disability.)

2007-2012: Time, Forward!

If 2007 was a breaking point in my life, then the years following were a period of picking up the pieces and learning how to adapt to my new reality. Time, Forward!, by Georgy Sviridov, captures much the same feeling, which makes sense considering how frequently it has been used to represent the Soviet 1920s, including at the Sochi games. This period of my life was chaotic and turbulent, and most of the things I have come to regret saying, doing, or believing date from it. Yet it was also a formative time: it cemented the medical habits that would ensure my survival, and brought me several new friends.

It was during this time that my family moved back to the United States. With a fresh start in a new hemisphere, and several new disabilities and diagnoses to juggle, I was determined above all not to let myself be bullied and victimized the way I had been during primary school. I threw myself into schoolwork, and tried to avoid any display of vulnerability whatsoever. This, I discovered, did not make me any more popular or well liked than I had been during primary school, which yielded a great deal of angst and conflict.

2012-2017: Dance of the Knights

You’ll notice that this piece is pseudo-classical, in the same vein as Rhapsody in Blue, while still being the work of Prokofiev, a Russian, and later Soviet, composer. In this respect, it sits somewhere between the 2007-2012 period and the 1997-2002 period, which I reckon is a reasonably accurate assessment of the past five years. The great highs and lows between late primary and early high school, which often involved grave medical threats to my life, have thankfully (hopefully) given way to a more predictably unpredictable set of obstacles: not only medically, but socially and psychologically, as my friends and I have grown up and learned to handle drama better.

The commonalities with the earlier pieces also reflect the change in priorities that I have worked very hard to (re)cultivate after seeing the distress that my existential focus on schoolwork brought me. In the past few years I have begun to reprioritize the things I believe are more likely to bring me happiness over mere success, harkening back to what I held dear, and found so intriguing in Rhapsody in Blue, in early childhood. At the same time, the piece, partly as a result of its context in Romeo and Juliet, has a distinctly mature, adult air to it: something I struggle with internally, but which I am nevertheless thrust into regularly as I age.


If anyone else is interested in trying this project/challenge, please go ahead, and let me know. I can imagine it making a good group prompt, and I would be very interested to compare others’ playlists with my own.

What is a Home?

I know that I’m getting close to where I want to be when the GPS stops naming roads. That’s fine. These roads don’t have names, or even a planned logic to them; they merely exist relative to other things. Out here, roads are defined by where they go, rather than places being defined by addresses.

After a while I begin to recognize familiar landmarks. Like the roads, these landmarks don’t have names; they refer instead to some event in the past. First we drive through the small hamlet where I was strong-armed into my first driving lesson. We pass the spot where my grandmother stopped the golf cart by the side of the road to point out the lavender honeysuckle to far younger versions of my younger brother and me, and we spent a half hour sampling the taste of the flowers. Next we pass under the tree that my cousin was nervously looking up at when my father grabbed him by the shoulders and screamed that he was under attack by Drop Bears, causing my cousin to very nearly soil himself.

I have never lived in a single house continuously for more than about eight years. I grew up traveling, an outsider wherever I went, and to me the notion of a single home country, let alone a single house for a home, is as foreign as it is incomprehensible. So is the concept of living within driving distance of most of one’s relatives, for that matter.

To me, home has always been a utilitarian rather than a moral designation. Home is where I sleep for free, where the things that don’t fit in my suitcase go, and where the bills get forwarded. Home is the place where I can take as long as I want in the bathroom, rearrange the furniture to my arbitrary personal preferences, and invite people over without asking, but that is all. Anywhere these criteria are met can be home to me; other factors such as ownership, geographic location, and proximity to relatives or to points of personal history are irrelevant. I can appreciate the logistical value of all of these things, but attaching much more importance to them seems strange.

Yet even as I write this I find myself challenging my own points. Walking around my grandfather’s farmhouse, which is the closest thing I have to a consistent home, I am reminded of images of myself from a different time, especially from before I was consciously able to make choices about who I am. It’s difficult to think of myself that long ago in terms of me and my story; it is much easier to think of myself in terms of the other objects that were also present.

My grandparents used to run a preschool from their house, and the front room is still stocked with toys and books from that era. Many of the decorations have remained unchanged from when my grandmother ran the place. The doors and cabinets are all painted in bright pastel colors. In my mind, these toys were as much my own as any that stayed at home while we traveled. Each of these toys has wrapped up in it the plot lines from several hundred different games between myself and whoever else I could rope into playing with me.

Against the wall is a height chart listing my own, my brother’s, and my cousins’ heights since as early as we could stand. For most of my childhood this was the official scale for determining who was tallest in the ever-raging battle for height supremacy, and I remember feeling ready to burst with pride the first time I was verified as tallest. I am tall enough now that I have outgrown the tallest measuring point. I am indisputably the tallest in the family. And yet I still feel some strange compulsion to measure myself there, beyond the mere curiosity that is aroused every time I see a height scale in a doctor’s office.

This place isn’t my home, not by a long shot. In many respects, it meets fewer of my utilitarian criteria than a given hotel. It is the closest I have ever felt to understanding the cultural phenomenon of Home, and yet it is still as foreign as anywhere else. If one’s home is tied to one’s childhood, as both my own observations and those of others I have read seem to indicate, then I will probably never have a home. This might be a sad realization, if I knew any different.

I have often been accused of holding a worldview that leaves no room for certain “human” elements. This accusation, as far as I can tell, is probably on point, though somewhat misleading. It is not out of malice or antipathy towards these elements that I place no value on concepts such as “home”, “patriotism”, or, for that matter, “family”. It is because they are foreign to me, and because from my viewpoint as an outsider, I genuinely cannot see their value.

I can understand and recognize the utilitarian value: I recognize the importance of having a place to which mail can be delivered and where oversized objects can be stored; I can understand the preference for ensuring that one’s country of residence is secure and prosperous; and I can see the value of a close support network, and how one’s close relatives might easily become among one’s closest friends. But inasmuch as these things are supposed to have inherent value beyond their utilitarian worth, I cannot see it.

This is probably, I am told, a result of my relatively unusual life trajectory, which has served to isolate me from most cultural touchstones. I never had a home or homeland because we lived abroad and moved around when I was young. I fail to grasp the value of family because I have never lived close enough to extended relatives for them to become friends, and my illness and disability have further limited me from experiencing most of the cultural touchstones I might otherwise share with family.

It might sound like I am lamenting this fact. Perhaps I would be, if I knew what it was I am allegedly missing. In reality, I lament only the fact that I cannot understand these things, which seem to come so naturally to others. That I lack a capital-H Home, or some deeper connection to extended family or country, is neither sad nor happy, but merely a fact of my existence.