The Story of Revival

Okay, I'll admit it. Rather than writing as I normally do, I've spent most of the last week playing Cities: Skylines, a game I find distinctly easy to sink many hours into. But I do want to post this week, so I thought I would tell the story thus far of one of the cities I've been working on.

Twenty-odd years ago, a group of plucky, enterprising pioneers ventured forth to settle the pristine stretch of land just beside the highway and build a shining city on a hill. The totalitarian government backing the project to build a number of planned cities had agreed to open up the land to development and, apparently eager to prove something, granted the project effectively unlimited funds, offering to resettle workers as soon as buildings could be constructed. Concerned that they would be punished personally for the city's failure, the settlers took to calling it "New Roanoke". The name stuck.

A cloverleaf interchange was built to guide supplies and new settlers towards the settlement, with a roundabout in the center of town. The roundabout in turn fed traffic down the main streets: Karl Marx Avenue, Guy Debord Boulevard, and Internationale Drive. Within a year of its establishment, New Roanoke began making strides towards its mandate to build a utopia by imposing strict sustainability guidelines on all new construction. With an infinite budget, the city government established large-scale projects to entice new settlers.

With its zeppelins for transport, its high-tech sustainable housing initiatives, and its massive investment in education and science, the city gained a reputation as a research haven, and began to attract eccentric futurist types who had been shunned elsewhere. New Roanoke became known as a city that was open to new ideas. A diverse populace flocked to New Roanoke, leading it through a massive boom.

Then disaster struck, first in the form of a tornado that ripped through the industrial district, trashing the rail network that connected the city to the outside world and linked its districts to one another. The citizens responded by building a glittering new monorail system to replace it, and by renewing investment in emergency warnings and shelters. This system was put to the test when an asteroid impacted just outside the rapidly expanding suburbs of the city.

Although no one was hurt, the impact was taken by the population as an ill omen. Soon enough the government had walled off the impact site and redirected the expansion of the city to new areas. Observant citizens noticed several government agents and scientists loitering around the exclusion zone, and photographs quickly circulated on conspiracy websites detailing the construction of new secret research facilities just beyond the wall.

This story was quickly buried, however, by a wave of mysterious illness. At first it was a small thing; local hospitals reported an uptick in the number of deaths among traditionally vulnerable populations such as children, the elderly, and the disabled. Soon, however, reports began to appear of otherwise healthy individuals collapsing in the middle of their routines. The city’s healthcare network became overloaded within days.

The government clung to the notion that this massive wave of deaths was the result of an infection, despite few, if any, symptoms in those who had died, and so acted to stop the supposed spread, closing public spaces and discouraging the use of public transport. Ports of entry, including the city's air, sea, and rail terminals, were closed as a containment measure. Places of employment also closed, though whether out of a desire to assist the government or to flee the city, none can say. These measures may or may not have helped, but the one thing they did do was create traffic so horrendous that emergency vehicles, and increasingly commonly hearses, could not navigate the city.

With a mounting body count, the government tore up what open space it could find in the city to build graveyards. When these were filled, the city built crematoria to process the tens of thousands of dead. When these were overloaded, people turned to piling bodies in abandoned skyscrapers, which the government dutifully demolished when they were full.

By the time the mortality rate fell back to normal levels, between a third and a half of the population had died, and tensions in New Roanoke sat on a knife's edge. The city government built a monument to honor those who had died in what was being called "the Great Mortality". The opening ceremony brought visiting dignitaries from the national government and, naturally, inspired protests. These protests were initially small, but a heavy-handed police response caused them to escalate until full-scale riots erupted. The city was once again paralyzed by fear and panic, as all of the tension that had bubbled under the surface during the Great Mortality boiled over.

Local police called in outside reinforcements, including the feared and hated secret police, who had so far been content to allow the city to function mostly autonomously to encourage research. The authorities forced the rioters to surrender by declaring martial law and shutting down water and power to rebellious parts of the city. With public services suspended, looters and rioters burned themselves out. When the violence began to subside, security forces marched in to restore order by force. Ad-hoc drumhead courts-martial sentenced the guilty to cruel and unusual punishments.

The secret police established a permanent office adjacent to the new courthouse, which was built in the newly-reconstructed historic district. The city was divided into districts for the purposes of administration. Several districts, mainly those in the older, richer sections of the city, and those by the river, cruise terminals, and airports, were given special status as tourist and leisure districts. The bulk of rebuilding aid was directed to these areas.

New suburbs were established outside of the main metropolis, as the national government sought to rekindle the utopian vision and spirit that had once propelled the city to great heights. The government backed the establishment of a spaceport to bring in tourists, and new research initiatives such as a medical research center, a compact particle accelerator, and an experimental fusion power plant. Life remained tightly controlled by the new government, but after a time, settled into a familiar rhythm. Although tensions remained, an influx of new citizens helped bury the memory of the troubled past.

With the completion of its last great monument, the Eden Project, the city government took the opportunity to finally settle on a name more befitting the city that had grown. The metropolis was officially re-christened "Revival" on the thirtieth anniversary of its founding. Life in Revival is not, despite its billing, a utopia, but it is a far cry from its dystopian past. Revival is not exceptionally rich, despite being reasonably well developed and having high land values, though solvency has never been a priority for the city government.

I cannot say whether or not I would prefer to live in Revival myself. The idea of living in such a glittering, antiseptic world of glass and steel and snow-white concrete, with monorails and zeppelins providing transport between particle colliders, science parks, and state-of-the-art medical centers, where energy is clean and all waste is recycled or treated so as to have no discernible environmental impact, sounds attractive, though it would also leave me skeptical.

Thoughts on Steam

After much back and forth, I finally have a Steam account. I caved eventually because I wanted to be able to actually play my brother's birthday present to me: the game Cities: Skylines and all of its additional downloadable content packs. I had resisted what has for some time felt inevitable, downloading Steam, for a couple of reasons. The first was practical. Our family's main computer is now close to a decade old, and in its age does not handle new things gracefully, or at least, does not do so consistently. Some days it will have no problem running multiple CPU-intensive games at once. Other days it promptly keels over when I so much as try to open a document.

Moreover, our internet is terrible. So terrible, in fact, that its latest speed test results mean it does not qualify as broadband under any statutory or technical definition, despite our paying not only for broadband, but for the highest available tier of it. Allegedly this problem has to do with the geography of our neighborhood and the construction of our house. Apparently, according to our ISP, the same walls which cannot help but share our heating and air conditioning with the outside, and which allow me to hear a whisper on the far side of the house, are totally impermeable to WiFi signals.

These fears were initially confirmed when my download told me that it would be complete in an estimated two hundred and sixty-one days. That is to say, it would take several times longer to download than it would for me to fly to the game developer's headquarters in Sweden and get a copy on a flash drive. Or even to take a leisurely sea voyage.

This prediction turned out, thankfully, to be wrong. The download took a mere five hours; the vast majority of the progress was made during the last half hour, when I was alone in the house. This is still far longer than the fifteen minutes or less that I'm accustomed to when installing from a CD. I suppose I ought to cut some slack here, given that I didn't have to physically go somewhere to purchase the CD.

My other point of contention with Steam is philosophical. Steam makes it abundantly clear in its terms and conditions (which, yes, I do read, or at least skim, as a general habit) that when you are paying them money to play games, you aren't actually buying anything. At no point do you actually own the game that you are nominally purchasing. The legal setup here is terribly complicated, and given its novelty, not crystal clear in its definitions and precedents, especially with the variations in jurisdiction that come with operating on the Internet. But while it isn't clear what Steam is, Steam has made it quite clear what it isn't. It isn't selling games.

The idea of not owning the things that one buys isn't strictly new. Software has never really been for sale in the old sense. You don't buy Microsoft Word; you buy a license to use a copy of it, even if you received it on a disk that was yours to own. Going back further, while you might own the physical token of a book, you don't own the words in it, inasmuch as they are not yours to copy and sell. This is a consequence of copyright and related concepts of intellectual property, which are intended to assist creators by granting them a temporary monopoly on their creations' manufacture and sale, so as to incentivize more good creative work.

Yet this last example pulls at a loose thread: I may not own the story, but I do own the book. I may not be allowed to manufacture and sell new copies, but I can dispose of my current copy as I see fit. I can mark it, alter it, even destroy it if I so choose. I can take notes and excerpts from it so long as I am not copying the book wholesale, and I can sell my single copy of the book to another person for whatever price the two of us may agree upon, the same as any other piece of property. Software is not like this, though a strong argument can be made that it is only very recently that this new status quo has become practically enforceable.

Indeed, for as long as software has been sold in stores by means of disks and flash drives, it has been closer to the example of the classic book. For as long as I have my CD, and whatever authentication key might come with it, I can install its contents wherever I see fit. Without Internet connectivity to report back on my usage, there is no way for the publisher to even know whether I am using their product, let alone whether I am using it in the intended manner. Microsoft can issue updates and changes, but with my CD and non-connected computer, I can keep my version of their software running how I like it forever.

Steam, however, takes this mindset that has existed in theory to its practical conclusion. You do not own the games that you pay for. This is roughly the difference between buying a car and chartering a limo service. Now, there's nothing inherently wrong with this approach, but it is a major shift. There is of course the shift in power from consumers to providers: rather than getting to dispose of your games as you see fit, you can have them revoked by Steam if you misbehave or cheat. This is unnerving, especially to one such as myself who is accustomed to having more freedom with the things I buy (that's why I buy them: to do with as I please), but it is not as interesting as the larger implications for the notion of property as a whole.

I don’t think the average layman knows or even cares about the particulars of license transfers. Ask such a layman what Steam does, and they’ll probably answer that they sell video games, in the same way that iTunes sells music. The actual minutiae of ownership are a distant second to the point of use. I call my games, and digital music, and the information on my Facebook feed mine, even though I don’t own them by any stretch of the imagination.

This use need not be exclusive either, so long as it never infringes on my own plans. After all, if there were a hypothetical person listening to my music and playing my games only precisely when I’m not, I might never notice.

So far I have referred mostly to digital goods, and to sharing as it pertains to intellectual property. But this need not be the case. Ridesharing, for example, is already transforming the idea of owning and chartering a vehicle. On a more technical level, this is how mortgages, banknotes, and savings accounts have worked for centuries, in order to increase the money supply and expand the economy. Modern fiat currency, it will be seen, is not so much a commodity that is discretely owned as one whose value is shared and assigned between its holder, society, and the government backing it. This quantum state is what allows credit and debt, which permit modern economies to function and flourish.

This shift in thinking about ownership certainly has the potential to be revolutionary, reshaping prices and attitudes around these new goods. Whether or not it will remains to be seen, as does whether this change will be a net positive for consumers as well as for the economy as a whole.

Cities: Skylines seems to be a fun game that our family computer can just barely manage to play. At the moment, this is all that is important to me. Yet I will be keeping an eye on how, if at all, getting games through Steam influences my enjoyment, for good or for ill.

Thanksgivings

So Australia, where I did most of my growing up, doesn't have a Thanksgiving holiday. Not even like Canada, where it's on a different day. Arbor Day was a bigger deal at my school than American Thanksgiving. My family tried to celebrate, but between school schedules that didn't recognize our traditions, time differences that made watching the Macy's parade and football game on the day impossible, and a general lack of turkey and pumpkin pie in stores, the effect was that we didn't really have Thanksgiving in the way it is usually portrayed.

This is also at least part of the reason that I have none of my neighbors' compunctions about commencing Christmas decorations, or wearing holiday apparel, as soon as the leaves start to change in September. Thanksgiving was barely a real holiday, and Halloween was something people barely decorated for, so neither of those things acted as boundaries for the celebration of Christmas, which, in contrast to the other two, was heavily celebrated and became an integral part of my cultural identity.

As a result, I don't trace our Thanksgiving traditions back hundreds of years, up the family tree through my mother's side to our ancestor who signed the Mayflower Compact, and whose name has been passed down through the ages to my brother. Rather, I trace our traditions back less than a decade, to my first year in American public school, when my teacher made our class go through a number of stereotypical traditions like making paper turkeys by tracing our hands, and writing down things we were thankful for. Hence: what I'm thankful for this year.

First, as always, I am thankful to be alive. This sounds tacky and cheap, I know, so let me clarify. I am thankful to be alive despite my body which does not keep itself alive. I am thankful to have been lucky enough to have beaten the odds for another year. I am acutely aware that things could have quite easily gone the other way.

Perhaps it is a sad reflection that my greatest joy of this year is to have merely gotten through it. Maybe. But I cannot change the facts of my situation. I cannot change the odds I face. I can only celebrate overcoming them. This victory of staying alive is the one on which all others depend. I could not have other triumphs, let alone celebrate and be thankful for them without first being sufficiently not-dead to achieve and enjoy them.

I'm thankful to be done with school. I'm glad to have it behind me. While it would be disingenuous to say that high school represented the darkest period in my life (partly because it is too soon to say, but mostly because those top few spots are generally dominated by the times I nearly died, was in the ICU, etcetera), there can be no denying that I hated high school. Not just the actual building, or having to go there; I hated my life as a high school student. I didn't quite realize the depths of my unhappiness until I was done, and realized that I didn't actually hate my life by default. So I am thankful to be done and over with that.

I am thankful that I have the resources to write and take care of myself without also having to struggle to pay for the things I need to live. I am immensely thankful that I am able to sequester myself and treat my illnesses without having to think about what I am missing. In other words, I am thankful for being able to be unable to work. I am thankful that I have enough money, power, and privilege to stand up for myself, and to have others stand up for me. I am aware that I am lucky not only to be alive, but to have access to a standard of care that makes my life worth living. I know that this is an advantage that is far from universal, even in my own country. I cannot really apologize for this, as, without these advantages, it is quite likely that I would be dead, or in such constant agony and anguish that I would wish I was. I am thankful that I am neither of those things.

I am thankful that these days, I am mostly on the giving end of the charitable endeavors that I have recently been involved in. For I have been on the receiving end before. I have been the simultaneously heartbreaking and heartwarming image of the poor, pitiful child, smiling despite barely clinging to life, surrounded by the prayer blankets, get well cards, books, and other care package staples that my friends and relations were able to muster, rush-shipped because it was unclear whether they would arrive “in time” otherwise. I defied the stereotype only insofar as I got better. I am doubly thankful, first that I am no longer in that unenviable position, and second, that I am well enough to begin to pay back that debt.

The Lego Census

So the other day I was wondering about the demographics of Lego minifigures. I'm sure we're all at least vaguely aware of the fact that Lego minifigures tend to be, by default, adult, male, and yellow-skinned. This wasn't terribly worthy of serious thought back when only a handful of different minifigure designs existed. Yet nowadays Lego has thousands, if not millions, of different minifigure permutations. Moreover, the total number of minifigures in circulation is set to eclipse the number of living humans within a few years.

Obviously, even with a shift towards trying to be more representative, the demographics of Lego minifigures are not an accurate reflection of the demographics of humankind. But just how out of alignment are they? Or, to ask it another way, could the population of a standard Lego city exist in real life without causing an immediate demographic crisis?

This question has bugged me enough that I decided to conduct an informal study based on a portion of my Lego collection, or rather, a portion of it that I reckon is large enough to be vaguely representative of a population. I have chosen to conduct my counts based on the central district of the Lego city that exists in our family basement, on the grounds that it includes a sizable population from across a variety of different sets.

With that background in mind, I counted roughly 154 minifigures. The area of survey is the city's central district, which for our purposes means the largest tables with the greatest number of buildings and skyscrapers, and so presumably the highest population density.

Because Lego minifigures don't have numerical ages attached to them, I counted ages by dividing minifigures into four categories: Children, Young Adults, Middle Aged, and Elderly. Obviously these categories are qualitative and subject to some interpretation. Children are fairly obvious, thanks to their differently sized figures. An example of each adult category follows.

The figure on the left would be a young adult. The one in the middle would be classified as middle aged, and the one on the right, elderly.

Breakdown by age

Children (14)
Lego children are the most distinct category because, in addition to childish facial features and clothes, they are given shorter leg pieces. This is the youngest category, as Lego doesn't include infant minifigures in its sets. I would guess that this category covers roughly ages 5-12.

Young Adults (75)
Young adults encompass a fairly wide range, from puberty to early middle age. This group is the largest, partially because it includes the large contingent of conscripts serving in the city. An age range would be roughly 12-32.

Middle Aged (52)
Includes visibly older adults that do not meet the criteria for elderly. This group encompasses most of the city’s administration and professionals.

Elderly (13)
The elderly are those that stand out for being old, with features such as beards, wrinkled skin, or gray and white hair.

Breakdown by industry

Second is occupations. Again, since minifigures can't exactly give their own occupations, and since most jobs happen indoors where I can't see, I was forced to make some guesses based on outfits and group them into loose categories.

27 Military
15 Government administration
11 Entertainment
9 Law enforcement
9 Transport / Shipping
9 Aerospace industries
8 Heavy industry
6 Retail / services
5 Healthcare
5 Light Industry

An unemployment rate would be hard to gauge, because most of the time the unemployment rate is adjusted to omit those who aren’t actively seeking work, such as students, retired persons, disabled persons, homemakers, and the like. Unfortunately for our purposes, a minifigure who is transitionally unemployed looks pretty much identical to one who has decided to take an early retirement.

What we can take a stab at is a workforce participation rate: the percentage of all people eligible to work who are actually doing so. For our purposes, this means tallying the total number of people with assigned jobs and dividing by the total number of people capable of working, which we will assume means everyone except children. This gives us a ballpark of about 74%, decreasing to 68% if we exclude the military from both counts to look only at the civilian economy. Either of these numbers would be somewhat high, but not unexplainably so.
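Out of curiosity, I double-checked the tallies with a few lines of Python. This is just a back-of-the-envelope sketch; the dictionary keys are my own shorthand for the industry groups listed above:

```python
# Workforce participation from the census counts above.
# Assumption: everyone except children is eligible to work.
population = 154
children = 14

# Job counts from the industry breakdown above.
jobs = {
    "military": 27, "government": 15, "entertainment": 11,
    "law_enforcement": 9, "transport": 9, "aerospace": 9,
    "heavy_industry": 8, "retail": 6, "healthcare": 5, "light_industry": 5,
}

eligible = population - children   # 140 working-age minifigures
employed = sum(jobs.values())      # 104 assigned jobs

print(f"Overall participation: {employed / eligible:.0%}")   # -> 74%

# Civilian economy: drop the military from both the employed count
# and the eligible pool.
civilian_employed = employed - jobs["military"]   # 77
civilian_eligible = eligible - jobs["military"]   # 113
print(f"Civilian participation: {civilian_employed / civilian_eligible:.0%}")  # -> 68%
```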

Breakdown by sex

With no distinction between the physical forms of Lego bodies, the differences between the sexes in minifigures are based purely on cosmetic details such as hair type, the presence of eyelashes, makeup, or lipstick on a face, and dresses. This is obviously based on stereotypes, and makes it tricky to tease apart edge cases. Is the figure with poorly-detailed facial features male or female? What about that faceless conscript marching in formation with their helmet and combat armor? Does dwelling on this topic at length make me some kind of weirdo?

The fact that Lego seems to embellish characters that are female with stereotypical traits suggests that the default is male. Operating on this assumption gives you somewhere between 50 and 70 minifigures with at least one distinguishing female trait depending on how particular you get with freckles and other minute facial details.

That's a male-to-female ratio somewhere between 2.08:1 and 1.2:1. The 1.2:1 figure would be barely within the realm of ordinary populations, and even then would be highly suggestive of some kind of artificial pressure such as sex-selective abortion, infanticide, widespread gender violence, a lower standard of medical care for girls, or some kind of widespread exposure, whether to pathogens or pollutants, that causes a far higher childhood fatality rate for girls than would be expected. And here you were thinking that a post about Lego minifigures was going to be a light and gentle read.
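For completeness, here is where those two bounds come from, as another rough sketch, assuming any figure without a female-coded trait defaults to male:

```python
# Male-to-female ratio bounds from the census above.
# Assumption: any figure lacking a female-coded trait counts as male.
population = 154

for female in (50, 70):   # strict vs. generous female counts
    male = population - female
    print(f"{female} female figures -> {male / female:.2f}:1 male to female")

# Output:
# 50 female figures -> 2.08:1 male to female
# 70 female figures -> 1.20:1 male to female
```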

The 2.08:1 ratio is completely unnatural, though not completely unheard of in real life under certain contrived circumstances: certain South Asian and Middle Eastern countries have at times had male-to-female ratios as high as two to one, owing to the presence of large numbers of guest workers. In such societies, female breadwinners, let alone women traveling alone to foreign countries to send money home, are all but unheard of.

Such an explanation might be conceivable given a look at the lore of the city. The city is indeed a major trade port and center of commerce, with a non-negligible transient population, and it also hosts a sizable military presence. By the same token, I could simply say that there are more people I'm not counting hiding inside all those skyscrapers, who make everything come out even. Except this kind of narrative explanation dodges the question.

The straight answer is that, no, Lego cities are not particularly accurate reflections of our real-life cities. This lack of absolute realism does not make Lego bad toys. Nor does it detract from their value as an artistic and storytelling medium, nor from their benefits in play therapy for patients with neuro-cognitive symptoms, which was my original reason for starting my Lego collection.

 

The War on Kale

I have historically been anti-kale. Not that I disapprove of the taste of kale. I eat kale in what I would consider fairly normal amounts, and have done so even while denouncing it. My enmity is not towards the species Brassica oleracea, cultivar group Acephala. Rather, my hostility is towards the set of notions and ideas for which kale has become a symbol and shorthand in recent years.

In the circles which I frequent, at least, insofar as kale is known of, it is known as a "superfood", which, I am to understand, means that it is exceptionally healthy. It is touted, by those who are inclined to tout their choices in vegetables, as an exemplar of the kinds of foods that one ought to eat constantly. That is to say, it is touted as a staple for diets.

Now, just as I have nothing against kale, I also have nothing against diets in the abstract. I recognize that one’s diet is a major factor in one’s long term health, and I appreciate the value of a carefully tailored, personalized diet plan for certain medical situations as a means to an end.

In point of fact, I am on one such plan. My diet plan reflects my medical situation which seems to have the effect of keeping me always on the brink of being clinically underweight, and far below the minimum weight which my doctors believe is healthy for me. My medically-mandated diet plan calls for me to eat more wherever possible; more food, more calories, more fats, proteins, and especially carbohydrates. My diet does not restrict me from eating more, but prohibits me from eating less.

Additionally, because my metabolism and gastrointestinal system are so capricious as to prevent me from simply eating more of everything without becoming ill and losing more weight, my diet focuses on having me eat the highest density of calories that I can get away with. A perfect meal, according to my dietician, nutritionist, endocrinologist, and gastroenterologist, would be something along the lines of a massive double burger (well done, per immunologist request), packed with extra cheese, tomatoes, onions, lots of bacon, and a liberal helping of sauce, with a sizable portion of fries, and a thick chocolate malted milkshake. Ideally, I would have this at least three times a day, with a couple more snacks throughout the day.

Here’s the thing: out of all the people who will eventually read this post, only a very small proportion will ever need to be on such a diet. An even smaller proportion will need to stay on this diet outside of a limited timeframe to reach a specific end, such as recovering from an acute medical issue, or bulking up for some manner of physical challenge. This is fine. I wouldn’t expect many other people to be on a diet tailored by a team of medical specialists precisely for me. Despite the overly simplistic terms used in public school health and anatomy classes, every body is subtly (or in my case, not so subtly) different, and has accordingly different needs.

Some people, such as myself, can scarf 10,000 calories a day for a week with no discernible difference in weight from if they had eaten 2,000. Other people can scarcely eat an entire candy bar without having to answer for it at the doctor's office six months later. Our diets will, and should, be different to reflect this fact. Moreover, neither the composition of our respective diets, nor particularly their effectiveness, is at all indicative of some kind of moral character.

This brings me back to kale. I probably couldn’t have told you what kale was before I had fellow high schoolers getting in my face about how kale was the next great superfood, and how if only I were eating more of it, maybe I wouldn’t have so many health problems. Because obviously turning from the diet plan specifically designed by my team of accredited physicians in favor of the one tweeted out by a celebrity is the cure that centuries of research and billions in funding has failed to unlock.

What? How dare I doubt its efficacy? Well, obviously it's not going to "suppress autoimmune activation", whatever that means, with my kind of attitude. No, of course you know what I'm talking about. Of course you know my disease better than I do. How dare I question your nonexistent credentials? Why, just last night you watched a five-minute YouTube video with clip-art graphics showing how this diet = good and others = bad. Certainly that trumps my meager experience of a combined several months of direct instruction and training from the best healthcare experts in their respective fields, followed by a decade of firsthand self-management, hundreds of hours of volunteer work, and more participation in clinical research than most graduate students. Clearly I know nothing. Besides, those doctors are in the pockets of big pharma; the ones that make those evil vaccines and mind-control nanobots.

I do not begrudge those who seek to improve themselves, nor even those who wish to help others by the same means through which they have achieved success themselves. However, I cannot abide those who take their particular diet as the new gospel and try to see it implemented as a universal morality. Nor can I stand the insistence of those with no medical qualifications telling me that the things I do to stay alive, including my diet (things in which they have the distinct privilege of choice), are not right for me.

I try to appreciate the honest intentions here where they exist, but frankly I cannot put up with someone who has never walked in my shoes criticizing my life support routine. My medical regimen is not a lifestyle choice any more than breathing is, and I am not going to change either of those things on second-hand advice received in a yoga lesson, or a TED talk, or even a public school health class. I cannot support a movement that calls for the categorical elimination of entire food groups, nor a propaganda campaign against the type of restaurant that helps me stick to my diet, nor the taxation of precisely the kind of foodstuffs which I have been prescribed by my medical team.

With no other option, I can do nothing but vehemently oppose this set of notions pertaining to the new cult of the diet, as I have sometimes referred to it, and its most prominent and recognizable symbol: kale. Indeed, in collages and creative projects in which others have encouraged me to express myself, the phrases “down with kale” and “death to kale”, with accompanying images of scratched-out pictures of kale and other vegetables, have featured prominently. I have one such collage framed and mounted in my bedroom as a reminder of all the wrongs which I seek to right.

This is, I will concede, something of a personal prejudice. Possibly even a stereotype. The kind of people who seem most liable to pierce my bubble and confront me over my diet tend to be the self-assured, zealous sort, and so it seems quite conceivable that I may be experiencing some kind of selection bias that causes me to see only the absolute worst in my interlocutors. It is possible that, in my ideo-intellectual intifada against kale, I have thrown the baby out with the bathwater. In honesty, even if this were true, I probably wouldn't apologize, on the grounds that what I have had to endure has been so upsetting that, with the stakes being my own life and death as they are, my reaction has been not only justified, but correct.

As a brief aside, there is, I am sure, a great analogy to be drawn here, and an even greater deal of commentary to be drawn from this last train of thought as a reflection of the larger modern socio-political situation: refusing to acknowledge wrongdoing despite being demonstrably in the wrong. Such commentary might even be more interesting and relevant than the post I am currently writing. Nevertheless, such musings are outside the scope of this particular post, though I may return to them in the future.

So my position has not changed. I remain convinced that all of my actions have been completely correct. I have not renounced my views, and do not plan to until such time as I feel I have been conclusively proven wrong, which I do not feel has happened. What has changed is that I have been given a glimpse of a different perspective.

What happened is that someone close to me received a new diagnosis of a disease which prevents her from eating gluten, one close in pathology to a disease that I have, and one which I am also at higher risk for. This person, who will remain nameless for the purposes of this post, is as good as a sister to me, and the rest of her immediate family are like my own. We see each other at least as often as I see other friends or relations. Our families have gone on vacation together. We visit and dine together regularly enough that any medical issue that affects their kitchen also affects our own.

Now, I try to be an informed person, and prior to my friend’s diagnosis, I was at least peripherally aware of the condition with which she now has to deal. I could have explained the disease’s pathology, symptoms, and treatment, and I probably could have listed a few items that did and did not contain gluten, although this last one is more a consequence of gazing forlornly at the shorter lines at gluten-free buffets at the conferences which I attended than a genuine intent to learn.

What I had not come to appreciate was how difficult it is to find food that is not only free from gluten in itself, but completely safe from any trace of cross-contamination, which, I have learned, does make a critical difference. Many brands and restaurants offer items that are labeled as gluten free in large print, but then in smaller print immediately below disclaim all responsibility for the results of the actual assembly and preparation of the food, and indeed, for the integrity of the ingredients received from elsewhere. This is, of course, utterly useless.

Where I have found such needed assurances, however, is among those for whom this purity is a point of pride. These are the suppliers who also proudly advertise that they do not stock items containing genetically modified foodstuffs, or any produce that has been exposed to chemicals. These are the people who proclaim the supremacy of organic food and vegan diets. They are scrupulous about making sure their food is free of gluten not just because it is necessary for people with certain medical conditions, but as a matter of moral integrity. To them these matters are not only practical but ethical. In short, these are kale supporters.

This puts me in an awkward position intellectually. On the one hand, the smug superiority with which these kale supporters denounce technologies that have great potential to decrease human hardship based on pseudoscience, and out of dietary pickiness as outlined above, is grating at best. On the other hand, they are among the only people who seem to be invested in providing decent quality gluten free produce which they are willing to stand behind, and though I would trust them on few other things, I am at least willing to trust that they have been thorough in their compulsiveness.

Seeing the results of this attitude I still detest from this new angle has forced me to reconsider my continued denouncements. The presence of a niche gluten-free market, which is undoubtedly a recent development, has, alas, not been driven by increased sensitivity to those with specific medical dietary restrictions, but by the fact that my friend's medical treatment just so happens to align with a subcategory of fad diet. That this niche market exists is a good thing, and it could not exist without kale supporters. The very pickiness that I malign has paved the way for a better quality of life for my comrades who cannot afford to be otherwise. The evangelical attitude that I rage against has also successfully demanded that the food I am buying for my friend be safe for her to eat.

I do not yet think that I have horribly misjudged kale and its supporters. But regardless, I can appreciate that in this matter, they have a point. And I consider it more likely now that I may have misjudged kale supporters on a wider front, or at least, that my impression of them has been biased by my own experiences. I can appreciate that in demanding a market for their fad diets, that they have also created real value.

I am a stubborn person by nature once I have made up my mind, and so even these minor and measured concessions are rather painful. But fair is fair. Kale has proven that it does have a purpose. And to that end, I think it is only fitting that I wind down my war on kale. This is not a total cessation of all military action. There are still plenty of nutritional misconceptions to dispel and bad policies to refute, and besides that, I am far too stubborn to even promise with a straight face that I'm not going to get into arguments about a topic that is necessarily close to my heart. But the stereotype which I drew up several years ago as a common thread among the people who would pester me about fad diets and misconceptions about my health has become outdated and unhelpful. It is, then, perhaps time to rethink it.

Technological Milestones and the Power of Mundanity

When I was fairly little, probably seven or so, I devised a short list of technologies based on what I had seen on television that I reckoned were at least plausible, and which I earmarked as milestones of sorts to measure how far human technology would progress during my lifetime. I estimated that if I was lucky, I would be able to have my hands on half of them by the time I retired. Delightfully, almost all of these have in fact already been achieved, less than fifteen years later.

Admittedly, all of the technologies that I picked were far closer than I had envisioned at the time. Living in Australia, which seemed to be the opposite side of the world from where everything happened, and outside of the truly urban areas of Sydney, which were kept up to date as a consequence of international business, it often seems that even though I technically grew up after the turn of the millennium, I was raised in a place and culture that was closer to the 90s.

For example, as late as 2009, even among adults, not everyone I knew had a mobile phone. Text messaging was still "SMS", and was generally regarded with suspicion and disdain, not least of all because not all phones were equipped to handle it, and not all phone plans included provisions for receiving it. "Smart" phones (still two words) did exist on the fringes; I knew exactly one person who owned an iPhone, and two who owned BlackBerrys, at that time. But having one was still an oddity. Our public school curriculum was also notably skeptical, bordering on technophobic, about the rapid shift towards broadband and constant connectivity, diverting much class time to decrying the evils of email and chat rooms.

These were the days when it was a moral imperative to turn off your modem at night, lest the hacker-perverts on the godless web wardial a backdoor into your computer (which weighed as much as the desk it was parked on), or the machine overheat from being left on and catch fire (this happened to a friend of mine). Mice were wired and had little balls inside them that you could remove in order to sabotage them for the next user. Touch screens might have existed on some newer PDA models, and on some gimmicky machines in the inner city, but no one believed that they were going to replace the workstation PC.

I chose my technological milestones based on my experiences in this environment, and on television. Actually, since most of our television consisted of the same shows that played in the United States, only a few months behind their stateside premiere, the shows tended to be more up to date with the actual state of technology, and depictions of the near future which seemed obvious to an American audience seemed terribly optimistic and even outlandish to me at the time. So, in retrospect, it is not surprising that after I moved back to the US, I saw nearly all of my milestones commercially available within half a decade.

Tablet Computers
The idea of a single-surface interface for a computer dates back in the popular consciousness almost as far as futuristic depictions of technology themselves. It was an obvious technological niche that, despite numerous attempts, some semi-successful, was never truly cracked until the iPad. True, plenty of tablet computers existed before the iPad. But these were either clunky beyond use, incredibly fragile to the point of being unusable in practical circumstances, or horrifically expensive.

None of them were practical for, say, completing homework for school on, which at seven years old was kind of my litmus test for whether something was useful. I imagined that if I were lucky, I might get to go tablet shopping when it was time for me to enroll my own children. I could not imagine that affordable tablet computers would be widely available in time for me to use them for school myself. I still get a small joy every time I get to pull out my tablet in a productive niche.

Video Calling
Again, this was not a bolt from the blue. Orwell wrote about his telescreens, which amounted to two-way television, in the 1940s. By the 70s, NORAD had developed a fiber-optic-based system whereby commanders could conduct video conferences during a crisis. By the time I was growing up, expensive and clunky video teleconferences were possible. But they had to be arranged and planned, and often required special equipment. Even once webcams started to appear, lessening the equipment burden, you were still often better off calling someone.

Skype and FaceTime changed that, spurred on largely by the appearance of smartphones, and later tablets, with front-facing cameras designed for this exact purpose. Suddenly, a video call was as easy as a phone call; in some cases easier, because video calls are delivered over the Internet rather than requiring a phone line and number (something which I did not foresee).

Wearable Technology (in particular smartwatches)
This was the one that I was most skeptical of, as I got it mostly from The Jetsons, a show which isn't exactly renowned for realism or accuracy. An argument can be made that this threshold hasn't been fully crossed yet, since smartwatches are still niche products that haven't caught on to the same extent as either of the previous items, and insofar as they can be used for communication as in The Jetsons, they rely on a smartphone or other device as a relay. This is a solid point, to which I have two counterarguments.

First, these are self-centered milestones. The test is not whether an average Joe can afford and use the technology, but whether it has an impact on my life. And indeed, my smartwatch, which was affordable enough and functional enough for me to use in an everyday role, does have a noticeable positive impact. Second, while smartwatches may not be as ubiquitous as once portrayed, they do exist, and are commonplace enough to be largely unremarkable. The technology exists and is widely available, whether or not consumers choose to use it.

These were my three main pillars of the future. Other things which I marked down include such milestones as:

Commercial Space Travel
Sure, SpaceX and its ilk aren't exactly the same as having shuttles to the ISS departing regularly from every major airport, with connecting service to the moon. You can't have a romantic dinner rendezvous in orbit, gazing at the unclouded stars on one side and the fragile planet Earth on the other. But we're remarkably close. Private-sector delivery to orbit is now cheaper and more common than public-sector delivery (admittedly this has more to do with government austerity than an unexpected boom in the aerospace sector).

Large-Scale Remotely Controlled or Autonomous Vehicles
This one came from Kim Possible, and a particular episode in which our intrepid heroes got to their remote destination in a borrowed military helicopter flown remotely from a home computer. Today, we have remotely piloted military drones, and early self-driving vehicles. This one hasn't been fully met yet, since I've never ridden in a self-driving vehicle myself, but it is on the horizon, and I eagerly await it.

Cyborgs
I did guess that we'd have technologically altered humans, both for medical purposes, and as part of the road to the enhanced super-humans that rule in movies and television. I never guessed at seven that, in less than a decade, I would be one of them, relying on networked machines and computer chips to keep my biological self functioning, plugging into the wall to charge my batteries when they run low, studiously avoiding magnets, EMPs, and water unless I have planned ahead and am wearing the correct configuration and armor.

This last one highlights an important factor. All of these technologies were, or at least, seemed, revolutionary. And yet today they are mundane. My tablet today is only remarkable to me because I once pegged it as a keystone of the future that I hoped would see the eradication of my then-present woes. This turned out to be overly optimistic, for two reasons.

First, it assumed that I would be happy as soon as the things that bothered me then no longer did, which is a fundamental misunderstanding of human nature. Humans do not remain happy the same way that an object in motion remains in motion until acted upon. Or perhaps it is that, as creatures of constant change and recontextualization, we are always undergoing so much change that remaining happy without constant effort is exceedingly rare. Humans always find more problems that need to be solved. On balance, this is a good thing, as it drives innovation and advancement. But it makes living life as a human rather, well, wanting.

Which lays the groundwork nicely for the second reason: novelty is necessarily fleeting. The advanced technology that today marks the boundary of magic will tomorrow be a mere gimmick, and after that, a mere fact of life. Computers hundreds of millions of times more powerful than those used to wage World War II and send men to the moon are so ubiquitous that they are considered a basic necessity of modern life, like clothes, or literacy; both of which have millennia of incremental refinement and scientific striving behind them in their own right.

My picture of the glorious shining future assumed that the things which seemed amazing at the time would continue to amaze once they had become commonplace. This isn't a wholly unreasonable extrapolation from the available data, even if it is childishly optimistic. Yet it is self-contradictory. The only way that such technologies could be harnessed to their full capacity would be to have them become so widely available and commonplace that it would be conceivable for product developers to integrate them into every possible facet of life. This both requires and establishes a certain level of mundanity about the technology that will eventually break the spell of novelty.

In this light, the mundanity of the technological breakthroughs that define my present life, relative to the imagined future of my past self, is not a bad thing. Disappointing, yes; and certainly it is a sobering reflection on the ungrateful character of human nature. But this very mundanity that breaks our predictions of the future (or at least, our optimistic predictions) is an integral part of the process of progress. Not only does this mundanity constantly drive us to reach for ever greater heights by making us utterly irreverent of those we have already achieved, but it allows us to keep evolving our current technologies to new applications.

Take, for example, wireless internet. I remember a time, or at least a place, when wireless internet did not exist for practical purposes. "Wi-Fi" as a term hadn't caught on yet; in fact, I remember the publicity campaign that was undertaken to educate our technologically backwards selves about what the term meant, about how it wasn't dangerous, and about how it would make all of our lives better, as we could connect to everything. Of course, at that time I didn't know anyone outside of my father's office who owned a device capable of connecting to Wi-Fi. But that was beside the point. It was the new thing. It was a shiny, exciting novelty.

And then, for a while, it was a gimmick. Newer computers began to advertise their Wi-Fi antennae, boasting that the connection was as good as being wired by cable. Hotels and other establishments began to advertise Wi-Fi connectivity. Phones began to connect to Wi-Fi networks, which allowed them to truly connect to the internet even without a data plan.

Soon, Wi-Fi became not just a gimmick, but a standard. First computers, then phones, that lacked it began to become obsolete. Customers began to expect Wi-Fi as a standard accommodation wherever they went, for free even. Employers, teachers, and organizations began to assume that the people they were dealing with would have Wi-Fi, and that therefore everyone in the house would have internet access. In ten years, the prevailing attitude around me went from "I wouldn't feel safe having my kid playing in a building with that new Wi-Fi stuff" to "I need to make sure my kid has Wi-Fi so they can do their schoolwork". Like television, telephones, and electricity, Wi-Fi became just another thing that needed to be had in a modern home. A mundanity.

Now, that very mundanity is driving a second wave of revolution. The "Internet of Things", as it is being called, is using the Wi-Fi networks already in place in every modern home to add more niche devices and appliances. We are told to expect that soon every major device in our house will be connected to our personal network, controllable either from our mobile devices, or even by voice, and soon gesture, if not through the devices themselves, then through artificially intelligent home assistants (Amazon Echo, Google Home, and the like).

It is important to realize that this second revolution could not take place while Wi-Fi was still a novelty. No one who wouldn’t otherwise buy into Wi-Fi at the beginning would have bought it because it could also control the sprinklers, or the washing machine, or what have you. Wi-Fi had to become established as a mundane building block in order to be used as the cornerstone of this latest innovation.

Research and development may be focused on the shiny and novel, but technological progress on a species-wide scale depends just as much on this mundanity. Breakthroughs have to be not only helpful and exciting, but useful in everyday life, and cheap enough for everyday consumers. It is easy to get swept up in the exuberance of what is new, but the revolutionary changes happen when those new things are allowed to become mundane.

On Horror Films

Recently, I was confronted with a poll regarding my favorite horror film. This was only slightly awkward, as, of the films listed as options, I had seen… none.

Broadly speaking, I do not see fit to use my personal time to make myself experience negative emotions. Also, since the majority of horror films tend to focus on narrow, contrived circumstances, and to be driven by a supernatural, usually vaguely biblical demon, I find it difficult to suspend disbelief and buy into the premise. To me, the far better horror experiences have been disaster films, in particular ones like Threads or By Dawn's Early Light, and certain alternate history films, in particular the HBO film Fatherland, which did more to get across the real horror of the Holocaust and genocide to thirteen-year-old me than six months of social studies lessons.

To wit, the only bona fide horror film I've seen was something about Satan coming to haunt elevator-goers for their sins. Honestly, I thought it was exceedingly mediocre at best. However, I saw this film at a birthday party for a friend of mine, the confidant of a previous crush. I had come to know this girl after she transferred to our public middle school from the local Catholic school. We saw this film at her birthday party, which was, in the manner of things, perceived as the very height of society, in the presence of an overwhelmingly female audience, most of whom my friend had known from St. Mary's. Apparently to them the film was excellent, as many professed to be quite scared, and it remained the subject of conversation for some months afterward.

I have come to develop three alternative hypotheses for why everyone but myself seemed to enjoy this distinctly mediocre film. The first is that I am simply not a movie person and was oblivious to the apparent artistic merit of this film. This would fit existing data, as I have similarly ambiguous feelings towards many types of media my friends generally seem to laud. This is the simplest explanation, and thus the null hypothesis which I have broadly accepted for the past half-decade or so.

The second possible explanation is that, since the majority of the audience except for myself was Catholic, attended Catholic church, and had gone to the Catholic primary school in our neighborhood, and because the film made several references to Catholic doctrine and literature, to the point that several times my friend had to lean over and whisper the names and significance of certain prayers or incantations, the film carried extra weight for those besides myself. Perhaps I lacked the necessary background context to understand what the creators were trying to reach for. Perhaps my relatively secular and avowedly skeptical upbringing had desensitized me to this specific subset of supernatural horror, while the far more mundane terrors of war, genocide, and plague fill much the same role in my psyche.

The third alternative was suggested to me several years after the fact by a male compatriot who was not in attendance but was familiar with all of the attendees, and was subsequently corroborated by testimony from both male and female attendees. The third possibility is that my artistic assessment at the time was not only entirely on point, but was the silent majority opinion, yet that this opinion was suppressed, consciously or unconsciously, for social reasons. Perhaps, it has been posited to me, the appearance of being scared was for my own benefit? Going deeper, perhaps some or all of the motivation to see a horror film at a party of both sexes was not entirely platonic?

It is worth distinguishing, at this point, the relative numbers and attitudes of the various sexes. At this party, there were a total of about twenty teenagers. Of this number, there were three or four boys (my memory fails me as to exact figures), including myself. I was on the guest list from the beginning as a matter of course; I had been one of the birthday girl’s closest friends since she arrived in public school, and perhaps more importantly, her parents had met and emphatically approved of me. In fact I will go so far as to suggest that the main reason this girl’s staunchly traditionalist, conservative parents permitted their rebellious teenage daughter to invite boys over to a birthday party was because they trusted me, and believed my presence would be a moderating influence.

Also among the males in attendance were the brother of one of the popular socialite attendees, whose love of soap operas and celebrity gossip, and whose general stylistic flamboyance, had convinced everyone concerned that he was not exactly straight; my closest friend, who was as passive and agreeable a teenager as you will ever have the pleasure to know; and a young man whose politics I staunchly disagreed with and who would later go on to have an eighteen-month on-and-off relationship with the birthday girl, though he did not know it at the time.

Although I noticed this numerical gender discrepancy effectively immediately, at no point did it occur to me that, were I so motivated, I could probably have leveraged these odds into some manner of romantic affair. This, despite what could probably be reasonably interpreted as numerous hints to the effect of “Oh look how big the house is. Wouldn’t it be so easy for two people to get lost in one of these several secluded bedrooms?”

Although I credit this obliviousness largely to the immense respect I maintained for the host’s parents and the sanctity of their home, I must acknowledge a certain level of personal ignorance owing mainly to a lack of similar socialization, and also to childhood brain damage. This acute awareness of my own past, and in all likelihood, present, obliviousness to social subtleties is part of why I am so readily willing to accept that I might have easily missed whatever aspect of this film made it so worthwhile.

In any case, as the hypothesis goes, this particular film was in fact mediocre, just as I believed at the time. However, unlike myself, with my single-minded judgement based solely on the artistic merits, or lack thereof, of the film, it is possible that my female comrades, while agreeing in the abstract with my assessment, opted instead to be somewhat more holistic in the presentation of their opinions. Or to put it another way, they opted to be socially opportunistic in how they signaled their emotional state. As it was described to me, my reaction would then, at least in theory, be to attempt to comfort and reassure them. I would assume the stereotypical role of male defender, with all the implications therewith, which would somehow transmogrify into a similarly structured relationship.

Despite the emphatic insistence of most involved parties, and with no conclusive confession, I remain particularly skeptical of this hypothesis, though admittedly it does correlate with existing psychological and sociological research on terror-induced pair-bonding. I doubt I shall ever truly understand the horror genre. It would be easy to state categorically that there is no merit in trying to induce negative emotions without cause, and that those who wish to use such experiences as a cover for other overtures ought simply to get over themselves, but given that, as things go, this is an apparently victimless crime, and seems to bring a great deal of joy to some people, it is more likely that the issue lies in myself rather than in the rest of the world.

To a person who seeks to understand the truth in its entirety, the notion that there are some things I simply do not have the capacity to understand is frustrating. Knowing that there are things which other people can comprehend, yet I cannot, is extremely frustrating. More than frustrating; it is horrifying. To know that there is an entire world of subtext and communication that is lost to me; that my brain is damaged in such a way that I am oblivious to things that are supposed to be obvious, is disconcerting to the point of terrifying.

I will probably never know the answer to these questions, as at this point I am probably the only one who yet bothers to dwell on that one evening many moons ago. It will remain in my memory an unsolved mystery, and a reminder that my perception is faulty in ways imperceptible to me, but obvious to others. It might even be accurate to say that I will remain haunted by this episode.

Happy Halloween.

My Superpowers

So, I don’t know if I mentioned this, but I have a minor superpower. Not the cyborg stuff. That exists, but isn’t really a power so much as a bunch of gadgets I wear to keep me alive. Nor any of the intellectual or creative abilities I am alleged to possess, for those are both ordinary in the scope of things and also subjective. Rather, I refer to my slight clairvoyance. I can sense changes in the weather. I have had this ability referred to as “my personal barometer”, but in truth it often functions more like a “personal air-raid siren”; specifically, one that can’t be shut up.

Near as I can tell, this is related to pressure changes, and happens because something, somewhere inside me, is wired wrong. I have been told that my sinuses are out of order in a way that makes me vulnerable to comparatively minor changes in pressure, and strong circumstantial evidence suggests damage somewhere in my nervous system, caused by childhood encephalitis, which creates the microscopic, undetectable vulnerability that manifests in my seizures and migraines, and could plausibly be exploited by other factors.

This has the effect of allowing me to feel major weather changes somewhere between six hours and a week before they arrive where I am, depending on the size and speed of the shift. It starts as a mild bout of light-headedness, the same as the rush of blood flowing away from my head when standing up after not moving for some time. If it is a relatively minor disturbance, this may be all that I feel.

It then grows into a more general feeling of flu-like malaise; the same feeling that normally tells you that you are sick even when there are no active symptoms. At this point, my cognitive function begins to seriously degrade. I start to stutter and stumble, and struggle for words that are on the tip of my tongue. I forget things and lose track of time. I will struggle both to get to sleep and to wake up.

Depending on the severity and duration, these symptoms may be scarcely visible, or they may have me appearing on death’s door. It is difficult to tell these symptoms apart from those of allergies, migraines, or an infection, especially once I begin to experience chills and aches. This is compounded by my immune system’s proclivity to give false negatives for pathology, due to my immunodeficiency, and false positives, due to my autoimmune responses. Fortunately, the end result is mostly the same: I am advised to stay home, rest, make sure I eat and drink plenty, redouble our protective quarantine procedures, etcetera.

At their worst, these symptoms also induce a cluster migraine, which confines me to bed and limits my ability to process and respond to stimuli to a level only slightly better than comatose. At this point, my symptoms are a storm unto themselves, and, short of a hurricane, I’m probably not going to be much concerned with whatever is happening outside the confines of my room, as I’ve already effectively sealed myself off from the outside world. I will remain so confined for however long it takes until my symptoms pass. This may be a few hours, or a few weeks. During these days, my capacity for words is limited to a couple hundred a day, only forty or so of which are unique.

If I am lucky, I will still have the mental faculties to passively watch videos, listen to music with words, and occasionally write a handful of sentences. I generally cannot read long tracts, as reading requires several skills simultaneously – visual focus, language processing, inner narration, and imagination of the plot – a combination usually beyond my limits. I can sometimes get by with audiobooks, provided the narration is slow enough and the plot not overly complex. If I am not able to deal with words at all, then I am limited to passing my waking hours listening primarily to classical music. Fortunately, I also tend to sleep a great deal more in this state.

Once I have entered this state, my superpower (or perhaps it is an unsung quirk of human perception) means that I don’t really consciously recognize time passing in the normal way. Without discrete events, sensations, or thoughts to mark time, the days all kind of meld together. With my shades closed, my light permanently off, and my sleep cycle shattered, days and nights lose their meaning. Every moment is the same as every other moment.

Thus, if it takes two weeks by the calendar until I am well enough to return to normal function, I may wake up with only two or three days’ worth of discrete memories. In retrospect, then, the time that took other people two weeks to pass took me only three days. It therefore emerges that, in addition to my limited form of clairvoyance, I also possess a limited form of time travel.

Admittedly, I am not great at controlling these powers. I have virtually no control over them, in fact, beyond some limited ability to treat the worst of the symptoms as they come up. So perhaps they are not so much my powers as powers that affect me. They do not control me; I still exist, albeit diminished, independent of and despite them. They do affect others, but only through how they affect me.

All of this is to say that the storms presently approaching the northeastern United States are having a rather large impact on my life. If I were of a more superstitious bent, I might suggest that this is meant as a way to sabotage my plans to get organized and generally rain on my parade (cue canned laughter).

There isn’t a great deal that I can do to work around this, any more than a blind man can work around a print book. The best I can hope for is that this is a “two steps forward, one step back” situation, which will also depend on how quickly this storm clears up, and on me being able to hit the ground running afterwards.

Television Bubbles

So there’s a new show on Disney Channel that allegedly follows the cast of That’s So Raven some decade after the show itself ended. This isn’t news per se, considering the show launched in July.

This is news to me, however. For some reason, the existence of this show, its premiere, any hype and marketing that may have surrounded it, and generally anything about it managed to come and go completely unnoticed by me. I learned about it by accident; I happened to recognize the characters on a screen in the back of a burrito restaurant. At first I thought I was watching a very old rerun. But I was informed by other members of my party that, no, that was part of the new show. Didn’t I know about it?

I have been wracking my brain trying to recall whether I ever heard anything about this. The closest I can come up with is a very vague recollection of someone making an offhanded remark in passing that such a concept was under consideration. This would probably have been in February or March. Thing is, I don’t actually remember this as a conversation. It’s just as possible that, in trying to convince myself I must have heard of this at some point, part of my brain has fabricated a vague sense that I did.

In retrospect, if I were going to miss something like an entire television series, the chronology makes sense. May through early July, I was buried in schoolwork. In late April, I began Project Crimson, which by my count eliminated some half of all the advertising I see. By July, my whirlwind travel schedule had begun. I stayed more or less up to date on the news, because there were plenty of television screens blaring cable news headlines wherever I went, and because, when it is likely that I will meet new people, I make an effort to brush up on current events so as to have all the relevant discussion points of the day. But this really only applies to news headlines.

So it is possible to imagine that this series premiere happened somewhere further down in my news feed, or in a news podcast episode that got downloaded to my phone but never listened to. I find it slightly odd that I was at, of all places, Disney World, and had no exposure whatsoever to the latest Disney show. But then again, their parks tend to focus on the more classic aspects of Disney culture. And who knows; perhaps they did have posters and adverts up, or were putting them up while my back was turned, or whatever. Clearly, it’s possible, because it happened.

Here are my two big problems with this whole fiasco. First, this is something I would have liked to know. I would understand if some story about, say, sports or celebrity gossip slipped under my radar in such a way. I don’t watch a whole lot of TV in general, and I don’t really watch anything related to sports or celebrity news. My online news feeds respond to what I engage with, giving me more stories I am likely to digest, and quietly axing pieces that my eyes would otherwise just glide over. Though this makes me uncomfortable, and I have criticized it in the past, I accept it as a price of having my news conveniently aggregated.
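(For the technically curious: here is a toy sketch, in Python, of how I imagine this sort of engagement-based filtering works under the hood. The names, the topic tags, and the scoring rule are entirely my own invention for illustration, not any actual platform’s algorithm.)

```python
# A toy sketch of engagement-based feed filtering, as I imagine it.
# Everything here is hypothetical; no real platform works exactly this way.

from collections import Counter

def topic_counts(engaged_stories):
    """Tally the topics of stories I have actually clicked on."""
    return Counter(topic for story in engaged_stories for topic in story["topics"])

def score(story, history):
    """Score a candidate story by its overlap with my engagement history."""
    return sum(history[topic] for topic in story["topics"])

def build_feed(candidates, engaged_stories, cutoff=1):
    """Keep stories that resemble what I already read; quietly axe the rest."""
    history = topic_counts(engaged_stories)
    ranked = sorted(candidates, key=lambda s: score(s, history), reverse=True)
    return [s for s in ranked if score(s, history) >= cutoff]

# With a history full of politics and science, a nostalgic TV premiere
# scores zero overlap and silently vanishes: exactly the failure mode above.
engaged = [{"topics": ["politics"]}, {"topics": ["science", "health"]}]
candidates = [
    {"title": "White House scandal", "topics": ["politics"]},
    {"title": "New That's So Raven sequel", "topics": ["tv", "nostalgia"]},
]
print([s["title"] for s in build_feed(candidates, engaged)])
# -> ['White House scandal']
```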

Except that here, I honestly would have liked to know that there was a new That’s So Raven series in the pipeline. I would wager that I’m actually part of its target audience, which is part of why I’m so surprised that I wasn’t aware of it. That’s So Raven ran, at least where I lived in Australia, at roughly the point when I was old enough to follow and appreciate the slightly more complicated “all ages” programming. And while I wouldn’t rank it as my favorite, its stories did stick with me. Raven’s struggles against racism, sexism, and discrimination introduced me to these concepts before I had been diagnosed with all of my medical issues and experienced discrimination firsthand. Raven’s father’s quest to build his own small business, and Cory’s dogged (some might say relentless) entrepreneurial spirit, inspired me.

Moreover, the spinoff show Cory in the House, while often cringeworthy at the best of times, even more so than its predecessor, was the first exposure I had to, if not the structure and dynamics, then at least the imagery and phraseology of US politics. This, at a time when I was forbidden to watch cable news (all that was on was the war on terror) and many of my schoolmates and their parents would routinely denounce the United States and its President, as the Australian components of coalition forces in the Middle East began to suffer losses. Naturally, as the token American, I was expected to answer for all of my president’s crimes. Having a TV show that gave me a modicum of a clue as to what people were talking about, and that also taught that America and American ideals, while they might not be perfect, were still at least good in an idealistic sense, was immensely comforting.

All of that is to say that I hold some nostalgia for the original series and the stories they told. Now, I have not seen this new show. I don’t know how close it is to the original. But I have to imagine that such nostalgia was a factor in the decision to approve the new series, which would suggest that it is aimed at least partly at my demographic. Given the vast sums of money spent on making sure that targeted demographics are aware of the products they ought to consume, and given that I haven’t been living particularly under a rock, it seems strange that this passed me by.

Furthermore, if a series of unusual events caused me to miss it this time, I am quite sure that I would have picked up on it five years ago. Even three years ago, I would have, within a few weeks of launch, seen some advert or comment and investigated. In all probability, I would have watched the show from day one, or shortly thereafter. However, the person I am now, and my media habits, have diverged so far from the person I was then that we no longer have this in common. This rattles me. Even though I understand and accept that selves are not so much constant as changing so slowly that one does not notice most days, it is still a shock.

Which brings me nicely to my second problem in all of this. This new series in many respects represents a best-case scenario for something that is likely to cross my path. Yes, there are confounding variables at play: I was traveling, I have cut down on how much advertising I tolerate, and I had been mostly skimming the headlines. But these aren’t once-in-a-blue-moon problems. There was a massive, concerted publicity effort, on behalf of one of the largest media and marketing machines on the planet, to promote a story that I would have embraced if it ever came across my radar, while I was at one of their theme parks, and while I was making a conscious effort to pay attention to headlines. And yet I still missed it.

This raises an important, terrifying question: what else have I missed? The fact that I missed this one event, while mildly disappointing, will likely not materially impact my life in the foreseeable future. The fact that I could have missed it in the first place, on the other hand, shows that there is a very large blind spot in my awareness of current happenings. It is at least large enough to fly an entire TV series through, and probably quite a bit larger.

I am vaguely aware, even as a teenager, that I do not know all things. But I do take some pride in being at least somewhat well informed, and ready to learn. I like to believe that I have some grasp on the big picture, and that I have at least some concept of the things that I am not paying attention to; to repeat an earlier example, sports and celebrity news. I can accept that there are plenty of facts and factoids that I do not know, since I am not, despite protestations, a walking encyclopedia, and I recognize that, in our new age of interconnectedness and fractally nested cultural rabbit holes, there are plenty of niche interests with which I am not familiar. But this was in my wheelhouse, or so I would have thought.

It is still possible, and I do still hope, that this is a fluke. But what if it isn’t? What if this is simply one more product of how I currently organize my life, and of how the internet and my means of connectivity fit into that? Suppose this latest fiasco is just one more item that I have missed because of the particular filtering strategies I use to avoid being overloaded. If this best-case scenario didn’t get my attention, what are the odds that something without all of these natural advantages will get to me?

How likely is it that I am going to hear about the obscure piece of legislation being voted on today, or the local budget referendum, both of which affect me, but not directly or immediately enough that I’m liable to see people marching in the streets or calling me up personally? How often will I hear about the problems facing my old friends in Australia, now that I am living on a different continent, in a different time zone, and with a totally different political landscape to contend with?

For all of my fretting, I can’t conceive of a realistic path out of this. The internet is too large and noisy a place to cover all, or even a substantial number of, the bases. More content is uploaded every second than a human could digest in a lifetime. Getting news online requires either committing to one or two sources, or trusting an aggregation service, whether that be an algorithm like those of Facebook, Google, Yahoo, and the like, or a human somewhere along the line being paid to curate stories.

Going analog, as I have heard proposed in a few different places, and sticking to a handful of old-fashioned print newspapers with paid subscriptions and a set number of pages to contend with, is either too broad, and hence subject to the same problems as the internet at large, or too specific and cut down. TV news tends to fall somewhere between newspapers and social media. And crucially, none of these old-fashioned services are good at giving me the news that I require. I want to hear about the scandal in the White House, and the one in my local Town Hall, and about the new series based on the one that aired when I was young, and what the World Health Organization says about the outbreak in Hong Kong, all without hearing about sports, or celebrity gossip, or that scandal in Belgrade that I don’t know enough about to comment on.

Figuring out how to reconcile this discrepancy in a way that satisfies both consumers and society’s need for a well-informed populace may well be one of the key challenges of this time in history, especially for my generation. For my part, the best I can figure is that I’m going to have to try to be a little more cognizant of things that might be happening outside of my bubble. This isn’t really a solution, any more than ‘being aware of other drivers’ is a solution for car accidents. Media bubbles are the price of casual participation in current events, and from where I stand today, non-participation is not an option.