Why Vote?

Yes, this is a theme. Enough of my friends and acquaintances are on the fence on the issue of voting that I have been stirred into a patriotic fervor. Like Captain America, I have, despite my adversities, arisen to defend democracy in its hour of need. Or at least, I have decided to write about voting until my friends are motivated to get out and vote.


Why vote? In today’s America, why bother to go out and vote? Elections these days are won and lost not at the ballot, but on maps and budget sheets, with faraway oligarchs drawing boundary lines that defy all logic to ensure their own job security, and shadowy mega corporations spending more on media campaigns designed to confuse and disorient you to their advantage than the GDP of several small nations. The mathematics of first-past-the-post voting mean that our elections are, and for the foreseeable future will remain, an exercise in picking the lesser of two evils.

Statistically, you live not only in a state that is safely for one party or another, but in an electoral district that has already been gerrymandered. Depending on where you live, there may be laws designed to target certain demographics, making it harder or easier for certain groups to get to the polls. The effort required to cast a ballot varies from place to place; it might be as easy as dropping by a polling place at your leisure, or it might involve waiting for hours in line, being harassed by officials and election monitors, all in order to fill out a piece of paper that is unlikely to make a major difference.

So why bother? Why not stay home, and take some well-deserved time off?

It’s an older poster, but it checks out

Obviously, this logic wouldn’t work if everyone applied it. But that’s not a compelling reason why you specifically ought to go to the effort of voting. Because it is an effort, and much as I might take it for granted that the effort of participating in and safeguarding the future of democracy is worthwhile, not everyone does.

Well, I’ll start by attacking the argument itself. Because yes, massive efforts have been made, and are being made, by those who have power and wish to keep it, and by those who seek power and are willing to gamble on it, to sway the odds in their favor. But consider these efforts for a moment. Would corporations, which are, if nothing else, ruthlessly efficient and stingy, spend such amounts if they really thought victory was assured? Would politicians expend so much effort and political capital campaigning, mudslinging, and yes, cheating through gerrymandering, registration deadlines, and ID laws, if they believed it wasn’t absolutely necessary?

The funny thing about voting trends is, the richer a person is, the more likely they are to vote. Surely, if elections were bought and paid for, the reverse would be true? Instead, the consistent trend is that those who allegedly need to vote the least do so the most.

The game may not be fair, or right, but it is not preordained. It may be biased, but it is not rigged. If it were rigged, the powers that be wouldn’t be making the effort. They are making an effort, on the assumption that apathy and antipathy will overcome your will to exercise and defend your right to vote. Like any right, your right to vote is only good when exercised.

The American Promise

One of my more controversial opinions about the founding of the United States regards the circumstances of its foundation. See, having read the historical literature, I’m not convinced the colonists were right to revolt when they did. The troops stationed in the colonies were there to keep the peace while the colonies were reconstructed following the damages of the Seven Years’ War, while the Stamp Act actually lowered taxes from what they had been. The colonists were getting more services for lower taxes right after a war had been fought on their behalf.

The complaints about taxes mostly stemmed from enforcement; in order to abide by the terms of the treaties that ended the war, the British government had begun a crackdown on smuggling, which had previously grown to such a state that it was almost impossible for legitimate businesses to compete with the colonial cartels. This epidemic, and the ineptitude or collusion of local enforcement, was the reason for extraordinary enforcement measures such as the oft-cited writs of assistance. Meanwhile, complaints about land claims in native territory (that the crown was being oppressive by restricting settlers from encroaching on native land) are hard to justify in historical retrospect.

So the idea that the American War of Independence was justified from the beginning by the actions of the British administration is nonsense. The British government was in fact one of the most progressive and representative in history. The only possible justification for independence lay in a total rejection of ordained authority, a prospect so radical that it made the United States comparable to the Soviet Union in its relation to its contemporaries: the idea that men hold inalienable rights, that defending these rights is the sole mandate of government, and that governments derive their powers from the consent of the governed.

And this is what really made the United States unique in history. Because republics, even systems that might be called democratic, had existed since antiquity. But these had always been a means to an end. Allowing the governed, or at least some portion thereof, to have a say in matters normally confined to kings and emperors was only incidental to the task of administration. This was already the case in Great Britain, and in several Italian states. But the idea that the power of government wasn’t an innate thing, but something that had to be willingly given, was revolutionary.

The problem, aside from the considerable logistical feat of organizing a heretofore unprecedented system of governance, is that this justification, if not necessarily retrospective in itself, is at least contingent on those promises being achieved. It is easy, not least from a historical perspective, to promise revolutionary liberation, and then not follow up. Indeed, depending on whether one thinks the Soviet model ever really came close to achieving the promises of its revolution (which really depends on how one reads Marx, and how much one is willing to take Soviet talking points at their word), most of the revolutions of the modern period have failed to live up to their promises.

Washington could have declared himself King of America, either as a hereditary appointment, as a monarch elected by the states, akin to the Holy Roman Emperor, or even as a non-hereditary dynasty, like the Soviets, or the strongmen of the developing world. Most European states presumably expected this, or they expected the United States to collapse into anarchy. Instead, Washington set a precedent in line with the rhetoric of the USA’s foundation, with the intention of living up to the promises laid out in independence.

But while Washington certainly helped legitimize the United States and its promise, he didn’t do so singlehandedly. After all, he couldn’t have. The promise of the United States is not that those who happened to fight, or be present at the constitutional convention, be granted certain rights. No, the promise is that all are granted inalienable rights by a power higher than any government, and that everyone has the right to participate in the process of government. Notice the present tense. Because this is not an idea that expires, or will eventually come to be, but how things ought to be now.

The measure of this promise, the independent variable in the American experiment, is not the wars that were won, nor the words that were written on paper long ago to lay the foundation, nor even the progress that has been made since, but rather the state of affairs today. The success of America is not what was written into law yesterday, but what percentage of us are participating today.

The notion that, as the world’s superpower, America has already succeeded, and we need only sit back and reap the dividends of the investments made by our forebears is not only false, but dangerously hubristic and misleading. The failure of America does not require foreign armies on our streets, or a bottomed out economy; only complacency on our part. If we forget what our forefathers fought for, if we choose comfort over our values, indeed, if we decide voting isn’t worth the hassle, then we lose. And as a proud American, I believe both we, and the world, would be worse off for it.


Creative Commons License
In the interest of encouraging discussion about voting, this post is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

A Witch’s Parable

Addendum: Oh good grief. This was supposed to go up at the beginning of the week, but something went awry. Alas! Well, it’s up now.


Suppose we live in colonial times, in a town on an archipelago. The islands are individually small and isolated, but their position relative to the prevailing winds and ocean currents means that different small islands can grow a wide variety of crops that are normally only obtainable by intercontinental trade. The presence of these crops, and good, predictable winds and currents, has made those islands that don’t grow food into world-renowned trade hubs, and attracted overseas investment.

With access to capital and a wide variety of goods, the archipelago has boomed. Artisans, taking advantage of access to exotic painting supplies, have taken to the islands, and scientists of all stripes have flocked to the archipelago, both to study the exotic flora and fauna, and to set up workshops and universities in this rising world capital. As a result of this local renaissance, denizens of the islands enjoy a quality of life hitherto undreamt of, and matched only in the palaces of Europe.

The archipelago is officially designated as a free port, open to ships from across the globe, but most of daily life on the islands is managed by the Honorable South India Trading Company, who collect taxes and manage infrastructure. Nobody likes the HSITC, whose governor is the jealous brother of the king, and is constantly appropriating funds meant for infrastructure investment to spend on court intrigue.

Still, the HSITC is entrenched in the islands, and few are willing to risk jeopardizing what they’ve accomplished by attempting insurrection. The cramped, aging vessels employed by the HSITC as ferries between the islands pale in comparison to the new, foreign ships that dock at the harbors, and the taxes it levies seem to grow larger each year, but as long as the ferry system continues to function, there is little more than idle complaint.

In this town, a local woman, who let’s say is your neighbor, is accused of witchcraft. After the debacle at Salem, the local magistrates are unwilling to prosecute her without absolute proof, which obviously fails to materialize. Nevertheless, vicious rumors about men being transmogrified into newts, and satanic rituals conducted at night, spread. Local schoolchildren and off-duty laborers congregate around your house, hoping to get a glimpse of the hideous wretch that legend tells dwells next door.
For your part, you carry on with your daily business as best you can, until one day, while you are waiting at the docks to board a ferry to the apothecary, a spat erupts between the woman in question and the dock guard, who insists that he shan’t allow her to board, lest her witchery cause them to become shipwrecked. The woman is denied boarding, and since the HSITC run all the ferries, this now means that she’s effectively cut off from the rest of the world, not by any conviction, but because there were no adequate safeguards against the whims of an unaccountable monopoly.
As you’ve probably guessed, this is a parable about the dangers posed by the removal of net neutrality regulations. The internet these days is more than content. We have banks, schools, even healthcare infrastructure that exist solely online. In my own case, my life support systems rely on internet connectivity, and leverage software and platforms that are distributed through open source code sharing. These projects are not possible without a free and open internet.
Others with more resources than I have already thoroughly debunked the claims made by ISPs against net neutrality. The overwhelming economic consensus is that the regulations on the table will only increase economic growth, and will have no impact on ISP investment. The Senate has already passed a bill to restore the preexisting regulations that were rescinded under dubious circumstances, and a House vote is expected soon.
I would ask that you contact your elected representatives, but this issue requires more than that. Who has access to the internet, and under what terms, may well be the defining question of this generation, and regardless of how the vote in the House goes, this issue and variants of it will continue to crop up. I therefore ask instead that you become an active participant in the discussion, wherever it takes us. Get informed, stay informed, and use your information to persuade others.
I truly believe that the internet, and its related technologies, have the potential to bring about a new renaissance. But this can only happen if all of us are aware and active in striving for the future we seek. This call to arms marks the beginning of a story that in all likelihood will continue for the duration of most of our lifetimes. We must consult with each other, and our elected representatives, and march, and rally, and vote, by all means, vote. Vote for an open internet, for equal access, for progress, and for the future.

My Experiences With Guns

Note: This post talks about guns, and some of my experiences with them and opinions about them, some of which are, let’s say, charged. This post may not be appropriate for everyone. Reader discretion is advised.

I have a few different stories about guns. The first comes from Australia. Most Americans are vaguely aware that Australia adopted fairly tight regulations around guns as a consequence of a mass shooting several years ago. It does indeed have tight restrictions, but it is still quite possible to own guns in Australia. I know this because my mother shot competitively while we lived there. She applied for and was granted a license to own and shoot pistols for sport. She was actually quite good at it.

The process involved plenty of paperwork and questions. It also involved having a new safe installed in our house under close supervision to make sure it was properly bolted to the wall, and couldn’t be accessed improperly. But even as a foreign immigrant and a mere amateur, her permit was granted. Of course, after she got her license, she had to use it often enough to prove that she was in fact shooting for sport. As a child I spent time at pistol clubs and shooting arenas watching my mother compete.

Occasionally we would be subject to police inspections to see that my mother’s pistols were being stored according to regulation. The officers were perfectly courteous about the whole affair, and often gave me and my brother tokens, like coloring pages and trading cards featuring glossy color photographs of police helicopters, and the off-road vehicles they used in the outback.

Not everyone was satisfied with the way things worked. Many of the people we met at the various pistol clubs grumbled about the restrictions, and more broadly, the vilification of their hobby. Several others, mostly schoolmates and friends of schoolmates, thought that the restrictions weren’t enough; that there was no reason for anyone outside of the military to have a gun (our local police, when they carried weapons on ordinary patrol, mostly carried tasers, and even this was widely seen as too intimidating), and certainly no reason to keep one at home.

The balance struck by the law was a compromise. Very few were completely happy, but almost everyone agreed that it was preferable to one extreme or the other. Those who would shoot for sport could still do so, albeit with some safety precautions, and checks to prevent the notion of sport from becoming a loophole. Those who lived in the outback, and were in danger from wildlife, or too far away from settlements to rely on police, were still permitted arms to defend themselves. However, one could not simply decide to purchase a gun on a mere whim.

My second story, which is quite a bit longer, takes place several years later, on an unassuming Friday in December, almost four years after moving back to the United States. Like most days, I was sick, more reeling than recovering from a recurrent sinus infection that had knocked me off my feet for most of the first semester. I had slept through most of the morning, but after a hearty brunch felt well enough to try going into school for the afternoon. My first sign that something might be amiss was a news alert; a national headline flagged for my attention because it was local. Police were responding to an incident at an elementary school in neighboring Newtown. There were no details to be had at that exact moment, so I shuffled out the door towards school.

My second sign that something was wrong was the police cars parked around the school building. I was stopped getting out of the car by the police officer I knew from middle school DARE sessions. He shouted from where he stood behind the squad car, which was positioned between the curb and the school doors, as if to barricade the entrance, and told me that the school was on lockdown.

I hesitated, car door still open, and asked if it was about whatever was going on in Newtown. His face stiffened, and he asked what I knew. I explained the vague news alert. After a moment’s hesitation, he said that there had been an attack, and it was possible that there was a second gunman. Hence the lockdown. So far there had been no reports from our town, but we were close enough that even if the suspect had fled on foot, as was suspected, we were still potentially a target. Classes were still going on inside, but the school buildings were all sealed, and police had been dispatched to secure key sites around town.

I looked back to the car, then at my schoolbag, then at the school. I asked if I should go home, since the school was on lockdown. The officer hesitated for a long moment, looking me over, and then looking at the building. In a low, almost conspiratorial voice, he told me to go ahead in. He knew me, after all. He cracked a halfhearted joke, saying that I wasn’t the suspect they were looking for, and that I should move along. I chuckled politely.

Class was never so quiet and so disorderly at the same time. Any pretense of productive work had disappeared. Despite classes still nominally occurring, the bell schedule had been suspended; either because they wanted to minimize the number of students in the halls in the event that a full lockdown had to be initiated, or because students were already so distracted and distraught that it didn’t particularly matter if they wasted time in their classrooms for period six or period seven.

My teacher kept a slide up on the smart board with all of the key points from the lesson we had been supposed to cover, just in case anyone wanted to distract themselves with schoolwork. In the back of the room, students paced anxiously, awaiting phone calls or messages from friends and loved ones with news. In the corner, a girl I was tangentially friends with wept, trying every few moments to regain her composure, only to lose it anew in a fresh wave of sobbing. Someone she knew had lost a sibling. A few other girls, who were better friends with her than I, sat with her.

In the center of the room, a handful of students had pulled chairs together in a loose circle, and were trying to scrape together all the information they could between themselves, exchanging screenshots and headlines on cell phones and laptops. The idea, I think, was that if we knew what was going on, the news would be easier to take. That, and the idea that doing something, applying this familiar method of coordination and research, gave us back some small modicum of power over this thing being wrought upon us.

The teacher sent us home without any homework, and waived the assignments that would have been due soon. In a moment that reflected why he was one of my favorite teachers, he urged all of us to look after ourselves first, and to take time off or see the guidance office if we felt we needed to. The next day back, the guidance office brought in extra counselors and therapy dogs. Several dark jokes circulated that this level of tragedy was the only thing that could cause the teachers of AP classes to let up on homework.

The mood in the hallways over the next several days was so heavy it was palpable. It seemed that students moved slowly, as though physically wading through grief, staring either at the floor, or at some invisible point a thousand yards off. You would see students at lunch tables weeping silently alone or in groups. I remember in one instance, a girl who was walking down the hallway suddenly halted, and broke down right there. Her books fell out of her hands and her head and shoulders slumped forward as she started crying. One of the extra counselors wove his way through the stopped crowd and silently put a protective arm around her, and walked her to the counseling office.

Several days later, the unthinkable happened as, without any kind of instruction or official sanction, our school dressed up in the colors of our rival, Newtown High School. Even the cheerleaders and football players, those dual bastions of school tribalism, donned the uniforms of their enemies, not as a prank, but in solidarity. It was a bold statement covered in all the papers, and captured on local TV news.

Despite having memories of the period, I find it a bit of a stretch to say that I actually remember the attacks of September 11th. Certainly I took note of the marines stationed at the consulate, and the way they regarded even my infant brother with the kind of paranoid suspicion that is learned from loss. I recall how in the days after, people would recognize our American accents on the street, and stop us to offer condolences, solidarity, and hugs. But I don’t have enough memories from before that to form a meaningful context, at least not from my own experiences. Some bad people had done a bad thing, and people were sad and angry and scared, but I didn’t know enough to feel those things myself except as a reflection of the adults around me.

I imagine that people felt on September 11th the way we felt on the day of Sandy Hook. For that matter, I imagine that is roughly how those who lived through it felt after Pearl Harbor. We had been attacked. Our community had been attacked, savagely and deliberately, without warning, and without any apparent reason other than the unknowable agenda of a probable lunatic. A bad person did a bad thing, and now children and teachers were dead, and our whole community was grieving and looking for answers.

There was a caveat to our shared grief. Not a silver lining; it was an unadulterated tragedy, without qualification. But a footnote. We saw the media attention that this local tragedy was getting. We saw the world grieving with us. For my part, I had old friends half a world away calling me, friends who didn’t know anything about US geography, but who knew I lived in the same general area as the places suddenly being mentioned on the news. We saw the massive reverberations, and we were comforted in the fact that we were not alone.

There was no silver lining. But the caveat was that the same tragedy that had touched us personally had set things in motion on a larger scale. Our world had been shaken up, but things were righting themselves, and in doing so it seemed like there would have to be consequences. The adults seemed to agree that this was a tragedy, and that it could not be allowed to happen again. The outcry seemed to demand change, which we took to mean that those who had lost would not have lost in vain, and that there would be new laws so that we could put this incident behind us, and feel safe again.

We waited for the change that seemed so inevitable that most hardly even bothered advocating for it. It seemed so blatantly obvious that we needed to update our laws to keep guns out of the hands of madmen. Perhaps because we were children, we took it as given that all those adults who had sent their hopes and prayers would realize what was painfully, tearfully obvious to us: that the current balance on gun control had failed miserably, and needed to be renegotiated. As the police and then the media dug into the details of which loopholes and lapses had been exploited to create this tragedy, we assumed, perhaps naively, that our leaders would take note, and close them.

We waited in vain. The promised reforms never came. As the immediate sting faded for those who hadn’t been close enough to see any kind of firsthand, or even, as in my case, secondhand, consequences, people stopped asking questions. And those who did, instead of focusing on questions like why a madman could access unsecured weapons of war, or why such weapons exist in abundance among civilians in the first place, focused on other questions, like why a school isn’t built to withstand a literal siege, and whether the people who are stricken by grief because of this are even real people at all.

Instead of a safer society with fewer possibilities for mass murder, our government helped to fortify our school, replacing the windows and glass doors we passed through each day on the way to our classrooms with bulletproof glass and reinforced steel, under supervision of increased police and armed security. More dark jokes circulated through the student body, comparing our building to a prison, or a Maginot fortress. A handful of brave students and other adults did speak out, gathering signatures and organizing demonstrations, but they faced fierce backlash, and in some instances, came under attack from conspiracy theorists who accused them of orchestrating the whole tragedy.

For many people I knew, who were motivated by grief and a need for closure, this broke them. To have the worst day of their life scrutinized, torn apart, twisted, perverted, and then thrown back in their face with hostility and accusation was simply too much. The toxicity of conspiracy theorists and professional pundits, coupled with the deafening silence of our leaders, broke their resolve. And so the tragedy at Newtown became just another event in a long list of tragedies mentioned occasionally in passing on anniversaries or during political debates. The camera crews left, and life went on, indifferent to those of us still grieving or looking for answers.

Many of the people I knew who were most passionate about seeing change in the immediate aftermath eventually let up, not because their opinions changed, but because they lost hope. New mass shootings, even school shootings, happened, pushing our local tragedy further and further into distant memory. Nothing happened, or at least, nothing on a large enough scale to shift the balance from the current decidedly pro-gun stance. Those of us who waited after Newtown, or after whatever other tragedy touched us personally, as there have been so many, still wait, while those of us who have seen other systems work, possibly even work better, silently lament.

It is perhaps worth reiterating explicitly what has been mentioned previously: any conclusion on gun regulation will be a compromise. This is not merely a realistic view of politics, but a matter of reality. No country, even those cited as having overly draconian laws, has completely outlawed firearms, for essentially the same reasons that no country has completely outlawed painkillers. Every country wants to ensure that sportsmen (and women) can hone their craft, that serious hunters can enjoy their hobby, and that citizens can defend themselves, even if they disagree on the extent to which these activities ought to be regulated.

Every solution is a compromise; a tradeoff. And naturally, the balance which is best suited to one country may not be as effective in another. I do not suppose that the Australian system, which, despite ample criticisms, did mostly work for Australia, could be copied wholesale for the United States, at least not without serious teething issues. Yet I also think it is obvious to all that the current balance is untenable. With so many unsecured weapons in so many untrained hands, there are simply too many points of failure.

Perhaps the solution is to focus not on restricting firearms purchases, but on training and storage. Maybe this is an issue of better and more consistent enforcement of existing laws. There is also certainly a pressing need for improvements in mental health, though the kind of comprehensive system that might conceivably counterbalance the inordinate ease of access to weapons (the kind of system that can identify, intervene, and treat a sick person, possibly before they have any symptoms, probably against their will) would require not only enormous year-to-year funding, but the kind of governmental machinery that is fundamentally inimical to the American zeitgeist (see: American attitudes towards socialized medicine).

Every solution is a tradeoff. Some are better than others, but none are perfect. But one thing is clear: the current solution is unacceptable. Scores of children murdered is not an acceptable tradeoff for being legally permitted to buy firearms at Walmart. If ensuring that students of the future do not have to cower in ad-hoc shelters means eliminating some weapons from a hobbyist’s arsenal, then so be it. If preventing the next soft target terrorist attack requires us to foot the bill for extra police to get out into the communities and enforce the laws before the next crisis, then so be it. And if preventing these tragedies which are unique to our country requires the erection of a unique and unprecedented mental health machinery, which will cost an inordinate amount as it tries to address a gun problem without touching guns, then so be it. But a new solution is needed, and urgently.

The Social Media Embargo

I have previously mentioned that I do not frequently indulge in social media. I thought it might be worthwhile to explore this in a bit more detail.

The Geopolitics of Social Media

Late middle and early high school are a perpetual arms race for popularity and social power. This is a well known and widely accepted thesis, and my experience during adolescence, in addition to my study of the high schools of past ages, and of other countries and cultures, has led me to treat it as a given. Social media hasn’t changed this. It has amplified this effect, however, in the same manner that improved intercontinental rocketry and the invention of nuclear ballistic missile submarines intensified the threat of the Cold War.

To illustrate: In the late 1940s and into the 1950s, before ICBMs were accurate or widely deployed enough to make a credible threat of annihilation, the minimum amount of warning of impending doom, and the maximum amount of damage that could be inflicted, were limited by the size and capability of each side’s bomber fleet. Accordingly, a war could only be waged, and hence could only escalate, as quickly as bombers could reach enemy territory. This both served as an inherent limit on the destructive capability of each side, and acted as a safeguard against accidental escalation by providing a time delay in which snap diplomacy could take place.

The invention of long range ballistic missiles, however, changed this fact by massively decreasing the time from launch order to annihilation, and the ballistic missile submarine carried this further by putting both powers perpetually in range for a decapitation strike – a disabling strike that would wipe out enemy command and launch capability.

This new strategic situation has two primary effects, both of which increase the possibility of accident, and the cost to both players. First, both powers must adopt a policy of “Launch on Warning” – that is, moving immediately to full annihilation based only on early warning, or even acting preemptively when one believes that an attack is or may be imminent. Secondly, both powers must accelerate their own armament programs, both to maintain their own decapitation strike ability, and to ensure that they have sufficient capacity that they will still maintain retaliatory ability after an enemy decapitation strike.

It is a prisoner’s dilemma, plain and simple. And indeed, with each technological iteration, the differences in payoffs and punishments become larger and more pronounced. At some point the cost of a continuous arms race becomes overwhelming, but whichever player yields first also forfeits their status as a superpower.

The same is, at least in my experience, true of social media use. Regular checking and posting is generally distracting and appears to have serious mental health costs, but so long as the cycle continues, it also serves as the foremost means of social power projection. And indeed, as Mean Girls teaches us, in adolescence as in nuclear politics, the only way to protect against an adversary is to maintain the means to retaliate at the slightest provocation.
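To make the shape of this dilemma concrete, here is a minimal sketch in Python. It is my own toy illustration, not anyone’s actual model; the payoff numbers are invented, and only their ordering matters. The point is structural: whatever the other side does, escalating is the better individual move, so both sides escalate, and both end up worse off than if they had both shown restraint.

# A toy sketch of the arms-race payoff structure described above.
# The payoff numbers are invented for illustration; only their ordering matters.
# Each player chooses to "escalate" (keep arming, or keep posting) or "yield".

payoffs = {
    # (my move, their move): (my payoff, their payoff)
    ("yield", "yield"): (3, 3),          # mutual restraint: best combined outcome
    ("yield", "escalate"): (-5, 5),      # I yield alone: I lose standing and security
    ("escalate", "yield"): (5, -5),      # they yield alone: I gain at their expense
    ("escalate", "escalate"): (-1, -1),  # mutual escalation: costly for both
}

def best_response(their_move):
    # My payoff-maximizing move, holding the other player's move fixed.
    return max(("yield", "escalate"), key=lambda mine: payoffs[(mine, their_move)][0])

# Whichever move the other side makes, escalating pays better for me individually...
assert best_response("yield") == "escalate"
assert best_response("escalate") == "escalate"

# ...so both players escalate, and both land on (-1, -1) rather than (3, 3).
print(payoffs[("escalate", "escalate")], "versus", payoffs[("yield", "yield")])

The same ordering of outcomes holds whether the “arms” are warheads or carefully curated posts, which is why unilaterally logging off can feel like forfeiting the game.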

This trend is not new. Mean Girls, which codified much of what we think of as modern adolescent politics and social dynamics, was made in 2004. Technology has not changed the underlying nature of adolescence, though it has accelerated and amplified its effects and costs. Nor is it limited to adolescents: the same kind of power structures and popularity contests that dominated high school recur throughout the world, especially as social media and the internet at large play a greater role in organizing our lives.

This is not inherently a bad thing if one is adept at social media. If you have the energy to post, curate, and respond on a continuous schedule, more power to you. I, however, cannot. I blame most of this on my disability, which limits my ability to handle large amounts of stimuli without becoming both physiologically and psychologically overwhelmed. The other part of this I blame on my perfectionist tendencies, which require that I make my responses complete and precise, and that I see through my interactions until I am sure that I have proven my point. While this is a decent enough mindset for academic debate, it is actively counterproductive on the social internet.

Moreover, continuous exposure to the actions of my peers reminded me of a depressing fact that I tried often to forget: that I was not with them. My disability is not so much a handicap in that it prevents me from doing things when I am with my peers, as in that it prevents me from being present with them in the first place. I become sick, which prevents me from attending school, which keeps me out of conversations, which means I’m not included in plans, which means I can’t attend gatherings, and so forth. Social media reminds me of this by showing me all the exciting things that my friends are doing while I am confined to bed rest.

It is difficult to remedy this kind of depression and anxiety. Stray depressive thoughts that have no basis in reality can, at least sometimes, and for me often, be talked apart when it is proven that they are baseless, and it is relatively simple to dismiss them when they pop up later. But these factual reminders that I am objectively left out, that I am the only one of my peers missing from these smiling faces, that my existence is seemingly sadder and less interesting, are far harder to argue with.

The History of the Embargo

I first got a Facebook account a little less than six years ago, on my fourteenth birthday. This was my first real social media to speak of, and was both the beginning of the end of parental restrictions on my internet consumption, and the beginning of a very specific window of my adolescence that I have since come to particularly loathe.

Facebook wasn’t technically new at this point, but it also wasn’t the immutable giant that it is today. It was still viewed as a game of the young, and it was entirely possible to find someone who wasn’t familiar with the concept of social media without being a total Luddite. Perhaps more relevantly, the first wave of people such as myself, who had grown up with the internet as a lower-case entity, was now coming of age to join social media. That is, these people had grown up never knowing a world where it was necessary to go to a library for information, or where information was something that was stored physically, or even where past stories were something held in one’s memory rather than on hard drives.

In this respect, I consider myself lucky that the official line of the New South Wales Department of Education and Training’s computer curriculum was, at the time I went through it, almost technophobic by modern standards; vehemently denouncing the evils of “chatrooms” and regarding the use of this newfangled “email” with the darkest suspicion. It didn’t give me real skills to equip me for the revolution that was coming, which I would live through firsthand, but it did, I think, give me a sense of perspective.

Even if that curriculum was outdated by the time it got to me, it helped underscore how quickly things had changed in the few years before I had enrolled. This knowledge, even if I didn’t understand it at the time, helped to calibrate a sense of perspective and reasonableness that has been a moderating influence on my technological habits.

During the first two years or so of having a Facebook account, I fell into the rabbit hole of using social media. If I had an announcement, I posted it. If I found a curious photo, I posted it. If I had a funny joke or a stray thought, I posted it. Facebook didn’t take over my life, but it did become a major theatre of it. What was recorded and broadcast there seemed, for a time, as important as the actual conversations and interactions I had during school.

This same period, perhaps unsurprisingly, also saw a decline in my mental wellbeing. It’s difficult to tease apart a direct cause, as a number of different things all happened at roughly the same time; my physiological health deteriorated, some of my earlier friends began to grow distant from me, and I started attending the school that would continually throw obstacles in my path and refuse to accommodate my disability. But I do think my use of social media amplified the psychological effects of these events, especially inasmuch as it acted as a focusing lens on all the things that made me different and apart from my peers.

At the behest of those closest to me, I began to take breaks from social media. These helped, but given that they were always circumstantial or limited in time, their effects were accordingly temporary. Moreover, the fact that these breaks were an exception rather than a standing rule meant that I always returned to social media, and when I did, the chaos of catching up often undid whatever progress I might have made in the interim.

After I finally came to the conclusion that my use of social media was causing me more personal harm than good, I eventually decided that the only way I would be able to remove its influence was total prohibition. Others, perhaps, might find that they have the willpower to deal with shades of gray in their personal policies. And indeed, in my better hours, so do I. The problem is that I have found that social media is most likely to have its negative impacts when I am not in one of my better hours, but rather have been worn down by circumstance. It is therefore not enough for me to resolve that I should endeavor to spend less time on social media, or to log off when I feel it is becoming detrimental. I require strict rules that can only be overridden in the most exceedingly extenuating circumstances.

My solution was to write down the rules which I planned to enact. The idea was that those would be the rules, and if I could justify an exception in writing, I could amend them as necessary. Having this as a step helped to decouple the utilitarian action of checking social media from the compulsive cycle of escalation. If I had a genuine reason to use social media, such as using it to provide announcements to far flung relatives during a crisis, I could write a temporary amendment to my rules. If I merely felt compelled to log on for reasons that I could not express coherently in a written amendment, then that was not a good enough reason.

This decision hasn’t been without its drawbacks. I am, without social media, undoubtedly less connected to my peers than I might otherwise have been, and the trend which already existed of my being the last person to know of anything has continued to intensify, but crucially, I am not so acutely aware of this trend that it has a serious impact one way or another on my day to day psyche. Perhaps some months hence I shall, upon further reflection, come to the conclusion that my current regime is beginning to inflict more damage than that which it originally remedied, and once again amend my embargo.

Arguments Against the Embargo

My reflections on my social media embargo have brought me stumbling upon two relevant moral quandaries. The first is whether ignorance can truly be bliss, and whether there is an appreciable distinction between genuine experience and hedonistic simulation. In walling myself off from the world I have achieved a measure of peace and contentment, at the possible cost of disconnecting myself from my peers, and to a lesser degree from the outside world. In philosophical terms, I have alienated myself, both from my fellow man, and from my species-essence. Of course, the question of whether social media is a genuine solution to, or a vehicle of, alienation, is a debate unto itself, particularly given my situation.

It is unlikely, though still possible, that my health would have allowed my participation in any kind of physical activity to which I could foreseeably have been invited as a direct result of an increased social media presence. Particularly given my deteriorating mental health at the time, it seems far more reasonable to assume that my presence would have been a one-sided affair: I would have sat, and scrolled, and become too self-conscious and anxious about the things that I saw to contribute in a way that would be noticed by others. With these considerations in mind, the question of authenticity of experience appears to be academic at best, and nothing for me to lose sleep over.

The second question regards the duty of expression. It has oft been posited, particularly with the socio-political turmoils of late, that every citizen has a duty to be informed, and to make their voice heard; and that furthermore, in declining to take a position, we are, if not tacitly endorsing the greater evil, then at least tacitly declaring that all available positions are morally equivalent in our apathy. Indeed, I myself have made such arguments in the past as they pertain to voting, and to a lesser extent to advocacy in general.

The argument goes that social media is the modern equivalent of the colonial town square, or the classical forum, and that as the default venue for socio-political discussion, our abstract duty to be informed participants is thus transmogrified into a specific duty to participate on social media. This argument, combined with the vague Templar-esque compulsion to correct wrongs that also drives me to rearrange objects on the table, acknowledge others’ sneezes, and correct spelling, is not lost on me.

In practice, I have found that these discussions are, at best, pyrrhic, and more often entirely fruitless: they cause opposition to become more and more entrenched, poison relationships, and convert no one, all the while creating a blight in what is supposed to be a shared social space. And as Internet shouting matches tend to be decided primarily by who blinks first, they create a situation in which any withdrawal, even for perfectly valid reasons such as, say, having more pressing matters than trading insults over tax policy, is viewed as concession.

While this doesn’t directly address the dilemma posited, it does make its proposal untenable. Taking to my social media to agitate is not particularly more effective than conducting a hunger strike against North Korea, and given my health situation, is not really a workable strategy. Given that ought implies can, I feel acceptably satisfied to dismiss any lingering doubts about my present course.

The Laptop Manifesto

The following is an open letter to my fellow students of our local public high school, which has just recently announced, without warning, that all students will henceforth be required to buy Google Chromebooks at their own expense.


I consider myself a good citizen. I obey the traffic laws when I walk into town. I vote on every issue. I turn in my assignments promptly. I raise my hand and wait to be called on. When my classmates come to me at the beginning of class with a sob story about how they lost their last pencil, and the teacher won’t loan them another for the big test, I am sympathetic to their plight. With education budgets as tight as they are, I am willing to share what I have.

Yet something about the rollout of our school’s new laptop policy does not sit well with me. That the school should announce mere weeks before school begins that henceforth all students shall be mandated to have a specific, high-end device strikes me as, at best, rude, and, at worst, an undue burden on students for a service that is legally supposed to be provided by the state at no cost.

Ours is, after all, a public school. Part of being a public school is being accessible to the public. That means all members of the public. Contrary to the apparent belief of the school board and high school administration, the entire student population does not consist solely of financially wealthy and economically stable families. Despite the fact that our government at both the local and state level is apparently content to routinely leave the burden of basic classroom necessities to students and individual teachers, it is still, legally, the responsibility of the school, not the student, to see that the student is equipped to learn.

Now, I am not opposed to technology. On the contrary, I think our school is long overdue for such a 1:1 program. Nor am I particularly opposed to the ongoing effort to make more class materials digitally accessible. Nor even to the school offering its own Chromebooks to students at the students’ expense. However, there is something profoundly wrong about the school making such costs mandatory.

Public school is supposed to be the default, free option for compulsory education. To enforce compulsory education as our state does (to the point of calling child protective services on parents of students who miss what the administration considers to be too many days), and then to pass the cost of that education on to families, amounts to a kind of double taxation against families that attend public schools. Moreover, this double taxation has a disproportionate impact on those who need public schools the most.

This program as it stands is unfair, unjust, and as far as I can see, indefensible. I therefore call upon my fellow students to resist this unjust and arguably illegal decree, by refusing to comply. I call in particular upon those who are otherwise able to afford such luxuries as chromebooks to resist the pressure to bow to the system, and stand up for your fellow students.

Duck and Cover

“Imminent” you say? Whelp, time to start digging.

I have always been fascinated by civil defence, and more broadly the notion of “home defence” as it emerged during the two world wars and into the Cold War. There is, I think, something romantic about the image of those not fit to fight in the front lines banding together to protect cities and families, shore up static fortifications, and generally pitch in for the cause of one’s people. In everyone “Doing Their Bit”. In the Commonwealth, this is usually summed up as the “Blitz Spirit”.

I haven’t found an equivalently all-encompassing term in the American lexicon (hence why I’m using “defence” rather than “defense”), though the concept is obviously still there. Just think of the romanticism of the Minuteman rushing to defend his home town, or of your average apocalypse story. Like all romantic images, however, I fear that this false nostalgia over Civil Defence may be out of touch with reality.

This probably wouldn’t have been an issue for one such as myself who grew up well after the age of nuclear standoffs. Except somehow, while I was off in the mountains, what should have been some minor sabre rattling from North Korea has now become a brewing crisis.

Now, there is still a chance that all of this will blow over. Indeed, the opinion of most professionals (as of writing) is that it will. Yet at the same time, numerous local governments have apparently seen fit to issue new preparedness advice for citizens living in potentially targeted areas. The peculiar thing about these new guidelines: they’re almost word for word from the civil defence films and pamphlets of the previous century.

Some areas, like Hawaii, have even gone so far as to reactivate old emergency centers. Seeing new, high definition pictures of bureaucrats working on tablets and computers amid command bunkers built in the 1950s is not just sobering, it is surreal. Hearing modern government officials suggesting on television that citizens learn how to “duck and cover” would be comical, if this weren’t honestly the reality we’re in.

Just out of morbid curiosity, I decided to follow some of the advice given and try to locate likely targets in my area so that I might have some idea of what level of apocalypse I’m looking at. The answer depends on what kind of strike occurs, and also which set of numbers we believe for the DPRK’s capabilities. Let’s start with a rather conservative view.

Most scenarios in the past have assumed that any conflict with North Korea would play out as “Korean War II: Atomic Boogaloo”. That is to say, most conventional and even nuclear strikes would remain focused within the Pacific region. With as many artillery pieces as the Korean People’s Army has stationed along the DMZ, it is likely that most of the initial fighting, which would entail a Northern push towards Seoul, would be primarily conventional. That is, until the US began moving reinforcements.

Busan and other South Korean ports, as well as US bases such as Okinawa, Guam, Pearl Harbor, and Garden Island would all be major strategic targets for DPRK nuclear strikes. Most of these targets have some level of missile defense, although reports vary on how effective these might be. It seems unlikely that North Korea is capable of reliably hitting targets much further than Hawaii, though this isn’t guaranteed to stop them.

A strike on the naval base in San Diego is possible, though with the difficulty of hitting a precise target at that range, it seems equally likely that it would miss, or the North Koreans would opt for something harder to miss in the first place, like a major city. A city with major cultural importance, like Los Angeles, or a city near the edge of their range, like Chicago, would be possible targets.

While this isn’t a good outcome for me, I probably get out of this one relatively unscathed. My portfolio would take a hit, and I would probably have trouble finding things at the stores for a few months as panic set in. There’s a possibility that we would see looting and breakdown in a fashion similar to immediately after Hurricane Sandy, as panic and market shocks cause people to freak out, but that kind of speculation is outside the scope of this post.

I might end up spending some time in the basement depending on the prevailing winds, and I might have to cash in on my dual citizenship and spend some time away from the United States in order to get reliable medical treatment, as the US healthcare system would be completely overloaded, but barring some unexpected collapse, the world would go on. I give myself 80% odds of escaping unscathed.

This is a (relatively) conservative view. If we assume that the number of warheads is towards the upper bound of estimates, and that by the time judgement day comes the North Koreans have successfully miniaturized their warheads, and gotten the navigation worked out to a reasonable degree, we get a very different picture.

With only a limited number of warheads, only a handful of which will be on missiles that can reach the east coast, there will be some picking and choosing to be done on targets. Here’s the problem: strategically, there’s not really a scenario where the DPRK can strike the US and not be annihilated by the US response. They lack the resources for a war of nuclear attrition. So unless Kim Jong Un decides his best option is to go out in a suicidal blaze of glory, a massive first strike makes no sense from a military standpoint (not that such concerns are necessarily pertinent to a madman).

There are a few places near me that would almost certainly be hit in such a scenario, most notably New York City. This would almost certainly require me to hide in the basement for a while, and would probably derail my posting schedule. Based on estimates of DPRK warhead size, I’m probably not in the blast radius, but I am certainly within immediate fallout distance, and quite possibly within the range of fires ignited by the flash. While I do have evacuation prospects, getting out safely would be difficult. I give myself 50% odds.

On the other hand, if the US is the aggressor, the DPRK does officially have mutual defense treaties with China. While it’s hard to say whether China’s leadership would actually be willing to go down with Pyongyang, or whether they would be willing to see the US use nuclear force to expand its hegemony in the region, if we’re considering East Asian nuclear war scenarios, China is an obvious elephant in the room that needs to be addressed.

While the US would probably still “win” a nuclear exchange with a joint PRC-DPRK force, it would be a hollow victory. US missile defenses would be unable to take down hundreds of modern rockets, and with Chinese ICBMs in play, mainland targets would be totally up for grabs. This is the doomsday scenario here.

Opinions vary on whether counter-force (i.e. military) targets would be given preference over counter-value (i.e. civilian, leadership, and cultural) targets. While China’s military size, doctrine, and culture generally lend themselves to the kind of strategic and doctrinal conservatism that would prioritize military targets, among nations that have published their nuclear doctrine, those with smaller warhead arsenals, such as the one maintained by the PLA, generally lean towards a school of thought known as “minimal deterrence” over the “mutually assured destruction” of the US and Russia.

Minimal deterrence is a doctrine that holds that any level of nuclear exchange will lead to unacceptable casualties on both sides, and to this end, only a small arsenal is required to deter strikes (as opposed to MAD, which focuses on having a large enough arsenal to still have a fully capable force regardless of the first strike of an enemy). This sounds quite reasonable, until one considers the logical conclusions of this thinking.

First, because “any strike is unacceptable”, any nuclear strike, regardless of whether it is counter-force or counter-value, will be met with a full counter-value response. Secondly, because it makes no provisions for surviving a counter-force first strike (like the US might launch against the DPRK or PRC), it calls for a policy of “launch on warning” rather than waiting for tit-for-tat casualty escalation. Or occasionally, for preemptive strikes as soon as it becomes apparent that the enemy is preparing an imminent attack.

This second part is important. Normally, this is where analysts look at things like political rhetoric, media reaction, and public perception to gauge whether an enemy first strike is imminent or not. This is why there has always been a certain predictable cadence to diplomatic and political rhetoric surrounding possible nuclear war scenarios. That rhythm determines the pulse of the actual military operations. And that is why what might otherwise be harmless banter can be profoundly destabilizing when it comes from people in power.

Anyways, for an attack on that kind of scale, I’m pretty well and truly hosed. The map of likely nuclear targets pretty well covers the entire northeast, and even if I managed to survive both the initial attack and the weeks after, during which radiation would be deadly to anyone outside for more than a few seconds, the catastrophic damage to the infrastructure that keeps the global economy running, and upon which I rely to get my highly complicated, impossible-to-recreate-without-a-post-industrial-economic-base life support medication, would mean that I would die as soon as my on-hand stockpile ran out. There’s no future for me in that world, so there’s nothing I can do to change it. It seems a little foolish, then, to try to prepare.

Luckily, I don’t expect that an attack will be of that scale. I don’t expect that an attack will come in any case, but I’ve more or less given up on relying on sanity and normalcy to prevail for the time being. In the meantime, I suppose I shall have to look at practicing my duck and cover skills.

Bretton Woods

So I realized earlier this week, while staring at the return address stamped on the sign outside the small post office on the lower level of the resort my grandfather selected for our family trip, that we were in fact staying in the very hotel which hosted the famous Bretton Woods Conference. That conference produced the Bretton Woods System, which governed post-WWII economic rebuilding around the world, laid the groundwork for our modern economic system, and helped cement the idea of currency as we understand it today.

Needless to say, I find this intensely fascinating: both the conference itself, as a gathering of some of the most powerful people at one of the major turning points in history, and the system that resulted from it. Since I can’t recall having spent any time on this subject in my high school economics course, I thought I would go over some of the highlights, along with pictures of the resort that I was able to snap.

Pictured: The Room Where It Happened

First, some background on the conference. The Bretton Woods conference took place in July of 1944, while the Second World War was still in full swing. The allied landings in Normandy, less than a month earlier, had been successful in establishing isolated beachheads, but Operation Overlord as a whole could still fail if British, Canadian, American, and Free French forces were prevented from linking up and liberating Paris.

On the Eastern European front, the Red Army had just begun Operation Bagration, the long-planned grand offensive to drive Nazi forces out of the Soviet Union entirely and begin the push through occupied Eastern Europe and into Germany. Soviet victories would continue to rack up as the conference went on, as the Red Army executed the largest and most successful offensive in its history, escalating political concerns among the western allies about the role the Soviet Union and its newly “liberated” territory could play in a postwar world.

In the Pacific, the Battle of Saipan was winding down towards an American victory, radically changing the strategic situation by putting the Japanese homeland in range of American strategic bombing. Even as the battles raged on, more and more leaders on both sides looked increasingly to the possibility of an imminent allied victory.

As the prospect of rebuilding a world ravaged by the most expensive and most devastating conflict in human history (and hopefully ever) began to seem closer, representatives of the allied nations met at a resort in Bretton Woods, New Hampshire, at the foot of Mount Washington, to discuss the economic future of a postwar world at the United Nations Monetary and Financial Conference, more commonly referred to as the Bretton Woods Conference. The site was chosen because, in addition to being vacant (the war had effectively killed tourism), the isolation of the surrounding mountains made it suitably defensible against any sort of attack. It was hoped that this show of hospitality and safety would assuage delegates coming from war-torn and occupied parts of the world.

After being told that the hotel had only 200-odd rooms for a conference of 700-odd delegates, most delegates, naturally, decided to bring their families, in many cases bringing as many extended relatives as could be admitted on diplomatic credentials. Of course, this was probably as much about escaping the ongoing horrors in Europe and Asia as it was about getting a free resort vacation.

These were just the delegates. Now imagine adding families, attachés, and technical staff.

As such, every bed within a 22-mile radius was occupied. Staff were forced out of their quarters and relocated to the stable barns to make room for delegates. Even then, guests were sleeping in chairs, in bathtubs, even on the floors of the conference rooms themselves.

The conference was attended by such illustrious figures as John Maynard Keynes (yes, that Keynes) and Harry Dexter White (who, in addition to being the lead American delegate, was almost certainly a spy for the Soviet NKVD, the forerunner to the KGB), who clashed over what, fundamentally, the allies should aim to establish in a postwar economic order.

Spoiler: That guy on the right is going to keep coming up.

Everyone agreed that the protectionist, mercantilist, and “economic nationalist” policies of the interwar period had contributed both to the depth of the Great Depression and to the collapse of European markets, which created the socioeconomic conditions for the rise of fascism. Everyone agreed that the punitive reparations placed on Germany after WWI had set European governments up for a cascade of defaults and collapses when Germany inevitably failed to pay up, and turned to playing fast and loose with its currency and trade policies to adhere to the letter of the Treaty of Versailles.

It was also agreed that even if reparations were entirely done away with (which would leave allied nations such as France and the British Commonwealth bankrupt for their noble efforts), the sheer upfront cost of rebuilding would be nigh impossible to meet by normal economic means, and that leaving entire continents to rebuild on their own would inevitably lead to the same kind of zero-sum competition and unsound monetary policy that had caused the prewar economic collapse in the first place. It was decided, then, that the only way to ensure economic stability through the period of rebuilding was to enforce universal trade policies, and to institute a number of centralized financial organizations under the purview of the United Nations to oversee postwar rebuilding and monetary policy.

It was also, evidently, the beginning of the age of miniaturized flags.

The devil was in the details, however. The United States, having spent the war safe from serious economic infrastructure damage, serving as the “arsenal of democracy”, and generally being the only country that had reserves of capital, wanted to use its position of relative economic supremacy to gain permanent leverage. As the host of the conference and the de-facto lead for the western allies, the US held a great deal of negotiating power, and the US delegates fully intended to use it to see that the new world order would be one friendly to American interests.

Moreover, the US, and to a lesser degree the United Kingdom, wanted to do as much as possible to prevent the Soviet Union from coming to dominate the world after it rebuilt itself. As World War II was beginning to wind down, the Cold War was beginning to wind up. In that light, the news of daily Soviet advances, first pushing the Nazis out of Soviet borders, and then steamrolling into Poland, Finland, and the Baltics, was troubling. Even more troubling were the rumors of the ruthless NKVD suppression of non-communist partisan groups that had resisted Nazi occupation in Eastern Europe, indicating that the Soviets might be looking to establish their own postwar hegemony.

Pictured: The beginning of a remarkable friendship between US and USSR delegates
Although something tells me this friendship isn't going to last

The first major set piece of the conference agreement was relatively uncontroversial: the International Bank for Reconstruction and Development, drafted by Keynes and his committee, was established to offer grants and loans to countries recovering from the war. As an independent institution, it was hoped that the IBRD would offer flexibility to rebuilding nations that loans from other governments with their own financial and political obligations and interests could not. This was also a precursor to, and later the backbone of, the Marshall Plan, in which the US would spend exorbitant amounts on foreign aid to rebuild capitalism in Europe and Asia in order to prevent the rise of communist movements fueled by lack of opportunity.

The second major set piece is where things get really complicated (I’m massively oversimplifying here, but global macroeconomic policy is inevitably complicated in places): a proposed “International Clearing Union”, devised by Keynes back in 1941, which proved far more controversial.

The plan, as best I am able to understand it, called for all international trade to be handled through a single centralized institution, which would measure the value of all goods and currencies relative to a standard unit, tentatively called the “bancor”. The ICU would then offer incentives to maintain trade balances relative to the size of a nation’s economy: charging interest to countries running a major trade surplus, and using the proceeds to devalue the exchange rates of countries running trade deficits, making their imports more expensive and their exports more attractive to overseas consumers.
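To make the mechanics a little more concrete, here is a minimal sketch of how such a clearing-union incentive rule might work. This is not the actual ICU proposal; the thresholds, rates, and function names are all invented for illustration.

```python
# Hypothetical sketch of a clearing-union-style adjustment rule.
# All numbers and names are illustrative, not Keynes's actual proposal.

def adjust_balances(balances, exchange_rates, gdp,
                    surplus_threshold=0.05,   # surplus tolerated, as a share of GDP
                    interest_rate=0.02,       # levy charged on excess surpluses
                    devaluation_step=0.03):   # devaluation applied to deficit countries
    """balances: net trade position in bancor; exchange_rates: bancor per unit of
    local currency; gdp: economy size in bancor. All dicts keyed by country."""
    levy_pool = 0.0
    for country, balance in balances.items():
        if balance > surplus_threshold * gdp[country]:
            levy = interest_rate * balance        # charge interest on the surplus
            balances[country] -= levy
            levy_pool += levy
    for country, balance in balances.items():
        if balance < 0:
            # devalue deficit countries' currencies, making their exports cheaper
            exchange_rates[country] *= (1 - devaluation_step)
    return levy_pool                              # redistributed under the union's rules
```

The point of the sketch is just the feedback loop: surpluses get taxed, deficits get devalued, and both pressures push trade back towards balance.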

The Grand Ballroom was thrown into fierce debate, and the local Boy Scouts who had been conscripted to run microphones between delegates (most of the normal staff having either been drafted or being completely overloaded) struggled to keep up with these giants of economics and diplomacy.

Photo of the Grand Ballroom, slightly digitally adjusted to compensate for bad lighting during our tour

Unsurprisingly, the US delegate, White, was absolutely against Keynes’s harebrained scheme. Instead, he proposed a far less ambitious “International Monetary Fund”, which would judge trade balances and prescribe limits for nations seeking aid from the IMF or IBRD, but would otherwise generally avoid intervening. The IMF did keep Keynes’s idea of judging trade against pre-set exchange rates (also obligatory for members), but avoided handing the IMF the power to unilaterally alter the value of individual currencies, instead leaving that in the hands of national governments and merely insisting on certain requirements for aid and membership. It also did away with the notion of a supranational currency.

Of course, this raised the question of how to judge currency values, other than against each other alone (which was still seen as a bridge too far in the eyes of many). The solution, proposed by White, was simple: judge other currencies against the US dollar. After all, the United States was already the largest and most developed economy. And since other countries had spent the duration of the war buying materiel from the US, it also held the world’s largest reserves of almost every currency, as well as of gold, silver, and sovereign debt. The US was the only country to come out of WWII with enough gold in reserve to stay on the gold standard and also finance postwar rebuilding, which made the dollar a perfect candidate as a default currency.

US, Canadian, and Soviet delegates discuss the merits of Free Trade

Now, you can see this move either as a sensible compromise for a world of countries that couldn’t have gone back to their old ways if they tried, or as a master stroke attempt by the US government to cement its supremacy at the beginning of the Cold War. Either way, it worked as a solution, both in the short term, and in the long term, creating a perfect balance of stability and flexibility in monetary policy for a postwar economic boom, not just in the US, but throughout the capitalist world.

The third set piece was a proposed “International Trade Organization”, which was to oversee implementation and enforcement of the sort of universal free trade policies that almost everyone agreed would be most conducive not only to prosperity, but to peace as a whole. Perhaps surprisingly, this wasn’t terribly divisive at the conference.

The final agreement for the ITO, however, was eventually shot down when the US Senate refused to ratify its charter, in part because the final charter, negotiated in Havana, had come to incorporate many of Keynes’s earlier ideas about an International Clearing Union. Much of the ITO’s basic policy, however, influenced the successful General Agreement on Tariffs and Trade, which would later be replaced by the World Trade Organization.

Pictured: The main hallway as seen from the Grand Ballroom. Notice the moose on the right, above the fireplace.

The Bretton Woods agreement was signed by the allied delegates in the resort’s Gold Room. Not all countries that signed ratified immediately. The Soviet Union, perhaps unsurprisingly, reversed its position on the agreement, calling the new international organizations “a branch of Wall Street”, and went on to found the Council for Mutual Economic Assistance, a forerunner to the Warsaw Pact, within five years. The British Empire, particularly its overseas possessions, also took time to ratify, owing to the longstanding colonial trade policies that had to be dismantled in order to meet the free trade requirements.

The consensus of most economists is that Bretton Woods was a success. The system more or less ceased to exist when Nixon, prompted by Cold War drains on US resources and French schemes to exchange all of France’s reserve US dollars for gold, suspended the dollar’s convertibility into gold, effectively ushering in the age of free-floating fiat currencies; that is, money that has value because we all collectively accept that it does, an assumption that underlies most of our modern economic thinking.

There’s a plaque on the door to the room in which the agreement was signed. I’m sure there’s something metaphorical in there.

While it certainly didn’t last forever, the Bretton Woods system did accomplish its primary goal of setting the groundwork for a stable world economy, capable of rebuilding and maintaining the peace. This is a pretty lofty achievement when one considers the background against which the conference took place, the vast differences between the players, and the general uncertainty about the future.

The vision set forth in the Bretton Woods Conference was an incredibly optimistic, even idealistic, one. It’s easy to scoff at the idea of hammering out an entire global economic system, in less than a month, at a backwoods hotel in the White Mountains, but I think it speaks to the intense optimism and hope for the future that is often left out of the narrative of those dark moments. The belief that we can, out of chaos and despair, forge a brighter future not just for ourselves, but for all, is not in itself crazy, and the relative success of the Bretton Woods System, flawed though it certainly was, speaks to that.

A beautiful picture of Mt. Washington at sunset from the hotel’s lounge

Works Consulted

IMF. “60th Anniversary of Bretton Woods.” 60th Anniversary – Background Information, what is the Bretton Woods Conference. International Monetary Fund, n.d. Web. 10 Aug. 2017. <http://external.worldbankimflib.org/Bwf/whatisbw.htm>.

“Cooperation and Reconstruction (1944-71).” About the IMF: History. International Monetary Fund, n.d. Web. 10 Aug. 2017. <http://www.imf.org/external/about/histcoop.htm>

Extra Credits. Video playlist. YouTube, n.d. Web. 10 Aug. 2017. <http://www.youtube.com/playlist?list=PLhyKYa0YJ_5CL-krstYn532QY1Ayo27s1>.

Burant, Stephen R. East Germany, a country study. Washington, D.C.: The Division, 1988. Library of Congress. Web. 10 Aug. 2017. <https://archive.org/details/eastgermanycount00bura_0>.

US Department of State. “Proceedings and Documents of the United Nations Monetary and Financial Conference, Bretton Woods, New Hampshire, July 1-22, 1944.” Proceedings and Documents of the United Nations Monetary and Financial Conference, Bretton Woods, New Hampshire, July 1-22, 1944 – FRASER – St. Louis Fed. N.p., n.d. Web. 10 Aug. 2017. <https://fraser.stlouisfed.org/title/430>.

Additional information provided by resort staff and exhibitions visited in person.

Environmentalist Existentialist

Within the past several days, several of my concerns regarding my contribution to the environment have gone from troubling to existentially crippling. This has a lot to do with the recent announcement that the US federal government will no longer be party to the Paris Climate Agreement, but also a lot to do with the revelation that my personal carbon footprint is somewhere between four and five times the average for a US resident, roughly nine times the average for a citizen of an industrialized nation, about twenty-five times the average for all humans, and a whopping forty-seven times the target footprint which all humans will need to adopt to continue our present rate of economic growth and avoid a global cataclysm. Needless to say, this news is both sobering and distressing.

As it happens, I can say quite easily why my footprint is so large. First, there is the fact that the house I live in is terribly, awfully, horrifically inefficient. Thanks to a combination of poor planning and construction, historically questionable maintenance, and periodic weather damage, the house has leaked energy like a sieve from the day I moved in. The construction quality of the foundation and plumbing is such that massive, energy-sucking dehumidifiers are required to keep mold to a tolerable minimum. Fixing these problems, though it would be enormously expensive and disruptive, would go some way towards slashing the energy and utility bills, and would shave a good portion of the excess off. By my back-of-the-envelope calculations, it would reduce household energy use by some 35% and the total carbon footprint by about 5%.
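For what it’s worth, the arithmetic behind that estimate is simple enough to sketch. The share figure below is an assumption I’m using purely for illustration, not a measured value from my utility bills.

```python
# Back-of-the-envelope sketch with assumed, illustrative numbers.
# If home energy accounts for roughly 14% of the total footprint, then a 35%
# cut in home energy use trims the overall footprint by about 5%.

household_share = 0.14    # assumed share of total footprint from home energy
energy_savings = 0.35     # estimated reduction from insulation and repairs

overall_reduction = household_share * energy_savings
print(f"Overall footprint reduction: {overall_reduction:.1%}")  # ~4.9%
```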

Then there is transportation, which comprises some 15-20% of the total. While there is room for improvement here, the nature of my health is such that regular trips by private motor vehicle are a necessity. Public transport infrastructure in my area is lacking, and even where it exists, it is often difficult to take full advantage of for health reasons. This points to a recurring theme in my attempts to reduce my environmental impact: reducing harm to the planet always ends up taking a backseat to my personal health and safety. I have been reliably told that this is the way it ought to be, but that does not calm my anxieties.

The largest portion of my carbon footprint, by an overwhelming margin, is the so-called “secondary” footprint; that is, the carbon generated by the things one buys and participates in, in addition to the things one does. So, for example, if some luxury good is shipped air mail from another continent, the secondary footprint factors in the impact of that cargo plane, even though one was not physically on said plane. This isn’t factored into every carbon footprint calculator, and some weight it differently than others. If I were to ignore my secondary footprint entirely, my resulting impact would be roughly equivalent to the average American’s (though still ten times where it needs to be to avoid cataclysm).
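A rough sketch of how that tally works, using entirely made-up category figures, shows why the secondary column dominates and why dropping it changes the headline number so drastically.

```python
# Illustrative footprint tally (tonnes CO2e per year). The category values are
# invented for demonstration; only the structure reflects how these calculators
# tend to split "primary" from "secondary" emissions.

primary = {"home energy": 2.5, "transport": 3.0}               # things I do directly
secondary = {"pharmaceuticals": 55.0, "other purchases": 4.0}  # embedded in what I buy

full_total = sum(primary.values()) + sum(secondary.values())
primary_only = sum(primary.values())

print(f"Full footprint: {full_total:.1f} t CO2e/yr")
print(f"Primary only:   {primary_only:.1f} t CO2e/yr")
```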

Of my secondary footprint, the overwhelming majority is produced by my consumption of pharmaceutical products, which, it is noted, are especially waste-prone (not unreasonably; given the life-and-death nature of the industry, it is generally accepted that the additional waste created by being cautious is worth it). Herein lies my problem. Even if I completely eliminated all other sources of emissions, the impact of my health treatments alone would put me well beyond any acceptable bounds. Expending fewer resources is not realistically possible, unless I plan to roll over and stop breathing.

The implications for my largely utilitarian moral framework are dire. If, as it seems, thirty people (or three average Americans) could live comfortably on the same resources that I expend, how can I reasonably justify my continued existence? True, this isn’t quite so clear cut as one person eating the food of thirty. Those numbers represent averages, and all averages have outliers. Carbon output reduction isn’t a zero-sum game, but rather a collective effort. Moreover, the calculation reflects current industrial processes, which will need to be improved at a system-wide level for the listed goals to be achievable on the global scale required to prevent cataclysm.

These points might be more reassuring if I still had faith that such a collective solution would in fact be implemented. However, current events have called this into serious question. The Paris Climate Agreement represents the barest minimum of what needs to be done, and was specifically calibrated to have a minimal impact on economic growth. The United States was already ahead of its current targets thanks to forces already in motion. While this reinforces the common consensus that the actual withdrawal of the US will have a relatively small impact on its contribution to environmental damage, it not only makes it easier for other countries to squirm out of their own obligations by pointing to the US as an example, but also demonstrates a complete lack of the scientific understanding, political comprehension, and international good faith that will be necessary to make true progress towards averting future cataclysm.

That is to say, it leaves the burden of preventing environmental catastrophe, at least in the United States, in the hands of individuals. And given that I have almost as much (or, as it happens, as little) faith in individuals as I do in the current presidential administration, this means in effect that I feel compelled to take such matters upon myself personally. Carrying the weight of the world upon my shoulders is a feeling that I have grown accustomed to, particularly of late, but to have such a situation where these necessary duties are openly abandoned by the relevant authorities makes it seem all the more real.

So, now that I have been given the solemn task of saving the world, there are a few different possibilities. Obviously, the most urgent task is counteracting my own impact, or at least finding a way to offset its effects. For a decent chunk of cash, I could simply pay someone to take action on my behalf, either by planting trees or by offering startup cash for projects that reduce carbon emissions somewhere else in the world, so that the net impact is zero. Some of these programs also kill two birds with one stone by targeting areas that are economically or ecologically vulnerable, doing things like boosting crop yields and providing solar power to developing communities. While there is something poetic about taking this approach, it strikes me as too much like throwing money at a problem. And, critically, while these services can compensate for a given amount, they do not solve the long-term problem.

Making repairs and upgrades to the house will no doubt help nudge things in the right direction. Putting up the cash to properly insulate the house will not only save excess heating fuel from being burned, but will likely result in the house staying at a more reasonable temperature, which is bound to help my health. Getting out and exercising more, which has for a long while now been one of those goals that I’ve always had in mind but never quite gotten around to, particularly given the catch-22 of my health, will hopefully improve my health as well, lessening the long term detriments of my disability, as well as cutting down on resources used at home when indoors (digital outdoors may still outclass physical outdoors, but also sucks up a lot more energy to render).

This is where my efforts hit a brick wall. For as busy as I am, I don’t actually do a great deal of extraneous consumption. I travel slightly less than average, and like most of my activities, my travel is clustered in short bursts rather than routine commutes that could be shifted to public transport or ride sharing. A personal electric vehicle could conceivably cut this down a little, at great cost, but not nearly enough to get my footprint where it needs to be. I don’t do a great deal of shopping, so cutting my consumption is difficult. Once again, it all comes back to my medical consumption. As long as that number doesn’t budge, and I have no reason to believe that it will, my carbon footprint will continue to be unconscionably large.

There are, of course, ways to play around with the numbers; for example, capping the (absurd) list price of my medications at what I would pay if I moved back to Australia and got my care through the public system there (for the record: a difference of a factor of twenty), or shifting the cost from the “pharmaceuticals” section to the “insurance” section and only tallying up to the out-of-pocket maximum. While these might be, within a reasonable stretch, technically accurate, I feel that they miss the point. Also, even by the most aggressively distorted numbers, my carbon footprint is still an order of magnitude larger than it needs to be. This would still be true even if I completely eliminated home and travel emissions, perhaps by buying a bundle package from Tesla at the expense of several tens of thousands of dollars.

The data is unequivocal. I cannot save the world alone. I rely on society to get me the medical supplies I require to stay alive on a daily basis, and this dependence massively amplifies my comparatively small contribution to environmental destruction. I feel distress about this state of affairs, but there is very little I can personally do to change it, unless I feel like dying, which I don’t, particularly.

This is why I feel disproportionately distressed that the US federal government has indicated that it does not intend to comply with the Paris Climate Agreement; my only recourse for my personal impact is a systematic solution. I suppose it is fortunate, then, that I am not the only one trying to save the world. Other countries are scrambling to pick up America’s slack, and individuals and companies are stepping up to do their part. This is arguably a best case scenario for those who seek to promote climate responsibility in this new era of tribalist politics.

History Has its Eyes on You

In case it isn’t obvious from some of my recent writings, I’ve been thinking a lot about history. This is mostly the fault of John Green, who decided, in a recent step of his ongoing scavenger hunt, to pitch the age-old question: “[I]s it enough to behold the universe of which we are part, or must we leave a footprint in the moondust for it all to have been worthwhile?” It’s a question that I have personally struggled with a great deal, more so recently as my health and circumstances have made it clear that trying to follow the usual school > college > career > marriage > 2.5 children > retirement (and in that order, thank you very much) life path is a losing proposition.

The current political climate also has me thinking about the larger historical context of the present moment. Most people, regardless of their political affiliation, agree that our present drama is unprecedented, and the manner in which it plays out will certainly be significant to future generations. There seems to be a feeling in the air, a zeitgeist, if you will, that we are living in a critical time.

I recognize that this kind of talk isn’t new. Nearly a millennium ago, the participants of the first crusade, on both sides, believed they were living in the end times. The fall of Rome was acknowledged by most contemporary European scholars to be the end of history. Both world wars were regarded as the war to end all wars, and for many, including the famed George Orwell, the postwar destruction was regarded as the insurmountable beginning of the end for human progress and civilization. Every generation has believed that their problems were of such magnitude that they would irreparably change the course of the species.

Yet for every one of these times when a group has mistakenly believed that radical change is imminent, there has been another revolution that has arrived virtually unannounced because people assumed that life would always go on as it always had gone on. Until the 20th century, imperial rule was the way of the world, and European empires were expected to last for hundreds or even thousands of years. In the space of a single century, Marxism-Leninism went from being viewed as a fringe phenomenon, to a global threat expected to last well into the time when mankind was colonizing other worlds, to a discredited historical footnote. Computers could never replace humans in thinking jobs, until they suddenly began to do so in large numbers.

It is easy to look at history with perfect hindsight and be led to believe that this is the way things would always have gone regardless. This is especially true for anyone born in the past twenty-five years, in an age after superpowers, where the biggest threat to the current world order has always been fringe radicals living in caves. I mean, really, am I just supposed to believe that there were two Germanies that hated each other, and that everyone thought this was perfectly normal and would go on forever? Sure, there are still two Koreas, but no one takes that division very seriously anymore, except maybe the Koreans.

I’ve never been quite sure where I personally fit into history, and I’m sure a large part of that is because nothing of real capital-H Historical Importance has happened close to me in my lifetime. With the exception of the September 11th attacks, which happened so early in my life, and while I was living overseas, that they may as well have happened a decade earlier during the Cold War, and the rise of smartphones and social media, which arrived just as I turned old enough never to have known an adolescence without Facebook, the historical setting has, for the most part, been the same for my whole life.

The old people in my life have told me about watching or hearing about the moon landing, or the fall of the Berlin Wall, and about how it was a special moment because everyone knew that this was history unfolding in front of them. Until quite recently, the closest experiences I had in that vein were New Year’s celebrations, which always carry with them a certain air of historicity, and getting to stay up late (in Australian time) to watch a shuttle launch on television. Lately, though, this has changed, and I feel more and more that the news I am seeing today may well turn out to be a turning point in the historical narrative that I will tell my children and grandchildren.

Moreover, I increasingly feel a sensation that I can only describe as historical pressure; the feeling that this turmoil and chaos may well be the moment that leaves my footprint in the moondust, depending on how I act. The feeling that the world is in crisis, and it is up to me to cast my lot in with one cause or another.

One of my friends encapsulated this feeling with a quote, often attributed to Vladimir Lenin, though it appears more likely to be from some later scholar or translator.
“There are decades where nothing happens; and there are weeks where decades happen.”
Although I’m not sure I entirely agree with this sentiment (I can’t, to my mind, think of a single decade where absolutely nothing happened), I think it illustrates the point I am trying to make quite well. We seem to be living in a time where change is moving quickly, in many cases too quickly to properly contextualize and adjust to, and we are being asked to pick a position and hold it. There is no time for rational middle ground because there is no time for rational contemplation.

Or, to put it another way: It is the best of times, it is the worst of times, it is the age of wisdom, it is the age of foolishness, it is the epoch of belief, it is the epoch of incredulity, it is the season of Light, it is the season of Darkness, it is the spring of hope, it is the winter of despair, we have everything before us, we have nothing before us, we are all going direct to Heaven, we are all going direct the other way – in short, the period is so far like the present period, that some of its noisiest authorities insist on its being received, for good or for evil, in the superlative degree of comparison only.

How, then, will this period be remembered? How will my actions, and the actions of my peers, go down in the larger historical story? Will the year 2017 be thought of as “just before that terrible thing happened, when everyone knew something bad was coming but no one yet had the courage to face it”, the way we think of the early 1930s? Or will 2017 be remembered like the 1950s, as the beginning of a brave new era which saw humanity in general, and the west in particular, reach new heights?

It seems to be a recurring theme in these sorts of posts that I finish with something to the effect of “I don’t know, but maybe I’m fine not knowing in this instance”. That remains true here, but I also want to avoid encouraging complacency. Not knowing the answers is okay; it’s human, even. But ceasing to ask the questions in the first place is how we wind up with a far worse future.