Since I’ve been traveling, I’ve come up with quite a few things to write about. More than that, I’ve actually already started writing up several of these topics, and gotten far enough that I think they’re past the phase where most posts die unwritten. However, most of these topics are, well, topical to my situation now, which means that if I wait to publish them according to my regular schedule, it’s going to be several months before I’m back to writing genuinely new material, and the stuff being published won’t be current when it is seen.
While this approach of delaying everything is arguably less work, and more consistent for readers from a scheduling viewpoint, this isn’t the way that I want to be writing things. This is my personal blog, not a media company (at least, not yet). I want to be writing things as they come to me, and publishing as I feel like it.
So, we’re going to try something new. For the next few days, I’m going to have a marathon. That is, I’m going to have a new post go up every day. These posts will be accordingly tagged with the “postaday” tag. This has been something that’s been nagging at the back of my mind as an interesting experiment for a while now, and I think I am now in a position to execute it.
I have no idea how, or even if, this will work out. I don’t yet have fully written posts, though I do have at least three half-baked ideas for posts. If this initiative sputters and dies in a few days, then so be it. Otherwise, I will be aiming to get five to seven posts in a row over the coming days. If this goes well enough, I may even decide to ramp up my regular once-a-week routine.
I have always been fascinated by civil defence, and more broadly the notion of “home defence” as it emerged during the two world wars and into the Cold War. There is, I think, something romantic about the image of those not fit to fight on the front lines banding together to protect cities and families, shore up static fortifications, and generally pitch in for the cause of one’s people; in everyone “Doing Their Bit”. In the Commonwealth, this is usually summed up as the “Blitz Spirit”.
I haven’t found an equivalently all-encompassing term in the American lexicon (hence why I’m using “defence” rather than “defense”), though the concept is obviously still there. Just think of the romanticism of the Minuteman rushing to defend his home town, or of your average apocalypse story. Like all romantic images, however, I fear that this false nostalgia over Civil Defence may be out of touch with reality.
This probably wouldn’t have been an issue for one such as myself who grew up well after the age of nuclear standoffs. Except somehow, while I was off in the mountains, what should have been some minor sabre rattling from North Korea has now become a brewing crisis.
Now, there is still a chance that all of this will blow over. Indeed, the opinion of most professionals (as of writing) is that it will. Yet at the same time, numerous local governments have apparently seen fit to issue new preparedness advice for citizens living in potentially targeted areas. The peculiar thing about these new guidelines: they’re almost word for word from the civil defence films and pamphlets of the previous century.
Some areas, like Hawaii, have even gone so far as to reactivate old emergency centers. Seeing new, high definition pictures of bureaucrats working on tablets and computers amid command bunkers built in the 1950s is not just sobering, it is surreal. Hearing modern government officials suggesting on television that citizens learn how to “duck and cover” would be comical, if this weren’t honestly the reality we’re in.
Just out of morbid curiosity, I decided to follow some of the advice given and try to locate likely targets in my area so that I might have some idea of what level of apocalypse I’m looking at. The answer depends on what kind of strike occurs, and also which set of numbers we believe for the DPRK’s capabilities. Let’s start with a rather conservative view.
Most scenarios in the past have assumed that any conflict with North Korea would play out as “Korean War II: Atomic Boogaloo”. That is to say, most conventional and even nuclear strikes would remain focused within the Pacific region. With as many artillery pieces as the Korean People’s Army has stationed along the DMZ, it is likely that most of the initial fighting, which would entail a Northern push towards Seoul, would be primarily conventional. That is, until the US began moving reinforcements.
Busan and other South Korean ports, as well as US bases such as Okinawa, Guam, Pearl Harbor, and Garden Island would all be major strategic targets for DPRK nuclear strikes. Most of these targets have some level of missile defense, although reports vary on how effective these might be. It seems unlikely that North Korea is capable of reliably hitting targets much further than Hawaii, though this isn’t guaranteed to stop them.
A strike on the naval base in San Diego is possible, though with the difficulty of hitting a precise target at that range, it seems equally likely that it would miss, or the North Koreans would opt for something harder to miss in the first place, like a major city. A city with major cultural importance, like Los Angeles, or a city near the edge of their range, like Chicago, would be possible targets.
While this isn’t a good outcome for me, I probably get out of this one relatively unscathed. My portfolio would take a hit, and I would probably have trouble finding things at the stores for a few months as panic set in. There’s a possibility that we would see looting and breakdown in a fashion similar to immediately after Hurricane Sandy, as panic and market shocks cause people to freak out, but that kind of speculation is outside the scope of this post.
I might end up spending some time in the basement depending on the prevailing winds, and I might have to cash in on my dual citizenship and spend some time away from the United States in order to get reliable medical treatment, as the US healthcare system would be completely overloaded, but barring some unexpected collapse, the world would go on. I give myself 80% odds of escaping unscathed.
This is a (relatively) conservative view. If we assume that the number of warheads is towards the upper bound of estimates, and that by the time judgement day comes the North Koreans have successfully miniaturized their warheads, and gotten the navigation worked out to a reasonable degree, we get a very different picture.
With only a limited number of warheads, only a handful of which will be on missiles that can reach the east coast, there will be some picking and choosing to be done on targets. Here’s the problem: strategically, there’s not really a scenario where the DPRK can strike the US and not be annihilated by the US response. They lack the resources for a war of nuclear attrition. So unless Kim Jong Un decides his best option is to go out in a suicidal blaze of glory, a massive first strike makes no sense from a military standpoint (not that such concerns are necessarily pertinent to a madman).
There are a few places near me that would almost certainly be hit in such a scenario, most notably New York City. This would almost certainly require me to hide in the basement for a while and would probably derail my posting schedule. Based on estimates of DPRK warhead size, I’m probably not in the blast radius, but I am certainly within immediate fallout distance, and quite possibly within the range of fires ignited by the flash. While I do have evacuation prospects, getting out safely would be difficult. I give myself 50% odds.
On the other hand, if the US is the aggressor, the DPRK does officially have mutual defense treaties with China. While it’s hard to say whether China’s leadership would actually be willing to go down with Pyongyang, or whether they would be willing to see the US use nuclear force to expand its hegemony in the region, if we’re considering East Asian nuclear war scenarios, China is an obvious elephant in the room that needs to be addressed.
While the US would probably still “win” a nuclear exchange with a joint PRC-DPRK force, it would be a hollow victory. US missile defenses would be unable to take down hundreds of modern rockets, and with Chinese ICBMs in play, mainland targets would be totally up for grabs. This is the doomsday scenario here.
Opinions vary on whether counter-force (i.e. military) targets would be given preference over counter-value (i.e. civilian, leadership, and cultural) targets. China’s military size, doctrine, and culture generally lend themselves to the kind of strategic and doctrinal conservatism that would prioritize military targets. However, among nations that have published their nuclear doctrine, those maintaining smaller arsenals, such as the PLA, generally lean towards a school of thought known as “minimal deterrence” over the “mutually assured destruction” of the US and Russia.
Minimal deterrence is a doctrine that holds that any level of nuclear exchange will lead to unacceptable casualties on both sides, and to this end, only a small arsenal is required to deter strikes (as opposed to MAD, which focuses on having a large enough arsenal to still have a fully capable force regardless of the first strike of an enemy). This sounds quite reasonable, until one considers the logical conclusions of this thinking.
First, because “any strike is unacceptable”, it means that any nuclear strike, regardless of whether it is counter-force or counter-value, will be met with a full counter-value response. Secondly, because it makes no provisions for surviving a counter-force first strike (like the US might launch against the DPRK or PRC), it calls for a policy of “launch on warning” rather than waiting for tit for tat casualty escalation. Or occasionally, for preemptive strikes as soon as it becomes apparent that the enemy is preparing an imminent attack.
This second part is important. Normally, this is where analysts look at things like political rhetoric, media reaction, and public perception to gauge whether an enemy first strike is imminent or not. This is why there has always been a certain predictable cadence to diplomatic and political rhetoric surrounding possible nuclear war scenarios. That rhythm determines the pulse of the actual military operations. And that is why what might otherwise be harmless banter can be profoundly destabilizing when it comes from people in power.
Anyways, for an attack on that kind of scale, I’m pretty well and truly hosed. The map of likely nuclear targets pretty well covers the entire northeast. Even if I managed to survive both the initial attack and the weeks after, during which radiation would be deadly to anyone outside for more than a few seconds, the catastrophic damage to the infrastructure that keeps the global economy running, and upon which I rely to get my highly complicated, impossible-to-recreate-without-a-post-industrial-economic-base life support medication, would mean that I would die as soon as my on-hand stockpile ran out. There’s no future for me in that world, and so there’s nothing I can do to change that. It seems a little foolish, then, to try and prepare.
Luckily, I don’t expect that an attack will be of that scale. I don’t expect that an attack will come in any case, but I’ve more or less given up on relying on sanity and normalcy to prevail for the time being. In the meantime, I suppose I shall have to look at practicing my duck and cover skills.
So I realized earlier this week, while staring at the return address stamped on the sign outside the small post office on the lower level of the resort my grandfather selected for our family trip, that we were in fact staying in the same hotel which hosted the famous Bretton Woods Conference. That conference produced the Bretton Woods System, which governed post-WWII economic rebuilding around the world and laid the groundwork for our modern economic system, helping to cement the idea of currency as we consider it today.
Needless to say, I find this intensely fascinating; both the conference itself as a gathering of some of the most powerful people at one of the major turning points in history, and the system that resulted from it. Since I can’t recall having spent any time on this subject in my high school economics course, I thought I would go over some of the highlights, along with pictures of the resort that I was able to snap.
First, some background on the conference. The Bretton Woods conference took place in July of 1944, while the Second World War was still in full swing. The allied landings in Normandy, less than a month earlier, had been successful in establishing isolated beachheads, but Operation Overlord as a whole could still fail if British, Canadian, American, and Free French forces were prevented from linking up and liberating Paris.
On the Eastern European front, the Red Army had just begun Operation Bagration, the long planned grand offensive to push Nazi forces out of the Soviet Union entirely, and begin pushing offensively through occupied Eastern Europe and into Germany. Soviet victories would continue to rack up as the conference went on, as the Red Army executed the largest and most successful offensive in its history, escalating political concerns among the western allies about the role the Soviet Union and its newly “liberated” territory could play in a postwar world.
In the Pacific, the Battle of Saipan was winding down towards an American victory, radically changing the strategic situation by putting the Japanese homeland in range of American strategic bombing. Even as the battles raged on, more and more leaders on both sides looked increasingly to the possibility of an imminent allied victory.
As the specter of rebuilding a world ravaged by the most expensive and most devastating conflict in human history (and hopefully ever) began to seem closer, representatives of all nations in the allied powers met in a resort in Bretton Woods, New Hampshire, at the foot of Mount Washington, to discuss the economic future of a postwar world in the United Nations Monetary and Financial Conference, more commonly referred to as the Bretton Woods Conference. The site was chosen because, in addition to being vacant (since the war had effectively killed tourism), the isolation of the surrounding mountains made the site suitably defensible against any sort of attack. It was hoped that this show of hospitality and safety would assuage delegates coming from war torn and occupied parts of the world.
After being told that the hotel had only 200-odd rooms for a conference of 700-odd delegates, most delegates, naturally, decided to bring their families, in many cases bringing as many extended relatives as could be admitted on diplomatic credentials. Of course, this was probably as much about escaping the ongoing horrors in Europe and Asia as it was getting a free resort vacation.
As such, every bed within a 22-mile radius was occupied. Staff were forced out of their quarters and relocated to the stable barns to make room for delegates. Even then, guests were sleeping in chairs, bathtubs, and even on the floors of the conference rooms themselves.
The conference was attended by such illustrious figures as John Maynard Keynes (yes, that Keynes) and Harry Dexter White (who, in addition to being the lead American delegate, was also almost certainly a spy for the Soviet NKVD, the forerunner to the KGB), who clashed over what, fundamentally, the allies should aim to establish in a postwar economic order.
Everyone agreed that the protectionist, mercantilist, and “economic nationalist” policies of the interwar period had contributed both to the depth of the Great Depression and to the collapse of European markets, which created the socioeconomic conditions for the rise of fascism. Everyone agreed that the punitive reparations placed on Germany after WWI had set up European governments for a cascade of defaults and collapses when Germany inevitably failed to pay up, and turned to playing fast and loose with its currency and trade policies to adhere to the letter of the Treaty of Versailles.
It was also agreed that even if reparations were done away with entirely (which would leave allied nations such as France and the British Commonwealth bankrupt for their noble efforts), the sheer upfront cost of rebuilding would be nigh impossible to meet by normal economic means, and that leaving nations to shoulder the task of rebuilding entire continents on their own would inevitably lead to the same kind of zero-sum competition and unsound monetary policy that had led to the prewar economic collapse in the first place. It was decided, then, that the only way to ensure economic stability through the period of rebuilding was to enforce universal trade policies, and to institute a number of centralized financial organizations under the purview of the United Nations to oversee postwar rebuilding and monetary policy.
The devil was in the details, however. The United States, having spent the war safe from serious economic infrastructure damage, serving as the “arsenal of democracy”, and generally being the only country that had reserves of capital, wanted to use its position of relative economic supremacy to gain permanent leverage. As the host of the conference and the de-facto lead for the western allies, the US held a great deal of negotiating power, and the US delegates fully intended to use it to see that the new world order would be one friendly to American interests.
Moreover, the US, and to a lesser degree, the United Kingdom, wanted to do as much as possible to prevent the Soviet Union from coming to dominate the world after it rebuilt itself. As World War II was beginning to wind down, the Cold War was beginning to wind up. To this end, the news of daily Soviet advances, first pushing the Nazis out of its borders, and then steamrolling into Poland, Finland, and the Baltics was troubling. Even more troubling were the rumors of the ruthless NKVD suppression of non-communist partisan groups that had resisted Nazi occupation in Eastern Europe, indicating that the Soviets might be looking to establish their own postwar hegemony.
The first major set piece of the conference agreement was relatively uncontroversial: the International Bank for Reconstruction and Development, drafted by Keynes and his committee, was established to offer grants and loans to countries recovering from the war. As an independent institution, it was hoped that the IBRD would offer flexibility to rebuilding nations that loans from other governments with their own financial and political obligations and interests could not. This was also a precursor to, and later backbone of, the Marshall Plan, in which the US would spend exorbitant amounts on foreign aid to rebuild capitalism in Europe and Asia in order to prevent the rise of communist movements fueled by lack of opportunity.
The second major set piece is where things get really complicated (I’m massively oversimplifying here, but global macroeconomic policy is inevitably complicated in places). This set piece, a proposed “International Clearing Union” devised by Keynes back in 1941, was far more controversial.
The plan, as best I am able to understand it, called for all international trade to be handled through a single centralized institution, which would measure the value of all other goods and currencies relative to a standard unit, tentatively called a “bancor”. The ICU would then offer incentives to maintain trade balances relative to the size of a nation’s economy, by charging interest off of countries with a major trade surplus, and using the excess to devalue the exchange rates of countries with trade deficits, making imports more expensive and products more desirable to overseas consumers.
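To make the balancing incentive a little more concrete, here is a toy simulation of the mechanism as I understand it. To be clear, this is purely illustrative: the function, the interest and devaluation rates, and the starting balances below are all invented for the example, and the real proposal recycled the collected interest into the devaluations, a step this sketch omits.

```python
# A toy sketch of the ICU's balancing incentive, not Keynes's actual scheme.
# All numbers and rates here are invented for illustration.

def icu_adjust(balances, interest_rate=0.05, devaluation_rate=0.03):
    """Nudge each country's trade balance (denominated in 'bancor') toward zero.

    Surplus countries are charged interest on their surplus, discouraging
    hoarding; deficit countries have their currency devalued, modelled here
    simply as the deficit shrinking as their exports become more attractive.
    """
    adjusted = {}
    for country, balance in balances.items():
        if balance > 0:
            # Interest levied on the surplus eats into it each round.
            adjusted[country] = balance * (1 - interest_rate)
        else:
            # Devaluation makes exports cheaper, closing the deficit.
            adjusted[country] = balance * (1 - devaluation_rate)
    return adjusted

# One persistent-surplus country and two deficit countries.
balances = {"A": 100.0, "B": -60.0, "C": -40.0}
for _ in range(30):  # thirty rounds of clearing
    balances = icu_adjust(balances)

# Every balance decays toward zero over repeated rounds.
print({k: round(v, 2) for k, v in balances.items()})
```

The point of the design, at least as I read it, is that neither chronic surpluses nor chronic deficits are allowed to persist; both sides of an imbalance face steady pressure back toward parity.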
The Grand Ballroom was thrown into fierce debate, and the local Boy Scouts that had been conscripted to run microphones between delegates (most of the normal staff either having been drafted, or completely overloaded) struggled to keep up with these giants of economics and diplomacy.
Unsurprisingly, the US delegate, White, was absolutely against Keynes’s harebrained scheme. Instead, he proposed a far less ambitious “International Monetary Fund”, which would judge trade balances, and prescribe limits for nations seeking aid from the IMF or IBRD, but otherwise would generally avoid intervening. The IMF did keep Keynes’s idea of judging trade based on a pre-set exchange rate (also obligatory for members), but avoided handing over the power to unilaterally affect the value of individual currencies to the IMF, instead leaving it in the hands of national governments, and merely insisting on certain requirements for aid and membership. It also did away with notions of an ultranational currency.
Of course, this raised the question of how to judge currency values other than against each other alone (which was still seen as a bridge too far in the eyes of many). The solution, proposed by White, was simple: judge other currencies against the US dollar. After all, the United States was already the largest and most developed economy. And since other countries had spent the duration of the war buying materiel from the US, it also held the world’s largest reserves of almost every currency, including gold and silver, and sovereign debt. The US was the only country to come out of WWII with enough gold in reserve to stay on the gold standard and also finance postwar rebuilding, which made it a perfect candidate as a default currency.
Now, you can see this move either as a sensible compromise for a world of countries that couldn’t have gone back to their old ways if they tried, or as a master stroke attempt by the US government to cement its supremacy at the beginning of the Cold War. Either way, it worked as a solution, both in the short term, and in the long term, creating a perfect balance of stability and flexibility in monetary policy for a postwar economic boom, not just in the US, but throughout the capitalist world.
The third set piece was a proposed “International Trade Organization”, which was to oversee implementation and enforcement of the sort of universal free trade policies that almost everyone agreed would be most conducive not only to prosperity, but to peace as a whole. Perhaps surprisingly, this wasn’t terribly divisive at the conference.
The final agreement for the ITO, however, was eventually shot down when the US Senate refused to ratify its charter, partly because the final charter, negotiated at a later conference in Havana, had incorporated many of Keynes’s earlier ideas on an International Clearing Union. Much of the basic policy of the ITO, however, influenced the successful General Agreement on Tariffs and Trade, which would later be replaced by the World Trade Organization.
The Bretton Woods agreement was signed by the allied delegates in the resort’s Gold Room. Not all countries that signed immediately ratified. The Soviet Union, perhaps unsurprisingly, reversed its position on the agreement, calling the new international organizations “a branch of Wall Street”, going on to found the Council for Mutual Economic Assistance, a forerunner to the Warsaw Pact, within five years. The British Empire, particularly its overseas possessions, also took time in ratifying, owing to the longstanding colonial trade policies that had to be dismantled in order for free trade requirements to be met.
The consensus of most economists is that Bretton Woods was a success. The system more or less ceased to exist when Nixon, prompted by Cold War drains on US resources, and French schemes to exchange all of its reserve US dollars for gold, suspended the Gold Standard for the US dollar, effectively ushering in the age of free-floating fiat currencies; that is, money that has value because we all collectively accept that it does; an assumption that underlies most of our modern economic thinking.
While it certainly didn’t last forever, the Bretton Woods system did accomplish its primary goal of setting the groundwork for a stable world economy, capable of rebuilding and maintaining the peace. This is a pretty lofty achievement when one considers the background against which the conference took place, the vast differences between the players, and the general uncertainty about the future.
The vision set forth in the Bretton Woods Conference was an incredibly optimistic, even idealistic, one. It’s easy to scoff at the idea of hammering out an entire global economic system, in less than a month, at a backwoods hotel in the White Mountains, but I think it speaks to the intense optimism and hope for the future that is often left out of the narrative of those dark moments. The belief that we can, out of chaos and despair, forge a brighter future not just for ourselves, but for all, is not in itself crazy, and the relative success of the Bretton Woods System, flawed though it certainly was, speaks to that.
Works Consulted
IMF. “60th Anniversary of Bretton Woods: Background Information — What Is the Bretton Woods Conference.” International Monetary Fund, n.d. Web. 10 Aug. 2017. <http://external.worldbankimflib.org/Bwf/whatisbw.htm>.
“Cooperation and Reconstruction (1944-71).” About the IMF: History. International Monetary Fund, n.d. Web. 10 Aug. 2017. <http://www.imf.org/external/about/histcoop.htm>
Extra Credits. YouTube playlist, n.d. Web. 10 Aug. 2017. <http://www.youtube.com/playlist?list=PLhyKYa0YJ_5CL-krstYn532QY1Ayo27s1>.
Burant, Stephen R. East Germany, a country study. Washington, D.C.: The Division, 1988. Library of Congress. Web. 10 Aug. 2017. <https://archive.org/details/eastgermanycount00bura_0>.
US Department of State. “Proceedings and Documents of the United Nations Monetary and Financial Conference, Bretton Woods, New Hampshire, July 1-22, 1944.” Proceedings and Documents of the United Nations Monetary and Financial Conference, Bretton Woods, New Hampshire, July 1-22, 1944 – FRASER – St. Louis Fed. N.p., n.d. Web. 10 Aug. 2017. <https://fraser.stlouisfed.org/title/430>.
Additional information provided by resort staff and exhibitions visited in person.
Unless I am struck by a pressing need to add something in the next few days, I reckon that part 4 of the Incremental Progress series will be the last, at least for now. I may add to it in the future, or restart it after the next conference, but for the time being I have no plans to add to it.
While this mini-series has been fun to write in some respects, it has also nearly driven me to abandon it, and possibly even take a break from writing entirely and fall back on my buffer of prewritten posts to avoid losing my postaweek credentials. Having a preselected topic and an idea of when and how I want to release stuff has some upsides, certainly, but creatively, it’s a double-edged sword.
These frustrations are amplified by my aversion to constraint. Part of this aversion is based on the unpredictable nature of my handicap, as I have described at length elsewhere, but this also cuts to the heart of my technique. My creative process, if you can call it a conscious process, is generally one of waiting for inspiration to strike me, and then writing for precisely as long as it sticks with me. This usually produces somewhere between 0.9 and 2.1 posts per week, only about 1.4 of which are truly coherent enough to be considered for publication, and my loose versions of editing and scheduling cut that down to a nice, predictable one post per week.
My capital-P Professional author contacts tell me that this frustration is a normal part of the writing process that sets in during any suitably large project that involves deadlines and staying on topic, which is to say, any project much more extensive than a casual blog. The good news is that allegedly getting through these frustrations is a large part of what separates the true masters of the art from the amateurs. That, and, you know, getting paid. But allegedly it’s the former that enables the latter down the road. I can’t really testify to that part, at least not on my own behalf.
All that said, I’m glad I decided to do this. I think it has helped me flex my writing muscles a bit, so to speak, and I am reasonably satisfied with the end result. I made the decision to split up my thoughts on the conference and structure it like I did because the alternatives would have been cutting down dramatically to only one or two subtopics, or waiting several weeks until the whole thing could be compiled and posted at once; an approach which had historically been less successful.
Starting today I will be setting off on a new set of adventures, starting with a family expedition into the White Mountains, and followed shortly by a tour of the Midwestern United States, which is expected to include reunions with several local relatives, and an attempt to view that astronomical event which has been recently dubbed by the papers as “the Great American Eclipse”.
Though I will, as always, try to maintain my habit of posting, it seems quite likely that I may miss a post or two, even after I return. I do not know whether I shall come back from these trips with new experiences to write about at length, similar to last month’s conference at Disney World, or whether the stresses of another family trip will push me over the brink and sap my creative abilities for some time.
I appreciate all the support I have gotten from this series, and hope to continue to work on similar projects in the future.
I have spent the last three parts of this series bemoaning various aspects of the cycle of medical progress for patients enduring chronic health issues. At this point, I feel it is only fair that I highlight some of the brighter spots.
I have long come to accept that human progress is, with the exception of the occasional major breakthrough, incremental in nature; a reorganization here paves the way for a streamlining there, which unlocks the capacity for a minor tweak here and there, and so on and so forth. However, while this does help adjust one’s day to day expectations from what is shown in popular media to something more realistic, it also risks minimizing the progress that is made over time.
To refer back to an example used in part 2 that everyone should be familiar with, let’s look at the progress being made on cancer. Here is a chart detailing, over a ten-year period, the rate of FDA approvals for new treatments, along with the overall average 5-year survival rate. The approval rate is a decent, if oversimplified, metric for understanding how a given patient’s options have increased, and hence how specific and targeted their treatment can be (which has the capacity to minimize disruption to quality of life).
Does this progress mean that cancer is cured? No, not even close. Is it close to being cured? Not particularly.
It’s important to note that even as these numbers tick up, we’re not intrinsically closer to a “cure”. Coronaviruses, which cause many cases of the common cold, have a mortality rate pretty darn close to zero, at least in the developed world, and that number gets closer still if we ignore “novel” coronaviruses like SARS and MERS, and focus only on the rare person who has died as a direct result of the common cold. Yet I don’t think anyone would call the common cold cured. Coronaviruses, like cancer, aren’t cured, and there’s a reasonable suspicion on the part of many that they aren’t really curable in the sense that we’d like.
“Wait,” I hear you thinking, “I thought you were going to talk about bright spots.” Well, yes. While it’s true that progress on a full cure is inconclusive at best, material progress is still being made every day, for both colds and cancer. While neither is at present curable, both are increasingly treatable, and this is where the real progress is happening. Better treatment, not cures, is where all the media buzz comes from, and why I can attend a conference about my disease year after year, hear all the horror stories of my comrades, and still walk away feeling optimistic about the future.
So, what am I optimistic about this time around, even when I know that progress is so slow coming? Well, for starters, there’s life expectancy. I’ve mentioned a few different times here that my projected lifespan is significantly shorter than the statistical average for someone of my lifestyle, medical issues excluded. While this is still true, this is becoming less true. The technology which is used for my life support is finally reaching a level of precision, in both measurement and dosing, where it can be said to genuinely mimic natural bodily functions instead of merely being an indefinite stopgap.
To take a specific example, new infusion mechanisms now allow dosing precision down to the ten-thousandth of a milliliter. For reference, the average raindrop, at between 0.5 and 4 millimeters across, holds only a few hundredths of a milliliter. Given that a single thousandth of a milliliter in either direction at the wrong time can be the difference between being a productive member of society and being dead, this is a welcome improvement.
Such improvements in delivery mechanisms have also enabled innovation on the drugs themselves, by making more targeted treatments with a smaller window for error accessible to a wider audience, which in turn makes them more commercially viable. Better drugs and dosing have likewise raised the bar for infusion cannulas, and at the conference, a new round of cannulas was already being hyped as the next big breakthrough to hit the market imminently.
In the last part I mentioned, though did not elaborate at length on, the appearance of AI-controlled artificial organs being built using DIY processes. These systems now exist, not only in laboratories, but in homes, offices, and schools, quietly taking in more data than the human mind can process, and making decisions with a level of precision and speed that humans cannot dream of achieving. We are equipping humans as cyborgs with fully autonomous robotic parts to take over functions they have lost to disease. If this does not excite you as a sure sign of the brave new future that awaits all of us, then frankly I am not sure what I can say to impress you.
Like other improvements explored here, this development isn’t so much a breakthrough as it is a culmination. After all, all of the included hardware in these systems has existed for decades. The computer algorithms are not particularly different from the calculations made daily by humans, except that they contain slightly more data and slightly fewer heuristic guesses, and can execute commands faster and more precisely than humans. The algorithms are simple enough that they can be run on a cell phone, and have an effectiveness on par with any other system in existence.
These DIY initiatives have already caused shockwaves throughout the medical device industry, for both the companies themselves, and the regulators that were previously taking their sweet time in approving new technologies, acting as a catalyst for a renewed push for commercial innovation. But deeper than this, a far greater change is also taking root: a revolution not so much in technology or application, but in thought.
If my memory and math are on point, this is my eighth year attending this particular conference, out of ten years dealing with the disease that is its topic, among other diagnoses. While neither of these stretches is long enough to truly have proper capital-H historical context, in the span of a single lifetime, especially for a relatively young person such as myself, I do believe that ten or even eight years is long enough to reflect upon in earnest.
Since I started attending this conference, but especially within the past three years, I have witnessed, and been the subject of, a shift in tone and demeanor. When I first arrived, the tone at this conference seemed to be, as one might expect, primarily one of commiseration. Yes, there was solidarity, and all the positive emotion that comes from being with people like oneself, but this was, at best, a bittersweet feeling. People were glad to have met each other, but nevertheless resentful to have been put in the unenviable circumstances that dictated their meeting.
More recently, however, I have seen and felt a growing optimism accompanying these meetings. Perhaps it is the consistently record-breaking attendance that demonstrates, if nothing else, that we stand united against the common threat to our lives, and against the political and corporate forces that would seek to hold back our progress towards being normal, fully functioning humans. Perhaps it is merely the promise of free trade show goodies and meals catered to a medically restricted diet. But I think it is something different.
While a full cure, of the sort that would allow me and my comrades to leave the life support at home, serve in the military, and the like, is still far off, today more than ever before, the future looks, if not bright, then at least survivable.
In other areas of research, one of the main genetic research efforts, which has maintained a presence at the conference, is now closing in on the genetic and environmental triggers behind the elusive autoimmune reaction known to cause the disease, and on various methods to prevent and reverse it. Serious talk of future gene therapies, the kind of science fiction that has traditionally been the stuff of comic books and film, is already ongoing. It is a strange and exciting thing to finish an episode of a science-fiction drama television series focused on near-future medical technology (and how evil minds exploit it) in my hotel room, only to walk into the conference room to see posters advertising clinical trial sign-ups and planned product releases.
It is difficult to be so optimistic in the face of incurable illness. It is even more difficult to remain optimistic after many years of only incremental progress. But pessimism too has its price. It is not the same emotional toll as the disappointment which naive expectations of an imminent cure are apt to bring; rather, it is an opportunity cost. It is the cost of missing out on adventures, of missing major life milestones, of being conservative rather than opportunistic.
Much of this pessimism, especially in the past, has been inspired and cultivated by doctors themselves. In a way, this makes sense. No doctor in their right mind is going to say “Yes, you should definitely take your savings and go on that cliff diving excursion in New Zealand.” Medicine is, by its very nature, conservative and risk averse. Much like the scientist, a doctor will avoid saying anything until after it has been tested and proven beyond a shadow of a doubt. As noted previously, this is extremely effective in achieving specific, consistent, and above all, safe, treatment results. But what about when the situation being treated is so all-encompassing in a patient’s life as to render specificity and consistency impossible?
Historically, the answer has been to impose restrictions on patients’ lifestyles. If laboratory conditions don’t align with real life for patients, then we’ll simply change the patients. This approach can work, at least for a while. But patients are people, and people are messy. Moreover, when patients include children and adolescents, who, for better or worse, are generally inclined to pursue short term comfort over vague notions of future health, patients will rebel. Thus, to many, trading ten years at the end of one’s life for the ability to live the remainder more comfortably eventually seems like the more balanced proposition.
The concept of such a tradeoff is inevitably controversial. I personally take no particular position on it, other than that it is a true tragedy of the highest proportion that anyone should be forced into such a situation. With that firmly stated, many of the recent breakthroughs, particularly in new delivery mechanisms and patient comfort, and especially in the rapidly growing DIY movement, have focused on this tradeoff. The thinking has shifted from a “top-down” approach of finding a full cure, to a more grassroots approach of making life more livable now, and making inroads into future scientific progress at a later date. It is no surprise that many of the groups dominating this new push have either been grassroots nonprofits or, where they have been commercial, primarily Silicon Valley-style, engineer-founded startups.
This in itself is already a fairly appreciable and innovative thesis on modern progress, yet one I think has been tossed around enough to be reasonably defensible. But I will go a step further. I submit that much of the optimism and positivity, the empowerment and liberation, that has been the consistent takeaway of myself and other authors from this and similar conferences, and that I believe has become more intensely palpable in recent years than when I began attending, has been the result of this same shift in thinking.
Instead of competing against each other and shaming each other over inevitable bad blood test results, as was my primary complaint during conferences past, the new spirit is one of camaraderie and solidarity. It is now increasingly understood at such gatherings, and among medical professionals in general, that fear and shame tactics are not effective in the long run, and do nothing to mitigate the damage of patients deciding that survival at the cost of living simply isn’t worth it [1]. Thus the focus has shifted from commiseration over common setbacks, to collaboration and celebration over common victories.
Thus it will be seen that the feeling of progress, and hence of hope for the future, lies not so much in renewed pushes for a cure as in more targeted treatments and better quality of life. Long term patients such as myself have largely given up hope in the vague, messianic cure, to be discovered all at once at some undetermined future date. Instead, our hope for a better future, indeed for a future at all, lies in the incremental, but critically, consistent improvement upon the technologies which we are already using, and which have already been proven. Our hope lies in understanding that bad days and failures will inevitably come, and in supporting, not shaming, each other when they do.
While this may not qualify as strictly optimistic, as it does entail a certain degree of pragmatic fatalism in accepting the realities of disabled life, it is the closest I have yet come to optimism. It is a determination that even if things will not be good, they will at least be better. This mindset, unlike rooting for a cure, does not require constant fanatical dedication to fundraising, nor does it breed innovation fatigue from watching the scientific media like a hawk, because it prioritizes the imminent, material, incremental progress of today over the faraway promises of tomorrow.
[1] Footnote: I credit the proximal cause of this cognitive shift in the conference to the progressive aging of the attendee population, and more broadly, to the aging and expanding afflicted population. As more people find themselves in the situation of a “tradeoff” as described above, the focus of care inevitably shifts from disciplinarian deterrence and prevention to one of harm reduction. This is especially true of those coming into the 13-25 demographic, who seem most likely to undertake such acts of “rebellion”. This is, perhaps unsurprisingly, one of the fastest growing demographics for attendance at this particular conference over the last several years, as patients who began attending in childhood come of age.
This post is a bit of a hodgepodge hot mess, because after three days of intense writers’ block, I realized at 10:00pm that there were a number of things I really did need to address today, and that being timely in this case was more important than being perfectly organized in presentation.
First, Happy Esther Day. For those not well versed on internet age holidays, Esther Day, August 3rd, so chosen by the late Esther Earl (who one may know as the dedicatee of and partial inspiration for the book The Fault In Our Stars), is a day on which to recognize all the people one loves in a non-romantic way. This includes family, but also friends, teachers, mentors, doctors, and the like; basically it is a day to recognize all important relationships not covered by Valentine’s Day.
I certainly have my work cut out for me, given that I have received a great deal of love and compassion throughout my life, and especially during my darker hours. In fact, it would not be an exaggeration to say that on several occasions, I would not have survived but for the love of those around me.
Of course, it’s been oft-noted that, particularly in our western culture, this holiday creates all manner of awkward moments, especially where it involves gender. A man is expected not to talk at great length about his feelings in general, and telling someone of the opposite gender that one loves them either creates all sorts of unhelpful ambiguity from a romantic perspective, or, if clarified, opens up a whole can of worms involving relationship stereotypes that no one, least of all a socially awkward writer like myself, wants to touch with a thirty-nine-and-a-half-foot pole. So I won’t.
I do still want to participate in Esther Day, as uncomfortable as the execution makes me, because I believe in its message, and I believe in the legacy that Esther Earl left us. So, to the people who read this and participate in this blog by enjoying it, especially those who have gotten in touch specifically to say so; to those of you whom I have had the pleasure of meeting in person, and to those whom I have never met but by proxy: I love you. You are an important part of my life, and the value you (hopefully) get from being here adds value to my life.
In tangentially related news…
Earlier this week this blog passed an important milestone: We witnessed the first crisis that required me to summon technical support. I had known that this day would eventually come, though I did not expect it so soon, nor to happen the way it did.
The proximal cause of this minor disaster was apparently a fault in an outdated third-party plugin I had foolishly installed and activated some six weeks ago, because it promised to enable certain features which would have made the rollout of a few of my ongoing projects for this place easier and cleaner. In my defense, the reviews prior to 2012, when the code author apparently abandoned the plugin, were all positive, and the ones after were scarce enough that I reckoned the chances of such a problem occurring to me were acceptably low.
Also, for the record, when I cautiously activated the plugin some six weeks ago during a time of day when visitors are relatively few and far between, it did seem to work fine. Indeed, it did work perfectly fine, right up until Monday, when it suddenly didn’t. Exactly what caused the crash to happen precisely then and not earlier (or never) wasn’t explained to me, presumably because it involves a far more in-depth understanding of the inner workings of the internet than I am able to parse at this time.
The distal cause of this whole affair is that, with computers as with many aspects of my life, I am just savvy enough to get myself into trouble, without having the education or training to get myself out of it. This is a recurring theme in my life, to the point where it has become a default comment by teachers on my report cards. Unfortunately, being aware of this phenomenon does little to help me avoid it. Which is to say, I expect similar server problems are probably in my future, at least until such time as I actually get around to taking courses in coding, or find a way to hire someone to write code for me.
On the subject of milestones and absurdly optimistic plans: after much waffling back and forth, culminating in an outright dare from my close friends, I launched an official Patreon page for this blog. Patreon, for those not well acquainted with the evolving economics of online content creation, is a service which allows creators (such as myself) to accept monthly contributions from supporters. I have added a new page to the sidebar explaining this in more detail.
I do not expect that I shall make a living off this. In point of fact, I will be pleasantly surprised if the site hosting pays for itself. I am mostly setting this up now so that it exists in the future on the off chance that some future post of mine is mentioned somewhere prominent, attracting overnight popularity. Also, I like having a claim, however tenuous, to being a professional writer like Shakespeare or Machiavelli.
Neither of these announcements changes anything substantial on this website. Everything will continue to be published on the same (non-)schedule, and will continue to be publicly accessible as before. Think of the Patreon page like a tip jar; if you like my stuff and want to indulge me, you can, but you’re under no obligation.
There is one thing that will be changing soon. I intend to begin publishing some of my fictional works in addition to my regular nonfiction commentary. Similar to the mindset behind my writing blog posts in the first place, this is partially at the behest of those close to me, and partially out of a Pascal’s Wager type logic: even if only one person enjoys what I publish, and there is no real downside to publishing, that in itself makes the utilitarian calculation worth it.
Though I don’t have a planned release date or schedule for this venture, I want to put it out as something I’m planning to move forward with, both in order to nail my colors to the mast to motivate myself, and also to help contextualize the Patreon launch.
The first fictional venture will be a serial story, which is the kind of venture that having a Patreon page already set up is useful for, since serial stories can be discovered partway through and gain mass support overnight more so than blogs usually do. Again, I don’t expect fame and fortune to follow my first venture into serial fiction. But I am willing to leave the door open for them going forward.
Previously, I have talked about some of the ways that patients with chronic health issues and medical disabilities feel impacted by the research cycle. Part one of this ongoing series detailed a discussion I participated in at an ad-hoc support group of 18-21 year olds at a major health conference. Part two detailed some of the things I wish I had been able to add, based on my own experiences and the words of those around me, but could not due to time constraints.
After talking at length about the patient side of things, I’d like to pivot slightly to the clinical side. If we go by what most patients know about the clinical research process, here is a rough picture of how things work:
First, a conclave of elite doctors and professors gathers in secret, presumably in a poorly lit conference room deep beneath the surface of the earth, and holds a brainstorming session of possible questions to study. Illicit substances may or may not be involved in this process, as the creativity required to come up with such obscure and esoteric concerns as “why do certain subspecies of rats have funny looking brains?” and “why do stressful things make people act stressed out?” is immense. At the end of the session, all of the ideas are written down on pieces of parchment, thrown inside a hat, and drawn randomly to decide who will study what.
Second, money is extracted from the public at large by showing people on the street pictures of cute, sad looking children being held at needle-point by an ominously dressed person in a lab coat, with the threat that unless that person hands over all of their disposable income, the child will be forced to receive several injections per day. This process is repeated until a large enough pile of cash is acquired. The cash is then passed through a series of middlemen in dark suits smoking cigars, who all take a small cut for all their hard work of carrying the big pile of cash.
At this point, the cash is loaded onto a private jet and flown out to the remote laboratories hidden deep in the Brazilian rainforests, the barren Australian deserts, the lost islands of the Arctic and Antarctic regions, and inside the active volcanoes of the Pacific islands. These facilities are pristine, shining snow white and steel grey, outfitted with all the latest technology from a mid-century science fiction film. All of these facilities are funded either by national governments, or the rich elite of major multinational corporations, who see to all of the upkeep and grant work, leaving only the truly groundbreaking work to the trained scientists.
And who are the scientists? The scientist is a curious creature. First observed in 1543, the scientist was hypothesized by naturalists to be a former human transmogrified by the devil himself in a Faustian bargain, whereby the subject loses most interpersonal skills and material wealth in exchange for incredible intelligence and a steady, monotonous career playing with glassware and measuring equipment. No one has ever seen a scientist in real life, although much footage exists of the scientist online, usually flaunting its immense funding and wearing its trademark lab coat and glasses. Because of the abundance of such footage, yet the lack of real-life interactions, it has been speculated that scientists may possess some manner of cloaking which renders them invisible and inaudible outside of their native habitat.
The scientists spend their time exchanging various colored fluid between Erlenmeyer flasks and test tubes, watching to see which produces the best colors. When the best colors are found, a large brazier is lit with all of the paper currency acquired earlier. The photons from the fire reaction may, if the stars are properly aligned, hit the colored fluid in such a way as to cause the fluid to begin to bubble and change into a different color. If this happens often enough, the experiment is called a success.
The scientists spend the rest of their time meticulously recording the precise color that was achieved, which will provide the necessary data for analyst teams to divine the answers to the questions asked. These records are kept not in English, or any other commonly spoken language, but in Scientific, which is written and understood by only a handful of non-scientists, mainly doctors, teachers, and engineers. The process of translation is arduous, and in order to be fully encrypted requires several teams working in tandem. This process is called peer review, and, at least theoretically, this method makes it far more difficult to publish false information, because the arduousness of the process provides an insurmountable barrier to those motivated by anything other than the purest truth.
Now, obviously all of this is complete fiction. But the fact that I can make all of this up with a straight face speaks volumes, both about the lack of public understanding of how modern clinical research works, and the lack of transparency of the research itself. For as much as we cheer on the march of scientific advancement and technological development, for as much media attention as is spent on new results hot off the presses, and for as much as the stock image of the bespectacled expert adorned in a lab coat and armed with test tubes resounds in both popular culture and the popular consciousness, the actual details of what research is being done, and how it is being executed, are notably opaque.
Much of this is by design, or is a direct consequence of how research is structured. The scientific method by which we separate fact from fiction demands a level of rigor, discipline, and restraint that is often antithetical to human nature. A properly organized double-blind controlled trial, the cornerstone of true scientific research, requires that the participants and even the scientists measuring results be kept in the dark as to what they are looking for, to prevent even the subtlest of unconscious biases from interfering. This approach, while great at testing hypotheses, means that the full story is known only to a handful of supervisors until the results are ready to be published.
The standard of scientific writing is also incredibly rigorous. In professional writing, a scientist is not permitted to make any claims or assumptions unless either they have just proven it themselves, in which case they are expected to provide full details of their data and methodology, or can directly cite a study that did so. For example, a scientist cannot simply say that the sky is blue, no matter how obvious this may seem. Nor even can a scientist refer to some other publication in which the author agreed that the sky is blue, like a journalist might while providing citations for a story. A scientist must find the original data proving that the sky is blue, that it is consistently blue, and so forth, and provide the documentation for others to cross check the claims themselves.
These standards are not only obligatory for those who wish to receive recognition and funding, but they are enforced for accreditation and publication in the first place. This mindset has only become more entrenched as economic circumstances have caused funding to become more scarce, and as political and cultural pressure have cast doubts on “mainstream institutions” like academia and major research organizations. Scientists are trained to only give the most defensible claims, in the most impersonal of words, and only in the narrow context for which they are responsible for studying. Unfortunately, although this process is unquestionably effective at testing complex hypotheses, it is antithetical to the nature of everyday discourse.
It is not, as my colleague said during our conference session, that “scientists suck at marketing”, but rather that marketing is fundamentally incongruous with the mindset required for scientific research. Scientific literature ideally attempts to lay out the evidence with as little human perspective as possible, and let the facts speak for themselves, while marketing is in many respects the art of conjuring and manipulating human perspective, even where such perspectives may diverge from reality.
Moreover, the consumerist mindset of our capitalist society amplifies this discrepancy. The constant arms race between advertisers, media, and political factions means that we are awash in information. This information is targeted to us, adjusted to our preferences, and continually served up on a silver platter. We are taught that our arbitrary personal views are fundamentally righteous, that we have no need to change our views unless it suits us, and that if there is really something that requires any sort of action or thought on our part, that it will be similarly presented in a pleasant, custom tailored way. In essence, we are taught to ignore things that require intellectual investment, or challenge our worldview.
There is also the nature of funding. Because it is so difficult to ensure that trials are actually controlled, and to write the results in such a counterintuitive way, the costs of good research can be staggering, and finding funding can be a real struggle. Scientists may be forced to work under restrictions, or to tailor their research to only the most profitable applications. Results may not be shared to prevent infringement, or to ensure that everyone citing the results is made to pay a fee first. I could spend pages on different stories of technologies that could have benefited humanity, but were kept under wraps for commercial or political reasons.
But of course, it’s easy to rag on antisocial scientists and pharmaceutical companies. And it doesn’t really get to the heart of the problem. The problem is that, for most patients, especially those who aren’t enrolled in clinical trials, and don’t necessarily have access to the latest devices, the whole world of research is a black hole into which money is poured with no apparent benefit in return. If they follow the news, or hear about it from excited friends and relations (see previous section), they might be aware of a few very specific discoveries, usually involving curing one or two rats out of a dozen tries.
Perhaps, if they are inclined towards optimism, they will be able to look at the trend over the last several decades towards better technology and better outcomes. But in most cases, the truly noticeable everyday changes seem to occur only long after they have become obvious to the users. The process from patient complaints about a medical device to a market product is agonizingly slow, especially in a non-critical area like usability and quality of life, which does not carry the same profit incentive for insurers to apply pressure.
Many of these issues aren’t research problems so much as manufacturing and distribution problems. The bottleneck in making most usability tweaks, the ones that patients notice and appreciate, isn’t in research, or even usually in engineering, but in getting a whole new product approved by executives, shareholders, and of course, regulatory bodies. (Again, this is another topic that I could, and probably will at some future date, rant on about for several pages, but suffice it to say that when US companies complain about innovation being held up by the FDA, their complaints are not entirely without merit).
Even after such processes are eventually finished, there is the problem of insurance. Insurance companies are, naturally, incredibly averse to spending money on anything unless and until it has been proven beyond a shadow of a doubt that it is not only safe, but cost effective. Especially for basic, low income plans, change can come at a glacial pace, and for state-funded services, convincing legislators to adjust statutes to permit funding for new innovations can be a major political battle. This doesn’t even begin to take into account the various negotiated deals and alliances between certain providers and manufacturers that make it harder for new breakthroughs to gain traction (Another good topic for a different post).
But these are economic problems, not research problems. For that matter, most of the supposed research problems are simply perception problems. Why am I talking about markets and marketing when I said I was going to talk about research?
Because for most people, the notions of “science” and “progress” are synonymous. We are constantly told, by our politicians, by our insurers, by our doctors, and by our professors that not only do we have the very best level of care that has ever been available in human history, but that we also have the most diligent, most efficient, most powerful organizations and institutions working tirelessly on our behalf to constantly push forward the frontier. If we take both of these statements at face value, then it follows that anything that we do not already have is a research problem.
For as much talk as there was during our conference sessions about how difficult life was, how so very badly we all wanted change, and how disappointed and discouraged we have felt over the lack of apparent progress, it might be easy to overlook the fact that far better technologies than are currently used by anyone in that room already exist. At this very moment, there are patients going about their lives using systems that amount to AI-controlled artificial organs. These systems react faster and more accurately than humans could ever hope to, and the clinical results are obvious.
The catch? None of these systems are commercially available. None of them have even been submitted to the FDA. A handful of these systems are open source DIY projects, and so can be cobbled together by interested patients, though in many cases this requires patients to go against medical advice, and take on more engineering and technical responsibility than is considered normal for a patient. Others are in clinical trials, or more often, have successfully completed their trials and are waiting for manufacturers to begin the FDA approval process.
This bottleneck, combined with the requisite rigor of clinical trials themselves, is what has given rise to the stereotype that modern research is primarily chasing after its own tail. This perception makes even realistic progress seem far off, and makes it all the more difficult to appreciate what incremental improvements are released.
This is part two of a multi-part perspective on patient engagement in charity and research. Though not strictly required, it is strongly recommended that you read part one before continuing.
The vague pretense of order in the conversation, created by the presence of the few convention staff members, broke all at once, as several dozen eighteen-to-twenty-one-year-olds all rushed to get in their two cents on the topic of fundraising burnout (see previous section). Naturally this was precisely the moment when I struck upon what I wanted to say. The jumbled thoughts and feelings that had hinted at something to add while other people were talking suddenly crystallized into a handful of points I wanted to make, all clustered around a phrase I had heard a few years earlier.
Not one to interrupt, and wanting undivided attention when making my point, I waited for the cacophony of discordant voices to become more organized. And, taking a cue from similar moments earlier in my life when I had something to contribute before a group, I raised my hand and waited for silence.
Although the conversation was eventually brought back under control by some of the staff, I never got a chance to make my points. The block of time we had been allotted in the conference room ran out, and the hotel staff were anxious to get the room cleared and organized for the next group.
And yet, I still had my points to make. They still resonated within me, and I honestly believed that they might be both relevant and of interest to the other people who were in that room. I took out my phone and jotted down the two words which I had pulled from the depths of my memory: Innovation Fatigue.
That phrase has actually come to mean several different things to different groups, and so I shall spend a moment on etymology before moving forward. In research groups and think tanks, the phrase is essentially a stand-in for generic mental and psychological fatigue. In the corporate world, it describes a phenomenon of diminishing returns on creative, “innovative” projects, often brought about by attempts to force “innovation” on a regular schedule. More broadly, the phrase has come to mean an opposition to “innovation” when used as a buzzword, in the same vein as “synergy” and “ideate”.
I first came across this term in a webcomic of all places, where it was used in a science fiction context to explain why the society depicted, which has advanced technology such as humanoid robots, neurally integrated prostheses, luxury commercial space travel, and artificial intelligence, is so similar to our own, at least culturally. That is to say, technology continues to advance at the exponential pace that it has across recorded history, but in a primarily incremental manner, and therefore most people, either out of learned complacency or a psychological defense mechanism to avoid constant hysteria, act as though all is as it always has been, and are not impressed or excited by the prospects of the future.
In addition to the feeling of fundraising burnout detailed in part one, I often find that I suffer from innovation fatigue as presented in the comic, particularly when it comes to medical research that ought to directly affect my quality of life, or promises to in the future. And what I heard from other patients during our young adults sessions has led me to believe that this is a fairly common feeling.
It is easy to be pessimistic about the long term outlook with chronic health issues. Almost definitionally, the outlook is worse than average, and the nature of human biology is such that the long term outlook is often dictated by the tools we have today. After all, even if the messianic cure arrives perfectly on schedule in five to ten years (for the record, the cure has been ten years away for the last half-century), that may not matter if things take a sharp turn for the worse six months from now. Everyone already knows someone for whom the cure came too late. And since the best way to predict future results, we are told, is from past behavior, then it would be accurate to say that no serious progress is likely to be made before it is too late.
This is not to say that progress is not being made. On the contrary, scientific progress is continuous and universal across all fields. Over the past decade alone, there has been consistent, exponential progress in not only quality of care and quality of health outcomes, but quality of life. Disease, where it is not less frequent, is less impactful. Nor is this progress being made in secret. Indeed, amid all the headlines about radical new treatment options, it can be easy to forget that the diseases they are made to treat still have a massive impact. And this is precisely part of the problem.
To take an example that will be familiar to a wider audience: cancer. It seems that in a given week, there is at least one segment on the evening TV news about some new treatment, early detection method, or some substance or habit to avoid in order to minimize one’s risk. Sometimes these segments play every day, or even multiple times per day. In checking my online news feed, one in every four stories concerned progress against cancer; to be precise, one was a list of habits to avoid, and another was about a “revolutionary treatment [that] offers new hope to patients”.
If you had just been diagnosed with cancer, you would be forgiven for thinking that, with all this seemingly daily progress, the path forward would be relatively simple and easy to understand. And it would be easy for one who knows nothing else to get the impression that cancer treatment is fundamentally easy nowadays. This is obviously untrue, or at least, grossly misleading. Even as cancer treatments become more effective and better targeted, the impact on life and lifestyle remains massive.
It is all well and good to be optimistic about the future. For my part, I enjoy tales about the great big beautiful tomorrow shining at the end of the day as much as anyone. Inasmuch as I have a job, it is talking to people about new and exciting innovations in their medical field, and how they can best take advantage of them as soon as possible for the least cost possible. I don’t get paid to do this; I volunteer because I am passionate about keeping progress moving forward, and because some people have found that my viewpoint and manner of expression are uniquely helpful.
However, this cycle of minor discoveries, followed by a great deal of public overstatement and media excitement which never (or at least, so seldom as to appear never) quite lives up to the hype, is exhausting. Active hoping in the short term, as distinct from long-term hope for future change, is acutely exhausting. Moreover, the routine of having to answer every minor breakthrough with some statement to interested, but not personally versed, friends and relations, who see media hyperbole about (steps towards) a cure and immediately begin rejoicing, is quite tiring.
Furthermore, these almost weekly interactions, in addition to carrying all of the normal pitfalls of socio-familial transactions, have a unique capability to color the perceptions of those who are closest to oneself. The people who are excited about these announcements because they know, or else believe, that they represent an end to, or at least a decrease in, one’s medical burden, are often among those whom one wishes least to alienate with casual pessimism.
For indeed, failing to respond with appropriate zeal to each and every announcement does lead to public branding of pessimism, even depression. Or worse: it suggests that one is not taking all appropriate actions to combat one’s disease, and therefore is undeserving of sympathy and support. After all, if the person on the TV says that cancer is curable nowadays, and your cancer hasn’t been cured yet, it must be because you’re not trying hard enough. Clearly you don’t deserve my tax dollars and donations to fund your treatment and research. After all, you don’t really need it anymore. Possibly you are deliberately causing harm to yourself, and therefore are insane, and I needn’t listen to anything you say to the contrary. Hopefully, it is easy to see how frustrating this dynamic can become, even when it is not quite so exaggerated to the point of satire.
One of the phrases that I heard repeated a lot at the conference was “patient investment in research and treatment”. When patients aren’t willing to invest emotionally and mentally in their own treatment, in their own wellbeing, the problems are obvious. To me, the cause, or at least one of the causes, is equally obvious. Patients aren’t willing to invest because it is a risky investment. The up-front cost of pinning all of the hopes and dreams for one’s future on a research hypothesis is enormous. The risk is high, as anyone who has studied the economics of research and development knows. Payouts aren’t guaranteed, and when they do come, they will be incremental.
Patients who aren’t “investing” in state of the art care aren’t doing so because they don’t want to get better care. They aren’t investing because they either haven’t been convinced that it is a worthwhile investment, or are emotionally and psychologically spent. They have tried investing, and have lost out. They have developed innovation fatigue. Tired of incremental progress which does not offer enough payback to earnestly plan for a better future, they turn instead to what they know to be stable: the pessimism here and now. Pessimism isn’t nearly as shiny or enticing, and it doesn’t offer the slim chance of an enormous payout, but it is reliable and predictable.
This is the real tragedy of disability, and I am not surprised in the slightest that, now that sufficient treatments have been discovered to enable what amount to endlessly repeatable stopgaps, but not a full cure, researchers, medical professionals, and patients themselves have begun to encounter this problem. The incremental nature of progress, the exaggeratory nature of popular media, and the basic nature of humans in society amplify this problem and cause it to concentrate and calcify into the form of innovation fatigue.
Yes, I know I said that I would continue with the Incremental Progress series with my next post. It is coming, probably over or near the weekend, as that seems to be my approximate unwritten schedule. But I would be remiss if I failed to mark today of all days somehow on here.
The twentieth of July, two thousand and seven. A date which I shall be reminded of for as long as I live. The date that I define as the abrupt end of my childhood and the beginning of my current identity. The date which is a strong contender for the absolute worst day of my life, and would win hands down save for the fact that I slipped out of consciousness due to overwhelming pain, and remained in a coma through the next day.
It is the day that is marked in my calendar simply as “Victory Day”, because on that day, I did two things. First, I beat the odds on what was, according to my doctors, a coin toss over whether I would live or die. Second, it was the day that I became a survivor, and swore to myself that I would keep surviving.
I was in enough pain and misery that day, that I know I could have very easily given up. My respiratory system was already failing, and it would have been easy enough to simply stop giving the effort to keep breathing. It might have even been the less painful option. But as close as I already felt to the abyss, I decided I would go no further. I kept fighting, as I have kept fighting ever since.
I call this date Victory Day in my calendar, partly because of the victory that I won then, but also because each year, each annual observance, is another victory in itself. Each year still alive is a noteworthy triumph. I am still breathing, and while that may not mean much for people who have never had to endure as I have endured, it is certainly not nothing.
I know it’s not nothing, partly because this year I got a medal for surviving ten years. The medals are produced by one of the many multinational pharmaceutical corporations on which I depend for my continued existence, and date back a few decades, to when ten years was about the upper bound for life expectancy with this disease.
Getting a medal for surviving provokes a lot of bizarre feelings. Or perhaps I should say, it amplifies them, since it acts as a physical token of my annual Victory Day observances. This has always been a bittersweet occasion. It reminds me of what my life used to be like before the twentieth of July, two thousand and seven, and of the pain that I endured the day I nearly died, pain that I work so diligently to avoid. In short, it reminds me why I fight.
Today we’re trying something a little bit different. The conference I recently attended has given me lots of ideas along similar lines for things to write about, mostly centered around the notion of medical progress, which incidentally seems to have become a recurring theme on this blog. Based on several conversations I had at the conference, I know that this topic is important to a lot of people, and I have been told that I would be a good person to write about it.
Rather than waiting several weeks in order to finish one super-long post, and probably forgetting half of what I intended to write, I am planning to divide this topic into several sections. I don’t know whether this approach will prove better or worse, but after receiving much positive feedback on my writing in general and this blog specifically, it is something I am willing to try. It is my intention that these will be posted sequentially, though I reserve the right to mix that up if something pertinent crops up, or if I get sick of writing about the same topic. So, here goes.
“I’m feeling fundraising burnout,” announced one of the boys in our group, leaning into the rough circle that our chairs had been drawn into in the center of the conference room. “I’m tired of raising money and advocating for a cure that just isn’t coming. It’s been just around the corner since I was diagnosed, and it isn’t any closer.”
The nominal topic of our session, reserved for those aged 18-21 at the conference, was “Adulting 101”, though this was as much a placeholder name as anything. We were told that we were free to talk about anything that we felt needed to be said, and in practice this anarchy led mostly to a prolonged ritual of denouncing parents, teachers, doctors, insurance, employers, lawyers, law enforcement, bureaucrats, younger siblings, older siblings, friends both former and current, and anyone else who wasn’t represented in the room. The psychologist attached to the 18-21 group tried to steer the discussion towards the traditional topics: hopes, fears, and avoiding the ever-looming specter of burnout.
For those unfamiliar with chronic diseases, burnout is pretty much exactly what it sounds like. When someone experiences burnout, their morale is broken. They can no longer muster the will to fight; to keep to the strict routines and discipline that is required to stay alive despite medical issues. Without a strong support system to fall back on while recovering, this can have immediate and deadly consequences, although in most cases the effects are not seen until several years later, when organs and nervous tissue begin to fail prematurely.
Burnout isn’t the same thing as surrendering. Surrender happens all at once, whereas burnout can occur over months or even years. People with burnout aren’t necessarily suicidal, or even inclined towards self-harm, even if they are cognizant of the consequences of their choices. Burnout is not the commander striking their colors, but the soldiers themselves gradually refusing to follow tough orders, possibly refusing to obey at all. Like the gradual loss of morale and organization by units in combat, burnout is considered in many respects to be inevitable to some degree or another.
Because of the inherent stigma attached to medical complications, burnout is always a topic of discussion at large gatherings, though often not one that people are apt to openly admit to. Fundraising burnout, on the other hand, proved a fertile ground for an interesting discussion.
The popular conception of disabled or medically afflicted people, especially young people, as being human bastions of charity and compassion, has come under a great deal of critique recently (see The Fault in Our Stars, Speechless, et al). Despite this, it remains a popular trope.
For my part, I am ambivalent. There are definitely worse stereotypes than being too humanitarian, and, for what it is worth, there does seem to be some correlation between medical affliction and medical fundraising. But I am inclined to believe that attributing this correlation to an inherent or acquired surplus of human spirit in afflicted persons is a case of reverse causality. That is to say, disabled people aren’t more inclined to focus on charity, but rather charity is more inclined to focus on them.
Indeed, for many people, myself included, ostensibly charitable acts are often taken with selfish aims. Yes, there are plenty of incidental benefits to curing a disease, any disease, that happens to affect millions in addition to oneself. But mainly it is about erasing the pains which one feels on a daily basis.
Moreover, the fact that such charitable organizations will continue to advance progress largely regardless of the individual contributions of one or two afflicted persons, combined with the popular stereotype that disabled people ought naturally to actively support the charities that claim to represent them, has created (according to the consensus of our group, at least) a feeling of profound guilt among those who fail to make a meaningful contribution. Given the scale on which these charities and research organizations operate, a “meaningful” contribution generally translates to an annual donation of tens or even hundreds of thousands of dollars, plus several hours of public appearances, constant queries to political representatives, and steadfast mental and spiritual commitment. Those who fail to contribute on this scale are thus left with immense feelings of guilt for benefiting from research which they never meaningfully supported. Paradoxically, these feelings are more, rather than less, likely to appear when giving a small contribution than when giving none at all, because, after all, out of sight, out of mind.
“At least from a research point of view, it does make a difference,” interjected a second boy, a student working as a lab technician in one of the research centers in question. “If we’re in the lab testing ten samples for a reaction, that extra two hundred dollars can mean an extra eleventh sample gets tested.”
“Then why don’t we get told that?” the first boy countered. “If I knew my money was going to buy an extra Petri dish in a lab, I might be more motivated than just throwing my money towards a cure that never gets any closer.”
The student threw up his hands in resignation. “Because scientists suck at marketing.”
“It’s to try and appeal to the masses,” someone else added, the cynicism in his tone palpable. “Most people are dumb and won’t understand what that means. They get motivated by ‘finding the cure’, not paying for toilet paper in some lab.”
Everyone in that room admitted that they had felt some degree of guilt over not fundraising more, myself included. This seemed to remain true regardless of whether the person in question was themselves disabled or merely related to one who was, or how much they had done for ‘the cause’ in recent memory. The fact that charity marketing did so much to emphasize how even minor contributions were relevant to saving lives only increased these feelings. The terms “survivor’s guilt” and “post-traumatic stress disorder” got tossed around a lot.
The consensus was that rather than act as a catalyst for further action, these feelings were more likely to lead to a sense of hopelessness in the future, which is amplified by the continuously disappointing news on the research front. Progress continues, certainly, and this important point of order was brought up repeatedly; but never a cure. Despite walking, cycling, fundraising, hoping, and praying for a cure, none has materialized, and none seem particularly closer than a decade ago.
This sense of hopelessness has led, naturally, to disengagement and resentment, which in turn leads to a disinclination to continue fundraising efforts. After all, if there’s not going to be visible progress either way, why waste the time and money? This is, of course, a self-fulfilling prophecy, since less money and engagement leads to less research, which means less progress, and so forth. Furthermore, if patients themselves, who are seen, rightly or wrongly, as the public face of, and therefore most important advocate of, said organizations, seem to be disinterested, what motivation is there for those with no direct connection to the disease to care? Why should wealthy donors allocate large but still limited donations to a charity that no one seems interested in? Why should politicians bother keeping up research funding, or worse, funding for the medical care itself?
Despite having just discussed at length the dangers of fundraising burnout, we never found a decent resolution for it. The psychologist on hand raised the possibility of non-financial contributions, such as volunteering and engaging in clinical trials, or of bypassing charity research and its false advertising entirely, and contributing to more direct initiatives to improve quality of life, such as support groups, patient advocacy, and the like. Although these were decent ideas on paper, none of them really caught the imagination of the group. The benefit created by being present and offering solidarity during support sessions, while certainly real, isn’t quite as tangible as donating a certain number of thousands of dollars to charity, nor is it as publicly valued and socially rewarded.
It seems that fundraising, and the psychological complexities that come with it, are an inevitable part of how research, and hence progress, happens in our society. This is unfortunate, because it adds an additional stressor to patients, who may feel as though the future of the world, in addition to their own future, is resting on their ability to part others from their money. This obsession, even if it does produce short term results, cannot be healthy, and the consensus seems to be that it isn’t. However, this seems to be part of the price of progress nowadays.
This is the first part of a multi-part commentary on patient perspective (specifically, my perspective) on the fundraising and research cycle, and more specifically how the larger cause of trying to cure diseases fits in with a more individual perspective, which I have started writing as a result of a conference I attended recently. Additional segments will be posted at a later date.