Is Donated Blood a Gift or a Commodity?

By Ben Belek

In the spring of 2013, my son, then 5 months old, became very sick. It took just under a month of anxious head-scratching before doctors finally diagnosed his condition as Kawasaki disease, an uncommon form of autoimmune inflammatory vasculitis.

Once his diagnosis was confirmed, he was treated through a procedure known as intravenous immunoglobulin therapy, in which antibodies collected from the blood plasma of between 1,000 and 10,000 donors are injected intravenously into the patient. To my partner’s and my relief, the treatment worked: He got better almost immediately.

Once this crisis had passed, I began to marvel at the incredible blood donation system that had made his recovery possible.

How did these trillions of potent proteins, originating in thousands of human bodies, find their way to my son’s blood circulation? What historical, social, and economic conditions enabled this extraordinary exchange of substances? As a social and medical anthropologist, these questions were the prisms that guided my subsequent dive into the fascinating world of the blood industry.

Blood has long been a focus for anthropological thinking. Anthropologists in the early 20th century were mostly interested in blood as a metaphor, a powerful symbol across cultures of vitality, kinship, sanctity, or defilement. In the past two decades, however, anthropologists began taking an interest not in the pristine symbolism of blood but in the actual substance—with all its red and gory messiness.

Anthropologists like myself find blood intriguing because of what it can reveal about a society: The specific ways the substance circulates outside and between bodies bring to light conflicting values, shifting political and ethical priorities, and the potentials and perils of economic systems.

These days, most of us take for granted that human blood can be extracted, stored, and infused in patients. But this hasn’t always been the case. The history of blood donation tells a remarkable and optimistic story of human altruism, ingenuity, and social organization—but also of greed, racism, and negligence.

BLOOD EXCHANGE OVER THE CENTURIES

So, let’s start from the beginning. When, how, and why did extracted human blood become seen as a public good?

As early as the 19th century in Europe, doctors began performing transfusions using human blood. At the time, blood transfusions were understood not only as a treatment for blood loss, but as a procedure that could potentially alter a person’s character. These attempts, however, were severely limited by physicians’ poor understanding of the circulatory system, crude equipment, and ineffective techniques. Many patients, as well as donors, died during these premature endeavors.


Then, in 1900, Karl Landsteiner, an anatomist in Vienna, discovered that human blood had different types, and the donor and recipient blood types needed to be compatible. In 1908, French surgeon Alexis Carrel devised a method for direct transfusion—suturing the blood vessels of the donor and recipient together—that warded off the dire effects of coagulation. In 1913, physician Edward Lindeman in the U.S. introduced a novel method of transfusion that involved inserting a hollow tube into the vessels of both donor and recipient, ridding doctors of the need to cut open any wrists. This made the practice much safer—and more palatable to potential donors.

Finally, in 1915, Richard Lewisohn, another physician from the U.S., introduced an effective chemical anticoagulant. For the first time in history, blood could be drawn from a donor’s vein, stored for a reasonable period, and transfused into the recipient at the medical staff’s convenience.

The notion of a blood reserve—a blood bank—suddenly became conceivable.

Yet the successful operation of blood transfusion on a large scale still had two final hurdles to clear. The first was logistical: Adequate systems had to be designed to support the creation of a blood bank. The second hurdle required a cultural shift: People had to embrace a moral imperative to donate their blood for the sake of some greater good.

As with many cultural shifts, it was war that ultimately changed the public’s thinking on blood donation.

The first public health initiative involving a large-scale blood transfer operation occurred in Spain during the Civil War (1936–1939). Following a successful example from the Soviet Union (where hospitals began collecting and storing small quantities of blood from fresh cadavers), surgeons on the frontlines began operating mobile blood services using living donors.

A few years later, in the months leading up to WWII, a blood service started operating in London, subsequently saving thousands of lives during the German bombing campaign of the city. In the U.S., the Red Cross began collecting blood nationwide for military use in 1941.

These initiatives demonstrated both the need and the feasibility of an efficient blood donation service. But even more importantly, they charged the act of donating blood with powerful moral undertones. Donating blood was no longer seen as an individual act of generosity but as a way to participate in a collective society.

THE RISE OF THE PLASMA INDUSTRY

In the decades that followed WWII, the U.K., France, Holland, and other countries still kept their blood services centralized and based on altruistic donation. Yet some countries, most notably the U.S., saw a massive decentralization of the blood economy. Blood itself was gradually reconfigured from a public resource to a commodity, bought and sold by private merchants for the sake of making a profit.

In the 1960s and 1970s, the blood business in the U.S. diversified to meet new medical demands for plasma. One driver of this demand was the discovery of new plasma-based treatments for hemophiliac patients. The other was the development of plasmapheresis, a procedure in which plasma is collected and the red blood cells are returned to the donor’s circulation.

The for-profit plasma industry boomed. Companies set up plasma-collection units mostly in impoverished areas, including along the Mexico-U.S. border—areas where an average price of US$10 for a pint of plasma (sometimes paid in liquor-store vouchers) seemed a lucrative transaction for many.

The most striking example of this new form of biological exploitation was the widespread collection of plasma from inmates in U.S. prisons starting in the 1960s. In Arkansas, for example, a controversial Prison Blood and Plasma Center started operating at Cummins prison in 1963. Inmates who had virtually no other means to earn an income were paid as little as US$7 for around a pint of blood, which the prisons subsequently sold for over US$100.

The hygiene and safety standards of these plasma swindlers were even more dubious than their ethical standards. Due to the lack of oversight, an epidemic of hepatitis began to spread in prisons where such operations were underway. Hepatitis then spread throughout the supply when companies pooled the plasma collected from thousands of donors as a way of cutting expenses—ensuring that even a single infected unit would contaminate the entire pool.

Starting in the early 1980s, HIV made its way into the blood and plasma supply in a similar way. In 1984, the Centers for Disease Control alerted the industry that blood transfusions seemed to be causing an AIDS outbreak among hemophilia patients. However, it took years for government regulations to fully prohibit the distribution of prison-sourced blood products. In Arkansas, prisons managed to find loopholes in federal regulations to continue selling extracted plasma abroad until 1994.

Total numbers are impossible to pin down, but experts estimate tens of thousands of hemophilia patients in North America, Europe, and Asia were infected with hepatitis C and HIV after being treated with contaminated blood and plasma from exploitative sources.

RICHARD TITMUSS’ LEGACY

This brief history of blood donation reveals how corporate greed, medical negligence, and structural inequality combined to turn an auspicious moral project into a global outbreak of HIV. Zooming out, it also points to how the blood industry as we know it today is contingent upon historical shifts, economic trends, and collective values—all of which are up for debate.

Enter Richard Titmuss.

Titmuss, a British scholar of social administration, conducted a comparative study of blood systems in the 1960s. In one corner was the British blood service. There, blood was treated as a community resource: donated freely and distributed by the welfare state. In the opposite corner was the U.S. blood system, where blood was sometimes freely donated and distributed by the American Red Cross, though more often sold by donors for a price and distributed by for-profit blood and plasma merchants.

Through a meticulous analysis of the two systems, Titmuss arrived at a definite conclusion: A blood system based on voluntary donations, and run by the state, is safer, more efficient, more economically viable, and crucially, more moral than its private-sector alternative.

The publication of Titmuss’ book, The Gift Relationship, in 1970 set off a series of events and public conversations that resulted in much stricter regulations of blood services in the U.S. Whole blood donations were effectively made volunteer-based with the introduction of new federal labeling practices.

However, most plasma collection was excluded from these labeling regulations. Today the global plasma industry is worth around US$24 billion, with demand continuing to outpace supply. Thousands of people in the U.S. who live under the poverty line continue to depend on income from selling their plasma.

Titmuss’ work remains relevant for understanding the meanings people ascribe to blood donation. Do donors understand their blood donation as a form of charity? A commercial exchange? A religious sacrifice? A fulfillment of their patriotic duty? An opportunity for spiritual atonement? An act of communal solidarity?

The anthropology of blood exchange tells us that the answers to these questions vary across national and cultural contexts. This variation matters because the values ascribed to blood go on to inform political systems. These values, which often reflect—and strengthen—preexisting biases and prejudices, shape policymakers’ decisions and ultimately determine the regulations that govern who can donate blood, who will be treated with it, and under what circumstances.

WHERE DO WE GO FROM HERE?

In many countries, regulating the blood industry has made its products safer—but it has also led to new forms of disparity.

Some people are denied the right to donate blood based on their sexual behaviors, substance use, or where they come from. This form of selection sometimes has its public health justifications. But when certain groups (such as gay men, drug users, or some ethnic minorities) are excluded from donating blood, they are denied the right to express their civic belonging. Fortunately, this seems to be changing in some places. Canada, for example, recently lifted a nationwide ban on blood donations from men who have sex with men.

Preexisting biases and inequalities also shape who receives health care in different countries, including the medications and therapies that use blood products.

In other words, I may donate a pint of blood or plasma—but I have little say over where it goes once it enters the marketplace. If our national and global health care systems worked as they should, my donation would be acknowledged as a community resource and distributed freely to populations in need, including to those who are undocumented migrants or living in extreme poverty. But in our current systems, it’s possible my blood—or some components of it—would be assigned monetary value, bought and sold to the highest bidder to yield profits for the already affluent.

So, where does all this lead us?

On one hand, we shouldn’t fool ourselves into thinking that the systematic exchange of blood represents some utopic ideal based entirely on kindhearted gift-giving. Like any other human arrangement, blood systems are complex and morally fraught.

But we mustn’t be too cynical either. Even if our donated blood is sometimes commodified, it does ultimately save lives, as I learned with my son. And for that to keep happening, we must be willing to keep giving it.

Still, in donating blood products, we do earn the right to ask difficult questions about what precisely is being done with our precious cells and molecules.

Are our blood and plasma being handled safely—and morally? Is our donation accessible to those in need regardless of their ethnicity, nationality, or financial status? In short: Are the right people benefiting from our gift?

If the answer to any of these questions is no, then we as citizen-donors still have some work to do.

This work first appeared on SAPIENS under a CC BY-ND 4.0 license.


When Argonauts Roamed the Earth

This is a post on and in anthropology. Very different, then, from what I normally post on this blog, but perhaps my readers will nevertheless find it interesting.

[Image: Bronislaw Malinowski in the Trobriand Islands, 1918]

I should probably be more than a bit embarrassed to admit that even after doing social anthropology for the past 10 years or so (counting from the beginning of my undergraduate studies) I’ve never really read Malinowski. I mean, of course I skimmed through long bits of his writing, but mostly just to tick the boxes on various required reading lists. And while I’ve also enjoyed lengthy excerpts from his diaries revealing his conflicted inner-workings and problematic attitude toward his hosts and informants, I just never truly read his stuff. Never truly engaged with it, never truly studied it.

I suppose it was from this rather ignorant position that I always thought of Malinowski as a somewhat dull writer; a forefather whose historical significance should certainly not be downplayed, but an otherwise outdated and irrelevant thinker.

And to be perfectly honest, I’m not that much into the old classics of anthropology anyway. Unlike some people (of whom I am jealous), I’ve always opted to read something from the most recent edition of American Anthropologist, for example, rather than poring over a seminal work by Frazer or Evans-Pritchard. Just the look of that old formatting style on unsearchable and unhighlightable pdfs (not to mention actual printed paper) makes me cringe.

And since I got away with it during my bachelor’s studies (where I honestly wasn’t required to read much of anything), the necessity never presented itself in any immediate way, and so I was satisfied with having read the absolute minimum. I always seemed to be just well-read enough to avoid making a fool of myself at anthropology conference lunches (only to then boastfully expose my ignorance following the downing of several pints at the pub).

But this stance changed recently when giving myself a crash course in economic anthropology for an upcoming project. A few relevant syllabi I found online recommended starting with Malinowski. Fair enough, I thought. I’ve obviously delayed my proper initiation into the discipline for long enough. Seeing as I’ve decidedly put myself in a position of undergrad again, why not do it properly this time around? Yes, I thought enthusiastically – if slightly hesitantly – why not start in the very beginning, and head right back to good old Bronislaw Kasper?

And I’m glad I did, because what an amazing anthropologist I found this bold Pole to have been.

I will mention that even after a very enjoyable reading of two articles (working up towards a proper celebratory reading of Argonauts) his significance is still mostly historical in my eyes. Very little of it was a completely new taste for me in the sense of a new type of argument or theoretical approach or conceptualisation. This makes perfect sense of course, as so many of my teachers and colleagues have been cooking with his recipes for over 9 decades. But what nevertheless struck me by surprise was how absolutely clear, precise and relevant so many of his claims still remain. So it’s not only in the sense of knowing one’s history, but it’s also in the sense of knowing one’s purpose and articulating one’s goals as an anthropologist that Malinowski proves to be still extremely valuable.

A small point before I proceed: of course, Malinowski, as so many of his contemporaries, could never hold up to our discipline’s current ethical and political standards, for numerous reasons. But I personally don’t think this should prevent us from returning to his ideas, albeit with some necessary caveats in mind. I do, however, understand this move might be resisted by some, and for justifiable reasons.

So, if you don’t mind going through the trouble of revisiting some decontextualized quotes from Malinowski’s 1921 article “The Primitive Economics of the Trobriand Islanders” published in The Economic Journal, I’ll show you what I mean.

“A student of economics”, Bronislaw writes,

in possession of a systematic theory, might be naturally tempted to inquire how far, if at all, his conclusions can be applied to a type of society entirely different from our own. He would attempt in vain, however, to answer this question on the basis of the ethnological data extant, or, if he did, his results could not be correct. (p. 1)

So right at the outset Malinowski is making a plea for a broad view of human societies as necessarily varied and heterogeneous, and argues that assumptions about the nature of human behaviours are wholly invalid if not supported by evidence. And while for social anthropologists this claim is more or less obvious, it is not the case for most other people. I recall so many arguments with friends and acquaintances where they casually made the following (il)logical move: “the people I know about,” (usually one’s own society and maybe another one he or she is familiar with) “behave in a certain manner, and therefore, all societies must be behaving in an essentially similar manner”. And Malinowski is saying well, no, that is an incorrect assumption. Do not, he says, assume to know anything about any society until you, or someone else, have studied it thoroughly. And more often than not, you will be surprised by what you find. The fact that he is making this claim here about economics is so entirely appropriate, as economists, it seems to me – alongside psychologists – are quite often the culprits in this sort of fallacy. So already, go Bronek.

Moving on to Page 3, where he writes:

When I began to inquire into this subject [of land tenure], I first received from my native informant a series of general statements, such as that the chief is the owner of all land, or that each garden plot has its owner, or that all the men of a village community own the land jointly. Then I tried to answer the question by the method of concrete investigation: taking a definite plot, I inquired successively, from several independent informants, who was the owner of it. In some cases I had mentioned to me successively as many as five different ‘owners’ to one plot – each answer, as I found out later on, containing part of the truth, but none being correct by itself.

Here’s a bit of wisdom that we all need to keep in mind, and that even we who speak Soc-Anth as a first second language often tend to lose sight of in periods of absent-mindedness. Variation exists not only between different societies, but within each society as well. And what follows from this is that firstly, one answer from an interlocutor, intelligent and informed as she may well be, should always be complemented with those of further interlocutors. And secondly, seeing as this is the case, the role of the anthropologist is not – cannot be – merely that of a collector of ethnographic data, because fitting different views into anything that resembles a coherent account requires intellectual work. It is precisely because our informants are seldom able – or at all willing – to square the different and even contradictory beliefs that exist within and among them, that anthropologists can play a valuable part in explicating their practices, systems and institutions.

So then he immediately goes on to state:

The main difficulty in this, as in ever so many similar questions, lies in our giving our own meaning of ‘ownership’ to the corresponding native word. In doing this we overlook the fact that to the natives the word ‘ownership’ not only has a different significance, but that they use one word to denote several legal and economic relationships, between which it is absolutely necessary for us to distinguish.

I’m surprisingly mind-blown by this observation. One of the first things I was taught as an anthropology student, after all, is that “our” (whoever “we” may be) concepts and systems of classification may not be appropriate when studying societies different from our own. I guess I just never expected to see it articulated so plainly and straightforwardly in such an old text – especially since this kind of centrism is still such a common fallacy (yes psychologists, I might be talking to some of you still). What we take ownership to mean – or scratch that; our very assumption that the concept of single ownership must even exist – is culturally specific, and anything but natural or inevitable. And so we need to always consider the possibility of alternatives.

Finally, in the very last paragraph of his article Malinowski writes:

Economic elements enter into tribal life in all its aspects – social, customary, legal and magico-religious – and are in turn controlled by these. It is not for the observer in the field to answer or to contemplate the metaphysical question as to what is the cause and effect – the economic or the other aspects. To study their interplay and correlation is, however, his duty.

It is not for the observer to answer – or even contemplate – questions of cause and effect with regard to the relation between culture and economy. I find myself admiring the sheer gutsiness in this purposeful delineation of the boundaries of our discipline, which just seems to undermine perhaps the most basic instinct of the scholarly mind: determining what causes what. But Malinowski doesn’t only defenestrate this burning but perhaps ultimately unproductive question; he also offers a valuable alternative: study the interplay between the different aspects of native life (where native, at least in our modern usage, can mean any social life at all). We study interplay, not causality. If I had to offer the most succinct way to differentiate social anthropology from other social sciences, this would be it.

And if that set of bold requisitions isn’t enough, Malinowski offers his reader this final piece of wisdom, just before signing off: “For to overlook the relation between two or several aspects of native life is as much an error of omission as to overlook any one aspect”. Relation as the space where the meaningfulness of social action can be attended to. What else needs to be said?

So to sum up, I suppose I’m just humbled to learn that it was certainly not my generation, nor even the previous one or two, that conceived of all those understandings and principles that have become the benchmark of (good) social anthropology, still today. I’m not sure if this strengthens the claim that the importance of Malinowski (as it is evaluated by this one article, obviously a very far from exhaustive account) is more than just historical. It’s not like I wasn’t aware until now of cultural relativism, human heterogeneity or the multiplicity of views within a specific group. But I guess what my emerging realisation – namely that such pieces of wisdom have been in existence for nearly a century – did for me was shine a light on the fact that such views don’t exist in a vacuum, but are always and necessarily (among other things) a response to claims to the contrary. And as such, seeing as claims to the contrary are as popular now as they’ve ever been, Malinowski’s articulateness, straightforwardness and concreteness are still an extremely valuable resource. There’s power in traditions, and there’s authority that comes with repetition, and for me, at least, a certain confidence comes with the knowledge that our very basic epistemological assumptions have a specific and traceable source. And to hear it straight from the horse’s mouth – and in this particular horse’s clear and unapologetic style – just makes it all the more appealing.

 

A New Entry on Autism in the Cambridge Encyclopedia of Anthropology

Hi everyone,

About a year ago, I was asked to write an entry for the fast-growing, online and open-access Cambridge Encyclopedia of Anthropology. The topic of the entry would be, of course, autism.

It’s been a real challenge, trying to condense 20 years of academic literature into a 5,000-word text, in a way that would be both interesting and accessible to a wide readership, but I ended up quite liking the final result. In addition to the encyclopedia’s main goal, which is to present a succinct summary of how various concepts have been explained and elaborated by anthropologists, I decided my entry on autism would pursue yet another goal of offering a broad bibliography of writings about autism from the social sciences, obviously with an emphasis on anthropology.

So the entry is now online, and I would be grateful to hear your thoughts.

https://www.anthroencyclopedia.com/entry/autism 

 

The Friendship Factor

Ethos – a well-respected psychological anthropology journal – has already established itself as a very reputable publication when it comes to the social study of autism. It actually once devoted an entire issue to the topic, and that special issue provided us, in my view, with some of the best works on the social reality of the condition to date.

Recently, Ethos published yet another article on autism – this time by Elizabeth Fein from Duquesne University (currently open access). It’s always exciting when a social science study about autism gets published in a major journal, and I was very much looking forward to reading it. It’s been on my to-do list for several weeks now, waiting for me to get round to it. And now that I finally have, I thought I’d share some of my impressions.

The author of Questioning Answers does a thing in his (excellent) blog where he reads recent papers on autism from various disciplines (mostly biomedical if I’m not mistaken) and mediates the bottom lines of these studies to his readers. It strikes me that I, too, am not badly placed to do something similar on my own blog, but with regards to publications on autism from anthropology, sociology, media studies and – as in this case – psychological anthropology.

At least I think this is the discipline from which Fein’s approach can be said to have emerged. Or is it cultural psychology? I have to admit that the line between them isn’t always particularly clear to me. Well, on Fein’s bio page, under the ‘expertise’ tab, she lists the following: clinical ethnography, cultural psychology, psychological and psychiatric anthropology, neurodevelopmental disorders, and science and technology studies. So basically wherever brain, mind and society cross paths, Fein seems to be on top of it. Now that’s already a pretty decent starting point from which to talk about autism.

Let me start out by saying flat out that Fein’s article is really good. Like, really good. It’s got its few possible flaws, which I will get to in a minute, but these are negligible in relation to the usefulness and persuasiveness of her arguments. Basically, Fein offers us a way to think about autism while taking into account both its biological and social-structural components. And really, this is what any of us are trying to do, with varying levels of success. If you ask me, she nailed it.

So what is Fein arguing? It’s rather simple, really. She’s saying that the condition we refer to as autism is – at least in part – shaped at the interface between a person’s natural tendencies and their social environment. When the social environment, she says, is heavily structured around various exclusionary practices, a person with certain tendencies is more likely to eventually fall within the autism category.

In other words, according to Fein, a person might, for whatever reason, find it difficult to socialise with her peers at an early age. Now, if this difficulty is unmistakably debilitating, society refers to it as a clear-cut case of autism (given the fulfillment of a whole range of other conditions, of course). But sometimes, these difficulties are relatively subtle, and could potentially go either way.

Presumably, those would be the kids whom a diagnostician would be reluctant to diagnose with autism at a very young age, suggesting instead a few more years of observation.

So what Fein is saying is that in many of today’s Western societies, where social relationships are most often based on choice rather than obligation, social difficulties at an early age (even subtle ones) very often lead to exclusion and loneliness. The child’s peers, to put it bluntly, don’t want to play with her. They don’t want to be her friends. And nowhere in our system is it acceptable to make her peers play with her if they don’t want to.

This social exclusion, then, in turn, leads to the exacerbation of whatever difficulties were there to begin with. The child doesn’t get to experiment in socialising. The child suffers from bullying and cruelty. And the child doesn’t have relevant role models from which to learn socially desirable codes of language and behaviour (parents and teachers can’t be seen as relevant role models, as Fein demonstrates very well, because they don’t hail from the appropriate social milieu. For example, if a 10 year old child speaks exactly like his dad, his classmates are not likely to go nuts over him).

So take a person with subtle social difficulties, add two, five, or ten years of social solitude and an absence of relevant role models, and you get a person who by now easily qualifies as being on the autism spectrum.

Fein could be said to argue this: Society’s role in shaping autism goes well beyond framing, defining and diagnosing it. It also, by means of exclusionary practices, produces more autistic people than it used to.

Fein begins her article by using this claim as a way to reject two other claims. One claim she purports to reject is that the increase in prevalence rates is entirely biological and thus ‘real’. And the second claim she rejects is that the increasing rates have everything to do with definitions and diagnostic criteria and are thus ‘not real’. This was where I had my one main grievance with this paper; at its very outset, it builds a straw man. And why is this a straw man? Two reasons.

First, very few researchers would disagree that the increase in autism rates is at least partly the result of social processes. They may seek out genes, pollutants, or pathogenic fungi, but they still see clearly that the diagnostic criteria for autism have changed. The frequent question, then, is not whether the increase is real or not, but what proportion of the increase can be explained by this moving of the goal posts, and what proportion of it is in fact grounded in biology.

Second, there is something misleading about presenting this debate as being about the question of ‘reality’ to begin with. That there are more people labelled autistic nowadays than there were three decades ago is a fact. One researcher might explain this fact in biological terms. Another might explain it in social-cultural terms. But it’s seldom the case that one explanation is seen to question the ‘reality’ of the other. Autism is no less ‘real’ because it was diagnosed using a more recent diagnostic manual with broadened criteria. Nor is it any more ‘real’ because some gene was identified.

At its very core – and Fein is absolutely aware of this – autism is both biological and social. I don’t see a reason to maintain some sort of dichotomy between those two inseparable components of it. Not even when this dichotomy is just used as a rhetorical instrument.

Going back to the main argument, Fein makes use of three different kinds of evidence when supporting her claim. First, theories in psychology and anthropology which emphasise the role of interpersonal relationships in developing social skills. This is pretty intuitive, in fact; it’s basically the idea that practice leads to skill. This is true for football, and it’s true for socialising.

Second, she draws on work from sociology which suggests that social relationships in some societies, including the US, have become increasingly choice-based. And this, to me, is a fascinating claim when put in this context. The literature Fein refers to compares modern-day societies to something like ‘friendship-economies’ or ‘identity markets’, whereby people can be said to be ‘friendship-worthy’ or not, based on all sorts of evaluation criteria. At the basis of such an economy is the belief that friendships are voluntaristic; that people should – and do – get to choose who they’re friends with. Basically we’re talking about a free market based on the principle of supply and demand. Just as a product’s value is determined by how much consumers are willing to pay for it, in a friendship economy a person’s value as a friend is determined by whether or not people want to be friends with them. People are products; friendships are marketable.

You’re not seen as particularly good ‘friendship material’? You’re less likely to have any friends.

Finally, Fein draws on qualitative material generated from her own conversations and encounters with people on the autism spectrum. Now, this may be my bias as a social anthropologist, but these are always the data that speak to me the most. And Fein’s data speak clearly and powerfully.

If I had to mention just one fragment from this article which I’m likely to remember for a good long time, it’s the bit where her informant, whom she calls Mara, tells her about the way she imagines her afterlife (Mara is deeply religious, we’re told). Mara’s vision of heaven is a frighteningly vivid representation of the difficulties she encounters in life:

“I would bring heaven down. Everyone would be having such a good time until I got there. So [I would be told]: you know, we understand you tried, but if you could just go sit outside and not bother people that would be great.” (p.93)

This heart-clenching narrative is analysed very aptly by Fein: “Even in a space where inclusion is ostensibly governed by a moral logic of rights and wrongs”, she writes, “Mara imagines herself as the eternal exception to such rules. Neither laughing with the sinners nor crying with the saints, she is banished to a place of lonely liminality.” (p.93)

Mara paints a picture in which no matter what she does, no matter how hard she tries, she can’t seem to make friends. Even in a place governed by justice and love, she would still be excluded.

Lonely and friendless, Mara grows increasingly different from her peers, increasingly emotionally volatile, increasingly vulnerable, increasingly (as Fein would have it), autistic.

To me, this argument is convincing, though it does raise very important questions, which really have to do with the nature of the thing we call autism. Basically, if we accept Fein’s claim, then we are forced to imagine autism as being on the same spectrum as non-autism; we need to imagine a space where people can cross from one category to the other, following an accumulation of social experiences.

I’m sure some people would resist this construction. But I’m fairly ok with this.

The difficulty with this, however, is when we try and explain autism in light of this view in any way that’s not completely social constructivist. If some people can step in and out of autism, then how can we discuss it in any way other than as a social category, a culturally specific label, a historically contingent concept? If autism can be subclinical, to put it in still other words, then what is the nature of subclinical autism? Is it still autism in any sense of the word? Are we suggesting an idea of borderline autism, similar to the one they apparently have in South Korea (Grinker & Cho 2013)?

And if we’re suggesting that people can become autistic during their lifetime; are we also accepting that people can cease to be autistic? Because it seems to me we can’t avoid this implication.

The implications of Fein’s argument in political terms are massive, and potentially quite upsetting. The neurodiversity movement is structured, in many ways, around the view that autism is a naturally occurring form of difference. A form of neurodevelopmental diversity. When society is introduced as a significant factor – be it friendships, therapy sessions, or indeed parenting – this could potentially undermine much of their claim to equality and acceptance.

At the same time, there’s at least one very positive implication to Fein’s model, which I wholeheartedly accept. It places the responsibility for the life-prospects of people (autistic or otherwise) on society rather than on themselves. Too much current research – into the brain, into the DNA, into the psyche – regards the individual as the locus of ability and disability; the locus of giftedness or impairment; the locus of success and the locus of failure. We do need to pay more attention to the role of society – systems, structures, institutions – in shaping people’s life outcomes. Society’s role in affording success, and in determining failure. Fein’s approach allows us to do that when it comes to autism, and that’s just great.

 

References:

Fein, Elizabeth. 2015. “‘No One Has to Be Your Friend’: Asperger’s Syndrome and the Vicious Cycle of Social Disorder in Late Modern Identity Markets.” Ethos 43 (1): 82–107.

Grinker, Roy Richard, and Kyungjin Cho. 2013. “Border Children: Interpreting Autism Spectrum Disorder in South Korea.” Ethos 41 (1): 46–74.

 


Thoughts on Awkward

Every language has those words, and plenty of them, that can’t really be reduced to their translations or dictionary definitions. To be fair, that’s the case with most words. Meanings derive from context; from the intentions of the speaker and from the connotations they raise in the listener.

Words have specific histories. Sometimes their meanings depend on the intonations in which they are used, or on the body language or facial expressions that accompany them. Sometimes words can even mean two exactly opposite things. ‘Cleave’ is the classic example in English, but think also of the word ‘love’: ‘I love Lisa’ means one thing. Now think of the same utterance spoken with eyes rolled: ‘Yeah, I love Lisa’, which effectively means that my feelings towards Lisa are less than positive.

Don’t you just love these sorts of ambiguities?

Anyway, in context, most words always mean more (or less) than their dictionary definitions would let on. Which is what makes them so interesting and valuable. And the best of these are basically like really good clues, to put it simply, about what matters to a given set of people, what drives them, how they see the world.

So anthropologists trying to make sense of a society or culture different from their own often choose such words to focus on. A great recent example is found in Gabriella Coleman’s fantastic book on hacker culture – Hacker, Hoaxer, Whistleblower, Spy – where she devotes a lot of much deserved attention to the concept of lulz. What are the lulz? Well I could try and explain, but I would be doing it an injustice. Coleman uses dozens of pages to explain the concept of lulz, and she does this for a reason; the word means much more than any simple sentence or paragraph could convey. And in meaning as much as it does, it teaches us a great deal about the people using it.

In other words, if you’re gonna try and understand what motivates Anonymous, for example, you need to first appreciate, in a very nuanced way, the meaning of the concept of the lulz.

Another great example of such a deep engagement with a single word or phrase is from this lecture by anthropologist Michael Wesch, where he presents the audience with ‘a brief history of whatever’. ‘Whatever’ is not merely an extremely common – and therefore meaningful – word, but its shifting meanings can also be used as a metaphor for the process of shifting sensibilities among young Americans over the past couple of generations. And it’s a great lecture anyway, so give it a look.

Or not. Whatever.

So anyway, there’s at least one such word in the English vocabulary whose story has not yet been told (well, as far as I know, anyway). A word that’s not quite definable, not quite translatable, and at the same time extremely common and meaningful. A word that if I could get my head around it, I feel, I could claim to know a bit more about the people who use it. A word that is just familiar enough to me that I can use it myself in a way that may seem natural; but that is at the same time just foreign enough that when I’m using it I always feel like I’m experimenting just a little bit.

Like someone is liable to jump out from behind a plant or something and say “nope, sorry, you’re using it wrong”.

And wouldn’t that be awkward.

Awkward. So many people use this word, in oh so many contexts. But what does it mean? What do they mean?

My initial intention in this post was to produce, similar to Wesch, a brief history of ‘awkward’. But this would take more time and resources than I can currently invest. So instead, I just thought I’d lay out some thoughts about this concept, in a very unsystematic way. Basically what I’m wondering is what motivations, sensibilities, imaginations, concerns and values are in action when someone describes something or someone (or crucially, themselves) as awkward?

Dictionary.com defines it as any one of the following: lacking skill or dexterity; lacking grace or ease in movement; lacking social graces or manners; hard to deal with; difficult; requiring skill, tact, or the like; and finally, embarrassing or inconvenient, caused by lack of social grace.

I know I know, nothing is more of a cliché than spicing an article with a dictionary definition. I can’t believe I just did that. Jesus. How awkward.

So anyway, according to the dictionary, awkward means either a lack, or the consequence of a lack. Meh. Come on. It has to mean more than that doesn’t it?

I think any one of these definitions tells a partial story at best. Not every lack of skill is seen as awkward. Nor is every embarrassment or inconvenience caused by a lack of social skill considered awkward. When ‘awkward’ is uttered, an affect is invoked that goes beyond, I think, embarrassment or discomfort, or at the very least denotes a very specific instance of those.

Urban Dictionary already tells a slightly different story. Not constrained to the formal rules of dictionary definitions (whatever those may be), this user-generated source often provides more intuitive and immediate definitions that somehow manage to capture not only the meaning of a word, but also its essence; its spirit, if you will. 2235 users ‘liked’ the following definition of awkward (as opposed to 417 who took the trouble to actively ‘dislike’ it): “passing a homeless person on your way to a coin star machine.”

I love this definition (no rolling of the eyes needed), because it somehow represents that ineffable nature of awkwardness; it’s not necessarily a ‘lack’ of anything, nor the result of such a lack. It’s just what sometimes happens between people when certain situations occur. Oh and confusingly, it’s also the situations themselves. So passing a homeless person while holding a bucket of change is awkward. And the feeling you get when this happens is awkward. And the look he gives you is awkward. And you’re a pretty awkward person, anyway.

Most people hate awkward moments. But others seem to relish them. Some are oblivious to them. Some are hypersensitive to them. But whatever their affinity toward them, a lot of people seem to describe so many of their experiences, as well as events and people, as being awkward.

How did awkward become such a common word? Well, I just checked it on Ngram, and it turns out that awkward isn’t really any more common than it used to be, at least in terms of how many times it appears in books. But I think I can explain it in one of two ways (or both). First, awkward in the sense of weird, unexpected or uncomfortable isn’t new at all, and has been around for ages. At the same time, ‘awkward’ in what I suspect, at least, is a rather recent usage (as in meaning that social awkwardness of embarrassingly not knowing what to say), would not necessarily find its way into so many books. Or maybe I’m just wrong and this meaning is just not recent at all. I don’t know.

<https://books.google.com/ngrams/interactive_chart?content=awkward&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cawkward%3B%2Cc0>
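(A side note for the programmatically inclined: the chart above can also be reproduced in a few lines of code. The sketch below is only an illustration, and it leans on Google’s unofficial Ngram JSON endpoint, which is undocumented and may change or disappear at any time, so treat the URL and parameters as assumptions rather than a stable API.)

```python
import requests

# Sketch: fetch the relative frequency of "awkward" in the Google Books
# English corpus, 1800-2000, via the Ngram Viewer's unofficial JSON endpoint.
# NOTE: this endpoint is undocumented and may change without notice.
params = {
    "content": "awkward",
    "year_start": 1800,
    "year_end": 2000,
    "corpus": 15,      # same corpus id as in the chart link above
    "smoothing": 3,
}
resp = requests.get("https://books.google.com/ngrams/json", params=params, timeout=30)
resp.raise_for_status()
series = resp.json()[0]["timeseries"]  # one value per year, as a fraction of all words

# Print one reading every 20 years to eyeball the (fairly flat) trend.
for i, freq in enumerate(series):
    year = 1800 + i
    if year % 20 == 0:
        print(year, f"{freq:.6%}")
```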

So why do we say awkward so much? And if it’s such an important word, why doesn’t it exist in Hebrew, for example?

As per the first question, I guess it all comes down to that feeling many of us share of wanting to always be in sync, socially speaking. Always wanting to know the right thing to say. Always wanting to be told what we expect to hear. Always wanting to have a grasp of the situation. And when those expectations aren’t met, like when social rules are broken, or when we’re at a loss for words, a certain kind of feeling ensues. A feeling of powerlessness; of confusion or anxiety. And at that moment, everything sort of feels like it’s falling apart. Not only our composure (“It just felt awkward”), or the interaction itself (“we just had a really awkward conversation”) but our whole selves (“I can’t believe how awkward I was”), as well as the person in front of us (“and she was awkward too, by the way”).

Until eventually nothing makes sense any more, and everything just turns into a big pile of awkwardness (“I dunno, everything just turned into a big pile of awkwardness”).

There’s really no reason to think this feeling is in any way new. And while I do suspect it’s slightly more common in some cultural contexts than others, I don’t see why it should be more common in English speaking countries than in Israel, for example. So what’s going on? Why is it still so popular in English, yet non-existent in at least one other language (and probably many more)?

I think the explanation would have something to do not with the existence of the feeling itself, but with the attention given to it. What is unique in the word ‘awkward’ is not the fact that it describes a certain experience; but that it turns it, very consciously and deliberately, into an object of reflection. Basically, it allows us to talk about the feeling at length. To tell stories about it. To delve into it.

In other words, English speakers have been going through a process of fetishizing this uncomfortable experience now known as awkwardness. Searching for ‘awkward’ on Google brings up the following results (I’m not including results from online dictionaries): A TV series actually named Awkward. A YouTube video entitled 18 Awkward Hairstyles That’ll Make You Smile. A radio program about Six awkward moments at Jay Z’s Tidal relaunch. A photo blog post entitled The most awkward hover hands in awkward dude history, and a website called Awkward Family Photos.

I see a pattern here. Do you? Awkward has become a form of entertainment.

Certain TV shows have made noting socially awkward moments into an art form. Seinfeld is a classic example. But as much as its creators relished the humour in those moments, it was Larry David’s later project, Curb Your Enthusiasm, that really sunk its teeth into them. Some scenes in Curb literally make me cringe with discomfort. This happens especially when Larry David’s character disregards all the rules of appropriate social conduct. He knew this would have this effect, of course. That is what he was going for. And yet I love this show. As he hoped people would.

Awkwardness is mesmerising. It is entertaining. It is also, currently, marketable. We just can’t get enough.

So where does this take us? I’m not sure, to be honest. Maybe we English speakers have gotten so used to thinking deliberately and explicitly about the nuances of social interaction, while at the same time growing ever more sensitive to failures in that regard, that the concept of awkward just emerged to the A-List of popular words as a result.

And as always, between and betwixt emerging lay sensibilities, cowers the anthropologist in pursuit of even the vaguest of valuable propositions. I dunno. Whatever. God that’s awkward.

December Clouds

 

[Image: word cloud of study themes]

I got a bit of an urge to get creative with this post. Maybe it’s the holiday season.

Well, to be honest, I’m not sure copy-pasting a list of words into a word-cloud-generator counts as being very creative, but it is a change from the usual flux of words that normally fill this blog. And I do like what came out. In a way, this word-cloud is a pretty decent visualisation, I think, of my work in the past 18 months or so. And I’ve already written about how difficult I find it to visualise any aspect of my study.

This pile of words you see here is compiled from the themes of my study. Basically, every interview I did, every note I had taken, every blog-post I had read, I categorised into several themes. So for example, if Alan had told me about how his friends helped him when he became overloaded by the music and lights in a hotel lobby, I would classify that story in my notes as relating to sensory sensitivities, sensory overload, and friendship.

So the list you see here is basically a list of all the major topics my interlocutors have been discussing. The bigger ones are the ones which were more common than others. But if any topic appears here, this already means it was quite common.
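(For what it’s worth, generating a cloud like this takes only a few lines of code. The snippet below is a minimal sketch rather than the tool I actually used: it assumes Python’s third-party wordcloud package and some invented theme counts, just to show how word size can be made proportional to how often a theme came up.)

```python
from wordcloud import WordCloud  # third-party package: pip install wordcloud

# Hypothetical theme counts: how many interviews, notes, or blog posts
# were tagged with each theme. Real counts would come from my coded notes.
theme_counts = {
    "sensory overload": 42,
    "friendship": 35,
    "loneliness": 28,
    "exclusion": 25,
    "diagnosis": 21,
    "employment": 14,
    "stimming": 9,
}

# Word size is scaled to frequency, so the most common themes dominate the image.
cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(theme_counts)
cloud.to_file("themes_wordcloud.png")  # writes the image to disk
```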

In other words, this is the list of things my interlocutors – namely autistic adults – were mostly occupied with. These were the things that concerned them, interested them or worried them. So in a sense, I find this to be a relatively decent approximation of what being autistic might mean. Well, in some roundabout way, anyway.

Of course, not all these topics are interesting or relevant to everyone who is autistic. That almost goes without saying. But the whole project of trying to understand autism – and this is true whether you’re a neuroscientist, teacher, advocate or anthropologist – involves making some generalisations. Nothing applies to everyone, and some things may only apply to a few and still be important. But judging from all I have learnt so far, this word-cloud, as a whole, represents quite reasonably what autism seems to mean to autistic people.

It represents what aspects of being autistic seem to matter most.

I should add that this list is of my own making, in the sense that an interlocutor of mine did not necessarily have to utter the word ‘loneliness’ or ‘exclusion’ for me to attribute those specific themes to her account. So for example, she might have been talking about struggling to find company, or being rejected by a social group where she tried to fit in. Then it would be my judgement to deem that reflection as indicative of an experience of exclusion or loneliness. In that sense, I have as much a role in shaping this list of themes – if not a bigger role, even – than my interlocutors did. There’s something just a bit problematic, even reductionist here. I’ll admit that. Every individual’s experience is singular and unique, and lumping unique stories into broad categories is creating a pretty artificial lens through which to view things. But then again, this grouping is only meant for my own ease and convenience when searching through my notes, trying to figure out what questions I should be asking about autism, and what sort of material I might use to try and answer them. So these themes are really mere signposts. Just my own way of creating some sense of order for myself amidst the many rich and nuanced stories I’ve been collecting.

I like this list, because it paints a picture of autism as neither inherently good nor bad. As neither merely a neurological condition nor just a social category. As both something one is and as something one does. And it colours autism, as any text dealing with it always should, with the multifacetedness and nuance that it deserves. And also, it’s pretty.

Happy Christmas, happy Hanukkah, and a good new year.

 


An Archaeology of Funding Applications

As I was going over my folders recently in search of a paragraph I remembered writing at some point and of which I lost track, I came across something interesting. It was a letter I sent to the head of a funding body to which I applied for some research funding a year or so ago.

I was upset when I wrote this letter; my application had just been rejected. But my dismay at this professional failure was overshadowed, I recall, by the reasons for the rejection, as these were specified by my application’s reviewer. These are apparent from my response letter, so there is really no need to repeat them. In reading the referee’s comments now, by the way, I do see they weren’t really all that bad. But they were certainly misinformed. And so my ‘letter of complaint’ is still, I feel, quite valid.

I have no real reason to put it up here, except that when reading it now, I found it rather enjoyable. A pretty decent piece of writing, if I may say so myself. A shame that only one person would get to read it, I thought (especially since that person, namely the head of the funding body, calmly brushed it aside). And yes, it does say something, perhaps, about the decision-making processes that go on behind research funding. Not that they’re necessarily bad. But maybe, sometimes, they might be just a bit misinformed.

Dear sir / madam,

I am writing to you with regards to the resubmission of my grant proposal entitled “Experiencing Emotion and Emotional Experiences among Individuals with Autism”. I would like to humbly urge the committee to reconsider its decision to reject the grant proposal, for the reasons stated below.

While I honestly appreciate the reviewer’s comments, I would like to express my sincere doubt as to whether he/she is sufficiently familiar with the field of autism to reach a qualified evaluation of the worth of my proposal, and its possible feasibility and contribution to the field in question.

Autism is a spectrum of conditions with differing levels of severity and social functioning. In his/her review, the reviewer uses the term “highly autistic” twice, and once uses “severely autistic”; I myself have used neither of these in my proposal! My proposal merely expresses a desire to study autistic people. Indeed, if I only meant to include severely autistic participants, some reformulations of the proposal would be in order, but that was never my intention. I fear the entire evaluation of my proposal was made on the basis of this very false reading.

Furthermore, I must admit that I was somewhat shocked to learn that according to the reviewer, “…in severely autistic individuals, emotions are a “black box” that scientists, therapists, teachers, and parents struggle to untangle”. It would appear that two decades of autism self-advocacy and activism, promoting notions of diversity and understanding, and combating prevalent stereotypes – have all escaped the reviewer entirely. I would therefore like to offer a crucial correction:

While it is certainly true that managing, discerning, expressing and interpreting emotions may pose considerable challenges for autistic people, to argue that their emotions are in any way a “black box” is alarmingly misinformed, and frighteningly reminiscent of Bruno Bettelheim’s destructive and long abandoned “empty fortress” metaphor (1967). I urge the committee to find one recent work by a researcher of autism that would even come close to making such an unfounded claim.

Having studied autism and engaged in meaningful conversations with autistic individuals for nearly two years now, I can state without doubt that many (although not all) adults with autism are reflexive, communicative, and articulate social agents, given the right conditions. They are thus in a very good position to speak and reflect about their emotional states and experiences – and indeed about their difficulties with emotions. In fact, they are in a better position to do so than anyone else is, and they should be given a chance to express their expertise, and to contribute to the academic discourse which too often revolves around them, but fails to include them.

This is particularly true given the unfortunately popular prejudice that the emotional experiences of autistic people are not even worth studying, because those emotions are supposedly trapped in some imaginary, impenetrable box (and my theoretical perspective views emotions as intrinsically relational – how could a box hold a connection?).

Yes, emotions are complex, in autistics and neurotypicals alike. Yes, neither the natural sciences nor the social sciences have any definitive answers as to what emotions "are" (although several anthropologists have developed fantastic theories to that effect). But should that prevent us from studying them, from trying to make sense of them, from appreciating their importance in people's lives? And since when do social anthropologists shy away from difficult questions simply because "scientists, therapists, teachers, and parents" struggle with their meaning? If anything, that is in itself a good enough reason to do the research!

According to the reviewer, "there is a huge literature regarding idioms of distress, for example, that should be taken into consideration." Why? Why is it assumed that distress must be a prominent theme in this work? There are equally large bodies of literature on love, friendship, sexual attraction, pride, and compassion, none of which were explicitly mentioned in my proposal either, which deals with emotional states at large. Distress? Yes, of course, that too. But this particular emotional state will only outweigh others in my work if it is found to outweigh others in my participants' lives. My preliminary research shows this is not the case.

Finally, the reviewer questions whether "simply following subjects through their online activities or attending an autistic conference" will provide me with sufficient and reliable ethnographic data. My response is simple: no, it will not. That is why many more methods – which the reviewer for some reason chooses to ignore completely – are mentioned and elaborated on in my proposal. These include accompanying interlocutors in their daily lives, i.e. family life, work, studies, and autism-related social activities, while forming meaningful relationships with them based on mutual respect, as well as conducting semi-structured in-depth interviews. The online aspect of my research is crucial and exciting – but it is quite far from being my main source of data. Why would the reviewer ignore all of this? I honestly do not know.

I appreciate that I am still at the beginning of my career, and still have a lot to learn. I acknowledge that my proposal is not perfect, and might be improved in various ways. I respect the opinions of those with more knowledge and experience than I have. However, in this case I feel my hard work was not taken into serious consideration, and was rejected offhandedly due to a lack of familiarity with the field of study in question. I therefore respectfully ask that my proposal go to the next stage of selection, to be read and judged by someone more engaged with the anthropological study of autism and/or emotions. Thank you, and I look forward to hearing back from you soon.

Best regards,

Being in-between

 

Am I studying autism to contribute knowledge to the discipline of anthropology? Or am I using anthropology to better understand autism? These two approaches – these two options – are not necessarily mutually exclusive; in fact, over the past couple of years they have proven to be quite in concert, at least as far as I'm concerned. And yet I'm often faced with the dilemma of having to present my work as either one or the other. For example, I recently had to decide: should I end my article with "and this is why studying autism offers extremely useful insights into our understanding of human interaction"? Or should I end it with "and this is why anthropology is so indispensable for our understanding of autism"? (I'm paraphrasing, of course.)

Most social anthropologists aren't faced with a similar dilemma. Someone researching healing practices in Ghana, for example, is quite explicitly – as far as I can tell – working to understand (a) the people of Ghana, (b) healing practices, and (c) human behaviour. They would often have different research communities for each of these topics (the latter just being 'general social anthropology' or something to that effect): conferences would be organized, and journals published, that address each of these concerns. And that would be fine. Natural, even. I don't recall West Africanists agonizing over their role and aims, wondering if they're doing one or the other. They're simply doing both, and that's what's expected. They use the accumulated knowledge of anthropology to understand the specific people they're working with, while at the same time using the specific knowledge they acquire about those people to add to the cumulative anthropological body of knowledge. That's pretty much how the system is meant to work. No problem.

So why does my case feel a bit different? Well, to be honest, I could simply be wrong about not sharing this dilemma with others. That is very likely. Or it may just be that I'm biased into thinking that my area of study is so exceptional that it raises exceptional dilemmas, when in fact there is no dilemma to speak of. It's all in my head. Again, that could very well be the case. Yet none of this means that the third option – namely, that I am right in feeling that I am in a somewhat unusual position among my colleagues – is necessarily wrong. And I think that perhaps I can explain why my dilemma is, well, if not entirely unique, then at the very least more immediate and present.

Because this is where the difference lies: experts on healing practices in Ghana are expected to be social anthropologists. It's quite unlikely that they will be anything other than social anthropologists (well, they could be local healers, of course, but that's a whole different story). And so when a person becomes an expert on healing practices in Ghana, they're entering a very well-defined social role; hence the ready-made conference titles, specific journals and so forth. Yes, there would be other experts in the field: cultural psychologists, ethnobotanists, folklorists and so on. But the social anthropologist's position in that particular setting is more or less undisputed. Their authority as experts on healing practices in Ghana is grounded in their disciplinary tradition and is pretty much taken for granted.

But when a social anthropologist, such as myself, studies autism, that's a whole different matter. I'm not saying autism is uncharted territory for people of my discipline (it's not), only that our authority in this area is very far from established. Experts on autism are, traditionally, psychiatrists, psychologists, neuroscientists, geneticists, speech and occupational therapists, and educators – and that's just one brand of experts, namely those coming from the world of academic research. So when a social anthropologist – and I'm speculating here – makes a claim about autism, it is never merely academic; it is very much political. It is an unapologetic attempt to put one's foot in the door, into a room already fully occupied by others. It requires him or her to cry out almost explicitly: "I have something to say about autism, and I assert the authority to do so". This, as far as I can tell, is quite different from the situation of the same anthropologist studying healing practices in Ghana: there, they would not only be happily invited into the room; they would be the one guarding the door.

So my dilemma is not just an intellectual one. Rather (as is so often the case) it is a social and political one, and has everything to do with the system of academic disciplines. Which group do I consider myself a part of? Am I a social anthropologist? Well then, why aren't I doing fieldwork in Africa, for example (some would ask)? Oh, I'm an autism researcher? Well then, what's all this business with participant-observation, relationality, and constructivism (some would ask)? Occupying this position of in-betweenness forces me, it would appear, to choose sides. But even if I do choose just one, there would be some explaining to do on my part. And seeing as I actually want to occupy both positions, I need to work all the harder to justify my claims and arguments, positioning them both within the discipline (social anthropology) and within the field (autism research). Which, if not an insurmountable challenge, is definitely something that takes some effort.

How many participants have you got?

It's happened to me several times recently that when telling someone about my research project (I'm doing an ethnographic study of the emotional landscapes of autistic adults in the UK), I was met with: "Oh cool… so how many participants have you got?" It's a relevant question, of course, both fair and valid, which is exactly what makes it so difficult to address. Because it puts me in a position where I have to explain something that to me is very fundamental, but to others often sounds quite odd: that the number of my "research participants" is (a) undetermined and (b) not particularly important.

The simple answer is that social anthropology just doesn't really work like that. We don't normally count our participants (well, alright, sometimes we do, but for different purposes, such as getting grant money). We don't really even define the people contributing to our study as 'participants' in the first place – instead they would be our interlocutors, collaborators, informants. This is more than semantics – it represents a very different relationship between researcher and contributor than the one imagined when we speak of 'research participants'. Our interlocutors are the people who have very kindly agreed to let us into their homes, to become part of their communities, and to join their activities so that we could understand them better. So to be honest, if anyone is 'participating', it's not them; it's us.

Also, our research setting has no defined space or structure; you don't get to invite someone to your lab at 4 p.m. to do an experiment, or get people to answer your questionnaires. Data is generated much more spontaneously than that. After all, every conversation we have, however short, is data. Being taught how to make a surfboard is data. Having a pint of beer is data. So in this highly unstructured research environment, who would even count as a research participant?

In my case, I suppose the answer would be anyone who has ever enlightened me in any way about my research topic. People I interviewed, of course, but also people I exchanged emails with, people I chatted with at social gatherings, people who wrote books, blogs, or information brochures… In other words, hundreds and hundreds of people. But I don’t think that’s what the person asking me about my study meant. I suppose she was thinking more in terms of an exact number. A sample size. An ‘n’.

But like I said, I don’t have a sample size. I can’t really even say I have a sample. I realize to some people this very statement is quite mind-blowing. “No sample? So how can you tell if your study is at all representative?”

And that's just the thing: representativity works differently in social anthropology. In general, we just don't quite buy into the idea that the more people you talk to, the more likely you are to capture the big whole. I'll try to explain. Let's say I did a study on 1,000 people from – I don't know – let's say Crete. And I might have found that, for example, 68% of Cretans support, say, controlled immigration. That's a useful finding in many ways. But as a social anthropologist, I'm not particularly interested in this figure. Instead, I'm much more interested in learning about the variety of opinions, the plurality of voices, the disagreements, the dilemmas, the changes of heart. I want to understand how different people interpret the very meaning of the concept of immigration: What connotations and emotions does it raise in them? Does it arouse fear? Uncertainty? Hope? When thinking about immigration, does one look back into the past, or forward into the future? Does one see immigration as a moral issue? An economic issue? An identity issue? And at what point, and under what conditions, can 'one of them' finally become 'one of us'?

The thing is that there are hundreds of thousands of people living in Crete. And each person is unique. Each person has their own singular, complex, intricate and subtle reasons for doing what they do, saying what they say, and thinking what they think. In trying to capture all those differences, a sample of 1,000 is just as inadequate as a sample of three.

And yet there are similarities, of course: Cretans all live on the same island, speak the same language, are subject to the same laws, and are governed by the same people and institutions. Understanding the delicate and complicated mechanism by which those similarities have affected each of them individually would provide the very best answers, I think, as to who Cretans actually are as a group. And to understand this process, this mechanism, you don't need to talk to 1,000 people. Potentially, you could do it by talking to only 10, as long as you're taking your time and asking the right questions.

I'm not saying large-scale studies are inevitably blind to nuance. I don't mean to make such an overarching generalization. What I am saying, however, is that when nuance is all you care about, you tend to give much less regard to quantity, and much more attention to quality. When every new conversation I have with the same person consistently reveals new insights, new understandings, new knowledge, I'm in absolutely no rush to go looking for someone else to talk to.

Which leads me back to my own study. Studies on autism often have dozens, hundreds, even thousands of participants. And yet there are millions of autistic people out there. Can a study that considers the genetic / brain imaging / socio-economic data of 10,000 autistic people contribute to our understanding of what autism is? Of course it can. But can it help us understand what being autistic is like? Can it help us understand what autism means to the very people who are themselves autistic? Not really, no. However, an extremely well-told, sensitive and nuanced study of even just one autistic person can do exactly that. Just think of Temple Grandin, Dawn Prince-Hughes, or any other autistic author who has written so beautifully about their own lives, expanding our understanding in ways that large studies never have.

As I go about my study, day in, day out, trying to understand as much as I can about what being autistic is like, I find it most useful to focus on those highly specific instances where the interplay of many different circumstances creates meaningful and unique experiences. For that reason, yes, I do find it helpful to speak to people of different genders, age groups, physical abilities, marital statuses, and so on. But that's all this is – helpful. It's not crucial. After all, all I'm doing is looking for examples – elaborate examples – that would help me make sense of things. And there are only so many examples you really need, provided they're good enough, before it's just up to you to make of them what you can.


Anti-Psychiatry and the Neurodiversity Movement

I just finished reading this excellent article by Nick Crossley about the anti-psychiatry movement (the article is behind a paywall, unfortunately). It's funny – I've heard that term, 'the anti-psychiatry movement', tossed around so many times that I mistakenly came to believe I knew what it meant. Having read Crossley's article, I realise that I actually didn't. Well, not really, anyway. At any rate, it got me thinking about some of the similarities that might exist between that movement and the current neurodiversity movement. But before I get to that, a bit about the article.

In a nutshell, Crossley offers a really useful social and historical review of the anti-psychiatry movement as it emerged in Britain in the 1960s and 1970s. He seems somewhat enamoured of one particular figure whom the movement apparently centred on: a certain R.D. Laing (not to be confused with the contemporary American entertainer Artie Lange – not that anyone was going to, of course. But seeing as I already went there: if you can stomach some really filthy language, and you are not easily offended, check out this comic, if only for his hilarious and often unsettling recollections of – until quite recently – being a heroin addict).

Whether the history of the anti-psychiatry movement is necessarily best told through the story of R.D. Laing, or whether this focus is simply the result of Crossley's fascination with this character, I do not know. And anyway, as Crossley himself writes at the beginning of his paper, you can't ever tell every possible angle of a story; you simply need to choose the one piece of the story you want to tell the most. And in this case, Crossley made a great choice, because this really is a fantastic read. And R.D. Laing really is a fascinating figure. Do a little reading about him if you don't believe me.

But anyway, obviously the anti-psychiatry movement can't be credited to just this one man, brilliant and charismatic as he may well have been. In fact, Crossley's main argument is that Laing's ideas would never have caught on – could never have caught on – if the social, cultural, political, historical, and even disciplinary conditions hadn't been "just right". The conditions were ideal, though, right then and there, and consequently Laing's views made sense to people. Had he had those ideas 10 years earlier, they might never have been of any interest to anyone at all. In fact, had he had those ideas 10 years earlier, he may never have had them in the first place (a bit of a paradox there, you know what I mean), because the conditions wouldn't even have been right for him to be having those thoughts. You see my point?

In this sense, Crossley resolves the structure vs. agency debate as well as anyone can, really: individual human agents operate within an existing structure, he says, and they are also, to a large extent, products of that very structure. Individuals do have the power to shape society, sometimes even to change it; but this can only ever happen within the limitations posed by, and the possibilities afforded by, the existing social structures (themselves, of course, prone to being shaped and changed by individuals… and so the game goes on and on).

So while he does focus on R.D. Laing the person – on what it was that made him, and consequently his ideas and influence, unique among his peers in that particular place and time – what Crossley is saying is that what allowed these ideas to grow and become as influential as they did was the specific social and historical context from which they emerged and in which they were received. More specifically, Crossley is referring to: (1) the extremely powerful and fiercely uncompromising state of the discipline of psychiatry at that time; (2) the advent of the political new left in Britain; and (3) the emergence of the 1960s counterculture, whose members were all too happy to accept and appropriate a philosophy that viewed psychiatry as an instrument of political power, utilised to control, in both obvious and subtle ways, the bodies and minds of the masses.

So yeah, reading this paper led me to think about the parallels between the anti-psychiatry movement and the neurodiversity movement. There are quite a few similarities here (the most important one being that both movements are extremely varied and heterogeneous, so please excuse my gross generalising and simplification). Both movements are very critical of the medical construal of neurological variation as disease or disorder. And both movements call for de-medicalization, to some extent at least. Relatedly, both movements tend to view society, rather than the individual, as the locus of mental illness; in other words, they share the notion that to reduce individual suffering, one must target the 'pathologies' found in society and its institutions, not those imagined as properties of the individual. And importantly, both movements present an agenda that is as much political as it is academic, if not more so: power hierarchies – namely, who gets to say what about whom and why (e.g. money, influence, hegemony) – feature prominently in both movements' view of things.

And yet – and do correct me if I'm wrong – it seems that the neurodiversity movement isn't gaining as much popularity as the anti-psychiatry movement did back then. Sure, it's everywhere when you look for it, and every newspaper and magazine features the odd shocked article reporting that 'that' perspective exists (rarely crediting it as being equally valid as the generally accepted medical view of mental conditions), but it doesn't seem to come even close to the wide-reaching, near-world-changing popularity of its predecessor.

Why is that? Given the unbelievably impressive array of authors and speakers promoting the neurodiversity paradigm, I suppose it would make sense to shift our gaze away from individual agents and look for answers instead in the all-powerful 'context': the systems, institutions, discourses, disciplines – in short, society. That is where it would be 'decided', in history's tribunal, whether an idea will change the world or is destined to become a small footnote.

So what, then? Is society unable or unwilling to accept the neurodiversity paradigm as a valid alternative to the biomedical model of mental illness? Can't our existing discourses entertain this network of ideas? And aren't society's institutions – medical, legal, educational, legislative – able to absorb such notions into their heavy, complex and necessarily slow-moving mechanisms?

Maybe they are, maybe they aren't. The thing about social structure is that it constantly changes. What's true today might not necessarily be true tomorrow. So, time will tell, I suppose.


Going Native

A once-common ethos in social anthropology was to "go native": namely, to try one's best to adopt the habits, norms and lifestyles – and even, to the extent possible, the values and beliefs – of the people one studies. While "going native" is still often regarded as a prerequisite of good ethnographic work, this trope has since been problematized in a great many ways, and its meaning has gradually changed.

Primarily, it has been noted that the anthropologist – like any other person – always carries their own social and cultural baggage, and so despite their very best efforts, that baggage is always and inevitably going to shape their understanding of a particular cultural system, phenomenon, or event. In other words, one can never entirely 'go native'.

But although this goal of going native is clearly unattainable, the anthropologist is still expected to give it their best shot. For instance, if everyone in the village is going fishing, the anthropologist needs to get on a boat with the rest of them. If people around are fasting to commemorate a historical event, the anthropologist should do likewise. And if the general belief is that ancestral spirits cause disease, the anthropologist, upon falling ill, might be wise to consult a local healer as to what the cause of his/her specific ailment may be.

The idea is that while the great distance between people from different places and backgrounds can never be completely erased – in other words, truly 'becoming native' isn't possible – this distance can certainly be bridged to some extent. That's what social anthropology is all about, after all. And to bridge this distance, one must actively seek ways to come to appreciate the experiences and perspectives of those one wishes to understand. If not to become them, then at least to pretend to become them for long enough to gain an insight into their motivations and reasoning.

This issue haunted me throughout my fieldwork. Because while cultural traits can perhaps be partly acquired through learning a language, studying texts, engaging in specific kinds of labour, or assuming a particular social role in the community, my study does not focus on cultural differences. My study focuses on neurological differences. So what do I do? How can anyone participate in 'being autistic' if they were not born with that particular neurological variance? How does one adopt a form of difference, when that form of difference is embedded in the brain?

The short answer is “you can’t”, but I don’t think this matter should be resolved quite this quickly.

In my post about the film Playtime, I discussed my experience of watching the film as instrumental in my understanding of what being autistic might be like: the irritation at noise; the disorientation; the confusion; and the frequent sense of inconsistency and incoherence. It demonstrated that it is possible for an NT to perhaps get a glimpse into a neurodiverse experience.

So I began to pay more attention to these and similar sensations in my own everyday life. Take noise, for example. I used to find it very easy to block out background noises. It's a question of attention, after all, and I simply focused mine on the sounds that were relevant. Gradually, however, while I was doing my fieldwork, I began to consciously shift my attention outwards, effectively forcing myself to absorb more and more sounds from the environment: the traffic, the fridge, falling rain, anything. The world – my world – became a rather noisy place, at least as long as I was paying attention to it. But fascinatingly, what started as an exercise has become rather permanent – I have trained myself to be more sensitive to auditory stimuli.

The other day I was sitting in a busy restaurant in nice company, and I found it extremely difficult to concentrate on the conversation. The loud music and the constant chatter pressed so heavily on me that I began to develop a headache. Moreover, distracted by all the noise, I couldn't articulate myself as well as I wanted to. After a while, the loud sounds were all I could hear.

Mind you, I’m not saying “this is what being autistic feels like”. I have talked to enough autistic people to know that there is much, much more to it than this. And anyway, I doubt my auditory experience was even nearly as uncomfortable as it can be for so many people out there. Not even close, as far as I can tell.

But it’s a window, you know?

Because here is where this small discomfort led to a rather significant insight: what could I do about the fact that the noise was making me uncomfortable, ineloquent, and slightly ill? Should I excuse myself and step outside (not a huge improvement, as I have also become more sensitized to heat and air pollution)? Should I share my discomfort and apologize, which might result in my being stigmatized in unexpected ways? Should I suggest that we speed up our lunch and go find some place quieter? None of these options was ideal, and each would involve some breach of social etiquette. I was trapped.

Let me be clear: I wasn’t suffering. Mostly, I was enjoying myself. It would be inexcusable of me to claim that I understood what being autistic was like at that moment. However, I could also imagine it being ten times worse. And I could imagine finding it that much more difficult to navigate the social dimensions of this event. And I could imagine the stress, even panic, of trying to assess how each of my possible choices would turn out.

No, I wasn’t “becoming native”, but I did manage to briefly pretend that I was, if only for the sake of my own education.


How to Visualise Autism?

I very recently presented a paper to a group of my colleagues (that is, a bunch of PhD students in social anthropology, such as myself) at a conference. I’m not going to go into too much detail about the topic of the presentation, as this isn’t what this post is about, but maybe I’ll just give you the main gist of my argument, in case anyone’s interested.

I basically said that the claim that 'many autistic people find it difficult to recognize, manage, or express their emotions' might well be true, but it's also very inaccurate. I said that we need this statement to be our starting point, not our bottom line. I made the point that it's never enough to say "this person can't manage their emotions". Instead, we – and by 'we' I mostly mean researchers, but potentially anyone – need to ask why this person can't seem to recognize, manage or express their emotions. And I said that "because s/he's autistic" is never a good enough answer. Emotions, I said, are very complex processes that involve one's brain, one's body, one's memory and experience, one's social environment and upbringing, and one's sensory input. So really, any discussion about emotions has to take all these different factors into consideration. It's hard work, of course, but anything less would be pointless.

Anyway, that’s more or less what I had to say. I intend to pursue this line of thought over the next year or so, and I’ll try to keep you posted about where this is headed.

What I do want this post to be about is something a bit more mundane, but interesting nonetheless. Here's the thing: while I was preparing my talk, I tried to think about what I should put in my PowerPoint presentation. Presentations – or so claim the experts – should be accompanied by some form of visualization. Visuals are supposed to provide another focal point for the audience, making it easier to concentrate, and to appeal to their sense of sight while the speaker tends to their sense of hearing, which supposedly has the effect of making the whole thing more interesting. Or something like that.

Anthropologists are lucky, in that they often get to accompany their presentations with beautiful pictures from faraway countries: dramatic landscapes, interesting architecture, curious festivals, and exquisite costumes. Sometimes, interesting or unexpected contrasts do the trick, for example an elder from a tribe of hunter-gatherers using a smartphone, or devout Muslim women burning off calories on the treadmill at the gym. Or how about this highly recognizable photo? These are often tricks, of course. A Mongolian rural elder holding a smartphone might mean nothing more than that he was handed a smartphone a minute earlier. These types of pictures may well just be creative ways of appealing to the basic human tendency to admire – or revere – that which is different, unexpected or new. But so what? Everyone likes looking at nice things. And if a beautiful image helps you tell a story, all the better.

But as I went through my mental image-bank to try and figure out what I might use as visual references in my own presentation – about autistic adults in the UK – I very quickly realized that I don't have any such images. I took zero pictures during my fieldwork. And even if I had taken photos, I would never display them publicly, because my interlocutors' anonymity is a very important concern of mine. But say I could find a way around that – for example, by showing a picture of a person whose face isn't visible. Fine, OK, but still: what am I showing, exactly? A picture of a person who's autistic? As if her hair, clothes, or – what, exactly? – tell any story at all about this person as someone who is autistic. It would be meaningless. Just a picture of a person.

I'm reminded of the picture displayed in the English Wikipedia article on autism. When I was just starting out in the field, I took endless trips to that article for background and general reference (the merits and many shortcomings of using Wikipedia as a source on autism are an issue for another post; at any rate, as a novice in the field, I was unaware of the many issues this presented, and found it very useful). If you've visited that article more than once or twice, you probably know what photo I'm talking about: a little red-haired boy, with mostly his back to the camera, standing in front of an open cupboard and stacking cans in a high column, reaching as high as the boy's head.

I’m actually quite amazed by how long this image has stood there, uninterrupted.

I do like it, though. It may just be me, but there is something very empathetic about it. This boy is enjoying himself (I imagine) by doing something that might be slightly unorthodox, but so what? I love that the photographer just lets him have that fun, not interrupting him, not even asking him to face the camera when the picture is taken. The scene sort of makes me want to sit beside this boy and make my own column of cans. Or maybe even, if he lets me, make one together.

The person who took this photo (I did some detective work… couldn't resist. And yeah, OK, this information is just written there on the file page, so it's very lazy detective work) is the boy's mother, Nancy J. Price. Apparently she's a writer, among many other things, and you can see her webpage here. She took this photo in 2003, which would mean the small boy should now be 12 or 13 years old. And wait a minute while I look… Yes! What do you know? On her webpage there are some current photos of her now teenage son, whose name is Quinn, by the way. They're there under the heading 'My Favorite Face of Autism'. How can it not be? These are priceless.

But returning to the topic at hand: can there really be a 'face of autism' from a broader point of view? (That is, not that of a mum.) Honestly, what would it even look like? Would it be a child or an adult? A boy or a girl? A man or a woman? White or black? Would it be a university professor or an artist? A sci-fi fan or a social activist? Would they speak, sign, type? Would they be happy or upset? Lonely or surrounded by friends?

Invariably, any choice of how to visualize autism would be problematic. It would create a bias, perpetuate a stereotype, deflect attention, or just create controversy. The defining feature of autism, as far as I can tell, is diversity.

I eventually gave my paper without a PowerPoint presentation, and instead just let the text – heavy with quotes from an autistic interlocutor of mine – speak for itself. It worked out well, I think, but I can't help feeling that there's something I'm still missing. That by avoiding the issue I'm not doing it any justice.

Any ideas?


How to Write about Autism (or any other group, for that matter)

Update: Apparently, since I wrote this post, the article in question has been removed. That's very good news. I'm curious what led to its removal, though. Was it the author's choice, or was it a decision made by the people at Psychology Today? If anyone has any insights about this, I'm happy to hear them.

Recently, on PsychologyToday.com, a blogger and career advisor by the name of Marty Nemko, who also holds a PhD in educational psychology, wrote an article in which he offers some suggestions on how to help people with Asperger's syndrome find jobs. Many found this article offensive and devaluing of autistic people, and subsequently expressed their anger and concern on various platforms – including this Facebook event. In response, Dr. Nemko added an update to his article, defending his position while implicitly accusing those who criticised it of being easily impressionable, of not representing others in the autistic community, and of actually hurting autistic people's chances of finding work. His addition is, to be honest, much worse than the original article itself.

The prejudice and misconceptions in his claims have been discussed extensively online. Happily, there are quite enough opinionated, eloquent, and persuasive people in the autistic community, so my own contribution might very well be redundant. Still, inspired by Dr. Nemko's poorly written article and the way it was received, I thought I'd share some of my own realizations, as a neurotypical social anthropologist writing about autism, on how to – well – write about autism. This is by no means an exhaustive list; but it does point to, I think, some of the most disturbing errors often made by people writing about autism in the media and in the academy.

If you feel I've left out something important, if you feel I've made mistakes myself, or if you have any other contribution to make to this discussion, I'd be very happy to read your thoughts in the comment section. So, how (not) to write about autism (or any other group, for that matter):

 

You want to help people? Help them on their own terms

“Helping People with Asperger’s Syndrome or Autism Find Work” is the title of Dr. Nemko’s article. Fair enough, I suppose, but why don’t we take a few moments to ponder the meaning of this concept of ‘help’.

Help is often thought of in very positive terms, isn't it? It denotes altruism and empathy, reaching out to someone in need even at the expense of one's own self-interest. But is that really always the case? I mean, quite often 'help' is merely used as a means of earning influence or respect, or just as a way to make money. I'm not saying that profiting from helping others is necessarily immoral, mind you. I am saying that it's not necessarily unselfish. The details – e.g. who's helping whom and in what way – matter.

There are many political and social implications of 'help' that we should constantly be mindful of as well. When someone in a position of power – political, financial, social, whatever – decides to help someone disadvantaged, the inequality between them, the same inequality that led to their respective positions in the first place, is both strengthened and made painfully visible. This might be inevitable, and I certainly don't mean to imply that help is necessarily a bad thing for this reason. But when offering our help, even to those who seem to greatly need it, we need to be conscious of how we use the power we have just won over them. Throwing small change at the feet of a homeless person does much more harm than good; it is humiliating and degrading. "But those things aren't important when they obviously need money for food!" you might argue. Well, in all likelihood, they could do with some cash in their pocket, yes. But does this contradict their need for respect, for decency, for acknowledgement of their humanity?

This was an obvious example. Quite often, the mechanisms of degradation that come with 'help' are infinitely more subtle. This doesn't excuse us from our obligation to be mindful of them. The goal should be to balance, as much as possible, the unequal power relations between those in a position of privilege and those in a position of need. How is that done? By acknowledging that those who are disadvantaged, disabled or marginalized have their own idea of who they are, of what led to the position they're in, and most importantly, of what should be done about it. Forcing one's own idea of what another person or group of people needs onto them is not help. It is arrogance and audacity. And it's no surprise that people are angered and offended when such behaviour is directed at them.

But how would you know what their ideas are about the sort of support they need? It would take such hard work to find out! Why yes, yes it would. And if you can't be bothered to do that work, perhaps you should reconsider your desire to 'help' them in the first place.

But still, there is just so much information out there. It is sometimes difficult to tease out what's relevant and what's not, what's valid and what's false. I mean, using Wikipedia as one of his sources was a very poor decision by Dr. Nemko, but the rest of his sources are not inherently bad sources of information (Autism Speaks is actually a terrible source of information about autism, for various reasons, but in order to know that, one still has to do some amount of research. Only then would he have learned that quoting it as "the leading organization advocating for people on the autism spectrum" is so grossly inadequate as to invalidate all his further claims almost instantly). Dr. Nemko does mention in his update to the article that he spent four hours with a group of over 40 people with Asperger's, and spoke at length with the group leaders as well as some members. Indeed, if what he heard from these people corresponded with the Autism Speaks approach, for example, how could he ever have avoided making the mistakes he did? Well, this leads me to my next piece of advice:

Never assume the group you're writing about is homogeneous

Every group of people has differences of opinion among its members. These might be subtle differences, or they may be huge and insurmountable. It's easy to mistakenly think that if you've heard one perspective, or indeed ten perspectives, then you know the whole story, but that is never the case. If you're going to write about a large population, you must assume that such differences exist, and – this is crucial – you must actively seek out these differences. Don't stop researching until you find a controversy, and then try to determine how deep-rooted and widespread it is.

I'd like to think that if Dr. Nemko had been aware of the perspectives of those who subscribe to the neurodiversity movement, he would have written a very different article; if not different in its basic premise, then at least more respectful and more informed, less prejudiced and not quite as offensive.

As it turned out, Dr. Nemko did at some point become aware of the fact that not everyone sees autism as a disability; that some (I would say many) people are actually outraged by the notion that cleaning cars is a career that people with Asperger's should aspire to, or deeply insulted by the claim that "scavenging through garbage cans" is just one of those "unusual habits" that people on the autism spectrum seem to enjoy. Better late than never, I suppose, but instead of retracting his article, apologizing, and rethinking his engagement with the issue at hand, he chose a different course of action: he defended his questionable position by attacking those who found it offensive.

His attack is based on the premise that his critics represent a small, insignificant minority; that they were driven to criticise him under a false pretence (namely, that his article was poorly sourced – an accusation that was (a) absolutely true, and (b) not even the main issue); and that they can therefore be – if not completely ignored – swiftly brushed aside. Let us look beyond his arrogance and unshakable self-conviction. Here's the important thing: you don't get to choose who represents the group you're writing about. You've come across members of the group who feel you're completely wrong in everything you say about them? They're probably right. If you couldn't anticipate their angered reaction, you're obviously just not sufficiently familiar with the field to write about it.

Do not take liberties in defining the people you write about

Dr. Nemko uses the term Aspie very freely. In his update, he justifies this by explaining that he was told this was the term most people with an Asperger's diagnosis preferred. I'm not concerned with whether that's actually true or not; I don't think a proper survey was ever held, and either way, this is likely to vary with age, gender and other factors.

When a person with Asperger’s identifies as an Aspie, he or she is making a conscious choice – a political choice – to adopt the label of Asperger’s in a very particular way.  To raise certain connotations. To emphasize some aspects of their neurology; indeed of their being. It’s not up to us NTs to impose this label on everyone with an AS diagnosis. This is a discourse from which we are more or less excluded, and for good reasons. Similar (though different) examples exist in more or less every other minority group.

I imagine hearing some readers sigh with exasperation, “enough with this political correctness already! I should be allowed to call people what I want”. No you shouldn’t. And if you don’t understand why, you haven’t done your research, and you shouldn’t be writing about this group of people in the first place.

(A note: when I write about people on the autism spectrum, I often refer to them as autistic (or otherwise as "people on the autism spectrum", or simply "on the spectrum" for short). This might seem to contradict the point I just made. But here's the thing: it's been my experience that far more people are offended by "person with autism" than by "autistic person". This is because the former implies that autism is something external to the person, while the latter implies that autism is an important part of who that person is. There is no consensus on this matter, but I've been given the impression that while some find "autistic" distasteful, few are offended by it. A great many people, however, find "person with autism" extremely offensive, and I've been repeatedly told that "autistic" is, in most cases, preferable. See, for example, here and here.)

Back to the article… It gets worse: "Aspies … are often intelligent, kind, and eager". And elsewhere: "most Aspies are friendly". What might induce statements such as these? How can a population of millions be characterized in such simplistic terms? Need it be said? People with Asperger's are a very varied group. Yes, many of those I've met are extremely intelligent and often kind. Others less so. I'm sure there are many out there who are neither. I'm not just pointing out the inevitable inaccuracy of this statement – I'm troubled, once again, by its implications. It implies that there's "us" and "them". And that "we" are in a position to pass moral judgement on "them".

And it’s also extremely stereotypical. Are Black people intelligent? Are people who use wheelchairs kind? Are homosexuals friendly? These statements are not just absurd, they’re profoundly offensive and condescending.

Do not mention prevention or cure for autism as desirable technologies

I was going to simply type "just don't do it" and leave it at that, but obviously this needs some further clarification, seeing as folks like Dr. Nemko still feel it is proper to express wishes that a cure for autism be found. It's not. Here's why: regardless of whether autism is seen as a disability or not, it is nearly always experienced by autistic people as an inseparable part of their very being, of who they are. To say autism should be prevented is to tell them you wish they had never been born. To hope for an autism cure is to tell them you would have chosen to have them killed and replaced by someone else entirely – if only you had the technology to do so. It is categorically hurtful, insulting, immoral and cruel. So… just don't do it.

I could probably go on, but I’ll leave it at that for now. Maybe I’ll just end with this quote from the author of the article:

"If reporting based on a degree of research well beyond what's conducted for most blog posts generates a firestorm call for it being censured and censored from activists believing that autism is a difference, not a disability, fair-minded writers, indeed any fair-minded people considering where to devote their efforts will–unless they're masochists–turn their attention to issues other than disability, which frankly, in light of your comments, I plan to do. I'm sure you agree that's a good idea."

In this case, I quite agree with Dr. Nemko. If all he has to say about the vast amount of legitimate, well-articulated and detailed critique he received for this article is “you’re an ungrateful lot and I want nothing more to do with you”, then by all means! Not writing about disability is, in his case, probably a very good idea indeed. What do you think?

 


The Horse Boy


I haven't written about documentaries so far on this blog, so I figured I should probably begin this post by laying out some basic truths about documentary films. You know, just so we're all on the same page here.

A documentary film uses selective filming, editing, and narration to tell the viewer the story it wishes to tell. Nothing more, nothing less. That is absolutely fair, of course; storytelling is what documentary films are all about. However, seeing as that is the case, documentary films should never be taken at face value. They do not give the whole picture; and they don't necessarily – nor are they obliged to – give even an honest picture. The fact that the raw material from which they are crafted is footage of mostly spontaneous social interaction contributes greatly to their magic and appeal. But we must avoid using terms such as truth, reality, or objectivity when discussing documentaries. They're not necessarily any more "real" than a romantic comedy starring Adam Sandler. So there's simply no use in questioning their validity or truthfulness, any more than we would that of 50 First Dates. They're stories. They're representations of reality, yes; but that doesn't make them particularly real.

Right? Right. Now that that’s out of the way, I can begin.

I have to admit that as a social anthropologist studying autism, I have made a decision (not necessarily a conscious one) to focus on the experiences of autistic people themselves, rather than those of the people around them. I felt that the experiences and perspectives of parents of autistic children, for example – important as they may be – are already getting quite enough attention as it is. And maybe I just didn't want my own understanding of autism to be skewed by them. I can't vouch that this is the best way to go; some parents did suggest to me that my perspective would be intolerably swayed if I didn't consider theirs. Well, yes, maybe. The thing is that any perspective is always swayed, so you might as well be aware and in control of just how you allow yours to be influenced. Either way, the fact of the matter is that I mostly distance myself from the perspectives of parents of autistic children. This also means, almost inevitably, that I distance myself from the experiences of autistic children, except when those are reflected upon by autistic adults recalling their own childhoods.

So The Horse Boy was, in a way, an important reminder of the very obvious fact that every autistic adult has a history of being an autistic child. And that parents are very often the most influential figures in those children's lives. I needed this reminder, strange as it may seem.


*

I enjoyed The Horse Boy. It had a certain honesty to it that appealed to me and made me think long and hard not only about autism, but also about parenting – including my own. Because The Horse Boy wasn't so much a film about Rowan (the child). It was more a film about his parents – particularly his dad, Rupert, who apparently was the one to come up with the idea of going to Mongolia in the first place. It is a film about relationships: Rupert's relationship with his wife, with his son, with autism (as a thing, a category, a concept), with horses, with Mongolia, and with himself. And yes, also, implicitly, with a camera crew and the prospect of making a successful documentary. So The Horse Boy, the way I saw and interpreted it, is indeed a film about a parent's journey; but as with any good parent, his child – his son's well-being, comfort, happiness – is an inseparable part of his own experience of life, of his own well-being, comfort and happiness. These connections and interrelations are the stuff of which all families are made. So ultimately, this is a film about a family. A family that struggles. A family that needs help – and that seeks an unorthodox way to relieve itself of its struggles.

The Horse Boy isn't about healing autism, and it deserves credit for that. Sure, they all struggle with autism; Rowan especially, but his parents as well. But autism is never framed as a rival or an enemy; the idea of somehow eradicating it is never brought up. Autism is not conceptualized as something separate from who their son is. Instead, the family is simply trying to deal, in the best way it can, with the challenges that having an autistic child – or, in Rowan's case, being autistic – presents. The Horse Boy is about healing the distress that often accompanies being autistic, and that which accompanies loving and caring for an autistic person. This cannot be done with a drug or any other sort of biomedical intervention, because such interventions inevitably focus on the body. But the problem isn't in the body; or at least not just in the body. Indeed, some forms of distress are made of broken or loose social connections. Or impossible expectations. Or negative emotions. Or confusion. Or doubt. Or uncertainty. Or fear.

And it is these aspects of the child's and parents' distress that were targeted by the Mongolian shamans. And it is why – to the audience's presumed amazement – the rituals actually helped.

*

I hardly know anything about the Mongolian belief system or its traditional medicine. From the film, I can infer that it involves some sort of ancestor reverence and a belief in spirit possession (hence the shaman's claim that Rowan's soul is possessed by Kristin's deceased grandmother). Explicitly, the healing rituals are apparently meant to both appease the spirit and confront it in battle, in order to loosen its grip on the child. But we don't have to accept the metaphysical belief system of the shamans to appreciate the positive effect that such a ritual may have. There are other, more earthly ways to account for why this ritual – or rather, this series of rituals – made a difference in the lives of Rowan and his parents.

It's not so easy to tell what this effect could be, however. With the limited information we have, it is in fact quite impossible. To do that, we would have to take a much, much deeper look at the rites themselves: where exactly they were performed and why those places are significant, who precisely performed them and what their exact role in society is, what artefacts were used and what they symbolize, what texts were recited and what they mean, as well as the specific interactions between the healers and Rowan, between the healers and Rowan's parents, and between the various healers themselves. Not least, a very profound familiarity with this particular society's beliefs, values, and language would be required. Without any such knowledge, the best we can do is speculate. And speculate is precisely what I am going to do. I hope to show that whatever the specific characteristics and attributes of these rituals may be, such rituals in general may indeed have a positive, durable effect in relieving one's distress. And this is regardless of whether one is willing to accept the existence of spirits and demons.

Mainly, what the series of rituals carried out during the family's visit to Mongolia did was to put Rowan's suffering in context – a different context. It gave that suffering a narrative: a cause, a reason, an explanation. A history that goes far beyond his own still short existence. It located Rowan's suffering; and significantly, it located it outside of Rowan's own body (or more accurately, inside his body, but as an external intruder). The shamans never mentioned 'autism', mind you. 'Autism' was never the object targeted by their rituals. They targeted only the suffering; only the distress. So in the eyes of father, mother and son, what the healing rituals did was to strip Rowan's distress – as well as his parents' – of the binding label 'autism', with its usually-not-very-positive Western, modern, and medical connotations, and to place it elsewhere. Once this change is achieved – and it's not easy to achieve, as one can easily imagine – many other things are likely to change with it.

For example, the series of rituals had the rather immediate effect of altering Rowan's surroundings – mountains, horses, streams and so on – as well as, arguably, his symbolic position within those surroundings. It placed him in the centre – in fact, it placed him as the centre – rather than casting him as (metaphorically) lagging behind or being pulled forward to somehow keep up. It altered Rowan's parents' understanding of him and their expectations of him, thus in a sense modifying and revalidating Rowan's presumed role within the family, within society – and indeed within the world. It probably affected the relationship between the two parents, perhaps readjusting it so that it is more geared towards Rowan's own difficulties and capabilities, which are presumably very different from those Rupert and Kristin had imagined since before he was born. Or perhaps the rituals somehow ruptured Rowan's constant painful memory of a (short) lifetime of much distress, anxiety and discomfort, fixing his gaze forward instead, towards a more comfortable, accepting, bright future.

*

Like I said, I can’t be sure of any of these claims. They are all mere speculations – if that. But my point is that too often, when we think of autism and the distress and suffering that accompany it, we think of brain wiring, cognitive functioning, DNA strands etc. Those all play a part, yes. But other factors are also meaningful. It could be argued that other factors are even more meaningful. These ‘other’ factors, such as those noted above, are neither fixed nor inevitable aspects of autism. Their transformation shouldn’t ever be conceived of as easy, but it shouldn’t be reckoned impossible either. Our experience of the world is constructed of many types of materials, connected in an infinite number of ways. At least some of them are potentially alterable.

 

What do you make of all of this?

 

Related Posts:

Playtime

Playtime (1967)

It was suggested to me recently that if I want to watch a movie that really sheds some light on what being autistic is like, then I should watch Jacques Tati’s 1967 Playtime.

Here’s what M Kelter from Invisible Strings had to say (this is copied from the comments section of my blog):

 m kelter November 25, 2013 at 4:04 pm

…I’ll tell you one thing: I said there’s no film that assumes an autistic POV…in a way, that’s not quite true. There’s a film by French director Tati called Playtime. I’ve always felt that, on many levels, this film replicates my experiences of the world. The sensory experience…the presentation of “normal”…all of it comes from a POV that replicates how the world feels to me. If an NT wants to understand how it can feel to experience sensory overload…or how it can feel to be confused by non-verbal communication, by systems of normalcy…Playtime is a great experience. I love the film, but I also think it puts the viewer in a world that is warped, confusing, hard to process…it’s a world I am very, very familiar with. Anyway, instead of films that present autistics as these walking diagnostic manuals, I’d rather see more films like Playtime, films that assume a POV that pushes normal to the side.

Well, I took M Kelter’s advice, and I’m thrilled I did. What a ride.

Now, if you haven’t seen Playtime, I strongly urge you to give it a shot. Granted, it’s not for everyone. It hasn’t got a plot – not in the conventional sense, anyway – and it hasn’t really got characters – again, at least not in the conventional sense. There’s really nothing conventional about this film, to be honest, but that’s exactly what makes it so magical. It’s an experience more than anything else – a trip, if you will. By ‘trip’ I don’t mean the kind you get from LSD. Or maybe, somehow, it’s exactly what I mean; the same sort of metaphor applies. You get a glimpse of a world experienced differently: sounds are accentuated; your sense of orientation goes awry; things are not what they appear; confusion ensues – even panic. Is this the sort of thing you were referring to, M Kelter, when you wrote that “it puts the viewer in a world that is warped, confusing, hard to process … a world I am very, very familiar with”? I suppose it is. And so as an anthropologist trying to appreciate the experience of what it’s like to be autistic, I relished the opportunity for this masterfully crafted glimpse at a different way of seeing the world; however short and artificial.

Now, for those of you who haven’t seen the film, I should probably give a brief description of what goes on there. Yes, I will be doing Jacques Tati a terrible injustice, because the experience of watching this film is precisely the sort of experience that just can’t be put into words. It is very much a visual and auditory journey, not a narrative. But I might as well give it a try.

The film is set in an imaginary “modern Paris”. Not the old city with its unique architecture and very particular charm. No, there’s nothing old in Playtime; everything is sparkling new. This city, which was constructed solely for the purpose of the film, is ultra-modern, by what I imagine to have been the standards of “modernity” in 1967, the year the movie was released. It depicts modernity gone mad, stretched to its absolute point of absurdity. It’s all glass and metal, right angles, spick-and-span cleanliness, and abundant technology. Tati does a good job of not allowing his creation to be reduced to any crude or simplistic idea of good or bad; it is neither, really. It simply is. In fact, oppositions are a theme in Tativille; friendliness and alienation, order and chaos, dreariness and ecstasy are all present and are intermittently drawn to their extremes at varying degrees of simultaneity.

And the noise!! It’s everywhere and it’s relentless; it starts off intruding from a distance, but gradually it’s made to feel ubiquitous and near, as at some point it virtually becomes synchronized – or so it feels – with the viewer’s own fluctuating heart rate. Roaring vacuum cleaners, buzzing intercoms, wheezing sofa cushions, pounding footsteps, beeping car horns, deafening announcement speakers and screeching TV sets, and the list goes on and on… Oh and the chatter! The constant, rampant rambling, loud laughs and indistinguishable babbling are at times almost too much to bear. Also, Tativille and its residents are perpetually in motion – bustling roads and lively shops and escalators and elevators and construction work. Later, in the evening, increasingly frantic dancers and waiters blend in a less-than-perfect harmony, gradually seasoned by random drunkards, who quite naturally join in the seemingly improvised though endlessly complex choreography.

Utter disorientation is perhaps the hallmark of this film, as along with the protagonist Monsieur Hulot, the viewer is ingeniously led to constantly wonder in confusion: wait – are we inside or out? That there – is that a wall, a door, or just an absence? Am I looking into this building, or is it merely a reflection of that other one? Are those people up there dancing to the music…? (No, they’re just taking a window apart) Is that truck going to pull over or keep going? Are these people leaving or just standing up to say hello to friends? Is that desk an item for sale or is it a functioning desk in an office? Do I recognize this person from before or is it someone else entirely? What language is that person speaking? Is he speaking to me? Where has everybody gone? Where on earth is this film headed??

I could go on. There is so much more to Playtime. So many astute observations on the various layers of absurdity in modern urban living; on divisions and their breaking; on the fine, almost invisible line between intimacy and estrangement; on globalization, with its apparent alienating effect, and the underlying reality that people will always be people, for better or worse. Their behaviour does take very different forms, though; because the environment matters, and our interaction with it affects us, often in ways that are unpredictable to us, but that make sense nonetheless.

Well that’s enough of that. That’s all I can do to put into words what was in fact never meant to be worded. Just watch the film. I don’t think you’ll regret it.

So let us get back to the issue at hand – what has Playtime taught me about what being autistic might be like? What do I make of this film as an anthropologist? What’s to be learned?

As I implied a moment ago, I think Playtime can help us to understand the interaction between people and their environment; particularly those people who are more susceptible than others to being affected by their surroundings. But wait, are autistic people more susceptible than others to being affected by their surroundings? Well, yes, I suppose they are. We all experience the world through our senses. When our senses are enhanced or very sensitive, our experience of the world is likely to be affected.

Thus, it should be interesting to use Playtime as an example and ask: What role do autistic people’s enhanced sensory perception and sensitivity – often to the point of being unbearable, sometimes to the point of being mesmerizing and pleasurable – play in their lives?

These are difficult questions. Social anthropologists often struggle to incorporate the body – in any form – into their analyses. Try and ask yourselves: What’s the role of sensory input in social structures, in social relationships, in social forces, in social dynamics? It has a vast influence, clearly, but isn’t it inevitably just a bit vague and elusive?

What’s difficult about this is that when we talk of senses and sensory input, or the actual ways in which the environment becomes inscribed on our bodies and brains, we almost inevitably wind up over-generalizing. After all, every sound is different. Every sight, every texture, every smell or taste is unique. Even if two sounds – identical in every measurable way – are played to two different people in different contexts, they will have different meanings, they will be interpreted and experienced differently; indeed, they will be heard differently. Every single sensory input is, in many ways, singular and unique. How can such a singular occurrence be incorporated into any sort of general theory?

And how can this even be framed within a social science perspective?

Well, the single most relevant concept that can help us to start making sense of these questions is what’s known as ‘affect’.

I will not presume to define affect; better men and women than me have tried, to varying degrees of success; and I’m not yet at a point where I can synthesize these often very varied framings of this concept into anything very coherent, or even intelligible, without this turning into a heavily laden theoretical discussion, which would be grossly inappropriate for this platform (and not a whole lot of fun to write, either). Instead, I will toss around some very partial explanations of what ‘affect’, in the context of the social sciences, might mean:

Affect refers to the universal and innate human capacity to affect one’s environment (including other people) and be affected by it. Affect refers to that elusive sense of one’s body playing a significant role in the intensity of one’s experience of the world. Affect refers to the immediacy of interaction, that layer of it that has not yet been “contaminated” or thwarted by meaning, interpretation, or language. Affect emphasizes the singularity of any human experience, those aspects of it that can never be accurately represented, duplicated, translated, or reproduced. Affect refers to that constant sense of motion in one’s state of mind, mood or thought. It is that unnameable sensation that follows an idea, right before that sensation is translated into language to form just another idea. Affect is that which is inscribed on us through our senses in a way that makes a difference – whatever that difference may be.

Ah, I wish I could offer a more structured or coherent explanation. But that’s the whole thing with affect; by its very definition, it eludes structure and coherence. It is exactly that thing that language could never quite get a handle on, whether because it is pre-lingual, or extra-lingual, or simply ineffable. Affect pertains to those sensations we feel that we can never find quite the right words for. And the instant we find the words – the sensation is gone.

I don’t think I have ever spoken to anyone on the autism spectrum who hasn’t told me at one point or another about sensory sensitivities that they have. And this is never regarded as inconsequential, trivial, or insignificant. Quite often, in fact, sensory sensitivities are mentioned as the single most important aspect of living with autism. And it makes perfect sense, after all. Our senses are our window to the outside world; they are the medium through which our environments affect us, right from the moment of our birth (and, indeed, even beforehand). They are the basis of all learning, of all knowledge, of all experience. So when our senses work differently, this is likely to make pretty much everything different. Social interaction, language, communication, control of one’s limbs, the sense of one’s body, preferences, emotions – it impacts them all.

Have I explained anything at all? No, I don’t believe I have. But by throwing these observations around, I am merely hoping to sow some seeds of understanding. You know, for later.

So allow me to end this post with a question for those of you on the autism spectrum: what sensory sensitivities do you experience? And more importantly, how do you feel these affected you throughout your life? Feel free to give one or two examples, or if you don’t mind, a lengthier answer will do perfectly. I genuinely look forward to hearing your replies.

Related Posts:

Bartleby the Scrivener (Part 2 of 2)

In my previous post I began to discuss the wonderful short story by Herman Melville, ‘Bartleby the Scrivener’ (You can find it for free online, on Project Gutenberg.) Following a great many caveats, I suggested that to assume that Bartleby was autistic (had he been a real person living today, that is) is not an outrageous notion. His many eccentricities, as these are noted and interpreted by the (neurotypical) narrator, seem to indicate a neurological-developmental difference in Bartleby; one that today would very likely be deemed an autism spectrum condition.

Also in that post (oh just go ahead and read it, it’s not very long), I mentioned the claim that Melville himself may very well have had the traits that would qualify him for an autism diagnosis today. If that is indeed the case, then Bartleby is essentially a story of an autistic person, told by a neurotypical narrator, who is in turn written by an autistic author. It is autism seen from the eyes of neurotypicality as seen through the eyes of an autistic person. This makes for a fascinating focus for a blog on autism from a social anthropology perspective; a perspective that emphasizes social relationships and social dynamics, as well as the different points of view that people from various social, cultural and – yes – neurological groups have of themselves and of others.

So let’s begin. Finally.

Possibly the most prominent theme in Bartleby is the lawyer’s/narrator’s constant and ongoing struggle to understand Bartleby. Initially merely feeling perplexed and baffled by Bartleby’s determined yet polite refusal to adhere to his boss’s requests (“I would prefer not to”), the lawyer realizes that this is not done as a provocation; it is not an act of impertinence or disrespect. This realization makes it easier for him to excuse Bartleby’s disobedience. But he still doesn’t understand why Bartleby refuses. Choosing to make no further assumptions without compelling evidence (very anthropological of him), he decides the best course of action would be to simply ask Bartleby. Surely, if the scrivener has good reasons to refuse to do his job, he shall share them with his employer. But no such luck. “I would prefer not to”, Bartleby once again replies. And again. And again.

This word, “prefer”, which appears in various forms 47 times throughout the story, seems to fascinate Melville, as the characters in his tale all pick up the habit of using it themselves quite frequently. Not unaware of the massive effect that this single word had on him and his other employees, the lawyer tries to make sense of its gripping influence – but unfortunately, to no real avail. So just to have a bit of fun, I’ll give it a go myself, if you don’t mind: “Prefer” seems to denote a somewhat flexible approach to a matter. A personal inclination that’s not bound by any rule or law. As such, it is seen as more or less contingent; preferences change. But Bartleby’s preference doesn’t change. Ever. Not even when his life is put on the line, and he is imprisoned and at the point of near starvation. Unlike other people’s, Bartleby’s preference is as solid as a rock. Moreover, when someone has a preference, it is expected that there be a reason behind it. “Why do you refuse?” inquires the lawyer. “I would prefer not to”, Bartleby frustratingly repeats himself. Without an apparent reason behind it, and without a prospect of it ever changing, Bartleby’s preference gains almost mystical powers, against which there is nothing the lawyer feels he can do. Had Bartleby simply “refused”, he would have been instantly let go, and that would have been the end of it, as the lawyer himself admits. Had he stated a reason for his preferring “not to”, he may have won his employer’s sympathies, and been allowed to loiter idly in his office to his heart’s content. But he had done neither; thus, unable to fully resent Bartleby or fully accept him, his employer is left in a perpetual state of liminality – suspended between empathy and anger, kindness and cruelty, care and pity, determination and inaction.

There is something very telling about the fact that it is the lawyer, and not Bartleby, who seems to struggle most in this story. It is he who constantly questions and negotiates his own morality, on the basis of his relationship with Bartleby. Bartleby, for his part, is quite serene. He knows who he is and what he wants; and it is the very fact that he is so resolute about this that arouses such extreme and contradictory emotions in the lawyer. With which of his two main characters does Melville sympathize more, I wonder? Hard to say, really. While diametrically opposed in almost every way, both the lawyer and the scrivener are portrayed as generally positive characters. Well, if that’s the case, what is the problem? What is the source of all the tension, drama, and ultimate tragedy that occur in the story? Is it Bartleby’s fault, with his eccentric habits and preferences (which include not doing the job he has come to do)? Not really, no. Melville never seems to suggest it is. So is it the lawyer’s inability to elicit a response from Bartleby? To force an answer? To make him do his job? No, he’d done all that was in his power, surely, and Melville never implies otherwise. So what is it then? Who’s to blame? Where’s the fault?

*
I suggested previously that we might assume that Bartleby is autistic. It is an inaccurate assumption, to say the least, but it will do the trick, as it were, to help us understand a very simple – though not nearly adequately known or accepted – truth about autism. Let us imagine that Bartleby represents autistic people as a whole. And that the lawyer represents neurotypical society.

Autistic people are not sufficiently understood by neurotypicals (much like Bartleby is not understood by his boss). That much is more or less a fact. Curiously (or not), there is seldom any doubt among neurotypicals as to the source of this failure of communication. The question “where’s the fault?” is answered so hurriedly in autism research as to obscure the fact that it was ever worth asking. “It is in autistic people!” neurotypical society seems to enthusiastically proclaim: “I don’t understand those people”, they say. “And worse, they don’t seem to understand me! So there must be something wrong with them”. Researchers then go about looking for the specific location and source of this so-called “impairment” – is it in their genes? Is it in their brains? Were they exposed to pollutants? Infections? Abuse?

Damian Milton, an English sociologist, is perhaps the most eloquent author to frame the problem in a very different – and rather more productive – way; a way that is not dissimilar to what appears to have been Melville’s approach to the matter 150 years ago. Yes, neurotypicals don’t always understand the motives, intentions, and behaviours of autistic people, Milton asserts in his excellent article titled “On the ontological status of autism: the ‘double empathy problem’” (2012); but at the same time, autistic people don’t usually understand the motives, intentions, and behaviours of neurotypical people, either. So, here are two groups who regularly fail to communicate successfully with each other. What, other than prevailing discourses about normality as well as unequal power dynamics between those deemed “normal” and those deemed “deviant”, would compel anyone to immediately assume that the problem is fixed inside autistic people? This is an entirely false view, Milton argues. The problem is not fixed anywhere; it is simply not specific – it is bounded neither within autistic people nor within neurotypical people. Instead, the problem is relational. The communication problem lies in the relation between autist and neurotypical; between autism and neurotypicality. Only once we acknowledge this can we start seeking solutions.

Milton calls this the Double-Empathy Problem, and he defines it thus:

“The ‘double empathy problem’: a disjuncture in reciprocity between two differently disposed social actors which becomes more marked the wider the disjuncture in dispositional perceptions of the lifeworld – perceived as a breach in the ‘natural attitude’ of what constitutes ‘social reality’ for ‘non-autistic spectrum’ people and yet an everyday and often traumatic experience for ‘autistic people’. (Author’s concept and definition)” (Milton 2012:884)

Quite a handy concept, don’t you think?

I want to make a couple of notes on this. First, mind the part that says “…perceived as a breach in the ‘natural attitude’ of what constitutes ‘social reality’”. That the definition states that the breach is perceived, rather than simply ‘is’, is important.  It conveys an important message – the reality of autism (or “The Ontological Status of Autism”, as the article heading reads) is co-constructed in the social sphere by social actors; it is not static or inevitable, but contingent and fluid. Also note how ‘natural attitude’ is put into scare quotes; it is used almost ironically, I think, to indicate that there is nothing “natural” or permanent about “social reality”; but that it is instead constantly negotiated and changing. At any rate, it is certainly naturalized; namely, it is made to appear natural, but really, it is a social construct if there ever was one! Alas, the tall pile of construction rubble that was left behind is regularly swept clumsily under the rug.

Finally, and perhaps most importantly, Milton draws our attention to the fact that the double-empathy problem affects autistic people much more frequently, and much more severely, than it does neurotypicals. For NTs, the breaking down of communication with an autistic person is an anomaly, and may be frustrating. For the autistic person, it is everyday life, and it may very well be traumatizing.

 

Shall we have a quick go at seeing how this approach helps us understand the story a bit better? Yeah, why not.

 

The story of Bartleby and the lawyer does not occur in a vacuum – though we are given this impression up until the very last couple of pages. Only then are we made aware of the fact that Bartleby has a history. Before coming to work on Wall Street as a scrivener, Bartleby had worked at the Dead Letter Office; an office dedicated to processing broken communications: messages, gifts, and expressions of emotion and intent that never made it to their destination. A beautiful metaphor – and the sad story of Bartleby’s life. The lawyer constantly asks the reader for their sympathies; oh, how he struggles to make sense of Bartleby’s seemingly illegible behaviour! How hard he tries to accept Bartleby, to help him, to save him. Yes, he does. And it is admirable. But let us think of Bartleby. What to his employer was a single frustrating experience would most likely have been an excruciating recurring theme in Bartleby’s entire life. Broken communications, undeliverable messages, intercepted gestures. Time and time again. If he had finally grown weary of futilely trying to be understood, right up to the point of giving up altogether, can we blame him?

 

 

 

Bartleby the Scrivener (Part 1 of 2)

In a previous post, I discussed the claim that autism is a social construction; that in many ways, it is a product of modern society; and that autism hadn’t existed – in fact that it couldn’t have existed – before the diagnostic label known as autism had emerged. This is a somewhat controversial claim of course, and so it should be properly understood before we go any further. First, I should make it clear that the social constructivist perspective on autism does assert that in all likelihood, the various core biological and neurological aspects that are currently associated with autism have been around for as long as humans have existed. Yes indeed – there would always have been those people who experienced extreme sensitivity to sensory stimuli; who thought in patterns or pictures rather than in symbolic language; who found social interactions difficult, confusing, uncomfortable, or scary. None of this is new.

So to say that autism hadn’t existed before it was identified is merely meant to acknowledge and emphasize that the institutions associated with this specific label hadn’t yet existed; that the social and cultural ideas, stereotypes, beliefs, expectations and misconceptions regarding autistic people hadn’t yet existed; that relevant therapies and special education programs hadn’t yet existed; and that autism as a source of identity and understanding of one’s self hadn’t yet existed. So in the mid-1800s, for example, you simply couldn’t have been autistic. You might have had all the traits that would qualify you for an autism diagnosis today, for sure, but at the time, these traits would have been ascribed very different meanings, interpreted differently, and framed differently by both you and others.

But the question arises: how would such a person have been labelled back then?

An interesting way to try and approach this question is by asking how present-day autistic people – diagnosed late in life – were seen, labelled, and treated before they received their autism diagnoses. This is a question I often ask my interviewees, and the answers I receive are as interesting as they are varied. For an idea of the sort of answers I’m getting, it’s enough to take a look at this great post by Misplaced Mermaid (awesome blog title, btw – I like the mermaid metaphor better than the alien metaphor, though they’re admittedly quite similar) to appreciate just how many words, labels, categories, titles and adjectives are used to describe someone who, for various reasons, just doesn’t quite “fit in” (of course, “doesn’t fit in” is just another vague sort of label, isn’t it?). This is helpful in trying to imagine how autistic people (or, more accurately, people who today would have been labelled autistic) would have been seen and labelled before the category of autism ever existed.

An important thing to remember is that autism – fluid and dynamic as this category may be – is in fact quite fixed and steady compared to the labels one would have been ascribed with prior to the emergence of this label. Before, a person would have been said to be stupid by one group of people, and brilliant by another; labelled introverted by one, and outspoken by another; deemed crazy in one context, and holy in another. This is why so many people find relief and comfort in being diagnosed with autism later in life – finally they can stop negotiating a thousand different labels – some of which are actually in contradiction to one another – and settle on one single label that is supposed to replace all the others (of course, it doesn’t always fulfill that purpose).

But anyway, I digress. My point is this: claims that various historical figures were autistic need to be taken with a pinch of salt, as they say. They’re never entirely accurate. Sure, it’s an interesting intellectual exercise. And yes, it has very important political implications (namely, that autism can be a valued form of difference rather than a deficit or impairment) as well as research implications (e.g., that vaccines or modern-day pollutants do not cause autism). But given the very different social and cultural contexts in which people of the past lived, autism – as we currently understand and frame it – is just not a relevant category. It is an anachronism. Sort of like saying Joan of Arc was a feminist, or that Julius Caesar was Italian.

Right. Glad that’s over with. This very long introduction was merely a caveat for what this post is actually meant to be about – Herman Melville’s marvelous short story published in 1853: Bartleby the Scrivener.

*

It was recently mentioned to me that Herman Melville, the American novelist, has been said to be autistic (or that he had Asperger’s syndrome – though given the intrinsically inaccurate nature of both these claims, for the reasons outlined above, this is nitpicking). You can read about this claim here if you’re interested (“Writers on the Spectrum” by Julie Brown). It makes for an interesting read, surely, especially if you have read and loved Moby Dick; there are definitely parts in this novel when even the most enthused reader must stop for a moment and note that the author was really, really, really interested in whales. Like – really.

No, no – this doesn’t prove a thing. In mentioning Melville’s apparent obsession with whales I wasn’t intending anything but to make a humorous anecdote. Nor does Julie Brown’s engagement with Melville’s rigid breakfast habits, late onset of speech, preference for withdrawing from the company of others, awkward social demeanor and difficulties with making eye contact indicate anything but a curiosity… Ok, enough with this cuteness. Let’s face it: If the statements about Melville are correct (and I’ll leave this to be determined by others much more skilled and enthusiastic about this sort of thing than I am), then it’s a safe bet that had he been alive today, he would have been said to be autistic. And why not? Melville is certainly a respectable addition to the seemingly ever-growing “historical figures with Asperger’s” club. Granted, I personally have no urge or desire to act as a gatekeeper for this club. I’m quite happy to sit in the stands, and quietly sulk over the inaccuracy of it all.

*

If Melville had indeed had all the traits that today would have indicated an autism spectrum condition, then it is probably not a coincidence that Bartleby the Scrivener, the protagonist of Melville’s story that bears his name, is himself so stereotypically autistic. Except he can’t be stereotypically autistic, because autism hadn’t existed. Which means that the autism stereotype (or rather, in this case, the Asperger’s stereotype) hadn’t existed. So Bartleby is not a stereotype. And yet so many of the traits with which Melville describes him are commonly associated with Asperger’s. Here are a few quotes that I highlighted from the book (the numbers refer to locations in the Kindle edition):

“Meanwhile Bartleby sat in his hermitage, oblivious to every thing but his own peculiar business there”. – 178-178

(Bartleby tends to get extremely focused on his work to the point of being oblivious to his surroundings)

“His late remarkable conduct led me to regard his ways narrowly. I observed that he never went to dinner; indeed that he never went any where.” – 179-180

(Bartleby withdraws from society, and avoids social gatherings and interactions)

“He lives, then, on ginger-nuts, thought I; never eats a dinner, properly speaking;” – 183-184

(Bartleby is a picky eater; in fact, he only ever eats this one type of food)

“I had a singular confidence in his honesty. I felt my most precious papers perfectly safe in his hands.” – 224-228

(Bartleby is not inclined to deception or theft)

“He did not look at me while I spoke, but kept his glance fixed upon my bust of Cicero, which as I then sat, was directly behind me, some six inches above my head.” – 299-301

(Bartleby avoids eye-contact)

“”I would prefer to be left alone here,” said Bartleby, as if offended at being mobbed in his privacy.” – 321-322

(Bartleby is uncomfortable with his privacy being invaded)

“If he would but have named a single relative or friend, I would instantly have written, and urged their taking the poor fellow away to some convenient retreat. But he seemed alone, absolutely alone in the universe. A bit of wreck in the mid Atlantic.” – 342-344

(Bartleby is socially isolated)

“Going up stairs to my old haunt, there was Bartleby silently sitting upon the banister at the landing. “What are you doing here, Bartleby?” said I. “Sitting upon the banister,” he mildly replied.” – 481-483

(Bartleby interprets questions literally)

“”No: at present I would prefer not to make any change at all.”” 499-500

(Bartleby is resistant to change of any kind)

That’ll do for now, I suppose. In my post about Lars and the Real Girl, I mentioned how pointless it is to try and diagnose a fictional character. That it’s just speculation, anyhow. I still think it is pointless, but I have to admit that when this character was written at a time when autism hadn’t yet existed, and by an author who presumably would have been diagnosed with Asperger’s had he been alive today – then there’s actually something quite interesting (and confusing) about this sort of speculation. Is Bartleby autistic? How would this question even be properly phrased? It would need to be something along the lines of “assuming that Bartleby was an actual person; and assuming he had been alive today; (and assuming he had access to diagnostic services); would he have been diagnosed as autistic?” That’s a lot of ifs. Trying to resolve this intricate matter in any intellectually honest way would take way longer than I intend this post to be (this post is already way longer than I intended it to be). So I shall resolve this question the same way I resolved my query about Lars and the Real Girl – by tossing aside my objections, and simply proposing this: Let’s assume Bartleby was autistic. It will have to do. Sorry if you feel you’ve been cheated out of a long dialectic over the ontological status of cognitively deviant past fictional characters – but let’s just leave that one for another time.

Oh dear. I’m already at over 1600 words and I haven’t even started making my point about Bartleby yet.

Well then… To be continued.

Bartleby the Scrivener (Part 2 of 2)

Why Should an NT Anthropologist Try to Study Autism Anyway?

Recently, the morality and potential contribution of my project were called into question by a member of the autism community. The main argument put forward (briefly; this was Twitter after all) was that being neurotypical, I could never understand – presumably not even to an extent – what being autistic is like, and so my attempts are inevitably doomed to fail. Not only that, it was suggested that my attempts could actually be harmful, as any desire to discuss autism as an experience – without ever having experienced it myself – is likely to involve at least some degree of Othering; “the ultimate form of othering” were in fact the exact words used. I’ve considered this before, of course, but never had this thrown at me with such explicitness. And I had to admit that this was a very legitimate concern.

Not only that, my interlocutor doubted that my project could in any way be beneficial for the autistic community, as any advancement in autistic people’s struggle for acceptance and understanding can only come from within the autistic community itself, through self-advocacy and social activism.

These very fair allegations stuck with me throughout the two weeks that have since passed.

So I decided to take a break from my Autism in Movies series, and write about this dilemma instead.

First, here is my reply to this person via email (with very slight changes):

“Hi …,

Thanks for giving me this opportunity to try and convince you about the possible contribution my research (and others like mine) could have. Like I said in our short Twitter exchange, I think that in any field, disagreement is both inevitable and potentially productive. My goal here is not only to try and change your mind – but also to allow you to try and change mine. My ultimate motivation is to understand the various perspectives of autistic people (yes, these are infinitely varied of course). If these perspectives include an objection to the very idea of my research, well, that’s something I’m going to have to take very seriously indeed. But first things first.

Let me start by referring to your comment that the word ‘autistic’ in ‘a social anthropologist trying to figure out what being autistic actually means’ couldn’t be replaced with any other group and still be taken seriously. I disagree, primarily because this is precisely what social anthropology does! It’s about trying to make sense of perspectives, motivations, beliefs, and behaviours that are different from one’s own; I have colleagues trying to make sense of the role of music and dance in the beliefs of Krishna worshippers in India, of how sheep herders in Tibet understand the state and its institutions, and of how homosexual university students in South Korea interpret masculinity. The idea is not to document these groups – hasn’t been for decades. The idea is to properly understand them. What drives them, why they believe what they do, and how they make sense of their lives and their surroundings; in other words, what their experience of the world is like.

My own motivation is similar, but different. People of different cultures differ in many aspects – but their neurology is (presumably) similar. But autistic people and neurotypical people are different in a different way – our brains work differently (again, presumably… and then there are many other differences as well of course, big and small). Questions such as those posed above need to be addressed from a different angle, rethought, and constantly questioned – but I for one believe they are still very much worth asking.

How do autistic people in the UK experience the world around them? How do they experience their own sense of self and identity? How do autistic people experience socializing? How do they experience objects? How do they experience their own bodies? None of these questions have easy or straightforward answers, of course; not even close. It would take a book – in fact it WILL take a book – to only start to try and address them. It will require great care, so as not to generalize, simplify, reduce, fetishize, romanticize, offend, or misrepresent. I have taken a very hard task upon myself; I’m aware of the responsibility this entails.

But I do believe this can be done. No, I will never know what being autistic is like. To do that, I would have had to be born autistic. However, I am convinced that by talking to enough autistic people, asking the right questions, listening very carefully to the answers, reading autistic authors’ books and blogs, spending significant amounts of time with autistic people – I can come to appreciate, TO SOME EXTENT – what being autistic is like FOR THEM.

So how is this helpful? Because few – very few – neurotypical people will ever take nearly as much time to think about what being autistic is like as I do. Because even if they did, they haven’t had the training that I had, which allows me to proceed with extreme caution in this rather delicate task of representing an entire population of people. To consider the various social and cultural factors that take part in making autism what it is. To critically assess research in neuroscience, psychology, genetics etc., and tease out the valuable knowledge from the utter rubbish. I have spent the greater part of 8 years learning to do this. There’s still a lot I don’t know, but I suppose I am still more qualified for this sort of work than many other people. I can’t do it perfectly – I’m going to make mistakes here and there, I’m afraid. But I hope I can do it well nonetheless. Several other people could; But very few others actually DO.

So how IS this helpful? Hopefully, people – NT people, that is: parents, partners, teachers, therapists, researchers, employers, journalists, policy makers – could read my book and understand autism better. Simple as that. Not the genetics of it, or its cognitive mechanisms, or the neurochemistry involved, or the various treatments that are offered. But the actual experience; what being autistic is LIKE. What difficulties this entails. What sorts of satisfaction. In what ways precisely is autism a challenge. In what ways is it a gift. And what society can and should do to minimize the difficulties and allow for people’s individual talents to emerge. Whether you are a parent to an autistic child or the prime minister of a large country – you need to know these sorts of things. It would be my role to try and inform them.

Why shouldn’t autistic people do this work? Well they should, and they do. Often quite brilliantly. In some ways, they do it better than I ever could. My contribution lies in my training as a social scientist, my close familiarity with research methods, social theory, ethics etc.; and in the fact that I am neurotypical, which possibly – possibly – puts me in a position of ally and mediator. Mediation is important.

Finally, you mentioned my study sounds like the ultimate form of Othering. That was a blow!! This is what we (social anthropologists) invariably try so hard to avoid! And it’s always a big risk. I guess all I have to say about it is this: A study such as mine COULD be very Othering. But it doesn’t HAVE to be. It’s not a question of what research you do; but of how you do it.

… “

Out of the many, many concerns my letter addresses, the one I’m least confident about is the part where I explain what my own contribution as a neurotypical could be, and why it is morally acceptable that I should engage in this sort of work. And it is this question that I still struggle with most. Am I not just trespassing on other people’s domain? In choosing autism as my topic of interest, am I merely exploiting autism and autistic people? Is there even a slight chance of me doing a better job at this than someone who is actually on the autistic spectrum?

After all, there are indeed quite a few social scientists who are themselves autistic and already do brilliant work in studying autism from this particular angle; names that come to mind are Dawn Prince-Hughes, Donna Williams, Dinah Murray, Damian Milton, and Rachel Cohen-Rottenberg; most likely among many others. And then there are the many, many self-advocates and bloggers such as Lydia Brown, Kassiane S., Sparrow Rose Jones and others who may not be social scientists per se, but do fantastic work in studying and analyzing autism from a social and cultural perspective nonetheless.

These researchers and thinkers are all brilliant at what they do; they are creative, careful, diligent, honest, eloquent, and passionate. They may be autistic, but if any neurotypical reader finds it hard to relate to their ideas for some reason, the problem is with the reader. Absolutely no mediation is required between the ideas of Dawn Prince-Hughes, for example, and a neurotypical reader – despite her making no apologies for her unique style of doing social science.

So what am I doing in this field? What’s my contribution? Where’s the added value of my work?

I suppose for the time being, I can’t really answer this question; I honestly don’t know. But hopefully, by the time I will have completed my research and written my thesis, I will have learned enough, thought enough, and hopefully I will not only have realized but have in fact demonstrated that I am not a stowaway on the autism wagon. That I can contribute. And that, as the principal precept of health care and research mandates, I will at the very least ‘do no harm’. The only thing I can say with any certainty is that the burden of proof, in this matter, lies completely on me.

I would love to hear everyone’s thoughts on this.

Related Posts:

Temple Grandin

Temple Grandin (2010)

Temple Grandin is probably the best-known autistic person out there; to say that she’s a global celebrity would not be going too far. She has written several bestselling books, she is an extremely respected professional in her field, she is often cited and quoted in all sorts of discussions about autism (popular and journalistic, as well as academic) and she gives frequent lectures and interviews on various media. But maybe the most significant contribution to her celebrity status outside the autism community (and the cattle industry) is this 2010 film, which is based on her memoir. The first time I watched this film was when it came out. I knew very little of autism then, and I had never met an autistic person (as far as I knew). And at the time, I loved Temple Grandin. In fact, I have a strong feeling that (warning: cliché) this film was one of the things that made me want to study autism in the first place.

I was slightly hesitant before watching it again earlier this week. Was it really that good, I asked myself, or was I simply too naïve back then? Won’t I be terribly disappointed? I’m a bit more knowledgeable about autism now, I’m a lot more suspicious, and I’m also a bit cynical. In short, I was afraid to find that Temple Grandin wasn’t half as good as I remembered it. And seeing as I now know something of the actual person who inspired this film (not personally, unfortunately, but from reading her books and watching her lectures), I was worried I would find that Hollywood – as it often does – had done her an injustice.

I was happy to find I was wrong.

I mean yes, ok, a lot was left out (obviously), and yes, the film fell short of offering a complete account of the array of different aspects of autism (again, of course it would), but all in all, I felt Temple Grandin the movie is a beautiful, sensitive, and honest telling of a remarkable story about a truly exceptional person. As far as representations go, I thought the makers did a very good job. The autistic protagonist mostly makes her own choices, and while the love and care of those around her are framed as indispensable for her growth and achievement, these are acknowledged as secondary to Temple’s own intelligence, talent, and formidable sense of self-worth. The makers of Temple Grandin were not (as is often the case) rushing to spread some simplistic message about autism that downplays its disabling features, or reducing it to a generic form of mental disability that stands for weakness, dependence, or deficit. “Different, not less” was actually the film’s catchphrase, and the movie remained loyal to this message from beginning to end.

In other words, I loved Temple Grandin. Possibly even more so now than when I watched it first.

*

Temple Grandin was born in 1947, merely 4 years after autism was named and identified for the very first time. In a way, her life story can be said to parallel the history of autism itself.

Often, when people think of autism, they imagine a sort of static quality that exists solely within the confines of the body. A biological condition which, if not yet properly understood by science, is so only because we still lack the scientific knowledge; the truth is still veiled, but science is slowly and surely progressing towards unveiling it. This is a somewhat narrow view of a much more complicated reality. Like any medical category, and perhaps even more so, autism is as susceptible to historical, social and cultural conditions as it is to biological processes in the genes or in utero. After all, it takes people to recognize autism, define it, study it, explain it, treat it, experience it, represent it, and make sense of it. And those people come from different cultures, subscribe to different theories, value different methods of inquiry, and have different perspectives about what’s desirable, what’s normal, what’s important and what’s right. As time goes by, as society changes, autism, in a very real sense, changes with it.

Who are those people who arguably have so much influence over an apparently unbending neurological condition? Well, it’s quite a long list, actually. Neuroscientists of various sub-disciplines (cognitive neuroscience, neurophysiology, neurochemistry etc.), psychiatrists, and geneticists usually make up the group in charge of scientific research into autism, which obviously has a massive effect on how it is framed, categorized, understood and treated. Epidemiologists make a huge impact in determining and communicating the prevalence rates of autism. Psychologists are responsible for characterizing its cognitive and developmental aspects. There are speech therapists, occupational therapists and physical therapists who possibly know best which therapies work and which don’t (not that they are all in agreement). There are teachers in charge of instructing and educating autistic children and adults. There are those who provide welfare services, devise policy, and design legislation. There are those who advocate autism awareness and acceptance, and those who advocate the search for a cure. There are those who spend their everyday lives with autistic people, those who love them and know them best, namely their parents, siblings, partners, children or caregivers. There are those who write autism into books, make films, or write about it in newspapers and journals. There are those who study it from a humanities or social science perspective – like me. And ultimately, of course, there are those who are themselves autistic; living, talking, writing, acting, connecting, and making; they are sons, daughters, parents, partners, and friends; some are teachers, writers, researchers and artists; and they have significant influence on how autism is understood, treated, explained, experienced and even performed. Autism is therefore anything but static – it is as dynamic, fluid, and mutable as social categories get.

So wait, who are the real experts then? Well, they all are, but you know what happens when you have too many experts in one place. You get tension, disagreement, and conflict. And autism, with its myriad sorts of experts, is a fertile breeding ground for exactly that. In one of the very best books written about autism (in my opinion, that is), Gil Eyal and his colleagues, a team of sociologists, take a deep and thoughtful look into this field of contention, which they call (and named their book after) “The Autism Matrix”. Give it a read if you have the time. It’s not an easy read, but it’s well worth your effort.

Temple Grandin offers a wonderful insight into how these disagreements between the various sorts of experts come about.

The earliest scene in the movie, chronologically, depicts four-year-old Temple and her mother in a meeting with the doctor – most likely a psychiatrist. He diagnoses Temple as having autism. At that time autism diagnoses were scarce, and autistic children of Temple’s generation were very unlikely to be diagnosed as autistic. The clinician is thus arguably very well read and well trained, being familiar with such a “rare” new condition. Yet he still considers autism as interchangeable with childhood schizophrenia; this was an extremely common misconception at the time, and it remained common for decades more to come. Well trained as he may be, the doctor regards autism as hopeless and destructive. Temple will likely never speak, he says. It’s caused by a lack of bond between mother and child, he reproaches. And he recommends she should be institutionalized. The psychiatrist represents the common scientific perception of the time. He is an expert.

But Temple’s mother – a strong, educated, intelligent, and outspoken woman – rejects the doctor’s prognosis, and refuses to accept that her daughter will amount to nothing. She thus effectively opposes the psychiatrist’s expertise, and situates herself and her own understanding of her daughter’s condition – namely her own expertise as a mother – as equal, if not superior, to that of the trained physician. Decades later, such opposition by parents would become widespread, as parents began to collectively question the medical establishment’s approach to autism, its prognosis, and most importantly – its etiology. Thus, in contrast to what most experts claimed during the 1940s, 50s, 60s and 70s (and in some parts of the world still do), mothers knew they didn’t cause their children’s autism. It took a while for medical professionals to accept this as true – and nowadays, the fact that autism is innate is near consensus.

Another scene shows Temple’s team of school teachers frustrated by her behaviour. They consider expelling her. Only her science teacher realizes that her “bad” behaviour is not an integral part of her condition; but a result of a lack of support and care in the school environment. He takes it upon himself to mentor her, challenge her, and indeed love her.  “Just a science teacher”, but nevertheless an expert in his own right, he has a huge positive effect on her life.

In college, a psychologist interviews Temple about her squeeze machine; utterly oblivious to the communication barrier between them, he asserts it has a sexual purpose, and forbids her to use it. Her mother, concerned about the impression that Temple’s use of her machine might create among her peers, follows suit. This time it is Temple’s aunt who insists that if the machine helps Temple, it must be allowed, and encourages her niece to devise an experiment to demonstrate that the machine is in fact helpful. The authority of the mental health specialist is thus undermined by a caring relative, who shows herself to be more of an expert than he is, at least in this regard. When Temple conducts her own study as to the possible benefits of her squeeze machine, a remarkable social process unfolds: she situates herself as an expert on autism not only by being autistic, but also by employing scientific methods to substantiate her assertions. The division between experiential knowledge and scientific fact begins to blur.

In the very last scene, we witness what is to become a revolution in autism expertise, as Temple’s own experiences, theories, ideas, and perceptions of autism and its meaning impress heavily on the conference participants, who very symbolically banish the “expert” speaker from the stage as they ask her to take his place and speak in his stead. Autism self-advocacy is born.

It is important to remember that no type of expertise ever fully replaces another. Parents’ and educators’ expertise gained its rightful position alongside that of doctors and psychologists. Neurologists and geneticists (the more recent sciences) followed. Therapists and advocates, policy makers and social scientists, and not least autistic people all claim their right to make assertions, suggest theories, devise treatments, design priorities, and speak the ‘truth’. But truth is a problematic notion. One can never be sure that an objective ‘truth’ about autism can ever really be achieved. Instead, we are likely to witness an everlasting struggle between opposing or simply differently-focused discourses. A dynamic, ceaseless, complicated “matrix of expertise”, in the course of which the meaning of autism is never definitively unveiled, but is instead constantly negotiated.

Related Posts:

Marathon

Marathon (2005)

Roy Richard Grinker is definitely one of the best social anthropologists out there to study autism. If you’ve never read Unstrange Minds – Remapping the World of Autism, I strongly recommend it; it offers a wonderful overview of the social and cultural context of autism, and it’s extremely well-written and engaging. Among other things, Grinker presents the reader with a picture of how autism is framed, interpreted, and understood in various cultures throughout the world. One of these is South Korea. So to me, Marathon was not only a beautifully made film about a topic that fascinates me, it was also a visualization of a world I was somehow already partly familiar with, which made it all the more appealing. Anyway, in case I need to make it even more explicit – I loved Marathon. I thought it was excellent in so many different ways. Representation-wise, it’s not perfect, ok. But honestly, when the protagonist is a barely verbal, quite severely autistic character, you always run the risk of painting a very specific, not necessarily representative picture of autism. But within the limits of their protagonist’s individual capabilities, he is a positive, well-rounded character, with personal coherence (so that his autistic traits don’t just ‘dissolve’ during the movie as in so many other films) – and that’s really all anyone can ask for.

Cho-Won loves animals; one of his favourite things to do, ever since he was a child, is watching nature documentaries on TV – and he knows the narration by heart. His absolute favourite documentary is the one about the Serengeti Wilderness, an African region “uncontaminated by people”, where wild animals can roam free. He sympathizes not with the lions or cheetahs, but with the grass eaters, the prey animals. He adores the zebra, which is a prominent theme throughout the movie. This reminded me of a point Temple Grandin often makes, about the similarity of experience between autistic people and prey animals, for both of whom fear is an overbearing emotion (what do you think about this?). According to Grandin, this shared experience is what allows her to understand cattle as well as she does – and she obviously does.

But Cho-Won’s love of zebras didn’t lead him to become an animal behaviorist; instead, it mostly got him into trouble. Running away at the zoo, for example, or grabbing ladies’ purses or trousers if they happen to have a zebra pattern. Yet at various stages of the film, his love for the striped animal also has benefits; he seems to understand motherly compassion, as he repeats the narration about the mother zebra caring for her cub; he understands danger, knowing that zebras can’t afford to stop running or fall over, as they will be eaten; he understands fear, sympathizing with the zebra’s constant need to look out and take care.

Understanding and expressing emotions are other themes that run throughout the movie, and they’re articulated very carefully. “Your dad goes on a trip…” his therapist posits, “your mom is in the hospital … how would that make you feel?” She shows him drawings of faces: “Happy? Sad? Angry? Scared?” His dad incidentally did go on a long trip, and his mother did lie sick in the hospital. The film seems to indicate that in some indirect way, these exercises have indeed helped him sort out his emotions. Also, halfway through the film his mother teaches Cho-Won to smile, which he doesn’t quite get until the very last scene, where he finishes the race and his picture is taken. His running away at the zoo, in the beginning, is later recalled by him as a traumatic event, when he accuses Kyeong Sook of deliberately letting go of his hand, wanting him to run off. And his hesitant participation in the hiking trips with his mother evolves into a complex relationship between mother, son, coach, and long-distance running, where prizes (choco-pies, medals) are awarded for success and the fear of pain (an injection) thwarts defeat; this ultimately leads to the important and interesting dilemma that marks this film: whether Cho-Won enjoys running, or whether he was numbed into passive submission by his mother’s constant pushing.

In Grinker’s writing and others’ about autism in South Korea, what stands out is the significant and often frustrating role of mothers; they are held responsible for their child’s autism; they are expected to subdue it, treat it, or at the very least conceal it; they must negotiate the diagnosis, often exchanging it for an arguably less stigmatizing (but false) ‘reactive attachment disorder’ (RAD); often, the concept of ‘border’ autism is used to frame the condition as uncertain, contingent, and temporary. Mothers of autistic children (and adults too) are required to be simultaneously assertive and respectful as they battle exclusion and discrimination in a society where individuality and defiance are frowned upon, and conformity is mandated. They need to ceaselessly explain, apologize for, and excuse their children’s slow progress in school, in an environment where academic excellence is revered. To top it all off, fathers seldom take any part in caring for their autistic child, or in negotiating his/her place in society. Yes, none of these problems is entirely unique to Korea; but this specific combination of societal expectations and limitations makes the Korean “version” of autism quite idiosyncratic, and makes the role of mothers challenging in very particular ways. It is apparently within this social context that Cho-Won’s mother faces the formidable challenge of raising an autistic child.

This blog, and my research in general, is not about the experiences of parents, siblings, caretakers, or children of autistic people. These are of course very significant. It is important that they are recognized and dealt with, and many researchers very successfully do so; but it was my explicit decision to limit my research, within reason, to the experiences of autistic people themselves. However, Marathon represents a good example of one of those situations where the mother’s and child’s experiences become virtually indistinguishable. In fact, this is the very question the film’s makers pose: is running what Cho-Won wants, or is it what his mother wants? Is it he who can’t live without her, or she who can’t live without him? Was he upset by the coach’s unorthodox methods, or was she? Seeing as Kyeong Sook has always had to make nearly every single life decision for her son, big or small, how can one even tell which choices are his anymore? And is that really a valid question? In other words, can Cho-Won even make choices?

Still far from being able to answer these questions, I could try and suggest a way of approaching them.

Cho-Won’s mother struggles with her inability to determine whether he chooses running. She knows he likes zebras and choco-pies, meatballs dipped in ketchup and dancing in the supermarket. These preferences are easily communicated. What’s different about running is that it is painful (as his coach goes to great lengths to illustrate) and potentially dangerous. It is a more complex question than whether he likes choco-pies, for example, because running involves risk. Although she knows Cho-Won better than anyone, Kyeong Sook repeatedly fails at getting him to communicate his choice about running.

But perhaps we’re thinking about this all wrong; perhaps it’s not a problem of communication at all.

Usually, when one thinks of ‘choice’, one imagines an autonomous, individualized thought process, which might be influenced by the opinions of others (namely through advice and counsel), but is ultimately achieved within the bounded self. But is that really the only possible way to frame choice? After all, people make decisions in context. For example, a person is usually ‘ascribed’ certain markers of identity at birth: gender, nationality, or ethnicity. Others are similarly innate (I use this word here sloppily) but are found out later, such as sexual orientation. Still others are considered to be personal preferences, such as political affiliation, hobbies, or an inclination to monogamy. But what if we don’t regard any of these as either purely ‘given’ or purely matters of ‘choice’? What if, instead, we imagine choice as situational, whereby any ultimate state of being is a consequence of one’s environment, historical context, milieu, social expectations, and indeed some personal choice – this time in the traditional sense of the word?

Is being Jewish my choice? Choice has something to do with it, yes, because no one is forcing me to accept Judaism as my religion. But is it entirely my choice? Of course not. I was born to Jewish parents, grew up in an environment where almost everyone is of this faith, and I am expected by the people close to me to accept it. These factors aren’t the least bit marginal in the question of my choice of religion; they’re constitutive of it. You could say, “yes, but if you decide to become a Buddhist tomorrow, that would be entirely your choice”. Not true. The only way that would happen is if other social influences not only encouraged me, but also allowed me to convert to Buddhism – and if these influences somehow overpowered the former influence of my family and childhood friends. In that case, these new factors would have a significant share in “my” decision to convert. This is arguably true of any choice, large or small. Even my decision as to what to eat for lunch today is affected by all sorts of influences: my upbringing, my cultural preferences, my budget, my grocery store’s inventory, and how much time I can spare to prepare lunch for myself. It’s never really just “my” choice, is it? OK, you could frame it as ultimately my choice; but I propose a different perspective: that it’s partly my choice. Every individual’s choice is always just a part of the story of how things came to be.

Cho-Won may not have become a runner of his own accord. That his mother ‘pushed’ him into it (a rather judgmental term) obviously played a huge part. But who is ever unaffected by the decisions made for them by parents? And grandparents, and teachers, and political leaders and policy makers? Kyeong Sook may never get her son to articulate whether he accepts the risks of running marathons and chooses to do it anyway; and in no way am I implying that his desire is irrelevant or insignificant! But if he was given every opportunity to stop running, and kept at it; and if he enjoys the activity while he is pursuing it; and if he seems content with life in periods when he runs regularly; then his part of the ultimate choice to run seems more or less resolute. The question of whether it is his choice or his mum’s might simply be misguided; the choice is situational (a situation which includes Cho-Won’s communication difficulties, among all other factors). It is both his and hers. The fact that he’s autistic, the fact that his coach believes in him, and the fact that he’s a good runner were all important parts of this. The circumstances seem to have led him to register for the marathon; but then again, actually completing it – well, that was entirely up to him.

The Story of Luke (2012)

Like several other films previously discussed in this series (e.g. Snow Cake, Adam), The Story of Luke also begins with the death of the autistic protagonist’s main carer. This similarity between the movies is not surprising, because they actually share something quite fundamental: in all three films, autism is not engaged with in any serious way; instead, it is merely a plot device. Here’s how they work: the main carer of the autistic person is dead; someone else must now take care of him/her, and cope with the difficulties this entails; that someone will now learn valuable life lessons thanks to this experience, and come out fulfilled, happier, and morally better. Along the way, the autistic person him/herself will rather magically overcome hitherto insurmountable challenges due to an extraordinary newly acquired willpower.

But what’s the problem, you may ask. Wouldn’t inheriting a responsibility to look after a person with a developmental disability indeed be a huge challenge, enough to inspire a story worth telling? Perhaps. But then it’s not only autism, is it? The same is true for any other newly found, and not necessarily sought-after, responsibility for a fellow person: an elderly parent who suddenly needs care, a badly injured relative, or a baby born prematurely. So why bring up autism? Why this puzzling disability and not another, more straightforward one? I suspect the reason the movie makers chose autism is mainly its potential for creative dialogue, unique acting mannerisms, and humorous rhetoric. The specific and particular circumstances of living with autism, both positive and negative, are never treated seriously. OK; The Story of Luke is meant to be a comedy, I get that. But I happen to think that when a movie maker takes it upon him or herself to engage with a neurological condition, they need to actually have something to say about it, rather than using it to make some loosely relevant point (at best) about human compassion and people’s capacity for betterment.

So unsurprisingly, there are many problems with how autism is depicted in The Story of Luke, but one thing really annoyed me: out of all the ambitions and desires that motivate people to go out, to do stuff, to struggle, and to ultimately overcome – couldn’t the makers of Luke think of something a little more sophisticated than his desire to screw? His sole reason for wanting a job – and this is repeated constantly throughout the film – is to get Maria on a date and look at her “pretty breasts” again. If Luke weren’t autistic, we’d think he was an immature, chauvinistic moron, and root for his failure just out of spite. So instead of using autism as an excuse for such a questionable ambition, why not just give him a proper motivation to get a job; like, I don’t know, to contribute to society, to challenge himself, to earn the respect of his family and friends, or to, you know, pay the bills and buy groceries? We’re meant to believe Luke wants a job so desperately simply because he eventually wants to have sex with a lady with whom he spoke once. I doubt that such a haphazard motivation would drive anyone to make the huge, life-changing decisions and overcome the massive difficulties that he did – pretty as Maria’s breasts may well be.

But there I go again, getting all worked up about representations, when this shouldn’t even be what this blog is about. So once again, let us suspend our disbelief, and instead follow the path paved for us by the makers of this film and see how, despite it all, we can use The Story of Luke as an inspiration to discuss some important issues about autism in its social and cultural context.

My favourite scenes in the film were the ones in which Zack (Seth Green) – who’s presumably also on the autism spectrum – shares with Luke his knowledge of neurotypical behaviour. He takes his study very seriously – he observes neurotypicals carefully, takes notes, and makes inferences based on what he sees. On more than one occasion he refers to his approach as ‘scientific’. He makes use of all the tools at his disposal, including computer hardware and software, to devise experiments and practice interactions. When Luke is taken on board as Zack’s student, they treat Luke’s attempts at socializing with neurotypicals in the ‘outside world’ as field experiments. This is reminiscent of many autistic authors’ experience of constantly trying to ‘figure out’ other people. Seeing as people with autism often lack the capacity many of us apparently share of somehow automatically knowing what people mean, think, or feel, they need to apply a rational method to interpret human behavior. A ‘scientific’ method, so to speak. Temple Grandin succinctly compared this feeling of constantly being on guard and always having to think about and analyze people’s behaviour and actions to being ‘an anthropologist on Mars’. I wrote in a previous post about the alien metaphor commonly used in autism discourse. Today let’s discuss the other half of that statement by Grandin – the ‘anthropologist’ theme.

In my post about Mozart and the Whale I discussed the common issue of autistic people feeling pressured to mimic neurotypical behaviour and ‘act normal’ (with an emphasis on ‘acting’). But the scenes with Zack and his binoculars made me realize something that I had been somewhat oblivious to up until now, or at least had never fully appreciated: that in order to mimic neurotypical behaviour, you first need to have a pretty good idea of what actually constitutes ‘neurotypical behaviour’. And, well, that’s quite a tricky bit, isn’t it?

I remember reading in many memoirs and blogs by people on the autism spectrum about how, as children in the schoolyard, they would watch the kids around them; how strangely they were behaving, how inexplicable their actions often seemed. Similes such as ‘it was like they were speaking a foreign language that I just couldn’t understand’ are quite common. Indeed, aside from the aforementioned (and not a favourite of mine, as you may have realized) alien metaphor, the culture concept, stretched to its broadest meaning, is perhaps the best tool around to try and understand the sort of difference that autism brings about. The experience might be something like being in a strange land, not knowing – or even recognizing – the local language, habits, customs, and rules. I can only imagine the confusion and frustration one must feel, and the resulting sense of exclusion and solitude – as well as fear.

Except – what if you were to take it upon yourself to study real hard, and come to speak the language, learn the habits, and so on? That would certainly make things much easier, wouldn’t it? You could make friends, fit in, and become a member of the group. Even if you’ll never be 100% like ‘them’, at least you won’t be a total stranger any more. Well, not really. This might be true where different cultures are concerned, but it does not seem to be the case with autism. Like I wrote elsewhere, a neurological difference is grounded not in language, history, faith, or even the body – it is inscribed in the brain; and as such, it cannot (nor should it) be made to go away. A person can change their clothes, religion, surname, and accent. If they will it, they can even change their skin color. But they can’t change their brains. They can’t change who they are. That difference is there for good.

So what’s my point? It is this: Zack confused things. His honest attempt to learn neurotypical behaviour in order to mimic it was misguided. Mimicking may get one so far in certain situations, but from what I gather, such attempts too often result in utter stress, frustration, and an anxiety of being “found out” (though sometimes, unfortunately, these attempts just can’t be avoided – the social pressure to conform is simply too overpowering. Sometimes things like getting an education or employment depend – sad as this may be – on one’s ability to ‘pass’ as normal. This is not inevitable, of course. This is why political action is necessary). But then what about learning neurotypical behaviour, not in order to mimic it or pretend to be it, but purely to satisfy one’s intellectual curiosity, and to make some sense of what too often seems nonsensical? That might be quite a positive enterprise, wouldn’t it? Understanding those different from oneself, I imagine, could go a long way in assuaging fears, feeling comfortable around those Others, and feeling more at ease in their otherwise intimidating presence. It offers the safety of knowing your surroundings.

It’s also just really interesting. I mean hey, it’s essentially what I chose to do with my life.

So here, I believe, is a way anthropology can really contribute to bettering the lives of autistic people – it can be applied not just to the study of autism by neurotypicals (which is what I’m doing), but also to the autistic study of neurotypicals.

It’s long been held in my discipline that in order to be a good anthropologist, you need to leave your familiar surroundings. That’s the best – and some say the only – way to ‘stumble on every rock’, as it were; to be intrigued by what others consider mundane; to be bewildered by the otherwise banal; to ask questions that no one had thought of asking; to offer new sorts of explanations for phenomena mistakenly thought to be sufficiently understood. When Zack observes the ‘mating habits’ of the office NTs, his potential for coming up with novel ideas about human behaviour is in fact much greater than if a neurotypical anthropologist were to observe the same activities. For example, the fact that he attributes such importance to the frequency and duration of eye contact obviously derives from the fact that to him (presumably), any eye contact is an inexplicable activity. This puts him in the best of positions to try and explain this ‘odd’ behaviour – because it allows him to ask the least obvious questions about what makes it so.

But importantly, while Zack’s curiosity is potentially very productive, his methods are entirely wrong. Observation might lead you to ask interesting questions; but it alone will never allow you to come up with any valid answers. In referring to office flirtation as a ‘mating ritual’, Zack makes a mistake that is common among non-anthropologists (or early anthropologists, or just bad ones) – the assumption that human behaviour is as predictable and patterned as animal behaviour. It isn’t. Human behaviour, much like humans themselves, is as unpredictable as it is complex. Affect, agency, performance, subversion, selfhood, defiance, creativity, cultivation – these are just a fraction of the concepts social scientists use when analyzing social behaviour in humans, in order to emphasize just how irregular it is, and how different ‘social forces’ are (if those can even be said to exist) from the laws of physics, chemistry, or biology.

If Zack wanted to understand the ‘ritual’ he was observing (namely flirtation near the water cooler), he would have no choice but to come out of his office and ask the people involved what they thought was happening. Why did she laugh when he told that obviously unfunny joke; why did he look away when she looked at her watch; that sort of thing. He needn’t take their answers at face value, though – he’s absolutely entitled to make his own interpretation. That’s what anthropology is mostly about. But collecting their own thoughts and ideas is an invaluable first step. Without it, one never really has a chance of understanding other people. Without it, it’s just guesswork, and you’re very unlikely to guess correctly.

“The Institute for the Study of the Neurologically Typical” is a brilliantly articulated and witty example of what autistic anthropology shouldn’t be (it’s satirical, of course, and I’m just using it as a straw man). It paints a grotesque image of neurotypical behaviour that’s as invalid as it is funny. As a piece of social commentary, a satire – it’s brilliant. But this is not the kind of thing I’m proposing.

I wish we were all anthropologists – I honestly do. I think anthropological insight holds incredible potential for a better, more just society. But anthropology shouldn’t be taken too lightly either, lest people be taken lightly; and that’s the very thing anthropology tries to avoid. In other words, be anthropologists, I urge you; it might just make things easier; behaviours might become less confusing, people might be seen as less strange, and the world a bit less chaotic. But be good anthropologists; take a course or several, read some introductory anthropology books, follow some anthropology blogs – do it right. Bad anthropology is often ridiculously bad, and is more damaging than it is helpful. Good anthropology, on the other hand, can be incredibly useful.

Is this what I was getting at this whole time? I sincerely doubt it. But I do like what came out. So I might just keep it.

What do you think about all this? I would love to hear your thoughts.

Ben X

In what would usually be referred to as ’the real world’, Ben is a Belgian (Flemish) teenager; lonely, full of anxiety, practically mute, and autistic. Is he anxious and silent because he is autistic? Well, partly, yes. But it is also because he is treated ever so horribly by his classmates who tease him, bully him, beat him, drug him, and humiliate him. Despite having loving and caring parents, and at least one kind and considerate teacher, Ben’s ‘real world’ existence is full of suffering; it is quite unbearable.

But in the virtual on-screen world of Archlord, he is Ben X: a hero, a skilled warrior, an esteemed and respected figure, and a companion in battle, travel, and play to the equally skillful heroine who goes by the name of Scarlite.

“The princess of letters, on the other side of the land; who is always by my side when things get out of hand; who knows me without knowing my name; who could put me back together again; who could make me ‘sleep tight’.”

During the film, Ben constantly shifts back and forth between the two worlds he inhabits; master in one, victim in the other; admired in one, tortured in the other. Is it any surprise he prefers one over the other?

An excessive liking for video games is frequently framed as a problem; quite often, the word addiction is invoked. I don’t want to go into a debate about whether excessive gaming is really such a bad thing or not – I guess I just find it hard to generalize. In principle: yes, a healthy life should ideally consist of various kinds of stimuli, not just one. But if video games offer someone a place where they can finally shed their anxiety and have some much-needed relaxation and fun, while socializing in ways that would otherwise simply be too overwhelming, frightening, or painful – I don’t know, who’s to say that’s wrong? But like I said, this isn’t the discussion I was going for. What I would like to talk about is the role that virtual worlds play in the lives of those who inhabit them; an important role that is quite often misunderstood, and possibly unappreciated, by others.

Tom Boellstorff is a social anthropologist. Like many others, he has chosen to study a culture different from his own. It has its own rules, norms, values, and symbols. Like members of most other cultural groups, the members of his studied culture, too, have bodies and occupy a space. Except in this case, the bodies are avatars, and the space they occupy is the virtual world of Second Life. Years of bad rep from people who don’t quite understand this world have contributed to a conception of Second Life as offering a type of shallow, worthless sociality; a fake. Boellstorff shows exactly the opposite: Second Life, and presumably any other virtual world, is unequivocally social, and just as real as any other space where people live and act.

First, he says, let’s get rid of the useless distinction between real and virtual that only confuses things. Connections made online are every bit as real as the ones made offline; people make friends in chat rooms, meet life partners in virtual worlds, find support in discussion forums, and joke around with other people on Twitter. What’s not real about any of these activities? Is it different from making a friend in a book shop, being on a date in a bar, or sitting in a therapist’s clinic? Of course it is different. But is it any less real? Nope. It’s mediated, yes. But hey, so is a phone conversation, and no one has called that fake for probably a century. Once the landline became widespread and we grew accustomed to it, we accepted it as a valid form of communication. Virtual worlds are not essentially different from the landline insofar as they are a means of communication; visually elaborate and creatively designed, yes (how is that a bad thing?), but merely a means of communication nonetheless.

So instead of talking about the real world, when all we’re talking about is communication that isn’t mediated by technology, let’s talk about the actual world, as the counterpart of the virtual world. Different as the actual and the virtual worlds may be, both are equally real.

So if we agree that the story is not that there is the real world on one hand and a fake world on the other hand, but that we are simply talking about two spheres of the real world, we might want to ask – what’s the relationship between the two? How does one relate to the other?

Try this explanation: the virtual and the actual are distinct, but they are not separate. What happens in the actual world shapes one’s experience of the virtual world, and what happens in the virtual world shapes our interpretation of the actual world. It’s basically a continuous two-way dynamic, with mutual effects. How is this outlook helpful? Well, we could say, for example, that Ben’s online experiences make him see his actual-world reality in a certain way – in comparison (to the virtual world, that is), rather than simply as it is. This has both negative and positive consequences: it inspires hope in him, suggests strategies for improvement, and offers a reservoir of images for daydreaming and fantasizing. But at the same time, it highlights reality (actual-world reality) at its poorest: a violent, cruel, uncompassionate existence, where people’s potential is not realized, and power is used for evil instead of good. Conversely, Ben’s offline existence affects his virtual life: he uses his avatar to act out his emotions, to speak honestly to his friend, to show courage, but also to show weakness.

In other words, Ben is not leading two separate lives; he lives one life, which is divided into two spheres. Ben uses relatively novel technology to do this, but other than that, is it really so unique? Think of a businesswoman, spending weekdays in the office and weekends with her family; think of an army commander who spends several months at home followed by several months in the battlefield; think of a football player – running and kicking on the pitch, then having dinner with his wife; think of how we all take holidays; isn’t it quite the same thing, at the bottom of it? Same world, different spheres of life; same person, different ‘selves’.

Also:

If Ben inhabits two spheres, and occupies two bodies, can he be said to be two people? Well I wouldn’t go that far, but I would definitely agree that Ben has two selves. Not a real one and a fake one, mind you, as we’ve already established that they’re both perfectly real. They are certainly different, though; so perhaps the better question would be to ask in what way his two selves are different. And in this respect, the most obvious difference between the living body and the digital body is that while the former can feel, hear, touch, taste, smell, and sense – the latter can’t; it is numb, indifferent, desensitized.

So the living body has the capacity for pleasure; but also for pain. We tend to idealize bodily pleasure, and lament its absence. But like Pink Floyd suggested, numbness can often be comfortable; particularly if the alternative is not pleasure, but pain and suffering.

Another important difference is this: there is only so much one can do to change their physical body; of course you can exercise, eat better, dye your hair, dress according to whatever fashion you like, and even have cosmetic surgery – but mostly, it’s a given. You play the hand you’re dealt. Your virtual body, on the other hand – now that’s just one big variable. You can choose your gender (with the body parts to go with it), your appearance, even your species! Feel like a pixie toddler today? A bisexual cross-gender giant? A carnivorous double-humped camel? You can be that. And you can play the part with confidence, because no one will accuse you of acting childlike when you should be acting like an adult, of acting masculine when you’re expected to be ladylike, or of making jokes when the situation calls for serious behaviour – or the other way around. In other words, you can just be yourself – silly as that may sound – when you’re a made-up character. Self-fashioning – self-creation, even – are real possibilities in virtual worlds. They afford a type of creativity that goes beyond drawing on canvas or writing words on paper – a creativity that can be used to design your very self. And for some people, particularly those whose physical existence is an endless and futile struggle to either conform to or reject their expected social roles – to finally be able to choose the role that suits you, down to the very last detail – how appealing is that?

 

 “In games you can be whoever and whatever you like. Here you can only be one person. The jerk you see in the mirror. I have to teach him everything. For example, I have to teach him to laugh. People like that. To ‘give them a smile’, as they say. Which means smiling when really there’s nothing to smile about. That’s how you create your own avatar.”

 

Get it? We create avatars in the actual world all the time. Except it’s harder, and we get very little choice about what we want these avatars to be.

In his mind, Ben wishes for his two characters – his two selves – to unite. He can only fantasize about how his virtual self would react to the actual-world events he is forced to cope with. If only Ben had Ben X’s strength, courage, swordsmanship, and way with words, his life would be so much better. But he doesn’t; and it isn’t. It’s as hard as anyone can imagine. Ben X might have it made; but Ben is miserable.

At one critical juncture in the story, it seems as if at least one formidable achievement of Ben’s virtual self might finally be transported to his actual world: the friendship and affection of his co-player, Scarlite. Her actual-world self is worried about his, and she takes the train to meet him. But the embodied, actual-world version of Scarlite simply proves too petrifying for him to approach – he can’t even say hello. Then, at one tragic moment, the pain of Ben’s existence overwhelms him, and he decides to kill himself by jumping under a moving train. I’m not embarrassed to admit that I myself confused the actual world with the cinematic world, and cried “NO!!!” at the screen, with a very real voice, led by a very real emotion.

Ben doesn’t kill himself, luckily, though I dare speculate that his eventual decision to go on living was motivated solely by the movie makers’ concern for the intactness of their audience’s nearly shattered hearts, not by any real motivation of the protagonist. In that sense, Ben’s delusion of Scarlite as a loving girlfriend is a classic deus ex machina (adequately defined on Wikipedia as “a plot device whereby a seemingly unsolvable problem is suddenly and abruptly resolved by the contrived and unexpected intervention of some new event, character, ability, or object” – isn’t that precisely what Ben’s delusion of Scarlite is?). In other words, if Ben were to kill himself, this would have been a devastating story, and yet a remarkably sincere one (it seems the true story on which the novel was based did, in fact, end with suicide). Adolescent suicide is a real-world problem, so an honest cinematic depiction of it would not have been ill-suited. But as it happened, the makers of Ben X chose to convey the dangers and injustices of high-school brutality in a different way: through their protagonist’s somewhat playful ploy, tricking his community into believing he had committed suicide, and watching their collective conscience at work. To actually have their main character proceed to kill himself would, I believe, have sent the same message, but so much more powerfully.

But at any rate, Ben X offers quite a brilliant depiction, I thought, of an important part of the experience of being autistic. It’s not always charming naivety, childhood innocence, good-naturedness, and a wholesome dash of some much-called-for honesty; instead, sometimes, it’s about suffering, anxiety, depression, distress, loneliness, and even suicidal thoughts. Aren’t these very often what autistic adolescents must cope with, in a society where antipathy, intolerance, and plain cruelty are all too common?

“Then it was time for truthfulness. Shamefulness, painfulness.” Indeed. Sometimes it is.

What do you think?

Adam (2009)

I first wrote this post immediately after watching Adam for the first time – I wrote very fondly of the film, as I honestly enjoyed it at the time. I still think it’s a pretty good movie, but after reading this review by the awesome Caroline Narby at Bitch magazine, I see now that I was actually overlooking some crucial points. This revised post is an attempt to reconcile my formerly held opinion of the movie with my new one.

(Which makes me wonder, which of my opinions of the film is more authentically ‘mine’? The one I formulated myself right after watching it, or the revised one, reformulated after reading someone else’s thoughts and reflections? I guess the short answer is: the latter.)

Let’s start with the alien metaphor; it seems like you can’t read or watch anything about autism without coming across some mention of a life form from another planet. Why don’t we talk about that for a while?

In the very beginning of the film, during the opening captions, Beth has this to say:

“My favorite children’s book is about a little prince who came to earth from a distant asteroid. He meets a pilot whose plane has crashed in a desert. The little prince teaches the pilot many things but mainly about love. My father always told me I was like the little prince. But after I met Adam, I realized I was the pilot all along…”

So of course this is a reference to Antoine de Saint-Exupéry’s The Little Prince, originally published in French in 1943. If for some reason you haven’t read it yet, stop everything you’re doing and go read it now. Seriously, I’ll wait.

Are you done? Ok.

So what’s the deal? Why is the association of autism and alienism so common?

Ian Hacking, possibly one of the more prominent philosophers alive today, has written an article about exactly that. It’s called Humans, Aliens & Autism. Nothing too fancy or sophisticated, but a thoughtful explanation nonetheless. I’ll give you his argument in a nutshell.

“Aliens in modern space adventures may talk and walk like us, but by definition they are not human … Aliens can be better than us, as in moral fables such as ET. Most of the time they seem to be bent on destroying us … However, we seem to hold up aliens as mirrors to teach what is best or worst in us or in the human condition” (2009:45-6)

Hacking makes a great point here. When you think about it, aliens are always a metaphor; a metaphor for something that’s not us, but that’s not entirely different either. Some form of alter ego for the whole of humanity. An invention that’s meant to tell us something about who ‘we’, as a species, are.

Ok, so what does this have to do with autism?

Well, Hacking believes this has something to do with the difficulty many autistic people have with making eye contact. This is because people – and this has been thought true for millennia – feel they can know the person in front of them just by looking at their eyes. You know how they say that “the eyes are the mirror to the soul”? Well, that sort of thing. Looking someone in the eyes allows us (well, some of us) to know the person in front of us, or at the very least feel like we know them (this distinction is crucial). Hacking calls this perceived ability ‘Köhler’s phenomenon’. If we are denied access to this proverbial “mirror to the soul” for whatever reason (either because the person in front of us isn’t making eye contact, or because we aren’t), we are confused, uncomfortable, frustrated, and mostly – we lack the words to describe our experience. It’s too unique. Hence the need for a powerful metaphor.

Hacking argues that “…that kind of immediate understanding that Köhler described is not the common property and practice of that part of humankind that is autistic”, and he concludes: “We asked, “why does the metaphor of the alien crop up so often in fact and fiction?” We can now state an answer: because of the absence of Köhler’s phenomena in relations between neurotypicals and autistic people.” (2009:52) So the reason aliens come up so often in talk about autism is that autistics and neurotypicals do not share a certain ‘bedrock’ of experience that allows each other’s inner being to be projected outwardly and seen directly by the observer. This creates a feeling of unusual strangeness, which is well reflected in the use of the alien metaphor.

Arghhh, I dunno. I mean OK, Hacking’s explanation is relatively straightforward and logical (if a bit obvious), and I don’t feel a burning need to confront it. But I’m not very satisfied with it, either. Because what Hacking might be overlooking is the sort of power dynamic that is reinforced whenever the alien metaphor is used. Difference is very rarely neutral. Seeing as the alien metaphor invokes a very profound feeling of difference, we need to ask what the political implications are of referring to an entire group of people as ‘aliens’ (and I suppose this is where the tastefulness of the alien metaphor in Adam is brought into question). It’s not necessarily anything as simplistic as ‘we are good, they are bad’; we already saw that, as in E.T. – or The Little Prince, for that matter – the alien is often morally superior to the earthlings. I would think the risk of using the alien metaphor is that it reproduces a state of affairs in which one group (neurotypicals, in this case) is more privileged to determine the extent of the difference between it and the other group. The voice is always the earthling’s voice. Hence the voice of the autistic person, and his/her way of defining or categorizing themselves, is not taken into consideration. In Adam, the alien metaphor shapes the viewer’s experience of the story in the following way: despite Adam’s name being in the title, it makes it a story about Beth. Because if Adam is the alien, then we are quite forcefully made to view the story through Beth’s perspective of him. As Caroline Narby rightfully points out, Adam becomes passive; merely a plot vehicle, whose “ultimate purpose is the moral instruction and betterment of the non-disabled Beth and, by extension, of the audience.” Spot on.

In its use of the alien metaphor, Adam is a striking example of Othering. It is not an ill-intentioned endeavour at discussing autism, but a misguided one nonetheless. Difference is fine – we need differences, we thrive on differences. Sameness should not be an ideal, and differences should not be concealed. But one has to be sensitive to power dynamics when discussing differences. To refer to another as so strange that he may well have come from a different planet is, well, plain wrong. Not just for its implications (the false assumption that autistics can never be sufficiently understood by us NTs – so what’s the use of even trying?), but for the very reasoning that brought it on. Autism means difference, but not THAT much of a difference. We don’t need to look up to the stars to account for this difference – the people on Planet Earth are sufficiently diverse. And we are all equally human.

And then a harder question creeps up: what to make of all those instances when autistic people refer to themselves as aliens – Wrong Planet being the most prominent example. I’m going to leave this question open for now. But I would love to hear what you all have to say about this!

I actually devoted way too much text to what is a very small part of the film – I’m aware of that. But I thought it makes an interesting point of discussion nonetheless. I want to briefly make one more point, though: I did think Adam was probably the most socially and politically aware movie about autism of all those I have watched so far.

About halfway through the plot, Adam undergoes a series of unfortunate events. First, he unexpectedly loses his job; not because he did bad work, but for failing to adhere to his boss’s instructions (instead of a plain talking doll, Adam makes one with artificial intelligence; a brilliant idea, but not too practical from a commercial point of view). Then, cardboard box in hand (apparently the universal ‘I just lost my job’ signifier, in American cinema at least), he goes to watch the kids at the school where Beth works. He just needed a splash of childhood innocence to cheer him up a bit. Failing to see why an adult man watching children might worry some people, he is stopped by the police, and this quickly escalates into a violent and degrading affair. Then unemployment, and depression, and anxiety, and self-injurious behaviour.

This sequence was so political that I had to reassure myself that I wasn’t reading too much into it – this is an American movie, after all. But no, it’s all there. And I have to give the creators credit; it’s very well done. See, I come across a lot of literature on autism in sociology, social work, public health, education, law, etc. So much of this really good research is concerned with the types of difficulties autistic people face that are unequivocally social, similar to those depicted in Adam. Being thrown out of school or university; being shamed in public; losing jobs and failing to find employment; being arrested and incarcerated; even winding up living on the street – these are all relatively common experiences for people with autism; at the very least more common than for the general population. Of course, just because someone is autistic does not automatically make him/her unaccountable for their actions; but there is obviously room to take a person’s atypical neurology (and life history as autistic) into account when sanctioning them with expulsion, dismissal, arrest, or incarceration. And this is seldom done. That’s a political problem for autistic people; and it is a problem that the makers of Adam rather courageously took it upon themselves to engage with.

So yes, sure, you would expect Adam to realize that complying with his boss’s directions is important if he wants to keep his job. You might expect him to realize that there’s a perfectly good reason for police to want to look at his ID when he’s staring at children through the school fence; and there’s definitely an unnecessary implication of violent tendencies in his banging his head against the mirror. In many subtle ways, Adam’s depiction of autism is inaccurate, stereotypical, or simplistic. But as far as creating some awareness of the sorts of problems autistic people have to deal with in this world (those that go beyond social awkwardness or an inability to pick up social cues), I thought the makers of Adam did a very decent job.

So good on them.

And while we’re at it, I thought it was really fair that Adam tells Beth about his having Asperger’s himself, rather than some doctor or psychologist sharing this information; I also thought it was very cool that in order to learn about Asperger’s and to form an opinion on whether Adam is good relationship material or not (a reasonable concern), Beth reads an autobiography written by an Aspie (it was Pretending to be Normal by Holliday Willey – haven’t read this one yet); I loved the fact that the term neurotypical is used in the movie, and even as a sort of caveat to psychologists’ expertise (most of them being neurotypical, and therefore having limited knowledge about autism); and I even thought it was a brave choice to write the following lines for Beth during her fight with Adam (after he freaks out about her lying to him, and wishes her dad to go to prison for life): “You’re a child Adam. Fuck Asperger’s. You’re a fucking child”. Was that over the line? Probably. But I appreciated the implication: just because Adam is an Aspie, this does not excuse him for acting like an asshole. And that’s super fair, isn’t it? In some roundabout way, I felt this was a fairer treatment of autism than in most movies.

What do you think? I want to know.

My Name is Khan

Rizvan Khan grows up in a Mumbai neighborhood – possibly a slum. He is different from the other children; he takes things way too literally; loud noises and big crowds make him anxious; he dislikes being hugged, and gets upset by the color yellow (is a dislike of one specific color in any way common among autistics? I personally have never heard of this). He’s bullied at school, but his genius for mechanics earns him a respectable role in his community. His mannerisms are typical – perhaps stereotypical – of an autistic boy. But there is no real surprise there, or any room to wonder; we were already told by the movie’s opening captions that “The protagonist in the film suffers from Asperger’s syndrome, a form of autism,” and we are further informed that “While the film endeavours to depict the character as authentically and sensitively as possible, it is a work of fiction and hence certain creative liberties have been taken in the portrayal of the condition.”

What an interesting statement. Why was it put there? Is it meant to appease those who might be offended by the inaccurate representation of autism in the film? Is it meant to forewarn the viewer not to take its portrayal of autism at face value, lest he/she regard this as a project meant to educate, rather than entertain? Were the lessons from ‘Rain Man’ learnt – with its huge but unjustified effect on the understanding of autism in the English-speaking world? Whatever the reason, I appreciated the film makers’ effort to qualify their depiction of autism as not necessarily accurate; after all, no depiction of autism in film can ever be 100% accurate (nor, for that matter, can any depiction of anything else), so best to be aware of that fact, rather than to recklessly assume the role of educator.

As Rizvan reaches adulthood and moves to the US, he is diagnosed with Asperger’s syndrome by his sister-in-law, Hassina. “She was from Brooklyn, New York City”, he tells us. “She taught psychology in the university here.  She was the first to find out that I had Asperger’s syndrome. My fear of new places, new people. My hatred for the colour yellow and sharp sounds. The reason for me being so different from everyone was defined in just two words: Asperger’s syndrome.” Quite beautifully put, don’t you think?

Rizvan falls in love with a woman, whom he then marries. Later, following a terrible tragedy, he goes on the road, getting involved in all sorts of Forrest-Gump-like adventures, till the eventual and predictable happy ending, when he (spoiler alert) meets the president of the United States. Realism was never intended by the film’s makers. Over-acting and an inclination to melodramatic over-the-top-ness are hallmarks of Indian cinema, and My Name is Khan is no exception. But having said that, I did really enjoy the movie. I sympathized with and rooted for Rizvan, and I completely ‘got’ his love for the beautiful Mandira. I cried my eyes out in the sad bits, jollily bobbed my head during the Indian musical montages, and laughed at the good-natured Bollywood allusions. All in all, I thought My Name is Khan was very good. And it raised some extremely interesting issues.

My Name is Khan is very clearly a story about difference. But the type of difference that is discussed is not so straightforwardly laid out. Initially, we are led to believe that the film deals with a neurological difference; an autistic boy growing up in a neurotypical environment (as is usually the case); treated with cruelty by his peers, but loved and understood by his mother: “No doctor could ever tell her why I was the way I was” he narrates; “But Amni… she never felt the need to know why. I don’t know how, but she found a way to know me”.

But then the plot unexpectedly turns to focus on a quite different sort of difference; riots break out in Mumbai between Muslims and Hindus. Ethnic and religious rivalry becomes the focus of the story.

As a boy, Rizvan, a Muslim, repeats to his mother some random violent rant against Hindus that he overheard in the street. Amni, outraged, explains to him in a way she knows he will understand: Muslims and Hindus are exactly the same. People are only different insofar as they are either good or bad; that is the only difference that exists.

But if that’s the case, what can be said about Rizvan’s own way of being different? It seems to be implied that it is as insignificant as ethnic or religious differences. But is that really true?

The immediate implication of this message is obviously positive; it is that differences don’t matter, we are all the same; we are all equal. We should thus accept one another, love one another, and judge each other based on actions – namely, what one does – rather than on properties – namely, what one is (does anyone find this reminiscent of yet another Forrest Gump motif? “My mama always said”, Gump kept repeating, “stupid is as stupid does”). This is a peaceful message of tolerance. But there is another side to this. In asserting that people are all the same, and by implicitly comparing ethnic differences with neurological differences, Amni ignores an important fact – that Rizvan’s way of being different is, well, different. It is not grounded in beliefs, traditions, texts, language, ancestry, or places of worship. It is not even grounded in the body, as is sometimes the case (from henna-dyed hair through circumcision to skin color). Instead, the difference between Rizvan and his peers is grounded in their respective brains; in their minds; in – some would say – the very thing that makes them human to begin with.

In other words, we could regard ethnic or religious differences – as well as nationality or gender – as mere add-ons, under which we are all essentially the same. But when neurological differences are thought of in the same way, this poses some difficulties. Autism involves a different wiring of the brain, a different mechanism of cognitive processing; so if autism is also such an add-on, what is underneath it? Is there an underneath? Because if there isn’t (it’s just turtles all the way down…), can autistic people really be said to be the same as neurotypicals? What would be the nature of this sameness?

The view that autistic people and neurotypical people are essentially the same is obviously good-intentioned, but it’s inaccurate. Primarily, it relies on the assumption that in order to achieve equality and acceptance, we first need to establish sameness. That’s not necessarily so. Equality and acceptance can be similarly achieved by simply acknowledging the fact that people are different; that this difference is not necessarily at the surface level, but at the very core of what makes us human; that this difference in no way implies the superiority of some over others, or dehumanizes certain groups; quite the opposite. It implies that there is more than one way to be human. It implies that in order to achieve personhood, one does not need to first establish similarity to the normal or the typical. One does not need to change their ways, to mimic, or to pretend. One can be divergent, even radically so, and still be every bit as human as someone who is the very definition of typicality.

Kristin Bumiller wrote about this in her 2008 article entitled ‘Quirky Citizens: Autism, Gender, and Reimagining Disability’. She believes that autism advocacy, and the neurodiversity movement in particular, has much more to offer society than ‘just’ promoting acceptance of autistic people (crucial in itself). “In their quirkiness”, she writes, “[autistic people] contribute to a culture of citizenship that fosters equality without sameness.” Neurodiversity fosters citizenship based neither on sameness nor on difference (because both imply the existence of a benchmark norm), but on inclusion and acceptance; on individual roles and contributions. “Although neurodiversity is most important to people who identify as being on the spectrum,” she later adds, “it also has the potential to enrich society and change how we understand ourselves and other people.” (2008:982)

And how grandiosely is this potential realized by the protagonist of My Name is Khan, whose life course is continuously affected by the politics of difference. His brother rejects his Hindu wife. “You cannot marry her, it’s Haram!” he says. “She is a Hindu. There are a lot of differences between them and us, understood?” To which Rizvan replies, “No, there’s no difference. Good people, bad people. There’s no other difference.” The 9/11 attacks on the World Trade Center lead to a wave of hatred in the US; particularly towards Muslims, but anyone with brown skin is suspect. His stepson is killed in a racist attack. His wife sends him away, blaming his ethnicity for her son’s death. On his wanderings, he is the target of suspicion, fear, and ridicule, due to his skin color, his creed, his autism – or all of these combined. His donation to a fundraiser is denied, as it is an event “for Christians only”. “Honey, keep it,” he tells the receptionist, “for those who are not Christians in Africa”. He is lodged by a kind Black Georgian woman and her young son, whose older brother has recently died in the war in Iraq. At a memorial service in the village chapel, he is asked to say a eulogy for his stepson. To recap: in a Southern US state, in a Christian church, in a village populated by African-Americans who mourn the deaths of their sons in Iraq, the Muslim Khan eulogizes, in Hindi, his son, a Hindu, who was killed by white Americans because he had a Muslim last name. Get the picture?

Differences are omnipresent in My Name is Khan, and the protagonist is, we are made to believe, in the very best of positions to rise above these differences (without being oblivious to them, however, as one might think), and bring people of all colors and creeds together. To help each other out. To wear their cultural identities with pride, and to stand up against bigotry, prejudice and xenophobia.

In embodying a type of difference that in a way eclipses all other differences, Rizvan imagines a society where differences are respected, and where people are judged according to deeds rather than lineage, skin tone, or religious beliefs. To what extent does this tell a story about autistic people in general? Or even about autism itself, as a social category? I’m looking forward to hearing your thoughts on this. Please comment and share this blog with others who might be interested.

And why not end with this lovely quote:

“The Book Different Minds says that people like us can’t express their emotions in words but we can write them easily. I can fill thousands of pages, millions of times with ‘I love you Mandira’. But not once could I say it to you. Perhaps that’s why you are angry with me … meanwhile whenever I have time, I will write all that I couldn’t say to you. And then, you will love me again. Insha’Allah.”

Mary and Max

The sad and beautiful tale of Mary and Max is one of my favorite films of all time.

On two opposite sides of the planet, a lonely little Australian girl and a lonely middle-aged obese New Yorker become friends. Mary and Max are both made of plasticine, yet they’re two of the realest people ever to appear on screen.

From Mary’s poo-colored birthmark to Max’s chocolate hot-dog recipe, from Ethel the rooster to Henry(s) the fish, from Ive’s painted eyebrows to Damien’s stutter, from egg-laying rabbis to babies found in beer glasses, from Vera’s cooking sherry to Dr. Hazalhof’s obsession with warts, and from bird taxidermies to jars full of toenail clippings – every single scene in this movie is a little miracle of compassion and nuance, a portrait of humanity at its simultaneous highest peaks and lowest crevices.

Shades of brown and tan are gently sprinkled with reds and pinks, empty shelves are decorated with toys, bare walls are adorned with drawings, expressionless faces are made to smile, while the lonely and potentially grim existence of a sad little girl and an anxious middle-aged Aspie is being filled with excitement, chocolate, pets, and friendship.

I could probably go on like this forever, counting the infinite number of ways this film touched me, but what would be the point? Suffice it to thank Adam Elliot for making us this modest masterpiece, and to urge whoever hasn’t watched Mary and Max not to waste another moment.

The emotional texture of Mary and Max’s existence is so rich that one barely manages to take a deep breath between gently laid brush strokes of sadness and courage, loneliness and hope, despair and longing, fear and love. It is this vivid emotional landscape that inspired me to finally attempt a discussion of what stands at the core of my research: emotions and their meaning, particularly in the case of autistic people.

*

Confuzzled. Apparently just another made-up word (alongside snirt, the combination of snow and dirt, and smushables, the groceries found squashed at the bottom of the grocery bag), Max’s neologism reveals a lot about the nature of human emotions and the words we have for them. A combination of confused and puzzled, it even says something about the inherent limitedness of our emotional lexicon, whereby the words we have to describe our emotions are often insufficient. This limitedness is particularly consequential, I dare suggest, in the lives of autistic people. But let us start from the beginning. Brace yourselves; discussing emotions is always an arduous task.

Many social anthropologists have wondered about the nature of human emotions. Are they universal? Do people of all cultures share exactly the same emotions? Are we all born with a capacity to experience emotions in similar ways? Do the words we use to describe our emotional states accurately reflect what we actually feel inside? Based on an extensive reading of anthropological theories, I will answer all of these questions with a hesitant ‘no’. Emotions, according to such theorists as Catherine Lutz, Unni Wikan, and Sara Ahmed, to name a few, are not a property of the individual. They are not internal. Our emotional terms refer not to distinct ‘things’ within us, but rather to the nature of a specific relationship between a person and another person, between a person and an object, or even between a person and an idea, at a given moment. Emotions are always directed at something or other, and it is in this directedness that they lie. Emotions are the stuff of which connections are made. In this sense, emotions are relational.

Moreover, while all humans are born with the innate capability of being affected by their environment, their company, and even their own thoughts, this capability is not what is usually referred to when emotions are talked about. Rather, emotions refer to the cultural and linguistic categorization of these affects, the connotations they raise, and the value judgments they are given (good or bad? pleasant or unpleasant? moral or immoral?). Emotions are the afterthought of the affective, the visceral, even the somatic; an afterthought that is inevitably framed in culture and limited by language. In this sense, emotions are socially constructed.

Similarly, seeing as humans are products of their upbringing, of the language they speak, and of the social, historical, and cultural context in which they live, our only available means of making any kind of sense of what we think and feel is the vocabulary handed down to us by our parents, teachers, friends, the media, and so on. One cannot interpret what one cannot name. People of different cultures, therefore, or of different historical times, will have quite different ways of discussing their emotions; that is, they will experience their emotions differently. In this sense, emotions are culturally specific.

Finally, emotions are only ever invoked in context. Sadness, or hope, do not lie within us waiting to surface; instead, emotional terms are called upon when events, occurrences, relationships, and evaluations of a certain kind occur. Emotions are thus always specific, and no two are alike, despite the limited vocabulary we have, whereby fear, for example, can refer to a great many different kinds of feelings, effectively lumping them crudely into one distinct ‘emotion’. When we think of emotions, when we articulate them, they are there. But when we forget about them, they simply cease to be. They are gone. When we are reminded of them again, they are altered, changed, adapted to their new context, this time as the objects at which our new emotion is directed. And so on and so forth. In this sense, emotions are emergent.

*

“Max knew nothing about love,” we are told; “it was as foreign to him as scuba-diving … He felt love, but couldn’t articulate it. Its logic was as foreign to him as… as a salad sandwich.” Is this a sentiment many on the autistic spectrum share? It has been my impression that yes, indeed, many autistic people frustratingly feel that love is too confusing to express or interpret. We all feel this at times, I would venture, but possibly not nearly as frequently as autistics do, and to a significantly different extent.

“He felt love, but couldn’t articulate it”. But if that were to be the case, how would Max know that he did indeed feel love? What would be the nature of a love unarticulated, and how would one recognize it as such? Emotions, I and anthropologists before me argue, are never independent of their articulation. In fact, it is the very articulation we speak of when we speak of any specific emotion. What is love, if it is not the loving words, the loving embrace, or the loving gaze; if it is not the motivation to act in certain ways, to think particular thoughts, or to see things in a certain light? Articulation, clearly, is not limited to words. There are various means of articulating love; and seeing as no two emotions are ever identical, articulations of love are potentially infinitely varied.

“He felt love, but couldn’t articulate it”. So what are we to make of this statement, given that it contradicts, in a meaningful way, what we take emotions to be?

Suppose Max felt a confusing mixture of thoughts and physical sensations of a particular kind: energizing him with great valence, arousing positive connotations and affectionate memories, warping his perspective into a good-natured acceptance of things, like looking through the eyepiece of a camera while its lens gradually focuses on a patch of colorful flowers – all while having no idea that this very concoction can be said to be ‘love’. Was it in fact love that he felt?

The unsophisticated and disappointingly straightforward answer would presumably be no. Love exists only when love is spoken of; hence, Max did not feel love. But wait: love was spoken of, by the narrator, in retrospect. So in this case, Max’s sensations can be said to have been feelings of love. But what is the role of the narrator in Max’s life? None. The narrator is part of our perspective on Max’s life, not of his. Max is ignorant of the existence of any such narrator telling his story and articulating his emotions for us, in ways Max cannot. In Max’s life, love was never explicitly expressed.

But if it wasn’t expressed, it was certainly articulated! We see it being articulated in so many ways: in Max’s excitement upon receiving a letter from his friend Mary; in his concern for her well-being; in his interest in all aspects of her life; in his advice for her, his loyalty to her, and his kindness towards her. We see it in his forcing himself to smile for her sake, and in his using her own tears to make himself cry. Even in his rage and disappointment when he feels she has betrayed him. These are all, unquestionably, beautiful articulations of love! Must we discard them merely because the word ‘love’ is not explicitly uttered by either party? Simply because Max may be unaware that this – this precisely – is what people speak of when they speak of love? Must Max be robbed of having experienced love merely because he was oblivious to the love he was indeed experiencing?

It doesn’t feel right. It doesn’t seem fair.

So let us pursue a different option: Max loved Mary, and articulated this love in his letters, and in this very articulation his love can be said to have existed. Love is relational – and in the relation between Max and Mary there was love. Even more, the relationship between Max and Mary can be said to have been made of love, love being the proverbial stuff of which their connection was made. Love is emergent, so in every new letter, every bar of chocolate, every drawing of a pet, brand new shades of ‘love’ arose and expanded. With this perspective in mind, we can say love is the central theme, the driving force, of Mary and Max’s tale. Love is everywhere! And yet confusingly, frustratingly, we are told that Max “felt love, but couldn’t articulate it”. Couldn’t articulate it? That’s all he ever does!

It appears the logical conclusion would be to argue that Max indeed felt love, and indeed articulated it brilliantly, but was simply unaware that he was.

How can we make sense of this statement?

*

Love, like any such distinct emotion term, is socially constructed. But this is not to say that it is made up, or in any way unreal. Not even remotely. Think of a building. A building is obviously constructed, but that does not mean it is imaginary, or in any way shabby, short-lived, or inconsequential. It does imply, however, that it has not been in its current shape forever, and might not have been the same had circumstances been different. A Thai pagoda is not similar to a Gothic cathedral, though both are made of stone bricks. Moreover, the endurance of any construction ultimately depends on how well it is constructed; a well-constructed building can stand for millennia, particularly if it is made from quality bricks. And the precise nature, use, and overall shape of a construction depend on the historical and cultural context in which it was made, and in which it is currently being used.

If love is a construction, what is it constructed of? What are its bricks? The essentially human capacity to be affected in significant ways by one’s surroundings. Sounds abstract? It is. Strip love of its social, cultural, and historical significance, and you’re left with a strong feeling perhaps, but a feeling so vague that it is no longer recognizable or articulable.

If love is a construction, who constructed love? Generations of poets, authors, philosophers, theologians, scientists, readers, interpreters, parents, friends, lovers. Each employing the notions of their predecessors while adding their own ideas and experiences to articulate love in novel ways, which then subsequently accompany the concept of love further along.

And importantly, if love is a construction, what are its blueprints and designs? What is its architecture? That would be the way love is framed, categorized into kinds, interpreted, and made sense of; the way it is valued and revered, glorified but also feared; the connotations it raises, the cultural references it builds on, the way it is typically exhibited, expressed, verbalized, and even experienced!

Stone is inevitable. But it can take the shape of a building in infinitely various ways. Similarly, our capacity to be affected is inseparable from our humanity. It is, also, inevitable. But this capacity can take the form of emotion in infinitely various ways. That’s what is emphasized when it is said that love is a social construction.

*

So where does this leave us? I suggested that Max was indeed feeling love, and even articulating it brilliantly, though he was unaware that he was doing either. Now that we have conceptualized love as a social construction, or in other words, as the result of a collective social project, we may begin to understand why Max wasn’t aware that his relationship with Mary would normally be referred to as love; why he wasn’t conscious of the fact that he was articulating love; and why the language of love was said to be foreign to him. Being autistic, Max may have lacked what can be called ‘social intuition’: the capacity to effortlessly internalize such profound social discourses as gender roles, sociality, or indeed ‘emotion talk’.

In other words, seeing as love derives its meaning collectively, through the inherently social practice of language (verbal, written, or extra-lingual), one can be expected to be confused by it if one generally finds it challenging to intuitively understand other types of social practices.

“Its logic was as foreign to him as… as a salad sandwich,” we are told of Max’s puzzlement over matters of love. So what is it about Max being autistic that created this gap between his feeling love and his expressed inability to articulate love? Emotions, it was said, are social projects, inter-subjective endeavors, where a term is infused with meaning that is then negotiated to the point of mutual agreement. When somebody says “I love,” they are not simply expressing outwardly a strictly internal ‘thing’. No; instead, by uttering the word love, they infuse this utterance with a history of social connotations, with a world of cultural significations; they infuse it with great meaning. This much is, in some way or another, intuitive for neurotypicals, which is why love, or any emotion for that matter, is indeed never really straightforward, but still relatively understandable. Neurotypicals are generally comfortable treading the murky waters of emotion talk. But not autistics, for whom this murk often proves too opaque and impervious.

Max was not aware that love can be articulated by giving thoughtful advice, by placing a gift pompom on top of one’s yarmulke, or by sharing a favorite recipe with a friend. Max did love Mary; but unfortunately, seeing as the meaning of ‘love’, in its typical use, is framed and indeed ‘coded’ by neurotypicals, its complex and nuanced meaning was lost on him.

I’ll end with a couple of quotes from Mary and Max that I simply adore:

(1)    “I cannot express myself very clearly at this point, and so I will list my emotions, in the order they feel most intense: hurt, confuzzledness, betrayal, discomfort, distress, and wheeziness.”

(2)    “When I received your book the emotions inside my brain felt like they were in a tumble dryer, smashing into each other. The hurt felt like when I accidentally stapled my lips together. The reason that I forgive you is because you are not perfect. You are imperfect, and so am I. All humans are imperfect.”

Mozart and the Whale

I was never a big fan of the romantic comedy genre. While these movies are often admittedly entertaining, they always leave me feeling a bit cheated. I do enjoy myself; I laugh, I get emotional, I sympathize with the characters, and I’m often completely engaged with the protagonists’ relationship, rooting for its success. But then, as the movie draws to its inevitable happy ending, I start to question the credibility of the whole thing. I start to reflect critically, and usually realize that the story was a bit too… much; the breakups too dramatic, the gestures too grand, the female protagonist too stereotypically cute, the male protagonist too stereotypically charming, and their love way too unreasonable. I then regret actually having been made to feel something, wasting sadness or sympathy on such an obvious falsity. I even feel I was cheated into laughing – odd as that may sound – because (except for in the first 20 minutes or so, where the jokes are usually quite good) it’s always that kind of laughter that stems from being surprised while overly emotional; like in Pretty Woman when Julia Roberts bursts into laughter as Richard Gere hands her the diamond necklace and then shuts the box abruptly. It’s not really funny – unless you’re four – but you laugh anyway. “What did I go through all that for?” I think angrily. What did I gain? I feel duped! Give me back my 1 hour and 45 minutes! And more importantly, give me back my cynical faith in the inherently pessimistic and sarcastic nature of the human spirit.

Well, that was pretty much how I felt watching Mozart and the Whale.

But I have to say, that’s not all bad. See, I’ve been learning about autism for a while now; reading personal memoirs, papers, and books in the humanities and social sciences as well as studies in the natural and medical sciences; reading articles in popular media, and watching documentaries and lectures – as well as movies and series. And autism, I found, is almost always talked about with an earnest austerity, as if it necessarily warrants a very grave face, the one you would wear at a funeral. (Example: recently I applied for a research grant for an ethnographic study on emotional experiences in autism. The reviewer rejected my proposal for what I thought were entirely unfounded reasons, ultimately suggesting that I might modify my research to focus on ‘distress’ as a main theme. Why distress, I wondered? Why not love, pride, aspiration, or attraction? Why assume that the only way to talk about autism is through negative prisms?) Yet at the same time I also know autistic people, and our conversations are rarely austere. We share laughs, enlightening discussions, as well as mind-numbing chit-chat (alright, not so many of the latter). Sure, sometimes difficult experiences are talked about, where earnestness is indeed in order. But it’s not my impression that that’s the norm. It is not the only way to talk about autism.

Anyway, I found watching Mozart and the Whale to be kind of a multi-layered experience. In the beginning, I enjoyed it tremendously; the writing is good (in that pristine yet banal sort of way), the acting is pretty great, and the whole thing is done really professionally – as romantic comedies usually are. So I laughed, I got emotional, I sympathized with the characters, and rooted for them. Then, as the movie was drawing to a close, I started getting a bit angry, as it dawned on me – like it always does in this genre – that I was being duped. Why did they break up? I understand they had a fight, fair enough, but to actually break up over a thing like this? (This applies to each of the three times they broke up.) Why did Izzy get so mad when Donald freaked out after she rearranged his apartment? She did this without asking him, and he’s obviously into order and routine. How about some understanding on her part? And him, how about some self-respect? This girl plays with his feelings like a marionette. I’m not saying forget about her, because she’s really cute and all and he obviously loves her, but how about calling her out on her overreacting whenever he does the slightest thing to upset her? How about these two having a CONVERSATION? You know, actually TALKING about what’s bothering them? That’s kind of a major thing in relationships. (For a fantastically angry and eloquently written review of the film, check out this post by Caroline Narby.)

But then I was overtaken by yet another feeling, and that’s the one I’m still carrying now: it’s actually pretty cool that a movie where all the characters are autistic, and where autism is the main theme (well, aside from love; it is a romantic comedy after all), is made as just another movie in a genre. Not trying to educate, not trying to scare, not trying to draw a meaningful lesson on human nature, not even trying to promote tolerance or acceptance (not that there’s anything wrong with that); just another romantic comedy, where the characters happen to be autistic. Obviously, as far as representation of autism goes, it’s got its flaws (again, check out Caroline’s review). But still, this movie makes you want to put politics aside and just say, “Hey, I just moderately enjoyed a mediocre movie about autism, without it antagonizing me by making a huge deal out of itself.” That’s a nice accomplishment, I think. Maybe it even does teach a valuable lesson: sometimes autism is not the most important thing in autistic people’s lives. Sometimes it’s just something some people have to live with, while busy with work, errands, family, and relationships; in short, while occupied with living their lives.

Anyway,

I want to talk about one specific event that happens to Donald and Izzy. When Donald’s boss comes over to their house for dinner, Donald asks Izzy if everything could be ‘nice’. Isabelle interprets this (probably justifiably so) as a request that they act ‘normal’, and that makes her upset. She responds by acting – well, I’m not sure how to describe her behavior, as she’s not being herself, but not what you would stereotypically refer to as ‘normal’ either. She basically goes out of her way to make both Donald and his boss feel uncomfortable. Which I found quite rude, seeing as he did ask her nicely and this dinner means a lot to him. Why not just tell him she was hurt by this request and that she refuses to change her behavior for his boss? That’s a very legitimate stance. Instead, she chose to act all awkward and bizarre. Anyway, that’s not my point. My point is that this scene, I think, offers an opportunity to reflect on what seems to be a very dominant theme in the lives of many autistic people: the notion of normality, and the demand that autistic people either embrace (mimic) it, or reject it, or do both alternately.

An important concept that will help us discuss this matter is ‘discourse’. Basically, discourse refers to any loosely connected cluster of texts, written or spoken, that subscribe to a certain attitude or perspective towards a particular field or domain. For example, you have the gourmet discourse, where unique and delicate tastes, textures and smells are glorified; then you have the health discourse, with its emphasis on nutritious consumption, breaking food down to its dietary elements; and you have the vegetarian discourse, which infuses food-talk with notions of morality and conscience. Clearly, these aren’t the only discourses on food, nor are they mutually exclusive. In this sense, bringing up ‘discourse’ is more of a working tool than it is in any way an objective descriptor of reality.

Similarly, there are different discourses on autism. Again, these are not mutually exclusive, and are very far from static; instead they are fluid, dynamic, and contested; reality is never so simple that it can be accurately sketched with such ease. But this sketch is still useful for appreciating the various influences that affect our understanding of things. Broadly put, there are two main discourses on autism, which are mostly in opposition to one another. On one side is the biomedical discourse, otherwise referred to as the deficit model, which views autism as essentially a disease, impairment, or disability. It seeks to find the causes of autism; trace its genetic, neurological, or cognitive mechanisms; find a cure or treatment; subdue autism; alleviate its symptoms; and ultimately normalize autistic individuals. Medical professionals, as well as researchers in genetics, neuroscience, or psychology, are usually prone to this kind of discourse (also promoted by various types of organizations), which they, in turn, reproduce and reify.

On the other side is what’s usually referred to as the neurodiversity discourse, or the social model. In this discourse, autism is not seen as an impairment but as a difference, a form of human diversity. Autistic traits (the equivalent of what the biomedical discourse refers to as symptoms) are considered ontologically inseparable from the autistic person, and thus there is no desire to be rid of them, lest the person simply cease to be. This discourse emphasizes the role of society and its institutions in disabling autistic people: by marginalizing, silencing, and othering them; by punishing atypical behavior; and by glorifying normality while devaluing difference. According to the social model, society should instead be more accepting and tolerant towards all forms of diversity, including autism.

So you can see how normality is a central sticking point in discussions about autism. The very acceptance of such a thing as ‘normality’ – as an absolute and a positive – will most likely lead one to accept the biomedical discourse. Alternatively, leaning towards the neurodiversity model will unavoidably drive one to reconsider what ‘normal’ might actually mean, to the extent of arguing that there is really no such thing – or at the very least that there shouldn’t be.

Except, of course, there is such a thing as ‘normal’, because we all use it. We all know what the word means. So the question to ask might not be ‘does normality really exist?’ but rather ‘what sort of power dynamics do current notions of normality reflect?’ What ideology is served when ‘typicality’ is rearticulated as ‘normality’? What institutions will have trouble justifying themselves once ‘normality’ is readjusted to include autistic behavior? And how are people’s lives affected by the idea that normal=good, and abnormal=bad?

You might think that normality always existed. After all, what’s more natural and inevitable than drawing a line separating what’s normal and what’s abnormal? But this awesome graph illustrates just how recent, and therefore contingent and far from inevitable, our obsession with normality really is. The proportionate use of the words normalcy and normality has increased fivefold in literature over the last century.

Assuming that the possibilities for ‘being’ are infinitely varied, discourses constitute our best tool for tracing the influences that made people ‘be’ the way they ‘are’. They are a handy substitute for the still popular concept of ‘culture’, whereby it is often argued that it is culture that made you the way you are. Not that this statement is entirely wrong; it’s just terribly inaccurate. ‘Cultures’ are not homogeneous; within any given culture there are hundreds or thousands of different available discourses to which one can subscribe; usually people subscribe to the discourses most relevant to them in terms of age, gender, religion, ethnicity, social status, education, geography, and so on. Yet importantly, one very often has to make a choice – either conscious or unconscious – between different or even opposing discourses. Another alternative is to mix different discourses, taking just a bit of each.

With regards to autism, then, the choice is not so much between normality and abnormality; instead, as the medical anthropologist Nancy Bagatell suggested in her 2007 article, being autistic involves a constant struggle to orchestrate two opposing discourses: the biomedical and the neurodiversity discourse. Are my traits merely a form of difference, or are they symptoms to be subdued? If I wish to be alone, is that a valid choice, or must I urge myself to seek company? In difficult times, should I fantasize about a cure for autism, or about a more just, tolerant, and accepting society? And mostly – should I put my best efforts into attempting to pass as what others consider ‘normal’?

The conflict between Izzy and Donald is a marvelous example of this dilemma, primarily because the filmmakers do not suggest that any of these choices is necessarily right or wrong. Donald has a job he loves, and therefore feels the need to impress his boss; this requires (presumably) that he and his girlfriend behave appropriately at dinner. Izzy, on the other hand, feels that her only way of getting by in the world is by not hiding her eccentricities, but instead making them work to her advantage. Importantly, as the movie demonstrates, this is more than just an idle, inconsequential decision; it affects relationships, employment, and social status. We, perhaps unfortunately, live in a world where appearances matter. I would think that the choice of whether to act normally or not is never an easy one. Bagatell indeed suggests that this necessity to constantly orchestrate these different ‘voices’ in choosing who to ‘be’ is itself a source of much stress and anxiety in the lives of autistic people.

But on the bright side, seeing as what we’re really talking about is not ‘normal’ vs. ‘abnormal’, but rather this discourse vs. that discourse, there is room for change. Currently, the dominant discourse has it that there is just one way of acting that is considered normal and appropriate; but like any other discourse, this one is susceptible to change. Hopefully, the day when ‘autistic behavior’ no longer falls outside the remit of the norm – or otherwise, the day when normality per se ceases to be a value in itself – might not be too far ahead.

What do you think about all this?

P.S. Did anyone else think the scene where they all go to the bleachers, boys on one side and girls on the other, is a homage to that famous scene in Grease?

References:

  1. Bagatell, Nancy. “Orchestrating Voices: Autism, Identity and the Power of Discourse.” Disability & Society 22, no. 4 (2007): 413–426.

Snow Cake

There are a couple of things I didn’t want this blog to be about, though I knew steering away from them would take some conscious effort on my part. First, I never intended for this blog to be a film review. I’m not particularly interested in discussing how good a film is, the quality of its writing or acting, the choices made by the director, or its production value. There is quite enough of that already, and I don’t feel especially qualified to talk about these sorts of topics anyway. I like movies a lot, and then sometimes I see a movie that I categorically don’t like. I can probably analyze why this is, but to be honest, I prefer to just read a well-written review by someone else to set my mind at ease, and think, “Ah, yes. That’s what I didn’t like about this movie!” It’s not the most original way to reflect on a film, but why waste my energy articulating a position that’s already been articulated by others?

The second thing I didn’t want to focus on, for quite different reasons, is the representation of autism in film. See, representations matter; they play a significant role in shaping our view of reality – indeed, in shaping reality itself. When members of certain social categories (e.g., Black, gay, Jewish, or autistic people) are routinely represented in a certain way, this creates stereotypes, which in turn lead to stigma. In other words, negative, wrong, or even just overly simplistic portrayals of autism in cinema lead people to hold false beliefs about autism. This has real-world effects, as autistic people inevitably have to deal with these false beliefs on a day-to-day basis. At the very least, this involves having to repeatedly correct people’s preconceived notions about them and their condition. At worst, it prevents them from finding employment, acquiring an education, accessing services, or forming relationships. Representations matter.

So why don’t I want to write about the representation of autism in film? Two reasons. First, other people do it better than I can; in fact, some do it brilliantly. Look up Stuart Murray for a good example. Murray is neurotypical, but there are also many autistic bloggers online who have a justifiable bone to pick with depictions of autism in movies; many of these blog posts are sharp, accurate, brilliantly articulated, and sometimes angry as hell. My input in that field is just not really needed. The second reason I don’t want to write about representation of autism in movies is that it would just deflect me from my real purpose, which is to use movies where autistic characters are depicted to talk about other social and cultural aspects of autism, as understood by various social scientists, including myself. In order to do that, I actually want to suspend my disbelief and take the on-screen portrayal as at least partly representative of reality, so that we can discuss certain aspects of autism, without being overly general or obscure, through a discussion of specific (albeit fictional) people and events.

So why am I telling you all this? So that you can appreciate my difficulty in discussing the film I just finished watching, Snow Cake. I had a hell of a hard time suspending my disbelief on this one, as every little thing in this movie just shouted at me from the screen: “This is fake! And poorly executed! And a terrible representation of an autistic woman, who appears to lack any depth, life experience, or ability to experience grief or come to terms with her own emotions!” Ehm. So you see my problem? Having a serious discussion about fictional people and events is fine. That’s what this series is all about. But having a serious discussion about fictional people and events which I don’t find believable, and which I think devalue autistic people, is quite another. But perhaps I’m being overly critical. After all, this movie does offer some food for thought. Let’s give it a shot. Meanwhile, for a spot-on review of both the movie’s quality and its treatment of autism, read this excellent post from Bitch Magazine by Caroline Narby.

Linda (Sigourney Weaver), presumably in her 40s, lives with her teenage daughter in the township of Wawa in the Canadian province of Ontario. One day a car accident kills her daughter. If I seem to report this devastating event offhandedly, it’s only because this is pretty much how the filmmakers chose to portray this tragedy. We never see Linda receive the news of her daughter’s death, but instead we encounter her for the first time a few hours later, when Alex (played by Alan Rickman), who was driving the car at the time of the accident, shows up at her doorstep to explain and apologize. Amazingly, Linda appears rather unaffected by the news. She’s not apathetic, though; she displays anger (at her intruding neighbour), joy (at the toys which her daughter had bought for her minutes before the accident), and distress (at Alex’s wet clothes which threaten to contaminate her carpet), but no sorrow, grief, or sadness. One could say that such an unimaginably tragic occurrence as losing one’s only child takes time to process. That’s possibly true; except during the 1 hour and 47 minutes of the film, Linda barely displays any emotion whatsoever with regards to her daughter dying. Pretty much the only time she expresses any negative reaction at all to Vivienne’s passing is when she refers to her, in the past tense, as “useful”. She was “useful” to her. Her only child, who we later learn was her best friend, the one person who truly accepted her, who played and danced with her, who shared stories and snowmen and made-up words with her, was “useful”. Oh, and Linda is autistic. As if that explains it.

But I digress. I actually don’t want to focus on grief at all (mainly because, as Caroline Narby observed, Linda’s character never gets a chance to grieve; instead, the grieving is reserved for Alex, who barely knew Vivienne!), but on another, less explicit theme in the movie. Minutes after they first meet, and her autistic traits become obvious to him, Alex worriedly asks Linda, “Are you alone here? … is anybody… I can stay with you for a couple of days.” Linda’s ability (or inability) to live independently is recurrently referred to throughout the film. I already mentioned Alex’s amazed reaction to her living alone. Her next-door neighbor Maggie expresses her own concern: “She needs other people to do things for her, if not necessarily with her. Vivienne did all the boring stuff, what’s she going to do now?” Linda’s father mentions, unprompted, that he never liked her house and never felt at home there. “Linda always liked it,” he says. “We bought it for her because she wanted to be independent.” In one of the very last scenes, while Linda is enjoying the snow cake Alex has made for her (any ideas what this snow cake symbolizes? I couldn’t quite figure it out myself), Maggie, following through on her concern, takes out Linda’s trash.

Independence is a problematic and laden concept. In general discourse, the ability to live independently is valued as proof of a person’s general competence; it connotes worth, self-definition, importance, even meaning. Independence allegedly differentiates child from adult; burden from contributor; disabled from able-bodied. In autism discourse (among other discourses), independence is often equated with what’s come to be known as ‘functioning’, so that someone would be considered high-functioning if they demonstrate an ability to live independently, while those who allegedly can’t would be labeled ‘low-functioning’. The concept of independence in this regard has meaningful and tangible implications. It influences policy: resource allocation, housing benefits, education programs, service provision, and so on. In the more abstract and yet equally significant spheres of life, demonstration of independence is often a precondition for respect, for equal and humane treatment, for recognition of personhood.

But what does independence mean, anyway? What is “living independently”? Does anyone truly live in-dependently? And is that really what we should be striving for so passionately? Independence stands for not being dependent on anyone. I’m neurotypical, and I certainly do depend on other people daily, constantly, and I wouldn’t have it any other way. So yeah, I take my own trash out, but my wife doesn’t. She hates it. She wouldn’t freak out if she had to do it herself, but she definitely prefers that I do it. Bad example? Alright. I hate fixing the toilet. I hate the chemicals, the smell, the – well – stuff that clogged it in the first place. Also, I don’t know the first thing about plumbing, I’ll just mess it up if I try. I depend on other people to do that for me. Sometimes I have to pay for it. Sometimes I ask a friend to do it as a favor.  Does that make me less independent? Well yes, probably, but so what? We all have those things.

We rely on other people to help us live our lives. If I had to butcher my own cow to make dinner (or grow my own beans, if you prefer), I would freak out! Different people have different skills and different preferences; that’s why we have division of labor. That’s how society works. I was never referred to as any less independent because I can’t raise my own lunch, and I was never complimented on my impressive independence when I took out the trash.

But that seems to change when disability is concerned.

Because people need to differentiate. We (as in members of society) need to determine the boundaries of what’s normal. Those boundaries are then naturalized and presented as fact. Namely: some dependence is OK, but too much dependence isn’t. Rephrase: ‘just some’ dependence is renamed ‘independence’, while ‘too much’ dependence is termed lack of independence. Now it’s all sorted. But how arbitrary is this?

Taking a step back – obviously some people are more dependent than others. My wife’s reluctance to take out the trash is nothing like Linda’s anxiety. People are not the same; they are infinitely different. Autistic people, by and large, need more assistance in various aspects of life than many neurotypicals do. And of course, among those on the autism spectrum, some are more dependent, while others are less. I’m not arguing with that – I think acknowledging this is a fundamental part of promoting acceptance. What I am arguing with is the arbitrary dichotomous division between independent (healthy / able-bodied / adult / typically male) and not independent (sick / disabled / child / typically female). This dichotomy does nothing but reproduce power dynamics, in which those who are apparently independent must assume responsibility for those who aren’t. Case in point: Maggie, driven by the very best of intentions, takes out Linda’s trash, which was traditionally Vivienne’s role. But Maggie never asks Linda if that is something she would have wanted. In helping her (and in a pretty rude manner, I have to say, not even saying hello as she enters the house uninvited), she creates a reality in which she (Maggie) is the powerful decision-maker, while Linda is the weak receiver of assistance. Similarly, Alex decides to stay with Linda to “take care of her,” without her ever asking! Is that really a legitimate choice on his part?

Friends can and do often help each other; they should help each other; but without resorting to this kind of power dynamic, which arises when one party in the relationship is regarded as less than independent.

Rosetti and colleagues (2008), following an ethnographic study that included participant observation and interviews with autistic individuals who type to communicate, made a similar point. These authors challenged notions of agency, independence, and capability in autism discourse, arguing that all people have agency and capability, although they might require very particular conditions to express their agency (by which they mean “the opportunity to initiate a topic or agenda, participate in a dialogue, move a conversation in a particular direction, interpret others, affect the person with whom one is in dialogue, make a point, interact as a peer, and be seen as a person with ideas to contribute and a personality to inject into the conversation” [2008:365]). Independence does not exist, they argued, since everyone is always interdependent. It follows that incapability is never a quality of an individual, but always an outcome of environment and context.

What would change in Snow Cake had Linda merely been considered interdependent rather than (not entirely) independent? For one thing, instead of offering to stay with her in order to ‘take care’ of her, Alex would probably have inquired about what precisely Linda might need help with in her day-to-day life; since it’s not much (presumably just taking out the trash), he could have easily stayed at a hotel. Linda’s father may not have been so surprised and resentful that his adult daughter wants to live alone; perhaps accepting this would have helped him like her home, and actually enjoy spending time there. We don’t know why Vivienne went to live with her grandparents as a kid; we’re never given that information. I suppose someone decided Linda couldn’t take care of a child all by herself; fair enough (though, like I mentioned before, no one ever does anything 100% alone. That’s why we have nannies, teachers, social workers, food deliverers, therapists, doctors, and nurses). But how many alternatives were considered before the decision was made to simply separate mother from daughter? How would a notion of interdependence affect that decision-making process? Actually, a very nice example of portraying interdependence in the film is when Linda asks Alex to come with her to identify her daughter’s body. That was, I thought, the most touching and convincing scene in the film, because yes – people can ask their friends for help; that’s absolutely fair and legitimate, especially in such hard times. Her independence would not be affected in the slightest.

Language shapes reality; it can divide people, reproduce damaging power structures, and hinder positive development. But it can also lead to positive change, help growth, and unite people. This is basically just another claim for the thoughtful use of labels.

What do you think? Let’s talk about this some more. Please share and comment.

Lars and the Real Girl

In a peaceful Northern U.S. town, Lars lives in the garage of the house where his older brother and pregnant sister-in-law live. Generally speaking, he lives a rather unremarkable life. Karin (the sister-in-law) describes him on one occasion as “a sweetheart.” He really is. And yet, something about him does stand out. For one, he doesn’t go out much, except to work and church. He continually rejects his relatives’ breakfast invitations, using whatever excuse comes to mind. He is oblivious to his attractive female co-worker’s advances, or maybe he just acts as if he is. He’s shy, uncomfortable in social situations, and an all-around goodhearted guy.

His unremarkability is called into question when, one snowy afternoon, he introduces to his bemused brother and sister-in-law his new girlfriend Bianca: an anatomically correct, human-size, marginally life-like doll he ordered from the internet and which arrived in a crate. To him, Bianca is 100% real. He has a delusion, his doctor/psychologist says.

I don’t know if Lars is autistic, and I’m in no position to say (who is?). To be honest, we are offered another perfectly valid explanation for his idiosyncrasies. He lost his mother at birth; he was raised by a depressed and anti-social dad (who may or may not have blamed him for his mother’s passing); his brother had admittedly run away as soon as he possibly could, leaving Lars and his dad behind, only to return after the latter’s demise. That sort of deprived childhood is likely to leave its mark on any person, autistic or not. But then there is his heightened sensitivity to touch, whereby he describes being touched as inducing an excruciating, burn-like sensation; which possibly means his is not strictly a psychogenic disturbance. Autism does not usually involve delusions (although according to these guys http://aut.sagepub.com/content/9/5/515.short, it often does – what do you think?). But just because someone is delusional does not mean they weren’t also autistic to begin with.

Anyway, having made a huge deal in my previous post about The Bridge about autism not existing unless it is named, I realize this whole discussion is a bit redundant. What is the point of trying to diagnose a fictional character? It’s just speculation. I guess what I’m trying to say is: I’m not too concerned about whether Lars is autistic or not. But I do want to talk about the movie. So for the sake of argument, say he is autistic. Fair enough?

Autism is very often thought of in terms of a social disability, social impairment, social deficit; autistic people often describe difficulties in initiating social interaction, maintaining social relationships, interpreting social cues, or adhering to social norms. All these said difficulties have one very obvious thing in common: the use of the term ‘social’ to describe them. But what if we are simply using this term all wrong? What would happen if we just tweaked its meaning a tiny bit, so that it stands for something a little different? What if ‘social’ stopped referring exclusively to interaction between people, and instead referred to interactions in general? Suddenly, we get quite a different picture. Do any of the above claims still hold true?

Of course, this is not my own original idea. If I’m going to make such an outrageous claim, I’m going to have to lean on someone more authoritative than I am. Bruno Latour, the French philosopher and anthropologist, wrote, “I think time has come to modify what is meant by ‘social’” (2005:2). His point was this: ‘social’ is not a thing. It does not denote any kind of substance that is essentially unique, for which a distinct explanatory framework is required, and for which distinct laws apply. The term ‘social’, he argues, is merely a by-product of another – equally meaningless – term: ‘natural’. When physicists, biologists, geologists, and the like started accounting for all sorts of phenomena, they designated them ‘natural’, i.e., deriving from nature, independently of human actions. Sociologists then came along and rightfully started wondering: what about all those types of connections that can’t be explained using Newton’s laws, osmosis, or photosynthesis? How should we account for the types of connections that occur between people, regardless, and equally independently, of said ‘natural’ processes? Such connections were designated ‘social’.

Fair enough. So what’s wrong with that? Well, the problem is that hardly any relationship, simple or complex, is ever purely ‘social’. For example:

Take the relationship between Gus and Karin. Theirs is quite unambiguously a ‘social’ relationship, right? They’re a married couple. Well, no, not unambiguously at all, because her getting pregnant by him was obviously a biological process, involving sperm cells, the uterine tube, the corona radiata, ooplasm, and an acrosome reaction – if any of these fails to function, there is no conception. And seeing as this pregnancy is an extremely meaningful aspect of their relationship… would it have been the same if ‘biology’ hadn’t stepped in to do its part? Their relationship is based at least as much on biological processes as it is on ‘social’ ties. And yes, it relies also on physical, chemical, legal, linguistic, and a million other types of ties. Social is but one of them. That is, if you must insist on ‘social’ having any kind of meaning at all.

My point is that ‘social’ is never exclusive of ‘natural’. Latour is arguing that we don’t need these concepts that do us little good but instead only mislead us. Let’s forget about what’s ‘social’ and what’s ‘natural’; what’s actual and what’s imagined, what’s real and what’s fake; and instead let’s just talk about meaningful connections. Specific, traceable, significant connections.

Joyce Davidson and Mick Smith from Queen’s University in Canada read 45 autobiographical works written by autistic authors, and found that in many of them (approximately half), authors spoke of meaningful connections to non-human (or otherwise more-than-human) objects: animals, trees, flowers, hills, rocks, sand. Sometimes these connections were regarded as easier than interacting with people, hence more desirable. Sometimes they were described as making more sense, having more potential for growth and learning; they were said to be calming, relaxing, rewarding, enjoyable, fun. Davidson and Smith make a great point when they ask: why should we consider these connections any less meaningful than if they were between people? Is it just because old-fashioned humanism would have it that way? Why not take these people’s accounts seriously? If they say these are meaningful connections, why not take their word for it? Latour would similarly say: if the connections are indeed meaningful (except he means in an empirical, non-subjective way, i.e., having a meaningful impact on real-world events), who’s to decide that they aren’t? By all means, let’s trace these connections. Let’s see what we learn!

Whether or not one morally approves of Lars’s somewhat unusual love affair (I admit I was shocked and embarrassed at first – I do literally get embarrassed for characters on film), one has to admit it did him a world of good. It changed Lars. It helped him develop a sense of meaning, self-confidence, self-esteem. It helped him regain trust in people. It made him happy. It made him whole. Bianca did all that! Insofar as she was the catalyst of this series of events, Bianca is an incredibly meaningful character in Lars’s life story – and in the lives of all the townspeople as well. She is an extremely significant actor in Lars’s life. Would this process have happened without her? No, it wouldn’t! Alright, did she MEAN to make it happen? No, of course not, she’s a doll. And yet, doll or no doll, she DID make it happen, through and through. Her relationship with Lars, their connection, made it happen.

Should relationships with non-humans (animals, plants, rocks, or inanimate man-made objects) be ignored? Should they be tolerated? Should they be discouraged? Should they be seen as psychologically perverse, morally questionable, damaging? Should they be seen as allegorical or symbolic? An extension, compensation, replacement? I suppose that by and large, the answer to all these questions is a resounding No. Such relationships should be taken seriously. After all, everyone has them. A favourite shirt, a beloved chair, a childhood toy, a cherished old car… Not to mention a pet. Such relationships should be respected. And when they are respected, maybe we get a somewhat different picture of what autism really is.

What do you think? Do you agree? Have you had similar experiences? Please leave your comments!

P.S. Lars says the exact feeling he gets when someone touches him is like when your feet have frozen in the snow and are then left to thaw too quickly. By the end of the movie, he shakes hands with Margo, gloves off, with great intent. The moral? Before, he was “frozen.” A human touch meant too much, too soon. Thanks to Bianca, he gradually thawed, and a warm touch doesn’t hurt him anymore. That’s nice, I think.

Bron / Broen (The Bridge)

What is it about serial killers that makes them such a popular topic for TV series? I can think of a number of reasons, but I guess the main one is audiences’ proven love of gruesome and meaningless, yet often admirably photogenic, acts of violence, accompanied by a somewhat shallow and sensationalistic exploration of the workings of the psychopathological mind.

Plus, it’s pretty much an ideal plot vehicle; you have the killer on the prowl, the detectives on his tail (the killer is usually male), the innocent unsuspecting victims… It virtually writes itself! Well, except it’s often terribly superficial and silly. I guess that in order to write itself well, a talented team of writers is still a fundamental requirement. Now, while I personally don’t mind the graphic depictions of violence one bit (it’s just decent make-up work as far as I’m concerned), my wife is not a great fan of extreme close-up, ultra-bloody murder scenes. It’s not so much the violence itself that upsets her – after all, some of our all-time favorite series are The Sopranos, The Wire, and Breaking Bad, none of which is in any way shy of violence – but the graphic nature of the serial killings. And I get it. When Tony Soprano or Avon Barksdale kill a gangster, they shoot him and get on with their lives. Serial-killing bad guys on TV tend to have more lengthy engagements with their victims, pre and post mortem. So anyway, while in our household watching a series alone – as in without each other – is considered a breach of confidence, I can get away with it as long as it’s about serial killers. Looking for a new one to watch, after having finished Dexter (loved it at first, then started really hating it, had to finish though), I came across Bron/Broen, or as it was titled in English, The Bridge. (An American/Mexican remake has since come out, as well as, apparently, an English/French one, but this post is about the original show.) What a thrill! A fierce serial killer terrorizing both sides of the Danish-Swedish border, and a joint detective duo, Saga from Malmo and Martin Rohde from Copenhagen, out to get him. It truly is a wonderful series.

Saga Noren is first and foremost an amazing detective; a real bad-ass, but in that good way. Tough, fearless, relentless, bright, meticulous. Also blonde, attractive, and always in leather trousers, she is presumably in her mid-30s, and single. She is well respected by her coworkers, superiors and subordinates alike. She’s mostly well liked, it seems, though people always appear to have trouble figuring out what to say to her, how to react to her. That’s probably why she doesn’t have that many friends. She generally prefers to stick to rules, however unreasonable they may seem at times. She reports her partner for mild violations of protocol. She refuses to let an ambulance pass through a crime scene, despite the fact that it’s carrying a kidney on its way to a transplant surgery. When the rules aren’t written anywhere, that’s when things get really difficult for her. Saga can’t make out what’s wrong with having spent the night with her partner’s 18-year-old son – nor does she realize why her colleague would assume this means they had sex. She does have casual sex with men she meets at a club – always the same club – but she’s not interested in having a drink with them, and she doesn’t really understand why that might make them feel awkward. She’s fascinated by the police profiler, who makes sense of their perpetrator’s actions and behavior using analytical tools. The murderer also fascinates her, with his systematic conduct and obvious intellect.

I absolutely loved The Bridge. I have grown too accustomed to watching American TV series, and I tremendously enjoyed the different atmosphere, different scenery, different nuances (nuances! I had watched so much American TV I forgot those even existed!), different style of writing, acting, even editing. I admired the fact that the show speaks Danish and Swedish, two languages I hardly ever have a chance to hear and, embarrassingly, could not even for a second tell apart. (This romanticization of what is, to millions of people, their everyday languages is obviously amusing to some, but I stand by that comment. Foreign languages just make you feel something.) The writing is absolutely brilliant, and while the realism of the plot could probably be up for debate – I’m certain people have already found a myriad of flaws in it – I found the characters to be as believable as TV characters get. I believed everything that came out of their mouths, whether in Danish or Swedish (I never knew which it was, anyway). But there was one character who stood out. That would be the splendid Saga.

So, does Saga have Asperger’s syndrome? Is she autistic? That question is never answered directly. But as far as the creators of the series are concerned, I would bet yes. I’m certain the writers knowingly intended her character to have Asperger’s, and were constantly conscious of that when writing the show. I also know that the actress who played Saga (the extremely talented Sofia Helin) did some research on Asperger’s in preparation for the role, and acted her part accordingly (and quite beautifully, I thought). But does all that mean Saga has Asperger’s? I honestly don’t know. Let me phrase the question differently: Were Saga a real person, who went to see a real-world clinician to seek a diagnosis for herself – would he or she diagnose her with autism? All signs point to yes. Seeing as the Scandinavian countries have some of the highest autism awareness in the world, if Saga had been a real person, she would probably already hold a diagnosis. So, problem solved: series Saga is an Aspie. Right?

Well, no, I’m still not convinced. Despite everything I just typed, I would actually say that series Saga – i.e., the woman we see on the screen investigating a series of murder cases – probably does not have Asperger’s. Why not? Because in this case, Asperger’s is the tree that falls in the forest when nobody’s around to hear it. To put it simply: if Asperger’s is not mentioned, there is no Asperger’s. In the reality of the show, Saga is an unusual person. She’s socially awkward, she doesn’t get subtle social cues, she’s not overly vocal, she’s bad at small talk, she’s not empathetic in the typical way, she’s quirky and eccentric. That’s all true – so how is she NOT autistic? Well, because autism only exists when there are people around to call it that. And if no one ever does? Then autism simply does not exist. I’ll try to explain.

What’s a Social Construction?

Are weak arms a disability? Of course not, right? I mean, it’s a trait – an impairment, maybe – but it’s definitely not a disability. Well, no – not here and now. But imagine a world where everyone is a blacksmith; there, absolutely, people with weak arms would be quite severely disabled. And if in that world one of the blacksmiths happened to be a doctor, I bet he would diagnose about 1% of the people – those with the weakest arms – as disabled. There would be a label – let’s call it Badsmith Syndrome, on account of them being bad smiths. (There would be self-advocates, however, who would resent the name because it’s degrading. Others would happily adopt the label and take pride in it, making it their own.) The doctor would probably prescribe weekly push-ups. Which many would also resent, because, well, they have weak arms. It’s not their fault, they would say – they were born that way. Why shouldn’t society just accept that? Why change who they are? Indeed.

Mind, these people would not be disabled because of their weak arms. After all, they might be amazing painters, mathematicians, retailers, theologians, engineers, or nurses. In a different time and place, nobody might even have noticed their poor physical strength. They merely have a different set of skills that distinguishes them from the rest of the population. It is the context that makes them disabled (an idea sometimes referred to as “the social model of disability”). Their weak arms are one thing; and while the label attached to them is certainly not unrelated, it does not inevitably derive from them either.

And it’s certainly not irrelevant or inconsequential. The label might make a world of difference in their lives – for better or worse. They might be stigmatized because of it, and assumed to be things they are not (weak of character, lazy, even stupid), but it could also help them organize, support each other, change people’s minds about them, and show the world that they are capable of other things. They have become a group, possibly a community. Thus Badsmith Syndrome becomes a social category as well as a medical one. It carries connotations, invites interpretations, and arouses feelings, in bad smiths and typical smiths alike. So did people with weak arms exist before the label was invented? Of course. But was Badsmith Syndrome a thing? No, it wasn’t.

(More about autism as a social construction in this post about Bartleby the Scrivener, the short story by Herman Melville.)

Is deafness a disability? Quite clearly yes, right? But the thing is, a lot of deaf people wholeheartedly disagree with that statement. Many of them say that the problem is not with them; the fact that they can’t hear is secondary. The problem is that society excludes them by not working out alternatives for deaf people in the public sphere (such as a sign language interpreter at parliament sessions, for example). If everyone in the world could sign (not such a huge deal – it could be taught in schools, right?), deafness would cease to be a disability. It would still be a category, sure, but kind of like homosexuality is a category – or even left-handedness. Both were severe disabilities in the old days. Neither is anymore (and the latter can’t even be said to be a social category today – that’s how far we’ve come).

Another example: Before there were schools – before kids were expected to sit on their butts for hours on end, day in and day out, and listen to a teacher talk about something they couldn’t care less about – was there ADD? Were there kids with attention disorders? No, there weren’t. How come? Because some kids sat and read or listened, while other kids drew, danced, learned to box, wove carpets, or built furniture. Whatever a kid did, they would learn, grow, and improve at what they were doing. Attention deficit disorder only exists because now ALL kids are expected to sit and listen. Some kids find that easy. Some find it very difficult. They do have ADD – I’m not saying that they don’t – but they have it because in our context, in our society today, not being able to sit and listen is a disability (or at least a disorder). So we invented a name for it, devised diagnostic criteria, and started labeling kids with it. Hey, we even found a treatment that works, which is fantastic. But the fact that ADD results from a real neurobiological mechanism does not mean it’s any less of a social construction. Just like weak arms are a real, biological fact. It’s the context that makes weak arms, ADD, deafness, or autism into a disability, rather than a mere difference.

Of course autism is real. That’s not the question. It’s just that if no one names it as abnormal, if no one places it outside the norm, if no one is there to give it a name, study it, investigate it, devise treatments, search for a cure, advocate acceptance, etc. – then no, in that world (not necessarily a hypothetical world – this is still the case in some places) it doesn’t exist. After all, autism is a fairly recent diagnostic category. It was identified by Leo Kanner in 1943, and it took a while to “catch on.” Even for decades after, kids who would today be diagnosed with autism were diagnosed with schizophrenia or retardation, or – as in the case of what we refer to today as Asperger’s syndrome – were often just considered eccentric. Were those diagnoses wrong? No, they were spot on! For the time, that is.

(More about the history of autism in this post, about Temple Grandin)

So Wait, Does Saga Have Asperger’s or Not?

My point is this: We (as a society) acknowledge Asperger’s syndrome as a category. We diagnose people with it and consequently label them. We devise treatments for it, and initiate education programs for people who have it. People with Asperger’s are expected to perform their Asperger’s in a certain way. Some of them oppose that – and come up with their own idea of what Asperger’s means, and how to perform it. They negotiate the category, the treatments, and the expectations. They negotiate the very meaning of the term Asperger’s. Even the question of whether or not one has it depends on where you live, how much money your family has, and other such social factors. The person is the same person with the same traits – I cannot stress that enough. But whether or not someone has Asperger’s ultimately depends not just on the person, but on society as well.

And it’s not only about where you live; it’s also about when you live! Thirty years ago, Saga would not have been diagnosed with Asperger’s anywhere in the world. Today, she wouldn’t be diagnosed with it in many parts of the world. And you know what? In the revised DSM (the Diagnostic and Statistical Manual of Mental Disorders) that came out this past May, Asperger’s was removed as a distinct category. That actually means that, as it currently stands, no one will ever be diagnosed with Asperger’s again!

So Saga has all the symptoms to indicate that in a Western country, at the end of the 20th century or the beginning of the 21st, she would be diagnosed as having Asperger’s. But the way I see it is this: in the reality of the show, unless a fictitious Malmö psychiatrist had been featured who gave her a formal diagnosis, or unless other characters had made reference to such a diagnosis, I prefer to think of Saga as eccentric, quirky, unusual, and brilliant. She has a successful career, she contributes greatly to society, and she’s awesome. Her life can’t be easy, what with her difficulties in figuring out people and relationships, but as long as she has people who care for her, and as long as society is tolerant towards her and accepting of her unique needs, she’s not disabled. She doesn’t necessarily need to be labeled as “having” anything, or even “being” anything, other than her unique self. Does that make her life easier or harder than if the diagnosis existed? I can’t really say.

Do you agree? What do you think? I’m curious to know. Leave a comment or two.
