The Tyranny of the Quantifiable

By Jaron Lanier

1 The Rainstorm Versus The Oil Field

Our era is most fundamentally characterized by its changing technologies.  Whenever and wherever one might hope to influence events by changes in policy or pedagogy, new gadgets are likely to come along that will recast one’s efforts in hard-to-predict ways.

For instance, the introduction of email, chat, and sms (chat over wireless devices) has driven an upsurge in the recreational use of written language among young people whose parents, weaned on television, movies, and the telephone, regarded the task of writing as more of an obligation, dictated by work or school.

In the academy, the sciences and the humanities have both been reformulated by encounters with computers, and in some similar ways.  Software tools have turned some problems that were once treated as irreducible, and therefore best handled by wizened, intuitive individuals, into quantifiable and comprehensible processes.  For instance, there is now considerably less guesswork in drilling for oil.  Computer analyses of oil fields have actually increased the available supply at a time when only an ever-worsening decline had been predicted.

It has also become clear, however, that not all complexities are equally complex.  For instance, we have improved our ability to predict the odds of a new oil well’s success more quickly than we have improved our ability to predict the weather.  By current standards, weather is considered a “chaotic” phenomenon, and we have studied it and other such phenomena well enough to believe that some complexities will remain beyond us, either forever or at least for the foreseeable future.

We will get better at predicting the weather, to be sure, but we will probably never get as good as we would like.  There are multiple reasons for this.  One immediate problem is that computers are still rather low-resolution and slow devices when measured against the demands of weather.  Another is that we can’t measure the state of the atmosphere at any given moment as well as we might like.

For each such immediate limitation in our abilities, there is a more dramatic version in the form of an ultimate question about the nature of computers. For instance, we can ask the ultimate question of whether we can build simulations that summarize the events of the universe well enough that one subset of the universe (arranged to be a computer) could even theoretically simulate a larger subset well enough to predict something like weather.

A second example of an ultimate question is whether it would ever be even theoretically possible to gather enough good real-world data quickly enough to satisfy the demands of the weather simulation of our dreams.  If we can’t give it sufficiently good starting data, it cannot give us the results we hope for.
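To make concrete why imperfect starting data forecloses long-range prediction, here is a minimal sketch in Python.  It is my own illustrative example, not anything drawn from meteorology: the logistic map is a standard toy chaotic system, and the particular starting values and the one-part-in-a-billion measurement error are arbitrary choices.  The point is only that two nearly identical starting states become completely uncorrelated within a few dozen steps.

    # Toy illustration of sensitivity to initial conditions (not a weather model).
    # The logistic map x -> r * x * (1 - x) with r = 4.0 is a standard chaotic example.

    def logistic_step(x, r=4.0):
        return r * x * (1.0 - x)

    x_true = 0.400000000      # the "real" state of the system (arbitrary choice)
    x_measured = 0.400000001  # our best measurement, off by about one part in a billion

    for step in range(1, 51):
        x_true = logistic_step(x_true)
        x_measured = logistic_step(x_measured)
        if step % 10 == 0:
            print(f"step {step:2d}: true={x_true:.6f}  "
                  f"predicted={x_measured:.6f}  error={abs(x_true - x_measured):.6f}")

    # By roughly step 30 the prediction bears no relation to the true trajectory,
    # even though the "model" was perfect; only the starting data was slightly off.

Because the error grows exponentially, no realistic improvement in measurement buys more than a little extra lead time.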

The weather is one of a number of natural systems that most theoreticians regard as sufficiently complex that they will always remain beyond the reach of our fantasies of comprehensive and practical reduction.  We might be able to understand the dynamics of weather.  We might be able to predict how much the atmosphere will warm in the next century if we continue reckless energy policies.  We might be able to predict if it will rain tomorrow in Poughkeepsie.  But we might never be able to predict if it will rain there next month.

This is the nature of the sciences of complexity.  We are able to achieve useful theoretical reductions at some levels of description and not at others.

2 Trains Passing in the Night

Unfortunately, the humanities have recently been provoked by metaphors from computer science without then being tempered by the harshness of empirical successes and failures.  Because the intelligent and social aspects of the human brain are so far beyond our current sciences, we don’t even know enough to get a clear negative result from an experiment on something as complex as education.  If we understood the brain better, it would be easier to define which aspects we didn’t understand.  As it is, we can’t even be articulate about our areas of ignorance, so it is almost inevitable that we pretend to know more than we do.

The central question in the cybernetically excited humanities should be, “Is the human mind more like an oil field or more like the weather?”  Political and academic forces seem increasingly inclined to see the human mind, particularly the young, developing mind, as being more like an oil field: modelable, optimizable, predictable.  And profitable.

This bias is seen in the increased emphasis on testing, the decline in funding for experiential learning that requires equipment (like microscopes or clarinets), and the stultifying mundanity of many contemporary primary school textbooks.

There are two good reasons to think that human minds are more like weather than oil fields.  One is based on evidence.  As with the weather, there are narrow frames of description in which human behavior is fairly predictable and even predictably modifiable.  These are better known to advertisers and political consultants than to educators, but they can nonetheless be said to exist.  It remains true, however, that human behavior is the least predictable phenomenon we know.  Examples abound of academic beliefs in human predictability that were allowed to stand for years, only to fall at their first empirical test.

One story that comes to mind is of the Stanford researchers hired by Microsoft to design simulated “personalities” for computer software.  The theorists believed that consumers would be helplessly and beneficially responsive to software that attempted to verbally dominate, or be dominated by, the user.  This idea resulted in productivity software oddly called “Bob” that was “dressed up” as if it were a suburban S&M parlor.  Needless to say, the result bombed in the marketplace, but what is remarkable is that it was tried at all.

The other reason to believe that human minds are more like weather than oil fields is moral.  As we start to gain scientific insights into the workings of the human mind, it is important that we not slip into a belief that minds are merely elaborate machines.  The founding documents of modern democracy rightly evoke divine arguments for the rights of free human beings, because ultimately the rights and status of a human being cannot be argued scientifically.  We must rely on faith in some core of free will in a person, or our notion of society collapses.  This faith is neither true nor false from a scientific point of view, since it is a premise that can ultimately be neither expressed nor tested scientifically.  It is a small crime against our culture when the language of education and the humanities encourages quantitative assessments of individuals.  Such small crimes did not begin with computers, but computers have provided a language that is stealthier and easier to accept in many quarters.

Like two trains passing in the night, the sciences and humanities have reacted to computational complexity in almost opposite ways.  The hard sciences have started to parse the world by how easily various parts of it can be usefully reduced.  Some aspects of the world are understood to be more like oil fields, and some more like rainstorms.

3 If Only Students Were Computers!

The humanities, alas, have moved in the opposite direction.  Education has come to be increasingly characterized by “results-oriented” approaches.  What might seem to be a borrowing of terminology and technique from the business world is actually a second-hand borrowing from engineers and scientists.  (I will refer mostly to the situation in the United States, with which I am most familiar.)

This has not resulted in the introduction of radical new education tools nearly as much as one might hope.  Instead, it has most often resulted in new justifications for old ideas, precisely because these are more describable, and are more easily integrated into fantasies of predictability.

For instance, testing has been with us for a long time, but now testing is applied to schools, teachers, school districts, and any other unit anyone can describe, and this is all done with language borrowed from engineering.  While there have been ample examples of human beings being treated as machines in educational, correctional, and medical settings since the Enlightenment, what is new is the cybernetic systems approach.

Since people are so complicated that we don’t even know how complicated we are, we tend to treat ourselves with linear approximations.

The core problem is probably political and economic.  Societies don’t like to devote a lot of resources to things that are not understood.

So, in countries where education is well funded, it tends to be rigidly construed.  Examples would be Germany and Japan, which have excellent universal education and where teachers enjoy status and live well, but this comes at the expense of an overly linear model of education.  I personally would not have survived in the schools of either country.

In the United States, on the other hand, support for education is low: teachers are poorly paid and have little status.  But the system is a little more open.  Many successful adults remember having one or two “magic teachers” who brought a devotion above and beyond the curriculum to their work.

(It should be noted that as I write this, the Bush administration is trying to make American education more rigid, linear, bounded, like the Japanese or German systems.  If my theory is correct, America will start spending more money on education if these reforms take hold, even though that is certainly not the intent at this time.)

The world as a whole probably benefits from the diversity of educational styles.  There’s some truth to the cliché that the United States has produced a disproportionate number of creative and innovative people even in quantitative pursuits like engineering.

Because the linear mindset will accept approaches that create the illusion that a problem is understood, it is easy to get people interested in putting computers in classrooms.  Even Newt Gingrich, a conservative American politician of the 1990s who was famously cautious about public spending, was excited about the idea of putting computers in front of children.

What usually happens when this is tried is that policy makers face a rude surprise.  If kids are allowed to use computers creatively, the result is even less linear and harder to track, and therefore harder to justify, than what was there before.  If the computers are used linearly, then they turn out to be more expensive than the old way of doing linear things.  This is because the whole lifecycle of a computer includes maintenance that a book doesn’t need, and a book lasts longer.

This has resulted in a sort of confused situation in which there’s still a lot of excitement about digital tools in schools, but no widely agreed-upon idea of how to use them.

4 Smellier than Cheese

Since the benefits of computers in education are so complex that we rely on fake measurements, it shouldn’t come as a complete surprise that the costs of computers in schools are often underestimated.  The sad truth is that computers go bad as fast as cheese.  The uses that children are most interested in, and that can be the most effective in most educational applications, happen to be just the things for which you need the most recent computers: 3D graphics, sound, color, a good screen, and so on.  So schools are in the position of making capital investments that become antiquated faster than any other item.  School computers have a reputation on a par with school food.

The usual way that technologies are bundled for use in classrooms is as follows:  Software is written for some particular machines by a vendor.  The specified machines and software are bought and installed at great expense by a school, often with special one-time funding support.  After a couple of years the machines are quite obsolete by prevailing standards, and furthermore many of them have ceased to function correctly.  It quickly becomes hard to find personnel to maintain the machines, as qualified candidates are strongly motivated to learn about newer machines instead and to seek better pay.  The machines fall into disuse, and then the whole cycle begins again.

What is most sad about this scenario is that the length of time it takes to develop software is sometimes longer than the period in which it is genuinely useful (though it may remain in use for a longer period of time).  What is most frustrating is that the true and full costs of a generation of educational software are almost always grossly underestimated.  It is also probably true that educational software usually has a briefer period of relevancy than anticipated.

This is a confounding paradox.  There have recently been campaigns to have businesses donate obsolete computers to schools in exchange for tax write-offs.  This is worse than useless, because it is inordinately expensive to find a way to make use of an assortment of old machines that are almost, but not quite, compatible with one another and that are inherently dull.

Young people want, and indeed need the very latest computers.  The more recent a computer is, the less it is merely a text processor and the more it is a potential simulator.  In this way, computer power directly maps into educational paradigms.

Instead of trying to finance an endless dispiriting money pit, I think there is an alternative that should be tried.  It would not be easy, but it’s worthy of consideration.

Every year and a half or so, a new digital medium design, coupled with a cultural use pattern, emerges suddenly among teenagers and young adults and takes on giant proportions, sometimes dwarfing all other media usage.  It then starts to fade away, but slowly.

Some recent examples, in reverse order of their appearance on the scene, include sms, Napster, Doom/Quake, chat, email, and personal web pages.  These are the collaborative examples, but they are joined by equally impressive, though less public, experiences, such as PlayStation 2 games.  The alternative I have in mind is to lean on these popular, rapidly refreshed technologies as platforms for learning, rather than having schools buy and maintain dedicated machines.

This approach should rightly be seen as problematic by advocates for underserved populations.  There is no question that underprivileged kids depend on schools for many things that we wish were available in the home, often including food.  It is certainly unreasonable to expect them to have access to computers at home.

I must say, however, that traveling in poor areas of the USA, and even in much of the urban Third World, I am struck by the widespread availability of consumer electronics.  Video games, satellite TV, and other gadgets find their way to many places that lack sufficient potable water.

I fully realize that this observation is not an adequate response to the problem, but I still maintain that leveraging the technologies of popular culture is likely to be both better and cheaper in the long run than periodically filling schools with out-of-date computers.
 

