1.31.2011

What does technology want?

The following is a review of technology philosopher Kevin Kelly’s latest book, What Technology Wants, which he added to the list of reviews on the book’s website. If you live in the Northeast and/or admire him as much as I do, join the Business Innovation Factory for a conversation with Kelly on February 10th.

* * *

I’ve been following Kevin Kelly’s work for a long time, and his thoughts for his next book about the meaning of technology since he started chronicling them in 2004. So it was with much anticipation and a good amount of context that I, and many others, received the book that finally emerged, What Technology Wants. In this ambitious endeavor to elucidate the intention of technology, Kelly contextualizes the evolution of technology within the evolution of life and consciousness. It is strangely beautiful poetry for technophiles and technophobes alike; instead of contrasting or conflating life and technology, he tells a convincing tale in which they are two thrusts of a self-organizing and increasingly self-conscious universe. But as exhibited by the notes I scribbled on almost every one of the book’s 406 pages, my critiques are many. What follows are five pointed questions and, based on the fifth one, a more extended critique of the book. In this critique, I make claims as audacious as Kelly’s but, unlike him, without duly defending them, and I willingly leave myself open to criticism. I struggled with my reaction to this book and wholeheartedly encourage your comments. Here goes, starting with five questions:

1) How does Kelly situate technology ontologically? Where does technology ‘sit’ relative to culture, humanity, life, consciousness, and other phenomena? Is it a seventh kingdom of life (p. 49), ontologically on par with animals, plants, etc., or is it a force “like gravity…[which] is embedded in the fabric of matter and energy” (p. 273), and therefore permeates all kingdoms of life? Or is it somehow both? Alternatively, if “we are continuous with the machines we create” (p. 188), is technology part of the animal kingdom, or further, part of the human species? And how does Kelly see this ontology changing as technology evolves into a more autonomous, self-aware force?

2) What Technology Wants is about technology in the aggregate. So important is this point that Kelly coins a term to refer to the entire sphere of technology: the technium. He goes on to characterize the technium in absolute terms, for example claiming that it is inherently prolife and diversity-enhancing (pp. 196, 352). But given the differences between individual technologies, is characterizing the entire sphere of technology in absolute terms meaningful enough? In other words, given the massively different social implications of a hammer versus a nuclear bomb versus the Internet, is it meaningful to put all technologies into one big pile in order to elucidate what ‘it’ wants? Absolute terms are certainly significant, but in order to meaningfully characterize any complex phenomenon, I think relative measures must also be taken into account. When characterizing an economy, we look at total wealth (an absolute measure) along with wealth distribution (a relative one); similarly, when characterizing the technium, total biophilia, diversity-enhancement, and other absolute measures must be accounted for along with their distributions. And if Kelly is interested in change through time, how those distributions are trending must be factored in as well.

3) Kelly prefers a decentralized system of color photography processing to a centralized one, and a peer-to-peer radio broadcast system to a heavily government-regulated one, and promotes transparent labeling of chemical products (p. 257). The characteristics he associates with a convivial manifestation of technology include cooperation, transparency, decentralization, flexibility, redundancy, and efficiency (p. 264). Such inclinations imply a particular political orientation, but why is Kelly not explicit about his politics? Does he a) not want to be painted as a liberal, idealistic techno-hippie, b) believe that convivial technologies can emerge from and thrive in non-convivial social contexts (defined by the opposite characteristics), or c) something else?

4) Given that What Technology Wants makes bold claims about the nature of technology and is being read far beyond academic circles, what do academics in the field of Science and Technology Studies (STS) think of the book? Specifically, what does Langdon Winner, whom Kelly cites admiringly throughout the book, think of it? If you’ve already read this far and have any connection to Winner, one of my two favorite philosophers of technology, then a) you’re amazing, and b) please ask him to write a review. Or better yet, please ask him to have a public conversation with Kelly, which would make for a fascinating encounter between my two favorite philosophers of technology.

5) Years ago, renowned mycologist Paul Stamets began noticing that mushrooms which cured similar diseases grew in similar conditions. Applying this observation, he came to know where to look for mushrooms that cured specific diseases. Applying it further, he learned to create conditions conducive to growing mushrooms that cured specific diseases, effectively growing cures. Without wanting to risk social determinism (the false notion that social context alone determines technology), I pose an honest question: similar to Stamets, can we deliberately create the social conditions conducive to growing less harmful and, au contraire, convivial technologies? (Who are “we” and how do we determine what is “less harmful”? Wince, I know; that is another question – or rather the question – taken up below.) Put more simply, if our process of technological development matched the character of the technology we sought, what technologies would we be creating? I won’t say the following statement applies wholesale, at least not yet, but I’d venture to imagine that such technologies would have unintended benefits.

Based on this fifth question, what follows is an attempt to synthesize my critique of the book.

* * *

Ultimately, I wonder whether Kelly ascribes too much power to technology as an inevitable force in its own right, and not enough power to us to create the conditions in which technology emerges.

A basic premise of What Technology Wants is that technology cannot be stopped, only postponed. Therefore, Kelly argues, “our role as humans…is to coax technology along the paths it naturally wants to go” (p. 269). To me, this smells too much like Thomas Friedman’s take on our relationship with globalization: inaccurate and overly passive. I’m inclined towards a more explicitly active relationship with globalization, and with technology – an active terraforming of its path rather than a seemingly passive coaxing along the path it chooses.

Kelly claims that a) we can’t predict what a given technology’s harms will be, and therefore b) we can’t prevent them (p. 244). On the contrary, I believe both a) and b) are possible, and increasingly so within an actionable time frame. Technology critic Langdon Winner, whom, as mentioned above, Kelly cites throughout the book, discusses how technologies can have social implications that can be known before they are used or even built. He describes the approximately 200 low-hanging overpasses on Long Island, deliberately designed by master builder Robert Moses to prevent buses, therefore public transit, and therefore poor people and blacks, from accessing Jones Beach, Moses’ acclaimed public park. He describes technologies that require extremely hierarchical forms of social organization to be built and operated, such as cotton factories, railways, and nuclear energy plants. The difficulty with such technologies in democratic societies, Winner writes, is ensuring that the forms of social organization they require to be built and operated do not ‘bleed’ into the polity as a whole. I’m painfully simplifying Winner’s sophisticated discussion of how technologies can ‘have politics’ for purposes of this post. Still, in all of these cases, the social harms of technologies can be predicted a priori, based purely on the social context of their development. And if harms can be predicted, they can be prevented. Knowing that an overtly racist builder will build overpasses that institutionalize racism can prompt the hiring of a builder who prioritizes social equality, or better yet, the use of an equal-opportunity participatory design process. Knowing that nuclear technology requires an extremely centralized social system, while solar doesn’t require any system in particular but is compatible with a diversity of them, can help guide energy choices in a democratic society. Contrary to Kelly, I believe that under some circumstances, we can not only use the social context of technological development to predict and prevent a technology’s social harms, but furthermore deliberately design that social context in order to cultivate social benefits.

Potentially plausible, but what about non-social harms? How would examining the social context of technological development enable us to predict or prevent health and environmental implications? Conveniently for us, we draw hard distinctions between health, social, and environmental harms, but our bodies, societies, and ecosystems don’t. There are distinct levels in complex adaptive systems, but there’s feedback between them. Hazardous nuclear waste is an “unintended consequence” of producing nuclear energy, but so-called unintended consequences are simply symptoms of a deeper-rooted, more systemic problem. Let me offer two anecdotes. First, global dumping grounds exist because some countries have disproportionately more power than others. Powerful countries are able to create an abundance of toxic waste because they can dump it in weaker countries. Extreme social inequality is therefore detrimental to the environment humans need in order to thrive. Linger on that for a moment: social inequality manifests itself as environmental degradation. This is something any environmental justice advocate can tell you. Now think back to nuclear energy. Is it possible that nuclear waste is not unrelated to, i.e. is a symptom of, the extreme social centralization required to produce it?

A second anecdote: In his TED talk, Dan Barber tells the story of how, historically, Foie Gras was a naturally-occurring seasonal food, discovered accidentally by Israelite slaves in Egypt. During the Fall, ducks naturally gorged on dry leaves in order to prepare for the Winter, enlarging their livers and making them a delicacy. It was because the Pharaoh wanted to eat this delicacy year-round, and demanded that the slaves figure out how to make it available, that the inhumane practice of force-feeding grain to ducks was born, and ultimately the maligned Foie Gras industry we know today. But in 2007, the winner of the Coup de Coeur (essentially the French Olympics for food) turned out to be a Spanish farmer who produced his Foie Gras naturally, allowing his ducks to eat as they pleased during the Fall and producing Foie Gras from their free-roaming, happily-eating livers. It is at once both obvious and miraculous that animal welfare should taste delicious. What’s more, allowing ducks to eat happily means organically growing a biodiversity of plants, so sustainability tastes delicious too. Again, let it linger for a moment that the inhumane treatment of animals evolved in the context of slavery, and that their humane treatment manifests itself as deliciousness. But deliciousness would not be accurately described as an unintended benefit of duck happiness; it is a sister symptom of a systems solution manifesting itself at multiple levels of organization – from our taste buds to the local ecosystem. Bringing these anecdotes together: not only can we predict and prevent the social harms of technologies, but by organizing ourselves in a way that maximizes our collective well-being – fostering equitability and heeding our taste buds – we create the conditions conducive to the emergence of technologies that similarly maximize well-being, at all levels of organization. Kelly prefers a decentralized system of color photography processing to a centralized one, and a peer-to-peer radio broadcast system to a heavily government-regulated one, because of their improved technological features. Could improved technological features be a symptom of decentralized social organization, and if so, why not start with the decentralization that manifests itself as technological improvements? In Wendell Berry’s terms, this might be considered a “solving for pattern” approach to technological development. In Kelly’s terms, it suggests building convivial technologies by organizing ourselves in a way that is itself convivial, i.e. cooperative and decentralized.

But taken to the extreme, this implies social determinism. It implies that all technology should be appropriate technology, and that all technological development must be convivial in order for the technologies built to be convivial, thus rendering un-convivial most technologies in existence today, including the ones I’m using to develop and communicate these very ideas. But I’m not a social determinist, nor a socialist, nor a communist. (In fact, I’m a promiscuous pragmatic pluralist; scroll down for a brief elaboration.) I recognize the value of non-classlessness – too much and too little social inequality are detrimental to economic growth and technological development, albeit in different ways. This extreme is not what I’m arguing; in fact, my claim is even bolder. Kelly acknowledges that “industrialization was dirty, ugly, and dumb,” and brilliantly asks “whether this ugliness is a necessary stage of the technium’s growth” (p. 323). I think it was necessary but is no longer. We’ve already created nuclear energy and DDT and other technologies that required severely undemocratic forms of social organization, and I believe we’re arriving at a juncture in spacetime in which feedback loops are tighter and faster, so that instead of manifesting themselves 7 generations later, implications – social and otherwise – manifest themselves within the timeframe of actionability, evolving us towards a more socially responsible process of technological development. And, bringing it back home, I believe this is due to our co-evolution with technology. Let me explain.

Kelly believes technology is permeating everything it creates with sentience, effectively becoming the universe’s mechanism for self-awareness. Perform the thought experiment: as the universe becomes self-aware, what will it know? It is information, communication, and transportation technologies, among others, that enabled the story of deliciously sustainable Foie Gras to get from Spain to TED to me to you, all within a few years, so it could be acted upon. More broadly, it is Zagat and Yelp and Urbanspoon that speed up the informational feedback loops enabling it to be known that sustainably grown food tastes better, and thus enabling the gourmet food industry to adapt to this information, effectively subsidizing the production and consumption of sustainable food and sustainable food systems. Even further, it is social network analysis tools that enable people to see the structure of their social networks, and that, I predict, will cultivate the capacity for sociofeedback, the social equivalent of biofeedback, i.e. the ability to adaptively shift the structure of our social networks by virtue of being conscious of them (which I referred to in a previous Smart Mobs post, and will soon dedicate an entire post to). Sociofeedback would allow people to, for example, form an internal hierarchical social structure for purposes of producing nuclear energy, without it bleeding into the external polity, because they’d be able to consciously shift to other social structures for other purposes, such as citizenry. And due to collective awareness of nuclear’s ramifications at all levels of organization, producing nuclear energy would be understood as a deliberate choice, perhaps a temporary tactic under emergency circumstances or part of a broader strategy to increase the ratio of renewables to non-renewables. It is with apologetic irreverence yet undying optimism that I propose we can not only a) predict and b) prevent a technology’s harms, but increasingly do so within an actionable timeframe, thanks to technologies produced by an un-convivial industrialization that are now integrating themselves into the inherently convivial complex adaptive system that is life.
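
To make sociofeedback slightly more concrete, here is a minimal sketch of the kind of mirror such tools could hold up to a group: a measure of how centralized its network currently is. This is my own illustration, not anything Kelly proposes; it assumes the Python networkx library, and the two example networks are hypothetical.

import networkx as nx

def degree_centralization(G: nx.Graph) -> float:
    """Freeman's degree centralization: 0.0 for a flat, ring-like
    network, 1.0 for a perfect star in which one hub dominates."""
    n = G.number_of_nodes()
    if n < 3:
        return 0.0
    degrees = [d for _, d in G.degree()]
    max_d = max(degrees)
    return sum(max_d - d for d in degrees) / ((n - 1) * (n - 2))

# A hierarchical, hub-dominated structure (say, for running a plant)...
star = nx.star_graph(9)      # one hub plus nine spokes
# ...versus a flat, peer-to-peer structure (say, for citizenry).
ring = nx.cycle_graph(10)

print(degree_centralization(star))  # 1.0 -> fully centralized
print(degree_centralization(ring))  # 0.0 -> fully decentralized

A group that could watch this number move as it reorganized – hierarchical for building the reactor, flat for deliberating about it – would be doing exactly the conscious structure-shifting described above.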

Mega online clothing retailer Zappos is structured to maximize employee happiness, because its CEO discovered that employee happiness correlates directly with customer satisfaction, and in turn with the company’s bottom line. I suspect that 200 years ago, had a mega-retailer structured itself to maximize employee happiness, it would have swiftly gone bankrupt. I understand this may still be the case for many, if not most, businesses today, but is it not again obvious/miraculous, or at least meaningful, that in the case of a mega-retailer like Zappos, socially responsible labor practices correlate with financial returns? Or, as the Gini coefficient helps reveal, that a base level of economic equality is necessary for economic growth? In fact, writing about his upcoming book on the evolution of capitalism, Christopher Meyer posits that “business [will] take ownership of the impacts they now call ‘externalities.’” The new capitalists will internalize externalities not for altruistic reasons, but because doing so is actually becoming good for business. As technology makes the universe more self-aware, accelerating the feedback loops that make these kinds of correlations explicit, and evolving our complex adaptive system – or as Kelly might say, evolving evolvability itself – we’ll adaptively shift from passing externalities onto other people at other cash registers 7 generations later to engaging in a more convivial mode of production and consumption, and in turn, a more convivial mode of technological development.
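
Since the Gini coefficient has come up, here is an illustrative sketch of the absolute-versus-relative point from question 2: two sets of technologies with the same total ‘biophilia’ can have radically different distributions of it. The scores below are made up purely for illustration.

def gini(values):
    """Gini coefficient: 0.0 for a perfectly even distribution,
    approaching 1.0 when everything is concentrated in one place."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula over the sorted cumulative distribution.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

even   = [5, 5, 5, 5, 5, 5]   # six technologies, each moderately biophilic
skewed = [28, 1, 1, 0, 0, 0]  # same total, concentrated in one technology

print(sum(even), round(gini(even), 2))      # 30 0.0
print(sum(skewed), round(gini(skewed), 2))  # 30 0.8

An absolute characterization (“the technium totals 30 units of biophilia”) is identical for both; only the relative measure distinguishes them, which is the crux of question 2.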

How would such a convivial mode of technological development be mandated, or even encouraged, by public policy? Not quite with the Precautionary Principle, which, as Kelly criticizes, maximizes safety to the detriment of other values, like progress. Also not quite with Kelly’s Proactionary Principle, whereby we “shape technology’s expression by…riding it with both arms around its neck” (p. 262), as if struggling to tame a wild horse. Very simply, I suggest we first revive the Office of Technology Assessment, and second, get it talking to the Department of Health and Human Services and the Department of Education and the Department of Energy. Some interdepartmental coordination could go a long way towards integrating technological development with other societal goals, and therefore fostering a convivial mode of technological development that, in turn, develops convivial technologies.

* * *

“So what does technology want?” All the way on page 269 of his treatise, Kelly offers a seemingly straightforward answer: “Technology wants what we want – the same long list of merits we crave.” But what do we want? Ah, finally we arrive at the “we” question. After wading through a thicket of mental trials and tribulations, ‘riding technology with both arms around its neck’ presents itself as an exercise in democracy, an exercise in deciding who “we” are and what we want. Reading Kelly’s words in this light seems to affirm the importance of social context – could ‘coaxing technology along the paths it naturally wants to go’ mean creating the social context conducive to manifesting our will? Might he actually concur that by organizing ourselves in a way that maximizes our collective wellbeing, we create the conditions conducive to the emergence of technologies that similarly maximize our wellbeing, at all levels of organization? If I understand Kelly correctly, his response to the seminal question of this book is that if technology wants what we want, then we must get what we want in order for technology itself to be satisfied.

Kelly bases his argument that technology wants what we want, and is inherently a force for good, on its maximization of our freedom. “Technology is acquiring its own autonomy and will increasingly maximize its own agenda, but this agenda includes – as its foremost consequence – maximizing of possibilities for us” (p. 352). This emphasis on freedom interestingly echoes America’s Founding Fathers’ emphasis on the same virtue, but meaningfully leaves out the corresponding virtue they acknowledged: justice. Kelly nowhere speaks of justice; the characteristics he attributes to convivial technology do not include justice or equality or anything to temper freedom. Furthermore, he feels that nothing need be done about the ‘digital divide’ – the rich subsidize technology’s evolution for the poor, and to him this is an ideal state of affairs. But if liberty without justice is chaos, and justice without liberty is tyranny, and if technology wants what we want, then – within the crudely-defined context of democratic societies, in which all of our wants are supposed to be accounted for – technology wants liberty and justice for all. Even after 406 pages, I confess to not knowing to what extent Kelly would agree with me on this conclusion: technology manifests what it wants, i.e. what we want, to the extent that we manifest what we want, including in our development of and therefore co-evolution with technology. Coming full circle, hopefully I’ll have the opportunity to ask him at the Business Innovation Factory’s conversation with Kevin Kelly on February 10th.
