Saturday, March 23, 2013

Economism and Human Robots: Why Seeing Life as a Market is Corrupting

            In the previous post I mentioned Evgeny Morozov’s recent book, To Save Everything, Click Here: The Folly of Technological Solutionism. While Morozov focuses on the Internet, one passage especially helps us see the flaws in the sort of thinking that characterizes economism.
            As I explained in The Golden Calf, one feature of economism is viewing all human behavior as a sort of market exchange, and arguing that economic laws should govern all social policy. Philosopher Michael Sandel, in his What Money Can’t Buy: The Moral Limits of Markets, calls this the move from having a market economy (which is perfectly fine) to living in a market society (which isn’t).
            Morozov surveys the geeks who think that the Internet will solve all the world’s problems—even “problems” that aren’t really problems and don’t need solving, so long as there’s a great app for them. He finds that these geeks seem to have a love affair with rational-choice theory from economics, and with providing various incentives and “nudges” to get people to behave the way social engineers want us to. He takes aim at what this approach does to social life and our sense of ourselves, in a passage worth quoting at length:
            “A scheme that wants to get children to help senior citizens by awarding them badges and game points is likely to produce very different children than a scheme that appeals to their civic duty, even if both schemes yield the same results. The problem with simplistic models imported from economics and rational-choice theory is that, whenever they tackle a novel case, they start with a new set of abstract, independent, and ahistorical citizens. Thus, children who were just helping senior citizens by playing games are forgotten and swept away, and a new set of children—like so many widgets and coconuts—is mustered up to engage in some different task, perhaps to solve math puzzles after resisting the cookies. But, of course, children can’t reboot the way computers can; we have the same children doing both—and their experiences accumulate rather than cancel each other out. Constructing a world preoccupied only with the most efficient outcomes—rather than the processes through which these outcomes are achieved—is not likely to make them aware of the depth of human passion, dignity, and respect. We don’t earn our dignity by collecting badges; we do it by behaving in a dignified manner, often in situations in which we have other options. Tinker with this spiritual pasture, and those options might go away—along with the very possibility of dignity.”
            Now, here’s a philosophical-ethical take on what Morozov just said. In ethics we have the old-fashioned ideas of virtue and character—that it matters what sort of person you are, and not merely how you behave at any one moment in time. And what sort of person you are is shaped over a lifetime, by how you were brought up and by your later experiences and values. Our goal ought to be to encourage people to develop into morally good persons—good citizens for our democracy, who treat those who depend on them well and responsibly. To encourage that, we have to understand the basic ideas of virtue and character, and how these are lifelong attainments.
            The type of view that most undermines this concept of virtue and character is any view that reduces people to superficial packages of behavior, forgetting that people remain the same people over a lifetime, with what happened to them in the past helping to shape who they will be in the future. It’s exactly such a view of replaceable, rebootable people that Morozov accuses the rational-choice-economics crowd of pushing on us. His conclusion: “[T]here’s something profoundly disgusting about this [rational-choice incentives] approach, for it not only tricks—rather than talks—us into doing the right thing but also gives us a fake feeling of mastery over our own actions….Trying to improve the human condition by first assuming that humans are like robots is not going to get us very far.”
