In my previous post I discussed my concern that being perpetually “plugged in” was hindering my ability to learn. I’ve had other concerns, too. I’m less productive than I used to be. I have a hard time seeing projects through. My relationships with friends have atrophied; one-line Facebook notes have largely replaced the long, rich e-mail conversations I’ve always favored. Overall, I’ve had the nagging sense that being constantly wired has made my life less meaningful and satisfying.
That got me thinking a lot about the hazards of our information age, which we are mostly blind to. After all, information is the medium we swim in these days. We take it for granted and can hardly conceive how to live differently. So this week I read two books that thoughtfully examine what the Internet is doing to us as human beings.
First, I liked Nicholas Carr’s Wired article so much that I bought his book. Carr draws on modern scientific research to show how the Internet is actually rewiring our brains at a physiological level. New technologies, he argues, fundamentally reshape how we think. The birth of written language and the creation of the printing press didn’t merely put new information into the hands of the masses; these technologies overturned centuries of oral culture and an education system based primarily on memorization. They changed how human beings think. The Internet has done the same thing, and while that brings many benefits, it also has risks. The research is pretty conclusive that the Internet is a medium built around distraction. While it makes a vast amount of information accessible, it hinders deep learning and creativity.
Some Internet apologists argue that we don’t need deep learning anymore, because we can look up anything we want instantaneously on the net. Amazingly, Carr cites a Rhodes scholar studying philosophy who has given up reading books entirely. These people argue that the function of human intelligence nowadays is to index information sources and know where to look up the answers we need. To some degree, that might be true. But Carr cites scientific studies showing that deep learning is what creates the frameworks and scaffolding in our brains that we use to process and analyze information. Our ability to synthesize information, place it into larger wholes, and think creatively depends to a large extent on the kind of deep learning that occurs when we read books.
Jaron Lanier’s book You Are Not a Gadget: A Manifesto attacks Web 2.0 culture, arguing that it is undermining us as human beings. Lanier is no Luddite; he is the father of virtual reality and one of the pioneers of the digital age. Lanier draws on his technological expertise to show the profound consequences that design choices can have on human society. These design choices are not inevitable, but once they are made, they frequently get “locked in” and become impossible to change. By atomizing human beings and reducing them to discrete properties that fit within tidy design patterns, Web 2.0 designs sacrifice the uniqueness and creativity that make us human.
Lanier frequently compares Web 2.0 technologies to the MIDI music format, which was designed to digitally represent musical notes. The technology got “locked in” early and acts like a straitjacket now, because MIDI has been so universally adopted. The problem is that MIDI sounds… well, terrible. The rigid digital format can’t capture any of the richness, subtlety, and nuance that makes for quality music. Lanier argues the same thing is happening to human beings.
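To make Lanier’s MIDI analogy concrete, here is a small sketch (my own illustration, not from the book, and not using a real MIDI library) of how MIDI’s design quantizes pitch: every note must become one of 128 integers, so any expressive bend or microtonal shading between semitones is simply discarded.

```python
import math

def freq_to_midi_note(freq_hz: float) -> int:
    """Quantize a frequency to the nearest MIDI note number (0-127).

    MIDI note 69 is defined as concert A (440 Hz), with 12 notes per octave.
    """
    note = round(69 + 12 * math.log2(freq_hz / 440.0))
    return max(0, min(127, note))  # MIDI note numbers are 7-bit

def midi_note_to_freq(note: int) -> float:
    """The only pitch a MIDI note number can represent on playback."""
    return 440.0 * 2 ** ((note - 69) / 12)

# A singer performing slightly sharp of concert A...
performed = 445.0
note = freq_to_midi_note(performed)   # quantized to note 69
recovered = midi_note_to_freq(note)   # played back as exactly 440.0 Hz
```

The 5 Hz of expressive detail in the performance is unrepresentable and lost in the round trip; that irrecoverable flattening is the straitjacket Lanier has in mind.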
Lanier has other criticisms. “Cybernetic totalists” are elevating the crowd (or hive) above human beings themselves. These totalists tell us that the hive has more worth, creativity, and intelligence than individuals. Thus a collaborative project like Wikipedia will somehow be superior to the contributions of any one individual. The truth, Lanier argues, is frequently the opposite. Passion, creativity, and real art spring from individuals. Web 2.0 chops, dices, atomizes, mashes up, and resynthesizes these contributions into something less meaningful.
Lanier also believes that Internet business models, which flow directly from design choices, are destroying art and creativity. A handful of information gatekeepers like Google make extravagant amounts of money, while artists, content creators, and producers get nothing. Most Web 2.0 enthusiasts bash “old media” like newspapers and the music business, blaming them for being slow to adapt to changing technology. Lanier argues that with our current designs, adaptation is impossible; there is no alternative business model that will help musicians or newspapers survive.
Whether one agrees with these authors or not, they do a great service by challenging conventional wisdom and asking hard questions about the digital age. What is the Internet doing to us as human beings? We are familiar with the benefits; what are the risks? Are those risks inevitable, or could different design choices avoid them? How do we live effective and satisfying lives as human beings in response to these technologies? These two books have given me plenty to think about.