Technology

Talking About Talking: Part One

I was a communication major in college, and I taught communication theory in grad school. I got an MA in media studies and am a journalist by trade. Needless to say, I often think about communication. So it's time to start a multi-part series about it on my blog. What can I say? I'm nostalgic for grad school. There are a LOT of different kinds of communication. Everyone does it slightly differently. It’s one of the things I like most about people. I love observing how they talk and listening to them and learning about their stories. I'm a journalist, so it's kinda what I do.

The Death of Facebook

Many of you know that I have a love/hate relationship with Facebook. I joined it reluctantly six months ago and have loved and loathed it for various reasons. Recently, though, Facebook has been mired in a bit of an existential crisis. Just this week it reversed its new terms of service, which users had passionately rejected for their creepy proprietary implications. And then there is the whole "25 Random Things" sensation that has inexplicably captured the imagination of 7+ million Facebook users. To me, this oddly retro, gloriously insipid throwback to 1998 e-mail forwards is the strongest sign yet that Facebook will soon collapse under the weight of its own purposelessness.

Reformed Luddite Talks About Communication

In the wake of my recent treatise against Twitter in Relevant magazine, I've felt a little bit guilty. I've felt like I need to apologize to technology for being so hard on it, for always assuming the worst about it. I still insist, and always will, on critical analysis of new technologies, and I still believe that we should err on the side of skepticism rather than unthinking embrace, but I've come to realize this week that the technologies I have often and very publicly railed against (Facebook, Twitter, Bluetooth, etc) can be and are being used for good things. God uses these things in spite of their creepy digital impersonality.

I Joined Facebook... Sigh.

September 19 was a dark day for me... but one that I feared would come soon enough.

I joined Facebook.

This is after years and years of publicly campaigning against it in articles such as this and this... oh and this one as recently as January where I talked about "the irrevocable damage Facebook and its various counterparts have done to meaningful communication."

And now I am a part of the monster, feeding it like everyone else...

Laughable, I know. It will take a while for me to recover from this swift idealistic collapse. Now I know what Obama must feel like after talking so much about not running a negative campaign and then being forced to do it anyway.

Not that I was forced to do it, but believe me when I say that I had to join Facebook. No professional journalist can really function without it these days, and my job at Biola magazine (especially some articles I'm writing now) necessitated some serious use of Facebook.

It sickens me when technology wins, when I can no longer survive without it. This is like the cell phone: so many people held out and refused to get one five years ago, but now we'd all die without them. These are moments when Neil Postman's Technopoly seems more prescient than ever.

I joined Facebook with the hope that I could "hide" and only use it secretly for work purposes. Ha. That resolve lasted about 30 minutes earlier today before quickly devolving into just another typical Facebook start-up routine: "friends," friend-requests, profile-making, etc. I've really fallen fast, giving myself over to my sworn enemy with crude ease and jolting swiftness. At this rate of ideological turnaround I will be Facebook's biggest champion by this time next week. Heaven forbid.

Talking Singularity at Cambridge

So the Cambridge week of the Oxbridge 2008 conference has been underway since Saturday, and it has been a marvelous experience thus far. The weather is cool and rainy (in a British sort of way) but the energy is high and all of our heads are spinning from the various lectures and stimuli being thrown at us.

A few highlights of Cambridge thus far include a stunning Evensong service at Ely Cathedral on Sunday, a dinner/dance at Chilford Hall (basically a barn-like structure in Kansas-like wheat fields), and some great lectures from the likes of Colleen Carroll Campbell, Bill Romanowski, and Nigel Cameron, the last of whom I found particularly provocative.

Cameron, Director of the Center on Nanotechnology and Society and Research Professor of Bioethics and Associate Dean at Chicago-Kent College of Law at the Illinois Institute of Technology, gave a talk entitled "Stewarding the Self: A Human Future for Humans?" Essentially the talk asked the question, "What does it mean to be human?" in an age (the 21st century) when all efforts seem to be moving toward a reinvention of the human project itself. He talked about three ways in which the human as we know it is being redefined: 1) taking life (abortion, euthanasia, stem cells, etc), 2) making life (test tube babies, cloning, etc), and 3) faking life (cyborgs, chips in human brains, robots, etc).

It's interesting because just about a month ago I wrote a blog post about many of the things Cameron talked about. Actually, my review of Bigger, Stronger, Faster also fits into the discussion, as does my post about Iron Man. In each of these pieces I point out the increasing sense in our culture that the human being is becoming more machine-like... We conceive of our bodies not as carriers of a transcendent soul but as material objects which can be manipulated, botoxed, pumped up, and enhanced in whatever way pleases us. Cameron pointed out various technologies being developed that will make this sort of "faking life" all the more prevalent... such as BMI (Brain-Machine Interface), which will allow our brains to work with computer chips embedded in them... so we can just think a webpage or some digital computation rather than go to the trouble of using computer hardware external to our bodies.

He mentioned that the computing power in the world will likely increase by a factor of a million within a generation, which means we have no concept now of just what the future will look like. He pointed to a government study released in 2007 entitled "Nanotechnology: The Future is Coming Sooner Than You Think," which featured some pretty remarkable assessments from noted futurists and nanotech scholars about what the future might hold. For a government study, it's pretty sci-fi. Take this section, which raises the possibility of "The Singularity" happening within a generation or two (and for those unfamiliar with "The Singularity," read about it here)...

Every exponential curve eventually reaches a point where the growth rate becomes almost infinite. This point is often called the Singularity. If technology continues to advance at exponential rates, what happens after 2020? Technology is likely to continue, but at this stage some observers forecast a period at which scientific advances aggressively assume their own momentum and accelerate at unprecedented levels, enabling products that today seem like science fiction. Beyond the Singularity, human society is incomparably different from what it is today. Several assumptions seem to drive predictions of a Singularity. The first is that continued material demands and competitive pressures will continue to drive technology forward. Second, at some point artificial intelligence advances to a point where computers enhance and accelerate scientific discovery and technological change. In other words, intelligent machines start to produce discoveries that are too complex for humans. Finally, there is an assumption that solutions to most of today’s problems including material scarcity, human health, and environmental degradation can be solved by technology, if not by us, then by the computers we eventually develop.

Pretty crazy stuff, eh? Who knew the government actually thought that The Terminator was going to come true? As Cameron pointed out, it's as if the forecasts of Mary Shelley, Aldous Huxley, and C.S. Lewis (in The Abolition of Man) were all coming true. It means that Christians will need to address science and technology along with theology and postmodernism in the coming decades, raising questions that perhaps no one else will, such as: how do we reconcile a theology of suffering with a world that is trying its hardest, through technology, to rid us of all suffering?

Is “Online Community” an Oxymoron?

L.A. is a lot like the blogosphere. It is sprawling and overwhelming, though manageable if you find your niche. It's full of pockets and localized communities where ideas and ideologies are reinforced in insulation. And like L.A., the blogosphere can be very, very impersonal.

One thing I've struggled with during my first year of blogging is the ever-present dichotomy of, on the one hand, feeling more connected to people than ever before, and, on the other, feeling a bit isolated from the "real" world. Do you other bloggers feel that tension? It is a very personal thing to share one's thoughts, but also a very strange thing to do it from so veiled a position. Are humans really meant to be so unrestricted in their ability to mass communicate?

I certainly feel more empowered and willing to say pretty much whatever I want when I write for my blog, which is totally great but also a total misrepresentation of non-blogging life. I wouldn't dare say some of the things I've written on this blog in person to very many people, though that doesn't mean I don't believe them. Which, of course, raises the question: what is more "real"? Self-constructed, though thoroughly free-wheeling and uncensored, online discourse, or co-constructed, slightly-more-tactful, in-person communication? I want to say the former, but large parts of me feel that the latter is truer, that it is in the unsaid presences and awkward cadences of simultaneous communication between people that the most important things reveal themselves.

Of course, by saying this, I’d have to say that Martin Luther reading the words of Paul in his isolated monk’s chambers is somehow inferior (in terms of meaning-making) to an insipid dorm room conversation about predestination, and I’m not prepared to go that far. But I think I might be talking about two different things here: communication as arbiter of ideas and communication as creator of relationships. Perhaps one method (the written or otherwise recorded word transmitted impersonally) is superior in terms of elucidating the meaning of abstract ideas and theories, while the other method (in person community and communication) serves better the development of emotional and relational existence. In platitudinal terms: one is better for the head, the other for the heart.

This may sound obvious, but the mainstream of communication theory has heretofore been unable to reconcile the two "purposes" of communication (in James Carey's terms: the "transmission" view vs. the "ritual" view). Traditional scholarship views communication either as a way to transmit things and ideas from one place or person to another (emphasis on what we communicate), or as a symbolic process of shared meaning (emphasis on the act of communication). With the Internet, though, I think we have to reexamine all of these things; we have to re-conceptualize communication itself.

More presently to my concerns as a blogger: should I view it mainly as a community and value it as such (for the visitors, the comments, the entertaining back-and-forth, regardless of how productive), or should I look at it as a place for ideas to be born and bred? It is interesting to wonder: with all the thoughts and ideas bandied about on the blogosphere every day, is there any resultant progress in the overall level of human understanding? Has discourse been furthered? Or has it made things worse for actual productive discourse? I'd hope it's not the latter. I'll continue blogging in the wishful understanding that it is the former.

YouTube Goes Highbrow

During the L.A. Film Festival this year, I was first introduced to the YouTube Screening Room, an area of the site devoted exclusively to selected independent films. The Screening Room will feature four short films every two weeks, as well as the occasional full-length feature. Right now the four featured films include Miguel Arteta's hilarious short, Are You the Favorite Person of Anyone? (starring Miranda July, John C. Reilly, and Mike White), Oscar-winner The Danish Poet (2007 best animated short), Oscar nominee Our Time is Up (starring a fantastic Kevin Pollak), and Love and War ("the world's first animated opera"). I recommend viewing them all.

This new YouTube venture is terribly exciting, and it has the potential to revolutionize the regrettably ghettoized short film form. Previously, short films have been largely relegated to life on the festival circuit, but with the Internet (and especially something like the YouTube Screening Room), perhaps the short film will enjoy a popular renaissance.

More importantly, this could further democratize the entry points to the film industry. Intrepid young filmmakers who score a featured spot on the site (and user-submitted videos will in fact be a part of it) and garner a million or so views will likely become attractive properties for bigger and better things in Hollywood. The Screening Room also provides a potential moneymaking venture for otherwise unemployed aspiring filmmakers. Videos on the site will be eligible for YouTube's revenue-sharing program, whereby filmmakers split some of the income from the advertising that accompanies their movies.

Finally, I think that if this venture is successful, it indicates that in the not-too-distant future art cinema will be distributed first and foremost on the Internet. Blockbusters and event movies will always (well, for a while at least) be outside-the-home experiences, but art films will increasingly be seen via Netflix, HDNet, or the Internet. After all, not every city is like L.A.: most people in the world don't have film festivals and 12-screen arthouse multiplexes to go to if they want to see obscure films.

Four Easy Pieces

I. A lot of people are hating on Prince Caspian, for understandable (if not completely sympathetic) reasons: the movie is vastly different from the book, especially in overall tone and spirit. The film is a swashbuckling war epic that is about 66% battle scenes and/or sword fights, and certainly this is not what Lewis's classic children's tale is about. And yet I enjoyed the film, and I'm perplexed by all those who angrily dismiss it as "missing the point." What do you expect when a children's book from 50 years ago is transformed into a big-budget summer blockbuster in the year 2008? (That said, I do suggest reading this creative critique of the film.)

I don't want to defend the film too much, because it is certainly not perfect; but to judge it on the merits of the book is not completely fair. The moving image, after all, is a remarkably different medium from the written word. Cinema removes (or at least downplays) the element of imagination, which is crucial to books and novels (especially children's fantasy!). In books, we visualize the characters, settings, and action. In film, it is done for us—our attention directed hither and yon from one set piece, sequence, or costume to another. In lieu of the removed element of "interaction" (the ability of the reader to co-create the reality of the story), cinema must compensate in other ways: offering high-intensity spectacle, gloss, and action to hold our interest and transport us into a world.

To fault Caspian for being too action-heavy, then, is to misunderstand the purpose of cinematic adaptation. A film could never equal the experience of a book; the best book-to-film adaptations are those that are the most true to form (i.e. cinematic) and that don’t get bogged down in something that is ontologically contrary (i.e. the literary). Film theorist Andre Bazin harped on this, and for good reason. He wrote that “If the cinema today is capable of effectively taking on the realm of the novel and the theater, it is primarily because it is sure enough of itself and master enough of its means so that it no longer needs assert itself in the process. That is to say it can now aspire to fidelity—not the illusory fidelity of a replica—through an intimate understanding of its own true aesthetic structure which is a prerequisite and necessary condition of respect for the works it is about to make its own.”

The film version of Narnia does Lewis justice by not trying to capture his literary genius on film. It does better to focus on its own form (spectacularized summer blockbuster) and wow the audience with cinematic wonder, in the way Lewis wows us with his poetic literary whimsy. One might complain, for example, that the film transforms Susan into a Tarantino-esque killing machine, wielding a bow-and-arrow with Legolas-like tenacity. But this is a film, built around action, so it's much better to have our heroine Susan smack-dab in the middle of it all rather than cheering from the off-camera sidelines. Sure, the film loses much of the book's innocence and spiritual "themes"—the "deeper magic" of Narnia, after all, is not something that WETA special effects can really evoke (certainly not as well as the written words of Lewis could). But the film offers us something altogether more visceral that the book could never express. Then again, we're talking about apples and oranges here: films and books. We should move on.

II.

“The medium is the message,” said Marshall McLuhan. Meaning: the form of a message shapes its content. Indeed, the form is itself a kind of content. McLuhan wrote in the 60s, as the television form was revolutionizing the world. His contribution to communication theory was the idea that technological change (with particular respect to media and communication technologies) shapes humanity in deep and significant ways: new media forms “work us over completely,” he wrote. “They are so pervasive in their personal, political, economic, aesthetic, psychological, moral, ethical, and social consequences that they leave no part of us untouched, unaffected, unaltered… Any understanding of social and cultural change is impossible without a knowledge of the way media work as environments.”

McLuhan divided history into eras and epochs of media transformation: the tribal era (oral, tribal culture, face-to-face communication), the literate era (invention of alphabets and written language, emphasis on the visual), the print era (printing press, birth of mass communication, visual emphasis), and the electronic era (computers, telegraph, emphasis on touch and hearing). Whether or not one agrees completely with McLuhan's somewhat suspicious lineage here, I think it is definitely true that technology affects how humans relate to each other and the world.

And I wonder if we are not moving into some new "era" that is better suited to our digitized, attention-challenged generation? A sort of bite-sized, schizophrenic, decontextualized-yet-hyperlinked period of human civilization.

III.

Television was probably the beginning of this “snack” era. Its form, as noted by McLuhan’s heir Neil Postman, was one of decontextualized soundbites: segments of entertainment juxtaposed with advertisements, “news,” sports, and other diverse occurrences. The form of television news, for example, was one of total and utter schizophrenia: “this happened… and then this… now weather, now sports, now BREAKING NEWS, now pop culture fluff…” This very form (emphasizing ands rather than whys), argued Postman, has conditioned the human mind to be less capable of understanding context and perspective. In the stream of broadcast images and commercials, there is very little recourse to depth or understanding.

And how much more so is this the case with the Internet! Here we are freed from all overarching narratives, causal linkage, or contextual coherence. We can (and do) hop from CNN.com to TMZ.com, from Bible.com to ESPN, picking up bits and pieces and snippets of whatever our fingers feel led to click on. Since I'm on my computer now I might as well mimic this in my writing, since writing as a form is changing as well…

Here I am on CNN.com, surveying the "news" on Sunday, May 18, 2008. Oh, there is a positive review from Cannes of Indiana Jones! Richard Corliss liked it, saying that it "delivers smart, robust, familiar entertainment." This eases my mind a bit… though I have heard that other Cannes audience members were not quite as wowed as Corliss was… Speaking of Cannes, I just saw a picture of Brad Pitt and Angelina Jolie from the Kung Fu Panda premiere. Looking very, very good. I hope Brad Pitt isn't messing up Terrence Malick's new film Tree of Life, which is filming in Texas right now. Evidently Angelina is pregnant with twins, which probably means some unfortunate little Burmese orphan won't get adopted this year. Speaking of Burma, I'm now clicking on the latest CNN headline about the cyclone in Myanmar… Evidently the UN is now saying over 100,000 might be dead. Meanwhile, China just started its three days of mourning for the earthquake victims, which now number 32,477. And if we're talking numbers, I now see that Prince Caspian raked in $56.6 million to be the top film at the box office this weekend. That's a lot more than Speed Racer made last week, but a lot less than Iron Man made in its first weekend. And the death toll from the earthquake in China is a lot more than the toll of those killed in tornadoes last weekend in America (24 I seem to recall), but a lot less than the 2004 tsunami disaster (more than 225,000 killed).

IV.

Unfortunately, as easy and accessible as the "news" and "numbers" are for all these things, there is precious little in the way of making sense of it all… Indeed, the very fact that we juxtapose things like Cannes glamour and human misery (earthquakes, cyclones) as if they were equally crucial bits of information makes it difficult to think of anything in terms of meaning or context. But perhaps we don't want to. Perhaps the world is just too crazy, too horribly gone-wrong to reckon with on any level deeper than the snack-sized soundbite. To come to terms with the scope of the Asian disasters means to think about deeper things like God, death, evil, and nature, which gets quite broad and philosophical in a jiffy. Taking time to make connections is a dying art, just as reading is… and writing, and newspapers, and printed anything… Basically the "long form" and all that that entails is falling by the wayside in our easy-pieces-based culture. Thus, I should probably end this rather long blog post, and I should probably end somewhere near the start, as if clicking back on my browser about fifty times.

Prince Caspian the book and Prince Caspian the movie are quite different things, representing different times and cultures and mindsets. It’s true that the latter loses some of the magic and meaning of the former, but so it is with life these days. We’ve supplanted meaning with simulacra and snack-sized spectacle. Even though we probably need it more than ever, “the deeper magic” is ever more abstract and inaccessible to a world so desperate for instant and easy gratification.

Fearsome Facebook

Without a doubt, Facebook was this year's YouTube. That is: a novel Internet fad that in one year hit an explosive tipping point of ubiquity. Over the past few weeks it's really become clear to me just how obsessive the Facebook craze is. Most of the people I know (between the ages of 10 and 30) are constantly on Facebook: checking their feed dozens of times a day, updating statuses, poking various people, seeking out new "friends," and stalking people they want more information about.

This last use of Facebook—“stalking”—is perhaps the most disturbing function of the whole thing. Whenever anyone meets someone else or wants some more information about them (as a friend, significant other, employee, etc), Facebook is the perfect place to do a little surveillance. Who are this person’s friends? What activities and groups are they a part of? What kinds of pictures are they tagged in?

The problems with this sort of information-gathering on Facebook are obvious. The “person” represented on a Facebook profile is a highly constructed, constantly tweaked avatar. It is a painstakingly composed projection of a person, but not a person. But perhaps more problematic is the way that Facebook has come to stand in for traditional forms of interpersonal relationships. “Peeling away the onion” in relationship formation was formerly a delicate, imperfect, rocky road of physically present social penetration. With Facebook it has become an impersonal, easily-confused, mechanistic process with efficiency—not humanity—at its heart.

I recently took my Facebook disdain public in a year-end article for Relevant magazine in which I characterized Facebook as part of a disastrous digital-era trend toward meaningless, mechanistic communication. This is what I wrote:

At a time when our culture needs a primer on meaningful communication, new tech wonders and digital “advances” increasingly led us in the opposite direction (what I call “superfluous communication”) this year. Things like iPhones and the latest hands-free gadgets have created millions of meaningless “let me talk to someone via this fun device because I CAN” conversations worldwide in 2008. Add to this the irrevocable damage Facebook and its various counterparts have done to meaningful communication, and 2007 was a banner year for the tech-driven trivialization of communication.

Predictably, reactions in the comments section singled out my criticisms of Facebook (and yes, “irrevocable damage” is a harsh phrase). “Bryan” had this to say:

Facebook is not the problem. People who spend so much time on it are the ones with the problem. It's the people with the problem. Your argument is like someone claiming that they can't stop looking at porn due to the fact that it exists on the net. Don't wanna spend so much time on there? Then don't spend so much time on there.

What Bryan does here is frame my position on Facebook in terms of technological determinism—the notion that the nature of technology determines the human uses of it. I am not necessarily trying to argue this. What I am suggesting is that the consequences of Facebook—whether the fault is with the technology or the user—are serious… And seriously changing our very basic understandings of how humans relate to and connect with one another. Rather than blast Facebook as inherently evil, I simply want to raise a warning flag—because with every technological or cultural trend that rises and spreads so quickly, some caution is certainly warranted.

Mii, Myself, and My Online Identity

Recently I’ve been fascinated with the notion of the avatar—whether our Facebook picture or our IM Buddy icon or our actual videogame avatars. I’ve been playing on the Nintendo Wii and having way too much fun creating Miis… little cartoonish avatars that I can make from scratch and then play in games. But it’s a pretty interesting thing to consider on a deeper level—the attraction and increased ubiquity of avatars in a digital age.

In his essay, “Hyperidentities: Postmodern Identity Patterns in Massively Multiplayer Online Role-Playing Games,” Miroslaw Filiciak argues that “on the Internet … we have full control over our own image—other people see us in the way we want to be seen.”

My question is this: To what extent are these avatars or online identities really “identities,” insofar as we recognize them as being in some way “us”? Do we see them as extensions of ourselves, or substitutes, or “one of many” variant, circumstantial identities? Do we empathize with our avatar as a function of being its creator and controller? Or as a result of its being our digital likeness and online persona?

“Identity” as an idea is complicated enough, but “postmodern identity” is another ball game entirely. Filiciak attempts to grasp the postmodern identity in his essay, citing people like Jean Baudrillard (identity is the “label of existence”), Michel Foucault (“self” is only a temporary construct), and Zygmunt Bauman, “the leading sociologist of postmodernism,” who argues that the postmodern identity “is not quite definite, its final form is never reached, and it can be manipulated.” This latter notion seems to be the crux of the matter—the idea that identity in this networked world is not fixed but fluid, ever and often malleable in our multitudinous postmodern existence.

Filiciak cites social psychologist Kenneth Gergen, who writes about how we exist "in the state of continuous construction and deconstruction." While this is not a new idea (sociologist Erving Goffman argued, in his 1959 classic, The Presentation of Self in Everyday Life, that the presentation of self is a daily ongoing process of negotiation and information management, with the individual constantly trying to "perform" the image of themselves that they want others to see), it is nonetheless an idea which does seem ever more appropriate in this DIY, user-generated, "massively multiplayer" society.

The type of "self" we construct and deconstruct in everyday life, however, seems to me to be a subtly different thing from what we can (and often do) construct in videogame avatar creation. A primary attraction of avatar creation, I think, is that it allows us to create "selves" that are both our creation and our plaything, something that can be as near or far from us as we want. We can and often do construct "identities" that are far from who we are or would ever want to be in the "real" world. Why do we do this? Because we can. Where else can I create a detailed character—complete with eyes, nose, hair, lips, eyebrows, all proportioned to my curious heart's content—whom I have not only authored but can now control and "act as" in a simulated, interactive space?

I find it interesting that when I began to create my first Mii, my initial instinct was not to carefully craft a Mii in my image (I did do this later on, and found it rather boring), but rather to play around with the tools and manipulations at my disposal and create the weirdest looking, side-ponytail-wearing freak I could come up with. Given the opportunity to create any type of Mii, I had no inclination—and I never have, really—to create an avatar that is remotely like who I am (or who I think I am). Thus it strikes me as questionable whether avatars are primarily something that we are to empathize with, at least in the visual sense.

In a sense, my attraction to an avatar is not so much the ability to portray and empathize with a digital alternate to myself as it is an empathy or affinity towards the ability to create and control this being. To create the avatar is—to me—the most enjoyable part of having one. Of all the things I've played on the Wii (sports, Paper Mario), Mii creating was definitely my favorite part. There is something very attractive about the idea of formulating a person from scratch—assembling features in bizarre and unnatural ways with no penalty for cruelty or ugliness. As Filiciak writes of the avatar creation of MMORPGs:

There is no need for strict diets, exhausting exercise programs, or cosmetic surgeries—a dozen or so mouse clicks is enough to adapt one’s ‘self’ to expectations. Thus, we have an opportunity to painlessly manipulate our identity, to create situations that we could never experience in the real world because of social, sex-, or race-related restrictions.

Indeed, if we view avatars as a sort of extension of our identity, then here is one case in which we truly can be anything we want to be.

We can also do anything we want to do, or at least things that are taboo or unthinkable in our real lives (play Grand Theft Auto for a good example of this). Here again we see that our empathy with the avatar occurs not just in what the avatar is, but perhaps more in what the avatar does, or is able to do at our command. Filiciak believes the freedom we have with the avatar “minimizes the control that social institutions wield over human beings,” and results not in chaos but liberation: “avatars are not an escape from our ‘self,’ they are, rather a longed-for chance of expressing ourselves beyond physical limitations … a postmodern dream being materialized.”

It's an interesting notion, to be sure: the vaguely Freudian idea that who we really are (our true identity) can be realized only when the many limitations of everyday life are removed (as in a game). Gonzalo Frasca, in his essay "Simulation versus Narrative," makes a similar point about how videogames allow for a place where "change is possible"—a form of entertainment providing "a subversive way of contesting the inalterability of our lives."

I think that the ability to transgress the limitations and inalterability of our real lives is an especially important attraction of the avatar. But within this ability of the avatar (to be and do things that are beyond the scope of our real lives) lies, I think, the very limit of our identification with it. It seems that what draws us to the avatar is the very thing which ultimately alienates us from it. If true empathy between the user and his avatar is possible, he must first get past the fact that this digital incarnation of "self" can do and be (and is really meant to do and be) something substantively different from what we are—unbound by the many limitations (physical, emotional, cultural, etc) which mark our existence.

The pleasure we derive from our relation to an avatar, then, seems to be less about empathy or identification than about creative control and interactivity. With my Mii creations, for example, my enjoyment came from the ability to create in any way I wanted—to play God in some small way. There was little in the Miis that I could relate to my own identity; little I could really empathize with. But I still enjoyed creating, changing, and controlling them. This reflects a tension that is, in my mind, central to the videogame experience. It is the tension between the "anything is possible" freedom of virtual worlds and the user's desire for empathy. The former may produce higher levels of fun and gameplay, but the latter is a fundamental human longing. And I believe the two are negatively correlated: as "anything is possible" increases, the opportunity for empathy decreases, simply because limitation—as opposed to unbounded freedom—is what we know. It's our human frame of reference.

The Commodification of Experience

In Wes Anderson's new film, The Darjeeling Limited, three brothers from an aristocratic family meet in India to go on a "spiritual journey." Loaded down with designer luggage, laminated trip itineraries, and a hired staffer ("Brendan") who has albinism, the dysfunctional trio embarks on a train ride through the richly spiritual terrain of India.

It is clear from the outset that the brothers—or at least Francis (Owen Wilson)—are here to experience something: something deep, profound, and hopefully life changing. And they are oh-so methodical about maximizing the “spirituality” of it all. Francis stuffs every spare moment of their schedule with a temple visit or some sort of feather prayer ritual. It might be odd and a little offensive that these three rich white guys—decked out in fitted flannel suits by Marc Jacobs—are prancing around such squalor, making light (by juxtaposition) of the decidedly exotic culture that surrounds them… But this is what makes the film funny. It’s a comedy.

But it also rings very true. These guys are swimming in things (designer sunglasses, clothes, trinkets, keychains, etc), but what they really want is to feel. And because acquiring commodities is in their DNA, they assume that these types of immaterial experiences can be collected too. Thus, their exotic pilgrimage to India.

The film made me think a lot about my own life, and how I increasingly feel drawn to experiences rather than things. It’s all about seeking those magic moments—whether on a vacation abroad or on a sunset walk on the beach—when we feel something more. And of course, it helps to have an appropriate song pumping through your iPod to fit whatever mood or genre of life you are living at that moment. In Darjeeling, the “iPod as soundtrack to a nicely enacted existential episode” is given new meaning.

In his book The Age of Access, Jeremy Rifkin applies this all very neatly to economic theory, pointing out that our post-industrial society is moving away from the physical production of material goods toward the harnessing of lived experience as a primary economic value. For Rifkin, the challenge facing capitalism is that there is nothing left to buy, so consumers are "casting about for new lived experiences, just as their bourgeois parents and grandparents were continually in search of making new acquisitions." Rifkin believes that the "new self" is less concerned with having "good character" or "personality" than with being a creative performer whose personal life is an unfolding drama built around accumulated episodes and experiences that fit into a larger narrative. Rifkin keenly articulates how this orientation toward theatricalized existence creates a new economic frontier:

There are millions of personal dramas that need to be scripted and acted out. Each represents a lifelong market with vast commercial potential… For the thespian men and women of the new era, purchasing continuous access to the scripts, stages, other actors, and audiences provided by the commercial sphere will be critical to the nourishing of their multiple personas.

And so as we (the spoiled, affluent westerners among us, at least) become more and more dissatisfied with all the physical goods we’ve amassed, and begin to seek lived experiences and dramatic interaction as a new life pursuit, we must not delude ourselves that this is some higher goal, untainted by commercialism.

On the contrary, the economy is shifting to be ready for the "new selves" of this ever more de-physicalized era. The question is: are we prepared to allow our experiences to become commodities? Are we okay with the fact that our "to-buy" wishlists are now being replaced by "to-do" lists of equal or greater value to the marketplace? What happens when every moment of our lives becomes just another commodity—something we collect and amass to fill the showcase mantels of our memories?

Trivial Pursuit

When you get on the Internet, what are you there for? To find some piece of information, perhaps: movie times, train schedules, store hours, etc? Or maybe you are there because it is habit: every day when you wake up, and sporadically throughout the day, you must go through your cycle of websites (for me it is CNN.com, then my three primary email accounts, then Relevantmagazine.com, then occasionally I’ll make a stop at my fourth email account). Or perhaps you go online simply because there is nothing else to do—and there is EVERYTHING to do on the web.

It is this last motivation that I’m the most interested in. The Internet, beyond being the most useful information-getting resource ever to be at mankind’s fingertips, is also the largest and most wonderful playground we’ve ever had. You can go anywhere, watch or listen to anything, buy whatever your heart desires, and do scores of other things that may or may not be acceptable in the “real” world.

With all of this at our disposal, it’s no wonder so many of us go online when we have a spare moment. It’s no wonder we can easily drop 3 hours online when we only intended to check our email. It’s like going to a massive and wonderful amusement park with the ostensible motivation of trying out the new rollercoaster. Of COURSE you’re going to stay all day and ride everything you can, while you’re there!

So it is with the Internet. It’s built on links and ads and things to push and pull us in new, alluring directions. It’s all about movement, dissatisfaction, keeping you wanting more.

Because there is EVERYTHING on the web, it is almost impossible—if you don't have a utilitarian reason to be there—to choose where to go. Thus we rely on links and pop-ups and "if you liked this, you'll like…" recommendations to guide us along the way. As a result, a typical session online is a hyperscattered, nonsensical web of aimless wandering, dead-ends, backtracking, and rabbit trails. But I think we like it this way. How nice to not be looking for something, but to be finding wonders and pleasures by the boatload, so easily! The search bar is our pilgrim guide online. Give it any clue as to what you desire, and it'll lead you the rest of the way. Hit the Google button and get your surf on.

But what is it we’re pursuing? The vast terrain of the web is an amusement park in which information is the diversion. Collecting more useless knowledge and facts is the name of the game. Whether we are there to check sports stats, see what we can download for free, or watch the latest goofy clip on YouTube, it’s all a passing trifle. It draws our attention for a second, but only until the next interesting link pops up.

For all the great and valuable things the Internet provides us, I wonder if it has done irreparable damage to our ability to think critically—to really mull over questions (that don't have easily Googled answers), to seek out the big questions, and to not be at the mercy of a marketplace that prefers to ask and answer the questions for us. We should live our lives in a state of search, I think, but the Internet all too often makes "searching" a trivial pursuit.