Friday, May 25, 2012

In 8 Years, Facebook Changed All We Do Online


In the storm that is Facebook's IPO, we pause to take note of how the social network has transformed the way we live now.

Is Facebook worth the $100 billion or so its pending IPO suggests it is? Goodness only knows. But one thing we can all be certain about is how radically the social network has changed people's behavior and expectations online in the eight short years since it was nary more than a twinkle in the eye of its baby-faced founder(s). Those changes have facilitated the formation of entirely new industries and dramatically shifted the way brands market themselves online.

There are things we do online today that we take so much for granted we forget some of them didn't exist even as recently as two years ago. Others were so radical they inspired outright rebellions when they were first introduced. And yet all of these things are not only commonplace today, they are the presumed paradigms. To operate any differently would seem downright odd.

If past is prologue, we're confident Facebook will continue to innovate in the years to come, thereby continuing to transform how individuals and businesses interact online and creating a whole new set of economic opportunities. Whether that translates into enough revenue to merit a $38 share price, we'll leave up to the number-crunchers on Wall Street. For now, however, we want to pause in this brief respite before the Nasdaq frenzy slated for tomorrow to pay homage to a few of Facebook's game-changing innovations.

 

The Death Of Email

I was in London last winter, and while walking through a train station, I overheard two people talking about coordinating with a third person. "I'll reach out to him on Facebook," one of them said. When I was in Afghanistan last year, at the rec center of every single military base I was on, anywhere from half to two-thirds of troops were on Facebook. When you only have access to computers for half an hour at a time, Facebook becomes the most efficient way to let friends and family know what you're up to and catch up with their news. When I found out that an old boyfriend had had a kid but hadn't emailed me the happy news, I was momentarily upset until a mutual friend told me, "I think he just posted it to Facebook." The social network has become one of the primary ways that people communicate today. Certainly it hasn't supplanted email altogether, but, globally, it has become the go-to channel for a slew of use cases that used to be managed by email or phone--or simply not communicated at all. So much so that it's spawned an entirely new industry of social networks-for-business, like Yammer, Chatter, Podio, and Edmodo.


 

Sharing

In the good old days, if you wanted to let friends and family know about something cool you'd found on the web, you'd copy a link to the website into an email and send it off to your nearest and dearest. What a difference two years make. Yes, it's barely two years since Facebook made it possible to slap the Like button onto content on external websites. That one button has expedited the sharing of everything from news stories to videos to photos to fundraising appeals, making Facebook the leading referrer of traffic to many content sites--and probably helping get innumerable Kickstarter campaigns funded.

 

Single Sign-On

Remember the days when you had to create a unique user name and password for every site you visited on the Internet? Then remember how freaky it was when, all of a sudden, sites started inviting you to sign in with your Facebook credentials, and how we all worried about who would suddenly know what about us? Today we take this system (which has since been adopted by others, like Twitter and Google) for granted--and maybe even get a little cranky when we have to set up independent log-in credentials at sites that don't integrate with Facebook. This system (Facebook Connect) hasn't just made our lives more convenient; it has helped accelerate a whole new industry of apps and websites that get up and running faster because they don't have to build their own identity-management systems--they simply plug in to Facebook's, the same way they get up and running faster by using Amazon Web Services rather than building out their own server infrastructure.
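
(An aside for the technically curious: here is a minimal sketch, in Python, of the kind of delegated-login flow that Facebook Connect popularized--the site never stores a password; it redirects the visitor to Facebook and gets back a token it can use to ask who that visitor is. The endpoint paths, parameter names, and placeholder app credentials below are illustrative assumptions, not quoted from Facebook's documentation.)

```python
# Minimal sketch of a "sign in with Facebook"-style delegated login
# (an OAuth 2.0 authorization-code flow). Endpoint URLs, parameter names,
# and the placeholder credentials are illustrative assumptions.
import requests
from urllib.parse import urlencode

APP_ID = "YOUR_APP_ID"          # hypothetical credentials issued when the app is registered
APP_SECRET = "YOUR_APP_SECRET"
REDIRECT_URI = "https://example.com/fb-callback"

def login_url():
    # Step 1: send the visitor to the identity provider to approve the app.
    params = {"client_id": APP_ID, "redirect_uri": REDIRECT_URI, "scope": "email"}
    return "https://www.facebook.com/dialog/oauth?" + urlencode(params)

def exchange_code_for_token(code):
    # Step 2: the provider redirects back with a one-time code; trade it for an
    # access token. (The response format has varied over the years, so the raw
    # body is returned rather than guessing at its exact shape.)
    resp = requests.get(
        "https://graph.facebook.com/oauth/access_token",
        params={
            "client_id": APP_ID,
            "client_secret": APP_SECRET,
            "redirect_uri": REDIRECT_URI,
            "code": code,
        },
    )
    resp.raise_for_status()
    return resp.text

def fetch_profile(access_token):
    # Step 3: use the token to ask who the visitor is, instead of maintaining
    # your own username-and-password database.
    resp = requests.get("https://graph.facebook.com/me",
                        params={"access_token": access_token})
    resp.raise_for_status()
    return resp.json()   # e.g. {"id": "...", "name": "..."}
```

The division of labor is the whole point: the third-party site handles only the redirect and the token, and Facebook handles identity.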

 

Personalized Ads

Raise your hand if you've had this experience recently: You're watching TV (probably online), and the ads come on. You notice that they're for things you have no interest in, and you actually get a little ticked off. After all, all these sites are now supposed to know so much about you. If that's the case, you grumble, then why are you being shown an ad for a minivan, or a Disney vacation, or any number of products and services you'd never in a million years think of using? Thank the social network for that. It now gives advertisers unprecedented specificity in who they want to reach. That's why, for example, Airbnb will pop up in my right rail when Oracle OpenWorld is in town, asking if maybe I'd like to rent a room to a conventioneer. To which I respond: "You know, that's a pretty good idea…." Suddenly the ads are interesting again.

 

Facebook Pages As Company Websites

Try this: Open up a consumer magazine, like a cooking magazine, for example. Flip through the ads, and make a note of how many list a Facebook URL as their web address, rather than a company website. Remember back when producers of packaged foods or house cleaning products tried to get you to go to their websites? No more. More often than not, they'll send you straight to their Facebook page. The social network has created powerful tools for brands to build excitement (and evangelism) among consumers, and companies are choosing to use those pages as their primary home on the web. Even GM, which provoked a stir earlier this week when it was reported the automaker was killing its $10 million Facebook advertising budget, said it would nevertheless continue to invest in its brand pages--to the tune of $30 million, no less--because, the company said, "it continues to be a very effective tool for engaging with our customers."

 

Searching Gives Way to Discovering

Back in the late '90s, with the arrival of sites like Amazon and Google, commentators bemoaned the loss of serendipity. The web was now a place where you had to know what you were looking for in order to find anything. No longer would shoppers, and others, have the delightful experience of browsing, as they did in real-world stores or libraries, and tripping across something splendid but thoroughly unexpected. The social network is helping shift the balance back toward discovery. It's increasingly the place, for example, where people discover the news, via links friends share. And it's making discovery possible on other sites by giving them tools that let visitors filter content through their Facebook friends: Yahoo, for example, integrated with Facebook to let you see what your friends are reading on its news sites, and the design store Fab lets you browse a feed of items your friends are buying and favoriting. The result is that the web is increasingly a place for serendipity, facilitated by Facebook and your friends.





src:~http://www.fastcompany.com/1837657/facebook-innovation-how-the-social-network-changed-everything-you-do-in-8-short-years

Saturday, May 19, 2012

Facebook’s I.P.O.: The Party Fizzles



No prizes for guessing what the stock of the day is on CNBC. On Squawk Box, the daily breakfast show, Andrew Sorkin, the Times columnist, founder of Dealbook, best-selling author, and, in his latest incarnation, would-be Matt Lauer, said he was going to a bar to watch the opening trade at 11 A.M. “We’re gonna have Red Bull and vodka,” he joked. Henry Blodget—yes, that Henry Blodget—put on his serious face and said, “Facebook is an extraordinary success story for America.”

Joe Kernan, who was also around in the late nineties, refused to put on a Mark Zuckerberg-style hoodie, which somebody had thrown onto the set. Good for him, you may say: refusing to partake in this Barnum & Bailey show. Not quite. Kernan’s hesitancy was based on a dislike of putting on an item that somebody else had worn, he explained.

Thank goodness Jim Cramer—yes, that Jim Cramer—popped up to inject a bit of skepticism and sanity. “I am just concerned that the public will get burned again,” he said shortly before Zuck himself, live from the hideous corporate campus in Menlo Park that serves as Facebook’s HQ, rang the bell to start the day’s trading on the Nasdaq. The great man-boy was wearing his hoodie, of course, and it looked like he’d had his hair cut for the occasion, or perhaps for his twenty-eighth birthday, which was earlier this week. He didn’t say anything, but he received a nice bear hug anyway from Sheryl Sandberg, his C.O.O. and surrogate mother. From below the hastily erected stage, a scrum of geeks who were about to get rich gave him a loud cheer.

That was about that. It looked a bit like an encore by a second-tier college rock band at a sparsely attended music festival. To get across the momentousness of the occasion to their viewers, the CNBC anchors were forced to rely on words rather than pictures. “We’re witnessing a lot of American wealth getting created,” said Carl Campanile, a CNBC regular, who was on the ground in California. “It opens a new chapter, an exciting chapter, for business in this country,” offered David Faber, who was trying to get into the spirit of things from Englewood Cliffs. It was left to Melissa Lee, Faber’s co-host, to summon up a sentence fully commensurate with the occasion. “Mark Zuckerberg has accomplished the substance of the American dream,” she intoned.

In the scheme of things, it was all pretty harmless stuff, and at least some of it had the merit of being true. To those who say America is in inexorable decline, the existence of Silicon Valley, and the college-dorm-to-corporate-park success stories it generates on a fairly regular basis, is the great counter-argument. You can’t replicate Facebook in a Guangdong sweatshop. Where is the French Zuckerberg, or even the French Sandberg? And why even now, despite the rise of Mumbai, Silicon Fen, and other technology clusters, do so many innovative internet companies—Zynga, Pinterest, Instagram—start out or migrate to the Bay Area?
Sadly, none of this has any bearing on whether Facebook is worth a hundred billion dollars—roughly what it would be worth at the issue price of $38. Shortly before 11 A.M., the signs were that “FB” would open at about $45, sharply lower than many people had expected. At eleven, CNBC’s monitors were showing $42, just four dollars above the issue price—an indication that investors were balking at paying a big premium over the offer price. Fifteen minutes later, the stock still hadn’t opened. “I’m getting nervous,” Jim Cramer stammered. He wasn’t the only one. At 11:25, the Wall Street Journal reported that many traders were trying to cancel their electronic orders and having trouble doing it. “Did Nasdaq break?” Henry Blodget, who had gone back to his day job, tweeted. No, it didn’t.

At 11:30, the stock opened at $42, jumped up to $43, fell back to $42—and kept falling, back to $40. “For market sentiment, this is not going to be positive,” said Simon Hobbs, the network’s resident Brit. Melissa Lee was equally crestfallen: “Forty minutes ago, I don’t think anybody thought $40,” she said. David Faber had been working the phones, and he reported that his sources had told him the stock might well fall below the issue price of $38, which would be a big embarrassment to the banks underwriting the deal, led by Morgan Stanley. “The big story is that Facebook, the social network, is now a public company,” he said. “The smaller story is that after five minutes, it’s only up six per cent.”

At 11:50, the stock hit the issue price of $38, prompting the underwriters to enter the market and prevent it falling any further. On CNBC, at least, the Facebook party was over. Out in Silicon Valley, things looked quite a bit different. We weren’t back in 1999, after all, but Facebook was, indeed, a public company—one valued at more than a hundred billion dollars. Zuckerberg was worth nearly twenty billion; Sandberg more than a billion. Eduardo Saverin, the Facebook co-founder who gave up his U.S. citizenship to avoid paying taxes on his windfall, was worth $2.7 billion. A bunch of venture capitalists, including a Russian oligarch and an Irish rock star, had made out like bandits. One of them, Roger McNamee, Bono’s business partner, popped up on the screen to say, “I’m very, very bullish about the long run.”

The insiders had done very, very well. Now we’ll see how the public investors fare.


 

 src:~http://www.newyorker.com/online/blogs/johncassidy/2012/05/facebooks-ipo.html

Facebook: The Ultimate Dot-Com


Paternot and Krizelman, in 2001

History will record that Mark Zuckerberg wasn’t the first college student to have the idea of enabling people to set up Web pages and share stuff with their friends. Yesterday, my colleague Silvia Killingsworth wrote about the Winklevoss twins, two Harvard grads who famously accused Zuckerberg of stealing the idea for Facebook while he was working for them on their fledgling site ConnectU. Before the Winklevii, there were the folks behind MySpace and Friendster. And before them, way back in 1995, there were Todd Krizelman and Stephen Paternot, who launched TheGlobe.com from their dorm rooms at Cornell.

TheGlobe.com allowed people to create their personal space online, upload pictures, and set up what came to be known as blogs. By 1998, it had more than two million members, which was then considered impressive. It also had a business plan: sell advertising. On November 13, 1998, Bear Stearns issued 3.1 million shares in the company at nine dollars each to some of its clients—the lucky ones. When Bear’s traders tried to open the stock for trading, they found it difficult to establish a floor price. As I recalled in my 2002 book, “Dot.Con: The Greatest Story Ever Sold”:
Whatever price they indicated—$20, $30, $40, $50—was too low. CNBC reported that the first trade might be $70, but even this proved to be a conservative estimate. After a lengthy delay, the first trade crossed the ticker at $87—almost ten times the issue price. Even for an Internet stock, this was unheard of. Within an hour, the price had risen to $97.
TheGlobe.com’s I.P.O. marked the beginning of the dot-com bubble’s epic stage. By the time the bubble burst, in March and April, 2000, hundreds of online firms had issued stock, among them many clunkers like Pets.com, E-Stamp, and etoys.com (not to be confused with a later company that used the same name), but also many online companies that survived and eventually thrived, such as eBay, Amazon.com, and Priceline.com. The bursting of the bubble discredited the term “dot-com,” which was understandable but, in a way, unfortunate, because the term itself had come to be the expression of an attitude that saw in online communication and online commerce boundless possibilities. Facebook’s I.P.O. represents a return to that mindset. It’s the fulfillment of the dreams of the nineties—and a reminder of their potentially fatal attraction.
While the term “dot-com” disappeared, the idea survived. Before very long, it was rebranded as “Web 2.0”—a term popularized by Tim O’Reilly and John Battelle, who from 2004 onwards organized a series of conferences under this banner. Supposedly, what distinguished Web 2.0 from Web 1.0 was user control, and user collaboration, with the network serving as a “platform,” but that wasn’t really a new idea: Krizelman and Paternot had fastened upon it years earlier, as had the founders of GeoCities and other Web-hosting ventures.

What really got Web 2.0 going was the proliferation of broadband connections, the invention of top-notch search engines (Google), and the creation of idiot-proof tools for doing fun stuff online, such as sharing photos and videos, posting blogs, and creating mashups. By February, 2004, when Zuckerberg launched Facebook, the elements were in place for the Web to fulfill the hopes of the late nineties—or some of them, anyway. But if Zuckerberg was in the right place at the right time—nobody should underestimate the role that the “Harvard” brand played in Facebook’s initial growth—he seized the opportunity ruthlessly and brilliantly. Now, eight years later, he is about to become a billionaire many times over by selling (non-voting) shares in what is, in many ways, the ultimate dot-com.

Back in the late nineties, I used to read a lot of S-1s—official investment prospectuses produced by companies about to issue public shares for the first time. Delving into Facebook’s S-1, which it has amended repeatedly since February, when it put out an initial version, felt like old times. The numbers were different (by an order of magnitude) from those that the original dot-coms used to put out, but the basic story was the same one that had led to all those bad investments and broken dreams: a Web site expanding this fast, with this many eyeballs focussed upon it, has simply got to be worth a lot of money.

Certainly, Facebook’s growth has been astonishing. As of March 31st, some nine hundred million people—about one in eight of all the humans on the planet—used the site at least once a month. More than five hundred million people—about one in thirteen of the global population—used it daily. Every day, Facebook users upload about three hundred million photographs and generate about 3.2 billion “likes” and “comments.” People on Facebook have a hundred and twenty-five billion “friends.” For many of us, Facebook has become a part of daily life. Many use it to keep up with friends; some use it as a news service; I’m in the camp of those who utilize it mainly as a professional tool. (Once I put up this post, I will link to it on my page.)

Compared to the late nineties, there are some basic differences, of course. Unlike many of the original dot-coms, Facebook makes money—quite a lot, in fact. It sells advertising and also charges other firms that use the site to drum up business, such as the gaming company Zynga and the music service Spotify. In 2011, on revenues of $3.7 billion, Facebook generated a billion dollars in profit. In the three months to March 31st, it made another two hundred million dollars.

That’s reassuring, but does it justify a valuation of a hundred billion dollars? That’s what the company will be capitalized at if the underwriters, led by Morgan Stanley—another echo of the late nineties—price its stock at the upper end of the $34-$38 range they indicated on Tuesday. If the stock goes up when trading starts, and it almost certainly will, Facebook will be even more highly valued. While I don’t think Facebook’s stock will enjoy the sort of crazy leap that TheGlobe.com’s took, I wouldn’t be at all surprised to see it close over fifty dollars, which would value Facebook at more than a hundred and twenty-five billion dollars.

For such a figure to make sense, given the risks attached to the technology industry, you have to assume that, within a few years, Facebook will be making not a billion dollars a year in profit but five billion dollars, or ten billion dollars, or even more. Apple, the world’s most valuable company—its market cap passed six hundred billion dollars briefly last month, and is currently hovering at a little more than five hundred billion—generated more than twenty-five billion dollars in profits last year. Microsoft, which is valued at less than half of Apple, made more than twenty-three billion. Google, valued at about two hundred billion, made nearly ten billion.
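
(Put the figures above side by side and the implied earnings multiples make the point. This is a back-of-the-envelope sketch using only the rounded numbers quoted in this piece.)

```python
# Implied earnings multiples, using the round figures quoted above:
# market value and latest annual profit, both in billions of dollars.
companies = {
    "Facebook (at a $100 billion valuation)": (100, 1),
    "Apple": (500, 25),
    "Google": (200, 10),
}

for name, (value, profit) in companies.items():
    print(f"{name}: roughly {value / profit:.0f} times earnings")
```

Roughly twenty times earnings for Apple and Google against roughly a hundred times for Facebook: that is the gap the hoped-for profit growth has to close.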

If it is to compete with these giants, Facebook will need to find a much better way to monetize its vast user base. At the moment, it generates barely four dollars a year in revenues per user, primarily in the form of charging fees to advertisers. Maybe it can gin up more of these revenues, but there are still questions about the effectiveness of ads on social-networking sites. General Motors’ decision to pull its advertising from Facebook, which was announced yesterday, is hardly encouraging. Neither is the fact that Facebook still hasn’t properly figured out how to deliver ads to mobile users.
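
(That four-dollars-a-year figure is itself simple arithmetic on the numbers quoted above; the sketch below treats 2011 revenue and the March, 2012, user count as a matched pair, which is an approximation.)

```python
# Back-of-the-envelope revenue per user, from figures quoted in the article.
annual_revenue = 3.7e9          # Facebook's 2011 revenue, in dollars
monthly_active_users = 900e6    # roughly nine hundred million users, as of March 31, 2012

print(f"Annual revenue per user: ${annual_revenue / monthly_active_users:.2f}")  # about $4.11
```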

Simply relying on attracting more and more people to the site won’t do the trick. As the site’s audience approaches the saturation point in many advanced countries—more than sixty per cent in the U.S. and the U.K.; more than eighty-five per cent in Chile, Turkey, and Venezuela—its rate of expansion is inevitably slowing down. Between March, 2009, and March, 2010, the number of monthly active users rose a hundred and fifty-four per cent. Between March, 2011, and March, 2012, the growth rate was forty-one per cent. Quarterly figures confirm the slowdown. In the first quarter of 2010, the growth rate was 26.5 per cent. In the first quarter of this year, it was 8.9 per cent.

Another disturbing sign—and one very familiar to students of the dot-com bubble—is that Facebook’s costs are rising considerably faster than its revenues. Between the first quarter of 2011 and the first quarter of 2012, as it hired more engineers and sales people, and continued to invest in the site, its costs shot up ninety-seven per cent. Revenues rose by forty-five per cent. Consequently, Facebook’s profits in the three months to March were actually lower than they were a year earlier: two hundred and five million dollars compared to two hundred and thirty-three million.

None of this necessarily means that Facebook will be a bubble stock, or that it will meet the same fate as TheGlobe.com, which saw its market capitalization shrink to virtually nothing in 2001 before it closed down for good in 2008. Despite the recent slowdown in its growth, Facebook is an innovative, profitable company, which has established a unique and ubiquitous online presence that it may be able to exploit in ways that nobody, not even Zuckerberg, has yet dreamed of. I’d be willing to bet that in ten years’ time Facebook will still be around, and it will be a big player on the Web.

But how big? In Silicon Valley, many people view Facebook’s Web site, and its trove of user data, as the next key technology platform, something akin to Microsoft Windows and Apple iOS, which the company will leverage to create its own economic ecosystem—one that generates huge monopoly rents. Perhaps this will happen. For now, though, Facebook is basically an online media company, and there are some legitimate questions about its prospects. In purchasing its stock, as with buying the original dot-com stocks, investors will be laying out their cash primarily on the basis of hope and optimism rather than a clearly defined and firmly established business plan.

To me, at least, that has echoes of the past.






src:~http://www.newyorker.com/online/blogs/johncassidy/2012/05/facebook-the-ultimate-dotcom.html

 

 

Friday, May 18, 2012

Get Rich U.

Is Stanford University Too Close to Silicon Valley?


There are no walls between Stanford and Silicon Valley. Should there be?

Students at the Institute of Design at Stanford, or d.school, work this spring on an irrigation project for farmers in Burma. The work is part of the university’s focus on interdisciplinary education. Photograph by Aaron Huey.

 
Stanford University is so startlingly paradisial, so fragrant and sunny, it’s as if you could eat from the trees and live happily forever. Students ride their bikes through manicured quads, past blooming flowers and statues by Rodin, to buildings named for benefactors like Gates, Hewlett, and Packard. Everyone seems happy, though there is a well-known phenomenon called the “Stanford duck syndrome”: students seem cheerful, but all the while they are furiously paddling their legs to stay afloat. What they are generally paddling toward are careers of the sort that could get their names on those buildings. The campus has its jocks, stoners, and poets, but what it is famous for are budding entrepreneurs, engineers, and computer aces hoping to make their fortune in one crevasse or another of Silicon Valley. 

Innovation comes from myriad sources, including the bastions of East Coast learning, but Stanford has established itself as the intellectual nexus of the information economy. In early April, Facebook acquired the photo-sharing service Instagram, for a billion dollars; naturally, the co-founders of the two-year-old company are Stanford graduates in their late twenties. The initial investor was a Stanford alumnus.
The campus, in fact, seems designed to nurture such success. The founder of Sierra Ventures, Peter C. Wendell, has been teaching Entrepreneurship and Venture Capital part time at the business school for twenty-one years, and he invites sixteen venture capitalists to visit and work with his students. Eric Schmidt, the chairman of Google, joins him for a third of the classes, and Raymond Nasr, a prominent communications and public-relations executive in the Valley, attends them all. Scott Cook, who co-founded Intuit, drops by to talk to Wendell’s class. After class, faculty, students, and guests often pick up lattes at Starbucks or cafeteria snacks and make their way to outdoor tables.

On a sunny day in February, Evan Spiegel had an appointment with Wendell and Nasr to seek their advice. A lean mechanical-engineering senior from Los Angeles, in a cardigan, T-shirt, and jeans, Spiegel wanted to describe the mobile-phone application, called Snapchat, that he and a fraternity brother had designed. The idea came to him when a friend said, “I wish these photos I am sending this girl would disappear.” As Spiegel and his partner conceived it, the app would allow users to avoid making youthful indiscretions a matter of digital permanence. You could take pictures on a mobile device and share them, and after ten seconds the images would disappear.

Spiegel needed some business advice from campus mentors. He and his partner already had forty thousand users and were maxing out their credit cards. If they reached a million customers, the cost of their computer servers would exceed twenty thousand dollars per month. Spiegel told Wendell and Nasr that he needed investment money but feared going to a venture-capital firm, “because we don’t want to lose control of the company.” When Wendell asked if he’d like an introduction to the people at Twitter, Spiegel said that he was afraid that they might steal the idea. Wendell and Nasr suggested a meeting with Google’s venture-capital arm. Spiegel agreed, Nasr arranged it, and Spiegel and Google are now talking.
Spiegel knows that mentors like Wendell will play an important part in helping him to realize his dreams for the mobile app. “I had the opportunity to sit in Peter’s class as a sophomore,” Spiegel says. “I was sitting next to Eric Schmidt. I was sitting next to Chad Hurley, from YouTube. I would go to lunches after class and listen to these guys talk. I met Scott Cook, who’s been an incredible mentor.” His faculty adviser, David Kelley, the head of the school of design, put him in touch with prospective angel investors.

If the Ivy League was the breeding ground for the élites of the American Century, Stanford is the farm system for Silicon Valley. When looking for engineers, Schmidt said, Google starts at Stanford. Five per cent of Google employees are Stanford graduates. The president of Stanford, John L. Hennessy, is a director of Google; he is also a director of Cisco Systems and a successful former entrepreneur. Stanford’s Office of Technology Licensing has licensed eight thousand campus-inspired inventions, and has generated $1.3 billion in royalties for the university. Stanford’s public-relations arm proclaims that five thousand companies “trace their origins to Stanford ideas or to Stanford faculty and students.” They include Hewlett-Packard, Yahoo, Cisco Systems, Sun Microsystems, eBay, Netflix, Electronic Arts, Intuit, Fairchild Semiconductor, Agilent Technologies, Silicon Graphics, LinkedIn, and E*Trade.

John Doerr, a partner at the venture-capital firm Kleiner Perkins Caufield & Byers, which bankrolled such companies as Google and Amazon, regularly visits campus to scout for ideas. He describes Stanford as “the germplasm for innovation. I can’t imagine Silicon Valley without Stanford University.”

Leland Stanford was a Republican governor and senator in the late nineteenth century, who made a fortune from the Central Pacific and Southern Pacific railroads, which he had helped to found. Stout and bearded, he could be typecast, like Gould, Morgan, and Vanderbilt, as a robber baron. Without knowing it, this man of the industrial revolution spent part of his legacy establishing a center for what would become the Age of Innovation. After his only child, Leland, Jr., died, of typhoid fever, at fifteen, Stanford and his wife, Jane, bequeathed more than eight thousand acres of farmland, thirty-five miles south of San Francisco, to found a university in their son’s name. They hired Frederick Law Olmsted, who designed Central Park, to create an open campus with no walls, vast gardens and thousands of palm and Coast Live Oak trees, and California mission-inspired sandstone buildings with red-tiled roofs. Today, the campus extends from Palo Alto to Woodside and Portola Valley, spanning two counties, three Zip Codes, and six government jurisdictions.
Stanford University opened its doors in 1891. Jane and Leland Stanford said in their founding grant that the university, rather than becoming an ivory tower, would “qualify its students for personal success, and direct usefulness in life.” From its early days, engineers and scientists attracted government and corporate research funds as well as venture capital for start-ups, first for innovations in radio and broadcast media, then for advances in electronics, microprocessing, medicine, and digital technology. One of the first big tech companies in Silicon Valley—Federal Telegraph, which produced radios—was started by a young Stanford graduate in 1909. The university’s first president, David Starr Jordan, was an angel investor.

Frederick Terman, an engineer who joined the faculty in 1925, became the dean of the School of Engineering after the Second World War and the provost in 1955. He is often called “the father of Silicon Valley.” In the thirties, he encouraged two of his students, William Hewlett and David Packard, to construct in a garage a new line of audio oscillators that became the first product of the Hewlett-Packard Company.
Terman nurtured start-ups by creating the Stanford Industrial Park, which leased land to tech firms like Hewlett-Packard; today, the park is home to about a hundred and fifty companies. He encouraged his faculty to serve as paid consultants to corporations, as he did, to welcome tech companies on campus, and to persuade them to subsidize research and fellowships for Stanford’s brightest students.

William F. Miller, a physicist, was the last Stanford faculty member recruited by Terman, and he rose to become the university’s provost. Miller, who is now eighty-six and an emeritus professor at Stanford’s business school, traces the symbiotic relationship between Stanford and Silicon Valley to Stanford’s founding. “This was kind of the Wild West,” he said. “The gold rush was still on. Custer’s Last Stand was only nine years before. California had not been a state very long—roughly, thirty years. People who came here had to be pioneers. Pioneers had two qualities: one, they had to be adventurers, but they were also community builders. So the people who came here to build the university also intended to build the community, and that meant interacting with businesses and helping create businesses.”

President Hennessy believes that the entrepreneurial spirit is part of the university’s foundation, and he attributes this freedom partly to California’s relative lack of legacy industries or traditions that need to be protected, so “people are willing to try things.” At Stanford more than elsewhere, the university and business forge a borderless community in which making money is considered virtuous and where participants profess a sometimes inflated belief that their work is changing the world for the better. Faculty members commonly invest in start-ups launched by their students or colleagues. There are probably more faculty millionaires at Stanford than at any other university in the world. Hennessy earned six hundred and seventy-one thousand dollars in salary from Stanford last year, but he has made far more as a board member of and shareholder in Google and Cisco.

Very often, the wealth created by Stanford’s faculty and students flows back to the school. Hennessy is among the foremost fund-raisers in America. In his twelve years as president, Stanford’s endowment has grown to nearly seventeen billion dollars. In each of the past seven years, Stanford has raised more money than any other American university.

Like other élite schools, Stanford has become increasingly diverse. Caucasian students are now a minority on campus; roughly sixty per cent of undergraduates, and more than half of graduate students, are Asian, black, Hispanic, Native American, or from overseas; seventeen per cent of Stanford’s undergraduates are the first in their families to attend college. Half of Stanford’s undergraduates receive need-based financial aid: if their annual family income is below a hundred thousand dollars, tuition is free. “They are the locomotive kids, pulling their whole family behind them,” Tobias Wolff, a novelist who has taught at Stanford for nearly two decades, says.

But Stanford’s entrepreneurial culture has also turned it into a place where many faculty and students have a gold-rush mentality and where the distinction between faculty and student may blur as, together, they seek both invention and fortune. Corporate and government funding may warp research priorities. A quarter of all undergraduates and more than fifty per cent of graduate students are engineering majors. At Harvard, the figures are four and ten per cent; at Yale, they’re five and eight per cent. Some ask whether Stanford has struck the right balance between commerce and learning, between the acquisition of skills to make it and intellectual discovery for its own sake.

David Kennedy, a Pulitzer Prize-winning historian who has taught at Stanford for more than forty years, credits the university with helping needy students and spawning talent in engineering and business, but he worries that many students uncritically incorporate the excesses of Silicon Valley, and that there are not nearly enough students devoted to the liberal arts and to the idea of pure learning. “The entire Bay Area is enamored with these notions of innovation, creativity, entrepreneurship, mega-success,” he says. “It’s in the air we breathe out here. It’s an atmosphere that can be toxic to the mission of the university as a place of refuge, contemplation, and investigation for its own sake.”

In February, 2011, a dozen members of the Bay Area business community had dinner with President Obama at the home of the venture capitalist John Doerr. Steve Jobs, who was in the late stages of the illness that killed him eight months later, sat at a large rectangular table beside Obama; Mark Zuckerberg, of Facebook, sat on the other side. They were flanked by Silicon Valley corporate chiefs, from Google, Cisco, Oracle, Genentech, Twitter, Netflix, and Yahoo. The only non-business leader invited was Hennessy. His attendance was not a surprise. “John Hennessy is the godfather of Silicon Valley,” Marc Andreessen, a venture capitalist, who as an engineering student co-invented the first widely used Internet browser, says.

Hennessy is fifty-nine, six feet tall, and trim, with thinning gray hair and a square jaw. He talks fast and loud, one thought colliding with the next; he bubbles over with information and data points. His laugh is a sharp cackle. Hennessy grew up in Huntington, Long Island. His father was an aerospace engineer; his mother quit teaching to rear six children. As a child, he read straight through the sixteen-volume encyclopedia his parents gave him. He studied electrical engineering at Villanova University and went on to earn a doctorate in computer science at Stony Brook University. He married his high-school sweetheart, Andrea Berti, and, in 1977, became an assistant professor of electrical engineering at Stanford.

Hennessy’s academic work focussed on redesigning computer architecture, primarily through streamlining software that would make processors work more efficiently; the technology was called Reduced Instruction Set Computer (RISC). He co-wrote two textbooks that are still considered to be seminal in computer-science classes. He took a year’s sabbatical from Stanford in 1984 to co-found MIPS Computer Systems. In 1992, it was sold to Silicon Graphics for three hundred and thirty-three million dollars. MIPS technology has contributed to the miniaturization of electronics, making possible the chips that power everything from laptops and mobile phones to refrigerators and automobile dashboards. “RISC was foundational,” Andreessen says. “It was one of the maybe five or six things in the history of the industry that really matter.”
Hennessy returned to teaching at Stanford, and became a full professor in 1986. In 1996, he was elevated to dean of the School of Engineering. He had little time to teach, but he marvelled at the inventions of his graduate students. He describes the time, in the mid-nineties, when Jerry Yang and David Filo took him to visit their campus trailer, which was littered with pizza boxes and soda cans, to show off a directory of Web sites called Yahoo. He calls this an “aha moment,” because he realized that the Web was “going to change how everyone communicated.”

In 1998, Larry Page and Sergey Brin, who were graduate students, showed Hennessy their work on search software that they later called Google. He typed in the name Gerhard Casper, and instead of getting results for Casper the Friendly Ghost, as he did on AltaVista, up popped links to Gerhard Casper the president of Stanford. He was thrilled when members of the engineering faculty mentored Page and Brin and later became Google investors, consultants, and shareholders. Since Stanford owned the rights to Google’s search technology, he was also thrilled when, in 2005, the stock grants that Stanford had received in exchange for licensing the technology were sold for three hundred and thirty-six million dollars.

In 1999, after Condoleezza Rice stepped down as provost to become the chief foreign-policy adviser to the Republican Presidential candidate George W. Bush, Casper offered Hennessy the position of chief academic and financial officer of the university. Soon afterward, Hennessy induced a former electrical-engineering faculty colleague, James Clark, who had founded Silicon Graphics (which purchased MIPS), to give a hundred and fifty million dollars to create the James H. Clark Center for medical and scientific research. Less than a year later, Casper stepped down as president and Hennessy replaced him.

Hennessy joined Cisco’s corporate board in 2002, and Google’s in 2004. It is not uncommon for a university president to be on corporate boards. According to James Finkelstein, a professor at George Mason University’s School of Public Policy, a third of college presidents serve on the boards of one or more publicly traded companies. Hennessy says that his outside board work has made him a better president. “Both Google and Cisco face—and all companies in a high-tech space face—a problem that’s very similar to the ones universities face: how do they maintain a sense of innovation, of a willingness to do the new thing?” he says.

But Gerhard Casper worries that any president sitting on a board can pose a conflict of interest. Stanford was one of the first universities to agree to allow Google to digitize a third of its library—some three million books—at a time when publishers and the Authors Guild were suing the company for copyright infringement. Hennessy says that he did not participate in the decision and “never saw the agreement.” But shouldn’t the president of a university see an agreement that may violate copyright laws and that represents a historic clash between the university and the publishing industry? And shouldn’t he worry that those who made the decision might be eager to reach an agreement that would please him?

Debra Satz, the senior associate dean for Humanities and Arts at Stanford, who teaches ethics and political philosophy, is troubled that Hennessy is handcuffed by his industry ties. This subject has often been discussed by faculty members, she says: “My view is that you can’t forbid the activity. Good things come out of it. But it raises dangers.” Philippe Buc, a historian and a former tenured member of the Stanford faculty, says, “He should not be on the Google board. A leader doesn’t have to express what he wants. The staff will be led to pro-Google actions because it anticipates what he wants.”

Hennessy has also invested in such venture-capital firms as Kleiner Perkins, Sequoia Capital, and Foundation Capital—companies that have received investment funds from the university’s endowment board, on which Hennessy sits. In 2007, an article published in the Wall Street Journal—“THE GOLDEN TOUCH OF STANFORD’S PRESIDENT”—highlighted the cozy relationship between Hennessy and Silicon Valley firms. The Journal reported that during the previous five years he had earned forty-three million dollars; a portion of that sum came from investments in firms that also invest Stanford endowment monies. Hennessy flicks aside criticism of those investments, noting that he isn’t actively involved in managing the endowment and likening them to a mutual fund: “I’m a limited partner. I couldn’t even tell you what most of these investments were in.”

Perhaps because his position is so seemingly secure, and his assets so considerable, Hennessy rarely appears defensive. He knows that questions about conflicts of interest won’t define his legacy, and they seem less pressing when Stanford is thriving. Facebook’s purchase of Instagram made millions for, among others, Sequoia Capital—which means that it made money for Hennessy and for Stanford’s endowment, too.

Two decades ago, when the Stanford humor magazine, the Chaparral, did a spoof issue with the Harvard Lampoon, the Chappie, as it’s called, rearranged Harvard’s logo—“Ve Ri Tas”—to “Ve Ry Tan.” Sometimes the campus stars are athletes. This year, the quarterback Andrew Luck is the likely No. 1 pick in the N.F.L. draft. He stayed through his senior year to earn a degree in architectural design. “I sat next to him in a class once,” Ishan Nath, a senior economics major, says. “I didn’t talk to him, but I sneezed and he said, ‘Bless you.’ For the next month, like, ‘I got blessed by Andy Luck!’ ” But, at Stanford, star athletes don’t always have the status they do at other schools. When Tiger Woods was a student, in the mid-nineties, not everyone knew who he was. One classmate, Adam Seelig, now a poet and a theatre director, spotted Woods practicing in his hallway one night and returned to his own dorm to ask who “this total loser practicing putts at 11 P.M. on a Saturday night” was.

Thirty-four thousand high-school seniors applied for Stanford’s current freshman class, and only twenty-four hundred—seven per cent—were accepted. Part of the appeal, undoubtedly, is the school’s laid-back vibe. There are nearly as many bicycles on campus—thirteen thousand—as undergraduate and graduate students. Flip-flops are worn year-round.

“To me, it felt like a four-year camp,” Devin Banerjee, who graduated last year and is now a reporter for Bloomberg News, says. “We had so many camp counsellors. You never felt lost.” Michael Tubbs, a senior who was brought up by a single mother and whose father has been in prison for most of his son’s life, says that he could not have attended Stanford without a full financial-aid scholarship. He is an honors student, and marvels at how financial aid has produced a campus of diverse students who are unburdened by student debt—and who thus don’t have to spend the first five years of their career earning as much money as they can. After he graduates, Tubbs plans to return home to Stockton, California, to challenge an incumbent member of the city council this fall.

To listen to students who are presumably in the most anxious, rebellious period of their lives express such serenity is jarring. One afternoon, I met with some undergraduates at the CoHo coffee-house. They almost uniformly described an idyllic university life. Tenzin Seldon, a senior and a comparative-studies major from India, was one of five Rhodes Scholars chosen from Stanford this year. She said that Stanford is particularly welcoming to foreign students. Ishan Nath, who also won a Rhodes Scholarship, disagrees with those who say that Stanford is a utilitarian university: “There are plenty of opportunities for learning for the sake of learning here.” Kathleen Chaykowski, a junior, was a premed and an engineering major who switched to English, and last year was the editor-in-chief of the Stanford Daily. She spoke about the risk-taking that is integral to Silicon Valley. “My academic adviser said, ‘I want you to have a messy career at Stanford. I want to see you try things, to discover the parts of yourself that you didn’t know existed.’ ”

Each year, Hennessy visits four to five freshman dormitories to field questions. When he visited the Cedro dorm, on January 30th, the forty or so students gathered around him in the recreation room often asked the kinds of benign question posed to celebrities on TV shows: Did he miss computer science? Or they sometimes asked the questions of young careerists: To be successful, what should we do?

The students’ calm, however, belies the stress that they are under. “Looking around, most everyone looks incredibly productive, seems surrounded by friends, and ultimately appears to be fundamentally happy. This aura of good cheer is contagious,” the editorial board of the Stanford Daily wrote in early April, in an essay that described the Stanford duck syndrome in detail. “Yet this contagious happiness has its dark side: Feeling dejected or unhappy in a place like Stanford causes one to feel abnormal and out-of-place, so we may tend to internalize and brood over this lack of happiness instead of productively addressing the situation.”

In late 2010, Mayor Michael Bloomberg announced that New York wanted to replicate the success of Silicon Valley in the city’s Silicon Alley, and he called for a public competition among universities to build an élite graduate school of engineering and applied sciences on city-owned land. Seven universities submitted proposals for a campus on Roosevelt Island, and Stanford was widely viewed as the early front-runner.
Stanford’s proposal contained a cover letter from Hennessy that conveyed his sweeping ambition: “StanfordNYC has the potential to help catapult New York City into a leadership position in technology, to enhance its entrepreneurial endeavors and outcomes, diversify its economic base, enhance its talent pool and help our nation maintain its global lead in science and technology.” Stanford proposed spending an initial two hundred million dollars to build a campus housing two hundred faculty and more than two thousand graduate students. It pledged to raise $1.5 billion for the campus.

This was not to be a satellite campus. It would be solely an engineering and applied-science school. Hennessy proposed that each department base three-quarters of its faculty in Palo Alto and a quarter on Roosevelt Island. Nor was it to be solely a research facility. (Stanford has one at Peking University, in Beijing.) Faculty members across the country would share videoconference screens, and students in New York would be able to take online classes based in Palo Alto. Stanford’s chief fund-raiser, Martin Shell, who is the vice-president for development, says, “New York City could be the place we could begin to put into place a truly second campus. One hundred years from now, we could be a global university.”

Not everyone on Stanford’s campus shared Hennessy’s enthusiasm. Members of the humanities faculty were upset that Stanford proposed to create a second campus without including liberal-arts faculty or students. Casper, the former Stanford president, asked whether the Roosevelt Island project would “reinforce the cliché that we are science and engineering and biology driven and the arts and humanities are stepchildren.” According to Jeffrey Koseff, the director of Stanford’s Woods Institute for the Environment, there were “mixed feelings,” because of fears that resources would be drained from the Palo Alto campus. And there were additional questions: Would Stanford be able to recruit top faculty and students to New York when the technological heart of the country was in Silicon Valley? Could Stanford really reproduce in New York its “secret sauce,” a phrase that university officials use almost mystically to describe whatever it is that makes the school succeed as an entrepreneurial incubator?

Exactly what that sauce is provokes much speculation, but an essential ingredient is the attitude on campus that business is a partner to be embraced, not kept at arm’s length. The Stanford benefactor and former board chairman Burton McMurtry says, “When I first came here, the faculty did not look down its nose at industry, like most faculties.” Stanford’s proposal to New York, almost as a refrain, repeatedly referred to the “close ties between the industry and the university.”

People may remember Hennessy’s reign most for the expansion of Stanford into Silicon Valley. But his principal academic legacy may be the growth of what’s called “interdisciplinary education.” This is the philosophy now promoted at the various schools at Stanford—engineering, business, medicine, science, design—which encourages students from diverse majors to come together to solve real or abstract problems. The goal is to have them become what are called “T-shaped” students, who have depth in a particular field of study but also breadth across multiple disciplines. Stanford hopes that the students can also develop the social skills to collaborate with people outside their areas of expertise. “Ten years ago, ‘interdisciplinary’ was a code word for something soft,” Jeff Koseff says. “John changed that.”

Among the bolder initiatives to create T-students is the Institute of Design at Stanford, or the d.school, which was founded seven years ago and is housed in the school of engineering. Its founder and director is David Kelley, who, with a thick black mustache and black-framed eyeglasses, looks like Groucho Marx, without the cigar. His mission, he says, is to instill “empathy” in his students, to encourage them to see the human side of the challenges posed in class, and to provoke them to be creative. Stanford is not the only university to adopt this approach to learning—M.I.T., among others, does, too. But Kelley’s effort is widely believed to be the most audacious. His classes stress collaboration across disciplines and revolve around projects to advance social progress. The school concentrates on four areas: the developing world; sustainability; health and wellness; and K-12 education. The d.school space is open, with sliding doors and ubiquitous whiteboards and tables too small to accommodate laptops; Kelley doesn’t want students retreating into their in-boxes. There are very few lectures at the school, and students are graded, in part, on their collaborative skills and on evaluations by fellow-students.

Sarah Stein Greenberg, who is the managing director of the d.school, was a student and then a fellow. Her 2006 class project was to figure out an inexpensive way for farmers in Burma to extract water from the ground for irrigation. Greenberg and her team of students travelled to Burma, and devised a cheap and efficient treadle pump that looks like a Stairmaster, which the farmer steps on in order to extract water. A local nonprofit partner manufactured and sold twenty thousand pumps, costing thirty-seven dollars each. In his unpretentious, book-filled office, John Hennessy displays items that have been produced, at least in part, by Stanford students to assist developing countries, including a baby warmer for premature babies; the simple device’s cost is one per cent of an incubator’s.

In late January, a popular d.school class, Entrepreneurial Design for Extreme Affordability, taught by James M. Patell, a business-school professor, consisted of thirty-seven graduate and three undergraduate students from thirteen departments, including engineering, political science, business, medicine, biology, and education. It was early in the quarter, and Patell offered the students a choice of initial projects. One was to create a monitoring system to help the police locate lost children. Another was to design a bicycle-storage system.

David Janka, a teaching fellow, who walked about the class’s vast open space wearing tapered khakis and shoes without socks, invited the students to gather in groups around the white wooden tables to discuss how to address these challenges. Patell and Janka were joined by David Beach, a professor of mechanical engineering; Julian Gorodsky, a practicing therapist and the “team shrink” at the d.school; and Stuart Coulson, a retired venture capitalist who volunteers at the university up to fifty hours per week. “The kinds of project we put in front of our students don’t have right and wrong answers,” Greenberg says. “They have good, better, and really, really better.”

Justin Ferrell, who was attending Stanford on a one-year fellowship, on leave from his job as the digital-design director at the Washington Post, said that he was impressed by “the bias toward action” at the d.school. Newspapers have bureaucracy, committees, hierarchies, and few engineers, he said. At the Post, “diversity” was defined by ethnicity and race. At the d.school, diversity is defined by majors—by people who think different.

Multidisciplinary courses at Stanford worked for two earlier graduates, Kevin Systrom and Mike Krieger, the founders of Instagram. In 2005 and 2007, respectively, Systrom and Krieger were awarded Mayfield fellowships. (Only a dozen upperclassmen are chosen each year.) In an intense nine-month work-study program, fellows immerse themselves in the theoretical study of entrepreneurship, innovation, and leadership, and work during the summer in a Valley start-up. Tom Byers, an engineering professor, founded the program in 1996, and says that it aims to impart to fellows this message: “Anything is possible.” Byers has kept in touch with Systrom and Krieger and remembers them as “quiet and quite humble,” by which he means that they were outstanding human beings who could get others to follow them. They were, in short, T-students.
The most articulate critic of the way the university functions might be the man who used to run it. Gerhard Casper, who is a senior fellow at Stanford, is full of praise for Hennessy, and the two men clearly like each other. Nonetheless, it wasn’t hard to find a few daggers in a speech that Casper gave in May, 2010, in Jerusalem. The United States has “two types of college education that are in conflict with each other,” he said. One is “the classic liberal-arts model—four years of relative tranquility in which students are free to roam through disciplines, great thoughts, and great works with endless options and not much of a rationale.” The second is more utilitarian: “A college degree is expected to lead to a job, or at least to admission to a graduate or professional school.” The best colleges divide the first two years into introductory courses and the last two into the study of a major, all the while trying to expose students to “a broad range of disciplines and modes of thought.” Students, he declared, are not broadly educated, not sufficiently challenged to “search to know.” Instead, universities ask them to serve “the public, to work directly on solutions in a multidisciplinary way.” The danger, he went on, is “that academic researchers will not only embrace particular solutions but will fight for them in the political arena.” A university should keep to “its most fundamental purpose,” which is “the disinterested pursuit of truth.” Casper said that he worried that universities would be diverted from basic research by the lure of new development monies from “the marketplace,” and that they would shift to “ever greater emphasis on direct usefulness,” which might mean “even less funding of and attention to the arts and humanities.”

When I visited Casper in his office on campus this winter, I asked him if his critique applied to Stanford. “I am a little concerned that Stanford, along with its peers, is now justifying its existence mostly in terms of what it can do for humanity and improve the world,” he answered. “I am concerned that a research-intense university will become too result-oriented,” a development that risks politicizing the university. And it also risks draining more resources from liberal arts at a time when “most undergraduates at most universities are there not because they really want to get a broad education but because they want to get the wherewithal for a good job.”

John Hennessy is familiar with Casper’s Jerusalem speech. “It applies to everyone—us, too,” he says. Getting into college is very competitive, tuition is very expensive, and, with economic uncertainty, students become preoccupied with majoring in subjects that may lead to jobs. “That’s why so many students are majoring in business,” Hennessy says, and why so few are humanities majors. He shares the concern that too many students are too preoccupied with getting rich. “It’s true broadly, not just here,” he says.
Miles Unterreiner, a senior, fretted in the Stanford Daily that students spent too much time networking and strategizing and becoming “slaves to the dictates of a hoped-for future,” and too little time being spontaneous. “Stanford students are superb consequentialists—that is, we tend to measure the goodness of actions by their eventual results,” he wrote. “Bentham and Mill would be proud. We excel at making rational calculations of expected returns to labor and investment, which is probably why so many of us will take the exhortation to occupy Wall Street quite literally after graduation. So before making any decision, we ask one, very simple question: What will I get out of it?”

“At most great universities, humanities feel like stepchildren,” Casper told me. Two members of the humanities faculty—David Kennedy and Tobias Wolff, a three-time winner of the O. Henry Award for his short stories—extoll Stanford’s English and history departments but worry that the university has acquired a reputation as a place for people more interested in careers or targeted education than in a lofty “search for truth.”

Attempting to address this weakness, Stanford released, in January, a study of its undergraduate education. The report promoted the T-student model embraced by Hennessy. The original Stanford “object” of creating “usefulness in life,” though affirmed, was said to be insufficient. “We want our students not simply to succeed but to flourish; we want them to live not only usefully but also creatively, responsibly, and reflectively.” The report was harsh:
The long-term value of an education is to be found not merely in the accumulation of knowledge or skills but in the capacity to forge fresh connections between them, to integrate different elements from one’s education and experience and bring them to bear on new challenges and problems. . . . Yet we were struck by how little attention most departments and programs have given to cultivating this essential capacity. We were also surprised, and somewhat chagrined, to discover how infrequently some of our students exercise it. For all their extraordinary energy and range, many of the students we encountered lead curiously compartmentalized lives, with little integration between the different spheres of their experience.
Like any president of a large university, John Hennessy is subject to a relentless schedule of breakfasts, meetings, lunches, speeches, ceremonies, handshakes, dinners, and late-night calls alerting him to an injury or a fatality on campus. His home becomes a public space for meetings and entertaining. He juggles various constituencies—faculty, administrators, students, alumni, trustees, athletics. The routine becomes a daily blur, compelling a president to want to break away and seek a larger vision, something that becomes his stamp, his legacy. For a while, it seemed that StanfordNYC might provide that legacy.

Hennessy declared that a New York campus was “a landmark decision.” He invested enormous time and effort to overcome faculty, alumni, trustee, and student unease about diverting campus resources for such a grandiose project. “I was originally a skeptic,” Otis Reid, a senior economics major, says. But Hennessy persuaded him, by arguing that Stanford’s future will be one of expansion, and Reid agreed that New York was a better place to go first than Abu Dhabi.

On December 16, 2011, Stanford announced that it was withdrawing its bid. Publicly, the university was vague about the decision, and, in a statement, Hennessy praised “the mayor’s bold vision.” But he was seething. In January, he told me that the city had changed the terms of the proposed deal. After seven universities had submitted their bids, he said, the city suddenly wanted Stanford to agree that the campus would be operational, with a full complement of faculty, sooner than Stanford thought was feasible. The city, according to Debra Zumwalt, Stanford’s general counsel and lead negotiator, added “many millions of dollars in penalties that were not in the original proposal, including penalizing Stanford for failure to obtain approvals on a certain schedule, even if the delays were the fault of the city and not Stanford. . . . I have been a lawyer for over thirty years, and I have never seen negotiations that were handled so poorly by a reputable party.” One demand that particularly infuriated Stanford was a fine of twenty million dollars if the City Council, not Stanford, delayed approval of the project. These demands came from city lawyers, not from the Mayor or from a deputy mayor, Robert Steel, who did not participate in the final round of negotiations with Stanford officials. However, city negotiators were undoubtedly aware that Mayor Bloomberg, in a speech at M.I.T., in November, had said of two of the applicants, “Stanford is desperate to do it. Cornell is desperate to do it. . . . We can go back and try to renegotiate with each” university. Out of the blue, Hennessy says, the city introduced the new demands.

To Hennessy, these demands illustrated a shocking difference between the cultures of Silicon Valley and of the city. “I’ve cut billion-dollar deals in the Valley with a handshake,” Hennessy says. “It was a very different approach”—and, he says, the city was acting “not exactly like a partner.”

Yet the decision seemed hasty. Why would Hennessy, who had made such an effort to persuade the university community to embrace StanfordNYC, not pause to call a business-friendly mayor to try to get the city to roll back what he saw as its new demands? Hennessy says that his sense of trust was fundamentally shaken. City officials say they were surprised by the sudden pullout, especially since Hennessy had an agreeable conversation with Deputy Mayor Steel earlier that same week.

Steel insists that “the goalposts were fixed.” All the stipulations that Stanford now complains about, he says, were part of the city’s original package. Actually, they weren’t. In the city’s proposal request, the due dates and penalties were left blank. Seth Pinsky, the president of the New York City Economic Development Corporation, who was one of the city’s lead negotiators, says that these were to be filled in by each bidder and then discussed in negotiations. “The more aggressive they were on the schedule and the more aggressive they were on the amount, the more favorably” the city looked at the bid, Pinsky told me. In the negotiations, he said, he tried to get each bidder to boost its offer by alerting it to more favorable competing bids. At one point, Stanford asked about an ambiguous clause in the city’s proposal request: would the university have to indemnify the city if it were sued for, say, polluted water on Roosevelt Island? The city responded that the university would. According to Pinsky, city lawyers said that this was “not likely to produce significant problems,” and that other bidders did not object. To Pinsky and the city, these demands—and the twenty-million-dollar penalty if the City Council’s approval was delayed—were “not uncommon,” since developers often “take liability for public approvals.” To Stanford, the stipulations made it seem as if the goalposts were not fixed.

Three days after Stanford withdrew, the city awarded the contract to Cornell University and its junior partner, the Technion-Israel Institute of Technology, the oldest university in Israel. Not a few Hennessy and Stanford partisans were pleased. “I am very relieved,” Gerhard Casper said.

Jeff Koseff, who played golf with Hennessy within a few days of Stanford’s withdrawal, recalls, “He was already talking about what we could do next.” One venture that Hennessy was exploring, though there is as yet no concrete plan, is working with the City College of New York to establish a Stanford beachhead in Manhattan. Deputy Mayor Steel says, “I’d be ecstatic.” Still, a Stanford official is dubious: “John’s disillusionment with the city is pretty thorough.”

Another person who is pleased with the withdrawal is Marc Andreessen, whose wife teaches philanthropy at Stanford and whose father-in-law, John Arrillaga, is one of the university’s foremost donors. Instead of erecting buildings, Andreessen says, Stanford should invest even more of its resources in distance learning: “We’re on the cusp of an opportunity to deliver a state-of-the-art, Stanford-calibre education to every single kid around the world. And the idea that we were going to build a physical campus to reach a tiny fraction of those kids was, to me, tragically undershooting our potential.”

Hennessy, like Andreessen, believes that online learning can be as revolutionary to education as digital downloads were to the music business. Distance learning threatens one day to disrupt higher education by reducing the cost of college and by offering the convenience of a stay-at-home, do-it-on-your-own-time education. “Part of our challenge is that right now we have more questions than we have answers,” Hennessy says, of online education. “We know this is going to be important and, in the long term, transformative to education. We don’t really understand how yet.”

This past fall, Stanford introduced three free online engineering classes, each organized into short lecture segments. A hundred and sixty thousand students in a hundred and ninety countries signed up for Sebastian Thrun’s online Introduction to Artificial Intelligence class. They listened to the same material that Stanford students did and were given pass/fail grades; at the end, they received certificates of completion, which had Thrun’s name on them but not Stanford’s. The interest “surprised us,” John Etchemendy, the provost, says, noting that Stanford was about to introduce several more classes, which would also be free. The “key question,” he says, is: “How can we increase efficiency without decreasing quality?”

Stanford faculty members, accustomed to the entrepreneurial culture, have already begun to clamor for a piece of the potential revenue whenever the university starts to charge for the classes. This quest offends faculty members like Debra Satz, the senior associate dean, who regards herself as a public servant. “Some of the faculty see themselves as private contractors, and, if you are, you expect to get paid extra,” she says. “But, if you’re a member of a community, then you have certain responsibilities.”

Sebastian Thrun quit his faculty position at Stanford; he now works full time at Udacity, a start-up he co-founded that offers online courses. Udacity joins a host of companies whose distance-learning investments might one day siphon students from Stanford—Apple, the News Corp’s Worldwide Learning, the Washington Post’s Kaplan University, the New York Times’ Knowledge Network, and the nonprofit Khan Academy, with its approximately three thousand free lectures and tutorials made available on YouTube and funded by donations from, among others, the Bill & Melinda Gates Foundation, Google, and Ann and John Doerr.

Since so much of an undergraduate education consists of living on campus and interacting with other students, for those who can afford it—or who benefit from the generous scholarships offered by such institutions as Stanford—it’s difficult to imagine that an online education is comparable. Nor can an online education duplicate the collaborative, multidisciplinary classes at Stanford’s d.school, or the personal contact with professors that graduate students have as they inch toward a Ph.D.

John Hennessy’s experience in Silicon Valley proves that digital disruption is normal, and even desirable. It is commonly believed that traditional companies and services get disrupted because they are inefficient and costly. The publishing industry has suffered in recent years, the argument goes, because reading on screens is more convenient. Why wait in line at a store when there’s Amazon? Why pay for a travel agent when there’s Expedia? The same argument can be applied to online education. An online syllabus could reach many more students, and reduce tuition charges and eliminate room and board. Students in an online university could take any course whenever they wanted, and wouldn’t have to waste time bicycling to class.

But online education might also disrupt everything that distinguishes Stanford. Could a student on a video prompter have coffee with a venture capitalist? Could one become a T-student through Web chat? Stanford has been aligned with Silicon Valley and its culture of disruption. Now Hennessy and Stanford have to seriously contemplate whether more efficiency is synonymous with a better education.

In mid-February, Hennessy embarked on a sabbatical that will take him away from campus through much of the spring. His plans included travelling and spending time with his family. The respite, Hennessy says, will provide an opportunity to think. Of all the things he plans to think hard about, he says, distance learning tops the list. Stanford, like newspapers and music companies and much of traditional media a little more than a decade ago, is sailing in seemingly placid waters. But Hennessy’s digital experience alerts him to danger. He says, “There’s a tsunami coming.”





src:~http://www.newyorker.com/reporting/2012/04/30/120430fa_fact_auletta?currentPage=all

Wednesday, May 16, 2012

Kahlil Gibran's The Prophet: Why is it so loved?

Kahlil Gibran is said to be one of the world's bestselling poets, and his life has inspired a play touring the UK and the Middle East. But many critics have been lukewarm about his merits. Why, then, has his seminal work, The Prophet, struck such a chord with generations of readers?


Since it was published in 1923, The Prophet has never been out of print. The perennial classic has been translated into more than 50 languages and is a staple on international best-seller lists. It is thought to have sold tens of millions of copies.

Although the book has been practically ignored by the literary establishment in the West, its lines have inspired song lyrics and political speeches, and have been read out at weddings and funerals all around the world.
"It serves various occasions or big moments in one's life so it tends to be a book that is often gifted to a lover, or for a birth, or death. That is why it has spread so widely, and by word of mouth," says Dr Mohamed Salah Omri, lecturer in Modern Arabic literature at Oxford University.

The Beatles, John F Kennedy and Indira Gandhi are among those who have been influenced by its words.
"This book has a way of speaking to people at different stages in their lives. It has this magical quality, the more you read it the more you come to understand the words," says Reverend Laurie Sue, an interfaith minister in New York who has conducted hundreds of weddings with readings from The Prophet.
"But it is not filled with any kind of dogma, it is available to anyone whether they are Jewish or Christian or Muslim."

The book is made up of 26 prose poems, delivered as sermons by a wise man called Almustafa. He is about to set sail for his homeland after 12 years in exile on a fictional island when the people of the island ask him to share his wisdom on the big questions of life: love, family, work and death.

Its popularity peaked in the 1930s and again in the 1960s, when it became the bible of the counterculture.

The Prophet

On marriage: "Love one another but make not a bond of love: Let it rather be a moving sea between the shores of your souls. Fill each other's cup but drink not from one cup."
On children: "Your children are not your children. They are the sons and daughters of Life's longing for itself."
On beauty: "Beauty is eternity gazing at itself in a mirror. But you are eternity and you are the mirror."
"Many people turned away from the establishment of the Church to Gibran," says Professor Juan Cole, historian of the Middle East at the University of Michigan who has translated several of Gibran's works from Arabic.

"He offered a dogma-free universal spiritualism as opposed to orthodox religion, and his vision of the spiritual was not moralistic. In fact, he urged people to be non-judgmental."

Despite the immense popularity of his writing, or perhaps because of it, The Prophet was panned by many critics in the West who thought it simplistic, naive and lacking in substance.

"In the West, he was not added to the canon of English literature," says Cole. "Even though his major works were in English after 1918, and though he is one of bestselling poets in American history, he was disdained by English professors."

Gibran sketched the Prophet after a dream
"He was looked down upon as, frankly, a 'bubblehead' by Western academics, because he appealed to the masses. I think he has been misunderstood in the West. He is certainly not a bubblehead, in fact his writings in Arabic are in a very sophisticated style.

"There is no doubt he deserves a place in the Western canon. It is strange to teach English literature and ignore a literary phenomenon."

Gibran was a painter as well as a writer by training and was schooled in the symbolist tradition in Paris in 1908. He mixed with the intellectual elite of his time, including figures such as WB Yeats, Carl Jung and Auguste Rodin, all of whom he met and painted.

Symbolists such as Rodin and the English poet and artist William Blake, who was a big influence on Gibran, favoured romance over realism, and the movement was already passé in the 1920s as modernists such as TS Eliot and Ezra Pound were gaining popularity.

He painted more than 700 pictures, watercolours and drawings but because most of his paintings were shipped back to Lebanon after his death, they have been overlooked in the West.

Professor Suheil Bushrui, who holds the Kahlil Gibran Chair for Values and Peace at the University of Maryland, compares Gibran to the English Romantics such as Shelley and Blake, and says that, like Gibran, Blake was dismissed in his own time.

"He was called 'mad Blake'. He is now a major figure in English literature. So the fact that a writer is not taken seriously by the critics is no indication of the value of the work".

In Lebanon, where he was born, he is still celebrated as a literary hero.

His style, which broke away from the classical school, pioneered a new Romantic movement of poetic prose in Arabic literature.

A poet's life

  • Born to Maronite Catholic family in Lebanon, 1883
  • Moves to US aged 12 with mother and siblings after father imprisoned for embezzlement
  • Settles in South Boston's Lebanese community
  • Clerical error at school registers his name as Kahlil, not Khalil
  • Comes to the attention of local artist and photographer Fred Holland Day as a talented pupil
  • Returns to Lebanon at 15 to study Arabic
  • Soon after, loses his mother, sister and brother to TB and cancer within months of each other
  • Back in the US in 1904, meets Mary Haskell
  • In 1908, goes to Paris for two years to study art in the symbolist school
  • First book of poetry published in 1918, then The Prophet five years later
  • Dies in 1931 from cirrhosis of the liver and TB
  • Inspires a play, Rest Upon the Wind, which tours the UK and Middle East in 2012
"We are talking about a renaissance in modern Arabic literature and this renaissance had at its foundation Gibran's writings," says Professor Suheil Bushrui, who holds the Kahlil Gibran Chair for Values and Peace at the University of Maryland.

In the Arab world, Gibran is regarded as a rebel, in both a literary and a political sense. He emigrated to the US at 12 but returned to Lebanon three years later to study, where he witnessed injustices suffered by peasants at the hands of their Ottoman rulers.

"He was a Christian but he saw things being done in the name of Christianity which he could not accept," says Bushrui.

In his writing, he raged against the oppression of women and the tyranny of the Church and called for freedom from Ottoman rule.

"What he was doing was revolutionary and there were protests against it in the Arab world," says Juan Cole. "So he is viewed in Arabic literature as an innovator, not dissimilar to someone like WB Yeats in the West."
Gibran the painter created more than 700 works, including a portrait of his family
Political leaders considered his thoughts poisonous to young people and one of his books, Spirits Rebellious, was burnt in the marketplace in Beirut soon after it was published.

By the 1930s, Gibran had become a prominent and charismatic figure within the Lebanese community and New York literary circles.

But the success of his writing in English owes much to a woman called Mary Haskell, a progressive Boston school headmistress who became his patron and confidante as well as his editor.
Haskell supported him financially throughout his career until the publication of The Prophet in 1923.
Their relationship developed into a love affair and although Gibran proposed to her twice, they never married.

Haskell's conservative family at that time would never have accepted her marrying an immigrant, says Jean Gibran, who married Kahlil Gibran's godson and namesake and dedicated five years to writing a biography of the writer.

In their book, Jean Gibran and her late husband didn't shy away from the less favourable aspects of Gibran's character. He was, they admit, known to cultivate his own celebrity.

He even went so far as to create a mythology around himself and made pretensions to a noble lineage.
But Jean Gibran says that he never claimed to be a saint or prophet. "As a poor but proud immigrant amongst Boston's elite, he didn't want people to look down on him. He was a fragile human being and aware of his own weaknesses."

But arguably for Gibran's English readers, none of this mattered much.

"I don't know how many people who picked up The Prophet, read it or gifted it, would actually know about Gibran the man or even want to know," says Dr Mohamed Salah Omri.

"Part of the appeal is perhaps that this book could have been written by anybody and that is what we do with scripture. It just is."


Tuesday, May 8, 2012

MACHINE POLITICS

The man who started the hacker wars.

Radical hackers took up Hotz’s fight, although he never considered himself a cause.

In the summer of 2007, Apple released the iPhone, in an exclusive partnership with A.T. & T. George Hotz, a seventeen-year-old from Glen Rock, New Jersey, was a T-Mobile subscriber. He wanted an iPhone, but he also wanted to make calls using his existing network, so he decided to hack the phone.
Every hack poses the same basic challenge: how to make something function in a way for which it wasn’t designed. In one respect, hacking is an act of hypnosis. As Hotz describes it, the secret is to figure out how to speak to the device, then persuade it to obey your wishes. After weeks of research with other hackers online, Hotz realized that, if he could make a chip inside the phone think it had been erased, it was “like talking to a baby, and it’s really easy to persuade a baby.”

He used a Phillips-head eyeglass screwdriver to undo the two screws in the back of the phone. Then he slid a guitar pick around the tiny groove, and twisted free the shell with a snap. Eventually, he found his target: a square sliver of black plastic called a baseband processor, the chip that limited the carriers with which it could work. To get the baseband to listen to him, he had to override the commands it was getting from another part of the phone. He soldered a wire to the chip, held some voltage on it, and scrambled its code. The iPhone was now at his command. On his PC, he wrote a program that enabled the iPhone to work on any wireless carrier.

The next morning, Hotz stood in his parents’ kitchen and hit “Record” on a video camera set up to face him. He had unruly curls and wispy chin stubble, and spoke with a Jersey accent. “Hi, everyone, I’m geohot,” he said, referring to his online handle, then whisked an iPhone from his pocket. “This is the world’s first unlocked iPhone.”

Hotz’s YouTube video received nearly two million views and made him the most famous hacker in the world. The media loved the story of the teen-age Jersey geek who beat Apple. Hotz announced that he was auctioning off the unlocked phone. The winning bid, from the C.E.O. of Certicell, a cell-phone-refurbishing company, was a 2007 Nissan 350Z sports car and three new iPhones. Later, on CNBC, Erin Burnett asked Hotz if he thought that day’s uptick in Apple stock was due in part to his efforts. “More people want iPhones now if they can use them with any sort of provider,” he said, and added that he “would love to have a talk right now with Steve Jobs” about it.
“Man to man?” Burnett said.
“Man to man.”

Apple and A.T. & T. remained conspicuously silent. Unlocking a phone was legal, but it could enable piracy. Many hardware manufacturers sell the devices at a loss, recovering the costs through monthly contracts or software sales. When Steve Jobs was asked at a press conference about the unlocked iPhone, he smiled awkwardly and said, “This is a constant cat-and-mouse game that we play. . . . People will try to break in, and it’s our job to keep them from breaking in.” Hotz never heard directly from Jobs.
Steve Wozniak, the co-founder of Apple, who hacked telephone systems early in his career, sent Hotz a congratulatory e-mail. “It was like a story out of a movie of someone who solves an incredible mystery,” Wozniak told me. “I understand the mind-set of a person who wants to do that, and I don’t think of people like that as criminals. In fact, I think that misbehavior is very strongly correlated with and responsible for creative thought.”

Hotz continued to “jailbreak,” or unlock, subsequent versions of the iPhone until, two years later, he turned to his next target: one of the world’s biggest entertainment companies, Sony. He wanted to conquer the purportedly impenetrable PlayStation 3 gaming console, the latest version of Sony’s flagship system. “The PS3 has been on the market for over three years now, and it is yet to be hacked,” he blogged on December 26, 2009. “It’s time for that to change.”

“My whole life is a hack,” Hotz told me one afternoon last June, in Palo Alto, California. He had moved there the previous month. He was now twenty-one, stocky, and scruffy. He wore a gray T-shirt under a gray hoodie, ripped bluejeans, and brown suède moccasins. “I don’t hack because of some ideology,” he said. “I hack because I’m bored.”

The word “hacker,” when it was applied to technology, initially meant college students and hobbyists, exploring machines. At worst, a hacker was a prankster. In the early nineteen-seventies, Wozniak, the hacker archetype, built a system that let him make free phone calls. Among others, he called the Vatican, pretending to be Henry Kissinger, and managed to get a bishop on the line. Over time, “hacker” acquired a more sinister meaning: someone who steals your credit cards, or crashes the electronic grid. Today, there are two main types of hackers, and only one is causing this kind of trouble. A “white hat” hacker—an anti-virus programmer, for instance, or someone employed in military cyberdefense—aims to make computers work better. It is the “black hat” hacker who sets out to attack, causing havoc or ripping people off. A recent series of attacks on Brazil’s largest banks, which took down their Web sites for a short time, is an example of the malicious black-hat type. The number of black-hat intrusions is rising: in the U.S., the Department of Homeland Security has reported a spike—fifty thousand between October and March, up ten thousand from the same period last year.

Hotz likes to hack according to the early definition of the word: getting inside a machine to see how it works, and changing it. To him, hacking is almost a sport, played against someone in a position of authority. “It’s a testosterone thing,” he told me. “It’s competitiveness, but it isn’t necessarily competitiveness with other people. It’s you versus the system. And I don’t mean the system like the government thing, I mean the system like the computer. ‘I’m going to stick it to the computer. I’m going to make it do this!’ And the computer throws up an error like ‘No, I’m not going to do this.’ It’s really a male thing to say, ‘I’m going to make you do this!’ ”

We were sitting in Hotz’s apartment, on the ground floor of a building near Stanford. Red Bull cans and take-out menus littered the kitchen. Plastic wrappings, scattered cash, and empty computer boxes covered the living room. One box was overturned and being used as a dining table. A blue air mattress sagged in a corner. Hotz tossed a wad of cash from his pocket to the ground and sat with his legs crossed on a desk chair before three giant computer monitors. He held an iPad 2 that he had bought that afternoon at the Apple Store on University Avenue. Around the room, whiteboards were filled with scrawled notes and algorithms. One had a list labelled “Morning Routine”: “7:15 a.m., one snooze? . . . shower . . . floss/brush/wash . . . vitamins . . . dress nicely . . . water plants.” Another list, labelled “Uncomfortable Things,” included “Call Therapist” and “Join Gym and Use It.”

Hotz talked about how he wrote his first computer program when he was five, while sitting on his father’s lap at their Apple II. By fifth grade, he was building his own video-game console with an electronic-projects kit from Radio Shack. His parents often found household appliances (remote controls, answering machines) gutted. “He always liked learning stuff, and if that was how he did it, great,” his father, George Hotz, Sr., a high-school computer teacher, told me. Hotz, bored with his classes and letting his grades slide, became known at school as an inventive joker who rolled down the hallways in wheeled sneakers and once hacked several classroom computers to simultaneously play Beethoven’s Ninth Symphony. His mother, Marie Minichiello, a social worker, told me that although she punished him for his acts of mild disobedience, she always supported him. “I didn’t want school to kill his passion,” she said.

When Hotz was fourteen, he beat thousands of students from more than sixty countries to reach the finals of the Intel International Science and Engineering Fair. He appeared on the “Today” show with his invention, a small robot on wheels that could plot the dimensions of a room using infrared sensors, and wirelessly transmit the information to a computer. “Well, I think it’s very cool to be good in science,” Katie Couric told her viewers, as Hotz, in an ill-fitting dark suit, stepped forward, “and George Hotz is an example of that.” Couric asked if the technology could improve automated vacuum cleaners. But Hotz was more excited about helping soldiers fight terrorists. “They can send it into a complex before the military infiltrates it!” he said, his voice not yet broken. “Well, I’m impressed, George,” Couric replied, nudging him in the shoulder. “You’re a little brainiac, you.”

In high school, Hotz built the Neuropilot, a sort of Segway controlled by brain waves, which you could drive around by thinking about it. Companies had explored similar technology for controlling video games, but building hardware controlled by brain waves still seemed like science fiction. The Neuropilot worked, though the movements were not always precisely translated from the driver’s brain. During his senior year, in 2007, Hotz built a “Star Trek”-inspired 3-D display called “I Want a Holodeck,” which again made him a finalist at the International Science and Engineering Fair. This time, he topped the electrical- and mechanical-engineering category and won fifteen thousand dollars. Before a Forbes photo shoot for a story about his achievement, Hotz smoked pot for the first time. In the photograph, he told me, smiling, “my eyes are bloodshot. It’s great.”

While he was talking, Hotz had been playing with the iPad 2. He planned to spend the night hacking it but needed computer cables. We drove to Fry’s Home Electronics. It was around midnight, and as we approached a desolate intersection, hip-hop cranking from the car’s sound system, the light changed to red. With an angry swerve of the wheel, he cut through an adjoining parking lot and kept driving, muttering, “Fuck these assholes. Stupidest red light ever. It makes no sense at all.
“I live by morals, I don’t live by laws,” he went on. “Laws are something made by assholes.”

After high school, Hotz enrolled at the Rochester Institute of Technology but dropped out a few weeks later, to take up an internship at Google, in Silicon Valley. “We were not surprised or disappointed when he decided to leave school,” his father told me, though he admitted that he sometimes worried about his son, who spent a lot of time alone. Hotz supported himself through donations from people who had downloaded software he’d written and given away free; one program let people jailbreak the iPhone 3GS. His hacks generated enough income that he was able to buy an old white Mercedes. But after a few months he grew bored at Google and in 2009 moved back home to New Jersey. Since his iPhone feat, geeks often sent him devices just to see if he could hack them. That year, someone mailed Hotz a PlayStation 3 video-game system, challenging him to be the first in the world to crack it. Hotz posted his announcement online and once again set about finding the part of the system that he could manipulate into doing what he wanted. Hotz focussed on the “hypervisor,” powerful software that controls what programs run on the machine.

To reach the hypervisor, he had to get past two chips called the Cell and the Cell Memory. He knew how he was going to scramble them: by connecting a wire to the memory and shooting it with pulses of voltage, just as he had when he hacked his iPhone. His parents often gave him gifts that were useful for his hobby: after he unlocked the iPhone, they bought him a more expensive one. For Christmas, 2009, they gave him a three-hundred-and-fifty-dollar soldering iron. Sitting on the floor of his room, Hotz twisted off the screws of the black PS3 and slid off the casing. After pressing the iron to the wire, he began pulsing the chips.
Next, he had to write an elaborate command that would allow him to take over the machine. Hotz spent long nights writing drafts of the program on his PC, and trying them out on the hypervisor. “The hypervisor was giving me shit,” he recalls. It kept throwing up an error message—the number 5—telling Hotz that he was unauthorized. He knew that, if he got through, he’d see a zero instead. Finally, after several weeks typing at his computer, Hotz had composed a string of code five hundred lines long. He ran it on the PS3 and nervously watched the monitor. The machine displayed a sublime single digit: 0. Hotz called the code his “Finnegans Wake.”

On January 23, 2010, a little more than a month after posting his challenge, Hotz announced on his blog, “I have hacked the PS3.” He later posted instructions for others to do the same, and freely distributed the code. Hotz had hacked the two most iconic and ironclad devices of his generation. “Nothing is unhackable,” he told the BBC. “I can now do whatever I want with the system. It’s like I’ve got an awesome new power—I’m just not sure how to wield it.”

Sony responded by releasing a software update that disabled OtherOS, the feature through which Hotz had accessed the hypervisor. OtherOS enabled the machine to run Linux, an alternative operating system to Microsoft Windows and Apple’s Mac OS. Running Linux essentially turned the PS3 from a single-purpose gaming console into a desktop computer, which people could use to write programs. Those users were furious that Sony had robbed them of this capability. “I am EXTREMELY upset,” a comment on Sony’s blog read. Some wanted to rally around Hotz, and organize: “THIS IS MADNESS!!! HACKERS UNITE!!! GEOHOT WILL LEAD US INTO THE LIGHT!” But many were angry at Hotz, not at Sony. “Congratulations geohot, the asshole who sits at home doing nothing than ruining the experience for others,” one post read. Someone posted Hotz’s phone number online, and harassing calls ensued.

Recalling the controversy, Hotz seemed genuinely unfazed. “All those people flaming me, I could care less,” he told me. He spent the summer of 2010 biking through China, and that fall, back at his parents’ house, he read Ayn Rand, which he said made him want to “do something.” “We let him get away with murder,” his father admitted. “But he never did bad things. He always did what he felt was right, and we were happy with that.”

In late December, Hotz decided once again to try to hack the PS3 in a way that would give him total control and let him restore what Sony had removed. On New Year’s Eve, Hotz and some high-school buddies played beer pong and watched the Times Square ball drop on TV. He woke up hung over on the couch at a friend’s house, with a towel stretched across him as a blanket, and stumbled back to his parents’ to fix some macaroni and cheese and think things through. Hotz wanted control of the PS3 metldr (pronounced “metloader”), a part of the software that, functioning like a master key, “lets you unlock everything.”
Hotz knew that the metldr key was hidden within the PS3, but now he realized that he didn’t necessarily have to find and break into the secret place. He could run a special decryption program in a different part of the machine, and make the key appear there. He had to figure out how to speak to the metldr, and then command it to appear. Within ten minutes, he had coded the PS3 hack.

The cursor blinked, indicating that Hotz had the power to do anything with the PS3: install OtherOS, play pirated games, or run obscure Japanese software. He prepared a Web page and a video documenting what he had done. But he hesitated. Although Apple had never sued anyone for jailbreaking, Sony had reacted fiercely to previous modifications of the PlayStation. Sony had also long boasted about the security of the PS3. Hotz wasn’t just undoing years of corporate P.R.; he was potentially opening the door to piracy.
With this concern in mind, Hotz wrote code that disabled the ability to run pirated software using his hack and added a note in his documentation: “I don’t condone piracy.” Still, he wanted a second opinion. Before he put the site live, he signed into an online chat channel where hacker friends hung out, and asked them whether he should release his hack. “Yeah,” one told him. “Information should be free.” Hotz told me, “This is the struggle of our generation, the struggle between control of information and freedom of information.” Also, on the day of the hack, unbeknownst to his parents, Hotz was high. He told me he had taken Vicodin and OxyContin, which filled him with a sense of invulnerability. “You just feel good about everything,” he recalled. He pushed a button on the keyboard and uploaded the instructions for his PS3 jailbreak.

On January 11, 2011, Hotz was playing Age of Empires II on his computer in New Jersey when he received an e-mail from Sony announcing a lawsuit against him. The company requested a temporary restraining order for violating the Computer Fraud and Abuse Act and facilitating copyright infringement, such as downloading pirated games. According to the Entertainment Software Association, piracy costs the industry eight billion dollars a year. Sony was also seeking to impound his “circumvention devices,” and it wanted him to take all the instructions offline immediately.

As soon as the news hit the Web, geeks rushed to Hotz’s site, seeking the tools while they could. At Carnegie Mellon University, David Touretzky, a computer scientist and proponent of freedom of information online, made copies of Hotz’s files. Touretzky blogged that Sony was “doing something breathtakingly stupid, presumably because they don’t know any better. . . . Free speech (and free computing) rights exist only for those determined to exercise them. Trying to suppress those rights in the Internet age is like spitting in the wind.” The Electronic Frontier Foundation, a digital-rights advocacy group, released a statement saying that the Sony v. Hotz case sent a “dangerous message” that Sony “has rights in the computer it sells you even after you buy it, and therefore can decide whether your tinkering with that computer is legal or not. We disagree. Once you buy a computer, it’s yours.”

But Sony believed that Hotz’s hack was sending a dangerous message of its own. If people were free to break into their machines, game creators would be cheated out of royalties. Cheaters could tweak the games in order to beat everyone who stuck to the rules. Riley Russell, the general counsel for Sony Computer Entertainment of America, said in a statement at the time, “Our motivation for bringing this litigation was to protect our intellectual property and our consumers.”

On January 14th, Hotz went on “Attack of the Show,” a popular news program for gamers on G4, a cable-television network. When the host asked what he was being sued for, Hotz joked, “Making Sony mad.” He was serious, though, about his mission to keep information free. Later, he uploaded a hip-hop video on YouTube, which he titled “The Light It Up Contest.” He sat in front of his Webcam in a blue sweatshirt, his computer in the background. “Yo, it’s geohot,” he rapped, as the beat kicked in, “and for those that don’t know, I’m getting sued by Sony.” It was a surprisingly catchy tune about a complex issue from a whiz kid brazenly striking a pose. Hotz went on, bouncing in his desk chair, “But shit man / they’re a corporation / and I’m a personification / of freedom for all.”

Hotz’s rap earned him sympathy in chat rooms but not in the courts. A California district court granted Sony the restraining order against Hotz, preventing him from hacking and disseminating more details about its machines. It also approved a request by Sony to subpoena information from Twitter, Google, YouTube, and Bluehost, Hotz’s Internet provider, including the Internet Protocol addresses of anyone who downloaded the instructions from his site—a move that further incensed digital-rights advocates. Sony also gained access to records from Hotz’s PayPal account. In some circles, the rebel leader was becoming a martyr. As one fan of Hotz’s posted: “geohot = savior of mankind.”

Martyrs win devotees, and soon Hotz had gained the allegiance of the most notorious hackers: a group called Anonymous. In the past few years, the group has become famous for engineering elaborate online attacks and protests, often in the name of free speech and “lulz,” which is Internet-speak for laughs. Group members fought against the Church of Scientology, which they believed to be suppressing free speech online, and shut down government Web sites in defense of WikiLeaks. More recently, coders have joined the Occupy Wall Street movement, and threatened to release a list of people collaborating with the Zetas, a Mexican drug gang. At the time of the Sony hack, Anonymous had become its own pop-culture meme. On Comedy Central, Stephen Colbert called Anonymous members a “global hacker nerd brigade.” Others referred to them as “the paramilitary wing of the Internet.”

Anonymous is an international, decentralized, shape-shifting hive. All you have to do to join is say you are part of it. No one goes by his or her real name. As in any shadowy group, some members are more extreme than others. A few years ago, I was invited to attend a secret meeting of Anonymous activists, at an Indian restaurant in Hollywood. While Anonymous is often characterized as a group of malicious cyber-terrorists, they struck me more as a group of earnest young protesters with a dark sense of humor and a brilliant knack for viral marketing. Anons, as members call themselves, are the best publicists on the Internet: through social media, they mobilize, inform, outrage, and entertain in ways that the Yippies could never imagine, and they do it all really fast.

In early April, an Anonymous member created an Internet relay chat room called Operation Sony, or #OpSony. “It is the duty of Anonymous to help out this young lad, and to protest against Sony’s censorship,” the mission statement read. Around the world, curious coders logged into their phones and laptops to discuss plans.

As the chat room filled, Anons began digging up personal contact information on Sony’s lawyers and debating the most effective tactics: Flash mobs outside Sony stores? Sending black faxes, which would waste all the ink in their machines? Eventually, they settled on a distributed denial of service, or DDoS, attack, overwhelming Sony’s Web sites with simultaneous visits until they crashed.

On April 4th, Anonymous announced the plan to the public in a press release: “Congratulations, Sony. You have now received the undivided attention of Anonymous. You saw a hornets nest, and stuck your penises in it. You must face the consequences of your actions, Anonymous style.” Within hours, both Sony.com and PlayStation.com were down. Anonymous posted a video on YouTube with its demands: Drop the case against Hotz and allow for modifications on the PS3. Over an image of a Guy Fawkes mask, which the group uses as a symbol, text read, “Leave Fellow hackers like geohot alone.”

Internet protests, like street protests, have a way of spinning out of control. People chant peacefully, but then someone throws a rock through a window and rioting begins. No sooner had the hacker war begun than one Anon declared a splinter faction, SonyRecon, calling for personal hacks against Sony employees and the judge in the geohot case. Other Anons posted the phone numbers, family-member names, and addresses of Sony executives. They even published a description of the C.E.O.’s house, and proposed various methods of attack:

<sonyrecon335> We’ll shit on his doorstep, then run away
<e-hippie741> dude
<e-hippie741> you’d shit on someones doorstep
<Hit_X> ring the kids school and pull a prank like hes been rushed in hospital:
<e-hippie741> do you love geohot that much


Back in his parents’ house, in front of the glowing computer screens in his cluttered bedroom, Hotz clicked with mounting apprehension through the news of Anonymous’s plans. “I hope to God Sony doesn’t think this is me,” he remembers thinking. He didn’t believe in secretive online warfare, much less in defecating on someone’s doorstep. “I’m the complete opposite of Anonymous,” he told me. “I’m George Hotz. Everything I do is aboveboard, everything I do is legit.”

On April 11th, Sony announced that it had reached an agreement with Hotz, who denied wrongdoing but consented to a permanent injunction barring him from reverse-engineering any Sony product in the future. But Hotz’s supporters felt that the injunction was a form of censorship. Some of his defenders made “FREE GEOHOT” shirts, and others went to Sony stores in cities such as San Diego and Costa Mesa to protest. Black-hat hackers called for more destructive attacks against Sony.

At 4:15 P.M. on April 19, 2011, technicians at the San Diego offices of Sony Network Entertainment noticed that four of their computer servers were rebooting without authorization. The team took the systems offline and began examining the activity logs. Their investigation confirmed that someone had broken into the servers, and possibly into others. Sony immediately shut down the PlayStation Network, their online-entertainment hub. The company concluded that it had been the victim of a sophisticated attack that had exposed the addresses, passwords, birthdays, and e-mail addresses of seventy-seven million PSN subscribers, who pay to play games and watch movies. “While there is no evidence at this time that credit card data was taken, we cannot rule out the possibility,” Patrick Seybold, a company spokesman, wrote in a blog post on PlayStation’s Web site. Though it remained unclear whether someone from Anonymous was responsible for the hack or whether it was just someone taking advantage of the chaos, the events were clearly linked.

Security experts called it one of the biggest data breaches of all time. Sony announced that it would keep the PSN down indefinitely—at an estimated cost of ten million dollars in lost revenue per week—as it raced to plug the holes. Anonymous denied responsibility and temporarily suspended its campaign against the company.

At 4:51 A.M. on April 28th, Hotz uploaded a lengthy rant against the PSN hackers. “Running homebrew and exploring security on your devices is cool,” he wrote. “Hacking into someone elses server and stealing databases of user info is not cool. You make the hacking community look bad, even if it is aimed at douches like Sony.” Hotz was pointing out the distinction between white- and black-hat hackers. Still, he knew he had helped loosen a boulder that was now crashing down a hill.

On May 1st, the company discovered a data breach on the Sony Online Entertainment service, exposing twenty-four million personal accounts. Technicians also found a file that had been planted on one of their servers as a kind of digital graffiti. It was titled “Anonymous,” and read, “We Are Legion.” At a press conference in Tokyo that day, Kaz Hirai, the chief executive officer of Sony Computer Entertainment, and two other executives walked onto a stage and faced the packed crowd. “We offer our sincerest apologies,” Hirai said and, setting his microphone on a table, bowed low with the others for eight seconds as the cameras flashed. They said that some network services would be back up in a few days. But it took two weeks to fully restore the system.

Sony soon had a new force to contend with: an Anonymous splinter group called Lulz Security, commonly known as LulzSec. Members were like the merry droogs of the net; on their Twitter feed, nicknamed the Lulz Boat, they identified themselves as “the world’s leaders in high-quality entertainment at your expense.” Their first bit of dark comedy came on May 30th, when they hacked the PBS Web site, in retaliation for what they thought was unfairly negative coverage of the WikiLeaks founder, Julian Assange. They posted a fake news story reporting that the late rappers Tupac Shakur and Biggie Smalls had been hiding out in New Zealand. “Local townsfolk refuse to comment on exactly how long or why the rappers were being sheltered,” the story read. “One man simply says ‘we don’t talk about that here.’ ”
The day after the PBS prank, the group began tweeting a series of warnings to Sony. “Hey @Sony,” one read, “you know we’re making off with a bunch of your internal stuff right now and you haven’t even noticed? Slow and steady, guys.” Some saw the warnings as more backlash against the company for its pursuit of geohot. “The group is sending a message to Sony for messing with one of their own, hacker George Hotz,” a blogger wrote.

On June 2nd, LulzSec hacked the Sony Pictures Web site, compromising what it claimed to be more than a million passwords of consumers who had put their personal information on the site. (Sony later put the figure at thirty-seven thousand.) The group’s purpose, it explained in a statement, was not to come across as “master hackers” but to expose the continued weakness of Sony’s security systems. Lulz’s statement said that Sony was “asking for it,” because the company stored the passwords in plain text, instead of encrypting them. The statement went on to encourage fellow-hackers to “tear the living shit out of it while you can; take from them everything!” LulzSec members broke in using a rudimentary technique called SQL Injection, which allowed them access to unauthorized data on the Sony Pictures site. “From a single injection, we accessed EVERYTHING,” they said. “Why do you put such faith in a company that allows itself to become open to these simple attacks?”
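
To make the distinction concrete, here is a minimal sketch, written in Python against a hypothetical throwaway table, of what a SQL injection and its standard fix look like. It illustrates the general technique described in the statement, not LulzSec’s actual code or Sony’s schema; the toy table even stores its password in plain text, as LulzSec accused Sony of doing.

import sqlite3

# Hypothetical in-memory table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice@example.com', 'stored-in-plain-text')")

def find_user_vulnerable(email):
    # Vulnerable pattern: user input is pasted directly into the SQL string.
    # An input such as  ' OR '1'='1  turns the WHERE clause into a tautology,
    # so the query returns every row -- the essence of a SQL injection.
    query = "SELECT email, password FROM users WHERE email = '%s'" % email
    return conn.execute(query).fetchall()

def find_user_safe(email):
    # Parameterized query: the driver treats the input as data, never as SQL.
    return conn.execute(
        "SELECT email, password FROM users WHERE email = ?", (email,)
    ).fetchall()

print(find_user_vulnerable("' OR '1'='1"))  # leaks the whole table
print(find_user_safe("' OR '1'='1"))        # returns an empty list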

Black-hat hackers began posting corporate e-mails, and, during the summer of 2011, attacks on media, technology, and other institutions came almost daily. Nintendo got hacked, and so did Sega, Electronic Arts, the News Corporation, Booz Allen Hamilton, NATO, and Lady Gaga. Even the C.I.A. was hacked, LulzSec claimed. It was the Summer of Lulz. Hotz didn’t mean to inspire a hacker war, but he doesn’t regret what he did. One night at a restaurant in Palo Alto, he clarified his position on the attacks against Sony. “If being a techno-libertarian leads to online anarchy, so be it,” he said. “I’m not a cause. I just like messing with shit.”

Hotz defines a hacker as “somebody with a set of skills,” and points out that the skills alone don’t make you good or evil. It’s up to you to decide how to use them. Facebook’s Mark Zuckerberg may be his generation’s most famous hacker, but Hotz most embodies its original spirit. He hacks for the technical challenge and the fun. He doesn’t identify as white-hat or black-hat, preferring to think of himself more like someone twisting wrenches under a sink. “Hacker is to computer as plumber is to pipes,” he once blogged. When I met him again, later in the summer, at DefCon, a hacker convention in Las Vegas, he wasn’t at a bar with guys in long black coats, plotting some corporate takedown. He was alone on a couch in a back room, coding on his laptop.

A month after his settlement with Sony, last spring, Hotz moved back to California to take a full-time job at Facebook. He wouldn’t say exactly what he worked on, beyond designing technology to improve the site. Some saw his transition as a shrewd move by Facebook to co-opt a hacker before he might compromise the company. Others flamed him for cashing in. “You have to love the amount of suck and sell-out that George Hotz contains within his flimsy nerd shell,” a detractor wrote online.

One of my interviews with Hotz took place in Palo Alto just after he’d started the Facebook job, before word had leaked online. He showed up wearing a new blue-and-white Facebook T-shirt, a member of the Valley’s coolest frat. He was waking up early (as his “Morning Routine” whiteboard reminded him) to get to work. “Everything is very fast-moving and the culture is young,” he said, and then handed me his business card, which read, “I am the most illegal circumvention device of them all.” Eight months later, Hotz quit. He didn’t want to discuss why, but suggested that having a day job didn’t suit him. “Facebook is a fun place to work,” he told me, “but I wonder how people stay employed for so long.” He travelled in Panama, then returned to Palo Alto. He wouldn’t say what he was going to do next, only that he won’t be sharing his exploits on the Internet anymore. “I’m through with all that,” he said.

He wasn’t the only one. On March 6, 2012, U.S. officials announced the indictment of six élite hackers from Anonymous and LulzSec. A federal law-enforcement official told me that the arrests were “very significant—these are core members.” Hotz had never made contact with LulzSec or Anonymous members, even when they were crusading on his behalf, and he was agnostic about their fate. The brash young man from the rap video who described himself as “a personification of freedom for all” had retired from battle. He refused to make what he called “a moral judgment” of the indicted hackers. “I’ll make a technical judgment,” he told me. “If they were that good, they wouldn’t have got caught.”

Even with the arrests, other Anons have sworn to keep up the campaign. Companies are working hard, too. “The last year has demonstrated how sophisticated cybercriminals can be,” Jim Kennedy, the senior vice-president of strategic communications for Sony Corporation of America, wrote me in an e-mail. Sony created a new position—a corporate executive in charge of global-information security and privacy—and promoted Nicole Seligman, who as general counsel had been targeted by hackers, to president of the Sony Corporation of America. Kennedy admits that security remains an unceasing fight. “In the end, it must be recognized that no system is absolutely foolproof,” he wrote. “Constant vigilance is essential.”

Last May, engineers from Sony invited Hotz to a meeting at its American headquarters, a half hour’s drive north, in Foster City. (“We are always interested in exploring all avenues to better safeguard our systems and protect consumers,” Kennedy told me.) Nervous but curious, Hotz walked into the building eating from a box of Lucky Charms, dropping marshmallows across the lobby. “If there were going to be lawyers there,” he recalled, “I was going to be the biggest asshole ever.” Instead, he found a roomful of PS3 engineers who were “respectful,” he said, and wanted to learn more about how he had beaten their system. During the next hour or so, the man who had started the hacker wars described his methodology.



src:~http://www.newyorker.com/reporting/2012/05/07/120507fa_fact_kushner?currentPage=all