Whatever happened to… The Wisdom of Crowds?

Future social historians looking back at the web cult – which met in San Francisco this week for a $3,000-a-head “summit” – may wonder what made them tick. Scholars could do worse than examine their superstitions. We’ll bet that lurking on the bookshelf of almost every “delegate” was a copy of James Surowiecki’s The Wisdom of Crowds. It’s as ubiquitous as Erich von Däniken books were in the 1970s.

In Silicon Valley this year, “collective intelligence” is the mandatory piece of psycho-babble necessary to open a Venture Capitalist’s cheque book. Surowiecki’s faith in prediction markets appears unshakeable. Writing in Slate three years ago, in an attempt to save Admiral Poindexter’s “Terror Casino” – punters were invited to bet on the probability of state leaders being assassinated, for example – Mystic Jim begged for understanding:

“Even when traders are not necessarily experts, their collective judgment is often remarkably accurate because markets are efficient at uncovering and aggregating diverse pieces of information. And it doesn’t seem to matter much what markets are being used to predict.”

“Whether the outcome depends on irrational actors (box-office results), animal behavior (horse races), a blend of irrational and rational motives (elections), or a seemingly random interaction between weather and soil (orange-juice crops), market predictions often outperform those of even the best-informed expert. Given that, it’s reasonable to think a prediction market might add something to our understanding of the future of the Middle East.”

A heart-warming fable, then, for a population robbed of their pensions and beset by uncertainty after the dot.com bubble. Surowiecki failed to mention, however, that experts are regularly outperformed by chimps, or dartboards – but no one talks about “The Wisdom of Chimps”.

This week however the people spoke – and the markets failed.


With Horizon, the BBC abandons science


BBC TV’s venerable science flagship, Horizon, has had a rough ride as it tries to gain a new audience. It’s been accused of “dumbing down”. That’s nothing new – it’s a criticism often levelled at it during its 42-year life.

But instead of re-examining its approach, the series’ producers have taken the bold step of abandoning science altogether. This week’s film, “Human v2.0”, could have been made for the Bravo Channel by the Church of Scientology. The subject at hand – augmenting the brain with machinery – was potentially promising, and the underlying question – “what makes a human?” – is as fascinating as ever. Nor is the field short of distinguished scientists, such as Roger Penrose, or philosophers, such as Mary Midgley, who’ve made strong contributions.

Yet Horizon unearthed four cranks who believed that thanks to computers, mankind was on the verge of transcending the physical altogether, and creating “God-like” machines.

“To those in the know,” intoned the narrator, “this moment has a name.” (We warned you it was cult-like, but it gets worse).

It’s not hard to find cranks – the BBC could just as readily have found advocates of the view that the earth rests on a ring of turtles – and in science, yesterday’s heresy often becomes today’s orthodoxy. But it gets there through a well-established, rigorous process – not through unsupported assertions, confusions, and errors a five-year-old could unpick.


Do Artificial Intelligence Chatbots look like their programmers?


Do pets eventually resemble their owners? Or do owners get to look like their pets? It’s a heck of a conundrum – but one we might now be a little closer to solving. For the past fortnight it’s been hard to escape the animated faces of “Joan” and “George”, the graphical representations of what we’re told is a new breakthrough in Artificial Intelligence. TV and newspapers, both highbrow and lowbrow, have flocked to report on the chatterbot. You can talk to Joan (or George) – the output of the British software project Jabberwacky – and think it’s human!

Er, almost.


The Emperor’s New AI

“It looks like you’re trying to have a conversation with a computer – can I help?”

In the early 1970s, no science show was complete without predictions of HAL-like intelligent autonomous computers by the turn of the century.

The Japanese, fearing their industrial base would collapse without a response to this omniscient technology, poured hundreds of millions of dollars into their own AI project, called Fifth Generation. They may as well have buried the money in the Pacific Ocean. Two decades later there are no intelligent robots, and “intelligent” computers are a pipe-dream.

(It was an academic coup for MIT’s Professor Marvin Minsky, a fixture on the AI slots. Minsky’s own preferred approach to AI – symbolic, linguistics-based AI – triumphed in the grants lotteries over an approach which preferred to investigate and mimic the neural functions of the brain. Minsky’s non-stop publicity campaign helped ensure his AI lab at MIT was well-rewarded while neural networks starved.)

For the past week reports have again confidently predicted intelligent computers are just around the corner. Rollo Carpenter, whose chatbot Joan won an annual AI prize for the software that most resembles a human, predicts that computers will pass the ‘Turing Test’ by 2016. In this test, computer software passes by fooling a human interrogator into believing it is human.

(You can spot the flaw already: to sound human isn’t a sign of intelligence. And what a pity it is that Turing is remembered more for his muddle-headed metaphysics than for his landmark work in building computational machines. It’s a bit like lauding Einstein for opposition to the theory of plate tectonics, rather than his work on relativity, or remembering Newton for his alchemy, not his theory of gravity).

But let’s have a look. A moment’s glance at the conversation of Joan, or George, is enough to show us there is no intelligence here.


Neurosis as a lifestyle: remixing revisited

“We stand on the last promontory of the centuries! Why should we look back, when what we want is to break down the mysterious doors of the impossible? Time and Space died yesterday. We already live in the absolute, because we have created eternal, omnipresent speed.”
– Filippo Marinetti, 1909

When, a year ago, I looked at some of the strange attitudes to copyright and creativity that abound on the internet, vilification followed swiftly. I wondered what was behind odd assertions that “the power of creativity has been granted to a much wider range of creators because of a change in technology”, which grew, without pausing for punctuation, into even odder and grander claims, such as “the law of yesterday no longer makes sense.” ‘Remix Culture’ as defined by the technology utopians wasn’t so much a celebration of culture as of the machines that make it possible, we noted. But many people simply find such thinking quite alien. So it’s heartening to see writers like Nick Carr and, today, the Wall Street Journal‘s columnist Lee Gomes join the debate that so animates Reg readers, and question these silly assumptions too.

Gomes hears a dot com executive sell his movie editing service with the claim that, “until now, watching a movie has been an entirely passive experience.”

(We heard a similar, silly claim from Kevin Kelly recently, only about reading.)

Passive? Not at all, Gomes explains today:

“Watching a good movie is ‘passive’ in the same way that looking at a great painting is ‘passive’ – which is, not very. You’re quite actively lost in thought. For my friend, though, the only activity that seemed ‘active’, and thus worthwhile, was when a person sitting at a PC engaged in digital busy work of some kind.”

Which is the world view in a nutshell. The future in which the scribbles of the digerati adorn every book or movie is a nightmare, he agrees. It’s also rather presumptuous. Who does this self-selecting group claim to represent?

We’ve had a glimpse into this “future” with Google for the past three years, where to reach some original source material, one must wade through thickets of drivel, some of it generated by bloggers, the rest by machines pretending to be bloggers. It’s hardly anyone’s idea of enhancement. Gomes calls it “dismally inferior”, and offers a lovely simile:

“Reading some stray person’s comment on the text I happen to be reading is about as appealing as hearing what the people in the row behind me are saying about the movie I’m watching.”


Anti-war slogan coined, repurposed and Googlewashed … in 42 days

In early 2003, the phrase “Second Superpower” became a popular way to refer to the street protests against the imminent invasion of Iraq. The metaphor had been used by UN Secretary General Kofi Annan and on the cover of The Nation magazine. A small number of techno-utopian webloggers hijacked the phrase.

The narrower sense sprang from a paper by Jim Moore, a technocratic management consultant, who used it to refer to direct democracy mediated through technology. It belongs to the school of literature in which the Internet is the manifestation of a “hive mind”. A few links from weblogs were sufficient to send the paper to the top of Google’s search results for the phrase “second superpower”.

In the New York Times, Stanford linguistics professor Geoffrey Nunberg wrote:

“Sometimes, though, the deliberations of the collective mind seem to come up short. Take Mr. Moore’s use of “second superpower” to refer to the Internet community. Not long ago, an article on the British technology site The Register accused Mr. Moore of “googlewashing” that expression – in effect, hijacking the expression and giving it a new meaning. The outcomes of Google’s popularity contests can be useful to know, but it’s a mistake to believe they reflect the consensus of the ‘Internet community’, whatever that might be, or to think of the Web as a single vast colloquy – the picture that’s implicit in all the talk of the Internet as a ‘digital commons’ or ‘collective mind’.”

While in Le Monde, Pierre Lazuly observed:

“When you search the net you are not examining all available knowledge, but only what contributors – universities, institutions, the media, individuals – have chosen to make freely available, at least temporarily. The quality of it is essential to the relevance of the results.”

Lazuly drew attention to Google’s description of its algorithms as “uniquely democratic”:

“It’s a strange democracy where the voting rights of those in a position of influence are so much greater than those of new arrivals.”

Lazuly concluded:

“Those who got there first in net use are now so well-established that they enjoy a level of representation out of proportion to their real importance. The quantity of links they maintain (especially through the mainly US phenomenon of webloggers) mathematically give them control of what Google thinks.”

Webloggers had enjoyed a symbiotic relationship with Google. The dense interlinking between weblogs gave them a higher ranking in Google’s search results. This had not been written about before, and the webloggers didn’t like it one bit.

Search engine expert Gary Stock described it:

“[Google] didn’t foresee a tightly-bound body of wirers. They presumed that technicians at USC would link to the best papers from MIT, to the best local sites from a land trust or a river study – rather than a clique, a small group of people writing about each other constantly. They obviously bump the rankings system in a way for which it wasn’t prepared.”

“Each of us gets a vote,” jokes Stock. “And someone votes every day and I vote once every four years.”
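The effect Stock describes can be sketched with a toy computation. Below is a simplified PageRank-style power iteration over a hypothetical five-page graph (our own illustrative example – not Google’s actual algorithm, parameters, or data): three weblogs that link to each other “constantly” end up outranking a paper that receives only a single outside link.

```python
# Toy PageRank sketch (hypothetical graph) illustrating how a tightly
# interlinked clique of weblogs can outrank a page with one stray link.

DAMPING = 0.85
ITERATIONS = 50

# Directed link graph: page -> pages it links to.
# blog1/blog2/blog3 form a clique, linking to each other constantly;
# "paper" is a good paper that gets one link from a single outside site.
links = {
    "blog1": ["blog2", "blog3"],
    "blog2": ["blog1", "blog3"],
    "blog3": ["blog1", "blog2"],
    "site":  ["paper"],
    "paper": [],
}

pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(ITERATIONS):
    # Rank held by dangling pages (no outlinks) is spread evenly.
    dangling = sum(rank[p] for p in pages if not links[p])
    new_rank = {}
    for p in pages:
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - DAMPING) / len(pages) + \
            DAMPING * (incoming + dangling / len(pages))
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

The clique members each converge to roughly three times the score of the paper, purely because they recycle rank among themselves – exactly the “bump” Stock says the system wasn’t prepared for.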

The act of being observed changes everything. As Slate‘s Paul Boutin concluded:

“Bloggers determined to prove they can be just as clueless and backbiting as the professional journalists they deride scored a major milestone this week …”
