Are we entering a dark age, or are we already in it?

This from Eurekalert: You may think that with faster internet connectivity, internet phone calls and iPods, we're living in a technological nirvana. But according to a new analysis, we are fast approaching a new dark age. The results show that the number of technological breakthroughs and patents peaked a century ago and has been falling steadily ever since. But this is a controversial view, and not one held by most futurologists.

An observation like this is controversial because futurologists take a determinedly optimistic view of the future. I suspect the analysis is a survey of American patent applications. Americans have experienced a subjective increase in wealth and comfort that leads them to believe they must be living in a golden age. But wasn't Rome similar before its collapse? The Romans had become decadent; they suffered from internal rot and cultural decline, and they embarked on fruitless foreign adventures as a way of distracting an overlarge and underused military. The recent resource wars in the Middle East look like a similar sleight of hand: distract the world so that America can parasitize Iraq without feeling morally compromised.

If we truly are in a worldwide decline in creativity, what does that mean? Victorian scientists used to predict that all of the major discoveries had been made, and that the future would be a matter of filling in the gaps and building a prosperous age of automation. Were they right? Our explanations of the fine detail of what goes on are a little more precise, but really we have just been adding decimal places to the accuracy of our picture. The whole of twentieth-century physics has involved creating pictures of the world that even their inventors didn't understand or trust; Einstein made significant advances in quantum physics while attempting to falsify it on aesthetic grounds!

Maybe the lull is exemplified by the history of Artificial Intelligence. AI grew out of the computing hardware that became available in postwar America (machines called "giant brains" at the time) and the earlier theoretical advances of Turing, von Neumann and others. Both von Neumann and Turing took a mechanistic view of the brain: that it was, to all intents and purposes, a "computer made of meat". There was great optimism that the algorithms of the brain would quickly be understood, and that a means of emulating the brain would be found within "10 or 20 years". Minsky and others were finding ways to simulate neurons in hardware. All in all, it seemed that AI would flourish in the 70s, providing added impetus to the space race and to mankind's transcendence.
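
For flavour, here is a minimal sketch of the kind of artificial neuron those early researchers were simulating: a simple threshold unit, written in Python. The weights and threshold are made-up illustrative values, not Minsky's actual SNARC design.

```python
# A minimal perceptron-style artificial neuron of the sort early AI researchers
# simulated in hardware and software. Purely illustrative: the weights and
# threshold below are invented for the example, not taken from any real machine.

def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of the inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Wired with these (hypothetical) weights, the unit behaves like a logical AND gate.
weights, threshold = [1.0, 1.0], 1.5
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} AND {b} -> {neuron([a, b], weights, threshold)}")
```

Units this simple were where the optimism came from: if a handful of them could compute logic, surely enough of them could compute thought.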

Researchers found that emulating human perceptual capabilities was actually much harder than performing the sorts of tasks humans find hard (like maths and logic). Philosophical problems arose over the very definitions of intelligence and consciousness, and genuine machine intelligence always seemed to be just over the horizon. The whole effort became mired in attempts to work out what it was that researchers were really after. The over-optimistic forecasts came back to haunt them, and the enterprise was scaled back to a peripheral research effort in most universities. The high country was abandoned in favour of vocational education - computing for profit rather than fun.

Maybe we don’t seem to be advancing as fast because we are now tackling the truly intractable problems of science - the ones that require a fundamental change in our understanding of the world, or of the brain, or whatever. Maybe we can’t solve these problems because the skills they demand are not yet in our conceptual repertoire.
