When it comes to the Internet, we’re still in the prelude stage. So far, we’re only partly inside the machine and the machine is only partly inside us. But just you wait. As Tom Chatfield points out in one of his typically smart, elucidating BBC pieces, the term “online” is already redundant. We’re never anything but. His opening:
If I’m driving along in my car listening to GPS directions from Google Maps, am I online or offline? How about when I’m sitting at home streaming movies on demand? Skip forward a few years: if I’m dozing in my driverless car while my smartphone screens messages and calls, do words like “offline” and “online” even make sense?
The answer, I think, is that they make about as much sense as asking me today whether I have recently had any non-electric experiences. Electricity is so much a part of our world that it makes sense to ask how we use it – but no longer if or when we do. It’s a given.
The first time I went online, in the 1990s, it felt like a journey. I hooked my PC up to a space-age box called a modem, linked my modem to a phone line, and carefully instructed it to dial up the Internet Service Provider who would connect me to the World Wide Web. Much beeping and bleeping later, I was online in my own home: a miracle of modernity! Or rather, my computer was online – the only object in my home, and quite possibly within a 10-mile radius, that had an internet connection.
Two decades later, this vanished world sounds like a kind of joke; a primitive realm of ritualised waiting and bizarrely isolated computation. You don’t really “go” online in 2016. Online is simply there, waiting.•
Tom Chatfield, an uncommonly thoughtful commentator on the technological world we’ve built for ourselves, is interviewed by Nigel Warburton of Aeon about staying human in a machine age. In the seven-minute piece, Chatfield notes that games in the Digital Age have become more meaningful than work in many instances, because the former builds skills in players while the latter looks to replace the messy human component.
A much more exciting model of human-machine interaction, Chatfield offers, is one where we maximize what people and AI are each good at. That would be great, and it’s doable in the short run if we choose to approach the situation that way, but I do believe that ultimately, whatever tasks both humans and machines can do will be ceded almost entirely to silicon. A freestyle-chess approach to production will have a short shelf life in most applications. We may be left to figure out brand-new areas in which we can thrive, and to define why we exist.
At any rate, smart stuff about automated systems. Watch here.
It’s perplexing that the American school system (and no other I know of, for that matter) doesn’t employ video games as teaching tools, since they’re both satisfying and edifying and can allow students to pursue knowledge at a personalized pace. It’s a real lost opportunity to assume learning can’t be vibrant and fun.
Beyond the classroom, Nicholas Carr wonders why software is created to pose no obstacles to us, to not challenge us but replace us. He addresses this point, among others, in an excellent discussion with Tom Chatfield of BBC Future. An excerpt:
Should life be more like a video game?
Tom Chatfield:
I was glad to see that you use video games in the book as an example of human-machine interactions where the difficulty is the point rather than the problem. Successful games are like a form of rewarding work, and can offer the kind of complex, constant, meaningful feedback that we have evolved to find deeply satisfying. Yet there is also a bitter irony, for me, in the fact that the work some people do on a daily basis is far less skilled and enjoyable and rewarding.
Nicholas Carr:
Video games are very interesting because in their design they go against all of the prevailing assumptions about how you design software. They’re not about getting rid of friction, they’re not about making sure that the person using them doesn’t have to put in much effort or think that much. The reason we enjoy them is because they don’t make it easy for us. They constantly push us up against friction – not friction that simply frustrates us, but friction that leads to ever-higher levels of talent.
If you look at that and compare it to what we know about how people gain expertise, how we build talent, it’s very, very similar. We know that in order to gain talent you have to come up against hard challenges in which you exercise your skills to the utmost, over and over again, and slowly you gain a new level of skill, and then you are challenged again.
And also I think, going even further, that the reason people enjoy videogames is the same reason that people enjoy building expertise and overcoming challenges. It’s really fundamentally enjoyable to be struggling with a hard challenge that we then ultimately overcome, and that gives us the talent necessary to tackle an even harder challenge.
One of the fundamental concerns of the book is the fear that we are creating a world based on the assumption that the less we have to engage in challenging tasks, the better. It seems to me that that is antithetical to everything we know about what makes us satisfied and fulfilled and happy.•
The opening of “Automated Ethics,” Tom Chatfield’s excellent new Aeon essay about humans outsourcing moral quandaries to machines, an arrangement that creates some new challenges while eliminating many old ones:
“For the French philosopher Paul Virilio, technological development is inextricable from the idea of the accident. As he put it, each accident is ‘an inverted miracle… When you invent the ship, you also invent the shipwreck; when you invent the plane, you also invent the plane crash; and when you invent electricity, you invent electrocution.’ Accidents mark the spots where anticipation met reality and came off worse. Yet each is also a spark of secular revelation: an opportunity to exceed the past, to make tomorrow’s worst better than today’s, and on occasion to promise ‘never again.’
This, at least, is the plan. ‘Never again’ is a tricky promise to keep: in the long term, it’s not a question of if things go wrong, but when. The ethical concerns of innovation thus tend to focus on harm’s minimisation and mitigation, not the absence of harm altogether. A double-hulled steamship poses less risk per passenger mile than a medieval trading vessel; a well-run factory is safer than a sweatshop. Plane crashes might cause many fatalities, but refinements such as a checklist, computer and co-pilot insure against all but the wildest of unforeseen circumstances.
Similar refinements are the subject of one of the liveliest debates in practical ethics today: the case for self-driving cars. Modern motor vehicles are safer and more reliable than they have ever been – yet more than 1 million people are killed in car accidents around the world each year, and more than 50 million are injured. Why? Largely because one perilous element in the mechanics of driving remains unperfected by progress: the human being.
Enter the cutting edge of machine mitigation. Back in August 2012, Google announced that it had achieved 300,000 accident-free miles testing its self-driving cars. The technology remains some distance from the marketplace, but the statistical case for automated vehicles is compelling. Even when they’re not causing injury, human-controlled cars are often driven inefficiently, ineptly, antisocially, or in other ways additive to the sum of human misery.
What, though, about more local contexts? If your vehicle encounters a busload of schoolchildren skidding across the road, do you want to live in a world where it automatically swerves, at a speed you could never have managed, saving them but putting your life at risk?”
We often see technological development as a silver bullet that can change everything, but that bullet still has to be sized to fit a gun, even if it’s a 3D-printed gun. Structural changes are usually incremental and new technologies have to accommodate that pace. It’s evolution, not revolution. It can’t impose perfection and order upon the world. The telephone, for instance, started conversations but did not end wars.
In his final BBC column, Tom Chatfield addresses revolution as it relates to technology. The opening:
“Lecturing in late 1968, the American sociologist Harvey Sacks addressed one of the central failures of technocratic dreams. We have always hoped, Sacks argued, that ‘if only we introduced some fantastic new communication machine the world will be transformed.’ Instead, though, even our best and brightest devices must be accommodated within existing practices and assumptions in a ‘world that has whatever organisation it already has.’
As an example, Sacks considered the telephone. Introduced into American homes during the last quarter of the 19th Century, instantaneous conversation across hundreds or even thousands of miles seemed close to a miracle. For Scientific American, editorializing in 1880, this heralded ‘nothing less than a new organization of society – a state of things in which every individual, however secluded, will have at call every other individual in the community, to the saving of no end of social and business complications…’
Yet the story that unfolded was not so much ‘a new organization of society’ as the pouring of existing human behaviour into fresh moulds: our goodness, hope and charity; our greed, pride and lust. New technology didn’t bring an overnight revolution. Instead, there was strenuous effort to fit novelty into existing norms.”
I prefer too much information to too little, so I’m strongly in favor of our decentralized, interconnected world, even though I think most of the tools are misused and much of the text is a bore. Our thumbs often fail us in the same ways our voices did. From “I Type, Therefore I Am,” Tom Chatfield’s new Aeon essay:
“As a medium, electronic screens possess infinite capacities and instant interconnections, turning words into a new kind of active agent in the world. The 21st century is a truly hypertextual arena (hyper from ancient Greek meaning ‘over, beyond, overmuch, above measure’). Digital words are interconnected by active links, as they never have and never could be on the physical page. They are, however, also above measure in their supply, their distribution, and in the stories that they tell.
Just look at the ways in which most of us, every day, use computers, mobile phones, websites, email and social networks. Vast volumes of mixed media surround us, from music to games and videos. Yet almost all of our online actions still begin and end with writing: text messages, status updates, typed search queries, comments and responses, screens packed with verbal exchanges and, underpinning it all, countless billions of words.
This sheer quantity is in itself something new. All future histories of modern language will be written from a position of explicit and overwhelming information — a story not of darkness and silence but of data, and of the verbal outpourings of billions of lives. Where once words were written by the literate few on behalf of the many, now every phone and computer user is an author of some kind. And — separated from human voices — the tasks to which typed language, or visual language, is being put are steadily multiplying.”
“As early as 1945, the American engineer and inventor Vannevar Bush described the potential of a hypothetical system he dubbed ‘Memex’: a single device within which a compressed, searchable form of all the records and communications in someone’s life could be stored. It’s a project whose spirit lives on in Microsoft’s MyLifeBits project, among other places, which attempted digitally to record every single aspect of a modern life – and presented the results in a book by researchers Gordon Bell and Jim Gemmell entitled Total Recall: How the E-Memory Revolution Will Change Everything.
What Google’s glasses suggest to me, though, is a giant leap forward in the sheer ease of capturing and broadcasting our lives from minute to minute – something that smartphones have already revolutionised once during the space of the last decade. Far more than mere technological possibility, it’s this portability and seamlessness that seem likely to most transform the way we live over the coming century. And it makes me wonder: what exactly does it mean when a computer’s memory becomes a more and more integral part of our own process of remembering?
The word ‘memory’ is the same in both cases, but there’s a huge gulf between the phenomena it describes in people and in machines. Computers’ memories offer a complete, faithful and objective record of whatever is put into them. They do not degrade over time or introduce errors. They can be shared and copied almost endlessly without loss, or precisely erased if preferred. They can be fully indexed and rapidly searched. They can be remotely accessed and beamed across the world in fractions of a second, and their contents remixed, augmented or updated endlessly.”
••••••••••
Animated version of Vannevar Bush’s Memex diagrams:
From “As We May Think,” by Vannevar Bush in the Atlantic, 1945: “There is a growing mountain of research. But there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers—conclusions which he cannot find time to grasp, much less to remember, as they appear. Yet specialization becomes increasingly necessary for progress, and the effort to bridge between disciplines is correspondingly superficial.
Professionally our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose. If the aggregate time spent in writing scholarly works and in reading them could be evaluated, the ratio between these amounts of time might well be startling. Those who conscientiously attempt to keep abreast of current thought, even in restricted fields, by close and continuous reading might well shy away from an examination calculated to show how much of the previous month’s efforts could be produced on call. Mendel’s concept of the laws of genetics was lost to the world for a generation because his publication did not reach the few who were capable of grasping and extending it; and this sort of catastrophe is undoubtedly being repeated all about us, as truly significant attainments become lost in the mass of the inconsequential.
The difficulty seems to be, not so much that we publish unduly in view of the extent and variety of present day interests, but rather that publication has been extended far beyond our present ability to make real use of the record. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.
But there are signs of a change as new and powerful instrumentalities come into use.”