Science/Tech

I don’t think earthlings should travel to Mars by 2025. We’re in a rush, sure, but probably not in that much of a hurry. My own hope would be that in the near-term future we send unpeopled probes to our neighbor, loaded with 3D printers that begin experimenting with building a self-sustaining colony.

Of course, I’m not a billionaire, so my vote really won’t amount to much. The best argument that Elon Musk and other nouveau space entrepreneurs have for leading us at warp speed toward becoming a multi-planet species isn’t only existential risk but also the chance that the next generation of fabulously wealthy technologists may turn its attention away from the skies. It wouldn’t be the first time the stars lost our interest.

A transcript of Musk discussing space exploration at last week’s 2016 StartmeupHK Venture Forum in Hong Kong:

Question:

Let’s get even more way out there and talk about SpaceX. You’ve said that your ultimate goal is getting to Mars. Why is Mars important? Why does Mars matter?

Elon Musk:

It’s really a fundamental decision we need to make as a civilization. What kind of future do we want? Do we want a future where we’re forever confined to one planet until some eventual extinction event, however far in the future that might occur? Or do we want to become a multi-planet species, and then ultimately be out there among the stars, among many planets, many star systems? I think the latter is a far more exciting and inspiring future than the former.

Mars is the next natural step. In fact, it’s really the only planet we have a shot of establishing a self-sustaining city on. I think once we do establish such a city, there will be a strong forcing function for the improvement of spaceflight technology that will then enable us to establish colonies elsewhere in the solar system and ultimately extend beyond our solar system.

There’s the defensive reason of protecting the future of humanity, ensuring that the light of consciousness is not extinguished should some calamity befall Earth. That’s the defensive reason, but personally I find what gets me more excited is that this would be an incredible adventure–like the greatest adventure ever. It would be exciting and inspiring, and there needs to be things that excite and inspire people. There have to be reasons why you get up in the morning. It can’t just be solving problems. It’s got to be something great is going to happen in the future.

Question:

It’s not an exit strategy or back-up plan for when Earth fails. It’s also to inspire people and to transcend and go beyond our mental limits of what we think we can achieve.

Elon Musk:

Think of how sort of incredible the Apollo program was. If you ask anyone to name some of humanity’s greatest achievements of the 20th century, the Apollo program, landing on the moon, would in many places be number one.

Question:

When will there be a manned SpaceX mission and when will you go to Mars?

Elon Musk:

We’re pretty close to sending crew up to the Space Station. That’s currently scheduled for the end of next year. So that will be exciting, with our Dragon 2 spacecraft. Then we’ll have a next-generation rocket and spacecraft beyond the Falcon-Dragon series, and I’m hoping to describe that architecture later this year at the International Astronautical Congress, which is the big international space event every year. I think that will be quite exciting.

In terms of me going, I don’t know, maybe four or five years from now. Maybe going to the Space Station would be nice. In terms of the first flights to Mars, we’re hoping to do that around 2025. Nine years from now or thereabouts. 

Question:

Oh my goodness, that’s right around the corner.

Elon Musk:

Well, nine years. Seems like a long time to me.

Question:

Are you doing the zero-gravity training?

Elon Musk:

I’ve done the parabolic flights. Those are fun.

Question:

You must be reading up and doing the physical work to get ready for the ultimate flight of your life.

Elon Musk:

Umm, I don’t think it’s that hard, honestly. Just float around. It’s not that hard to float around. [Laughter] Well, going to Mars is going to be hard and dangerous and difficult in every way, and if you care about being safe and comfortable going to Mars would be a terrible choice.

Do you want a digital assistant 10,000 times more useful than Siri? A voice-activated universal remote that runs your life? I suppose the answer is “yes.”

Moore’s Law made supercomputers of yore affordable and portable for almost everyone, stealing them from the domain of superwealthy corporations and states and sliding them into our shirt pockets. Similarly, efforts are being made to create AI that acts as a voice-activated universal remote for our lives, anticipating and satisfying our needs. We may soon be able to enjoy the benefits of a “staff” the way our richer brethren do. 

The thing is, most of the new technologies have not created more leisure. Will these tools, if realized, be the same? If they do actually reduce toil, what will we use the extra bandwidth for?

From Zoë Corbyn’s Guardian article about Dag Kittlaus’ attempts to create not Frankenstein but Igor:

Kittlaus is the co-founder and CEO of Viv, a three-year-old AI startup backed by $30m, including funds from Iconiq Capital, which helps manage the fortunes of Mark Zuckerberg and other wealthy tech executives. In a blocky office building in San Jose’s downtown, the company is working on what Kittlaus describes as a “global brain” – a new form of voice-controlled virtual personal assistant. With the odd flashes of personality, Viv will be able to perform thousands of tasks, and it won’t just be stuck in a phone but integrated into everything from fridges to cars. “Tell Viv what you want and it will orchestrate this massive network of services that will take care of it,” he says.

It is an ambitious project but Kittlaus isn’t without a track record. The last company he co-founded invented Siri, the original virtual assistant now standard in Apple products. Siri Inc was acquired by the tech giant for a reported $200m in 2010. The inclusion of the Siri software in the iPhone in 2011 introduced the world to a new way to interact with a mobile device. Google and Microsoft soon followed with their versions. More recently they have been joined by Amazon, with the Echo you can talk to, and Facebook, with its experimental virtual assistant, M.

But, Kittlaus says, all these virtual assistants he helped birth are limited in their capabilities. Enter Viv. “What happens when you have a system that is 10,000 times more capable?” he asks. “It will shift the economics of the internet.”•

Some of the things contemporary consumers most desire to possess are tangible (smartphones) and others not at all (Facebook, Instagram, etc.). In fact, many want the former mainly to get the latter. A social media “purchase” requires no money but is a trade of information for attention, a dynamic that’s been widely acknowledged, but one that still stuns me. Our need to share ourselves–to write our names Kilroy-like on a wall, as Hunter S. Thompson once said–is etched so deeply in our brains. Manufacturers have used psychology to sell for at least a century, but the transaction has never been purer, never required us not only to act on impulse but also to publish that instinct. Judging by the mood of America, this new thing, while it may provide some satisfaction, also promotes an increased hunger in the way sugar does. And while the Internet seems to encourage individuality, its mass use and many memes suggest something else.

On a somewhat related topic: Rebecca Spang’s Financial Times article analyzes a new book which argues that the shift to consumerism is more a political phenomenon than we’d like to believe, often the culmination of large-scale state decisions rather than of personal choice. The passage below refers to material goods, but I think the implications for the immaterial are the same. The excerpt:

In Empire of Things, Frank Trentmann brings history to bear on all these questions. His is not a new subject, per se, but his thick volume is both an impressive work of synthesis and, in its emphasis on politics and the state, a timely corrective to much existing scholarship on consumption. Based on specialist studies that range across five centuries, six continents and at least as many languages, the book is encyclopedic in the best sense. In his final pages, Trentmann intentionally or otherwise echoes Diderot’s statement (in his own famous Encyclopédie) that the purpose of an encyclopedia is to collect and transmit knowledge “so that the work of preceding centuries will not become useless to the centuries to come”. Empire of Things uses the evidence of the past to show that “the rise of consumption entailed greater choice but it also involved new habits and conventions . . . these were social and political outcomes, not the result of individual preferences”. The implications for our current moment are significant: sustainable consumption habits are as likely to result from social movements and political action as they are from self-imposed shopping fasts and wardrobe purges.

When historians in the 1980s-1990s first shifted from studying production to consumption, our picture of the past became decidedly more individualist. In their letters and diaries, Georgian and Victorian consumers revealed passionate attachments to things — those they had and those they craved. Personal tastes and preferences hence came to rival, then to outweigh, abstract processes (industrialisation, commodification, etc) as explanations for historical change. The world looked so different! Studied from the vantage point of production, the late 18th and 19th centuries had appeared uniformly dark and dusty with soot; imagined from the consumer’s perspective, those same years glowed bright with an entire spectrum of strange, distinct colours (pigeon’s breast, carmelite, eminence, trocadero, isabella, Metternich green, Niagra [sic] blue, heliotrope). At the exact moment when Soviet power seemed to have collapsed chiefly from the weight of repressed consumer desire, consumption emerged as a largely positive, almost liberating, historical force. “Material culture” became a common buzzword; “thing theory” — yes, it really is a thing — was born.•

Asking if innovation is over is no less narcissistic than suggesting that evolution is done. It flatters us to think that we’ve already had all the good ideas, that we’re the living end. More likely, we’re always closer to the beginning.

Of course, when looking at relatively short periods of time, there are ebbs and flows in invention that have serious ramifications for the standard of living. In Robert Gordon’s The Rise and Fall of American Growth, the economist argues that the 1870-1970 period was a golden age of productivity and development unknown previously and unmatched since.

In an excellent Foreign Affairs review, Tyler Cowen, who himself has worried that we’ve already picked all the low-hanging fruit, lavishly praises the volume–“likely to be the most interesting and important economics book of the year.” But in addition to acknowledging a technological slowdown in the last few decades, Cowen also wisely counters the book’s downbeat tone while recognizing the obstacles to forecasting, writing that “predicting future productivity rates is always difficult; at any moment, new technologies could transform the U.S. economy, upending old forecasts. Even scholars as accomplished as Gordon have limited foresight.” In fact, he points out that the author, before his current pessimism, predicted very healthy growth rates earlier this century.

My best guess is that there will always be transformational opportunities, ripe and within arm’s length, waiting for us to pluck them.

An excerpt:

In the first part of his new book, Gordon argues that the period from 1870 to 1970 was a “special century,” when the foundations of the modern world were laid. Electricity, flush toilets, central heating, cars, planes, radio, vaccines, clean water, antibiotics, and much, much more transformed living and working conditions in the United States and much of the West. No other 100-year period in world history has brought comparable progress. A person’s chance of finishing high school soared from six percent in 1900 to almost 70 percent, and many Americans left their farms and moved to increasingly comfortable cities and suburbs. Electric light illuminated dark homes. Running water eliminated water-borne diseases. Modern conveniences allowed most people in the United States to abandon hard physical labor for good.

In highlighting the specialness of these years, Gordon challenges the standard view, held by many economists, that the U.S. economy should grow by around 2.2 percent every year, at least once the ups and downs of the business cycle are taken into account. And Gordon’s history also shows that not all GDP gains are created equal. Some sources of growth, such as antibiotics, vaccines, and clean water, transform society beyond the size of their share of GDP. But others do not, such as many of the luxury goods developed since the 1980s. GDP calculations do not always reflect such differences. Gordon’s analysis here is mostly correct, extremely important, and at times brilliant—the book is worth buying and reading for this part alone.

Gordon goes on to argue that today’s technological advances, impressive as they may be, don’t really compare to the ones that transformed the U.S. economy in his “special century.” Although computers and the Internet have led to some significant breakthroughs, such as allowing almost instantaneous communication over great distances, most new technologies today generate only marginal improvements in well-being. The car, for instance, represented a big advance over the horse, but recent automotive improvements have provided diminishing returns. Today’s cars are safer, suffer fewer flat tires, and have better sound systems, but those are marginal, rather than fundamental, changes. That shift—from significant transformations to minor advances—is reflected in today’s lower rates of productivity.•

An Economist article looks at the latest report on automation by Carl Benedikt Frey, Michael Osborne and Craig Holmes, which argues that poorer nations are more prone than, say, America, to technological unemployment, despite the U.S. holding an advantage in AI.

Because such countries are not yet as widely engaged in information work, their Industrial Age could be interrupted mid-epoch, before they arrive at the Information Age. It’s like being pushed down a ladder when you’ve only scaled it part of the way. The academics acknowledge, though, that everything from policy to consumer preference may forestall the rise of the machines in India, China and elsewhere. After all, Foxconn’s promised one-million-robot factory workforce has yet to be realized.

An excerpt:

BILL BURR, an American entertainer, was dismayed when he first came across an automated checkout. “I thought I was a comedian; evidently I also work in a grocery store,” he complained. “I can’t believe I forgot my apron.” Those whose jobs are at risk of being displaced by machines are no less grumpy. A study published in 2013 by Carl Benedikt Frey and Michael Osborne of Oxford University stoked anxieties when it found that 47% of jobs in America were vulnerable to automation. Machines are mastering ever more intricate tasks, such as translating texts or diagnosing illnesses. Robots are also becoming capable of manual labour that hitherto could be carried out only by dexterous humans.

Yet America is the high ground when it comes to automation, according to a new report* from the same pair along with other authors. The proportion of threatened jobs is much greater in poorer countries: 69% in India, 77% in China and as high as 85% in Ethiopia. There are two reasons. First, jobs in such places are generally less skilled. Second, there is less capital tied up in old ways of doing things. Driverless taxis might take off more quickly in a new city in China, for instance, than in an old one in Europe.

Attracting investment in labour-intensive manufacturing has been a route to riches for many developing countries, including China. But having a surplus of cheap labour is becoming less of a lure to manufacturers. An investment in industrial robots can be repaid in less than two years. This is a particular worry for the poor and underemployed in Africa and India, where industrialisation has stalled at low levels of income—a phenomenon dubbed “premature deindustrialisation” by Dani Rodrik of Harvard University.•

We know football is horrible for the game’s players, the head injuries traumatic and unavoidable regardless of the equipment. The question is whether this truth is an existential threat to the most popular team sport in America. It was for boxing, not so long ago the king of U.S. athletics. But prizefighting was an ever-changing hodge-podge of crooked promoters and money men, whereas the NFL is a unified–and crooked–billion-dollar corporation. Can it find some way to keep kids playing a game that will ruin them?

Two recent tragic examples underline the seriousness of the crisis: the physical and mental deterioration, at 36, of former wide receiver Antwaan Randle El, and the troubling post-mortem of ex-Giant Tyler Sash. In the latter case, a study of brain tissue conducted after the fatal overdose of the increasingly erratic retired safety proved he suffered from CTE (Chronic Traumatic Encephalopathy), a degenerative condition caused by repeated concussions and (most likely) sub-concussive impacts.

CTE has thus far shown up in the tissue of many former football players who’ve died, but the rub is that there’s no way to test for it in the living. That may soon change, and if it does, it could be a game-changer for football and other contact sports. From Jack Encarnacao at the Boston Herald:

As it stands, an athlete has to be dead before he can be diagnosed with Chronic Traumatic Encephalopathy, the trauma-induced brain disease prominent in ex-football players. The disease manifests in a way that standard scans can’t detect, so there’s no way to advise a player to hang it up before irreversible damage is done.

Leading concussion researcher Dr. Robert Cantu of Boston University sees a day when this will change.

“I think we’re within a fairly short window, I hope no more than a few years, of being able to detect CTE in living people with almost 100 percent certainty,” Cantu told me in a sit-down interview for the second installment of my podcast series “Unfiltered,” which continues this week on Boston Herald Radio.

The key, Cantu said, is identifying a marker specific to CTE that a brain scan can pick up. A radioactive substance in tau — the protein at the heart of CTE — may be that marker, but current tests produce smudgy images that make it hard to discern, he said.

“Images will only get better over time, and hopefully soon it will be ready for prime time,” Cantu said.•

The late, great AI pioneer Marvin Minsky referred to us as “meat machines,” which irked many (very biased) humans. The more polite phrase subsequently coined to describe our brains in computer terms is “wetware.” Regardless of the vernacular, I think we’re essentially machines, though (for a little while longer) easily the most complex ones.

On that topic, John Pavlus of Quanta has an interesting interview with Harvard computer scientist Leslie Valiant, who believes all biology is computational, that “ecorithms” underlie life the way algorithms do machines. To the researcher, learning is learning, human or AI, though there are significant differences in stimuli (external and unpredictable vs. internal and predictable). Not everyone may agree with Valiant, but we’re a far cry from the brickbats he would have received for these beliefs in the 1980s, when he began working on machine learning, a field then widely belittled if not verboten.

An excerpt:

Question:

So what is learning? Is it different from computing or calculating?

Leslie Valiant:

It is a kind of calculation, but the goal of learning is to perform well in a world that isn’t precisely modeled ahead of time. A learning algorithm takes observations of the world, and given that information, it decides what to do and is evaluated on its decision. A point made in my book is that all the knowledge an individual has must have been acquired either through learning or through the evolutionary process. And if this is so, then individual learning and evolutionary processes should have a unified theory to explain them.

Question:

And from there, you eventually arrived at the concept of an “ecorithm.” What is an ecorithm, and how is it different from an algorithm?

Leslie Valiant:

An ecorithm is an algorithm, but its performance is evaluated against input it gets from a rather uncontrolled and unpredictable world. And its goal is to perform well in that same complicated world. You think of an algorithm as something running on your computer, but it could just as easily run on a biological organism. But in either case an ecorithm lives in an external world and interacts with that world.

Question:

So the concept of an ecorithm is meant to dislodge this mistaken intuition many of us have that “machine learning” is fundamentally different from “non-machine learning”?

Leslie Valiant:

Yes, certainly. Scientifically, the point has been made for more than half a century that if our brains run computations, then if we could identify the algorithms producing those computations, we could simulate them on a machine, and “artificial intelligence” and “intelligence” would become the same. But the practical difficulty has been to determine exactly what these computations running on the brain are. Machine learning is proving to be an effective way of bypassing this difficulty.•
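
To make the idea concrete, here’s a minimal toy sketch–my own illustration in Python, not anything from Valiant’s book–of a learner whose only measure of success is how well its decisions fare against a noisy, uncontrolled environment:

import random

# Toy "ecorithm" in the sense described above (hypothetical example):
# the learner observes an unpredictable world, acts, is judged on the
# decision, and nudges its single parameter whenever the world disagrees.

def noisy_world(observation):
    # The uncontrolled environment: it tends to reward action 1 when the
    # underlying signal is positive, but only noisily, so it can never be
    # modeled exactly ahead of time.
    return 1 if random.gauss(observation, 1.0) > 0 else 0

def run_learner(rounds=10000):
    threshold = 1.5  # deliberately poor starting guess
    good = 0
    for _ in range(rounds):
        observation = random.uniform(-2, 2)   # input from the world
        action = 1 if observation > threshold else 0
        verdict = noisy_world(observation)    # the world evaluates the decision
        if action == verdict:
            good += 1
        else:
            # crude perceptron-style correction toward better future decisions
            threshold += 0.01 if action == 1 else -0.01
    return good / rounds

print(f"fraction of decisions the world judged good: {run_learner():.2f}")

The point of the toy is only the shape of the loop: the algorithm’s merit is defined entirely by outcomes in a world it neither controls nor fully models, which is the distinction Valiant is drawing between an ecorithm and an ordinary algorithm.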

 

“L.A. 2013” is a 1988 Los Angeles Times feature that imagined life in the future for a family of four–and their robots. The feature dreamed too big in some cases and not enough in others, though it did see smart homes, quantified health, personalization, etc. An excerpt:

6 A.M.

WITH A BARELY perceptible click, the Morrow house turns itself on, as it has every morning since the family had it retrofitted with the Smart House system of wiring five years ago. Within seconds, warm air whooshes out of heating ducts in the three bedrooms, while the water heater checks to make sure there’s plenty of hot water. In the kitchen, the coffee maker begins dripping at the same time the oven switches itself on to bake a fresh batch of cinnamon rolls. Next door in the study, the family’s personalized home newspaper, featuring articles on the subjects that interest them, such as financial news and stories about their community, is being printed by laser-jet printer off the home computer–all while the family sleeps.

6:30 A.M.

With a twitch, “Billy Rae,” the Morrows’ mobile home robot, unplugs himself from the kitchen wall outlet–where he has been recharging for the past six hours–then wheels out of the kitchen and down the hall toward the master bedroom for his first task of the day. Raising one metallic arm, Billy Rae gently knocks on the door, calling out the Morrows’ names and the time in a pleasant, if Southern drawl: ‘Hey, y’all–rise an’ shine!’

On the other side of the door, Alma Morrow, a 44-year-old information specialist. Pulling on some sweats, Alma heads for the tiny home gym, where she slips a credit–card-size X–ER Script–her personal exercise prescription–into a slot by the door. Electronic weights come out of the wall, and Alma begins her 20-minute workout.

Meanwhile, her husband, Bill, 45, a senior executive at a Los Angeles–based multinational corporation, is having a harder time. He’s still feeling exhausted from the night before, when his 70-year-old mother, Camille, who lives with the family, accidentally fell asleep with a lighted cigarette. Minutes after the house smoke detector notified them of a potential hazard, firefighters from the local station were pounding on the front door. Camille, one of the last of the old–time smokers, had blamed the accident on these “newfangled Indian cigarettes” she’s been forced to buy since India has overtaken the United States in cigarette production. Luckily, she only singed a pillowcase–and her considerable pride. Bill, however, had been unable to fall back asleep and had spent a couple of hours in the study at the personal computer, teleconferring with his counterparts in the firm’s Tokyo office. But this morning, he can’t afford to be late. With a grunt, he rolls out of bed and heads for the bathroom, where he swishes and swallows Denturinse–much easier and more effective than toothbrushing–and then hurries to get dressed. As he does, the video intercom buzzes. Camille’s collagen-improved face appears on the video screen, her gravelly voice booming over the speaker. Bill clicks off the camera on his side so Camille can’t see him in his boxer shorts, then talks to her. She tells him she wants him to drive her downtown to finalize her retirement plan with her attorney. Knowing this will make him late, he suggests that Alma could drop Camille off at the law firm’s branch office in the Granada Hills Community Center. Camille reluctantly agrees– much to Alma’s chagrin–then buzzes off. When the couple heads for the kitchen, they leave the bed unmade: Billy Rae can change the sheets.•

In his great song “Pretty Boy Floyd,” Woody Guthrie, knowing that when it comes to crime a collar can be white just as easily as blue, sang these words:

Yes, as through this world I’ve wandered
I’ve seen lots of funny men;

Some will rob you with a six-gun,
And some with a fountain pen.

For those who employ the latter modus operandi, not even a stylus, let alone a pen, is necessary anymore. Over the last four decades in the U.S. (and much of the rest of the developed world), money has mysteriously moved from the middle class into the accounts of the 1%, and no one seems completely sure how it was transferred. We only know that it’s shifted, that it’s been shifty. Maybe it was the manipulation of tax codes or the decline of unions or the rise of the machines or the forces of globalization or the invention of outlandish Wall Street products. Probably it was all of that and more. The result is the disappearance of the prosperity enjoyed by a far greater percentage of Americans in the aftermath of WWII through the early 1970s, which was created by a humming capitalist engine paired with steep progressive tax rates that redistributed the wealth. No one should wish to return to the pre-Civil Rights United States–wildly uneven in other odious ways–but there are some economic lessons to be learned there.

One thing that seems sure is that the vast accumulation of riches at the top isn’t the end result of a successful experiment in meritocracy. These are not uniformly the best, the brightest and the most deserving. Similarly, the shit-out-of-luck souls aren’t on the ever-widening bottom because of any defect of character or lack of work ethic. Some may drink or use drugs or divorce, but so do those whose wealth provides a cushion for such failings common to mere mortals. The main reason poor people are poor is that, at long last, they don’t have any money. They haven’t failed the system. Quite the contrary.

In a London Review of Books essay, Ed Miliband, the leader of the British Labour Party prior to Jeremy Corbyn, opines on the haves, the have-nots and the what-the-fuck situation we all find ourselves in, the eclipsed and the sun-kissed alike. The politician, who believes that beyond sheer unfairness, inequality ultimately inhibits economic growth, offers some prescriptions. The opening:

‘What do I see in our future today you ask? I see pitchforks, as in angry mobs with pitchforks, because while … plutocrats are living beyond the dreams of avarice, the other 99 per cent of our fellow citizens are falling farther and farther behind.’ Who said this? Jeremy Corbyn? Thomas Piketty? In fact it was Nick Hanauer, an American entrepreneur and multibillionaire, who in a TED talk in 2014 confessed to living a life that the rest of us ‘can’t even imagine’. Hanauer doesn’t believe he’s particularly talented or unusually hardworking; he doesn’t believe he has a great technical mind. His success, he says, is a ‘consequence of spectacular luck, of birth, of circumstance and of timing’. Just as his own extraordinary wealth can’t be explained by his unique talents, neither, he says, can rising inequality in the United States be justified on the grounds that it is a side effect of a broader economic success from which everyone benefits. As Henry Ford recognised, if you don’t pay ordinary workers decent wages, the economy will lack the demand to sustain economic growth.

Hanauer is in the vanguard of the ‘Fight for 15’, the campaign for a $15 minimum wage. Like Bill Gates and Warren Buffett, who have also issued loud warnings about inequality, he is heir to a long tradition of social concern among the wealthy in the US. They have reason to be worried. The last time inequality reached comparable levels was shortly before the Wall Street Crash. As Anthony Atkinson shows in Inequality: What Can Be Done?, inequality in the US fell for decades after the crash, before beginning to rise again in the 1970s. Since then the gap between the wealthy and the rest has grown steadily wider. The top 1 per cent now has nearly 20 per cent of total US personal income. In the 1980s, inequality in the UK went up even more sharply than in the US. Since then, overall UK inequality has been relatively stable but the income share of the top 1 per cent has increased significantly and now accounts for about 12 per cent of UK personal income. The important factors are rising inequality in wages, a decline in the share of the national income that wages represent as more money goes to corporate profits and dividends, and a reversal of redistribution from the rich to the poor.

The rise in inequality should not, Atkinson insists, be brushed aside as an inevitable effect of irresistible forces such as globalisation or developments in technology. It is driven by political choices.•

AI cracked backgammon in 1979, putting all other games on notice. But today’s announcement about a Google computer system besting a human Go champion was still surprising since most researchers thought we were years, perhaps a decade, from machine intelligence accomplishing such a feat in the complex, ancient game. What does this mean for Artificial General Intelligence and where does research head next? In a Conversation piece, Peter Cowling and Sam Devlin try to answer. An excerpt:

However the real world is a step up, full of ill-defined questions that are far more complex than even the trickiest of board games. The techniques which conquered Go can certainly be applied in medicine, education, science or any other domain where data is available and outcomes can be evaluated and understood.

The big question is whether Google just helped us towards the next generation of Artificial General Intelligence, where machines learn to truly think like – and beyond – humans. Whether we’ll see AlphaGo as a step towards Hollywood’s dreams (and nightmares) of AI agents with self-awareness, emotion and motivation remains to be seen. However the latest breakthrough points to a brave new future where AI will continue to improve our lives by helping us to make better-informed decisions in a world of ever-increasing complexity.

Now that Go has seemingly been cracked, AI needs a new grand challenge – a new “lab rat” – and it seems likely that many of these challenges will come from the $100 billion digital games industry. The ability to play alongside or against millions of engaged human players provides unique opportunities for AI research. At York’s centre for Intelligent Games and Game Intelligence, we’re working on projects such as building an AI aimed at player fun (rather than playing strength), for instance, or using games to improve well-being of people with Alzheimer’s. Collaborations between multidisciplinary labs like ours, the games industry and big business are likely to yield the next big AI breakthroughs.•

____________________________

“The possibilities of game play are endless.”

There’s never been greater access to books than there is right now, but all progress comes with a price. If print fiction and histories and such should disappear or become merely a luxury item, digital media would change the act of reading in unexpected ways over time.

Some see screen reading promoting a decline in analytical skills, but the human brain sure seems able to adapt to new forms once it becomes acclimated. Even as someone raised on paper books, I’m not worried that what’s lost in translation will be greater than what’s gained. Of course, I say that while still primarily using dead-tree volumes.

In a smart BBC Future article, Rachel Nuwer traces the fuzzy history of e-books and considers the future of reading. Some experts she interviews hope for a “bi-literate” society that values both the paperback and the Kindle. That would be a great outcome, but I don’t know how realistic a scenario it is. The opening:

When Peter James published his novel Host on two floppy disks in 1993, he was ill-prepared for the “venomous backlash” that would follow. Journalists and fellow writers berated and condemned him; one reporter even dragged a PC and a generator out to the beach to demonstrate the ridiculousness of this new form of reading. “I was front-page news of many newspapers around the world, accused of killing the novel,” James told pop.edit.lit. “[But] I pointed out that the novel was already dying at an alarming rate without my assistance.”

Shortly after Host’s debut, James also issued a prediction: that e-books would spike in popularity once they became as easy and enjoyable to read as printed books. What was a novelty in the 90s, in other words, would eventually mature to the point that it threatened traditional books with extinction. Two decades later, James’ vision is well on its way to being realised.

That e-books have surged in popularity in recent years is not news, but where they are headed – and what effect this will ultimately have on the printed word – is unknown. Are printed books destined to eventually join the ranks of clay tablets, scrolls and typewritten pages, to be displayed in collectors’ glass cases with other curious items of the distant past?

And if all of this is so, should we be concerned?•

In a series of articles in the New York Review of Books over the last couple of years, Sue Halpern has taken a thought-provoking look at the dubious side of the Digital Era, considering the impact of tech billionaires, technological unemployment and the Internet of Things.

Her latest salvo tries to locate the real legacy of Steve Jobs, who was mourned equally in office parks and Zuccotti Park. In doing so she calls on the two recent films on the Apple architect, Alex Gibney’s and Danny Boyle’s, and the new volume about him by Brent Schlender and Rick Tetzeli. Ultimately, the key truth may be that Jobs used a Barnum-esque “magic” and marketing myths to not only sell his new machines but to plug them into consumers’ souls.

An excerpt:

So why, Gibney wonders as his film opens—with thousands of people all over the world leaving flowers and notes “to Steve” outside Apple Stores the day he died, and fans recording weepy, impassioned webcam eulogies, and mourners holding up images of flickering candles on their iPads as they congregate around makeshift shrines—did Jobs’s death engender such planetary regret?

The simple answer is voiced by one of the bereaved, a young boy who looks to be nine or ten, swiveling back and forth in a desk chair in front of his computer: “The thing I’m using now, an iMac, he made,” the boy says. “He made the iMac. He made the Macbook. He made the Macbook Pro. He made the Macbook Air. He made the iPhone. He made the iPod. He’s made the iPod Touch. He’s made everything.”

Yet if the making of popular consumer goods was driving this outpouring of grief, then why hadn’t it happened before? Why didn’t people sob in the streets when George Eastman or Thomas Edison or Alexander Graham Bell died—especially since these men, unlike Steve Jobs, actually invented the cameras, electric lights, and telephones that became the ubiquitous and essential artifacts of modern life?* The difference, suggests the MIT sociologist Sherry Turkle, is that people’s feelings about Steve Jobs had less to do with the man, and less to do with the products themselves, and everything to do with the relationship between those products and their owners, a relationship so immediate and elemental that it elided the boundaries between them. “Jobs was making the computer an extension of yourself,” Turkle tells Gibney. “It wasn’t just for you, it was you.”•

The particular rules Clayton Christensen laid down for disruptive innovation probably don’t much matter because the world doesn’t exist within his constructs, but ginormous companies (even entire industries) being done in by much smaller ones has become an accepted part of life in the Digital Age.

In trying to explain this phenomenon, Christopher Mims of the Wall Street Journal explores the ideas in Anshu Sharma’s much-debated article about Stack Fallacy, which argues that companies moving up beyond their core businesses are likely to fail (Google+, anyone?), while those moving down into the guts of what they know have a far better chance. For an example of the latter, Mims writes of the ride-sharing sector. An excerpt:

To really understand the stack fallacy, it helps to recognize that companies move “down” the stack all the time, and it often strengthens their position. It is the same thing as vertical integration. For example, engineers of Apple’s iPhone know exactly what they want in a mobile chip, so Apple’s move to make its own chips has yielded enormous dividends in terms of how the iPhone performs. In the same way, Google’s move down its own stack—creating its own servers, designing its own data centers, etc.—allowed it to become dominant in search. Similarly, Tesla’s move to build its own batteries could—as long as it allows Tesla to differentiate its products in terms of price and/or performance—be a deciding factor in whether or not it succeeds.

Of course, the real test of a sweeping business hypothesis is whether or not it has predictive power. So here’s a prediction based on the stack fallacy: We’re more likely to see Uber succeed at making cars than to see General Motors succeed at creating a ride-sharing service like Uber. Both companies appear eager to invade each other’s territory. But, assuming that ride sharing becomes the dominant model for transportation, Uber has the advantage of knowing exactly what it needs in a vehicle for such a service.

It is also worth noting that the stack fallacy is just that: a fallacy and not a law of nature. There are ways around it. The key is figuring out how to have true, firsthand empathy for the needs of the customer for whatever product you’re trying to build next.•

In addition to yesterday’s trove of posts about the late, great Marvin Minsky, I want to refer you to a Backchannel remembrance of the AI pioneer by Steven Levy, the writer who had the good fortune to arrive on the scene at just the right moment in the personal-computer boom and the great talent to capture it. The journalist recalls Minsky’s wit and conversation almost as much as his contributions to tech. Just a long talk with the cognitive scientist was a perception-altering experience, even if his brilliance was intimidating. The opening:

There was a great contradiction about Marvin Minsky. As one of the creators of artificial intelligence (with John McCarthy), he believed as early as the 1950s that computers would have human-like cognition. But Marvin himself was an example of an intelligence so bountiful, unpredictable and sublime that not even a million Singularities could conceivably produce a machine with a mind to match his. At the least, it is beyond my imagination to conceive of that happening.

But maybe Marvin could imagine it. His imagination respected no borders.

Minsky died Sunday night, at 88. His body had been slowing down, but that mind had kept churning. He was more than a pioneering computer scientist — he was a guiding light for what intellect itself could do. He was also our Yoda. The entire computer community, which includes all of us, of course, is going to miss him. 

I first met him in 1982; I had written a story for Rolling Stone about young computer hackers, and it was optioned by Jane Fonda’s production company. I traveled to Boston with Fonda’s producer, Bruce Gilbert; and Susan Lyne, who had engineered my assignment to begin with. It was my first trip to MIT; my story had been about Stanford hackers.

I was dazzled by Minsky, an impish man of clear importance whose every other utterance was a rabbit’s hole of profundity and puzzlement.•

Five Books did an excellent interview with geneticist Matthew Cobb on the topic of the “History of Science.” In discussing William E. Burrows’ really fun 1998 title, This New Ocean: The Story of the First Space Age, Cobb comments on Wernher von Braun, an erstwhile Nazi and American hero who directly oversaw the murders of Jewish prisoners and who wanted to gas monkey astronauts in outer space (I swear!). An excerpt:

Question:

You just mentioned Enceladus so, talking of space missions, we’ll go on to your next book: William Burrows’s This New Ocean: The Story of the First Space Age published in 1998. What do you like about this book?

Matthew Cobb:

Space! Rockets! When it came out I was about to go on holiday and wanted a thick book to read. Burrows is a science journalist: not a historian or a scientist. I find it incredibly readable, very exciting. Although it was written by an American, it didn’t cover up the fact that Wernher von Braun, the brains behind the Apollo programme, was a Nazi Party member who was absolved for his involvement with the Hitler regime because he could build ICBMs. The book contains a good account—as good as there could be at the time, given the archives in the USSR hadn’t fully opened—of the huge advances the Russians made, which became obvious as they first flew up the Sputnik and then put the first man in space. I find it an extremely readable account of a time I grew up in—almost like a novel. I wasn’t reading it with a professional eye because I don’t know much about space history.

Question:

Burrows’s book is very dramatic—especially some of the moments like the first moon landing.

Matthew Cobb:

I remember it! I was 11 years old at the time. I was watching it with my uncle Brian in the middle of the night. Although I remember the excitement of seeing Neil Armstrong’s feet stepping down on to the ground, I was equally amazed by the fact that Brian was eating four Weetabix at three o’clock in the morning. We have lost a lot of the excitement about space flight. A year ago NASA trialled the Orion space capsule, which they may use to fly to Mars. The launch was in the middle of one of my lectures, so I decided to take a brief break and show the students the NASA live stream. You don’t see rocket launches on live TV anymore. The space shuttle has been scrapped and although there are rockets going to the Space Station, and private companies like SpaceX and Blue Origin developing reusable rockets, they don’t enjoy the same media attention as in the 60s and 70s. So we all sat and watched it—the students were very excited.•

Sadly, the legendary MIT cognitive scientist Marvin Minsky just died. From building a robotic tentacle arm nearly 50 years ago to consulting on 2001: A Space Odyssey, the AI expert–originator, really–thought as much as anyone could about smart machines during a lifetime. From Glenn Rifkin’s just-published New York Times obituary:

Well before the advent of the microprocessor and the supercomputer, Professor Minsky, a revered computer science educator at M.I.T., laid the foundation for the field of artificial intelligence by demonstrating the possibilities of imparting common-sense reasoning to computers.

“Marvin was one of the very few people in computing whose visions and perspectives liberated the computer from being a glorified adding machine to start to realize its destiny as one of the most powerful amplifiers for human endeavors in history,” said Alan Kay, a computer scientist and a friend and colleague of Professor Minsky’s.•

The following is a collection of past posts about his life and work.

_______________________________

“Such A Future Cannot Be Realized Through Biology”

Reading Michael Graziano’s great essay about building a mechanical brain reminded me of Marvin Minsky’s 1994 Scientific American article, “Will Robots Inherit the Earth?” It foresees a future in which intelligence is driven by nanotechnology, not biology. Two excerpts follow.

· · · · · · · · · ·

Everyone wants wisdom and wealth. Nevertheless, our health often gives out before we achieve them. To lengthen our lives, and improve our minds, in the future we will need to change our bodies and brains. To that end, we first must consider how normal Darwinian evolution brought us to where we are. Then we must imagine ways in which future replacements for worn body parts might solve most problems of failing health. We must then invent strategies to augment our brains and gain greater wisdom. Eventually we will entirely replace our brains — using nanotechnology. Once delivered from the limitations of biology, we will be able to decide the length of our lives–with the option of immortality — and choose among other, unimagined capabilities as well.

In such a future, attaining wealth will not be a problem; the trouble will be in controlling it. Obviously, such changes are difficult to envision, and many thinkers still argue that these advances are impossible–particularly in the domain of artificial intelligence. But the sciences needed to enact this transition are already in the making, and it is time to consider what this new world will be like.

Such a future cannot be realized through biology. 

· · · · · · · · · ·

Once we know what we need to do, our nanotechnologies should enable us to construct replacement bodies and brains that won’t be constrained to work at the crawling pace of “real time.” The events in our computer chips already happen millions of times faster than those in brain cells. Hence, we could design our “mind-children” to think a million times faster than we do. To such a being, half a minute might seem as long as one of our years, and each hour as long as an entire human lifetime.

But could such beings really exist? Many thinkers firmly maintain that machines will never have thoughts like ours, because no matter how we build them, they’ll always lack some vital ingredient. They call this essence by various names–like sentience, consciousness, spirit, or soul. Philosophers write entire books to prove that, because of this deficiency, machines can never feel or understand the sorts of things that people do. However, every proof in each of those books is flawed by assuming, in one way or another, the thing that it purports to prove–the existence of some magical spark that has no detectable properties.

I have no patience with such arguments.•
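
For what it’s worth, the subjective-time arithmetic in that excerpt roughly checks out at the stated million-fold speedup:

0.5\,\text{min} \times 10^{6} = 5 \times 10^{5}\,\text{min} \approx 347\,\text{days} \approx 1\,\text{year}

1\,\text{h} \times 10^{6} = 10^{6}\,\text{h} \approx 114\,\text{years} \approx \text{a long human lifetime}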

_________________________________

“A Century Ago, There Would Have Been No Way Even To Start Thinking About Making Smart Machines”

AI pioneer Marvin Minsky at MIT in ’68 showing his robotic arm, which was strong enough to lift an adult, gentle enough to hold a child.

Minsky discussing smart machines on Edge: 

Like everyone else, I think most of the time. But mostly I think about thinking. How do people recognize things? How do we make our decisions? How do we get our new ideas? How do we learn from experience? Of course, I don’t think only about psychology. I like solving problems in other fields — engineering, mathematics, physics, and biology. But whenever a problem seems too hard, I start wondering why that problem seems so hard, and we’re back again to psychology! Of course, we all use familiar self-help techniques, such as asking, “Am I representing the problem in an unsuitable way,” or “Am I trying to use an unsuitable method?” However, another way is to ask, “How would I make a machine to solve that kind of problem?”

A century ago, there would have been no way even to start thinking about making smart machines. Today, though, there are lots of good ideas about this. The trouble is, almost no one has thought enough about how to put all those ideas together. That’s what I think about most of the time.•

________________________________

“People Have A Fuzzy Idea Of Consciousness”

Consciousness is the hard problem for a reason. You could define it by saying it means we know our surroundings, our reality, but people get lost in delusions all the time, sometimes even nation-wide ones. What is it, then? Is it the ability to know something, anything, regardless of its truth? In this interview with Jeffrey Mishlove, cognitive scientist Marvin Minsky, no stranger to odysseys, argues against accepted definitions of consciousness, in humans and machines.

________________________________

“The Brain Doesn’t Work In A Simple Way”

Marvin Minsky, visionary of robotic arms, thinking computers and major motion pictures, is interviewed by Ray Kurzweil. The topic, unsurprisingly: “Is the Singularity Near?”

________________________________

“Do Outstanding Minds Differ From Ordinary Minds In Any Special Way?”

Humans experience consciousness even though we don’t have a solution to the hard problem. Will we have to crack the code before we can make truly smart machines–ones that not only do but know what they are doing–or is there a way to translate the skills of the human brain to machines without figuring out the mystery? From Marvin Minsky’s 1982 essay, “Why People Think Computers Can’t”:

CAN MACHINES BE CREATIVE?

We naturally admire our Einsteins and Beethovens, and wonder if computers ever could create such wondrous theories or symphonies. Most people think that creativity requires some special, magical ‘gift’ that simply cannot be explained. If so, then no computer could create – since anything machines can do most people think can be explained.

To see what’s wrong with that, we must avoid one naive trap. We mustn’t only look at works our culture views as very great, until we first get good ideas about how ordinary people do ordinary things. We can’t expect to guess, right off, how great composers write great symphonies. I don’t believe that there’s much difference between ordinary thought and highly creative thought. I don’t blame anyone for not being able to do everything the most creative people do. I don’t blame them for not being able to explain it, either. I do object to the idea that, just because we can’t explain it now, then no one ever could imagine how creativity works.

We shouldn’t intimidate ourselves by our admiration of our Beethovens and Einsteins. Instead, we ought to be annoyed by our ignorance of how we get ideas – and not just our ‘creative’ ones. We’re so accustomed to the marvels of the unusual that we forget how little we know about the marvels of ordinary thinking. Perhaps our superstitions about creativity serve some other needs, such as supplying us with heroes with such special qualities that, somehow, our deficiencies seem more excusable.

Do outstanding minds differ from ordinary minds in any special way? I don’t believe that there is anything basically different in a genius, except for having an unusual combination of abilities, none very special by itself. There must be some intense concern with some subject, but that’s common enough. There also must be great proficiency in that subject; this, too, is not so rare; we call it craftsmanship. There has to be enough self-confidence to stand against the scorn of peers; alone, we call that stubbornness. And certainly, there must be common sense. As I see it, any ordinary person who can understand an ordinary conversation has already in his head most of what our heroes have. So, why can’t ‘ordinary, common sense’ – when better balanced and more fiercely motivated – make anyone a genius?

So still we have to ask, why doesn’t everyone acquire such a combination? First, of course, it sometimes is just the accident of finding a novel way to look at things. But, then, there may be certain kinds of difference-in-degree. One is in how such people learn to manage what they learn: beneath the surface of their mastery, creative people must have unconscious administrative skills that knit the many things they know together. The other difference is in why some people learn so many more and better skills. A good composer masters many skills of phrase and theme – but so does anyone who talks coherently.

Why do some people learn so much so well? The simplest hypothesis is that they’ve come across some better ways to learn! Perhaps such ‘gifts’ are little more than tricks of ‘higher-order’ expertise. Just as one child learns to re-arrange its building-blocks in clever ways, another child might learn to play, inside its head, at rearranging how it learns!

Our cultures don’t encourage us to think much about learning. Instead we regard it as something that just happens to us. But learning must itself consist of sets of skills we grow ourselves; we start with only some of them and slowly grow the rest. Why don’t more people keep on learning more and better learning skills? Because it’s not rewarded right away, its payoff has a long delay. When children play with pails and sand, they’re usually concerned with goals like filling pails with sand. But once a child concerns itself instead with how to better learn, then that might lead to exponential learning growth! Each better way to learn to learn would lead to better ways to learn – and this could magnify itself into an awesome, qualitative change. Thus, first-rank ‘creativity’ could be just the consequence of little childhood accidents.

So why is genius so rare, if each has almost all it takes? Perhaps because our evolution works with mindless disrespect for individuals. I’m sure no culture could survive, where everyone finds different ways to think. If so, how sad, for that means genes for genius would need, instead of nurturing, a frequent weeding out.•

_______________________________

“Backgammon Is Now The First Board Or Card Game With, In Effect, A Machine World Champion”

For some reason, the editors of the New Yorker never ask me for advice. I don’t know what they’re thinking. I would tell them this if they did: Publish an e-book of the greatest technology journalism in the magazine’s history. Have one of your most tech-friendly writers compose an introduction and include Lillian Ross’ 1970 piece about the first home-video recorder, Malcolm Ross’ 1931 look inside Bell Labs, Anthony Hiss’ 1977 story about the personal computer, Hiss’ 1975 article about visiting Philip K. Dick in Los Angeles, and Jeremy Bernstein’s short 1965 piece and long 1966 one about Stanley Kubrick making 2001: A Space Odyssey.

Another inclusion could be “A.I.,” Bernstein’s 1981 profile of the great artificial-intelligence pioneer Marvin Minsky. (It’s gated, so you need a subscription to read it.) The opening:

In July of 1979, a computer program called BKG 9.8–the creation of Hans Berliner, a professor of computer science at Carnegie-Mellon University, in Pittsburgh–played the winner of the world backgammon championship in Monte Carlo. The program was run on a large computer at Carnegie-Mellon that was connected by satellite to a robot in Monte Carlo. The robot, named Gammonoid, had a visual-display backgammon board on its chest, which exhibited its moves and those of its opponent, Luigi Villa, of Italy, who by beating all his human challengers a short while before had won the right to play against the Gammonoid. The stakes were five thousand dollars, winner take all, and the computer won, seven games to one. It had been expected to lose. In a recent Scientific American article, Berliner wrote:

Not much was expected of the programmed robot…. Although the organizers had made Gammonoid the symbol of the tournament by putting a picture of it on their literature and little robot figures on the trophies, the players knew the existing microprocessors could not give them a good game. Why should the robot be any different?

This view was reinforced at the opening ceremonies in the Summer Sports Palace in Monaco. At one point the overhead lights dimmed, the orchestra began playing the theme of the film Star Wars, and a spotlight focused on an opening in the stage curtain through which Gammonoid was supposed to propel itself onto the stage. To my dismay the robot got entangled and its appearance was delayed for five minutes.

This was one of the few mistakes the robot made. Backgammon is now the first board or card game with, in effect, a machine world champion. Checkers, chess, go, and the rest will follow–and quite possibly soon. But what does that mean for us, for our sense of uniqueness and worth–especially as machines evolve whose output we can less distinguish from our own?•

________________________________

“Each One Of Us Already Has Experienced What It Is Like To Be Simulated By A Computer”

MaxheadroomMpegMan

We know so little about the tools we depend on every day. When I was a child, I was surprised that no one expected me to learn how to build a TV even though I watched one. But, no, I was just expected to process the surface of the box’s form and function, not to understand the inner workings. Throughout life, we use analogies and signs and symbols to make sense of things we constantly consume but don’t truly understand. Our processing of these basics is not unlike a computer’s. Marvin Minsky wrote brilliantly on this topic in an Afterword of a 1984 Vernor Vinge novel. An excerpt:

Let’s return to the question about how much a simulated life inside a world inside a machine could resemble our real life “out here.” My answer, as you know by now, is that it could be very much the same––since we, ourselves, already exist as processes imprisoned in machines inside machines! Our mental worlds are already filled with wondrous, magical, symbol–signs, which add to every thing we “see” its “meaning” and “significance.” In fact, all educated people have already learned how different are our mental worlds than the ‘real worlds’ that our scientists know.

Consider the table in your dining room; your conscious mind sees it as having familiar functions, forms, and purposes. A table is “a thing to put things on.” However, our science tells us that this is only in the mind; the only thing that’s “really there” is a society of countless molecules. That table seems to hold its shape only because some of those molecules are constrained to vibrate near one another, because of certain properties of force-fields that keep them from pursuing independent trajectories. Similarly, when you hear a spoken word, your mind attributes sense and meaning to that sound––whereas, in physics, the word is merely a fluctuating pressure on your ear, caused by the collisions of myriads of molecules of air––that is, of particles whose distances are so much less constrained.

And so––let’s face it now, once and for all: each one of us already has experienced what it is like to be simulated by a computer!•

_________________________________

“The Book Is About Ways To Read Out The Contents Of A Person’s Brain”

heads789-2

In 1992, AI legend Marvin Minsky believed that by the year 2023 people would be able to download the contents of their brains and achieve “immortality.” That was probably too optimistic. He also thought such technology would only be possible for people who had great wealth. That was probably too pessimistic. From an interview that Otto Laske conducted with Minsky about his sci-fi novel, The Turing Option:

Otto Laske:

I hear you are writing a science fiction novel. Is that your first such work?

Marvin Minsky:

Well, yes, it is, and it is something I would not have tried to do alone. It is a spy-adventure techno-thriller that I am writing together with my co-author Harry Harrison. Harry did most of the plotting and invention of characters, while I invented new brain science and AI technology for the next century.

Otto Laske:

At what point in time is the novel situated?

Marvin Minsky:

It’s set in the year 2023.

Otto Laske: 

I may just be alive to experience it, then …

Marvin Minsky:

Certainly. And furthermore, if the ideas of the story come true, then anyone who manages to live until then may have the opportunity to live forevermore…

Otto Laske: 

How wonderful …

Marvin Minsky:

 … because the book is about ways to read out the contents of a person’s brain, and then download those contents into more reliable hardware, free from decay and disease. If you have enough money…

Otto Laske: 

 That’s a very American footnote…

Marvin Minsky:

Well, it’s also a very Darwinian concept.

Otto Laske: 

Yes, of course.

Marvin Minsky:

There isn’t room for every possible being in this finite universe, so, we have to be selective …

Otto Laske: 

 And who selects, or what is the selective mechanism?

Marvin Minsky:

Well, normally one selects by fighting. Perhaps somebody will invent a better way. Otherwise, you have to have a committee …

Otto Laske: 

That’s worse than fighting, I think.•

___________________________________

“We Are On The Threshold Of An Era That Will Be Strongly Influenced, And Quite Possibly Dominated, By Intelligent Machines”

sa

In the introduction to his 1960 paper, “Steps Toward Artificial Intelligence,” Marvin Minsky, who later served as a technical consultant for 2001: A Space Odyssey, succinctly described the present and future of computers:

A VISITOR to our planet might be puzzled about the role of computers in our technology. On the one hand, he would read and hear all about wonderful “mechanical brains” baffling their creators with prodigious intellectual performance. And he (or it) would be warned that these machines must be restrained, lest they overwhelm us by might, persuasion, or even by the revelation of truths too terrible to be borne. On the other hand, our visitor would find the machines being denounced on all sides for their slavish obedience, unimaginative literal interpretations, and incapacity for innovation or initiative; in short, for their inhuman dullness.

Our visitor might remain puzzled if he set out to find, and judge for himself, these monsters. For he would find only a few machines (mostly general-purpose computers, programmed for the moment to behave according to some specification) doing things that might claim any real intellectual status. Some would be proving mathematical theorems of rather undistinguished character. A few machines might be playing certain games, occasionally defeating their designers. Some might be distinguishing between hand-printed letters. Is this enough to justify so much interest, let alone deep concern? I believe that it is; that we are on the threshold of an era that will be strongly influenced, and quite possibly dominated, by intelligent problem-solving machines. But our purpose is not to guess about what the future may bring; it is only to try to describe and explain what seem now to be our first steps toward the construction of “artificial intelligence.”•

_________________________________

“He Is, In A Sense, Trying To Second-Guess The Future”

sk

I posted a brief Jeremy Bernstein New Yorker piece about Stanley Kubrick that was penned in 1965 during the elongated production of 2001: A Space Odyssey. The following year the same writer turned out a much longer profile for the same magazine about the director and his sci-fi masterpiece. Among many other interesting facts, it mentions that MIT AI legend Marvin Minsky, who’s appeared on this blog many times, was a technical consultant for the film. An excerpt from “How About a Little Game?”:

By the time the film appears, early next year, Kubrick estimates that he and [Arthur C.] Clarke will have put in an average of four hours a day, six days a week, on the writing of the script. (This works out to about twenty-four hundred hours of writing for two hours and forty minutes of film.) Even during the actual shooting of the film, Kubrick spends every free moment reworking the scenario. He has an extra office set up in a blue trailer that was once Deborah Kerr’s dressing room, and when shooting is going on, he has it wheeled onto the set, to give him a certain amount of privacy for writing. He frequently gets ideas for dialogue from his actors, and when he likes an idea he puts it in. (Peter Sellers, he says, contributed some wonderful bits of humor for Dr. Strangelove.)

In addition to writing and directing, Kubrick supervises every aspect of his films, from selecting costumes to choosing incidental music. In making 2001, he is, in a sense, trying to second-guess the future. Scientists planning long-range space projects can ignore such questions as what sort of hats rocket-ship hostesses will wear when space travel becomes common (in 2001 the hats have padding in them to cushion any collisions with the ceiling that weightlessness might cause), and what sort of voices computers will have if, as many experts feel is certain, they learn to talk and to respond to voice commands (there is a talking computer in 2001 that arranges for the astronauts’ meals, gives them medical treatments, and even plays chess with them during a long space mission to Jupiter–‘Maybe it ought to sound like Jackie Mason,’ Kubrick once said), and what kind of time will be kept aboard a spaceship (Kubrick chose Eastern Standard, for the convenience of communicating with Washington). In the sort of planning that NASA does, such matters can be dealt with as they come up, but in a movie everything is visible and explicit, and questions like this must be answered in detail. To help him find the answers, Kubrick has assembled around him a group of thirty-five artists and designers, more than twenty-five special effects people, and a staff of scientific advisers. By the time this picture is done, Kubrick figures that he will have consulted with people from a generous sampling of the leading aeronautical companies in the United States and Europe, not to mention innumerable scientific and industrial firms. One consultant, for instance, was Professor Marvin Minsky, of M.I.T., who is a leading authority on artificial intelligence and the construction of automata. (He is now building a robot at M.I.T. that can catch a ball.) Kubrick wanted to learn from him whether any of the things he was planning to have his computers do were likely to be realized by the year 2001; he was pleased to find out that they were.•

_____________________________

“We Will Go On, As Always, To Seek More Robust Illusions”

416px-Von_Krahl_Theatre_The_Magic_flute_Kantele

Times of great ignorance are petri dishes for all manner of ridiculous myths, but, as we’ve learned, so are times of great information. The more things can be explained, the more we want things beyond explanation. And maybe for some people, it’s a need rather than a want. The opening of “Music, Mind and Meaning,” Marvin Minsky’s 1981 Computer Music Journal essay:

Why do we like music? Our culture immerses us in it for hours each day, and everyone knows how it touches our emotions, but few think of how music touches other kinds of thought. It is astonishing how little curiosity we have about so pervasive an “environmental” influence. What might we discover if we were to study musical thinking?

Have we the tools for such work? Years ago, when science still feared meaning, the new field of research called “Artificial Intelligence” started to supply new ideas about “representation of knowledge” that I’ll use here. Are such ideas too alien for anything so subjective and irrational, aesthetic, and emotional as music? Not at all. I think the problems are the same and those distinctions wrongly drawn: only the surface of reason is rational. I don’t mean that understanding emotion is easy, only that understanding reason is probably harder. Our culture has a universal myth in which we see emotion as more complex and obscure than intellect. Indeed, emotion might be “deeper” in some sense of prior evolution, but this need not make it harder to understand; in fact, I think today we actually know much more about emotion than about reason.

Certainly we know a bit about the obvious processes of reason–the ways we organize and represent ideas we get. But whence come those ideas that so conveniently fill these envelopes of order? A poverty of language shows how little this concerns us: we “get” ideas; they “come” to us; we are “reminded of” them. I think this shows that ideas come from processes obscured from us and with which our surface thoughts are almost uninvolved. Instead, we are entranced with our emotions, which are so easily observed in others and ourselves. Perhaps the myth persists because emotions, by their nature, draw attention, while the processes of reason (much more intricate and delicate) must be private and work best alone.

The old distinctions among emotion, reason, and aesthetics are like the earth, air, and fire of an ancient alchemy. We will need much better concepts than these for a working psychic chemistry.

Much of what we now know of the mind emerged in this century from other subjects once considered just as personal and inaccessible but which were explored, for example, by Freud in his work on adults’ dreams and jokes, and by Piaget in his work on children’s thought and play. Why did such work have to wait for modern times? Before that, children seemed too childish and humor much too humorous for science to take them seriously.

Why do we like music? We all are reluctant, with regard to music and art, to examine our sources of pleasure or strength. In part we fear success itself– we fear that understanding might spoil enjoyment. Rightly so: art often loses power when its psychological roots are exposed. No matter; when this happens we will go on, as always, to seek more robust illusions!•

________________________

“Most People Think Computers Will Never Be Able To Think”

h9

Here’s the opening of a 1982 AI Magazine piece by MIT cognitive scientist Marvin Minsky, which considers the possibility of computers being able to think:

Most people think computers will never be able to think. That is, really think. Not now or ever. To be sure, most people also agree that computers can do many things that a person would have to be thinking to do. Then how could a machine seem to think but not actually think? Well, setting  aside the question of what thinking actually is, I think that most of us would answer that by saying that in these cases, what the computer is doing is merely a superficial imitation of human intelligence. It has been designed to obey certain simple commands, and then it has been provided with programs composed of those commands. Because of this, the computer has to obey those commands, but without any idea of what’s happening.

Indeed, when computers first appeared, most of their designers intended them for nothing only to do huge, mindless computations. That’s why the things were called “computers”. Yet even then, a few pioneers — especially Alan Turing — envisioned what’s now called ‘Artificial Intelligence’ – or ‘AI.’ They saw that computers might possibly go beyond arithmetic, and maybe imitate the processes that go on inside human brains.

Today, with robots everywhere in industry and movie films, most people think AI has gone much further than it has. Yet still, ‘computer experts’ say machines will never really think. If so, how could they be so smart, and yet so dumb?•

___________________________

“Using This Instrument, You Can ‘Work’ In Another Room, In Another City, In Another Country, Or On Another Planet”

vrnasalead-thumb-550x388-19831

The opening of “Telepresence,” Marvin Minsky’s 1980 Omni think piece which suggested we should bet our future on a remote-controlled economy:

You don a comfortable jacket lined with sensors and muscle-like motors. Each motion of your arm, hand, and fingers is reproduced at another place by mobile, mechanical hands. Light, dexterous, and strong, these hands have their own sensors through which you see and feel what is happening. Using this instrument, you can ‘work’ in another room, in another city, in another country, or on another planet. Your remote presence possesses the strength of a giant or the delicacy of a surgeon. Heat or pain is translated into informative but tolerable sensation. Your dangerous job becomes safe and pleasant.

The crude robotic machines of today can do little of this. By building new kinds of versatile, remote-controlled mechanical hands, however, we might solve critical problems of energy, health, productivity, and environmental quality, and we would create new industries. It might take 10 to 20 years and might cost $1 billion—less than the cost of a single urban tunnel or nuclear power reactor or the development of a new model of automobile.•
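
Stripped to its essentials, the scheme Minsky describes is a loop: sense the operator, reproduce the motion remotely, and translate whatever the remote hand feels back into something "informative but tolerable." The sketch below is my own rough rendering of that loop, not anything from the Omni piece; the jacket reading, the force numbers, and the 20-newton clamp are all invented.

```python
# Minimal sketch of a telepresence loop in the spirit of the essay:
# operator motions drive a remote manipulator, and remote sensations
# are scaled back into tolerable feedback. Everything here (classes,
# numbers) is invented for illustration.

from dataclasses import dataclass

@dataclass
class JointState:
    angle_deg: float     # operator's measured joint angle
    grip_force_n: float  # force fed back to the operator, in newtons

def read_operator_jacket() -> float:
    """Stand-in for the sensor-lined jacket: returns one joint angle."""
    return 42.0

def drive_remote_hand(angle_deg: float) -> float:
    """Stand-in for the remote manipulator: applies the commanded angle
    and reports the contact force it feels, in newtons."""
    return 180.0  # pretend the remote hand just gripped something hard

def feedback_to_operator(force_n: float, limit_n: float = 20.0) -> float:
    """Translate remote force into a tolerable sensation by clamping it."""
    return min(force_n, limit_n)

def telepresence_step() -> JointState:
    angle = read_operator_jacket()               # 1. sense the operator
    remote_force = drive_remote_hand(angle)      # 2. reproduce motion remotely
    felt = feedback_to_operator(remote_force)    # 3. feed sensation back, attenuated
    return JointState(angle_deg=angle, grip_force_n=felt)

print(telepresence_step())  # JointState(angle_deg=42.0, grip_force_n=20.0)
```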

Tags:

P9TpCaRn

Scott Kelly, who’s nearing the end of a one-year stint aboard the International Space Station, just conducted an Ask Me Anything at Reddit. Because the astronaut and his inquisitors are human, many of the questions had to do with urine, food and sleep. A few exchanges follow.

________________________________

Question:

What is the largest misconception about space/space travel that society holds onto?

Scott Kelly:

I think a lot of people think that because we give the appearance that this is easy that it is easy. I don’t think people have an appreciation for the work that it takes to pull these missions off, like humans living on the space station continuously for 15 years. It is a huge army of hard working people to make it happen.

________________________________

Question:

During a spacewalk what does it feel like having nothing but a suit (albeit a rather sophisticated one) between you and space?

Scott Kelly:

It is a little bit surreal to know that you are in your own little spaceship and a few inches from you is instant death.

________________________________

Question:

Upon completing your year in space, if the offer was on the table, would you do a two-year space mission in the future? And why? Would it depend on the mission (Moon, Mars, ISS again)?

Scott Kelly:

It would definitely depend on the mission. If it was to the moon or Mars, yeah I would do it.

________________________________

Question:

What’s the creepiest thing you’ve encountered while on the job?

Scott Kelly:

Generally it has to do with the toilet. Recently I had to clean up a gallon-sized ball of urine mixed with acid.

________________________________

Question:

Why do you always have your arms folded?

Scott Kelly:

Your arms don’t hang by your side in space like they do on Earth because there is no gravity. It feels awkward to have them floating in front of me. It is just more comfortable to have them folded. I don’t even have them floating in my sleep, I put them in my sleeping bag.

________________________________

Question:

Could you tell us something unusual about being in space that many people don’t think about?

Scott Kelly:

The calluses on your feet in space will eventually fall off. So, the bottoms of your feet become very soft like newborn baby feet. But the top of my feet develop rough alligator skin because I use the top of my feet to get around here on space station when using foot rails.

________________________________

Question:

What’s it like to sleep in 0G? It must be great for the back. Does the humming of the machinery in the station affect your sleep at all?

Scott Kelly:

Sleeping here is harder here in space than on a bed because the sleep position here is the same position throughout the day. You don’t ever get that sense of gratifying relaxation here that you do on Earth after a long day at work. Yes, there are humming noises on station that affect my sleep, so I wear ear plugs to bag.

________________________________

Question:

Does the ISS have any particular smell?

Scott Kelly:

Smells vary depending on what segment you are in. Sometimes it has an antiseptic smell. Sometimes it has an odor that smells like garbage. But the smell of space when you open the hatch smells like burning metal to me.

________________________________

Question:

What will be the first thing you eat once you’re back on Earth?

Scott Kelly:

The first thing I will eat will probably be a piece of fruit (or a cucumber) the Russian nurse hands me as soon as I am pulled out of the space capsule and begin initial health checks.

________________________________

Question:

What ONE thing will you forever do differently after your safe return home?

Scott Kelly:

I will appreciate nature more.•

Tags:

New-Apple-Campus-Renderings_1

The technocratic office space has some of its roots in Fascist Italy, in the work of Italo Balbo, though Mussolini’s Air Minister wasn’t overly concerned, as Google and Facebook are now, with swallowing up employees’ lives by smothering them with amenities (though he did have coffee delivered to their desks via pneumatic tubes!). He just wanted the “trains” to run on time.

For a lot of workers in today’s Gig Economy, the office has disappeared, the software serving as invisible middleman. The inverse of that reality is the sprawling technological wonderlands that are campuses from Apple to Zappos (which actually tried to reinvent downtown Las Vegas), with their amazing perks and services aimed at making managers and engineers feel not just at home but happy, incentivizing them to remain chained, if virtually, to their desks.

In a smart Aeon essay, Benjamin Naddaff-Hafrey traces the history of today’s all-inclusive technological “paradises,” isolationist attempts at utopias in a time of economic uncertainty and fear of terrorism, to yesteryear’s enclosed company towns and college campuses.

An excerpt:

Google boasts more than 2 million job applicants a year. National media hailed its office plans as a ‘glass utopia’. There are hosts of articles for businesspeople on how to make their offices more like Google’s workplace. A 2015 CNNMoney survey of business students around the world showed Google as their most desired employer. Its campus is a cultural symbol of that desirability.

The specifics of Google’s proposed Mountain View office are unprecedented, but the scope of the campus is part of an emerging trend across the tech world. Alongside Google’s neighbourhood is a recent Facebook open office on their campus that, as the largest open office in the world, parallels the platform’s massive online community. Both offices seem modest next to the ambitious and fraught effort of Tony Hsieh, CEO of the online fashion retailer Zappos, to revitalise the downtown Las Vegas area around Zappos’ office in the old City Hall.

Such offices symbolise not just the future of work in the public mind, but also a new, utopian age with aspirations beyond the workplace. The dream is a place at once comfortable and entrepreneurial, where personal growth aligns with profit growth, and where work looks like play.

Yet though these tech campuses seem unprecedented, they echo movements of the past. In an era of civic wariness and economic fragility, the ‘total’ office heralds the rise of a new technocracy. In a time when terrorism from abroad provokes our fears, this heavily-planned workplace harks back to the isolationist values of the academic campus and even the social planning of the company town. As physical offices, they’re exceptional places to work – but while we increasingly uphold these places as utopic models for community, we make questionable assumptions about the best version of our shared life and values.•

Tags:

tancrede

Often the other side of extreme beauty is something too horrible to look at. One of the abiding memories of my childhood is seeing a brief clip of 73-year-old Karl Wallenda plunging to his death one windy day in San Juan. Why was that old man up on a wire? How did he even get to that age behaving in such a way?

French daredevil Tancrède Melet didn’t reach senior status, having earlier this month, at age 32, suffered a deadly fall from grace from a hot-air balloon. Similarly to Philippe Petit, he thought himself more philosopher and artist than extreme athlete, though intellectualizing didn’t soften his crash landing. An erstwhile engineer, he climbed to the sky to escape the air-conditioned nightmare, and he managed that feat, if only for a short while.

A thoughtful Economist obituary celebrates the audacity that abbreviated Melet’s life, which is one way to look at it. An excerpt:

Essentially he saw himself as an artist of the void, weaving together base-jumping, acrobatics and highlining to make hair-raising theatre among the peaks. Love of wild mélanges had been encouraged by his parents, who took him out of school when he was bullied for a stammer and, instead, let him range over drawing, music, gymnastics and the circus. Though for four years he slaved as a software engineer, he dreamed of recovering that freedom.

“One beautiful day” he threw up the job, bought a van, and took to the roads of France to climb and walk the slackwire. In the Verdon gorges of the Basses-Alpes he fell in with a fellow enthusiast, Julien Millot, an engineer of the sort who could fix firm anchors among snow-covered rocks for lines that spanned crevasses; with him he formed a 20-strong team, the Flying Frenchies, composed of climbers, cooks, musicians, technicians and clowns. These kindred spirits gave him confidence to push ever farther out into empty space.

Many thought him crazy. That was unfair. He respected the rules of physics, and made sure his gear was safe. When he died, by holding on too long to the rope of a hot-air balloon that shot up too fast, he had been on the firm, dull ground, getting ready. It looked like another devil-prompted connerie to push the limits of free flight, but this time there was no design in it. He was just taken completely by surprise, as he had hoped he might be all along.•

Tags:

jojo1

From the February 6, 1913 New York Times:

ANN ARBOR, Mich.–The brain of a dog was transferred to a man’s skull at University Hospital here to-day. W.A. Smith of Kalamazoo had been suffering from abscess on the brain, and in a last effort to save his life this remarkable operation was performed. 

Opening his skull, the surgeons removed the diseased part of his brain, and in its place substituted the brain of a dog.

Smith was resting comfortably to-night, and the surgeons say he has a good chance to recover.•

teachingmachine31

skinnerteachingmachine5

teachingmachine1975

It’s perplexing that video games aren’t used to teach children history and science, though the economics aren’t easy. A blockbuster game on par with today’s best offerings can cost hundreds of millions to develop and design, and that’s a steep price without knowing if such software would be welcomed into classrooms.

In addition to cost, there’s always been a prejudice against learning devices because they seem to reduce students into just more machines. That’s not altogether false if you consider that B.F. Skinner saw pupils as “programmable.” An Atlantic article by Jacek Krywko looks at the latest attempts at making mind-improving machines, which will not only teach language but also “monitor things like joy, sadness, boredom, and confusion.” Such robot social intelligence is thought to be the key difference: Don’t try to make the students more like machines but the machines more like the students.

A passage about Skinner’s failed attempts in the 1950s at making education more robotic:

His new device taught by showing students questions one at a time, with the idea that the user would be rewarded for each right answer.

This time, there was no “cultural inertia.” Teaching machines flooded the market, and backlash soon followed. Kurt Vonnegut called the machines “playthings” and argued that they couldn’t prepare a kid for “one-millionth of what is going to hit him in the teeth, ready or not.” Fortune ran a story headlined “Can People Be Taught Like Pigeons?” By the end of the ‘60s, teaching machines had once again fallen out of favor. The concept briefly resurfaced again in the ‘80s, but the lack of quality educational software—and the public’s perception of mechanized teachers as something vaguely Orwellian—meant they once again failed to gain much traction.

But now, they’re back for another try.

Scientists in Germany, Turkey, the Netherlands, and the U.K. are currently working on language-teaching machines more complex than anything [Sidney] Pressey or Skinner dreamed up.•
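
The mechanism the excerpt describes, one item at a time with a reward for each correct answer, is simple enough to sketch. Here's a toy version of my own, not the Atlantic's or Skinner's actual device; the questions, the simulated student, and the "reward" messages are invented for illustration.

```python
# Toy sketch of a Skinner-style teaching machine: present one item at a
# time, check the response, and "reward" correct answers by advancing.
# Items and reinforcement messages are invented for illustration.

QUESTIONS = [
    ("2 + 2 = ?", "4"),
    ("Capital of France?", "paris"),
]

def run_teaching_machine(responses):
    """Step through items; repeat an item until the answer is correct."""
    answers = iter(responses)
    i = 0
    while i < len(QUESTIONS):
        prompt, correct = QUESTIONS[i]
        response = next(answers)
        if response.strip().lower() == correct:
            print(f"{prompt} -> correct, advance")  # the 'reward'
            i += 1
        else:
            print(f"{prompt} -> try again")         # no reward, repeat item

# Simulated student: one wrong answer, then corrections.
run_teaching_machine(["5", "4", "Paris"])
```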

Tags: , ,

dezeen_Galaxy-SOHO-by-Zaha-Hadid-Architects-13

Some structures survive because they’re made of sturdy material and some because of enduring symbolism. Chinese real-estate billionaire Zhang Xin doesn’t possess the hubris to believe her buildings, even those designed by Pritzker winners, will outlast the Great Wall, but she’s hopeful about her nation’s future despite present-day economic turbulence. Zhang thinks the country must be more open politically and culturally, perhaps become a democratic state, and has invested heavily toward those ends by funding scholarships for students to be educated at top universities all over the world.

An excerpt from Bernhard Zand’s Spiegel interview with her:

Spiegel:

Zhang Xin, if China’s economy was an enterprise and you were running it, how would you make your company fit for the future?

Zhang Xin:

No economy, no company, in fact no individual can develop its full potential today without embracing two fundamental trends — globalization and digitalization. They will dominate for quite some time to come.

Spiegel:

What does this mean for China?

Zhang Xin:

It means that the country needs to continue opening up and keep connecting. It needs to realize that the world has become one. The old concept of isolation, the idea that you can solve your problems on your own does not work anymore — neither in cultural, economic, nor political terms. Isolation means a lack of growth. I grew up in China at a time when the country was completely isolated. That era is over.

Spiegel:

When countries prosper economically there comes a time when its people start asking for greater political participation. Will this eventually happen in China, too?

Zhang Xin:

I said before that the Chinese no longer crave so much for food and accommodation, but they do crave democracy. I stand by that. I don’t know which model China will follow. But the higher our standard of living, the higher our levels of education, the further people will look around. And we can see which level of openness other societies enjoy. We are no different — we too want more freedom. The question is: How much freedom will be allowed?

Spiegel:

Today the silhouettes of your buildings dominate the skylines of Beijing and Shanghai, almost serving as a signature of modern China. Have you ever wondered how long these buildings will continue to stand tall and just how sustainable these structures that you have created together with your architects will be?

Zhang Xin:

We have become so quick and effective in building things today. It would be easy to build another Pyramid of Giza or another Great Wall. But these buildings haven’t withstood the test of time because of their building quality. They stand tall because they have a symbolic value, they represent a culture. I’m afraid what we are building today will not have the same impact and sustainability of the architecture of a 100, 500 or 1,000 years ago. The buildings of those days were miracles. We don’t perform such miracles today. So we should be a little more modest. For my part, I’ll be glad to show one of my buildings one day to my grandchildren and say: I’m proud of that.•

Tags: ,

RUMSFELD_DCSA106.jpg

Strategy would not seem to be Donald Rumsfeld’s strong suit.

Despite that, the former Dubya Defense Secretary marshaled his forces and created an app for a strategic video game called “Churchill Solitaire,” based on an actual card game played incessantly during WWII by the British Prime Minister. If you’re picturing an ill-tempered, computer-illiterate senior barking orders into a Dictaphone, then you’ve already figured out Rumsfeld’s creative process. At least tens of thousands of people were not needlessly killed during the making of the app.

From Julian E. Barnes at the Wall Street Journal:

Mr. Rumsfeld can’t code. He doesn’t much even use a computer. But he guided his young digitally minded associates who assembled the videogame with the same method he used to rule the Pentagon—a flurry of memos called snowflakes.

As a result, “Churchill Solitaire” is likely the only videogame developed by an 83-year-old man using a Dictaphone to record memos for the programmers.

At the Pentagon, Mr. Rumsfeld was known for not mincing words with his memos. Age hasn’t mellowed him.

“We need to do a better job on these later versions. They just get new glitches,” reads one note from Mr. Rumsfeld. “[W]e ought to find some way we can achieve steady improvement instead of simply making new glitches.”

Other notes were arguably more constructive, if still sharply worded.

“Instead of capturing history, it is getting a bit artsy,” he wrote in one snowflake in which he suggested ways to make the game better evoke Churchill—including scenes from World War II and quotes from the prime minister, changes that made it into the final game.•

______________________

“One of the strangest interviews I’ve ever done.”

Tags: , ,

timothy_leary

In 1993, three years before his death, a shaky Dr. Timothy Leary was hired by ABC to interview fellow drug user Billy Idol about the new album (remember those?) Cyberpunk. From his first act as an LSD salesman, Leary was intrigued by the intersection of pharmaceuticals and technology. After a stretch in prison, the guru reinvented himself as a full-time technologist, focusing specifically on software design and space exploration. One trip or another, I suppose.

Given the year this network special (which also featured the Ramones and Television) was broadcast, it’s no surprise the pair sneer at the marketing of the Generation X concept. Leary offers that cyberpunk means that “we have to be smarter than people who run the big machines.” Or maybe it means that we can purchase crap on eBay until the Uber we ordered arrives. Leary tells Idol that his music is “changing middle-class robot society.” Oh, Lord. Well, I’ll give the good doctor credit for saying that computers would rearrange traditional creative and economic roles.

This Q&A runs for roughly the first ten minutes, and while the footage may be of crappy quality, it’s a relic worth the effort.

Tags: ,

Baseball-projection systems were generally woeful in 2015

Predictions are really difficult in a sport that features athletes hitting a round ball with a round bat, in which small differences in eyesight are so key and a couple of injuries or trades can make all the difference. Despite the statistical revolution, it’s hard to say what will happen. And the things that are pretty evident are known by every franchise. How to get an edge?

There’s no doubt the data arms race between clubs, which Branch Rickey birthed during the Cold War, is becoming even more information-rich as technology and biotech play an increasingly large role. Brains as well as elbows are to be X-rayed. The deeper you dig, the more the returns may diminish, but perhaps you strike gold.
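
To make "projection system" concrete, here's a minimal sketch of the sort of naive baseline the real systems try to beat: weight the last few seasons, most recent first, and regress toward a league mean. The weights, the league average, and the sample player are all invented, and while the scheme loosely echoes the public "Marcel" baseline, it isn't any team's actual model.

```python
# Hedged sketch of a naive batting-average projection: weight the last
# three seasons 5/4/3 (most recent first) and regress toward a league
# mean. Weights, league average, and the sample player are invented.

def project_avg(last3_avgs, last3_abs, league_avg=0.255, regress_abs=600):
    """last3_avgs / last3_abs: the player's last three seasons, most recent first."""
    weights = (5, 4, 3)
    num = sum(w * avg * ab for w, avg, ab in zip(weights, last3_avgs, last3_abs))
    den = sum(w * ab for w, ab in zip(weights, last3_abs))
    # Regress toward the league average in proportion to how little data we have.
    num += league_avg * regress_abs
    den += regress_abs
    return num / den

# A made-up player: .310, .270, .290 over 550/480/600 at-bats.
print(round(project_avg([0.310, 0.270, 0.290], [550, 480, 600]), 3))  # ~0.289
```

The regression term is the whole game in a sketch like this: it keeps a hot season from being read as a new true talent level, which is also why baseline projections feel conservative and why brain scans, bat sensors and biometrics look so tempting as extra signal.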

Fangraphs, creator of some of those awful 2015 projections, has an article by Adam Guttridge and David Ogren about next-level data collection, explaining what teams are doing to try to acquire significantly more info than fans or their fellow front offices. An excerpt:

Third-party companies are supplying a wealth of data which previously didn’t exist. The most publicized forms of that have been Trackman and Statcast. The key phrase here is data, as opposed to supplying new analysis. Data is the manna from which new analysis may come, and new types or sources of data expand the curve under which we can operate. That’s a fundamentally good thing.

There’s a wave of companies providing something different than Statcast and Trackman. While Statcast and Trackman are generally providing data that’s a more granular form of information which we already have — i.e. more detailed accounts of hitting, fielding, or pitching — others are aiming to provide information in spaces it hasn’t yet been available. A startup named DeCervo is using brain-scan technology to map the relationship between cognition and athletic performance. Wearable-tech companies like Motus and Zepp aim to provide detailed, data-centric information in the form of bat speed, a pitcher’s arm path, and more. Biometric solutions like Kitman Labs are competing to capture and provide biometric data to teams as well.

The solutions which provide more granular data (Trackman, Statcast, and also ever-evolving developments from Baseball Info Solutions) are of perhaps unknown significance. They offer a massive volume of data, but it’s an open question as to whether it yet offers significant actionable information, whether it has value as a predictive/evaluative tool rather than merely a descriptive one.•

Tags: ,
