Excerpts



Everyone marvels at the otherworldly ambition of the outré Arab state of Dubai, but nobody does anything about it. I’d like a full-length book about the emirate from Douglas Coupland or George Saunders, and I’d like it now. One decadent desert dream which may or may not come to fruition: an underwater tennis complex. It could cost $2.5 billion, but who’s counting? Castles carved into the sand by quasi-slave labor in the 21st century should be almost beyond reckoning, yet it sadly doesn’t seem an anachronism. From Cordelia Mantsebo at Elite Traveler:

After boasting of a tennis court high up in the air built atop the 1,000-foot-tall Burj al Arab hotel, plans for the world’s first underwater tennis court in Dubai were revealed in April last year. Today, Kotala has revealed the project has eyed potential US investors to turn this project into a reality while he works on the final designs for the concept.

In April last year, Polish architect Krzysztof Kotala made global headlines when he unveiled initial designs of the Underwater Dubai Tennis Center. According to Kotala, plans for this venture are set to move a step closer to reality as he confirmed he was in talks with US investors. He also confirmed he is currently working on the final designs for the concept.•



There are dual, deep-seated reasons for the modern preoccupation with apocalypse, which has never been more pronounced in literature and art. Part of it has to do with a dissatisfaction with what we’ve created and will create. It’s the ultimate nostalgia: We dream of a board clear even of us. The other part of the equation, I think, is a collective attempt to wrest control of what may turn out to be the doom of the species, the extinction that could ultimately be our fate. Like a terminally ill person with a handful of pills, we’d like the endgame to be played by our rules. In our sci-fi dress rehearsals, at least, we’re in charge.

On the topic of apocalypse, Frank Bures has penned an especially graceful Aeon essay, trying to make sense of his–and society’s–foreboding feelings in the Anthropocene. He believes it has to do with the ever-growing machine we’ve invented, which provides for us in fascinating ways and may be the death of us. As Bures writes, we have the “feeling that we are part of something over which we have no control, of which we have no real choice but to keep being part of.”

The opening:

One day in the early 1980s, I was flipping through the TV channels, when I stopped at a news report. The announcer was grey-haired. His tone was urgent. His pronouncement was dire: between the war in the Middle East, famine in Africa, AIDS in the cities, and communists in Afghanistan, it was clear that the Four Horsemen of the Apocalypse were upon us. The end had come.

We were Methodists and I’d never heard this sort of prediction. But to my grade-school mind, the evidence seemed ironclad, the case closed. I looked out the window and could hear the drumming of hoof beats.

Life went on, however, and those particular horsemen went out to pasture. In time, others broke loose, only to slow their stride as well. Sometimes, the end seemed near. Others it would recede. But over the years, I began to see it wasn’t the end that was close. It was our dread of it. The apocalypse wasn’t coming: it was always with us. It arrived in a stampede of our fears, be they nuclear or biological, religious or technological.

In the years since, I watched this drama play out again and again, both in closed communities such as Waco and Heaven’s Gate, and in the larger world with our panics over SARS, swine flu, and Y2K. In the past, these fears made for some of our most popular fiction. The alien invasions in H G Wells’s War of the Worlds (1898); the nuclear winter in Nevil Shute’s On the Beach (1957); God’s wrath in the Left Behind series of books, films and games. In most versions, the world ended because of us, but these were horrors that could be stopped, problems that could be solved.

But today something is different. Something has changed.•



What a difference a day makes. Just before the Iowa caucuses, Donald Trump was labeled by Spiegel the “world’s most dangerous man.” If he were to become President, you could make that argument, since he is ridiculously unqualified for the job, but the first-in-the-nation voting put a crimp in his effort. New Hampshire could revise the script again, but on Tuesday morning he seems more Pat Buchanan with hair plugs than Pol Pot.

It’s deplorable that the new media equation used Trump as cheap entertainment, as if his candidacy were just one more tacky yet harmless reality show. Even worse are the supposedly serious journalists who depicted him as merely a somewhat irreverent entertainer when he was making fascistic noise in a very important arena.

That being said, the Spiegel article by Markus Feldenkirchen, Veit Medick and Holger Stark is still really good. An excerpt:

‘It’s a Miracle Trump Didn’t Invent the Selfie’

Michael D’Antonio is sitting in an Applebee’s fast-food restaurant on Long Island, speaking quietly. He’s a cheerful, thoughtful man with a white beard, the polar opposite of Trump. D’Antonio has delved a lot deeper than most others into Donald Trump’s world.

D’Antonio recently wrote a biography of Trump, who was enthusiastic about the project and gave his cooperation — at least initially. Trump granted the author several interviews, which were usually held in his penthouse inside the Trump Tower, behind the kinds of double doors that would normally be used in castles. D’Antonio was granted free access to Trump’s family and associates, and spoke with his grown children and all three of his wives. But when Trump realized that D’Antonio was also one of his critics, he immediately canceled the project.

“What I noticed immediately in my first visit was that there were no books,” says D’Antonio. “A huge palace and not a single book.” He asked Trump whether there was a book that had influenced him. “I would love to read,” Trump replied. “I’ve had many best sellers, as you know, and The Art of the Deal was one of the biggest-selling books of all time.” Soon Trump was talking about The Apprentice. Trump called it “the No. 1 show on television,” a reality TV show in which, in 14 seasons, he played himself and humiliated candidates vying for the privilege of a job within his company. In the interview, Trump spent what seemed like an eternity talking about how fabulous and successful he is, but he didn’t name a single book that he hadn’t written.

“Trump doesn’t read,” D’Antonio says in the restaurant. “He hasn’t absorbed anything serious and profound about American society since his college days. And to be honest, I don’t even think he read in college.” When Trump was asked who his foreign policy advisers were, he replied: “Well, I watch the shows.” He was referring to political talk shows on TV.

In all of the conversations about his life, Trump seemed like a little boy, says D’Antonio. “Like a six-year-old boy who comes home from the playground and can hardly wait to announce that he shot the decisive goal.”

According to D’Antonio, American society revolves around two things: ambition and self-promotion. This is why Trump is one of the most appropriate heroes he can imagine for the country, he adds, noting that no one is more ambitious and narcissistic. “It’s a miracle Trump didn’t invent the selfie.”•



Human dominance in the game of Go is going but not yet gone. That’s one of the clarifying points Gary Marcus makes in a Backchannel piece that looks at Google’s machine intelligence triumphing over a human “champion” in the ancient game. Even when AI becomes the true Go champion, that doesn’t mean such knowledge will be easily transferable to other areas. Furthermore, the psychologist explains that the Google system isn’t in fact a pure neural network but a hybrid. An excerpt:

The European champion of Go is not the world champion, or even close. The BBC, for example, reported that “Google achieves AI ‘breakthrough’ by beating Go champion,” and hundreds of other news outlets picked up essentially the same headline. But Go is scarcely a sport in Europe; and the champion in question is ranked only #633 in the world. A robot that beat the 633rd-ranked tennis pro would be impressive, but it still wouldn’t be fair to say that it had “mastered” the game. DeepMind made major progress, but the Go journey is still not over; a fascinating thread at YCombinator suggests that the program — a work in progress — would currently be ranked #279.

Beyond the far from atypical issue of hype, there is an important technical question: what is the nature of the computer system that won? 

By way of background, there is a long debate about so-called neural net models (which in its most modern form is called “deep-learning”) and classical “Good-old-fashioned Artificial Intelligence” (GOFAI) systems, of the form that the late Marvin Minsky advocated. Minsky, and others like his AI-co-founder John McCarthy grew up in the logicist tradition of Bertrand Russell, and tried to couch artificial intelligence in something like the language of logic. Others, like Frank Rosenblatt in the 50s, and present-day deep learners like Geoffrey Hinton and Facebook’s AI Director Yann LeCun, have couched their models in terms of simplified neurons that are inspired to some degree by neuroscience.

To read many of the media accounts (and even the Facebook posts of some of my colleagues), DeepMind’s victory is a resounding win for the neural network approach, and hence another demerit for Minsky, whose approach has very much lost favor.

But not so fast.•
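
Marcus’s “hybrid” point is easier to see in miniature. The sketch below is my own illustration, not DeepMind’s code: it plays tic-tac-toe with a classical, GOFAI-style game-tree search whose leaf positions are scored by a stand-in “learned” evaluation function. AlphaGo pairs the same two ingredients–learned networks for evaluation and a tree search for lookahead–at enormously greater scale, so every name and heuristic here is an assumption for illustration only.

```python
# Toy "hybrid" player: classical minimax search (GOFAI-style) with a learned-style
# evaluation function at the leaves. Illustrative only -- not DeepMind's method.

import random

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def learned_value(board, player):
    """Stand-in for a trained value network: a crude heuristic score in [-1, 1]."""
    opponent = 'O' if player == 'X' else 'X'
    score = 0.0
    for a, b, c in WIN_LINES:
        line = [board[a], board[b], board[c]]
        if opponent not in line:
            score += 0.1 * line.count(player)      # lines still open for us
        if player not in line:
            score -= 0.1 * line.count(opponent)    # lines still open for them
    return max(-1.0, min(1.0, score))

def search(board, player, to_move, depth):
    """Depth-limited minimax; below the cutoff, trust the evaluator."""
    w = winner(board)
    if w is not None:
        return 1.0 if w == player else -1.0
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0.0                                  # draw
    if depth == 0:
        return learned_value(board, player)
    nxt = 'O' if to_move == 'X' else 'X'
    values = []
    for m in moves:
        board[m] = to_move
        values.append(search(board, player, nxt, depth - 1))
        board[m] = ' '
    return max(values) if to_move == player else min(values)

def best_move(board, player, depth=3):
    """Pick the move whose subtree search score is highest for `player`."""
    opponent = 'O' if player == 'X' else 'X'
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    random.shuffle(moves)                           # break ties arbitrarily
    scored = []
    for m in moves:
        board[m] = player
        scored.append((search(board, player, opponent, depth - 1), m))
        board[m] = ' '
    return max(scored)[1]

if __name__ == "__main__":
    position = list("XO X  O  ")                    # a made-up midgame position
    print("X plays square", best_move(position, 'X'))
```

Swap the crude heuristic for a trained value network and the fixed-depth search for Monte Carlo tree search and you have, in outline, the kind of hybrid Marcus is describing.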



It’s not really stunning that a patriarchal institution like Oral Roberts University is doing something remarkably invasive, but the question is whether the school is an outlier for long or just for now. ORU will require incoming freshmen to wear Fitbits in order to monitor their exercise, food, sleep, location and body weight. A school founded by a “faith healer” that’s utilizing new technologies is bound to “lay its hands” on others in off-putting ways, though wholly secular bodies will likely attempt similar things in the not-too-distant future.

From Elizabeth Chuck at NBC News:

An Oklahoma university is taking a novel approach to fighting the “Freshman 15”: Require all incoming students to wear fitness trackers.

Oral Roberts University, a Christian university in Tulsa, announced earlier this month that all first-years must wear Fitbits — watches that track how much activity a person does. Their fitness data will be tracked by the school and will affect students’ grades.

While mandatory for all incoming freshmen this year, Oral Roberts said it “has opened the program up to all students,” and said the campus bookstores have already sold more than 550 of the popular gadgets.

The university has always included a fitness component in its curriculum, requiring students to “manually log aerobics points in a fitness journal” in past years. The students get graded on their level of aerobic activity.

Now, instead of tediously entering the data by hand, it will be automatically tracked and submitted by the Fitbits, which retail for about $150.

“ORU offers one of the most unique educational approaches in the world by focusing on the Whole Person — mind, body and spirit,” ORU President William M. Wilson said in a statement. “The marriage of new technology with our physical fitness requirements is something that sets ORU apart.”

The Fitbit requirement is a first of its kind for colleges and universities, Oral Roberts said.•



Nothing has been better than the New York Times’ day-to-day coverage of the 2016 Presidential race, which kicks off in earnest tonight in Iowa. The work by reporters like Maggie Haberman, Michael Barbaro and Trip Gabriel has been lively, lucid and layered, a Herculean task in the new normal of the nonstop churn. Gabriel, who’s been stationed in the first-in-the-nation state for a year, just did an Ask Me Anything on Reddit. A few exchanges follow.

_____________________________

Question:

As an Iowan, I’ve long been skeptical of our First in the Nation status when it comes to narrowing the field of presidential candidates, largely because of the small (tiny) non-representative population. Of course, that’s a risky opinion to have here, so I was wondering, do you feel Iowa perhaps has too much influence on the process? Or has your time in Iowa helped justify its position as FITN in your mind?

Trip Gabriel:

I change my mind about Iowa’s role weekly. On the plus side, it’s a state where a candidate without money can spend a lot of time doing retail campaigning. If I was reporting from Florida today, the race would be much more about who has the millions for TV ads. But yes, Iowa is unrepresentative of America, not just demographically (very white) but also ideologically. Republicans are very conservative here, and Democrats are very liberal — 43 percent called themselves “socialists” in a Des Moines Register poll this month.

_____________________________

Question:

What, exactly, is stopping a big state like NY, Texas, California or Florida from just moving up their primaries to before Iowa and simply beating back party leaders through their sheer importance in population/delegates?

Trip Gabriel:

The national parties, which control the nominating convention, write the rules, and they can — and have — discounted the delegates from states that try to jump ahead of the traditional early-voting states. That said, the GOP chairman Reince Priebus is not a fan of the four early “carve-out states” and wants to see a regional primary system that would spread the responsibility for choosing the nominee more broadly.

_____________________________

Question:

Who on the GOP side has the most extensive field operation in Iowa? I went to a couple of rallies this past weekend and didn’t see much volunteer recruitment at the Trump or Rubio rallies.

Trip Gabriel:

Ted Cruz has the biggest field operation on the GOP side. He has a college dorm in Des Moines that has housed waves of volunteers from out of state. Jeb Bush, Trump and Rubio have lighter footprints, but they are still playing. I met a Rubio volunteer from Chicago at one of his events over the weekend, asking people to sign “commit to caucus” cards. You wouldn’t see much fresh recruitment of volunteers at this point. It’s all about GOTV — getting out the voters who you know support you.

_____________________________

Question:

Why do you think Rubio has failed to consolidate support? I think a lot of people expected he would emerge as the alternative to the anti-establishment Trump and Cruz, but his performance has been pretty lackluster. What is he doing wrong, and do you expect to see an “establishment” candidate emerge eventually?

Trip Gabriel:

I think Rubio sent confusing messages about who he was running against. For a long time he contrasted himself with Cruz, trying to look equally conservative on immigration, promoting a “dark days for America’s future” message. Lately he has returned to his message of optimism. I do expect the anti-Trump, anti-Cruz voters to rally around one candidate eventually. The question is whether it will be too late, ie after Super Tuesday.

_____________________________

Question:

How much do you think Hillary and Bernie’s positions on climate change and fossil fuels will play into the Democratic winner?

Trip Gabriel:

I was interested to see in the new Q-Pac poll that 11 percent of Dems ranked climate change as their top issue, a pretty strong showing (only health care and the economy ranked higher). Sanders was earlier and stronger on climate change, opposing the Keystone XL pipeline for example, but Clinton has since rolled out strong proposals, going beyond even the Obama administration. If climate change is your top issue, you’d probably be happy with either candidate at this point and might be also asking about who would be most effective in getting something done.

_____________________________

Question:

Who do you think is most likely to win the Iowa caucuses on either side? Will you be in a caucus room while it is happening? If so, what will it be like?

Trip Gabriel:

As of this moment (and this really does change moment to moment), I’m expecting good nights for Trump and Clinton. Take it with a grain of salt, or maybe a whole shaker — we “experts” have been wrong over and over about the races this year.

_____________________________

Question:

Something I haven’t seen anywhere: What is your plan after Iowa?

Trip Gabriel:

Heading to South Carolina, the first-in-the-South primary.•



I don’t think earthlings should travel to Mars by 2025. We’re in a rush, sure, but probably not in that much of a hurry. My own hope would be that in the near-term future we send unpeopled probes to our neighbor, loaded with 3D printers that begin experimenting with building a self-sustaining colony.

Of course, I’m not a billionaire, so my vote really won’t amount to much. The best argument that Elon Musk and other nouveau space entrepreneurs have for leading us at warp speed into being a multi-planet species isn’t only existential risk but also the worry that the next generation of fabulously wealthy technologists may turn its attention from the skies. It wouldn’t be the first time the stars lost our interest.

A transcript of Musk discussing space exploration at last week’s 2016 StartmeupHK Venture Forum in Hong Kong:

Question:

Let’s get even more way out there and talk about SpaceX. You’ve said that your ultimate goal is getting to Mars. Why is Mars important? Why does Mars matter?

Elon Musk:

It’s really a fundamental decision we need to make as a civilization. What kind of future do we want? Do we want a future where we’re forever confined to one planet until some eventual extinction event, however far in the future that might occur? Or do we want to become a multi-planet species, and then ultimately be out there among the stars, among many planets, many star systems? I think the latter is a far more exciting and inspiring future than the former.

Mars is the next natural step. In fact, it’s really the only planet we have a shot of establishing a self-sustaining city on. I think once we do establish such a city, there will be a strong forcing function for the improvement of spaceflight technology that will then enable us to establish colonies elsewhere in the solar system and ultimately extend beyond our solar system.

There’s the defensive reason of protecting the future of humanity, ensuring that the light of consciousness is not extinguished should some calamity befall Earth. That’s the defensive reason, but personally I find what gets me more excited is that this would be an incredible adventure–like the greatest adventure ever. It would be exciting and inspiring, and there needs to be things that excite and inspire people. There have to be reasons why you get up in the morning. It can’t just be solving problems. It’s got to be something great is going to happen in the future.

Question:

It’s not an exit strategy or back-up plan for when Earth fails. It’s also to inspire people and to transcend and go beyond our mental limits of what we think we can achieve.

Elon Musk:

Think of how sort of incredible the Apollo program was. If you ask anyone to name some of humanity’s greatest achievements of the 20th century, the Apollo program, landing on the moon, would in many places be number one.

Question:

When will there be a manned SpaceX mission and when will you go to Mars?

Elon Musk:

We’re pretty close to sending crew up to the Space Station. That’s currently scheduled for the end of next year. So that will be exciting, with our Dragon 2 spacecraft. Then we’ll have a next-generation rocket and spacecraft beyond the Falcon-Dragon series, and I’m hoping to describe that architecture later this year at the International Astronautical Congress, which is the big international space event every year. I think that will be quite exciting.

In terms of me going, I don’t know, maybe four or five years from now. Maybe going to the Space Station would be nice. In terms of the first flights to Mars, we’re hoping to do that around 2025. Nine years from now or thereabouts. 

Question:

Oh my goodness, that’s right around the corner.

Elon Musk:

Well, nine years. Seems like a long time to me.

Question:

Are you doing the zero-gravity training?

Elon Musk:

I’ve done the parabolic flights. Those are fun.

Question:

You must be reading up and doing the physical work to get ready for the ultimate flight of your life.

Elon Musk:

Umm, I don’t think it’s that hard, honestly. Just float around. It’s not that hard to float around. [Laughter] Well, going to Mars is going to be hard and dangerous and difficult in every way, and if you care about being safe and comfortable going to Mars would be a terrible choice.



It’s not that there’s nothing of use in John O’Sullivan’s Wall Street Journal “Saturday Essay” about this upside-down American election season, but it’s built, in part, on shaky and partisan foundations. It argues that President Obama’s use of executive orders is an unprecedented outlier that has caused the nation to be torn asunder. Except that both Presidents George W. Bush and Bill Clinton issued far more during their terms in office. The elder President Bush was on pace to do so as well had he won a second term. The same goes for many earlier Commanders in Chief.

In regard to the Affordable Care Act, O’Sullivan uses the phrase “pushed through,” language that makes it seem as if something unfair or uncommon occurred. Pushing agendas through Congress is something the Oval Office has always done.

Let’s recall that the GOP was holding meetings prior to Obama’s inauguration to plan to torpedo his Presidency. The divisiveness wasn’t a reaction but a preemptive strike.

O’Sullivan is correct in saying the Left and Right alike have been disappointed with Obama for different reasons, though you have to wonder in those cases if the fault lies with him or if no President could satisfy such a factious moment in our nation’s history. An excerpt:

President Barack Obama is the catalyst that made everything boil over. It shouldn’t be surprising. He proclaimed that he wanted to transform America fundamentally. While the Democrats controlled Congress, he pushed through the semi-nationalization of health care. Since the Democrats lost control, he has pushed his presidential authority to the very limits of the Constitution to secure his agenda on immigration, treaty-making with Iran, global warming and much else.

Mr. Obama has succeeded in getting a majority-Republican Congress to eschew its power of the purse and finance almost his entire agenda. Only the courts have effectively blocked his extensions of lawmaking and regulatory power, and that battle is still being waged. So it would be very odd if people didn’t conclude that a determined president could achieve almost anything he wanted if he were bold enough—and that Mr. Obama has done so.

As a result, his period in office has provoked rebellious popular movements outside Washington on the right and, more surprisingly, on the left.•



Do you want a digital assistant 10,000 times more useful than Siri? A voice-activated universal remote that runs your life? I suppose the answer is “yes.”

Moore’s Law made supercomputers of yore affordable and portable for almost everyone, stealing them from the domain of superwealthy corporations and states and sliding them into our shirt pockets. Similarly, efforts are being made to create AI that acts as a voice-activated universal remote for our lives, anticipating and satisfying our needs. We may soon be able to enjoy the benefits of a “staff” the way our richer brethren do. 

The thing is, most of the new technologies have not created more leisure. Will these tools, if realized, be the same? If they do actually reduce toil, what will we use the extra bandwidth for?

From Zoë Corbyn’s Guardian article about Dag Kittlaus’ attempts to create not Frankenstein but Igor:

Kittlaus is the co-founder and CEO of Viv, a three-year-old AI startup backed by $30m, including funds from Iconiq Capital, which helps manage the fortunes of Mark Zuckerberg and other wealthy tech executives. In a blocky office building in San Jose’s downtown, the company is working on what Kittlaus describes as a “global brain” – a new form of voice-controlled virtual personal assistant. With the odd flashes of personality, Viv will be able to perform thousands of tasks, and it won’t just be stuck in a phone but integrated into everything from fridges to cars. “Tell Viv what you want and it will orchestrate this massive network of services that will take care of it,” he says.

It is an ambitious project but Kittlaus isn’t without a track record. The last company he co-founded invented Siri, the original virtual assistant now standard in Apple products. Siri Inc was acquired by the tech giant for a reported $200m in 2010. The inclusion of the Siri software in the iPhone in 2011 introduced the world to a new way to interact with a mobile device. Google and Microsoft soon followed with their versions. More recently they have been joined by Amazon, with the Echo you can talk to, and Facebook, with its experimental virtual assistant, M.

But, Kittlaus says, all these virtual assistants he helped birth are limited in their capabilities. Enter Viv. “What happens when you have a system that is 10,000 times more capable?” he asks. “It will shift the economics of the internet.”•
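
To make “orchestrate this massive network of services” a little more concrete, here is a deliberately tiny dispatch sketch of my own. It is nothing like Viv’s actual architecture, which composes services dynamically; every service name and trigger word below is hypothetical.

```python
# Toy sketch of the "universal remote" idea: parse a request, pick a service,
# let it handle the task. Purely illustrative -- not Viv's real system, which
# is meant to chain thousands of services together for a single request.

from typing import Callable, Dict

def book_taxi(request: str) -> str:
    return "Hypothetical taxi service: car requested."

def order_flowers(request: str) -> str:
    return "Hypothetical florist service: bouquet ordered."

def set_thermostat(request: str) -> str:
    return "Hypothetical smart-home service: thermostat adjusted."

# A registry mapping trigger words to handlers; a real assistant would learn
# these mappings rather than rely on a hand-written table.
SERVICES: Dict[str, Callable[[str], str]] = {
    "taxi": book_taxi,
    "ride": book_taxi,
    "flowers": order_flowers,
    "thermostat": set_thermostat,
    "warmer": set_thermostat,
}

def assistant(request: str) -> str:
    """Route a plain-language request to the first service that claims it."""
    for word in request.lower().split():
        if word in SERVICES:
            return SERVICES[word](request)
    return "Sorry, no service can handle that yet."

if __name__ == "__main__":
    print(assistant("Get me a taxi to the airport"))
    print(assistant("Make the living room warmer"))
```

The hard part Kittlaus is selling, of course, is doing this across thousands of services, with no hand-written trigger words at all.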


Some of the things contemporary consumers most desire to possess are tangible (smartphones) and others not at all (Facebook, Instagram, etc.). In fact, many want the former mainly to get the latter. A social media “purchase” requires no money but is a trade of information for attention, a dynamic that’s been widely acknowledged, but one that still stuns me. Our need to share ourselves–to write our names Kilroy-like on a wall, as Hunter S. Thompson once said–is etched so deeply in our brains. Manufacturers have used psychology to sell for at least a century, but the transaction has never been purer, never required us to not only act on impulse but to publish that instinct as well. Judging by the mood of America, this new thing, while it may provide some satisfaction, also promotes an increased hunger in the way sugar does. And while the Internet seems to encourage individuality, its mass use and many memes suggest something else.

On a somewhat related topic: Rebecca Spang’s Financial Times article analyzes a new book which argues that the consumerist shift is more a political phenomenon than we’d like to believe, often the culmination of large-scale state decisions rather than of personal choice. The passage below refers to material goods, but I think the implications for the immaterial are the same. The excerpt:

In Empire of Things, Frank Trentmann brings history to bear on all these questions. His is not a new subject, per se, but his thick volume is both an impressive work of synthesis and, in its emphasis on politics and the state, a timely corrective to much existing scholarship on consumption. Based on specialist studies that range across five centuries, six continents and at least as many languages, the book is encyclopedic in the best sense. In his final pages, Trentmann intentionally or otherwise echoes Diderot’s statement (in his own famous Encyclopédie) that the purpose of an encyclopedia is to collect and transmit knowledge “so that the work of preceding centuries will not become useless to the centuries to come”. Empire of Things uses the evidence of the past to show that “the rise of consumption entailed greater choice but it also involved new habits and conventions . . . these were social and political outcomes, not the result of individual preferences”. The implications for our current moment are significant: sustainable consumption habits are as likely to result from social movements and political action as they are from self-imposed shopping fasts and wardrobe purges.

When historians in the 1980s-1990s first shifted from studying production to consumption, our picture of the past became decidedly more individualist. In their letters and diaries, Georgian and Victorian consumers revealed passionate attachments to things — those they had and those they craved. Personal tastes and preferences hence came to rival, then to outweigh, abstract processes (industrialisation, commodification, etc) as explanations for historical change. The world looked so different! Studied from the vantage point of production, the late 18th and 19th centuries had appeared uniformly dark and dusty with soot; imagined from the consumer’s perspective, those same years glowed bright with an entire spectrum of strange, distinct colours (pigeon’s breast, carmelite, eminence, trocadero, isabella, Metternich green, Niagra [sic] blue, heliotrope). At the exact moment when Soviet power seemed to have collapsed chiefly from the weight of repressed consumer desire, consumption emerged as a largely positive, almost liberating, historical force. “Material culture” became a common buzzword; “thing theory” — yes, it really is a thing — was born.•



Asking if innovation is over is no less narcissistic than suggesting that evolution is done. It flatters us to think that we’ve already had all the good ideas, that we’re the living end. More likely, we’re always closer to the beginning.

Of course, when looking at relatively short periods of time, there are ebbs and flows in invention that have serious ramifications for the standard of living. In Robert Gordon’s The Rise and Fall of American Growth, the economist argues that the 1870-1970 period was a golden age of productivity and development unknown previously and unmatched since.

In an excellent Foreign Affairs review, Tyler Cowen, who himself has worried that we’ve already picked all the low-hanging fruit, lavishly praises the volume–“likely to be the most interesting and important economics book of the year.” But in addition to acknowledging a technological slowdown in the last few decades, Cowen also wisely counters the book’s downbeat tone while recognizing the obstacles to forecasting, writing that “predicting future productivity rates is always difficult; at any moment, new technologies could transform the U.S. economy, upending old forecasts. Even scholars as accomplished as Gordon have limited foresight.” In fact, he points out that the author, before his current pessimism, predicted very healthy growth rates earlier this century.

My best guess is that there will always be transformational opportunities, ripe and within arm’s length, waiting for us to pluck them.

An excerpt:

In the first part of his new book, Gordon argues that the period from 1870 to 1970 was a “special century,” when the foundations of the modern world were laid. Electricity, flush toilets, central heating, cars, planes, radio, vaccines, clean water, antibiotics, and much, much more transformed living and working conditions in the United States and much of the West. No other 100-year period in world history has brought comparable progress. A person’s chance of finishing high school soared from six percent in 1900 to almost 70 percent, and many Americans left their farms and moved to increasingly comfortable cities and suburbs. Electric light illuminated dark homes. Running water eliminated water-borne diseases. Modern conveniences allowed most people in the United States to abandon hard physical labor for good.

In highlighting the specialness of these years, Gordon challenges the standard view, held by many economists, that the U.S. economy should grow by around 2.2 percent every year, at least once the ups and downs of the business cycle are taken into account. And Gordon’s history also shows that not all GDP gains are created equal. Some sources of growth, such as antibiotics, vaccines, and clean water, transform society beyond the size of their share of GDP. But others do not, such as many of the luxury goods developed since the 1980s. GDP calculations do not always reflect such differences. Gordon’s analysis here is mostly correct, extremely important, and at times brilliant—the book is worth buying and reading for this part alone.

Gordon goes on to argue that today’s technological advances, impressive as they may be, don’t really compare to the ones that transformed the U.S. economy in his “special century.” Although computers and the Internet have led to some significant breakthroughs, such as allowing almost instantaneous communication over great distances, most new technologies today generate only marginal improvements in well-being. The car, for instance, represented a big advance over the horse, but recent automotive improvements have provided diminishing returns. Today’s cars are safer, suffer fewer flat tires, and have better sound systems, but those are marginal, rather than fundamental, changes. That shift—from significant transformations to minor advances—is reflected in today’s lower rates of productivity.•



An Economist article looks at the latest report on automation by Carl Benedikt Frey, Michael Osborne and Craig Holmes, which argues that poorer nations are more prone than, say, America, to technological unemployment, despite the U.S. holding an advantage in AI.

Because such countries are not yet as widely engaged in information work, their Industrial Age could be interrupted mid-epoch, before they arrive at the Information Age. It’s like being pushed down a ladder when you’ve only scaled it part of the way. The academics acknowledge, though, that everything from policy to consumer preference may forestall the rise of the machines in India, China and elsewhere. After all, Foxconn’s promised one-million-robot workforce has yet to be realized.

An excerpt:

BILL BURR, an American entertainer, was dismayed when he first came across an automated checkout. “I thought I was a comedian; evidently I also work in a grocery store,” he complained. “I can’t believe I forgot my apron.” Those whose jobs are at risk of being displaced by machines are no less grumpy. A study published in 2013 by Carl Benedikt Frey and Michael Osborne of Oxford University stoked anxieties when it found that 47% of jobs in America were vulnerable to automation. Machines are mastering ever more intricate tasks, such as translating texts or diagnosing illnesses. Robots are also becoming capable of manual labour that hitherto could be carried out only by dexterous humans.

Yet America is the high ground when it comes to automation, according to a new report* from the same pair along with other authors. The proportion of threatened jobs is much greater in poorer countries: 69% in India, 77% in China and as high as 85% in Ethiopia. There are two reasons. First, jobs in such places are generally less skilled. Second, there is less capital tied up in old ways of doing things. Driverless taxis might take off more quickly in a new city in China, for instance, than in an old one in Europe.

Attracting investment in labour-intensive manufacturing has been a route to riches for many developing countries, including China. But having a surplus of cheap labour is becoming less of a lure to manufacturers. An investment in industrial robots can be repaid in less than two years. This is a particular worry for the poor and underemployed in Africa and India, where industrialisation has stalled at low levels of income—a phenomenon dubbed “premature deindustrialisation” by Dani Rodrik of Harvard University.•
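
The Economist’s claim that a robot “can be repaid in less than two years” is just a payback-period ratio: up-front cost divided by net annual labour savings. The figures in the sketch below are hypothetical placeholders, not numbers from the report, but they show how the arithmetic works and why falling robot prices or rising wages shorten the payback.

```python
# Back-of-the-envelope payback period for an industrial robot.
# All figures are hypothetical placeholders, not numbers from the report.

def payback_years(robot_cost, workers_replaced, annual_wage, annual_maintenance):
    """Years until cumulative labour savings cover the up-front cost."""
    net_annual_saving = workers_replaced * annual_wage - annual_maintenance
    if net_annual_saving <= 0:
        return float("inf")   # the robot never pays for itself
    return robot_cost / net_annual_saving

# e.g. a $50,000 robot doing the work of two $15,000-a-year line workers
print(round(payback_years(50_000, 2, 15_000, 3_000), 1), "years")  # -> 1.9
```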



We know football is horrible for the game’s players, the head injuries traumatic and unavoidable regardless of the equipment. The question is whether this truth is an existential threat for the most popular team sport in America. It was for boxing, which not so long ago was the king of U.S. athletics. But prizefighting was an ever-changing hodge-podge of crooked promoters and money men, whereas the NFL is a unified–and crooked–billion-dollar corporation. Can it find some way to keep kids playing a game that will ruin them?

Two recent tragic examples underline the seriousness of the crisis: the physical and mental deterioration, at 36, of former wide receiver Antwaan Randle El, and the troubling post-mortem of ex-Giant Tyler Sash. In the latter case, a study of brain tissue conducted after the fatal overdose of the increasingly erratic retired safety proved he suffered from CTE (Chronic Traumatic Encephalopathy), a degenerative condition caused by repeated concussions and (most likely) sub-concussive impacts.

CTE has thus far shown up in the tissue of many former football players who’ve died, but the rub is that there’s no way to test for it in the living. That may soon change, and if it does, it could be a game-changer for football and other contact sports. From Jack Encarnacao at the Boston Herald:

As it stands, an athlete has to be dead before he can be diagnosed with Chronic Traumatic Encephalopathy, the trauma-induced brain disease prominent in ex-football players. The disease manifests in a way that standard scans can’t detect, so there’s no way to advise a player to hang it up before irreversible damage is done.

Leading concussion researcher Dr. Robert Cantu of Boston University sees a day when this will change.

“I think we’re within a fairly short window, I hope no more than a few years, of being able to detect CTE in living people with almost 100 percent certainty,” Cantu told me in a sit-down interview for the second installment of my podcast series “Unfiltered,” which continues this week on Boston Herald Radio.

The key, Cantu said, is identifying a marker specific to CTE that a brain scan can pick up. A radioactive substance in tau — the protein at the heart of CTE — may be that marker, but current tests produce smudgy images that make it hard to discern, he said.

“Images will only get better over time, and hopefully soon it will be ready for prime time,” Cantu said.•



The late, great AI pioneer Marvin Minsky referred to us as “meat machines,” which irked many (very biased) humans. The more polite phrase subsequently coined to describe our brains in computer terms is “wetware.” Regardless of the vernacular, I think we’re essentially machines, though (for a little while longer) easily the most complex ones.

On that topic, John Pavlus of Quanta has an interesting interview with Harvard computer scientist Leslie Valiant, who believes all biology is computational–that “ecorithms” underlie life the way algorithms do machines. To the researcher, learning is learning, human or AI, though there are significant differences in stimuli (external and unpredictable vs. internal and predictable). Not everyone may agree with Valiant, but we’re a far cry from the brickbats he would have received for his beliefs in the 1980s, when he began working on machine learning, a field then widely belittled if not verboten.

An excerpt:

Question:

So what is learning? Is it different from computing or calculating?

Leslie Valiant:

It is a kind of calculation, but the goal of learning is to perform well in a world that isn’t precisely modeled ahead of time. A learning algorithm takes observations of the world, and given that information, it decides what to do and is evaluated on its decision. A point made in my book is that all the knowledge an individual has must have been acquired either through learning or through the evolutionary process. And if this is so, then individual learning and evolutionary processes should have a unified theory to explain them.

Question:

And from there, you eventually arrived at the concept of an “ecorithm.” What is an ecorithm, and how is it different from an algorithm?

Leslie Valiant:

An ecorithm is an algorithm, but its performance is evaluated against input it gets from a rather uncontrolled and unpredictable world. And its goal is to perform well in that same complicated world. You think of an algorithm as something running on your computer, but it could just as easily run on a biological organism. But in either case an ecorithm lives in an external world and interacts with that world.

Question:

So the concept of an ecorithm is meant to dislodge this mistaken intuition many of us have that “machine learning” is fundamentally different from “non-machine learning”?

Leslie Valiant:

Yes, certainly. Scientifically, the point has been made for more than half a century that if our brains run computations, then if we could identify the algorithms producing those computations, we could simulate them on a machine, and “artificial intelligence” and “intelligence” would become the same. But the practical difficulty has been to determine exactly what these computations running on the brain are. Machine learning is proving to be an effective way of bypassing this difficulty.•
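
Valiant’s distinction is easier to grasp in miniature. The toy learner below is my own sketch, not his PAC-learning formalism: it is judged only on how well it predicts the labels an external “world” hands it, never sees the world’s actual rule, and must cope with noise it cannot control–which is the sense in which an ecorithm’s performance is evaluated against an uncontrolled environment.

```python
# A toy "ecorithm": a learning algorithm judged only by how well it copes with
# examples drawn from an environment it does not control and cannot inspect.
# Illustrative sketch only -- not Valiant's PAC-learning formalism.

import random

def environment(rng, noise=0.1):
    """The 'world': labels x > 0.6 as 1, but flips the label 10% of the time."""
    x = rng.random()
    label = 1 if x > 0.6 else 0
    if rng.random() < noise:
        label = 1 - label
    return x, label

class ThresholdLearner:
    """Learns a cutoff from examples; it never sees the true rule (0.6)."""
    def __init__(self):
        self.threshold = 0.5

    def predict(self, x):
        return 1 if x > self.threshold else 0

    def update(self, x, label, step=0.05):
        # Nudge the cutoff toward whichever side the observed label favours.
        if label == 1 and x <= self.threshold:
            self.threshold -= step * (self.threshold - x)
        elif label == 0 and x > self.threshold:
            self.threshold += step * (x - self.threshold)

rng = random.Random(0)
learner = ThresholdLearner()
correct = 0
for t in range(5000):
    x, label = environment(rng)
    correct += (learner.predict(x) == label)   # evaluated before it learns
    learner.update(x, label)

print(f"learned threshold ~ {learner.threshold:.2f}, accuracy {correct / 5000:.0%}")
```

Replace the stream of noisy numbers with sensory data and the nudging rule with something far richer, and the same framing covers both machine learning and, on Valiant’s account, biological learning.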

 



“L.A. 2013” is a 1988 Los Angeles Times feature that imagined life in the future for a family of four–and their robots. The feature dreamed too big in some cases and not enough in others, though it did foresee smart homes, quantified health, personalization, etc. An excerpt:

6 A.M.

WITH A BARELY perceptible click, the Morrow house turns itself on, as it has every morning since the family had it retrofitted with the Smart House system of wiring five years ago. Within seconds, warm air whooshes out of heating ducts in the three bedrooms, while the water heater checks to make sure there’s plenty of hot water. In the kitchen, the coffee maker begins dripping at the same time the oven switches itself on to bake a fresh batch of cinnamon rolls. Next door in the study, the family’s personalized home newspaper, featuring articles on the subjects that interest them, such as financial news and stories about their community, is being printed by laser-jet printer off the home computer–all while the family sleeps.

6:30 A.M.

With a twitch, “Billy Rae,” the Morrows’ mobile home robot, unplugs himself from the kitchen wall outlet–where he has been recharging for the past six hours–then wheels out of the kitchen and down the hall toward the master bedroom for his first task of the day. Raising one metallic arm, Billy Rae gently knocks on the door, calling out the Morrows’ names and the time in a pleasant, if Southern drawl: ‘Hey, y’all–rise an’ shine!’

On the other side of the door is Alma Morrow, a 44-year-old information specialist. Pulling on some sweats, Alma heads for the tiny home gym, where she slips a credit-card-size X-ER Script–her personal exercise prescription–into a slot by the door. Electronic weights come out of the wall, and Alma begins her 20-minute workout.

Meanwhile, her husband, Bill, 45, a senior executive at a Los Angeles–based multinational corporation, is having a harder time. He’s still feeling exhausted from the night before, when his 70-year-old mother, Camille, who lives with the family, accidentally fell asleep with a lighted cigarette. Minutes after the house smoke detector notified them of a potential hazard, firefighters from the local station were pounding on the front door. Camille, one of the last of the old–time smokers, had blamed the accident on these “newfangled Indian cigarettes” she’s been forced to buy since India has overtaken the United States in cigarette production. Luckily, she only singed a pillowcase–and her considerable pride. Bill, however, had been unable to fall back asleep and had spent a couple of hours in the study at the personal computer, teleconferring with his counterparts in the firm’s Tokyo office. But this morning, he can’t afford to be late. With a grunt, he rolls out of bed and heads for the bathroom, where he swishes and swallows Denturinse–much easier and more effective than toothbrushing–and then hurries to get dressed. As he does, the video intercom buzzes. Camille’s collagen-improved face appears on the video screen, her gravelly voice booming over the speaker. Bill clicks off the camera on his side so Camille can’t see him in his boxer shorts, then talks to her. She tells him she wants him to drive her downtown to finalize her retirement plan with her attorney. Knowing this will make him late, he suggests that Alma could drop Camille off at the law firm’s branch office in the Granada Hills Community Center. Camille reluctantly agrees– much to Alma’s chagrin–then buzzes off. When the couple heads for the kitchen, they leave the bed unmade: Billy Rae can change the sheets.•

In his great song “Pretty Boy Floyd,” Woody Guthrie, knowing that when it comes to crime a collar can be white just as easily as blue, sang these words:

Yes, as through this world I’ve wandered
I’ve seen lots of funny men;

Some will rob you with a six-gun,
And some with a fountain pen.

For those who employ the latter modus operandi, not even a stylus, let alone a pen, is necessary anymore. Over the last four decades in the U.S. (and much of the rest of the developed world), money has mysteriously moved from the middle class into the accounts of the 1%, and no one seems completely sure how it was transferred. We only know that it’s shifted, that it’s been shifty. Maybe it was the manipulation of tax codes or the decline of unions or the rise of the machines or the forces of globalization or the invention of outlandish Wall Street products. Probably it was all of that and more. The result is the disappearance of the prosperity enjoyed by a far greater percentage of Americans in the aftermath of WWII through the early 1970s, which was created by a humming capitalist engine paired with severe progressive tax rates that redistributed the wealth. No one should want to return to the pre-Civil Rights United States–wildly uneven in other odious ways–but there are some economic lessons to be learned there.

One thing that seems sure is that the vast accumulation of riches at the top isn’t the end result of a successful experiment in meritocracy. These are not uniformly the best, the brightest and the most deserving. Similarly, the shit-out-of-luck souls aren’t on the ever-widening bottom because of any defect of character or lack of work ethic. Some may drink or use drugs or divorce, but so do those whose wealth provides a cushion for such failings common to mere mortals. The main reason poor people are poor is, at the end of the day, that they don’t have any money. They haven’t failed the system. Quite the contrary.

In a London Review of Books essay, Ed Miliband, the leader of the British Labour Party prior to Jeremy Corbyn, opines on the haves, the have-nots and the what-the-fuck situation we all find ourselves in, the eclipsed and the sun-kissed alike. The politician, who believes that beyond sheer unfairness, inequality ultimately inhibits economic growth, offers some prescriptions. The opening:

‘What do I see in our future today you ask? I see pitchforks, as in angry mobs with pitchforks, because while … plutocrats are living beyond the dreams of avarice, the other 99 per cent of our fellow citizens are falling farther and farther behind.’ Who said this? Jeremy Corbyn? Thomas Piketty? In fact it was Nick Hanauer, an American entrepreneur and multibillionaire, who in a TED talk in 2014 confessed to living a life that the rest of us ‘can’t even imagine’. Hanauer doesn’t believe he’s particularly talented or unusually hardworking; he doesn’t believe he has a great technical mind. His success, he says, is a ‘consequence of spectacular luck, of birth, of circumstance and of timing’. Just as his own extraordinary wealth can’t be explained by his unique talents, neither, he says, can rising inequality in the United States be justified on the grounds that it is a side effect of a broader economic success from which everyone benefits. As Henry Ford recognised, if you don’t pay ordinary workers decent wages, the economy will lack the demand to sustain economic growth.

Hanauer is in the vanguard of the ‘Fight for 15’, the campaign for a $15 minimum wage. Like Bill Gates and Warren Buffett, who have also issued loud warnings about inequality, he is heir to a long tradition of social concern among the wealthy in the US. They have reason to be worried. The last time inequality reached comparable levels was shortly before the Wall Street Crash. As Anthony Atkinson shows in Inequality: What Can Be Done?, inequality in the US fell for decades after the crash, before beginning to rise again in the 1970s. Since then the gap between the wealthy and the rest has grown steadily wider. The top 1 per cent now has nearly 20 per cent of total US personal income. In the 1980s, inequality in the UK went up even more sharply than in the US. Since then, overall UK inequality has been relatively stable but the income share of the top 1 per cent has increased significantly and now accounts for about 12 per cent of UK personal income. The important factors are rising inequality in wages, a decline in the share of the national income that wages represent as more money goes to corporate profits and dividends, and a reversal of redistribution from the rich to the poor.

The rise in inequality should not, Atkinson insists, be brushed aside as an inevitable effect of irresistible forces such as globalisation or developments in technology. It is driven by political choices.•



AI cracked backgammon in 1979, putting all other games on notice. But today’s announcement about a Google computer system besting a human Go champion was still surprising since most researchers thought we were years, perhaps a decade, from machine intelligence accomplishing such a feat in the complex, ancient game. What does this mean for Artificial General Intelligence and where does research head next? In a Conversation piece, Peter Cowling and Sam Devlin try to answer. An excerpt:

However the real world is a step up, full of ill-defined questions that are far more complex than even the trickiest of board games. The techniques which conquered Go can certainly be applied in medicine, education, science or any other domain where data is available and outcomes can be evaluated and understood.

The big question is whether Google just helped us towards the next generation of Artificial General Intelligence, where machines learn to truly think like – and beyond – humans. Whether we’ll see AlphaGo as a step towards Hollywood’s dreams (and nightmares) of AI agents with self-awareness, emotion and motivation remains to be seen. However the latest breakthrough points to a brave new future where AI will continue to improve our lives by helping us to make better-informed decisions in a world of ever-increasing complexity.

Now that Go has seemingly been cracked, AI needs a new grand challenge – a new “lab rat” – and it seems likely that many of these challenges will come from the $100 billion digital games industry. The ability to play alongside or against millions of engaged human players provides unique opportunities for AI research. At York’s centre for Intelligent Games and Game Intelligence, we’re working on projects such as building an AI aimed at player fun (rather than playing strength), for instance, or using games to improve well-being of people with Alzheimer’s. Collaborations between multidisciplinary labs like ours, the games industry and big business are likely to yield the next big AI breakthroughs.•




There’s never been greater access to books than there is right now, but all progress comes with a price. If print fiction and histories and such should disappear or become merely a luxury item, digital media would change the act of reading in unexpected ways over time.

Some see screen reading promoting a decline in analytical skills, but the human brain sure seems able to adapt to new forms once it becomes acclimated. Even as someone raised on paper books, I’m not worried that what’s lost in translation will be greater than what’s gained. Of course, I say that while still primarily using dead-tree volumes.

In a smart BBC Future article, Rachel Nuwer traces the fuzzy history of e-books and considers the future of reading. Some experts she interviews hope for a “bi-literate” society that values both the paperback and the Kindle. That would be a great outcome, but I don’t know how realistic a scenario it is. The opening:

When Peter James published his novel Host on two floppy disks in 1993, he was ill-prepared for the “venomous backlash” that would follow. Journalists and fellow writers berated and condemned him; one reporter even dragged a PC and a generator out to the beach to demonstrate the ridiculousness of this new form of reading. “I was front-page news of many newspapers around the world, accused of killing the novel,” James told pop.edit.lit. “[But] I pointed out that the novel was already dying at an alarming rate without my assistance.”

Shortly after Host’s debut, James also issued a prediction: that e-books would spike in popularity once they became as easy and enjoyable to read as printed books. What was a novelty in the 90s, in other words, would eventually mature to the point that it threatened traditional books with extinction. Two decades later, James’ vision is well on its way to being realised.

That e-books have surged in popularity in recent years is not news, but where they are headed – and what effect this will ultimately have on the printed word – is unknown. Are printed books destined to eventually join the ranks of clay tablets, scrolls and typewritten pages, to be displayed in collectors’ glass cases with other curious items of the distant past?

And if all of this is so, should we be concerned?•



Donald Trump, Pol Pot with hair plugs, is, like all bullies, a coward. His behavior stems from weakness and insecurity, so he can be handled. Jeb!, the favored son of a privileged family, doesn’t have adequate experience neutralizing such toxic types, but there are sure ways to deal. Lecture him about it in a mature way, as Bernie Sanders has, and the hideous hotelier looks small. Aggressively return his obnoxious behavior, as Megyn Kelly has, and he positively wilts.

Women in particular throw Trump for a loss because he’s spent his life making sure he’s in a superior position to the ones in his life. He controls the purse strings and they should bleed in silence. Edward Luce’s latest Financial Times column about the 2016 race looks at this particular Trump shortcoming. The opening:

Hillary Clinton should be celebrating. Donald Trump’s decision to boycott the Fox News debate was ostensibly about ratings. How can the cable network make money without his celebrity pull? Mr Trump may prove his point when Thursday night’s viewership numbers come in.

But switching channels is not the same thing as showing up at a polling booth. More than half America’s electorate is female — they accounted for 53 per cent of the vote in the last election. Even the most apathetic will by now have heard Mr Trump’s opinions about Megyn Kelly, the Fox anchor, who will co-host the debate. Ms Kelly is a “bimbo”, according to Mr Trump, who is incapable of objectivity when there is “blood coming out of her whatever”.

So that is settled. Mr Trump thinks the menstrual cycle is a handicap. He also recoils at other female bodily functions. When Mrs Clinton took a bathroom break at a recent Democratic debate, Mr Trump described her as “disgusting”. He used the same word about an opposing lawyer in a 2011 hearing when she asked for a short break to pump breast milk. Looks are also fair game. Among those attacked for their appearance are the actress Bette Midler (“extremely unattractive”), Angelina Jolie (“she’s been with so many guys she makes me look like a baby”), media figure Arianna Huffington (“unattractive both inside and out”), fellow Republican candidate Carly Fiorina (“look at that face. Would anyone vote for that?”) and comedian Rosie O’Donnell (“fat pig”).

None of which has done Mr Trump’s ratings any harm. The more controversial a celebrity, the bigger audiences they attract. The question is whether there is any longer a meaningful distinction between show business and US politics. Do ratings equal votes?•

Tags: , , , ,

In a series of articles in the New York Review of Books over the last couple of years, Sue Halpern has taken a thought-provoking look at the dubious side of the Digital Era, considering the impact of tech billionaires, technological unemployment and the Internet of Things.

Her latest salvo tries to locate the real legacy of Steve Jobs, who was mourned equally in office parks and Zuccotti Park. In doing so she calls on the two recent films about the Apple architect, Alex Gibney’s and Danny Boyle’s, and the new volume about him by Brent Schlender and Rick Tetzeli. Ultimately, the key truth may be that Jobs used Barnum-esque “magic” and marketing myths not only to sell his new machines but to plug them into consumers’ souls.

An excerpt:

So why, Gibney wonders as his film opens—with thousands of people all over the world leaving flowers and notes “to Steve” outside Apple Stores the day he died, and fans recording weepy, impassioned webcam eulogies, and mourners holding up images of flickering candles on their iPads as they congregate around makeshift shrines—did Jobs’s death engender such planetary regret?

The simple answer is voiced by one of the bereaved, a young boy who looks to be nine or ten, swiveling back and forth in a desk chair in front of his computer: “The thing I’m using now, an iMac, he made,” the boy says. “He made the iMac. He made the Macbook. He made the Macbook Pro. He made the Macbook Air. He made the iPhone. He made the iPod. He’s made the iPod Touch. He’s made everything.”

Yet if the making of popular consumer goods was driving this outpouring of grief, then why hadn’t it happened before? Why didn’t people sob in the streets when George Eastman or Thomas Edison or Alexander Graham Bell died—especially since these men, unlike Steve Jobs, actually invented the cameras, electric lights, and telephones that became the ubiquitous and essential artifacts of modern life?* The difference, suggests the MIT sociologist Sherry Turkle, is that people’s feelings about Steve Jobs had less to do with the man, and less to do with the products themselves, and everything to do with the relationship between those products and their owners, a relationship so immediate and elemental that it elided the boundaries between them. “Jobs was making the computer an extension of yourself,” Turkle tells Gibney. “It wasn’t just for you, it was you.”•

Tags: , , ,

The particular rules Clayton Christensen laid down for disruptive innovation probably don’t much matter because the world doesn’t exist within his constructs, but ginormous companies (even entire industries) being done in by much smaller ones has become an accepted part of life in the Digital Age.

In trying to explain this phenomenon, Christopher Mims of the Wall Street Journal explores the ideas in Anshu Sharma’s much-debated article about the Stack Fallacy, which argues that companies moving up beyond their core businesses are likely to fail (Google+, anyone?), while those moving down into the guts of what they know have a far better chance. For an example of the latter, Mims writes of the ride-sharing sector. An excerpt:

To really understand the stack fallacy, it helps to recognize that companies move “down” the stack all the time, and it often strengthens their position. It is the same thing as vertical integration. For example, engineers of Apple’s iPhone know exactly what they want in a mobile chip, so Apple’s move to make its own chips has yielded enormous dividends in terms of how the iPhone performs. In the same way, Google’s move down its own stack—creating its own servers, designing its own data centers, etc.—allowed it to become dominant in search. Similarly, Tesla’s move to build its own batteries could—as long as it allows Tesla to differentiate its products in terms of price and/or performance—be a deciding factor in whether or not it succeeds.

Of course, the real test of a sweeping business hypothesis is whether or not it has predictive power. So here’s a prediction based on the stack fallacy: We’re more likely to see Uber succeed at making cars than to see General Motors succeed at creating a ride-sharing service like Uber. Both companies appear eager to invade each other’s territory. But, assuming that ride sharing becomes the dominant model for transportation, Uber has the advantage of knowing exactly what it needs in a vehicle for such a service.

It is also worth noting that the stack fallacy is just that: a fallacy and not a law of nature. There are ways around it. The key is figuring out how to have true, firsthand empathy for the needs of the customer for whatever product you’re trying to build next.•

Tags: , ,

In addition to yesterday’s trove of posts about the late Marvin Minsky, I want to refer you to a Backchannel remembrance of the AI pioneer by Steven Levy, the writer who had the good fortune to arrive on the scene at just the right moment in the personal-computer boom and the great talent to capture it. The journalist recalls Minsky’s wit and conversation almost as much as his contributions to tech. Just a long talk with the cognitive scientist was a perception-altering experience, even if his brilliance was intimidating.

[Editor’s note: It should be stated that Levy’s article appeared five years before Minsky was accused of participating in the rape of minor children as part of Jeffrey Epstein’s web of shocking criminality.]

The opening:

There was a great contradiction about Marvin Minsky. As one of the creators of artificial intelligence (with John McCarthy), he believed as early as the 1950s that computers would have human-like cognition. But Marvin himself was an example of an intelligence so bountiful, unpredictable and sublime that not even a million Singularities could conceivably produce a machine with a mind to match his. At the least, it is beyond my imagination to conceive of that happening.

But maybe Marvin could imagine it. His imagination respected no borders.

Minsky died Sunday night, at 88. His body had been slowing down, but that mind had kept churning. He was more than a pioneering computer scientist — he was a guiding light for what intellect itself could do. He was also our Yoda. The entire computer community, which includes all of us, of course, is going to miss him. 

I first met him in 1982; I had written a story for Rolling Stone about young computer hackers, and it was optioned by Jane Fonda’s production company. I traveled to Boston with Fonda’s producer, Bruce Gilbert; and Susan Lyne, who had engineered my assignment to begin with. It was my first trip to MIT; my story had been about Stanford hackers.

I was dazzled by Minsky, an impish man of clear importance whose every other utterance was a rabbit’s hole of profundity and puzzlement.•

Tags: ,

Dr. Wernher von Braun suited up in a space suit prior to entering Marshall Space Flight Center's Neutral Buoyancy Simulator, 1967.

Five Books did an excellent interview with geneticist Matthew Cobb on the topic of the “History of Science.” In discussing William E. Burrows’ really fun 1998 title, This New Ocean: The Story of the First Space Age, Cobb comments on Wernher von Braun, an erstwhile Nazi and American hero who directly oversaw the murders of Jewish prisoners and who wanted to gas monkey astronauts in outer space (I swear!). An excerpt:

Question:

You just mentioned Enceladus so, talking of space missions, we’ll go on to your next book: William Burrows’s This New Ocean: The Story of the First Space Age published in 1998. What do you like about this book?

Matthew Cobb:

Space! Rockets! When it came out I was about to go on holiday and wanted a thick book to read. Burrows is a science journalist: not a historian or a scientist. I find it incredibly readable, very exciting. Although it was written by an American, it didn’t cover up the fact that Wernher von Braun, the brains behind the Apollo programme, was a Nazi Party member who was absolved for his involvement with the Hitler regime because he could build ICBMs. The book contains a good account—as good as there could be at the time, given the archives in the USSR hadn’t fully opened—of the huge advances the Russians made, which became obvious as they first flew up the Sputnik and then put the first man in space. I find it an extremely readable account of a time I grew up in—almost like a novel. I wasn’t reading it with a professional eye because I don’t know much about space history.

Question:

Burrows’s book is very dramatic—especially some of the moments like the first moon landing.

Matthew Cobb:

I remember it! I was 11 years old at the time. I was watching it with my uncle Brian in the middle of the night. Although I remember the excitement of seeing Neil Armstrong’s feet stepping down on to the ground, I was equally amazed by the fact that Brian was eating four Weetabix at three o’clock in the morning. We have lost a lot of the excitement about space flight. A year ago NASA trialled the Orion space capsule, which they may use to fly to Mars. The launch was in the middle of one of my lectures, so I decided to take a brief break and show the students the NASA live stream. You don’t see rocket launches on live TV anymore. The space shuttle has been scrapped and although there are rockets going to the Space Station, and private companies like SpaceX and Blue Origin developing reusable rockets, they don’t enjoy the same media attention as in the 60s and 70s. So we all sat and watched it—the students were very excited.•

Tags: ,

Who knows for sure if Avo Uvezian’s story about having his song stolen by Frank Sinatra is true, but it’s true to him, and the narratives we believe, myth or fact, shape our lives. The octogenarian claims, with some plausibility, that he had the melody for “Strangers in the Night” pilfered in the 1960s, altering his life, eventually ushering him bitterly from the music industry into the cigar business, where he found great success.

In a wonderful New York Times piece written by Michael Wilson, whose work appeared on Afflictor’s “50 Great 2015 Articles Online for Free” list, a simple story of a few dozen pinched cigars triggers a bildungsroman about a man who knew opportunities missed and made. An excerpt:

By the 1960s, he had written his own music. One melody stood out.

“The song itself is a very simple song,” Mr. Uvezian, 89, said this month by telephone from his home in Orlando, Fla. “You take the thing and you repeat it. ‘Dah-dah-dah-dah-daaaah.’ It’s the same line repeated throughout.”

He had a friend who knew Sinatra. The friend set up a meeting and told Mr. Uvezian to bring along his music. Someone else had put lyrics to the melody, and called it “Broken Guitar.”

Sinatra gave it a listen.

“He said, ‘I love the melody, but change the lyrics,’” Mr. Uvezian recalled. The task was given to studio songwriters, and they came back with new words. Sinatra, legend has it, hated it. “I don’t want to sing this,” he said when he first saw the sheet music, according to James Kaplan’s new book, “Sinatra: The Chairman.” Nonetheless, with his last No. 1 single several years behind him, he was persuaded to record the song in 1966.

The title was new, too. “Broken Guitar” was out. The new name was “Strangers in the Night.”

In Mr. Uvezian’s telling, what should have been a monumental triumph and breakthrough turned out to be a source of great grief.•

Tags: ,

Sadly, the legendary MIT cognitive scientist Marvin Minsky just died. From building a robotic tentacle arm nearly 50 years ago to consulting on 2001: A Space Odyssey, the AI expert – originator, really – thought as much as anyone could about smart machines during a lifetime. From Glenn Rifkin’s just-published New York Times obituary:

Well before the advent of the microprocessor and the supercomputer, Professor Minsky, a revered computer science educator at M.I.T., laid the foundation for the field of artificial intelligence by demonstrating the possibilities of imparting common-sense reasoning to computers.

“Marvin was one of the very few people in computing whose visions and perspectives liberated the computer from being a glorified adding machine to start to realize its destiny as one of the most powerful amplifiers for human endeavors in history,” said Alan Kay, a computer scientist and a friend and colleague of Professor Minsky’s.•

The following is a collection of past posts about his life and work.

_______________________________

“Such A Future Cannot Be Realized Through Biology”

Reading Michael Graziano’s great essay about building a mechanical brain reminded me of Marvin Minsky’s 1994 Scientific American article, “Will Robots Inherit the Earth?” It foresees a future in which intelligence is driven by nanotechnology, not biology. Two excerpts follow.

· · · · · · · · · ·

Everyone wants wisdom and wealth. Nevertheless, our health often gives out before we achieve them. To lengthen our lives, and improve our minds, in the future we will need to change our bodies and brains. To that end, we first must consider how normal Darwinian evolution brought us to where we are. Then we must imagine ways in which future replacements for worn body parts might solve most problems of failing health. We must then invent strategies to augment our brains and gain greater wisdom. Eventually we will entirely replace our brains — using nanotechnology. Once delivered from the limitations of biology, we will be able to decide the length of our lives–with the option of immortality — and choose among other, unimagined capabilities as well.

In such a future, attaining wealth will not be a problem; the trouble will be in controlling it. Obviously, such changes are difficult to envision, and many thinkers still argue that these advances are impossible–particularly in the domain of artificial intelligence. But the sciences needed to enact this transition are already in the making, and it is time to consider what this new world will be like.

Such a future cannot be realized through biology. 

· · · · · · · · · ·

Once we know what we need to do, our nanotechnologies should enable us to construct replacement bodies and brains that won’t be constrained to work at the crawling pace of “real time.” The events in our computer chips already happen millions of times faster than those in brain cells. Hence, we could design our “mind-children” to think a million times faster than we do. To such a being, half a minute might seem as long as one of our years, and each hour as long as an entire human lifetime.

But could such beings really exist? Many thinkers firmly maintain that machines will never have thoughts like ours, because no matter how we build them, they’ll always lack some vital ingredient. They call this essence by various names–like sentience, consciousness, spirit, or soul. Philosophers write entire books to prove that, because of this deficiency, machines can never feel or understand the sorts of things that people do. However, every proof in each of those books is flawed by assuming, in one way or another, the thing that it purports to prove–the existence of some magical spark that has no detectable properties.

I have no patience with such arguments.•

_________________________________

“A Century Ago, There Would Have Been No Way Even To Start Thinking About Making Smart Machines”

AI pioneer Marvin Minsky at MIT in ’68 showing his robotic arm, which was strong enough to lift an adult, gentle enough to hold a child.

Minsky discussing smart machines on Edge: 

Like everyone else, I think most of the time. But mostly I think about thinking. How do people recognize things? How do we make our decisions? How do we get our new ideas? How do we learn from experience? Of course, I don’t think only about psychology. I like solving problems in other fields — engineering, mathematics, physics, and biology. But whenever a problem seems too hard, I start wondering why that problem seems so hard, and we’re back again to psychology! Of course, we all use familiar self-help techniques, such as asking, “Am I representing the problem in an unsuitable way,” or “Am I trying to use an unsuitable method?” However, another way is to ask, “How would I make a machine to solve that kind of problem?”

A century ago, there would have been no way even to start thinking about making smart machines. Today, though, there are lots of good ideas about this. The trouble is, almost no one has thought enough about how to put all those ideas together. That’s what I think about most of the time.•

________________________________

“People Have A Fuzzy Idea Of Consciousness”

Consciousness is the hard problem for a reason. You could define it by saying it means we know our surroundings, our reality, but people get lost in delusions all the time, sometimes even nation-wide ones. What is it, then? Is it the ability to know something, anything, regardless of its truth? In this interview with Jeffrey Mishlove, cognitive scientist Marvin Minsky, no stranger to odysseys, argues against accepted definitions of consciousness, in humans and machines.

________________________________

“The Brain Doesn’t Work In A Simple Way”

Marvin Minsky, visionary of robotic arms, thinking computers and major motion pictures, is interviewed by Ray Kurzweil. The topic, unsurprisingly: “Is the Singularity Near?”

________________________________

“Do Outstanding Minds Differ From Ordinary Minds In Any Special Way?”

Humans experience consciousness even though we don’t have a solution to the hard problem. Will we have to crack the code before we can make truly smart machines–ones that not only do but know what they are doing–or is there a way to translate the skills of the human brain to machines without figuring out the mystery? From Marvin Minsky’s 1982 essay, “Why People Think Computers Can’t”:

CAN MACHINES BE CREATIVE?

We naturally admire our Einsteins and Beethovens, and wonder if computers ever could create such wondrous theories or symphonies. Most people think that creativity requires some special, magical ‘gift’ that simply cannot be explained. If so, then no computer could create – since anything machines can do, most people think, can be explained.

To see what’s wrong with that, we must avoid one naive trap. We mustn’t only look at works our culture views as very great, until we first get good ideas about how ordinary people do ordinary things. We can’t expect to guess, right off, how great composers write great symphonies. I don’t believe that there’s much difference between ordinary thought and highly creative thought. I don’t blame anyone for not being able to do everything the most creative people do. I don’t blame them for not being able to explain it, either. I do object to the idea that, just because we can’t explain it now, then no one ever could imagine how creativity works.

We shouldn’t intimidate ourselves by our admiration of our Beethovens and Einsteins. Instead, we ought to be annoyed by our ignorance of how we get ideas – and not just our ‘creative’ ones. We’re so accustomed to the marvels of the unusual that we forget how little we know about the marvels of ordinary thinking. Perhaps our superstitions about creativity serve some other needs, such as supplying us with heroes with such special qualities that, somehow, our deficiencies seem more excusable.

Do outstanding minds differ from ordinary minds in any special way? I don’t believe that there is anything basically different in a genius, except for having an unusual combination of abilities, none very special by itself. There must be some intense concern with some subject, but that’s common enough. There also must be great proficiency in that subject; this, too, is not so rare; we call it craftsmanship. There has to be enough self-confidence to stand against the scorn of peers; alone, we call that stubbornness. And certainly, there must be common sense. As I see it, any ordinary person who can understand an ordinary conversation has already in his head most of what our heroes have. So, why can’t ‘ordinary, common sense’ – when better balanced and more fiercely motivated – make anyone a genius?

So still we have to ask, why doesn’t everyone acquire such a combination? First, of course, it’s sometimes just the accident of finding a novel way to look at things. But, then, there may be certain kinds of difference-in-degree. One is in how such people learn to manage what they learn: beneath the surface of their mastery, creative people must have unconscious administrative skills that knit the many things they know together. The other difference is in why some people learn so many more and better skills. A good composer masters many skills of phrase and theme – but so does anyone who talks coherently.

Why do some people learn so much so well? The simplest hypothesis is that they’ve come across some better ways to learn! Perhaps such ‘gifts’ are little more than tricks of ‘higher-order’ expertise. Just as one child learns to re-arrange its building-blocks in clever ways, another child might learn to play, inside its head, at rearranging how it learns!

Our cultures don’t encourage us to think much about learning. Instead we regard it as something that just happens to us. But learning must itself consist of sets of skills we grow ourselves; we start with only some of them and slowly grow the rest. Why don’t more people keep on learning more and better learning skills? Because it’s not rewarded right away; its payoff has a long delay. When children play with pails and sand, they’re usually concerned with goals like filling pails with sand. But once a child concerns itself instead with how to better learn, then that might lead to exponential learning growth! Each better way to learn to learn would lead to better ways to learn – and this could magnify itself into an awesome, qualitative change. Thus, first-rank ‘creativity’ could be just the consequence of little childhood accidents.

So why is genius so rare, if each has almost all it takes? Perhaps because our evolution works with mindless disrespect for individuals. I’m sure no culture could survive, where everyone finds different ways to think. If so, how sad, for that means genes for genius would need, instead of nurturing, a frequent weeding out.•

_______________________________

“Backgammon Is Now The First Board Or Card Game With, In Effect, A Machine World Champion”

For some reason, the editors of the New Yorker never ask me for advice. I don’t know what they’re thinking. I would tell them this if they did: Publish an e-book of the greatest technology journalism in the magazine’s history. Have one of your most tech-friendly writers compose an introduction and include Lillian Ross’ 1970 piece about the first home-video recorder, Malcolm Ross’ 1931 look inside Bell Labs, Anthony Hiss’ 1977 story about the personal computer, Hiss’ 1975 article about visiting Philip K. Dick in Los Angeles, and Jeremy Bernstein’s short 1965 piece and long 1966 one about Stanley Kubrick making 2001: A Space Odyssey.

Another inclusion could be “A.I.,” Bernstein’s 1981 profile of the great artificial-intelligence pioneer Marvin Minsky. (It’s gated, so you need a subscription to read it.) The opening:

In July of 1979, a computer program called BKG 9.8–the creation of Hans Berliner, a professor of computer science at Carnegie-Mellon University, in Pittsburgh–played the winner of the world backgammon championship in Monte Carlo. The program was run on a large computer at Carnegie-Mellon that was connected by satellite to a robot in Monte Carlo. The robot, named Gammonoid, had a visual-display backgammon board on its chest, which exhibited its moves and those of its opponent, Luigi Villa, of Italy, who by beating all his human challengers a short while before had won the right to play against the Gammonoid. The stakes were five thousand dollars, winner take all, and the computer won, seven games to one. It had been expected to lose. In a recent Scientific American article, Berliner wrote:

Not much was expected of the programmed robot…. Although the organizers had made Gammonoid the symbol of the tournament by putting a picture of it on their literature and little robot figures on the trophies, the players knew the existing microprocessors could not give them a good game. Why should the robot be any different?

This view was reinforced at the opening ceremonies in the Summer Sports Palace in Monaco. At one point the overhead lights dimmed, the orchestra began playing the theme of the film Star Wars, and a spotlight focused on an opening in the stage curtain through which Gammonoid was supposed to propel itself onto the stage. To my dismay the robot got entangled and its appearance was delayed for five minutes.

This was one of the few mistakes the robot made. Backgammon is now the first board or card game with, in effect, a machine world champion. Checkers, chess, go, and the rest will follow–and quite possibly soon. But what does that mean for us, for our sense of uniqueness and worth–especially as machines evolve whose output we can less distinguish from our own?•

________________________________

“Each One Of Us Already Has Experienced What It Is Like To Be Simulated By A Computer”

We know so little about the tools we depend on every day. When I was a child, I was surprised that no one expected me to learn how to build a TV even though I watched one. But, no, I was just expected to process the surface of the box’s form and function, not to understand the inner workings. Throughout life, we use analogies and signs and symbols to make sense of things we constantly consume but don’t truly understand. Our processing of these basics is not unlike a computer’s process. Marvin Minsky wrote brilliantly on this topic in an afterword to a 1984 Vernor Vinge novel. An excerpt:

Let’s return to the question about how much a simulated life inside a world inside a machine could resemble our real life “out here.” My answer, as you know by now, is that it could be very much the same––since we, ourselves, already exist as processes imprisoned in machines inside machines! Our mental worlds are already filled with wondrous, magical, symbol–signs, which add to every thing we “see” its “meaning” and “significance.” In fact, all educated people have already learned how different are our mental worlds than the ‘real worlds’ that our scientists know.

Consider the table in your dining room; your conscious mind sees it as having familiar functions, forms, and purposes. A table is “a thing to put things on.” However, our science tells us that this is only in the mind; the only thing that’s “really there” is a society of countless molecules. That table seems to hold its shape only because some of those molecules are constrained to vibrate near one another, because of certain properties of force-fields that keep them from pursuing independent trajectories. Similarly, when you hear a spoken word, your mind attributes sense and meaning to that sound––whereas, in physics, the word is merely a fluctuating pressure on your ear, caused by the collisions of myriads of molecules of air––that is, of particles whose distances are so much less constrained.

And so––let’s face it now, once and for all: each one of us already has experienced what it is like to be simulated by a computer!•

_________________________________

“The Book Is About Ways To Read Out The Contents Of A Person’s Brain”

In 1992, AI legend Marvin Minsky believed that by the year 2023 people would be able to download the contents of their brains and achieve “immortality.” That was probably too optimistic. He also thought such technology would only be possible for people who had great wealth. That was probably too pessimistic. From an interview that Otto Laske conducted with Minsky about his sci-fi novel, The Turing Option:

Otto Laske:

I hear you are writing a science fiction novel. Is that your first such work?

Marvin Minsky:

Well, yes, it is, and it is something I would not have tried to do alone. It is a spy-adventure techno-thriller that I am writing together with my co-author Harry Harrison. Harry did most of the plotting and invention of characters, while I invented new brain science and AI technology for the next century.

Otto Laske:

At what point in time is the novel situated?

Marvin Minsky:

It’s set in the year 2023.

Otto Laske: 

I may just be alive to experience it, then …

Marvin Minsky:

Certainly. And furthermore, if the ideas of the story come true, then anyone who manages to live until then may have the opportunity to live forevermore…

Otto Laske: 

How wonderful …

Marvin Minsky:

 … because the book is about ways to read out the contents of a person’s brain, and then download those contents into more reliable hardware, free from decay and disease. If you have enough money…

Otto Laske: 

 That’s a very American footnote…

Marvin Minsky:

Well, it’s also a very Darwinian concept.

Otto Laske: 

Yes, of course.

Marvin Minsky:

There isn’t room for every possible being in this finite universe, so, we have to be selective …

Otto Laske: 

 And who selects, or what is the selective mechanism?

Marvin Minsky:

Well, normally one selects by fighting. Perhaps somebody will invent a better way. Otherwise, you have to have a committee …

Otto Laske: 

That’s worse than fighting, I think.•

___________________________________

“We Are On The Threshold Of An Era That Will Be Strongly Influenced, And Quite Possibly Dominated, By Intelligent Machines”

In the introduction to his 1960 paper, “Steps Toward Artificial Intelligence,” Marvin Minsky, who later served as a technical consultant for 2001: A Space Odyssey, succinctly described the present and future of computers:

A VISITOR to our planet might be puzzled about the role of computers in our technology. On the one hand, he would read and hear all about wonderful “mechanical brains” baffling their creators with prodigious intellectual performance. And he (or it) would be warned that these machines must be restrained, lest they overwhelm us by might, persuasion, or even by the revelation of truths too terrible to be borne. On the other hand, our visitor would find the machines being denounced on all sides for their slavish obedience, unimaginative literal interpretations, and incapacity for innovation or initiative; in short, for their inhuman dullness.

Our visitor might remain puzzled if he set out to find, and judge for himself, these monsters. For he would find only a few machines (mostly general-purpose computers, programmed for the moment to behave according to some specification) doing things that might claim any real intellectual status. Some would be proving mathematical theorems of rather undistinguished character. A few machines might be playing certain games, occasionally defeating their designers. Some might be distinguishing between hand-printed letters. Is this enough to justify so much interest, let alone deep concern? I believe that it is; that we are on the threshold of an era that will be strongly influenced, and quite possibly dominated, by intelligent problem-solving machines. But our purpose is not to guess about what the future may bring; it is only to try to describe and explain what seem now to be our first steps toward the construction of “artificial intelligence.”•

_________________________________

“He Is, In A Sense, Trying To Second-Guess The Future”

I posted a brief Jeremy Bernstein New Yorker piece about Stanley Kubrick that was penned in 1965 during the elongated production of 2001: A Space Odyssey. The following year the same writer turned out a much longer profile for the same magazine about the director and his sci-fi masterpiece. Among many other interesting facts, it mentions that MIT AI legend Marvin Minsky, who’s appeared on this blog many times, was a technical consultant for the film. An excerpt from “How About a Little Game?”:

By the time the film appears, early next year, Kubrick estimates that he and [Arthur C.] Clarke will have put in an average of four hours a day, six days a week, on the writing of the script. (This works out to about twenty-four hundred hours of writing for two hours and forty minutes of film.) Even during the actual shooting of the film, Kubrick spends every free moment reworking the scenario. He has an extra office set up in a blue trailer that was once Deborah Kerr’s dressing room, and when shooting is going on, he has it wheeled onto the set, to give him a certain amount of privacy for writing. He frequently gets ideas for dialogue from his actors, and when he likes an idea he puts it in. (Peter Sellers, he says, contributed some wonderful bits of humor for Dr. Strangelove.)

In addition to writing and directing, Kubrick supervises every aspect of his films, from selecting costumes to choosing incidental music. In making 2001, he is, in a sense, trying to second-guess the future. Scientists planning long-range space projects can ignore such questions as what sort of hats rocket-ship hostesses will wear when space travel becomes common (in 2001 the hats have padding in them to cushion any collisions with the ceiling that weightlessness might cause), and what sort of voices computers will have if, as many experts feel is certain, they learn to talk and to respond to voice commands (there is a talking computer in 2001 that arranges for the astronauts’ meals, gives them medical treatments, and even plays chess with them during a long space mission to Jupiter–‘Maybe it ought to sound like Jackie Mason,’ Kubrick once said), and what kind of time will be kept aboard a spaceship (Kubrick chose Eastern Standard, for the convenience of communicating with Washington). In the sort of planning that NASA does, such matters can be dealt with as they come up, but in a movie everything is visible and explicit, and questions like this must be answered in detail. To help him find the answers, Kubrick has assembled around him a group of thirty-five artists and designers, more than twenty-five special effects people, and a staff of scientific advisers. By the time this picture is done, Kubrick figures that he will have consulted with people from a generous sampling of the leading aeronautical companies in the United States and Europe, not to mention innumerable scientific and industrial firms. One consultant, for instance, was Professor Marvin Minsky, of M.I.T., who is a leading authority on artificial intelligence and the construction of automata. (He is now building a robot at M.I.T. that can catch a ball.) Kubrick wanted to learn from him whether any of the things he was planning to have his computers do were likely to be realized by the year 2001; he was pleased to find out that they were.•

_____________________________

“We Will Go On, As Always, To Seek More Robust Illusions”

Times of great ignorance are petri dishes for all manner of ridiculous myths, but, as we’ve learned, so are times of great information. The more things can be explained, the more we want things beyond explanation. And maybe for some people, it’s a need rather than a want. The opening of “Music, Mind and Meaning,” Marvin Minsky’s 1981 Computer Music Journal essay:

Why do we like music? Our culture immerses us in it for hours each day, and everyone knows how it touches our emotions, but few think of how music touches other kinds of thought. It is astonishing how little curiosity we have about so pervasive an “environmental” influence. What might we discover if we were to study musical thinking?

Have we the tools for such work? Years ago, when science still feared meaning, the new field of research called “Artificial Intelligence” started to supply new ideas about “representation of knowledge” that I’ll use here. Are such ideas too alien for anything so subjective and irrational, aesthetic, and emotional as music? Not at all. I think the problems are the same and those distinctions wrongly drawn: only the surface of reason is rational. I don’t mean that understanding emotion is easy, only that understanding reason is probably harder. Our culture has a universal myth in which we see emotion as more complex and obscure than intellect. Indeed, emotion might be “deeper” in some sense of prior evolution, but this need not make it harder to understand; in fact, I think today we actually know much more about emotion than about reason.

Certainly we know a bit about the obvious processes of reason–the ways we organize and represent ideas we get. But whence come those ideas that so conveniently fill these envelopes of order? A poverty of language shows how little this concerns us: we “get” ideas; they “come” to us; we are “reminded of” them. I think this shows that ideas come from processes obscured from us and with which our surface thoughts are almost uninvolved. Instead, we are entranced with our emotions, which are so easily observed in others and ourselves. Perhaps the myth persists because emotions, by their nature, draw attention, while the processes of reason (much more intricate and delicate) must be private and work best alone.

The old distinctions among emotion, reason, and aesthetics are like the earth, air, and fire of an ancient alchemy. We will need much better concepts than these for a working psychic chemistry.

Much of what we now know of the mind emerged in this century from other subjects once considered just as personal and inaccessible but which were explored, for example, by Freud in his work on adults’ dreams and jokes, and by Piaget in his work on children’s thought and play. Why did such work have to wait for modern times? Before that, children seemed too childish and humor much too humorous for science to take them seriously.

Why do we like music? We all are reluctant, with regard to music and art, to examine our sources of pleasure or strength. In part we fear success itself– we fear that understanding might spoil enjoyment. Rightly so: art often loses power when its psychological roots are exposed. No matter; when this happens we will go on, as always, to seek more robust illusions!•

________________________

“Most People Think Computers Will Never Be Able To Think”

Here’s the opening of a 1982 AI Magazine piece by MIT cognitive scientist Marvin Minsky, which considers the possibility of computers being able to think:

Most people think computers will never be able to think. That is, really think. Not now or ever. To be sure, most people also agree that computers can do many things that a person would have to be thinking to do. Then how could a machine seem to think but not actually think? Well, setting  aside the question of what thinking actually is, I think that most of us would answer that by saying that in these cases, what the computer is doing is merely a superficial imitation of human intelligence. It has been designed to obey certain simple commands, and then it has been provided with programs composed of those commands. Because of this, the computer has to obey those commands, but without any idea of what’s happening.

Indeed, when computers first appeared, most of their designers intended them only to do huge, mindless computations. That’s why the things were called “computers”. Yet even then, a few pioneers — especially Alan Turing — envisioned what’s now called ‘Artificial Intelligence’ – or ‘AI.’ They saw that computers might possibly go beyond arithmetic, and maybe imitate the processes that go on inside human brains.

Today, with robots everywhere in industry and movie films, most people think AI has gone much further than it has. Yet still, ‘computer experts’ say machines will never really think. If so, how could they be so smart, and yet so dumb?•

___________________________

“Using This Instrument, You Can ‘Work’ In Another Room, In Another City, In Another Country, Or On Another Planet”

The opening of “Telepresence,” Marvin Minsky’s 1980 Omni think piece which suggested we should bet our future on a remote-controlled economy:

You don a comfortable jacket lined with sensors and muscle-like motors. Each motion of your arm, hand, and fingers is reproduced at another place by mobile, mechanical hands. Light, dexterous, and strong, these hands have their own sensors through which you see and feel what is happening. Using this instrument, you can ‘work’ in another room, in another city, in another country, or on another planet. Your remote presence possesses the strength of a giant or the delicacy of a surgeon. Heat or pain is translated into informative but tolerable sensation. Your dangerous job becomes safe and pleasant.

The crude ‘robotic’ machines of today can do little of this. By building new kinds of versatile, remote-controlled mechanical hands, however, we might solve critical problems of energy, health, productivity, and environmental quality, and we would create new industries. It might take 10 to 20 years and might cost $1 billion—less than the cost of a single urban tunnel or nuclear power reactor or the development of a new model of automobile.•

Tags:
