Science/Tech


In a recent Telegraph essay, Sir Martin Rees took on what he realizes is almost a fool’s errand: predicting the future. He holds forth on bioengineering, weak AI, strong AI, etc. A passage about what he sees for us–them, really–in the far future:

Let me briefly deploy an astronomical perspective and speculate about the really far future – the post-human era. There are chemical and metabolic limits to the size and processing power of organic brains. Maybe humans are close to these limits already. But there are no such constraints on silicon-based computers (still less, perhaps, quantum computers): for these, the potential for further development could be as dramatic as the evolution from monocellular organisms to humans. So, by any definition of “thinking”, the amount and intensity that’s done by organic human-type brains will, in the far future, be utterly swamped by the cerebrations of AI. Moreover, the Earth’s biosphere in which organic life has symbiotically evolved is not a constraint for advanced AI. Indeed, it is far from optimal – interplanetary and interstellar space will be the preferred arena where robotic fabricators will have the grandest scope for construction, and where non-biological “brains” may develop insights as far beyond our imaginings as string theory is for a mouse.

Abstract thinking by biological brains has underpinned the emergence of all culture and science. But this activity – spanning tens of millennia at most – will be a brief precursor to the more powerful intellects of the inorganic post-human era. So, in the far future, it won’t be the minds of humans, but those of machines, that will most fully understand the cosmos – and it will be the actions of autonomous machines that will most drastically change our world, and perhaps what lies beyond.•

Tags:

Attending the recent O’Reilly Solid conference in San Francisco, Richard Waters of the Financial Times glimpsed the future of the Internet of Things, still at a larval stage but headed for a dramatic metamorphosis that will almost definitely happen, though no one knows exactly when. Gathering the information will be only half the battle; processing it intelligently is just as key. In his article, Waters focuses on the potential of ubiquitous connectedness but not the potential perils (privacy concerns, technological unemployment, etc.). An excerpt:

The world-changing applications made possible by the new technology platform cannot be imagined at the outset. …

The exhibits included a “pop-up factory” to make electronics on the fly and a part-3D printed car designed to be built in small local “microfactories”. Much of the discussion was of synthetic biology that will take manufacturing down to the microscopic level and merge the inorganic with the organic.

Behind the disruption lie three technologies that are on a collision course, according to Mickey McManus, a researcher at design software company Autodesk.

Extending internet connectivity to the physical world is only part of the story. A second seminal tech change will stem from the spread of artificial intelligence, which will make it easier to design and control complex ecosystems of objects, as well as put a higher level of intelligence into the individual “things” themselves. The third leg of the revolution, says Mr McManus, is digital manufacturing exemplified by 3D printing, which could present an alternative to some forms of mass-market production.

Taken together, he hints at the types of changes that could result: three students in a dorm room could start a car company; a distributed social network might replace a factory; or objects may disassemble and reassemble themselves as needs change.•

 


In a fascinating 1968 Mechanix Illustrated article, science-fiction writer James R. Berry predicted life forty years hence, prescient about communications devices, online shopping and driverless cars, though he was too aggressive in some other prognostications. The opening:

IT’S 8 a.m., Tuesday, Nov. 18, 2008, and you are headed for a business appointment 300 mi. away. You slide into your sleek, two-passenger air-cushion car, press a sequence of buttons and the national traffic computer notes your destination, figures out the current traffic situation and signals your car to slide out of the garage. Hands free, you sit back and begin to read the morning paper–which is flashed on a flat TV screen over the car’s dashboard. Tapping a button changes the page.

The car accelerates to 150 mph in the city’s suburbs, then hits 250 mph in less built-up areas, gliding over the smooth plastic road. You whizz past a string of cities, many of them covered by the new domes that keep them evenly climatized year round. Traffic is heavy, typically, but there’s no need to worry. The traffic computer, which feeds and receives signals to and from all cars in transit between cities, keeps vehicles at least 50 yds. apart. There hasn’t been an accident since the system was inaugurated. Suddenly your TV phone buzzes. A business associate wants a sketch of a new kind of impeller your firm is putting out for sports boats. You reach for your attache case and draw the diagram with a pencil-thin infrared flashlight on what looks like a TV screen lining the back of the case. The diagram is relayed to a similar screen in your associate’s office, 200 mi. away. He jabs a button and a fixed copy of the sketch rolls out of the device. He wishes you good luck at the coming meeting and signs off.

Ninety minutes after leaving your home, you slide beneath the dome of your destination city. Your car decelerates and heads for an outer-core office building where you’ll meet your colleagues. After you get out, the vehicle parks itself in a convenient municipal garage to await your return. Private cars are banned inside most city cores. Moving sidewalks and electrams carry the public from one location to another.•


At any moment in history we accept some things that are wrong and others that are even monstrous. But which ones are those in our current age, a time when technology is viewed as a totem?

In “Humanist Among the Machines,” an Aeon essay by Ian Beacock, the writer suggests we seek alternatives to the received wisdom of the Digital Age by taking a cue from twentieth-century historian Arnold Toynbee, who pushed back at the “mechanization” of Homo sapiens and its world in an earlier period. 

In one passage, Beacock writes that “As Toynbee recognised, scientific principles and technical innovations might help us build a better railway, a faster locomotive – but they aren’t very good at telling us who can buy tickets, what direction we should lay the track, or whether we should be taking the train at all.” The thing is, technology has gotten pretty good at telling us those types of things and is getting better all the time. Of course, that’s just more reason to pay heed to Beacock’s clarion call. As tech’s influence casts a larger shadow, the light we shine on it should be brighter still. 

An excerpt:

There’s no shortage of writing about Silicon Valley, no lack of commentary about how smartphones and algorithms are remaking our lives. The splashiest salvos have come from distinguished humanists. In The New York Times Book Review, Leon Wieseltier acidly indicted the culture of technology for flattening the capacious human subject into a few lines of computer code. Rebecca Solnit, in the London Review of Books, rejects the digital life as one of distraction, while angrily documenting the destruction of bohemian San Francisco at the hands of hoodied young software engineers who ride to work aboard luxury buses like “alien overlords.” Certainly there’s reason to be outraged: much good is being lost in our rush to optimisation. Yet it’s hard not to think that we’ve been so distracted by such totems as the Google Bus that we’re failing to ask the most interesting, constructive, radical questions about our digital times. Technology isn’t going anywhere. The real issue is what to do with it.

Scientific principles and the tools they generate aren’t necessarily liberating. They’re not inherently destructive, either. What matters is how they’re put to use, for which values and in whose interest they’re pressed into service. Silicon Valley’s most successful companies often present their services as value-free: Google just wants to make the world’s information transparent and accessible; Facebook humbly offers us greater connectivity with the people we care about; Lyft and Airbnb extol the virtues of sharing among friends, new and old. If there are values here, they seem to be fairly innocuous ones. How could you possibly oppose making new friends or learning new things?

Yet each of these high-tech services is motivated by a vision of the world as it ought to be, an influential set of assumptions about how we should live together, what we owe one another as neighbours and citizens, the relationship between community and individual, the boundary between public good and private interest. Technology comes, in other words, with political baggage. We need critics who can pull back the curtain, who can scrutinise digital technology without either antipathy or boosterism, who can imagine how it might be used differently. We need critics who can ask questions of value.•


I love Ray Kurzweil, but unfortunately, he’s not going to become immortal as he expects he will, and it’s unlikely he’ll be right in his prediction that nanobots introduced into our brains will be doing the thinking for us by the 2030s. Most of what Kurzweil says is theoretically possible, especially if we’re talking about human life surviving for a significant span, but his timeframe for execution of radical advances seems increasingly frantic to me. From Andrew Griffin at the Independent:

In the near future, humans’ brains will be helped out by nanobot implants that will make us into “hybrids,” one of the world’s leading thinkers has claimed.

Ray Kurzweil, an inventor and director of engineering at Google, said that in the 2030s the implants will help us connect to the cloud, allowing us to pull information from the internet. Information will also be able to be sent up over those networks, letting us back up our own brains.

“We’re going to gradually merge and enhance ourselves,” he said, reported CNN. “In my view, that’s the nature of being human — we transcend our limitations.”

As the cloud that our brains access improves, our thinking would get better and better, Kurzweil said. So while initially we would be a “hybrid of biological and non-biological thinking”, as we move into the 2040s, most of our thinking will be non-biological.•


Futurist Stowe Boyd outdid himself when asked to imagine the nature of corporations in 2050, as you can see in his excellent Medium essay. He believes our response to challenges of wealth inequality, climate change and AI will make things break in one of three ways, resulting in scenarios he labels Humania (great), Neo-feudalistan (not-so-great) and Collapseland (yikes!). An excerpt about the most hopeful outcome:

After mounting concern about inequality, the climate, and the inroads that AI and robots were having on society, in the 2020s Western nations — and later other developing countries — were hit by a ‘Human Spring.’ New populist movements rose up and rejected the status quo, and demanded fundamental change. At first the demands were uneven — some groups emphasized climate, or inequality, or the right to work.

But by the mid 2030s, all three forces were more-or-less equal planks in the Humania platform. This led to mandated barriers to inequality — such as limits on the multiple of the salaries of highest to lowest paid workers, and progressive taxation so that the well-off paid much higher taxes by percentage. Additionally, there were worldwide actions to limit oil and coal use, and a dramatic shift to solar in the early 2020s. Concerned that people would be pushed inexorably out of the job market, governments built limits on AI use into international trade agreements, based on a notion of the human right to work.

In the year 2050, businesses in Humania are egalitarian, fast-and-loose, and porous. Egalitarian in the sense that Humania workers have great autonomy: They can choose who they want to work with and for, as well as which initiatives or projects they’d like to work on.

They’re fast-and-loose in that they are organized to be agile and lean, and in order to do so, the social ties in businesses are much looser than in the 2010s. It was those rigid relationships — for example, the one between a manager and her direct reports — that, when repeated across layers of a hierarchical organization, led to a slow-and-tight company.

Instead of a pyramid, Humania’s companies are heterarchies: They are more like a brain than an army. In the brain — and in fast-and-loose companies — different sorts of connections and groupings of connected elements can form. There is no single way to organize. People can choose the sort of relationships that most make sense.

People’s careers involve many different jobs and roles, and considerable periods of time out of work. Basic universal income is guaranteed and generous benefits for family leave are a regular feature of work, such as paternity/maternity leave, looking after ill loved ones, and subsidized opportunities for life-long learning. This is the porous side of things; the edge of the company is permeable, and people easily leave and return.•


DARPA wants to be able to terraform Earth and Mars and whatever other sphere it chooses, editing genes in organisms to allow for the altering of environments, the healing and fine-tuning of atmospheres. Very useful, provided nothing goes wrong. From Jason Koebler at Vice Motherboard:

The goal is to essentially pick and choose the best genes from whatever form of life we want and to edit them into other forms of life to create something entirely new. This will probably first happen in bacteria and other microorganisms, but it sounds as though the goal may be to do this with more complex, multicellular organisms in the future.

The utility of having such a capability is pretty astounding: Jackson threw out goals of eradicating vector-borne illnesses, which obviously sounds lovely and utopian. But perhaps more interesting is DARPA’s plan to use specifically engineered organisms to help repair environmental damage. [Deputy Director of DARPA’s Biological Technologies Office Alicia] Jackson said that after a natural or man-made disaster, it’d be possible to engineer new types of extremophile organisms capable of surviving in a scarred wasteland. As those organisms photosynthesized and thrived, it would naturally bring that environment back to health, she said.

And that’s where terraforming Mars comes in.•


When I first realized driverless cars were being road-tested and that vehicle-to-vehicle communication would be part of that new order, one of my first thoughts was that a thousand cars in one area would be hacked to suddenly turn left. I’m far from the only one to imagine this scenario, and, of course, the trick is prevention, something pressing since cars are already essentially rolling computers. From the Economist:

ONE ingenious conceit employed to great effect by science-fiction writers is the sentient machine bent on pursuing an inner mission of its own, from HAL in 2001: A Space Odyssey to V.I.K.I. in the film version of I, Robot. Usually, humanity thwarts the rogue machine in question, but not always. In Gridiron, released in 1995, a computer system called Ismael—which controls the heating, lighting, lifts and everything else in a skyscraper in Los Angeles—runs amok and wreaks havoc on its occupants. The story’s cataclysmic conclusion involves Ismael instructing the skyscraper’s computer-controlled hydraulic shock-absorbers (installed to damp the swaying caused by earthquakes) to shake the building, literally, to pieces. As it does so, Ismael’s cyber-spirit flees the crumbling tower by e-mailing a copy of its malevolent code to a diaspora of like-minded computers elsewhere in the world.

While vengeful cyber-spirits may not lurk inside today’s buildings or machines, malevolent humans frequently do. Taking control remotely of modern cars, for instance, has become distressingly easy for hackers, given the proliferation of wireless-connected processors now used to run everything from keyless entry and engine ignition to brakes, steering, tyre pressure, throttle setting, transmission and anti-collision systems. Today’s vehicles have anything from 20 to 100 electronic control units (ECUs) managing their various electro-mechanical systems. Without adequate protection, the “connected car” can be every bit as vulnerable to attack and subversion as any computer network.

Were that not worrisome enough, motorists can expect further cyber-mischief once vehicle-to-vehicle (V2V) communication becomes prevalent, and cars are endowed with their own IP addresses and internet connections.•

Most scenarios of AI dominance end, for humans, with extinction, but Steve Wozniak no longer feels that way, believing we can lose the war but be happy captives–pets, even. His scenario seems unlikely. From Samuel Gibbs at the Guardian:

Apple’s early-adopting, outspoken co-founder Steve Wozniak thinks humans will be fine if robots take over the world because we’ll just become their pets.

After previously stating that a robotic future powered by artificial intelligence (AI) would be “scary and very bad for people” and that robots would “get rid of the slow humans,” Wozniak has staged a U-turn and says he now thinks robots taking over would be good for the human race.

“They’re going to be smarter than us and if they’re smarter than us then they’ll realise they need us,” Wozniak said at the Freescale technology forum in Austin. “We want to be the family pet and be taken care of all the time.” …

For Wozniak, it will be “hundreds of years” before AI is capable of taking over, but that by the time it does it will no longer be a threat to our existence: “They’ll be so smart by then that they’ll know they have to keep nature, and humans are part of nature. I got over my fear that we’d be replaced by computers. They’re going to help us. We’re at least the gods originally.”•


In a recent episode of EconTalk, host Russ Roberts invited journalist Adam Davidson of the New York Times to discuss, among other things, his recent article “What Hollywood Can Teach Us About the Future of Work.” In this “On Money” column, Davidson argues that short-term Hollywood projects–a freelance, piecemeal model–may be a wave of the future. The writer contends that this is better for highly talented workers and worrisome for the great middle. I’ll agree with the latter, though I don’t think the former is as uniformly true as Davidson believes. In life, stuff happens that talent cannot save you from, that the market will not provide for.

What really perplexed me about the program was the exchange at the end, when the pair acknowledges being baffled by Uber’s many critics. I sort of get it with Roberts. He’s a Libertarian who loves the unbridled nature of the so-called Peer Economy, luxuriating in a free-market fantasy that most won’t be able to enjoy. I’m more surprised by Davidson calling Uber a “solution” to the crisis of modern work, in which contingent positions have replaced full-time posts in the aftermath of the 2008 financial collapse. You mean it’s a solution to a problem it’s contributed to? It seems a strange assertion given that Davidson has clearly demonstrated his concern about the free fall of the middle class in a world in which rising profits have been uncoupled from hiring.

The reason why Uber is considered an enemy of Labor is because Uber is an enemy of Labor. Not only are medallion owners and licensed taxi drivers (whose rate is guaranteed) hurt by ridesharing, but Uber’s union-less drivers are prone to pay decreases at the whim of the company (which may be why about half the drivers became “inactive”–quit–within a year). And the workers couldn’t be heartened by CEO Travis Kalanick giddily expressing his desire to be rid of all of them before criticism intruded on his obliviousness, and he began to pretend to be their champion for PR purposes.

The Sharing Economy (another poor name for it) is probably inevitable and Uber and driverless cars are good in many ways, but they’re not good for Labor. If Roberts wants to tell small-sample-size stories about drivers he’s met who work for Uber just until their start-ups receive seed money and pretend that they’re the average, so be it. The rest of us need to be honest about what’s happening so we can reach some solutions to what might become a widespread problem. If America’s middle class is to be Uberized, to become just a bunch of rabbits to be tasked, no one should be satisfied with the new normal.

From EconTalk:

Russ Roberts:

A lot of people are critical of the rise of companies like Uber, where their workforce is essentially piece workers. Workers who don’t earn an annual salary. They’re paid a commission if they can get a passenger, if they can take someone somewhere, and they don’t have long-term promises about, necessarily, benefits. They have to pay for their own car, provide their own insurance, and a lot of people are critical of that, and my answer is, Why do people do it if it’s so awful? That’s really important. But I want to say something slightly more optimistic about it which is a lot of people like Uber, working for Uber or working for a Hollywood project for six months, because when it’s over they can take a month off or a week off. A lot of the people I talk to who drive for Uber are entrepreneurs, they’re waiting for their funding to come through, they’re waiting for something to happen, and they might work 80 hours a week while they’re waiting and when the money comes through or when their idea starts to click, they’re gonna work five hours a week, and then they’ll stop, and they don’t owe any loyalty to anyone, they can move in and out of work as they choose. I think there’s a large group of people who really love that. And that’s a feature for many people, not a bug. What matters is–beside your satisfaction and how rewarding your life is emotionally in that world–your financial part of it depends on what you make while you’re working. It’s true it’s only sort of part-time, but if you make enough, and evidently many Uber drivers are former taxi drivers who make more money with Uber for example, if you make enough, it’s great, so it seems to me that if we move to a world where people are essentially their own company, their own brand, the captain of their own ship rather than an employee, there are many good things about that as long as they have the skills that are in demand that people are willing to pay for. Many people, unfortunately, will not have those skills. 
It’s a serious issue, but for many people those are enormous pluses, not minuses. 

Adam Davidson:

Yes, I agree with you. Thinking of life as an Uber driver with that as your only possible source of income, I would guess that might be tough. Price competition is not gonna be your friend. Thinking about a world where you have a whole bunch of options, including Task Rabbit, and who knows what else, Airbnb, to earn money in a variety of ways, that’s at various times and at various levels of intensity, that strikes me as only good. If we could shove that into the 1950s, I think you would have seen a lot more people leaving that corporate model and starting their own businesses or spending more time doing more creative endeavors. That all strikes me as a helpful tool. It does sound like some of the people who work at Uber have kind of been jerks, but it does seem strange to me that some people are mad at the company that’s providing this opportunity. It is tough that lots of Americans are underemployed and aren’t earning enough. That’s a bad situation, but it is confusing to me that we get mad at companies that are providing a solution.•


Despite what some narratives say, Bill Gates was completely right about the Internet and mobile. That doesn’t mean he’ll be correct about every seismic shift, but I think his intuition about autonomous cars is almost definitely accurate: Driverless functions will be useful if partially completed and a societal game-changer if completely perfected. Just helpful or a total avalanche. In an interview conducted by Financial Times Deputy Editor John Thornhill, Gates discussed these matters, among many others. An excerpt from Shane Ferro’s article at Business Insider (which relies on Izabella Kaminska tweets from the event):

With regards to robots, the economy, and logistics, the takeaway seems to be that Gates thinks we’re in the fastest period of innovation ever, and it’s still unclear how that will affect the economy.

But there’s still quite a way to go. Robots “will be benign for quite some time,” Gates said. The future of work is not in immediate danger — although the outlook is not good for those who have a high school degree or less. 

Gates was also asked about Uber. He seems to think the real disruption to the driving and logistics industry is not going to come until we have fully driverless cars. That’s the “rubicon,” he says.

Kaminska relays that currently, Gates thinks that Uber “is just a reorganization of labour into a more dynamic form.” However, and this is big, Uber does have the biggest research and development budget out there on the driverless vehicle front. And that’s to its advantage.•


“We face a future in which robots will test the boundaries of our ethical and legal frameworks with increasing audacity,” writes Illah Reza Nourbakhsh in his Foreign Affairs article “The Coming Robot Dystopia,” and it’s difficult to envision a scenario in which things don’t just get faster, cheaper and at least somewhat out of control. 

We live in a strange duality now: On one hand, citizens worry that government has too much access to their information–and that’s true–but government is likely tightening its grip just as it’s losing it. Technology easily outpaces legislation, and it’s possible that at some point in the near future even those who espoused hatred of government may be wistful for a stable center. 

From Nourbakhsh:

Robotic technologies that collect, interpret, and respond to massive amounts of real-world data on behalf of governments, corporations, and ordinary people will unquestionably advance human life. But they also have the potential to produce dystopian outcomes. We are hardly on the brink of the nightmarish futures conjured by Hollywood movies such as The Matrix or The Terminator, in which intelligent machines attempt to enslave or exterminate humans. But those dark fantasies contain a seed of truth: the robotic future will involve dramatic tradeoffs, some so significant that they could lead to a collective identity crisis over what it means to be human.

This is a familiar warning when it comes to technological innovations of all kinds. But there is a crucial distinction between what’s happening now and the last great breakthrough in robotic technology, when manufacturing automatons began to appear on factory floors during the late twentieth century. Back then, clear boundaries separated industrial robots from humans: protective fences isolated robot workspaces, ensuring minimal contact between man and machine, and humans and robots performed wholly distinct tasks without interacting.

Such barriers have been breached, not only in the workplace but also in the wider society: robots now share the formerly human-only commons, and humans will increasingly interact socially with a diverse ecosystem of robots.•


When I put up a post three days ago about the automated grocery store in Iowa, it brought to mind the first attempt at such a store, the Keedoozle, one of Clarence Saunders’ bids for a resurgence in the aftermath of the Wall Street bath the Memphis-based Piggly Wiggly founder took while attempting, and failing spectacularly at, a stock corner. In his 1959 New Yorker piece about the Saunders Affair, John Brooks described the Keedoozle:

His hopes were pinned on the Keedoozle, an electrically operated grocery store, and he spent the better part of the last twenty years of his life trying to perfect it. In a Keedoozle store, the merchandise was displayed behind glass panels, each with a slot beside it, like the food in an Automat. There the similarity ended, for, instead of inserting coins in the slot to open a panel and lift out a purchase, Keedoozle customers inserted a key that they were given on entering the store. Moreover, Saunders’ thinking had advanced far beyond the elementary stage of having the key open the panel; each time a Keedoozle key was inserted inside a slot, the identity of the item selected was inscribed in code on a segment of recording tape embedded in the key itself, and simultaneously the item was automatically transferred to a conveyor belt that carried it to an exit gate at the front of the store. When a customer had finished his shopping, he would present his key to an attendant at the gate, who would decipher the tape and add up the bill. As soon as this was paid, the purchases would be catapulted into the customer’s arms, all bagged and wrapped by a device at the end of a conveyor belt. 

A couple of pilot Keedoozle stores were tried out–one in Memphis and the other in Chicago–but it was found that the machinery was too complex and expensive to compete with the supermarket pushcarts. Undeterred, Saunders set to work on an even more intricate mechanism–the Foodlectric, which would do everything the Keedoozle would do and add up the bill as well.•

______________________

From the February 19, 1937 Brooklyn Daily Eagle:

______________________

The Keedoozle inspired a Memphis competitor in 1947:


Sometime in the 21st century, you and me and Peter Thiel are going to die, and that’s horrible because even when the world is trying, it’s spectacular.

The PayPal co-founder is spending a portion of his great wealth on anti-aging research, hoping to radically extend life if not defeat death, which is a wonderful thing for people of the distant future, though it likely won’t save any of us. I will say that I wholly agree with Thiel that those who oppose radical life extension because it’s “unnatural” are just wrong.

From a Washington Post Q&A Ariana Eunjung Cha conducted with Thiel:

Question:

Leon Kass — the physician who was head of the President’s Council on Bioethics from 2001 to 2005 — as well as a number of other prominent historians, philosophers and ethicists have spoken out against radical life extension. Kass, for instance, has argued that it’s just not natural, that we’ll end up losing some of our humanity in the process. What do you think of their concerns?

Peter Thiel:

I believe that evolution is a true account of nature, but I think we should try to escape it or transcend it in our society. What’s true of evolution, I would argue, is true of all of nature. Even basic dental hygiene. If it’s natural for your teeth to start falling out, then you shouldn’t get cavities replaced? In the 19th century, people made the argument that it was natural for childbirth to be painful for women and therefore you shouldn’t have pain medication. I think the nature argument tends to go very wrong. . . . I think it is against human nature not to fight death.

Question:

What about the possibility of innovation stagnation? Some argue that if you live forever, you won’t be as motivated to invent new ways of doing this.

Peter Thiel:

That’s the Steve-Jobs-commencement-speech-in-2005 argument — that he was working so hard because he knew he was going to die. I don’t believe that’s true. There are many people who stop trying because they think they don’t have enough time. Because they are 85. But that 85-year-old could have gotten four PhDs from 65 to 85, but he didn’t do it because he didn’t think he had enough time. I think these arguments can go both ways. I think some people could be less motivated. I think a lot of people would be more motivated because they would have more time to accomplish something meaningful and significant.•

 

Tags: ,

In a 2012 Playboy Interview, Richard Dawkins addressed whether a fuller understanding of genetics would allow us to create something akin to extinct life forms, even prehistoric ones. The passage:

Playboy:

Do we know which came first—bigger brains or bipedalism?

Richard Dawkins:

Bipedalism came first.

Playboy:

How do we know that?

Richard Dawkins:

Fossils. That’s one place the fossils are extremely clear. Three million years ago Australopithecus afarensis were bipedal, but their brains were no bigger than a chimpanzee’s. The best example we have is Lucy [a partial skeleton found in 1974 in Ethiopia]. In a way, she was an upright-walking chimpanzee.

Playboy:

You like Lucy.

Richard Dawkins:

Yes. [smiles]

Playboy:

You’ve said you expect mankind will have a genetic book of the dead by 2050. How would that be helpful?

Richard Dawkins:

Because we contain within us the genes that have survived through generations, you could theoretically read off a creature’s evolutionary history. “Ah, yes, this animal lived in the sea. This is the time when it lived in deserts. This bit shows it must have lived up mountains. And this shows it used to burrow.”

Playboy:

Could that help us bring back a dinosaur? You have suggested crossing a bird and a crocodile and maybe putting it in an ostrich egg.

Richard Dawkins:

It would have to be more sophisticated than a cross. It’d have to be a merging.

Playboy:

Could we recreate Lucy?

Richard Dawkins:

We already know the human genome and the chimpanzee genome, so you could make a sophisticated guess as to what the genome of the common ancestor might have been like. From that you might be able to grow an animal that was close to the common ancestor. And from that you might split the difference between that ancestral animal you re-created and a modern human and get Lucy.•

Tags:

Excellent job by Daniel Oberhaus of Vice Motherboard with his smart interview of Noam Chomsky and theoretical physicist Lawrence Krauss about contemporary scientific research and space exploration. Chomsky is disturbed by the insinuation of private enterprise into Space Race 2.0, a quest for trillions, while Krauss thinks the expense of such an endeavor makes it permanently moot. I’m not so sure about the “permanently” part. Both subjects encourage unmanned space missions as a way to speed up science while scaling back costs. The opening:

Vice:

The cost of entry is so high for space, and arguably for science as well, that the general public seems to be excluded from partaking right from the start. In that light, what can really be done to reclaim the commons of space?

Noam Chomsky:

If you look at the whole history of the space program, a lot of things of interest were discovered, but it was done in a way that sort of ranges from misleading to deceitful. So what was the point of putting a man on the moon? A person is the worst possible instrument to put in space: you have to keep them alive, which is very complex, there are safety procedures, and so on. The right way to explore space is with robots, which is now done. So why did it start with a man in space? Just for political reasons.

Lawrence Krauss:

Of course we should [pressure the government to divert more funds to space programs]. But again, if you ask me if we should appropriate funds for the human exploration of space, then my answer is probably not. Unmanned space exploration, from a scientific perspective, is far more important and useful. If we’re doing space exploration for adventure, then it’s a totally different thing. But from a scientific perspective, we should spend the money on unmanned space exploration.

Noam Chomsky:

John F. Kennedy made it a way of overcoming the failure of the Bay of Pigs and the fact that the Russians in some minor ways had gotten ahead of us, even though the American scientists understood that that wasn’t true. So you had to have a dramatic event, like a man walking on the moon. There’s not very much point to have a man walking on the moon except to impress people.

As soon as the public got bored with watching some guy stumble around on the moon, those projects were ended. Then space exploration began as a scientific endeavor. Things continue to develop like this to a large extent. Take, again, the development of computers. That was presented under the rubric of defense. The Pentagon doesn’t say, ‘We’re taking your tax money so that maybe your grandson can have an iPad.’ What they say is, ‘We’re defending ourselves from the Russians.’ What we’re actually doing is seeing if we can create the cutting edge of the economy.•

Tags: , ,

French aviation pioneer Robert Esnault-Pelterie, inventor of the joystick flight control, knew 41 years before “the giant leap” that a manned trip to the moon and back was theoretically possible. He believed we were “actually becoming birdmen” and thought atomic energy might aid us in reaching not only the moon but also Mars and Venus, a plan Project Orion scientists worked on in earnest in the 1950s. Below is an article from the February 12, 1928 Brooklyn Daily Eagle.

Tags:

In a Washington Post piece, Vivek Wadhwa reveals how bullish he is on the near-term future of robotics in the aftermath of the DARPA challenge. He believes Jetsons-level assistants are close, and although he acknowledges such progress would promote technological unemployment, he doesn’t really dwell on that thorny problem. An excerpt:

For voice recognition, we are already pretty close to C-3PO-like capabilities. Both Apple and Google use artificial intelligence to do a reasonably good job of translating speech to text, even in noisy environments. No bot has passed the Turing Test yet, but they are getting closer and closer. When it happens, your droid will be able to converse with you in complex, human-like interactions.

The computational power necessary to enable these robots to perform these difficult tasks is still lacking. Consider, however, that in about seven or eight years, your iPhone will have the computational ability of a human brain, and you can understand where we are headed.

Robots will be able to walk and talk like human beings.

What are presently halting steps moving up stairs will, in the next DARPA challenge, become sure-footed ascents. The ability to merely open a door will become that of opening a door and holding a bag of groceries and making sure the dog doesn’t get out.

And, yes, Rosie will replace lots of human jobs, and that is reason to worry — and cheer.•

Tags:

I was on the subway the other day and a disparate group of six people of different ages, races and genders began a spontaneous conversation about how they couldn’t afford to live anywhere nice anymore and how the middle class was gone in America, that the country wasn’t for them anymore. A small sample size, to be sure, but one that’s backed up by more than four decades of research. Part of the problem could be remedied politically if finding solutions were in vogue in America, but the bigger picture would seem to be a grand sweep of history that announced itself in the aftermath of the Great Recession, as profits returned but not jobs.

I fear Derek Thompson’s excellent Atlantic feature “A World Without Work” may be accurate in its position that this time it’s different, that technological unemployment may take root in America (and elsewhere), and I think one of the writer’s biggest contributions is explaining how relatively quickly the new normal can take hold. (He visits Youngstown, a former industrial boomtown that went bust, to understand the ramifications of work going away.)

I don’t believe a tearing of the social fabric need attend an enduring absence of universal employment provided wealth isn’t aggregated at one end of the spectrum, but I don’t have much faith right now in government to step into the breach should such opportunities significantly deteriorate. Much of Thompson’s piece is dedicated to finding potential solutions to a radical decline of Labor–a post-workist world. He believes America can sustain itself if citizens are working fewer hours but perhaps not if most don’t need to punch the clock at all. I’m a little more sanguine than that if basic needs are covered. Then I think we’ll see people get creative.

An excerpt:

After 300 years of breathtaking innovation, people aren’t massively unemployed or indentured by machines. But to suggest how this could change, some economists have pointed to the defunct career of the second-most-important species in U.S. economic history: the horse.

For many centuries, people created technologies that made the horse more productive and more valuable—like plows for agriculture and swords for battle. One might have assumed that the continuing advance of complementary technologies would make the animal ever more essential to farming and fighting, historically perhaps the two most consequential human activities. Instead came inventions that made the horse obsolete—the tractor, the car, and the tank. After tractors rolled onto American farms in the early 20th century, the population of horses and mules began to decline steeply, falling nearly 50 percent by the 1930s and 90 percent by the 1950s.

Humans can do much more than trot, carry, and pull. But the skills required in most offices hardly elicit our full range of intelligence. Most jobs are still boring, repetitive, and easily learned. The most-common occupations in the United States are retail salesperson, cashier, food and beverage server, and office clerk. Together, these four jobs employ 15.4 million people—nearly 10 percent of the labor force, or more workers than there are in Texas and Massachusetts combined. Each is highly susceptible to automation, according to the Oxford study.

Technology creates some jobs too, but the creative half of creative destruction is easily overstated. Nine out of 10 workers today are in occupations that existed 100 years ago, and just 5 percent of the jobs generated between 1993 and 2013 came from “high tech” sectors like computing, software, and telecommunications. Our newest industries tend to be the most labor-efficient: they just don’t require many people. It is for precisely this reason that the economic historian Robert Skidelsky, comparing the exponential growth in computing power with the less-than-exponential growth in job complexity, has said, “Sooner or later, we will run out of jobs.”

Is that certain—or certainly imminent? No. The signs so far are murky and suggestive. The most fundamental and wrenching job restructurings and contractions tend to happen during recessions: we’ll know more after the next couple of downturns. But the possibility seems significant enough—and the consequences disruptive enough—that we owe it to ourselves to start thinking about what society could look like without universal work, in an effort to begin nudging it toward the better outcomes and away from the worse ones.

To paraphrase the science-fiction novelist William Gibson, there are, perhaps, fragments of the post-work future distributed throughout the present. I see three overlapping possibilities as formal employment opportunities decline. Some people displaced from the formal workforce will devote their freedom to simple leisure; some will seek to build productive communities outside the workplace; and others will fight, passionately and in many cases fruitlessly, to reclaim their productivity by piecing together jobs in an informal economy. These are futures of consumption, communal creativity, and contingency. In any combination, it is almost certain that the country would have to embrace a radical new role for government.

Tags: ,

Excerpts from a pair of recent Harvard Business Review articles which analyze the increasing insinuation of robots in the workplace. The opening of Walter Frick’s “When Your Boss Wears Metal Pants” examines the emotional connection we quickly make with robots who can feign social cues. In “The Great Decoupling,” Amy Bernstein and Anand Raman discuss technological unemployment, among other topics, with Andrew McAfee and Erik Brynjolfsson, authors of The Second Machine Age.

___________________________

From Frick:

At a 2013 robotics conference the MIT researcher Kate Darling invited attendees to play with animatronic toy dinosaurs called Pleos, which are about the size of a Chihuahua. The participants were told to name their robots and interact with them. They quickly learned that their Pleos could communicate: The dinos made it clear through gestures and facial expressions that they liked to be petted and didn’t like to be picked up by the tail. After an hour, Darling gave the participants a break. When they returned, she handed out knives and hatchets and asked them to torture and dismember their Pleos.

Darling was ready for a bit of resistance, but she was surprised by the group’s uniform refusal to harm the robots. Some participants went as far as shielding the Pleos with their bodies so that no one could hurt them. “We respond to social cues from these lifelike machines,” she concluded in a 2013 lecture, “even if we know that they’re not real.”

This insight will shape the next wave of automation. As Erik Brynjolfsson and Andrew McAfee describe in their book The Second Machine Age, “thinking machines”—from autonomous robots that can quickly learn new tasks on the manufacturing floor to software that can evaluate job applicants or recommend a corporate strategy—are coming to the workplace and may create enormous value for businesses and society.•

___________________________

From Bernstein and Raman:

Harvard Business Review:

As the Second Machine Age progresses, will there be any jobs for human beings?

Andrew McAfee:

Yes, because humans are still far superior in three skill areas. One is high-end creativity that generates things like great new business ideas, scientific breakthroughs, novels that grip you, and so on. Technology will only amplify the abilities of people who are good at these things.

The second category is emotion, interpersonal relations, caring, nurturing, coaching, motivating, leading, and so on. Through millions of years of evolution, we’ve gotten good at deciphering other people’s body language…

Erik Brynjolfsson:

…and signals, and finishing people’s sentences. Machines are way behind there.

The third is dexterity, mobility. It’s unbelievably hard to get a robot to walk across a crowded restaurant, bus a table, take the dishes back into the kitchen, put them in the sink without breaking them, and do it all without terrifying the restaurant’s patrons. Sensing and manipulation are hard for robots.

None of those is sacrosanct, though; machines are beginning to make inroads into each of them.

Andrew McAfee:

We’ll continue to see the middle class hollowed out and will see growth at the low and high ends. Really good executives, entrepreneurs, investors, and novelists—they will all reap rewards. Yo-Yo Ma won’t be replaced by a robot anytime soon, but financially, I wouldn’t want to be the world’s 100th-best cellist.•

Tags: , , , ,


Softbank’s Pepper looks like a child killed by a lightning strike who returned as a ghost to make you pay for handing him a watering can during an electrical storm.

He’s described as an “emotional robot,” which makes me take an immediate disliking to him. Manufactured to express feelings based on stimuli in his surroundings, Pepper is supposed to be shaped by his environment, but I wonder if his behavior will shape those who own him. We may get an answer since the robot sold out in Japan in under a minute and will soon be available for sale internationally.

From Marilyn Malara at UPI:

The humanoid robot is described as one that can feel emotion in a way humans do naturally through a system similar to a human’s hormonal response to stimuli. The robot can generate its own emotions by gathering information from its cameras and various sensors. Softbank says that Pepper is a “he” and can read human facial expressions, words and surroundings to make decisions. He can sigh or even raise his voice; he can get scared from dimming lights and happy when praised.

Along with the product’s launch, 200 applications are available to download into the robot including one that can record everyday life in the form of a robotic scrapbook.

Last year, Nestle Japan used Pepper to sell Nescafe coffee machines in appliance stores all over the country. “Pepper will be able to explain Nescafe products and services and engage in conversation with consumers,” Nestle Japan CEO Kohzoh Takaoka said in October before its roll-out.•

____________________________

“Can you lend me $100?”

Tags:

In the New York Times, A.O. Scott, who is quietly one of the funniest writers working anywhere, offers a largely positive review of philosopher Susan Neiman’s new book about perpetual adolescence, something that’s become the norm in this era of fanboy (and -girl) ascendancy, its commodification seemingly having reached a saturation point until, yes, the next comic-book or YA franchise arrives. The opening:

A great deal of modern popular culture — including just about everything pertaining to what French savants like to call le nouvel âge d’or de la comédie américaine — runs on the disavowal of maturity. The ideal consumer is a mirror image of a familiar comic archetype: a man-child sitting in his parents’ basement with his video games and his Star Wars figurines; a postgraduate girl and her pals treating the world as their playground. Baby boomers pursue perpetual youth into retirement. Gen-Xers hold fast to their skateboards, their Pixies T-shirts and their Beastie Boys CDs. Nobody wants to be an adult anymore, and every so often someone writes an article blaming Hollywood, attachment parenting, global capitalism or the welfare state for this catastrophe. I’ve written one or two of those myself. It’s not a bad racket, and since I’m intimately acquainted, on a professional basis, with the cinematic oeuvre of Adam Sandler, I qualify as something of an expert. 

In the annals of anti-infantile cultural complaint, Susan Neiman’s new book, Why Grow Up?, is both exemplary and unusual. An American-born philosopher who lives in Berlin, Neiman has a pundit’s fondness for the sweeping generalization and the carefully hedged argumentative claim. “I’m not suggesting that we do without the web entirely,” she writes in one of her periodic reflections on life in the digital age, “just that we refuse to let it rule.” Elsewhere she observes that “if you spend your time in cyberspace watching something besides porn and Korean rap videos, you can gain a great deal,” a hypothesis I for one am eager to test.•

Tags: ,

Wow, this is wonderful: Nicholas Carr posted a great piece from a recent lecture in which he addressed Marshall McLuhan’s idea of automation as media. In this excerpt, he tells a history of how cartography, likely the first medium, went from passive to active player as we transitioned from paper to software:

I’m going to tell the story through the example of the map, which happens to be my all-time favorite medium. The map was, so far as I can judge, the first medium invented by the human race, and in the map we find a microcosm of media in general. The map originated as a simple tool. A person with knowledge of a particular place drew a map, probably in the dirt with a stick, as a way to communicate his knowledge to another person who wanted to get somewhere in that place. The medium of the map was just a means to transfer useful knowledge efficiently between a knower and a doer at a particular moment in time.

Then, at some point, the map and the mapmaker parted company. Maps started to be inscribed on pieces of hide or stone tablets or other objects more durable and transportable than a patch of dirt, and when that happened the knower’s presence was no longer necessary. The map subsumed the knower. The medium became the knowledge. And when a means of mechanical reproduction came along — the printing press, say — the map became a mass medium, shared by a large audience of doers who wanted to get from one place to another.

For most of recent history, this has been the form of the map we’ve all been familiar with. You arrive in some new place, you go into a gas station and you buy a map, and then you examine the map to figure out where you are and to plot a route to get to wherever you want to be. You don’t give much thought to the knower, or knowers, whose knowledge went into the map. As far as you’re concerned, the medium is the knowledge.

Something very interesting has happened to the map recently, during the course of our own lives. When the medium of the map was transferred from paper to software, the map gained the ability to speak to us, to give us commands. With Google Maps or an in-dash GPS system, we no longer have to look at a map and plot out a route for ourselves; the map assumes that work. We become the actuators of the map’s instructions: the assistants who, on the software’s command, turn the wheel. You might even say that our role becomes that of a robotic apparatus controlled by the medium.

So, having earlier subsumed the knower, the map now begins to subsume the doer. The medium becomes the actor.

In the next and ultimate stage of this story, the map becomes the vehicle. The map does the driving.•

Tags:

I believe Weak AI can remake a wide swath of our society in the coming decades, but the more sci-fi Strong AI moonshots don’t seem within reach to me. When Yuval Harari worries that technologists may play god, he’s saying nothing theoretically impossible. In fact, the innovations he’s discussing (genetic engineering, cyborgism, etc.) will almost definitely occur if we get lucky (and creative) and survive any near-term extinction.

But the thing about Silicon Valley remaking our world is that it’s really tough to do, especially when dealing with such hard problems–even the hard problem (i.e., consciousness). Google is an AI company disguised as a search company, but it’s certainly possible that it never becomes great at anything beyond search and (perhaps) a few Weak AI triumphs. Time will tell. But it probably will take significant time.

In a Washington Post piece, Bhaskar Chakravorti wonders if Google X is more moonshot or crater, though I think it’s too early to be assessing such things. Creating 100% driverless autos wasn’t going to happen overnight, let alone radical life extension. An excerpt:

In its relentless hunt for innovation, Google is a voracious acquirer of innovative companies. In the two years prior to 2014, it outspent its five closest rivals combined on acquisitions. Here, too, it has failed in dramatic ways. A single acquisition, Motorola Mobility, cost $12 billion — almost half the amount that Google spent on all its acquisitions over a decade — which it sold for $3 billion two years later.

None of these factors deters Google’s leaders or its many admirers. Much of the public focus has shifted recently to its Google X unit, which not only has a chief most appropriately named, Astro Teller, it has a manager with an official job title of Head of Getting Moonshots Ready for Contact With the Real World. Now, the drumbeat has picked up as some of Google’s moonshots come closer to landing. The Google self-driven car is coming around the corner quite literally.  Google’s high-altitude balloons are being tested to offer Internet access to those without access. And the latest: Google intends to take on the myriad urban innovation challenges with its brand new Sidewalk Labs. Beyond the roads, sidewalks and the skies, Google wants to tinker with life itself, from glucose-monitoring contact lenses to longevity research.

Google’s revenue source is essentially unchanged and yet it spends disproportionately to move the needle. But these unprecedented moonshots could simply be money pits.•

Tags:

The unanswered questions that we have about the Snowden Affair are probably a little different than the ones circulating in the head of cybersecurity expert and former fugitive John McAfee, who has written an unsurprisingly strange, paranoid and colorful piece on the topic for International Business Times. An excerpt:

The Russian interviewer also asked me about Snowden: “In your opinion, is Edward Snowden a real character or one invented by the intelligence services?”

And this was my answer:

“I doubt everything, even my own senses at times. Is the apparent US government the real US government? Could the real government be a committee of the largest corporate entities who mount this play of democracy to veil the real machinations?

Are the divisions of the world into apparent “countries” even real? Are the apparent divisions within my own country real? Do we really have a tripartite system of government, where the executive, legislative and judicial divisions are, in fact, real divisions? I could go on forever.

As to Edward Snowden, I find the following inconsistencies to be very troubling:

1. He is a man of soft character and limited experience in the difficult and dangerous world into which he so willingly and knowingly thrust himself. I have personally been a fugitive. I have experienced many dangers and difficult situations, and even I with my excellent survival skills would not willingly bring down such wrath upon myself. Why would a man of Snowden’s apparent character do so?

2. He was safe in Hong Kong prior to entering Russia. With no offense to your country, I believe that Snowden was smart enough to know that he could have faded into the back alleys and byways of Hong Kong and, with his talents, have led a thriving existence there. Chinese women are equally as attractive as Russian women and not quite so dangerous. It is cheaper to live in Hong Kong and the weather is better. It is, quite frankly, a colourful place full of opportunity for a clever person. Why did he leave for Russia?

3. I doubt the truth of it all because my only source of information on the subject I have obtained through the world’s press. What truth can there be in it?”•

 

Tags:
