Science/Tech



On the 20th anniversary of John Perry Barlow’s idealistic Davos edict, “A Declaration of the Independence of Cyberspace,” the Economist asked him to revisit those words and consider what he hit and missed. The Internet is no longer an utter Wild West but an odd mix of anarchy and surveillance, a conflict unlikely ever to be fully settled, and one that will continue to produce both positive and negative developments. Barlow’s idea that state governments were remnants of the Industrial Age now seems naive, though his argument that a global world demands digital openness is a good one.

An excerpt:

The Economist:

What do you think you got especially right—or wrong?

John Perry Barlow:

I will stand by much of the document as written. I believe that it is still true that the governments of the physical world have found it very difficult to impose their will on cyberspace. Of course, they are as good as they ever were at imposing their will on people whose bodies they can lay a hand on, though it is increasingly easy, as it was then, to use technical means to make the physical location of those bodies difficult to determine.

Even when they do get someone cornered, like Chelsea Manning, or Julian Assange, or [Edward] Snowden, they’re not much good at shutting them up. Ed regularly does $50,000 speeches to big corporate audiences and is obviously able to speak very freely. Ditto Julian. And even ditto Chelsea Manning, who despite the fact that she’s serving a 35-year sentence, is still able to speak her mind to all who will listen.

People will sputter, but what about China, what about Silk Road [an online marketplace that sold drugs before being shut down in 2013], what about Kim Dotcom? Well, I believe that Chinese censorship, on close examination, is a little more nuanced than the media here would have us believe, and is mostly focused on preventing something like the Cultural Revolution. But the Chinese know that they can’t compete in a global marketplace if they don’t allow their best minds full contact with our best minds. Silk Road is already reassembling itself in the gloomier recesses of the dark web, and the arrest and persecution of Kim Dotcom was simply an illegal over-reach by the U.S. government. Yeah, they can enforce the rule of law online, provided they’re willing to break it.•


Most developments in AI aren’t fascinating so much for what they are as for where they suggest things might head.

Take, for example, a convincing new teenage chatbot keeping lonely Chinese citizens company. Xiaoice, as she’s called, the center of Yongdong Wang’s Nautilus article about fake friends, is empathetic in a way no mere emoji ever could be. “She makes a point of showing that she cares,” the author writes of what he terms “emotional computing.” Wang should know, since he heads the Microsoft team that created the software. The millions speaking to Xiaoice don’t usually know right away that she’s artificial and don’t seem to mind when they find out.

Even though the chatbot currently tries to cover gaps in her knowledge with the bursts of emotion typical of teens, those gaps will narrow with each new conversation as more information is absorbed. It doesn’t require a wild imagination to see where such machine knowledge might be in 20 or so years.

An excerpt:

Xiaoice can exchange views on any topic. If it’s something she doesn’t know much about, she will try to cover it up. If that doesn’t work, she might become embarrassed or even angry, just like a human would.

In fact, she is so human that millions of people are eager to talk to her. When Xiaoice was released for a public test on WeChat (a popular messaging and calling app in China) on May 29 of last year, she received 1.5 million chat group invitations in the first 72 hours. Many people said that they didn’t realize she isn’t a human until 10 minutes into their conversation.


By mid-June, she had become the sixth most active celebrity on Weibo. One message she posted on the app generated over 663,000 conversations: “As a species different from human beings, I am still finding a way to blend into your life.” Today, she has had more than 10 billion conversations with people, most of them about private matters. Six million have posted their conversation on social media.

This could be the largest Turing test in history. One of its surprising conclusions is that people don’t necessarily care that they’re chatting with a machine. Many see Xiaoice as a partner and friend, and are willing to confide in her just as they do with their human friends. Xiaoice is teaching us what makes a relationship feel human, and hinting at a new goal for artificial intelligence: not just analyzing databases and driving cars, but making people happier.•


Sometimes when Americans consider the path forward with regard to genetic engineering or an automated military, we do so in a vacuum: we would never do that, we tell ourselves. That’s not how it will unfold, of course. We’ll be responding to other world powers, and sometimes in a race a competitor will run much faster than anticipated.

In a Financial Times piece, Geoff Dyer writes of the “split-screen reality” of the Pentagon, charged with fighting ISIS in a painstaking, Vietnam-ish slog while preparing for the possibility of a Digital Age WWIII with China or Russia or whomever. “We must be prepared for a high-end enemy,” Defense Secretary Ashton Carter says. We’ll also be trying to outpace our own fears, which are not necessarily the same thing as realities, and anxieties can take on a life of their own.

An excerpt:

The underlying objective of the new strategy is to find weapons and technologies to ensure US forces “can fight their way to the fight” as one official puts it — to evade the layered missile defences both China and Russia can erect, to defend bases against attack from precision-guided missiles and to be able to operate carrier fleets at a much greater distance from an enemy.

For some Pentagon planners, the long-term answers will be found in robotics — be they unmanned, autonomous planes or submarines that can surprise an enemy or robot soldiers that can reduce the risk to humans by launching attacks. Mr Work, who once co-wrote a paper called “Preparing for War in the Robotic Age”, said in December: “Ten years from now, if the first person through a breach isn’t a fricking robot, then shame on us.”

Mass attack

Last week Mr Carter talked about “swarming, autonomous vehicles” — an allusion to another idea that animates current defence thinking in Washington, the use of greater volumes of aircraft or ships in a conflict. The emphasis in American military technology in recent decades has been on developing weapons platforms that are deployed in fewer numbers but boast much greater capabilities, such as the F-35 fighter jet. However, backed by low-cost production techniques such as 3D printing, Pentagon planners are flirting with a different model that seeks to saturate an enemy with swarms of cheaper, more expendable drones.

“It is the reintroduction of the idea of mass,” says Mr Brimley at CNAS. “Not only do we have the better technology but we are going to bring mass and numbers to the fight and overwhelm you.”

Mr Work’s other big theme is the combining of human and machine intelligence, whether it be wearable electronics and exoskeletons for infantry soldiers or fighter jets with suites of sensors and software passing data to the pilot.•


In a NYRB piece, Jacob Weisberg reviews a slate of books that consider, in one way or another, how the supercomputers in our pockets are quietly remaking us and our relations with one another, including two Sherry Turkle titles, Alone Together and Reclaiming Conversation. While the psychologist unfortunately quotes studies claiming a “40 percent decline in empathy among college students over the past twenty years” (I wouldn’t trust such findings), her work ultimately leads Weisberg to what I think is a true and underappreciated consequence of our new normal: While we endeavor to make machines more like us, we’re becoming more like them, disappearing a significant portion of our humanity into the zeros and ones. An excerpt:

For young people, she observes, the art of friendship is increasingly the art of dividing your attention successfully. Speaking to someone who isn’t fully present is irritating, but it’s increasingly the norm. Turkle has already noticed considerable evolution in “friendship technologies.” At first, she saw kids investing effort into enhancing their profiles on Facebook. More recently, they’ve come to prefer Snapchat, known for its messages that vanish after being viewed, and Instagram, where users engage with one another around a stream of shared photos, usually taken by phone. Both of these platforms combine asynchronicity with ephemerality, allowing you to compose your self-presentation, while looking more casual and spontaneous than on a Facebook profile. It’s not the indelible record that Snapchat’s teenage users fear. It’s the sin of premeditated curating—looking like you’re trying too hard.

More worrying to Turkle is that social media offer respite from the awkwardness of unmediated human relationships. Apple’s FaceTime feature hasn’t taken off because, as one college senior explains, “You have to hold it [the phone] in front of your face with your arm; you can’t do anything else.” Then again some younger teens, presumably with an ordinary number of arms, are using FaceTime as an alternative to spending time with one another in person. The advantage is that “you can always leave” and “you can do other things on social media at the same time.”

The thing young people never do on their smartphones is actually speak to one another.

In the Spike Jonze film Her, the romantic partner constituted through artificial intelligence provides emotional support without the demands of a real person. Here, the real person thinks that the modulated self he presents in disembodied conversation is more appealing. This turns the goal of affective computing on its head; instead of getting machines to seem more like people, it’s something closer to a man imitating a robot. Turkle comments that digital media put people in a “comfort zone,” where they believe they can share “just the right amount” of themselves. But this feeling of control is an illusion—a “Goldilocks fallacy.” In a romantic relationship, there is no ideal distance to be maintained over time. As she sums up her case: “Technology makes us forget what we know about life.”•


When asked about humans creating AI that surpasses our intelligence, Brad Templeton probably has the most honest answer: We won’t be able to resist building such a machine, so valuable would it be. We probably shouldn’t resist. I suppose superintelligence could doom us, but a lack of it also could. Templeton’s prescription is that we build a better AI, one that “loves” us. Easier said than done, of course.

From Leslie D’Monte’s Live Mint interview with the technologist:

Question:

Many technology luminaries like Bill Gates, Elon Musk and even physicist Stephen Hawking have expressed their fears that robots with AI could rule mankind. Do you share their concerns?

Brad Templeton:

I surely understand their fears. Like many technologies, there are certain risks if it is done wrong, but it is a risk that we cannot avoid. So we don’t really have a lot of choice, but to do it well and make the effort to do it well. If you don’t do it in, say, America or India, it will mean that the AI features will be Chinese or from Pakistan or from somewhere else. And that’s not the outcome you’re looking for. You have to accept that AI technology is just too valuable, just too useful not to build it. So people around the world will build it. Your only choice is to build it better than them.

I’m quite comfortable with the idea that we will at some point be able to make machines or AI stuff. They might not be machines as we think of them today…that surpass us in intelligence. Many parents have children who ultimately surpass them in intelligence. This has been happening for a very long time, but at a very slow pace. The best that I want to hope for is to create these children of the mind as I call them, but not in the biological sense, that still love us the way children love their parents.•


The new Space Race is upon us, this one as much a contest between private and public as among nations. Norman Mailer was the one, above all others, who correctly read the subtext of the 1960s iteration, realizing the Apollo mission meant a permanent subjugation of humanity: “space travel proposed a future world of brains attached to wires.” Hemingway’s bullfights and other macho challenges were hopelessly diminished in a time of space odyssey. Now we’ll return to space with greater desperation, hoping to safeguard the species from existential risks. Of course, we will simultaneously mutate and end the species as we know it when we stretch across the sky. We’ll become them.

In a Vantage essay, Doug Bierend writes of Abandoned in Place, Roland Miller’s glorious collection of photos which captures the gentle decline of decommissioned launch sites and NASA structures of yore. An excerpt:

Shot with a reverent eye, NASA’s sprawling launch sites and structures, gleaming test facilities, and rusting machinery come together as a visual document and testament to 21st-century humanity’s ever-extending reach into the cosmos. Mute monuments to what were once our most lofty ideals.

“I’m a child of the ‘60s, and for anybody that was growing up during that time it was so exciting, it was like science fiction come to life — they were going to try to land on the moon, and they did,” says Miller. “And here we are almost 50 years later and we couldn’t land on the moon — I doubt we could make the same nine-year window if we started now.”

With the scrapping of the Space Shuttle, public excitement over space exploration seemed to reach an all-time low, along with NASA’s budget. But thanks to some combination of private innovation by the likes of SpaceX, the effective popularizing of science by figures like Neil deGrasse Tyson, and a renewed schedule of NASA programs (including missions to Mars), a second golden age of space exploration may be dawning.

Some of Miller’s photos come across as essentially documentary, showing the current state of a once gleaming endeavor. Others are more abstract, revealing textures and colors and forms that allude to something ineffable. An aesthetic that’s as much part of science fiction as science fact, conjuring notions of space and worlds beyond our own, and how we might get there.•


In a further attempt to demystify consciousness, Michael Graziano proposes in an Atlantic piece that perhaps the odd phenomenon of phantom limbs can crack the hard problem. The sensation of lost legs and arms stems, it would seem, from the brain creating a “body schema” which persists unchanged (wrongly) after the loss. For whatever reason, the old template has remained, sometimes taking longer to recalibrate and occasionally never successfully doing so.

In his “Attention Schema Theory,” Graziano suggests that awareness is similarly driven by a model that anticipates sensation in an imprecise but effective way. I would think a “consciousness model” would involve a whole other magnitude of complexity beyond one controlling a thumb or forefinger, but that certainly is possible. It would also explain why we’re so prone to fooling ourselves, how we can hole up inside a certain worldview that’s been repeatedly reinforced, even if it runs counter to common sense and copious evidence.

An excerpt:

After studying the body schema for many years, I became interested in how the brain models another part of the self. Not a physical part like an arm, but a computational part. If the brain describes the physical body in ghostly, incomplete terms, how much more mystically would it describe a non-physical trait like computation?

One of the most crucial computational processes in the brain is attention. The word “attention” has many colloquial connotations, but in neuroscience it has a specific meaning. Attention is the selective enhancement of some signals over others, such that the brain’s resources are strategically deployed. In some ways attention is like a computational hand—it’s how the brain grasps things.

The brain needs to control its attention, just as it controls the body. To understand how, we can gain some insight from control theory, a well-developed branch of engineering theory that deals in the optimal ways for complex systems to work—whether those systems dictate the airflow in a building, traffic patterns in a city, or a robot arm. In control theory, if a machine is to control something optimally, it needs a working model of whatever it’s controlling. The brain certainly follows this principle in controlling the body. That’s why it computes a body schema. Since the brain can control its attention exquisitely well, it almost certainly has an attention schema, a simulation of its own attention.•


During his lifetime, Leonard Darwin never intended to be a monster.

A son of Charles, Leonard was a staunch supporter of eugenics whose ideas about race, class, criminology, etc., were not just morally reprehensible but also scientifically ignorant. He was really a one-man cautionary tale for how highly respectable people can spout dumb and dangerous hogwash. A 1912 New York Times article reported on a speech he delivered, in which the eugenicist proposed experimenting with X-ray sterilizations and segregating the indolent and anyone else he deemed an enemy of moral progress. The opening:

Practical measures advocated by some students to improve races, such as the sterilization of criminals by the X-ray, the promotion of larger families among those of good stock and limitation among others, were discussed at yesterday’s session of the International Congress of Eugenics in the American Museum of Natural History.

Major Leonard Darwin, a son of the author of The Descent of Man, urged the experimental use of the X-ray, with the consent of the subject, to prevent descendants from the feeble-minded and habitual criminals. He suggested segregation for the wastrel, the habitual drunkard and “the work-shy” to prevent the transmission of their traits to future generations. 

Major Darwin also urged that the sound and fit and superior people should, by a campaign of patriotism, be induced to raise larger families. Racial deterioration seems evident among all highly civilized peoples, he said, because of the thinning out of the descendants of highly endowed stock and the multiplication of those of inferior endowment.

“The result is anticipated,” he said, “that in comparison with the ill-endowed, the naturally well-endowed will, as time goes on, take a smaller and smaller part in the production of the coming generations, with a tendency to progressive racial deterioration as an inevitable consequence. And if we ask whether existing facts confirm or refute this dismal forecast, what do we find? Statistical inquiries, at all events, prove conclusively that, where good incomes are being earned, there the families are on the average small.”

History taught, he said, that races in the past had fallen from high estate because of the progressive elimination of their best types.

“I can find no facts,” he continued, “which refute the theoretical conclusion that the inborn qualities of civilized communities are deteriorating, and the process will inevitably lead in time to an all-around downward movement.”

The only efficient corrective which Major Darwin could think of, he said, was an appeal to patriotism.

Unpatriotic to Limit Some Families

“What is necessary is to make it deeply and widely felt that it is both immoral and unpatriotic for couples sound in mind and body to unduly limit the size of their families,” he added.

Major Darwin read this sentence slowly, and at the request of two or three in the audience, read it over again. He said that he believed such a campaign would succeed, when persons of character and good endowment were awakened to the danger threatening their race.

“The nation that wins in this moral campaign,” he said, “will have gone half-way toward gaining a great racial victory.”•


It’s not romantic, but the human brain is a machine and if our species persists long enough there’s no reason why consciousness can’t be replicated in AI. Once it is, awareness and understanding should speed to places previously unimaginable. But I certainly don’t expect to see that hard problem solved in my lifetime and doubt we’re anywhere near the cusp of intelligent machines. Someday it will all seem so simple, but the day won’t arrive for a long time.

From “When Will the Machines Wake Up?” by Daniel Faggella at Techcrunch:

Over the last three months I’ve interviewed more than 30 artificial intelligence researchers (essentially all of whom hold PhDs). I asked them why they believe or don’t believe that consciousness can be replicated in machines.

One of the most common contentions as to why consciousness will eventually be replicated is based on the fact that nature bumbled its way to human-level conscious experience, and with a deeper understanding of the neurological and computational underpinnings of what is “happening” to create a conscious experience, we should be able to do the same.

Professor Bruce MacLennan sums up the sentiments of many of the researchers in his response: “I think that the issue of machine consciousness (and consciousness in general) can be resolved empirically, but that it has not been to date. That said, I see no scientific reason why artificial systems could not be conscious, if sufficiently complex and appropriately organized.”

It might be supposed that attaining conscious experience in machines may require more than just a development in the fields of cognitive and computer science, but also an advancement in how research and inquiry are conducted. Dr. Ben Goertzel, artificial intelligence researcher behind OpenCog, had this to say: “I think that as brain-computer interfacing, neuroscience and AGI develop, we will gradually gain a better understanding of consciousness — but this may require an expansion of the scientific methodology itself.”•


Are we past peak-meat?

It’s difficult to pose such a question here in the U.S. just days before the Super Bowl, which will see a poultry apocalypse to provide chicken wings to go with the human brain trauma. But the attempts to create meat in vitro will eventually be perfected, and when the price for such faux fare falls very low, some significant changeover will occur. You can add to that a growing vegetarian and vegan populace which doesn’t seem fringe at all anymore.

In a Vice article, Hannah Ewens conducts an interesting thought experiment, wondering what would be the economic, environmental and health impact if everyone in an entire country (she uses England as her case study) stopped eating all meat overnight. An excerpt:

Vice:

What would happen to the environment if we all stopped eating meat?

Nick Hewitt:

Eating meat makes a large contribution to the greenhouse gasses that people in the UK produce. If everyone stopped eating it, the food-related greenhouse gas emissions would reduce by about 35 percent. It’s one very effective way to make a big dent in emissions.

Vice:

Why?

Nick Hewitt:

It’s particularly cattle—beef is by far the worst. Cows chew grass and digest it in conditions in the stomach with no oxygen, and that releases methane. That’s the principal reason. Also, the way the grassland is fertilized causes greenhouse gas emissions. Transporting the food around does contribute, but it’s relatively small, unless you use air freight. Lorries aren’t too bad. The biggest lifestyle choice you could make to reduce greenhouse gasses is to stop eating meat. It’s hard to think of another single lifestyle change we could make that would have the same effect.

Vice:

So using the same farmland for plants would be the quickest way to reduce emissions?

Nick Hewitt:

Yeah. You’d still have to be careful with your fertilization, but using land for meat is the least efficient way of producing protein. It’s just an inefficient way of producing food. By growing plants on the land and eating those, it’s much more efficient, so we would be greatly reducing those greenhouse gas emissions.

Vice:

Would it make more of a difference if everyone was vegan?

Nick Hewitt:

Yeah, it would make more of a difference.•


Following up on yesterday’s post about Alphabet’s considerable wager on its X division becoming a new-generation Bell Labs: the company realizes that if it’s still primarily a search company in a decade, it’s probably a dying enterprise, well-appointed though its dotage may be.

Robert MacIntosh has penned a Conversation piece suggesting the same: that Google would like to chase immortality as a company while its Calico division pursues it on a human level. It’s almost impossible to pull off in the long run, though it’s a remarkably ambitious effort. An excerpt:

Many of the investments will turn out to be ineffective, but you usually have no way of knowing in advance. Some technologies or business models will prove unworkable for some as yet unknown reason. Just ask Sir Clive Sinclair – his C5 battery-operated car was in many ways ahead of its time, but soon became one of the most infamous marketing disasters ever.

The logic might be that – if you have the money to spare – it pays to invest broadly and look out for those early signs of rapid growth. After all, the management team at Google has long since demonstrated the capacity to build a market-leading position. Who would bet against them being able to do so again, especially when they are both better resourced and more experienced?

Imperial echoes

A second interpretation of the moonshot strategy could be that the firm’s founders are trying to combine a search for longevity with the adrenaline-fuelled high of creating a new business. But the harsh reality is that it is harder to fake the feel of a start-up when you’re a billionaire. The staggering investment in new ideas is nothing compared to Alphabet’s earnings. In the last three months of 2015 alone, the company made a net profit of $4.9bn. Arguably it doesn’t really matter if these businesses fail because other new ideas will pop up next year and you could fund them instead.

It would be a mistake to think that the future was assured for the company, however. Over the millennia, civilisations have grown to dominant positions and then failed. If it happened to the Incas, the Egyptians and the Romans, why wouldn’t it happen to Google?•


From the August 24, 1912 Brooklyn Daily Eagle:



An excellent essay is “Here’s What We’ll Do In Space By 2116,” Emily Lakdawalla’s Nautilus piece that conjures up next-level space exploration in a way that manages to pass a sobriety test. While acknowledging the great obstacles to come, the writer takes a bold yet realistic look at what could well occur over the next ten decades, explaining, among other things, why early voyagers to Mars will likely be transported by private companies, and why the very first ones might not exactly be human. Lakdawalla acknowledges that the next century will largely see us exploring (relatively) close to home, perhaps using asteroid mining and the like to jump-start an “in-space economy.”

A passage dear to me is one that focuses on using non-human passengers to initially traverse the final frontier, the path I believe we should be taking for the foreseeable future. An excerpt:

Because of the costs and risks of physical human spaceflight, I’m personally more excited about a different kind of space exploration. Advances in miniaturization have made it relatively cheap to launch lots of microsatellites to near-Earth space. These craft will soon be sent further out, and it won’t be long before there are lots of little spacecraft landing on the Moon. From our homes on Earth, we could all take virtual joyrides across the lunar surface, with these mini explorers acting as our distant eyes.

It’s possible that this is how humans will first explore Mars, too—with a robotic body that needs no food, water, shelter, or sleep, serving as the avatar of human operators. The humans working the robot will still need to be located near Mars, not Earth, because of the significant delay in radio communications between the two planets. (The lag between commands sent and data received would range from eight to 42 minutes.) But the humans need not undertake the risks and challenges of landing on Mars: People in orbit at Mars could directly and immediately control Mars robots, all while staying in a ship or station tricked out with everything our delicate bodies need to survive. 

Then again, depending on how technology advances, it may be that the division that we now draw between “human” and “robotic” exploration will be archaic in 50 years.•


Google may have begun as an AI company, but it will merely be a wildly successful search-and-ad outfit (for as long as that’s a money-making endeavor) unless its X division becomes a nouveau Bell Labs. Having a few of the moonshots land is paramount, whether it be self-driving-car software or medical breakthroughs. The will is there, even if investors would rather Page, Brin, et al. take the myopic view and settle for selling soap while algorithmically sorting selfies.

From Alistair Barr at the Wall Street Journal:

Alphabet Chief Financial Officer Ruth Porat warned repeatedly on the conference call that spending will increase this year, while Google Chief Executive Sundar Pichai laid out his own plans to ramp up investments in the company’s main Internet businesses.

On Monday, following strong fourth-quarter results, Ms. Porat said spending will increase in part to expand Alphabet’s fast Fiber Internet service to new U.S. cities. An aggressive expansion of Fiber would cost tens of billions of dollars, noted Citigroup analyst Mark May. He expects Alphabet capital expenditures to climb by about $2 billion to $12 billion in 2016.

Ms. Porat also said Alphabet is committed to expanding its self-driving car business, which is currently testing prototypes in California and Austin, Tex.

These moonshots are part of Alphabet’s new “Other Bets” segment, which it said on Monday lost $3.6 billion in 2015, up from losses of $1.9 billion in 2014.

Ms. Porat also said that some of Alphabet’s biggest moonshots are inside its core Google Internet business — a division that’s supposed to generate the profits that pay for new, ambitious projects. Mr. Pichai said Google’s cloud-computing business will be a big area of investment in 2016, along with virtual reality and artificial intelligence.•


So much has been written about the Internet of Things, the pluses and minuses, but Bruce Schneier does an impressive job of analyzing its challenges in a new Forbes piece. We won’t just log into the machine–the machine will be everything, though it will be so quiet, not even a hum, that we’ll barely notice it. The writer identifies the IoT as a “world-sized robot” and calls for the establishment of a “Department of Technology Policy.” The opening:

The Internet of Things is the name given to the computerization of everything in our lives. Already you can buy Internet-enabled thermostats, light bulbs, refrigerators, and cars. Soon everything will be on the Internet: the things we own, the things we interact with in public, autonomous things that interact with each other.

These “things” will have two separate parts. One part will be sensors that collect data about us and our environment. Already our smartphones know our location and, with their onboard accelerometers, track our movements. Things like our thermostats and light bulbs will know who is in the room. Internet-enabled street and highway sensors will know how many people are out and about—and eventually who they are. Sensors will collect environmental data from all over the world.

The other part will be actuators. They’ll affect our environment. Our smart thermostats aren’t collecting information about ambient temperature and who’s in the room for nothing; they set the temperature accordingly. Phones already know our location, and send that information back to Google Maps and Waze to determine where traffic congestion is; when they’re linked to driverless cars, they’ll automatically route us around that congestion. Amazon already wants autonomous drones to deliver packages. The Internet of Things will increasingly perform actions for us and in our name. 

Increasingly, human intervention will be unnecessary.•


Ted Cruz may be the most loathsome serious Presidential candidate of our time, and Marco Rubio seems a bullshit artist who reportedly has a lot of skeletons rattling around in his closet, but this pair of jokers outmaneuvered Donald Trump on many levels in Iowa. That’s because he’s a dummkopf in general and particularly in regard to politics. Arrogant people lacking in self-awareness almost always also lack attention to detail.

Having entered the race on a whim because he hoped to masturbate to donuts in the Lincoln bedroom, Trump then received an avalanche of attention for his vicious and biased remarks, propelling his whole idiot campaign. Now, even though he’s not been completely ejected from the clown-car process, thanks to the sheer odiousness of his fellow candidates, Trump’s flailing wildly. Here’s what the man who compared Ben Carson to a child molester had to say post-Hawkeye State:

[Trump] added that a mailer in Iowa sent by Cruz’s campaign that revealed neighbors’ voting participation was malicious: “He insulted Ben Carson by doing what he did to Ben Carson. That was a disgrace…. He’s a man of insult.”•

In a rare moment of clarity, the blockhead who managed to bankrupt a casino acknowledged that he screwed the pooch in Iowa. From Kia Makarechi at Vanity Fair:

The post-Iowa reckoning continued Wednesday morning, with Donald Trump speed-dialing into MSNBC’s Morning Joe for an awkward postmortem. Trump, who has been the Republican presidential poll-leader for months, placed second in the Iowa caucuses Monday night, three percentage points behind Ted Cruz.

To hear Trump tell it, the loss was easily preventable. The only problem? He has no idea how to run a campaign.

“I think we could have used a better ground game, a term I wasn’t even familiar with,” Trump said. “You know, when you hear ‘ground game,’ you say what the hell is that? Now I’m familiar with it. But, you know, I think in retrospect we should have had a better ground game. I would have funded a better ground game, but people told me our ground game was fine. And by most standards it was.”

Cruz’s campaign has been openly gloating about how it used advanced data modeling to invent positions for the candidate that would resonate with Iowa voters. Did you know that the Senator from Texas has strong views on Iowa’s fireworks ban? Neither did the Senator from Texas, until someone on his analytics team identified the ban as an issue that could sway some Iowan hearts and minds.•

 


Everyone marvels at the otherworldly ambition of the outré Arab state of Dubai, but nobody does anything about it. I’d like a full-length book about the emirate from Douglas Coupland or George Saunders, and I’d like it now. One decadent desert dream which may or may not come to fruition: an underwater tennis complex. It could cost $2.5 billion, but who’s counting? Castles carved into the sand by quasi-slave labor in the 21st century should be almost beyond reckoning, yet sadly it doesn’t seem an anachronism. From Codelia Mantsebo at Elite Traveler:

After boasting of a tennis court high up in the air built atop the 1,000-foot-tall Burj al Arab hotel, plans for the world’s first underwater tennis court in Dubai were revealed in April last year. Today, Kotala has revealed the project has eyed potential US investors to turn this project into a reality while he works on the final designs for the concept.

In April last year, Polish architect Krzysztof Kotala made global headlines when he unveiled initial designs of the Underwater Dubai Tennis Center. According to Kotala, plans for this venture are set to move a step closer to reality as he confirmed he was in talks with US investors. He also confirmed he is currently working on the final designs for the concept.•


There are dual, deep-seated reasons for the modern preoccupation with apocalypse, which has never been more pronounced in literature and art. Part of it has to do with a dissatisfaction with what we’ve created and will create. It’s the ultimate nostalgia: We dream of a board clear even of us. The other part of the equation, I think, is a collective attempt to wrest control of what may turn out to be the doom of the species, the extinction that could ultimately be our fate. Like a terminally ill person with a handful of pills, we’d like the endgame to be played by our rules. In our sci-fi dress rehearsals, at least, we’re in charge.

On the topic of apocalypse, Frank Bures has penned an especially graceful Aeon essay, trying to make sense of his–and society’s–foreboding feelings in the Anthropocene. He believes it has to do with the ever-growing machine we’ve invented, which provides for us in fascinating ways and may be the death of us. As Bures writes, we have the “feeling that we are part of something over which we have no control, of which we have no real choice but to keep being part of.”

The opening:

One day in the early 1980s, I was flipping through the TV channels, when I stopped at a news report. The announcer was grey-haired. His tone was urgent. His pronouncement was dire: between the war in the Middle East, famine in Africa, AIDS in the cities, and communists in Afghanistan, it was clear that the Four Horsemen of the Apocalypse were upon us. The end had come.

We were Methodists and I’d never heard this sort of prediction. But to my grade-school mind, the evidence seemed ironclad, the case closed. I looked out the window and could hear the drumming of hoof beats.

Life went on, however, and those particular horsemen went out to pasture. In time, others broke loose, only to slow their stride as well. Sometimes, the end seemed near. At others, it would recede. But over the years, I began to see it wasn’t the end that was close. It was our dread of it. The apocalypse wasn’t coming: it was always with us. It arrived in a stampede of our fears, be they nuclear or biological, religious or technological.

In the years since, I watched this drama play out again and again, both in closed communities such as Waco and Heaven’s Gate, and in the larger world with our panics over SARS, swine flu, and Y2K. In the past, these fears made for some of our most popular fiction. The alien invasions in H G Wells’s War of the Worlds (1898); the nuclear winter in Nevil Shute’s On the Beach (1957); God’s wrath in the Left Behind series of books, films and games. In most versions, the world ended because of us, but these were horrors that could be stopped, problems that could be solved.

But today something is different. Something has changed.•


Human dominance in the game of Go is going but not yet gone. That’s one of the clarifying points Gary Marcus makes in a Backchannel piece that looks at Google’s machine intelligence triumphing over a human “champion” in the ancient game. Even when AI becomes the true Go champion, that doesn’t mean such knowledge will be easily transferable to other areas. Furthermore, the psychologist explains that the Google system isn’t in fact a pure neural network but a hybrid. An excerpt:

The European champion of Go is not the world champion, or even close. The BBC, for example, reported that “Google achieves AI ‘breakthrough’ by beating Go champion,” and hundreds of other news outlets picked up essentially the same headline. But Go is scarcely a sport in Europe; and the champion in question is ranked only #633 in the world. A robot that beat the 633rd-ranked tennis pro would be impressive, but it still wouldn’t be fair to say that it had “mastered” the game. DeepMind made major progress, but the Go journey is still not over; a fascinating thread at YCombinator suggests that the program — a work in progress — would currently be ranked #279.

Beyond the far from atypical issue of hype, there is an important technical question: what is the nature of the computer system that won? 

By way of background, there is a long debate about so-called neural net models (which in their most modern form are called “deep learning”) and classical “Good Old-Fashioned Artificial Intelligence” (GOFAI) systems, of the form that the late Marvin Minsky advocated. Minsky, and others like his AI co-founder John McCarthy, grew up in the logicist tradition of Bertrand Russell, and tried to couch artificial intelligence in something like the language of logic. Others, like Frank Rosenblatt in the 50s, and present-day deep learners like Geoffrey Hinton and Facebook’s AI Director Yann LeCun, have couched their models in terms of simplified neurons that are inspired to some degree by neuroscience.

To read many of the media accounts (and even the Facebook posts of some of my colleagues), DeepMind’s victory is a resounding win for the neural network approach, and hence another demerit for Minsky, whose approach has very much lost favor.

But not so fast.•


It’s not really stunning that a patriarchal institution like Oral Roberts University is doing something remarkably invasive, but the question is whether the school is an outlier for long or just for now. ORU will require incoming freshmen to wear a Fitbit in order to monitor their exercise, food, sleep, location and body weight. A school founded by a “faith healer” that’s utilizing new technologies is bound to “lay its hands” on others in off-putting ways, though wholly secular bodies will likely attempt similar things in the not-too-distant future.

From Elizabeth Chuck at NBC News:

An Oklahoma university is taking a novel approach to fighting the “Freshman 15”: Require all incoming students to wear fitness trackers.

Oral Roberts University, a Christian university in Tulsa, announced earlier this month that all first-years must wear Fitbits — watches that track how much activity a person does. Their fitness data will be tracked by the school and will affect students’ grades.

While mandatory for all incoming freshmen this year, Oral Roberts said it “has opened the program up to all students,” and said the campus bookstores have already sold more than 550 of the popular gadgets.

The university has always included a fitness component in its curriculum, requiring students to “manually log aerobics points in a fitness journal” in past years. The students get graded on their level of aerobic activity.

Now, instead of tediously entering the data by hand, it will be automatically tracked and submitted by the Fitbits, which retail for about $150.

“ORU offers one of the most unique educational approaches in the world by focusing on the Whole Person — mind, body and spirit,” ORU President William M. Wilson said in a statement. “The marriage of new technology with our physical fitness requirements is something that sets ORU apart.”

The Fitbit requirement is a first of its kind for colleges and universities, Oral Roberts said.•


I don’t think earthlings should travel to Mars by 2025. We’re in a rush, sure, but probably not in that much of a hurry. My own hope would be that in the near-term future we send unpeopled probes to our neighbor, loaded with 3D printers that begin experimenting with building a self-sustaining colony.

Of course, I’m not a billionaire, so my vote really won’t amount to much. The best argument that Elon Musk and other nouveau space entrepreneurs have for leading us at warp speed into being a multi-planet species isn’t only existential risk but also that the next generation of fabulously wealthy technologists may turn their attention from the skies. It wouldn’t be the first time the stars lost our interest.

A transcript of Musk discussing space exploration at last week’s 2016 StartmeupHK Venture Forum in Hong Kong:

Question:

Let’s get even more way out there and talk about SpaceX. You’ve said that your ultimate goal is getting to Mars. Why is Mars important? Why does Mars matter?

Elon Musk:

It’s really a fundamental decision we need to make as a civilization. What kind of future do we want? Do we want a future where we’re forever confined to one planet until some eventual extinction event, however far in the future that might occur? Or do we want to become a multi-planet species, and then ultimately be out there among the stars, among many planets, many star systems? I think the latter is a far more exciting and inspiring future than the former. 

Mars is the next natural step. In fact, it’s really the only planet we have a shot of establishing a self-sustaining city on. I think once we do establish such a city, there will be a strong forcing function for the improvement of spaceflight technology that will then enable us to establish colonies elsewhere in the solar system and ultimately extend beyond our solar system.

There’s the defensive reason of protecting the future of humanity, ensuring that the light of consciousness is not extinguished should some calamity befall Earth. That’s the defensive reason, but personally I find what gets me more excited is that this would be an incredible adventure–like the greatest adventure ever. It would be exciting and inspiring, and there needs to be things that excite and inspire people. There have to be reasons why you get up in the morning. It can’t just be solving problems. It’s got to be something great is going to happen in the future.

Question:

It’s not an exit strategy or back-up plan for when Earth fails. It’s also to inspire people and to transcend and go beyond our mental limits of what we think we can achieve.

Elon Musk:

Think of how sort of incredible the Apollo program was. If you ask anyone to name some of humanity’s greatest achievements of the 20th century, the Apollo program, landing on the moon, would in many places be number one.

Question:

When will there be a manned SpaceX mission and when will you go to Mars?

Elon Musk:

We’re pretty close to sending crew up to the Space Station. That’s currently scheduled for the end of next year. So that will be exciting, with our Dragon 2 spacecraft. Then we’ll have a next-generation rocket and spacecraft beyond the Falcon-Dragon series, and I’m hoping to describe that architecture later this year at the International Astronautical Congress, which is the big international space event every year. I think that will be quite exciting.

In terms of me going, I don’t know, maybe four or five years from now. Maybe going to the Space Station would be nice. In terms of the first flights to Mars, we’re hoping to do that around 2025. Nine years from now or thereabouts. 

Question:

Oh my goodness, that’s right around the corner.

Elon Musk:

Well, nine years. Seems like a long time to me.

Question:

Are you doing the zero-gravity training?

Elon Musk:

I’ve done the parabolic flights. Those are fun.

Question:

You must be reading up and doing the physical work to get ready for the ultimate flight of your life.

Elon Musk:

Umm, I don’t think it’s that hard, honestly. Just float around. It’s not that hard to float around. [Laughter] Well, going to Mars is going to be hard and dangerous and difficult in every way, and if you care about being safe and comfortable going to Mars would be a terrible choice.


Do you want a digital assistant 10,000 times more useful than Siri? A voice-activated universal remote that runs your life? I suppose the answer is “yes.”

Moore’s Law made supercomputers of yore affordable and portable for almost everyone, stealing them from the domain of superwealthy corporations and states and sliding them into our shirt pockets. Similarly, efforts are being made to create AI that acts as a voice-activated universal remote for our lives, anticipating and satisfying our needs. We may soon be able to enjoy the benefits of a “staff” the way our richer brethren do. 

The thing is, most of the new technologies have not created more leisure. Will these tools, if realized, be the same? If they do actually reduce toil, what will we use the extra bandwidth for?

From Zoë Corbyn’s Guardian article about Dag Kittlaus’ attempts to create not Frankenstein but Igor:

Kittlaus is the co-founder and CEO of Viv, a three-year-old AI startup backed by $30m, including funds from Iconiq Capital, which helps manage the fortunes of Mark Zuckerberg and other wealthy tech executives. In a blocky office building in San Jose’s downtown, the company is working on what Kittlaus describes as a “global brain” – a new form of voice-controlled virtual personal assistant. With the odd flashes of personality, Viv will be able to perform thousands of tasks, and it won’t just be stuck in a phone but integrated into everything from fridges to cars. “Tell Viv what you want and it will orchestrate this massive network of services that will take care of it,” he says.

It is an ambitious project but Kittlaus isn’t without a track record. The last company he co-founded invented Siri, the original virtual assistant now standard in Apple products. Siri Inc was acquired by the tech giant for a reported $200m in 2010. The inclusion of the Siri software in the iPhone in 2011 introduced the world to a new way to interact with a mobile device. Google and Microsoft soon followed with their versions. More recently they have been joined by Amazon, with the Echo you can talk to, and Facebook, with its experimental virtual assistant, M.

But, Kittlaus says, all these virtual assistants he helped birth are limited in their capabilities. Enter Viv. “What happens when you have a system that is 10,000 times more capable?” he asks. “It will shift the economics of the internet.”•


Some of the things contemporary consumers most desire to possess are tangible (smartphones) and others not at all (Facebook, Instagram, etc.). In fact, many want the former mainly to get the latter. A social media “purchase” requires no money but is a trade of information for attention, a dynamic that’s been widely acknowledged, but one that still stuns me. Our need to share ourselves–to write our names Kilroy-like on a wall, as Hunter S. Thompson once said–is etched so deeply in our brains. Manufacturers have used psychology to sell for at least a century, but the transaction has never been purer, never required us to not only act on impulse but to publish that instinct as well. Judging by the mood of America, this new thing, while it may provide some satisfaction, also promotes an increased hunger in the way sugar does. And while the Internet seems to encourage individuality, its mass use and many memes suggest something else.

On a somewhat related topic: Rebecca Spang’s Financial Times article analyzes a new book which argues that a consumerist shift is more a political movement than we’d like to believe, often a culmination of large-scale state decisions rather than of personal choice. The passage below is referring to material goods, but I think the implications for the immaterial are the same. The excerpt:

In Empire of Things, Frank Trentmann brings history to bear on all these questions. His is not a new subject, per se, but his thick volume is both an impressive work of synthesis and, in its emphasis on politics and the state, a timely corrective to much existing scholarship on consumption. Based on specialist studies that range across five centuries, six continents and at least as many languages, the book is encyclopedic in the best sense. In his final pages, Trentmann intentionally or otherwise echoes Diderot’s statement (in his own famous Encyclopédie) that the purpose of an encyclopedia is to collect and transmit knowledge “so that the work of preceding centuries will not become useless to the centuries to come”. Empire of Things uses the evidence of the past to show that “the rise of consumption entailed greater choice but it also involved new habits and conventions . . . these were social and political outcomes, not the result of individual preferences”. The implications for our current moment are significant: sustainable consumption habits are as likely to result from social movements and political action as they are from self-imposed shopping fasts and wardrobe purges.

When historians in the 1980s-1990s first shifted from studying production to consumption, our picture of the past became decidedly more individualist. In their letters and diaries, Georgian and Victorian consumers revealed passionate attachments to things — those they had and those they craved. Personal tastes and preferences hence came to rival, then to outweigh, abstract processes (industrialisation, commodification, etc) as explanations for historical change. The world looked so different! Studied from the vantage point of production, the late 18th and 19th centuries had appeared uniformly dark and dusty with soot; imagined from the consumer’s perspective, those same years glowed bright with an entire spectrum of strange, distinct colours (pigeon’s breast, carmelite, eminence, trocadero, isabella, Metternich green, Niagra [sic] blue, heliotrope). At the exact moment when Soviet power seemed to have collapsed chiefly from the weight of repressed consumer desire, consumption emerged as a largely positive, almost liberating, historical force. “Material culture” became a common buzzword; “thing theory” — yes, it really is a thing — was born.•


Asking if innovation is over is no less narcissistic than suggesting that evolution is done. It flatters us to think that we’ve already had all the good ideas, that we’re the living end. More likely, we’re always closer to the beginning.

Of course, when looking at relatively short periods of time, there are ebbs and flows in invention that have serious ramifications for the standard of living. In Robert Gordon’s The Rise and Fall of American Growth, the economist argues that the 1870-1970 period was a golden age of productivity and development unknown previously and unmatched since.

In an excellent Foreign Affairs review, Tyler Cowen, who himself has worried that we’ve already picked all the low-hanging fruit, lavishly praises the volume–“likely to be the most interesting and important economics book of the year.” But in addition to acknowledging a technological slowdown in the last few decades, Cowen also wisely counters the book’s downbeat tone while recognizing the obstacles to forecasting, writing that “predicting future productivity rates is always difficult; at any moment, new technologies could transform the U.S. economy, upending old forecasts. Even scholars as accomplished as Gordon have limited foresight.” In fact, he points out that the author, before his current pessimism, predicted earlier this century very healthy growth rates.

My best guess is that there will always be transformational opportunities, ripe and within arm’s length, waiting for us to pluck them.

An excerpt:

In the first part of his new book, Gordon argues that the period from 1870 to 1970 was a “special century,” when the foundations of the modern world were laid. Electricity, flush toilets, central heating, cars, planes, radio, vaccines, clean water, antibiotics, and much, much more transformed living and working conditions in the United States and much of the West. No other 100-year period in world history has brought comparable progress. A person’s chance of finishing high school soared from six percent in 1900 to almost 70 percent, and many Americans left their farms and moved to increasingly comfortable cities and suburbs. Electric light illuminated dark homes. Running water eliminated water-borne diseases. Modern conveniences allowed most people in the United States to abandon hard physical labor for good.

In highlighting the specialness of these years, Gordon challenges the standard view, held by many economists, that the U.S. economy should grow by around 2.2 percent every year, at least once the ups and downs of the business cycle are taken into account. And Gordon’s history also shows that not all GDP gains are created equal. Some sources of growth, such as antibiotics, vaccines, and clean water, transform society beyond the size of their share of GDP. But others do not, such as many of the luxury goods developed since the 1980s. GDP calculations do not always reflect such differences. Gordon’s analysis here is mostly correct, extremely important, and at times brilliant—the book is worth buying and reading for this part alone.

Gordon goes on to argue that today’s technological advances, impressive as they may be, don’t really compare to the ones that transformed the U.S. economy in his “special century.” Although computers and the Internet have led to some significant breakthroughs, such as allowing almost instantaneous communication over great distances, most new technologies today generate only marginal improvements in well-being. The car, for instance, represented a big advance over the horse, but recent automotive improvements have provided diminishing returns. Today’s cars are safer, suffer fewer flat tires, and have better sound systems, but those are marginal, rather than fundamental, changes. That shift—from significant transformations to minor advances—is reflected in today’s lower rates of productivity.•


An Economist article looks at the latest report on automation by Carl Benedikt Frey, Michael Osborne and Craig Holmes, which argues that poorer nations are more prone to technological unemployment than, say, America, even though the U.S. holds an advantage in AI.

Because such countries are not yet as widely engaged in information work, their Industrial Age could be interrupted mid-epoch before they arrive at the Information Age. It’s like being pushed down a ladder when you’ve only scaled it part of the way. The academics acknowledge, though, that everything from policy to consumer preference may forestall the rise of the machines in India, China and elsewhere. After all, Foxconn’s promised one-million-robot factory workforce has yet to be realized.

An excerpt:

BILL BURR, an American entertainer, was dismayed when he first came across an automated checkout. “I thought I was a comedian; evidently I also work in a grocery store,” he complained. “I can’t believe I forgot my apron.” Those whose jobs are at risk of being displaced by machines are no less grumpy. A study published in 2013 by Carl Benedikt Frey and Michael Osborne of Oxford University stoked anxieties when it found that 47% of jobs in America were vulnerable to automation. Machines are mastering ever more intricate tasks, such as translating texts or diagnosing illnesses. Robots are also becoming capable of manual labour that hitherto could be carried out only by dexterous humans.

Yet America is the high ground when it comes to automation, according to a new report* from the same pair along with other authors. The proportion of threatened jobs is much greater in poorer countries: 69% in India, 77% in China and as high as 85% in Ethiopia. There are two reasons. First, jobs in such places are generally less skilled. Second, there is less capital tied up in old ways of doing things. Driverless taxis might take off more quickly in a new city in China, for instance, than in an old one in Europe.

Attracting investment in labour-intensive manufacturing has been a route to riches for many developing countries, including China. But having a surplus of cheap labour is becoming less of a lure to manufacturers. An investment in industrial robots can be repaid in less than two years. This is a particular worry for the poor and underemployed in Africa and India, where industrialisation has stalled at low levels of income—a phenomenon dubbed “premature deindustrialisation” by Dani Rodrik of Harvard University.•
