Science/Tech

You are currently browsing the archive for the Science/Tech category.


In 2016, GM promises to begin selling the first affordable EV with a 200-mile range. That might prove merely a short-term victory for Big Auto, though history seems to suggest otherwise. 3D printers may ultimately disrupt the business of car manufacture, seriously lowering the barrier to entry and allowing for the extreme customization of cars, or that may very well not come to pass. If I had been around in the early days of Homebrew, I probably would have felt the same way about personal computers, or at the very least software, but I would have been wrong. The same may hold true for the auto sector.

In a Wired cover story about GM seemingly outfoxing Tesla thus far in the EV market, an unlikely twist to be sure, Alex Davies writes about the urgency of the Chevy Bolt’s creation. An excerpt:

These days it’s a refrain among GM executives that in the next five to 10 years, the auto industry will change as much as it has in the past 50. As batteries get better and cheaper, the propagation of electric cars will reinforce the need to build out charging infra­structure and develop clean ways to generate electricity. Cars will start speaking to each other and to our infrastructure. They will drive themselves, smudging the line between driver and passenger. Google, Apple, Uber, and other tech companies are invading the transportation marketplace with fresh technology and no ingrained attitudes about how things are done.

The Bolt is the most concrete evidence yet that the largest car companies in the world are contemplating a very different kind of future too. GM knows the move from gasoline to electricity will be a minor one compared to where customers are headed next: away from driving and away from owning cars. In 2017, GM will give Cadillac sedans the ability to control themselves on the highway. Instead of dismissing Google as a smart-aleck kid grabbing a seat at the adults’ table, GM is talking about partnering with the tech firm on a variety of efforts. Last year GM launched car-sharing programs in Manhattan and Germany and has promised more to come. In January the company announced that it’s investing $500 million in Lyft, and that it plans to work with the ride-sharing company to develop a national network of self-driving cars. GM is thinking about how to use those new business models as it enters emerging markets like India, where lower incomes and already packed metro areas make its standard move—put two cars in every garage—unworkable.

This all feels strange coming from GM because, for all the changes of the past decade and despite the use of words like disruption and mobility, it’s no Silicon Valley outfit.

Tags: ,


New York City has earthquakes, but they’re so minor we never feel them. In most instances, the earth prefers to swallow us up one by one. But it’s different in Los Angeles.

L.A.’s temperamental turf is the subject of The Myth of Solid Ground: Earthquakes, Prediction, and the Fault Line Between Reason and Faith, a fine 2005 volume on secular shaking by David L. Ulin. Among other things, the book takes up the thorny issue of earthquake prediction, by scientists and psychics, the concerned and the kooky. An excerpt about Linda Curtis, who works for the Southern California field office of the United States Geological Survey in Pasadena:

Curtis is, in many ways, the USGS gatekeeper, the public affairs officer who serves as a frontline liaison with the community and the press. Her office sits directly across the hall from the conference room, and if you call the Survey, chances are it will be her low-key drawl you’ll hear on the line. In her late forties, dark-haired and good-humored, Curtis has been at the USGS since 1979, and in that time, she’s staked out her own odd territory as a collector of earthquake predictions, which come across the transom at sporadic but steady intervals, like small seismic jolts themselves.

“I’ve been collecting almost since day one,” she tells me on a warm July afternoon in her office, adding that it’s useful for USGS to keep records, if only to mollify the predictors, many of whom view the scientific establishment with frustration, paranoia even, at least as far as their theories are concerned.

“Basically,” she says, “we are just trying to protect our reputation. We don’t want to throw these predictions in the wastebasket, and then a week later…” She chuckles softly, a rolling R sound as thick and throaty as a purr. “Say somebody predicted a seven in downtown L.A., and we ignored it. Can you imagine the reaction if it actually happened? So this is sort of a little bit of insurance. If you send us a prediction, we put it in the file.”•

________________________

Marjoe Gortner–in Sensurround!


In his Reuters report from Davos, Ben Hirschler identifies several game-changing technologies that World Economic Forum attendees believe will be possible by 2025: implantable mobile phones, 3D-printed organs for transplant, clothes and reading glasses connected to the Internet. Apart from the bioprinting of organs, these seem like fairly safe bets. If human kidneys can be printed by that year (or any year near it), we’ll have moved into a remarkable new age. Of course, all new eras bring challenges as well.

An excerpt:

One of the most in-demand participants in Davos this year is not a central banker, CEO or politician but a prize-winning South Korean robot called HUBO, which is strutting its stuff amid a crowd of smartphone-clicking delegates.

But there are deep worries, as well as awe, at what technology can do.

A new report from UBS released in Davos predicts that extreme levels of automation and connectivity will worsen already deepening inequalities by widening the wealth gap between developed and developing economies.

“The fourth industrial revolution has potentially inverted the competitive advantage that emerging markets have had in the form of low-cost labor,” said Lutfey Siddiqi, global head of emerging markets for FX, rates and credit at UBS.

“It is likely, I would think, that it will exacerbate inequality if policy measures are not taken.”

An analysis of major economies by the Swiss bank concludes that Switzerland is the country best-placed to adapt to the new robot world, while Argentina ranks bottom.•

Tags: ,

Jack Gariss, head of “Mind Control,” conducting a group hooked up to bioscope machines, Los Angeles, March 1972.

In the same way that you either bring the water to L.A. or you bring L.A. to the water, Singularitarians want to bring the computer inside of humans or put humans inside computers. They’d ideally prefer both options. 

Digitizing brain function seems impossibly far off in the future to me, but if it does someday become reality, one possibility is that we could capture the wetware in our heads, make copies of it, and upload it into a video-game-ish scenario or a synthetic bodysuit of sorts.

Yes, far-flung stuff, and none of us will live to see it, though Nell Watson of Singularity U believes it could be possible this century. From Marie Boran at the Irish Times:

Nell Watson’s job is to think about the future and she says: “I often wonder if, since we could be digitized from the inside out – not in the next 10 years but sometime in this century – we could create a kind of digital heaven or playground where our minds will be uploaded and we could live with our friends and family away from the perils of the physical world.

“It wouldn’t really matter if our bodies suddenly stopped functioning, it wouldn’t be the end of the world. What really matters is that we could still live on.”

In other words you could simply upload to a new, perhaps synthetic, body.

As a futurist with Singularity University (SU), a Silicon Valley-based corporation that is part university, part business incubator, Watson, in her own words, is “someone who looks at the world today and projects into the future; who tries to figure out what current trends mean in terms of the future of technology, society and how these two things intermingle”.

She talks about existing technologies that are already changing our bodies and our minds: “There are experiments using DNA origami. It’s a new technique that came out a few years ago and uses the natural folding abilities of DNA to create little Lego blocks out of DNA on a tiny, tiny scale. You can create logic gates – the basic components of computers – out of these things.

“These are being used experimentally today to create nanobots that can go inside the bloodstream and destroy leukaemia cells, and in trials they have already cured two people of leukaemia. It is not science fiction: it is fact.”•

Tags: ,


Five years ago when I looked back at the 1978 Michael Crichton film, Coma, through the lens of the new millennium, I wrote this: 

The nouveau tech corporations are aimed at locating and marking our personal preferences, tracking our interests and even our footsteps, knowing enough about what’s going on inside our heads to predict our next move. In a time of want and desperation and disparity of wealth, how much information will we surrender?•

It seems truer now than in 2011, as wearables multiply and the Internet of Things comes closer to fruition. Whenever data is collected, it will be sold, whether that was the original intent or not. And the collection process will grow so seamless and unobtrusive we’ll hardly notice it.

From “What Happens to the Data Collected On Us While We Sleep,” by Meghan Neal at Motherboard:

We already know that the major data brokers like Acxiom and Experian collect thousands of pieces of information on nearly every US consumer to paint a detailed personality picture, by tracking the websites we visit and the things we search for and buy. These companies often know sensitive things like our sexual preference or what illnesses we have.

Now with wearables proliferating (it’s estimated there will be 240 million devices sold by 2019) that profile’s just going to get more detailed: Get ready to add how much body fat you have, when you have sex, how much sleep you get, and all sorts of physiological data into the mix.

“Whenever there’s information that you’re collecting about yourself and you’re quantifying, there’s a very good chance that it will end up in a profile of you,” Michelle De Mooy, a health privacy expert at the Center for Democracy & Technology, told me.

This has many privacy and security experts, politicians, and the government wringing their hands, worried that if and when all that granular personal information gathered gets in the hands of advertisers and data brokers, it could be used in ways we never intended or even suspected.

“Biometric data is perhaps the last ‘missing link’ of personal information collected today,” said Jeffrey Chester, Executive Director of the Center for Digital Democracy.

“The next great financial windfall for the digital data industry will be our health information, gathered thru wearables, swallowable pills and an ever-present Internet of Things,” Chester told me. “Pharma companies, hospitals and advertisers see huge profits in our health information.”•

Tags: , ,

From the September 26, 1909 New York Times:

Paris–Jules Bois believes that motor cars will in a hundred years be things of the past, and that a kind of flying bicycle will have been invented which will enable everybody to traverse the air at will, far above the earth. Hardly any one will remain in the cities at night. They will be places of business only. People of every class will reside in the country or in garden towns at considerable distances from the populous centres. Pneumatic railways and flying cars and many other means of quick transit will be so developed that the question of time will enter but little into one’s choice of a home. Transportation will be immensely cheaper than it is at present. As there will be less crowding, realty values and rentals will be less exorbitant.•

Echoing what physicist Stephen Hsu wrote in Nautilus in 2015 (a piece that made the “50 Great 2015 Articles Online for Free” list), Reed Hastings of Netflix believes the future will see a race between genetically engineered humans and machine intelligence. To me, that seems the most likely outcome. In the very long run, we will probably increasingly drive our own evolution. We’ll be the “existential risk” to what Homo sapiens currently is.

From Chris O’Brien at Venturebeat:

During a conversation on stage today at the DLD Conference in Munich, Germany, Hastings said he was far less worried about looming threats of an AI-triggered apocalypse than are many other observers, such as Tesla’s Elon Musk.

“Some people worry about what happens when machine intelligence is too strong,” Hastings said. “That’s like worrying about our Mars colony and people being overweight on our Mars colony. We can deal with that later.”

He emphasized, rather, that machine intelligence is just beginning to be felt, through applications like Netflix’s recommendation engine. But he expects that impact to accelerate and predicted that the world is only five to 15 years away from a time when we can hold a three-person conversation and not be sure which of the three is a machine.

Even then, Hastings believes all will not be lost for the puny human race. As AI accelerates, he predicts that so, too, will the ability to augment our genetic code.•

Tags: ,


Audio only of Ray Bradbury lecturing in 1964 at UCLA during the “most exciting age in the history of mankind,” labeling the Space Age as greater than the Renaissance. He was right that we would soon be on the moon, but he didn’t foresee the forestalling of space exploration post-Apollo or the geographical barriers to global unity. Bradbury also speaks adoringly of Mad magazine and discusses a short story of his to be published in Life at the beginning of 1965.


Here’s something I feel very good about guaranteeing: Sleep will not be “cured” in 25 years. 

Transhumanist Presidential candidate Zoltan Istvan sees things very differently, believing sleep a “disease,” a dress rehearsal for death that robs us of waking life. He thinks within a quarter century we might not even need to sleep. I, on the other hand, think much of my self-awareness comes from analyzing my dreams. Beyond that, what Istvan proposes seems not even a remote biological possibility by 2040.

From his article on the topic at Vice Motherboard:

To me, sleeping is a disease. Luckily, in the next 25 years, scientists may cure it. For millions, that cure can’t come soon enough. I hate sleeping and always have. I see sleeping as an early form of dipping in and out of death. Sleeping is probably the most wasteful thing all humans do—we spend a third of our lives in basically a lobotomized state. I wish I could will myself from doing it, but like everyone else, I’m a slave to my body and mind, and I require sleep to function normally.

While much has been made about how beneficial a good night of sleep is, few discuss that sleeping is stealing away conscious time with loved ones, hampering economies around the world, and even indirectly hurting our bodies. We should never forget we age whether we’re awake or sleeping. And while the studies say the better we sleep the longer we live, this information may be misleading. I believe we age much more in our sleep than our lifespans gain from sleeping well. Sleeping—like being awake—is slowly killing us.

Scientifically speaking, sleep is a process where internal restoration and recuperation of the body and mind takes place. Sleep is comprised of various cycles, which are often separated by two classifications: non-REM and REM sleep.

There are numerous researchers in the world working on ways to try to remain alert despite sleep deprivation.•

Tags:


In a Backchannel piece, Steven Levy shares much of what he learned during an inside look at Google’s autonomous-car mission command at the decommissioned Castle Air Force Base in Atwater, California. Most of the (non-)drivers hired to put miles on the vehicles are recent Liberal Arts grads who test the prototypes on streets in Mountain View and Austin. Some are even employed as human props, known as “professional pedestrians.” “We just have to learn to trust,” one tells Levy. It seems the tight-lipped company’s testing of the cars may have gone beyond what people realize.

An excerpt:

Google’s ultimate goal, of course, is to make a transition from testing to systems where no safety drivers are needed — just passengers. For some time, Google has been convinced that the semiautonomous systems that others champion (which include various features like collision prevention, self-parking, and lane control on highways) are actually more dangerous than the so-called Level Four degree of control, where the car needs no human intervention. (Each of the other levels reflects a degree of driver involvement.) The company is convinced that with cars that almost but don’t drive themselves, humans will be lulled into devoting attention elsewhere and unable to take quick control in an emergency. (Google came to that conclusion when it allowed some employees to commute with the cars, using autodrive only on premapped freeways. One Googler, perhaps forgetting that the company was capturing the whole ride on video, pretty much crawled into the backseat for a phone charger while the car sped along at 65 miles per hour.)

Google also believes that cars should be able to move around even with no humans in them, and it has been hoping for an official go-ahead to begin a shuttle service between the dozens of buildings it occupies in Mountain View, where slow-moving, no-steering-wheel prototypes would putter along by themselves to pick up Googlers. It was bitterly disappointed when the California DMV ruled it was not yet time for driverless cars to travel the streets, even in those limited conditions. The DMV didn’t even propose a set of requirements that Google could satisfy to make this happen. Meanwhile, Elon Musk, CEO of Tesla, is barreling ahead, introducing a driverless feature in his Tesla cars called Summon. He predicts that by 2018, Tesla owners will be able to summon their cars from the opposite coast, though it’s a mystery how the cars would recharge themselves every 200 or so miles.

But maybe Musk is not the first. When I discussed this with [program director Chris] Urmson, he postulated that in most states — California not among them — it was not illegal to operate driverless cars on public streets. I asked him whether Google had sent out cars with no one in them to pick up people in Austin. He would not answer.•

Tags: ,


In a Phys.org piece, astronomer Seth Shostak identifies the twenty-first century as the last one that may be ruled by Homo sapiens, with speciation being driven by three huge changes. Perhaps it’s surprising the writer believes humans living on other planets won’t be altered as radically as those on Earth engineered by General Artificial Intelligence. My bearish mind thinks 85 years is a very aggressive timeframe for what he proposes, but nothing about it seems theoretically impossible in the long run. 

An excerpt:

To begin with, we’re finally going to understand biology at a molecular level. DNA’s double helix was discovered a mere six decades ago, and now – for hardly more than a kilobuck – you can sequence the genome of your yorkie or yourself.

The relentless interplay of science and technology ensures that genomic knowledge will spawn a growing number of applications. Curing disease is one of these, and it’s obviously desirable. But our efforts won’t be limited to merely fixing ourselves; we’ll also opt for improvement. You may hesitate to endorse designer babies, but hot-rodding our children is as much on the horizon as the morning sun.

Number two on my list of major developments is expanding into nearby space. We need more resources – both acreage and raw materials – unless we’re happy to condemn our descendants to a limited lifestyle and unlimited war. You may worry about running out of oil, but that’s not the resource that should really make you antsy. We’re going to eat through the easily recoverable reserves of stuff like copper, zinc, and the platinum group metals in a matter of decades.

We can find more of these elements in asteroids, and already several companies are planning to do so. But nearby space could also provide unlimited real estate for siting the condos of the future. Everyone expects our progeny to establish colonies on the moon or Mars, but the better deal is to build huge, orbiting habitats in which you can live without a spacesuit. Think of scaling up the International Space Station a few thousand times. We can put unlimited numbers of people in such engineered environments, and sometime in this century we’ll start doing that. The days of being confined to the bassinette of our birth are coming to an end.

The third thing you can expect before the year 2100 is the development of generalized artificial intelligence (GAI). In other words, machines that don’t just play games like chess or Jeopardy, but can do the thinking required for any white-collar job, including all the ones at the top. And such machines won’t necessarily be large. A synapse in your brain is a few thousand nanometers in size. A transistor on a chip is hundreds of times smaller. The hardware necessary for human-level smarts – even today – could fit in an iPad.•

Tags:


Has there ever been an era when enthusiasts have gotten so far ahead of themselves in terms of scientific and technological possibilities? I was reading an article the other day, clearly written by an intelligent person, which proclaimed that by 2050 we would see the “end of death,” that we would have left mere biological life behind. I’m not saying that such a transition is impossible, but it won’t be happening during our lifetimes. We are, ultimately, toast.

Likewise, I have no doubt we can eventually colonize space if we don’t do ourselves in first. We should certainly be sending human-less probes and 3D printers to Mars and elsewhere, but it’s probably a good idea to stay realistic about what we can accomplish in each era. In a rush to save ourselves, we may lose sight of the proper path.

In “What Will It Take for Humans to Colonize the Milky Way?” a Scientific American article, Kim Stanley Robinson stresses the difficulty of near-term colonization of other star systems. The opening:

The idea that humans will eventually travel to and inhabit other parts of our galaxy was well expressed by the early Russian rocket scientist Konstantin Tsiolkovsky, who wrote, “Earth is humanity’s cradle, but you’re not meant to stay in your cradle forever.” Since then the idea has been a staple of science fiction, and thus become part of a consensus image of humanity’s future. Going to the stars is often regarded as humanity’s destiny, even a measure of its success as a species. But in the century since this vision was proposed, things we have learned about the universe and ourselves combine to suggest that moving out into the galaxy may not be humanity’s destiny after all.

The problem that tends to underlie all the other problems with the idea is the sheer size of the universe, which was not known when people first imagined we would go to the stars. Tau Ceti, one of the closest stars to us at around 12 light-years away, is 100 billion times farther from Earth than our moon. A quantitative difference that large turns into a qualitative difference; we can’t simply send people over such immense distances in a spaceship, because a spaceship is too impoverished an environment to support humans for the time it would take, which is on the order of centuries. Instead of a spaceship, we would have to create some kind of space-traveling ark, big enough to support a community of humans and other plants and animals in a fully recycling ecological system.•

Tags:

If I could have dinner with any three living Americans, Ricky Jay would definitely be one, even though I can’t say I care much for magic. Jay, of course, practices magic in the same sense that Benjamin Franklin flew kites. It’s the invisible stuff being conducted that makes all the difference.

It’s always amazed me that Jay’s enjoyed so much success despite having a brilliance driven so far from the mainstream by manias about marginalia, things barely perceptible to most. In that vein, he’s written a book about Matthias Buchinger, an eighteenth-century German magician whose unlikely success even outdoes Jay’s. 

Buchinger was a 29-inch tall phocomelic who lacked properly formed limbs yet managed to gain acclaim in a variety of fields: marksmanship, bowling, illustration, music, dance and micrography. The latter gift–the ability to write in incredibly small letters–is the basis of the book and a part of an exhibit at the Met.

From Charles McGrath at the New York Times:

The magician Ricky Jay, considered by many the greatest sleight-of-hand artist alive, is also a scholar, a historian, a collector of curiosities. Master of a prose style that qualifies him as perhaps the last of the great 19th-century authors, he has written about oddities like cannonball catchers, poker-playing pigs, performing fleas and people who tame bees. But probably his most enduring interest is a fellow polymath, an 18th-century German named Matthias Buchinger.

Buchinger (1674-1739) was a magician and musician, a dancer, champion bowler and trick-shot artist and, most famously, a calligrapher specializing in micrography — handwriting so small it’s barely legible to the naked eye. His signature effect was to render locks of hair that, when examined closely, spelled out entire Psalms or books from the Bible. What made his feats even more remarkable is that Buchinger was born without hands or feet and was only 29 inches tall. Portraits show him standing on a cushion and wearing a sort of lampshade-like robe. Yet he married four times and had 14 children. Some people have suggested that he also had up to 70 mistresses, but Mr. Jay says that’s nonsense.

Mr. Jay, 67, has been studying Buchinger and collecting his work since he was in his 20s and has now written a book about him, just out from Siglio, with the mouthful of a title Matthias Buchinger: “The Greatest German Living,” by Ricky Jay, Whose Peregrinations in Search of the ‘Little Man of Nuremberg’ are Herein Revealed.

Tags: , ,


If the population of humans on Earth dwindled to just one man and one woman, I’m willing to wager that Homo sapiens would soon be extinct. That’s because whatever catastrophic event(s) led to the extreme thinning of the ranks would soon claim the last of us. 

Ignoring that likely outcome, Zaria Gorvett at BBC Future presents a thought experiment: If you place a post-apocalyptic Adam and Eve on Earth with just one another, would they be able to repopulate the planet? Well, they’d have to fuck like bunnies, and think of the incest! It’s actually best not to think about it. Hundreds of years of inbreeding would not be pretty, in any sense. Even if people somehow survived, the lack of diversity would probably cause us to transition into a different species. The most positive way to look at it? Anything’s possible.

The opening:

The alien predators arrived by boat. Within two years, everyone was dead. Almost.

The tiny islet of Ball’s Pyramid lies 600km east of Australia in the South Pacific, rising out of the sea like a shard of glass. And there they were – halfway up its sheer cliff edge, sheltering under a spindly bush – the last of the species. Two escaped and just nine years later there were 9,000, the children and grandchildren and great-grandchildren of Adam and Eve.

No, this isn’t a bizarre take on the story of creation. The lucky couple were tree lobsters Dryococelus australis, stick insects the size of a human hand. They were thought to be extinct soon after black rats invaded their native Lord Howe Island in 1918, but were found clinging on in Ball’s Pyramid 83 years later. The species owes its miraculous recovery to a team of scientists who scaled 500ft of vertical rock to reach their hiding place in 2003. The lobsters were named “Adam” and “Eve” and sent to start a breeding programme at Melbourne Zoo.

Bouncing back after insect Armageddon is one thing. Female tree lobsters lay 10 eggs every 10 days and are capable of parthenogenesis; they don’t need a man to reproduce. Repopulating the earth with humans is quite another matter. Could we do it? And how long would it take?•

Tags:

“Greed is good,” proclaimed fictional robber baron Gordon Gekko in 1987, echoing a speech from a year earlier by the very real Ivan Boesky, who by the time Wall Street opened had traded the Four Seasons for the Graybar Hotel, his desires having pried him from the penthouse. The point is well-taken, however, when applied correctly: Unhealthy desires can be useful. You don’t get people to risk life and limb–emigrating to the “New” World or participating in the dangerous Manifest Destiny–unless there’s a potential for a better life, and, often, a bigger bank account.

I’ve posted previously about my queasiness over recent U.S. regulation which unilaterally allows its corporations to lay claim to bodies in space, but perhaps the quest to go for the gold out there has a silver lining. While it’s gross for those already fabulously wealthy to be wondering who will use asteroid mining to become the first trillionaire, Grayson Cary considers in a smart Aeon essay that perhaps avarice is a necessary evil if we are to colonize space and safeguard our species against single-planet calamity. As the writer states, past multinational treaties may inhibit unfettered speculation, but probably not. Private, public, U.S., China, etc.–it’s going to be a land rush that sorts itself out as we go, and go we will. As Cary writes, “There comes a point at which Earthbound opinions hardly matter.”

An excerpt:

Over the 2015 Thanksgiving holiday – which, in the spirit of appropriation, seems appropriate – President Barack Obama signed into law the Spurring Private Aerospace Competitiveness and Entrepreneurship (SPACE) Act. It had emerged from House and Senate negotiations with surprisingly robust protections for US asteroid miners. In May, the House had gone only so far as to say that ‘[a]ny asteroid resources obtained in outer space are the property of the entity that obtained them’. In the Senate, commercial space legislation had moved forward without an answer to the question of property. In the strange crucible of the committee process, the bill ended up broader, bolder and more patriotic than either parent.

‘A United States citizen,’ Congress resolved, ‘engaged in commercial recovery of an asteroid resource or a space resource under this chapter shall be entitled to any asteroid resource or space resource obtained, including to possess, own, transport, use and sell the asteroid resource or space resource obtained.’ It’s a turning point, maybe a decisive one, in a remarkable debate over the administration of celestial bodies. It’s an approach with fierce critics – writing for Jacobin magazine in 2015, Nick Levine called it a vision for ‘trickle-down astronomics’ – and the stakes, if you squint, are awfully high. A small step for 535 lawmakers could amount to one giant leap for humankind.

If you hew to the right frame of mind, decisions about space policy have enormous consequences for the future of human welfare. Nick Bostrom, Director of the Future of Humanity Institute at the University of Oxford, offered a stark version of that view in a paper called ‘Astronomical Waste: The Opportunity Cost of Delayed Technological Development’ (2003). By one estimate, he wrote, ‘the potential for approximately 10^38 human lives is lost every century that colonisation of our local supercluster is delayed; or, equivalently, about 10^29 potential human lives per second’. Suppose you accept that perspective, or for any other reason feel an urgent need to get humanity exploring space. How might a species hurry things up?

For a vocal chorus of pro-space, pro-market experts, the answer starts with property: to boldly go and buy and sell. ‘The only way to interest investors in building space settlements,’ writes the non-profit Space Settlement Institute on its website, ‘is to make doing so very profitable.’ In other words: show me the money.•
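For readers who want to check the arithmetic behind the Bostrom figures quoted above, a quick back-of-the-envelope conversion (my own sketch, not from the Aeon essay or Bostrom’s paper) shows the two numbers are consistent: 10^38 lives per century works out to roughly 10^29 lives per second.

```python
# Rough consistency check of Bostrom's figures as quoted above
# (my own arithmetic, not taken from the Aeon essay or Bostrom's paper).

SECONDS_PER_YEAR = 365.25 * 24 * 3600        # ~3.16e7 seconds

lives_per_century = 1e38
lives_per_second = lives_per_century / (100 * SECONDS_PER_YEAR)

print(f"{lives_per_second:.1e} lives per second")   # ~3.2e+28, i.e. on the order of 10^29
```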

Tags: ,


Confirmation bias is a dangerous thing, so the Harvard economist Roland Fryer likes to stick to data, which can, of course, lead to some inconvenient truths. How about being an African-American scholar in the time of Ferguson who’s convinced that police in the U.S. are no more likely to shoot a black person than a white one?

Fryer’s argument, which he relates to John McDermott of the Financial Times, is that the numbers say officers harass and manhandle African-Americans in a disproportionate way, but actual lethal violence is proportionate across racial groups. Those minor but incessant acts of persecution persuade black citizens that they are also being shot at disproportionate rates.

Well, I haven’t studied the numbers, but if this is true it should make us incredibly vigilant about the type of racial profiling and serial intimidation that divides us. The so-called quality-of-life approach to policing has provided too much wiggle room for some to be targeted. Even Fryer himself acknowledges that he had guns pulled on him by police six or seven times during his youth.

An excerpt:

At a quiet table in the cavernous Hawksmoor Seven Dials, a branch of the high-end restaurant chain in central London, where the decor is brown and the meat is red, Fryer tells me how he spent two days last year on the beat shadowing cops in Camden, New Jersey. (On his first day on patrol a woman overdosed in front of him and died.) What Fryer wanted to figure out was whether the killings of Michael Brown and Eric Garner — two African-Americans whose deaths led to widespread protests — were part of an observable pattern of discrimination, as activist groups such as Black Lives Matter have suggested. After his week on patrol, he collected more than 6m pieces of data from forces such as New York City’s on cases of blacks, whites and Latinos being victims of police violence.

The graph he passes between the salt and pepper displays his provisional findings. The horizontal axis is a scale of the severity of the violence, from shoving on the left all the way to shootings on the right. The curve starts high, suggesting strong differences in minor incidents, but descends to zero as the cases become more violent. In other words, once contextual factors were taken into account, blacks were no more likely to be shot by police. All of which raises the question: why the outcry in 2014 in Ferguson, Missouri, where Brown was shot?

“That’s the data,” Fryer says. “Now one hypothesis for why Ferguson happened — not the shooting but the outcry — was not because people were making statistical inference, not from whether Michael Brown was guilty or innocent but because they fucking hate the police.” He continues: “The reason they hate the police is because if you spent years having hands put on you and [being] pushed to the ground and handcuffed without proper cause, and then you hear about a [police] shooting in your town, how could you believe it was anything but discrimination?”•

Tags: ,


Almost all the entries in my Gmail spam folder are boner-juice ads written by bots in barely comprehensible English. They suck. But according to Nellie Bowles’ new Guardian piece, the performance of high-end virtual email assistants has recently risen to an impressive, and perhaps unsettling, level. These tools can show empathy just as surely as they arrange business meetings, and the humans who interact with them often treat them like people even when they know they’re not. The opening:

It started as a normal email exchange with a tech CEO. He was up for a coffee, and passed me to his assistant to find a date. But then it turned a bit strange.

Her emails were too good: all written in the same carefully casual, slightly humourless style. All formatted the same. All sent at socially convincing times. And all at believable intervals from my own messages. But they were off just a little.

Hi Nellie,

No worries! Unfortunately, Swift is unavailable tomorrow morning. Can you talk at one of the following times?

Tuesday (Nov 10) at 3pm EST

Tuesday (Nov 10) at 4:30pm EST

Let me know!

Best,

Clara

I stared at her notes for a few minutes before it hit me: she was a bot.

Leaving aside the issues around giving the admin bot a female name, as all these services seem to do, this feels like one of those moments the future promised us. So now it’s here, I thought to myself, staring at her emails. It has arrived. She is among us. And she’s excellent.

“You asked me how I conceived of ‘her,’ not ‘it’. When you talk about ‘her,’ we’re 90% there already,” said Dennis Mortensen, whose company, X.ai, has pioneered email bot personal assistants. “You already conceive of her as a human being even though you know she’s a machine. Now we have something to work with.”•

Tags:

Even by the oft-eccentric standards of mid-century cyberneticists, Warren Sturgis McCulloch was something of an outlier. Known for his diet of cigarettes, whiskey and ice cream, the MIT genius was the proud father of 17 adopted children. More than six decades ago he was extrapolating the power of then-rudimentary machines, concerned that AI might eventually rule humankind, a worry that has returned in these increasingly automated times. An article from the September 22, 1948 Brooklyn Daily Eagle recorded his clarion call about the future.


_______________________

In 1969, the year before McCulloch died, his opinions on the Singularity had shifted.


In a Techcrunch piece, Vivek Wadhwa identifies 2016 as a technological inflection point, naming six fields which he believes will see significant progress, promising the next 12 months “will be the beginning of an even bigger revolution, one that will change the way we live, let us visit new worlds, and lead us into a jobless future.”

For most of the areas he mentions, I don’t know that this year will be any more important than 2015 or 2017. Consider the example of space exploration. Perhaps in 2016 private companies or governments will accomplish something more impressive than the Falcon 9 landing, or perhaps not. Even if they do, it will be part of an incremental process rather than a radical breakthrough. Life on Mars will get nearer every year.

Wadhwa’s best bet, I think, is in the area of driverless cars, which will likely move much closer to fruition based on tests done this year. The writer is more measured with robotics, believing the industrial kind is on the cusp of major advances while household robots still have a ways to go. An excerpt:

The 2015 DARPA Robotics Challenge required robots to navigate over an eight-task course simulating a disaster zone. It was almost comical to see them moving at the speed of molasses, freezing up, and falling over. Forget folding laundry and serving humans; these robots could hardly walk. As well, although we heard some three years ago that Foxconn would replace a million workers with robots in its Chinese factories, it never did so.

The breakthroughs may, however, be at hand. To begin with, a new generation of robots is being introduced by companies such as Switzerland’s ABB, Denmark’s Universal Robots, and Boston’s Rethink Robotics—robots dextrous enough to thread a needle and sensitive enough to work alongside humans. They can assemble circuits and pack boxes. We are at the cusp of the industrial-robot revolution.

Household robots are another matter. Household tasks may seem mundane, but they are incredibly difficult for machines to perform. Cleaning a room and folding laundry necessitate software algorithms that are more complex than those to land a man on the moon. But there have been many breakthroughs of late, largely driven by A.I., enabling robots to learn certain tasks by themselves and teach each other what they have learnt. And with the open source robotic operating system, ROS, thousands of developers worldwide are getting close to perfecting the algorithms.

Don’t be surprised when robots start showing up in supermarkets and malls—and in our homes.  Remember Rosie, the robotic housekeeper from the TV series The Jetsons?  I am expecting version 1 to begin shipping in the early 2020s.•

Tags:


Northwestern economist Robert Gordon may be too bearish on the transformative powers of the Internet, but he does make a good case that the technological innovations of a century ago dwarf the impact of the information revolution. 

A well-written and sadly un-bylined Economist review of the academic’s new book, The Rise and Fall of American Growth, looks at how the wheels came off the U.S. locomotive in the 1970s, courtesy of the rise of global competition and OPEC along with increasing inequality on the homefront. Gordon is dour about the prospects of a new American century, believing technologists are offering thin gruel and that Moore’s Law is running aground. The reviewer thinks the economist is ultimately too dismissive of Silicon Valley.

An excerpt:

The technological revolutions of the late 19th century transformed the world. The life that Americans led before that is unrecognisable. Their idea of speed was defined by horses. The rhythm of their days was dictated by the rise and fall of the sun. The most basic daily tasks—getting water for a bath or washing clothes—were back-breaking chores. As Mr Gordon shows, a succession of revolutions transformed every aspect of life. The invention of electricity brought light in the evenings. The invention of the telephone killed distance. The invention of what General Electric called “electric servants” liberated women from domestic slavery. The speed of change was also remarkable. In the 30 years from 1870 to 1900 railway companies added 20 miles of track each day. By the turn of the century, Sears Roebuck, a mail-order company that was founded in 1893, was fulfilling 100,000 orders a day from a catalogue of 1,162 pages. The price of cars plummeted by 63% between 1912 and 1930, while the proportion of American households that had access to a car increased from just over 2% to 89.8%.

America quickly pulled ahead of the rest of the world in almost every new technology—a locomotive to Europe’s snail, as Andrew Carnegie put it. In 1900 Americans had four times as many telephones per person as the British, six times as many as the Germans and 20 times as many as the French. Almost one-sixth of the world’s railway traffic passed through a single American city, Chicago. Thirty years later Americans owned more than 78% of the world’s motor cars. It took the French until 1948 to have the same access to cars and electricity that America had in 1912.

The Great Depression did a little to slow America’s momentum. But the private sector continued to innovate. By some measures, the 1930s were the most productive decade in terms of the numbers of inventions and patents granted relative to the size of the economy. Franklin Roosevelt’s government invested in productive capacity with the Tennessee Valley Authority and the Hoover Dam.

The second world war demonstrated the astonishing power of America’s production machine. After 1945 America consolidated its global pre-eminence by constructing a new global order, with the Marshall Plan and the Bretton Woods institutions, and by pouring money into higher education. The 1950s and 1960s were a golden age of prosperity in which even people with no more than a high-school education could enjoy a steady job, a house in the suburbs and a safe retirement.

But Mr Gordon’s tone grows gloomy when he turns to the 1970s.•

Tags:


It’s not a done deal that technological unemployment will be widespread, that the “lights-out” factory will become the norm, but it’s possible to the extent that we should worry about such a scary situation now.

I doubt the answers will lie in somehow reining in technology. Not to overly anthropomorphize robots, but they have a “life” of their own. If humans and machines can both do the same job, the work will ultimately become the domain of AI. The solutions, if needed, will have to emerge from policy. Not the kind that artificially limits machines, but the type that provides security derived from social safety nets.

In an In These Times article, David Moberg writes that “much will depend on whether we humans leave robotization to the free market or whether we take deliberate steps to shape our future relationships with robots.” I disagree with his suggestion that perhaps we can design robots to merely augment human production. That’s implausible and at best an intermediary step, but the author writes intelligently on the topic.

An excerpt:

If we’re on the brink of a period of robotic upheaval, labor organizing will be more crucial than ever. Workers will need unions with the power to negotiate the needs of the displaced.

Another aspect of the disruption could be an exacerbation of economic inequality. MIT economist David Autor argues that the advent of computing in the late 1970s helped drive our current stratification. As demand increased for abstract labor (college-educated workers using computers) and decreased for manual, routine labor (service workers with few skilled tasks), he says, the pay for different occupations consequently became more polarized, fueling the rise of inequality.

But Lawrence Mishel and his Economic Policy Institute colleagues, along with Dean Baker, argue that this model of polarization misses important nuances of contemporary labor markets and ignores the primary driver of inequality: public policy, not robots. They point to a range of U.S. policies, including encouragement of financial sector growth and suppression of the minimum wage, as contributing to burgeoning inequality. 

No matter who is right, it’s indisputable that public policy, in addition to unions, can play a powerful role in curbing the ill effects of technological disruption.

Luckily, we don’t need to reinvent the wheel.•

Tags:


In a piece that landed on Afflictor’s “50 Great 2015 Articles Online for Free” list, the Princeton neuroscientist Michael Graziano wrote of building an artificial brain, a process which would strip from gray matter its mysticism, arguing that consciousness was merely a sort of illusion perpetrated by the computers in our heads. Graziano furthers the discussion in a new Atlantic piece, suggesting that once we separate false narratives from explanations of consciousness, we may be able to hasten the creation of intelligent machines. An excerpt:

The human brain insists it has consciousness, with all the phenomenological mystery, because it constructs information to that effect. The brain is captive to the information it contains. It knows nothing else. This is why a delusional person can say with such confidence, “I’m a kangaroo rat. I know it’s true because, well, it’s true.” The consciousness we describe is non-physical, confusing, irreducible, and unexplainable, because that packet of information in the brain is incoherent. It’s a quick sketch.

What’s it a sketch of? The brain processes information. It focuses its processing resources on this or that chunk of data. That’s the complex, mechanistic act of a massive computer. The brain also describes this act to itself. That description, shaped by millions of years of evolution, weird and quirky and stripped of details, depicts a “me” and a state of subjective consciousness.

This is why we can’t explain how the brain produces consciousness. It’s like explaining how white light gets purified of all colors. The answer is, it doesn’t. Let me be as clear as possible: Consciousness doesn’t happen. It’s a mistaken construct.•

Tags:


For every action, a reaction: Small drones, in addition to all the good they can do, can be used for illicit surveillance, smuggling and delivering explosives, among other nefarious deeds, so Michigan researchers created a concept prototype of an anti-drone tool called “robotic falconry,” which nets the interloping technology and carries it off to a safe place. What will the countermeasure be when spy drones can fit on the head of a pin? There’ll be a market, so something will emerge.

From Marcia Goodrich at Michigan Tech News:

In January 2015, a Washington, DC, hobbyist accidentally flew his DJI Phantom quadcopter drone over the White House fence and crashed it on the lawn.

Two years earlier, a prankster sent his drone toward German Chancellor Angela Merkel during a campaign rally.

Small drones have also proven to be effective tools of mischief that doesn’t make the national news, from spying to smuggling to hacking. So when Mo Rastgaar was watching World Cup soccer and heard about snipers protecting the crowd, he doubted that they’d fully understood a drone’s potential.

“I thought, ‘If the threat is a drone, you really don’t want to shoot it down—it might contain explosives and blow up. What you want to do is catch it and get it out of there.’”


So Rastgaar, an associate professor of mechanical engineering at Michigan Technological University, began work on a drone catcher, which could pursue and capture rogue drones that might threaten military installations, air traffic, sporting events—even the White House.•

Tags: ,


In her BBC article about asteroid mining, Sarah Cruddas asks a vital question: “Would it be worth it?” If we’re not placing any onerous timeframes on such prospecting, the answer, of course, is “yes.” Exploring and colonizing space will require us to build using resources gathered up there, since transporting them is prohibitively expensive. Even more vital than obtaining iron for tools is securing a steady supply of H2O. As the author notes, the first water to be extracted from an asteroid will “mark the beginning of a new era.” An excerpt:

The first thing to understand about space mining is that it is not only about mining asteroids, or even the Moon and then returning those resources back to Earth. “Instead, there is a lot of value in keeping the resources in space and using them to continue our exploration of the Solar System and beyond,” says Anderson. 

The most important resource for prospective space miners is water. The reason: travelling into space by current standards is the equivalent of taking a road trip across America, but having to bring all your fuel with you – only much worse. It takes more energy to escape the first 300 kilometres from Earth than the next 300 million kilometres. “Once in Earth’s orbit, you are halfway to anywhere in the Solar System,” says Lewicki.

But if rocket fuel was sourced from space for space, that problem can be avoided. When water is broken into its constituents – hydrogen and oxygen – you have two of the most commonly used elements in rocket fuel. What is most exciting for those looking to mine space is that water is throughout our Solar System. It is on the Moon, Mars and asteroids, and that’s just the places we know about.

Asteroids are of particular interest to Planetary Resources. “We know asteroids have water because it has been found on meteorites which have landed on the surface of the Earth,” says Lewicki. “They also don’t need much energy to land on. It’s easier than a trip to the surface of the Moon.” These near-Earth asteroids could act as off-world ‘gas stations’.

And as humans venture beyond Earth orbit, water will be essential for life support and growing food.•
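A quick aside on the chemistry in the excerpt above. The split below is my own back-of-the-envelope sketch, not something from the BBC piece: electrolysing water (2 H2O → 2 H2 + O2) yields roughly one part hydrogen to eight parts oxygen by mass, the two propellants burned in hydrogen-oxygen rocket engines.

```python
# Back-of-the-envelope split when water is electrolysed into propellant,
# per the excerpt above (my own arithmetic, not from the BBC article):
#     2 H2O -> 2 H2 + O2

M_H2O, M_H2, M_O2 = 18.015, 2.016, 31.998    # molar masses, g/mol

water_kg = 1.0
h2_kg = water_kg * (2 * M_H2) / (2 * M_H2O)   # ~0.112 kg hydrogen
o2_kg = water_kg * M_O2 / (2 * M_H2O)         # ~0.888 kg oxygen

print(f"Per kg of water: {h2_kg:.3f} kg H2, {o2_kg:.3f} kg O2")
# Stoichiometric oxidizer-to-fuel ratio is ~8:1 by mass; actual hydrogen-oxygen
# engines typically run somewhat fuel-rich, but the split shows why water-rich
# asteroids look like off-world gas stations.
```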

Tags:


Some people actually believe that those participating in the Gig Economy, that Libertarian wet dream, are mostly entrepreneurial souls gladly Ubering others just until they secure seed money for their startup. That’s preposterous.

Piecework employment isn’t good at all for Labor unless basic income is uncoupled from work, which isn’t the arrangement most citizens find themselves in. And if wages remain flat and too many people are reduced to rabbits with tasks but no benefits, we’re in a collective quandary.

Andrew Callaway has penned a Policy Alternatives article about his perplexing experiences in the so-called Sharing Economy. The writer ultimately doesn’t feel that such an arrangement is bad for everyone, but that most will not prosper within its new rules. The opening:

If you spend enough time in San Francisco, you’ll notice sharing economy workers everywhere. While you’re waiting to get some food, look for the most frantic person in the lineup and you can bet they’re working with an app. Some of them are colour-coded: workers in orange T-shirts are with Caviar, a food delivery app; those in green represent Instacart, an app for delivering groceries. The blue jackets riding Razor scooters are with Luxe—if you’re still driving yourself around this city, these app workers will park your car. 

In the Bay Area, there are thousands of such people running through the aisles, fidgeting in line and racing against the clock. They spend most of their time in cars, where it can be harder to spot them. Oftentimes they’re double-parked in the bike lane, picking up a burrito from inside an adjacent restaurant or waiting for a passenger to come down from the apartment on top. If you look closely, you’ll see a placard in the window that says Uber or a glowing pink moustache indicating they drive around Lyft’s passengers. Last summer, I was one of them.

Oh, Canada! I’m writing you from Berkeley, California to warn you about this thing called “the sharing economy.” Since no one is really sharing anything, many of us prefer the term “the exploitation economy,” but due to its prevalence many in the Bay Area simply think of it as “the economy.”•

Tags:
