Science/Tech

You are currently browsing the archive for the Science/Tech category.

It’s amusing that the truest thing Hillary Clinton has said during the election season–the “basket of deplorables” line–has caused her grief. The political rise of the hideous hotelier Donald Trump, from his initial announcement of his candidacy forward, has always been about identity politics (identity: white), with the figures of the forgotten, struggling Caucasians of Thomas Frank narratives more noise than signal.

The American middle class has genuinely lost ground over the past four decades, owing to a number of factors (globalization, computerization, tax rates, etc.), but the latest numbers show a huge rebound for middle- and lower-class Americans under President Obama and his worker-friendly policies.

Perhaps that progress will be short-lived, with automation and robotics poised to alter the employment landscape numerous times in the coming decades. But the challenges of the Digital Age have been completely absent from Trump’s rhetoric (if he’s even aware of them), and his stated policies would reverse the gains made by the average American over the last eight years. His ascent has always been about color, and not the color of money.

From Sam Fleming at the Financial Times:

Household incomes surged last year in the US, suggesting American middle class fortunes are improving in defiance of the dark rhetoric that has dominated the presidential election campaign.

A strengthening labour market, higher wages and persistently subdued inflation pushed real median household income up 5.2 per cent between 2014 and 2015 to $56,516, the Census Bureau said on Tuesday. This marked the first gain since the eve of the global financial crisis in 2007 and the first time that inflation-adjusted growth exceeded 5 per cent since the bureau’s records began in 1967.

But the increase in 2015 still brought incomes to just 1.6 per cent below the levels they were hovering at the year before the recession started and they remain 2.4 per cent below their peak in 1999. Income gains were largest at the bottom and middle of the income scale relative to the top, reducing income inequality. 

The US election debate has been dominated by the story of long-term income stagnation, with analysts attributing the rise of Donald Trump in part to the shrinking ranks of America’s middle class, rising inequality and the impact of globalisation on household incomes. Tuesday’s strong numbers, which cover the year in which the Republican candidate launched his campaign, cast that narrative in a new light.•
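The percentages in the excerpt translate back into dollars easily. A quick back-calculation (Python; a sketch using only the rounded figures quoted above, so the implied levels are approximate):

```python
# Back-of-the-envelope check of the Census figures quoted above.
median_2015 = 56_516   # real median household income, 2015
growth = 0.052         # 5.2% rise from 2014

median_2014 = median_2015 / (1 + growth)
print(f"implied 2014 median: ${median_2014:,.0f}")   # ~$53,722

# 2015 income sits 1.6% below the 2007 pre-recession level
# and 2.4% below the 1999 peak, per the excerpt.
level_2007 = median_2015 / (1 - 0.016)
peak_1999 = median_2015 / (1 - 0.024)
print(f"implied 2007 level:  ${level_2007:,.0f}")    # ~$57,435
print(f"implied 1999 peak:   ${peak_1999:,.0f}")     # ~$57,906
```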


Not that long ago, it was considered bold–foolhardy, even–to predict the arrival of a more technological future 25 years down the road. That was the gambit the Los Angeles Times made in 1988 when it published “L.A. 2013,” a feature that imagined the next-level life of a family of four and their robots.

We may not yet be at the point when “whole epochs will pass, cultures rise and fall, between a telephone call and a reply”–I mean, who talks on the phone anymore?–but it no longer requires a quarter of a century for the new to shock us. In the spirit of our age, Alissa Walker of Curbed LA works from the new report “Urban Mobility in the Digital Age” to imagine the city’s remade transportation system in just five years. Regardless of what Elon Musk promises, I’ll bet the over on autonomous cars arriving in a handful of years, but the technology will be transformative when it does materialize, and that will probably happen sooner rather than later.

The opening:

It’s 2021, and you’re making your way home from work. You jump off the Expo line (which now travels from Santa Monica to Downtown in 20 minutes flat), and your smartwatch presents you with options for the final two miles to your apartment. You could hop on Metro’s bike share, but you decide on a tiny, self-driving bus that’s waiting nearby. As you board, it calculates a custom route for you and the handful of other passengers, then drops you off at your doorstep in a matter of minutes. You walk through your building’s old parking lot—converted into a vegetable garden a few years ago—and walk inside in time to put your daughter to bed.

That’s the vision for Los Angeles painted in Urban Mobility in the Digital Age, a new report that provides a roadmap for the city’s transportation future. The report, which was shared with Curbed LA and has been posted online, addresses the city’s plan to combine self-driving vehicles (buses included) with on-demand sharing services to create a suite of smarter, more efficient transit options.

But it’s not just the way that we commute that will change, according to the report. Simply being smarter about how Angelenos move from one place to another brings additional benefits: alleviating vehicular congestion, potentially eliminating traffic deaths, and tackling climate change—where transportation is now the fastest-growing contributor to greenhouse gases. And it will also impact the way the city looks, namely by reclaiming the streets and parking lots devoted to the driving and storing of cars that sit motionless 95 percent of the time.

The report is groundbreaking because it makes LA the first U.S. city to specifically address policies around self-driving cars.•


We should always err on the side of whistleblowers like Edward Snowden because they’ve traditionally served an important function in our democracy, but that doesn’t mean the former NSA employee has changed America for the better–or much at all.

In the wake of 9/11, most in the country wanted to feel safe and were A-OK with the government taking liberties (figuratively and literally). Big Brother became the favorite sibling. The White House position and policy have shifted somewhat since Snowden went rogue, but I believe from here on in we’re locked in a cat-and-mouse game among government, corporations and citizens, with surveillance and leaks a permanent part of the landscape. The technology we have–and the even more powerful tools we’ll have in the future–almost demands such an arrangement. We’re all increasingly inside a machine now, one that moves much faster than legislation. That’s the new abnormal.

The Financial Times set up an interview with Snowden conducted by Alan Rusbridger, former editor-in-chief of the Guardian, the publication that broke the story. The subject is unsurprisingly much more hopeful about the impact of his actions than I am. An excerpt:

Alan Rusbridger:

It’s now, what, three years since the revelations?

Edward Snowden:

It’s been more than three years. June 2013.

Alan Rusbridger:

Tell me first how the world has changed since then. What’s changed as a result of what you did, from your perspective? Not from your personal life, but the story you revealed.

Edward Snowden:

The main thing is that our culture has changed, right? There are many different ways of approaching this. One is we look at the structural changes, we look at the policy changes, we look at the fact that the day the Guardian published the story, for example, the entire establishment leaped out of their chairs and basically said ‘This is untrue, it’s not right, there’s nothing to see here’. You know, ‘Nobody’s listening to your phone calls’, as the president said very early on. Do you remember? I think he sort of spoke with the voice of the establishment in all of these different countries here, saying, ‘I think we’ve drawn the right balance’.

Then you move on to later in the same year when the very first court verdicts began to come forward and they found that these programmes were ‘unlawful, likely unconstitutional’ — that’s a direct quote — and ‘Orwellian in their scope’ — again a quote. And this trend continued in many different courts. The government realising that these programmes could not be legally sustained and would have to be amended if they were to keep any of these powers at all. And to avoid a precedent that they would consider damaging, which is that the Supreme Court basically locks the power of mass surveillance away from them forever, they need a pretty substantial pivot, whereby January of 2014 the president of the US said that, well, of course you could never condone what I did. He believes that this has made us stronger as a nation and that he was going to be recommending changes to a law of Congress, which then later, again this is Congress, they don’t do anything quickly, they actually did amend the law.

Now, they would not likely have made these changes to law on their own without the involvement of the Courts. But these are all three branches of government in the US completely changing their position. In March of 2013, the Supreme Court flushed the case, right, saying that this is a state secret, we can’t talk about it and you can’t prove that you were spied on. Then suddenly when everyone can prove that they had been spied on, we see that the law changed. So that’s sort of the policy side of looking at that. And people can look at the substance there and say, ‘This is significant’. Even though it didn’t solve the problem, it’s a start and, more importantly, it empowers people, it empowers the public; it shows that, for the first time in four years, we can actually start to impose more oversight on intelligence agencies, on spies, rather than giving them a free pass to do whatever, simply because we’re scared, which is understandable but clearly not ethical.

Then there’s the other way of looking at it, which is in terms of public awareness.•


Right now the intrusion of Digital Age surveillance is still (mostly) external to our bodies, though computers have shrunk small enough to slide into our pockets. If past is prologue, the future progression would move this hardware inside ourselves, the way pacemakers for the heart were originally exterior machines until they could fit in our chests. Even if no such mechanisms were necessary and we manipulated health, longevity and appearance through biological means, the thornier ethical questions would probably remain.

A month ago, I published a post about Eve Herold’s new book, Beyond Human, when the opening was excerpted in Vice. Here’s a piece from “Transhumanism Is Inevitable,” Ronald Bailey’s review of the title in the libertarian magazine Reason:

Herold thinks these technological revolutions will be a good thing, but that doesn’t mean she’s a Pollyanna. Throughout the book, she worries about how becoming ever more dependent on our technologies will affect us. She foresees a world populated by robots at our beck and call for nearly any task. Social robots will monitor our health, clean our houses, entertain us, and satisfy our sexual desires. Isolated users of perfectly subservient robots could, Herold cautions, “lose important social skills such as unselfishness and the respect for the rights of others.” She further asks, “Will we still need each other when robots become our nannies, friends, servants, and lovers?”

There is also the question of how centralized institutions, as opposed to empowered individuals, might use the new tech. Behind a lot of the coming enhancements you’ll find the U.S. military, which funds research to protect its warriors and make them more effective at fighting. As Herold reports, the Defense Advanced Research Projects Agency (DARPA) is funding research on a drug that would keep people awake and alert for a week. DARPA is also behind work on brain implants designed to alter emotions. While that technology could help people struggling with psychological problems, it might also be used to eliminate fear or guilt in soldiers. Manipulating soldiers’ emotions so they will more heedlessly follow orders is ethically problematic, to say the least.

Similar issues haunt Herold’s discussion of the technologies, such as neuro-enhancing drugs and implants, that may help us build better brains. Throughout history, the ultimate realm of privacy has been our unspoken thoughts. The proliferation of brain sensors and implants might open up our thoughts to inspection by our physicians, friends, and family—and also government officials and corporate marketers.

Yet Herold effectively rebuts bioconservative arguments against the pursuit and adoption of human enhancement.•


A big problem with data analysis is that when it goes really deep, it’s not so easy to know why it’s working–or if it’s working at all. Algorithms can be skewed, consciously or not, to favor some and keep us in separate silos, and the findings of artificial neural networks can be mysterious even to machine-learning professionals. We already base so much on silicon crunching numbers, and we’re set to bet the foundations of our society on these operations, so that’s a huge issue. Another one: constraining neural nets to more transparent approaches may blunt their efficacy. Two pieces on the topic follow.
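To make the trade-off concrete, here’s a minimal sketch (Python with scikit-learn, on synthetic data; the model sizes and features are invented for illustration) contrasting a model that can explain itself with one that can’t:

```python
# A linear model exposes one readable weight per feature; a neural
# network spreads its "reasoning" across thousands of coupled weights.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

linear = LogisticRegression().fit(X, y)
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                    random_state=0).fit(X, y)

# The linear model explains itself: one coefficient per input feature.
print("feature weights:", linear.coef_.round(2))

# The net may fit better, but its "explanation" is just raw weights.
n_weights = sum(w.size for w in net.coefs_)
print(f"MLP accuracy {net.score(X, y):.2f}, spread over {n_weights} weights")
```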


The opening of Aaron M. Bornstein’s Nautilus essay “Is Artificial Intelligence Permanently Inscrutable?“:

Dmitry Malioutov can’t say much about what he built.

As a research scientist at IBM, Malioutov spends part of his time building machine learning systems that solve difficult problems faced by IBM’s corporate clients. One such program was meant for a large insurance corporation. It was a challenging assignment, requiring a sophisticated algorithm. When it came time to describe the results to his client, though, there was a wrinkle. “We couldn’t explain the model to them because they didn’t have the training in machine learning.”

In fact, it may not have helped even if they were machine learning experts. That’s because the model was an artificial neural network, a program that takes in a given type of data—in this case, the insurance company’s customer records—and finds patterns in them. These networks have been in practical use for over half a century, but lately they’ve seen a resurgence, powering breakthroughs in everything from speech recognition and language translation to Go-playing robots and self-driving cars.

As exciting as their performance gains have been, though, there’s a troubling fact about modern neural networks: Nobody knows quite how they work. And that means no one can predict when they might fail.•


From Rana Foroohar’s Time article about mathematician and author Cathy O’Neil:

O’Neil sees plenty of parallels between the usage of Big Data today and the predatory lending practices of the subprime crisis. In both cases, the effects are hard to track, even for insiders. Like the dark financial arts employed in the run up to the 2008 financial crisis, the Big Data algorithms that sort us into piles of “worthy” and “unworthy” are mostly opaque and unregulated, not to mention generated (and used) by large multinational firms with huge lobbying power to keep it that way. “The discriminatory and even predatory way in which algorithms are being used in everything from our school system to the criminal justice system is really a silent financial crisis,” says O’Neil.

The effects are just as pernicious. Using her deep technical understanding of modeling, she shows how the algorithms used to, say, rank teacher performance are based on exactly the sort of shallow and volatile type of data sets that informed those faulty mortgage models in the run up to 2008. Her work makes particularly disturbing points about how being on the wrong side of an algorithmic decision can snowball in incredibly destructive ways—a young black man, for example, who lives in an area targeted by crime fighting algorithms that add more police to his neighborhood because of higher violent crime rates will necessarily be more likely to be targeted for any petty violation, which adds to a digital profile that could subsequently limit his credit, his job prospects, and so on. Yet neighborhoods more likely to commit white collar crime aren’t targeted in this way.

In higher education, the use of algorithmic models that rank colleges has led to an educational arms race where schools offer more and more merit- rather than need-based aid to students who’ll make their numbers (thus rankings) look better. At the same time, for-profit universities can troll for data on economically or socially vulnerable would-be students and find their “pain points,” as a recruiting manual for one for-profit university, Vatterott, describes it, in any number of online questionnaires or surveys they may have unwittingly filled out. The schools can then use this info to funnel ads to welfare mothers, recently divorced and out-of-work people, those who’ve been incarcerated or even those who’ve suffered injury or a death in the family.

Indeed, O’Neil writes that WMDs [Weapons of Math Destruction] punish the poor especially, since “they are engineered to evaluate large numbers of people. They specialize in bulk. They are cheap. That’s part of their appeal.” Whereas the poor engage more with faceless educators and employers, “the wealthy, by contrast, often benefit from personal input. A white-shoe law firm or an exclusive prep school will lean far more on recommendations and face-to-face interviews than a fast-food chain or a cash-strapped urban school district. The privileged… are processed more by people, the masses by machines.”•
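The snowballing O’Neil describes is a feedback loop, and it shows up even in miniature. A toy simulation (Python; every number invented) of two neighborhoods with identical true offense rates, where next year’s patrols follow this year’s recorded crime, and recording scales with patrol presence:

```python
import random

random.seed(0)
true_rate = 100                  # actual offenses per year in each place
patrols = {"A": 10, "B": 10}     # start with equal policing

for year in range(1, 6):
    recorded = {}
    for hood, n_patrols in patrols.items():
        detection = min(1.0, 0.02 * n_patrols)  # more patrols, more records
        recorded[hood] = sum(random.random() < detection
                             for _ in range(true_rate))
    total = sum(recorded.values()) or 1
    # Allocate next year's 20 patrols by this year's recorded crime.
    patrols = {h: round(20 * recorded[h] / total) for h in recorded}
    print(f"year {year}: recorded={recorded} -> patrols={patrols}")
```

Because recording depends on presence, random noise alone can tip the allocation toward one neighborhood, and the tilt then feeds on itself even though the underlying behavior never differed.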


Many of the goals of the Industrial and Digital Ages have been aimed at trying to make heretofore irreducible things smaller: putting the contents of the Encyclopedia Britannica on the head of a pin, reducing musical recordings and newspapers to pure data, etc. In that same vein, scientists endeavored to shrink an entire meal to the size of a pill long before Rob Rhinehart made dinner drinkable with Soylent. An article in the August 28, 1899 Brooklyn Daily Eagle reported on such efforts.


When Silicon Valley stalwart Marc Andreessen directs mocking comments or tweets at those who fear the Second Machine Age could lead to mass unemployment, even societal upheaval, he usually depicts them as something akin to Luddites, pointing out that the Industrial Revolution allowed for the creation of more and better jobs. Of course, history doesn’t necessarily repeat itself. 

In a Bloomberg View column, Noah Smith says that while the statistics say a robot revolution hasn’t yet arrived and may or may not emerge, relying on the past to predict the future isn’t sound strategy. An excerpt:

Predicting whether machines will make the bulk of humans useless is beyond my capability. The future of technology is much too hard to predict. But I can say this: one of the main arguments often used to rule out this worrisome possibility is very shaky. If you think that history proves that humans can’t be replaced, think again.

I see this argument all the time. Because humans have never been replaced before, people say, it can’t happen in the future. Many cite the example of the Luddites, British textile workers in the early 19th century who protested against the introduction of technologies that could do their jobs more cheaply. In retrospect, the Luddites look foolish. As industrial technology improved, skilled workers were not impoverished — instead, they found ever-more-lucrative jobs that made use of new tools. As a result, “Luddite” is now a term of derision for those who doubt the power of technology to improve the world.

A more sophisticated version of this argument is offered by John Lewis of the Bank of England, in a recent blog post. Reviewing economic history, he shows what most people intuitively understand — new technology has complemented human labor rather than replacing it. Indeed, as Lewis points out, most macroeconomic models assume that the relationship between technology and humans is basically fixed.

That’s the problem, though — economic assumptions are right, until they’re not. The future isn’t always like the past. Sometimes it breaks in radical ways.•

 


If you like your human beings to come with fingers and toes, you may be disquieted by the Reddit Ask Me Anything conducted by Andrew Hessel, a futurist and a “biotechnology catalyst” at Autodesk. It’s undeniably one of the smartest and headiest AMAs I’ve ever read, with the researcher fielding questions about a variety of flaws and illnesses plaguing people that biotech may be able to address, even eliminate. Of course, depending on your perspective, humanness itself can be seen as a failing, something to be “cured.” A few exchanges follow.


Question:

Four questions, all on the same theme:

1) What is the probable timeframe for when we’ll see treatments that will slow senescence?

2) Do we have a realistic estimate, in years or decades (or bigger, fingers crossed!), on the life extension potential of such treatments?

3) Is it realistic that such treatments would also include senescence reversal (de-ageing)?

4) Is there any indication at present as to what kind of form these treatments will take, particularly with regards to their invasiveness?

Andrew Hessel:

1) We are already seeing some interesting results here — probably the most compelling I’ve seen is in programming individually senescent cells to die. More work needs to be done.

2) In humans, no. We are already long-lived. Experiments that lead to longer life can’t be rushed — the results come at the end!

3) TBD — but I can’t see why not.

4) Again, TBD, but I think it will involve tech like viruses and nanoparticles that can target cells / tissue with precision.

Overall, trying to extend our bodies may be throwing good effort at a bad idea. In some ways, the important thing is to be able to extract and transfer experience and memory (data). We do this when we upgrade our phones, computers, etc.


Question:

Can Cas9/CRISPR edit any gene that controls physical appearance in an adult human? Say, for example, the gene that controls the growth of a tail? Will reactivating it actually cause a tail to grow in an already mature human?

Andrew Hessel:

It’s a powerful editing technology that could potentially allow changing appearance. The problem is editing a fully developed organism is new territory. Also, there’s the challenges of reprogramming millions or billions of cells! But it’s only a 4 year old technology, lots of room to explore and learn.


Question:

I’m an artist who’s curious about using democratized genetic engineering techniques (i.e., CRISPR) to make new and aesthetically interesting plant life, like roses the size of sunflowers or lilies and irises in shapes and colors nobody has ever seen. Is this something that is doable by a non-scientist with the tools and understanding available today? I know there are people inserting phosphorescence into plant genes – I’d like to go one further and actually start designing flowers, or at least mucking around with the code to see what kinds of (hopefully) pretty things emerge. I’d love your thoughts on this… Thanks!

Andrew Hessel:

I think it’s totally reasonable to start thinking about this. CRISPR allows for edits of genomes, and using this to explore size/shape/color etc. of plants is fascinating. As genome engineering (including whole genome synthesis) tech becomes cheaper and faster, doing more extensive design work will be within reach. The costs need to drop dramatically though — unless you’re a very rich artist. :-) As for training, biodesign is already so complicated that you need software tools to help. The software tools are going to improve a lot in the future, allowing designers to focus more on what they want to make, rather than the low-level details of how to make it. But we still have a ways to go on this front. We still don’t have great programming tools for single cells, let alone more complex organisms. But they WILL come.


Question:

So my question is, do you think there will be a “biological singularity,” similar to Ray Kurzweil’s “technological singularity?”

Will there be a time in the near future where the exponential changes in genetic engineering (synthetic biology, dna synthesis, genome sequencing, etc.) will have such a profound impact on human civilization that it is difficult to predict what the future will be like?

Andrew Hessel:

I think it’s already hard to figure out where the future is going. Seriously. Who would have predicted politics to play out this way this year? But yes, I think Kurzweil calls it right that the combination of accelerating computation, biotech, etc. creates a technological future that is hard to imagine. This said, I don’t think civilization will change that quickly. Computers haven’t changed the fundamentals of life, just the details of how we go about our days. Biotech changes can be no less profound, but they take longer to code, test, and implement. Overall, though, I think we come out of this century with a lot more capabilities than we brought into it!•


Desperation sounds funny when expressed in words. A scream would probably be more coherent.

Nobody really knows how to remake the newspapers and magazines of a bygone era to be profitable in this one, and the great utility they provided–the Fourth Branch of Government, the traditional media was called–is not so slowly slipping away. What’s replaced much of it online has been relatively thin gruel, with the important and unglamorous work of covering local politics unattractive to a viral, big-picture machine.

All I know is when Condé Nast is using IBM’s Watson to help advertisers “activate” the “right influencers” for their “brands,” we’re all in trouble.

From The Drum:

With top titles like Vogue, Vanity Fair, Glamour and GQ, Conde Nast’s partnership heralds a key step merging targeted influencer marketing and artificial intelligence in the fashion and lifestyle industry. The platform will be used to help advertiser clients improve how they connect with audiences over social media and gain measurable insights into how their campaigns resonate.

“Partnering with Influential to leverage Watson’s cognitive capabilities to identify the right influencers and activate them on the right campaigns gives our clients an advantage and increases our performance, which is paramount in today’s distributed content world,” said Matt Starker, general manager, digital strategy and initiatives at Condé Nast. “We engage our audiences in innovative ways, across all platforms, and this partnership is another step in that innovation.”

By analyzing unstructured data from an influencer’s social media feed and identifying key characteristics that resonate with a target demographic, the Influential platform applies IBM’s personality insights on behalf of, for example, a beauty brand that focuses on self-enhancement, imagination and trust. This analysis helps advertisers identify the right influencers by homing in on previously hard-to-measure metrics–like how they are perceived by their followers, and how well their specific personality fits the personality of the brand.•
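Neither Condé Nast nor Influential spells out the matching math, but the brand-fit idea described above is commonly implemented as similarity between trait vectors. A hypothetical sketch (Python; the trait names and scores are invented, and this is not the actual Watson Personality Insights API):

```python
import math

def cosine(u, v):
    # Cosine similarity: 1.0 means identical trait profiles.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

TRAITS = ["self-enhancement", "imagination", "trust"]  # per the excerpt
brand = [0.9, 0.8, 0.7]          # the beauty brand's target profile

influencers = {                  # trait scores inferred from social feeds
    "influencer_a": [0.85, 0.75, 0.80],
    "influencer_b": [0.20, 0.90, 0.30],
}

for name, profile in sorted(influencers.items(),
                            key=lambda kv: -cosine(brand, kv[1])):
    print(f"{name}: brand fit = {cosine(brand, profile):.3f}")
```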


From the July 31, 1928 Brooklyn Daily Eagle:


In a Literary Review piece about Kyle Arnold’s new title, The Divine Madness Of Philip K. Dick, Mike Jay, who knows a thing or two about the delusions that bedevil us, writes about the insane inner world of the speed-typing, speed-taking visionary who lived during the latter stages of his life, quite appropriately, near the quasi-totalitarian theme park Disneyland, a land where mice talk and corporate propaganda is endlessly broadcast. Dick was a hypochondriac about the contents of his head, and it’s no surprise his life was littered with amphetamines, anorexia and anxiety, which drove his brilliance and abbreviated it.

The opening:

Across dozens of novels and well over a hundred short stories, Philip K Dick worried away at one theme above all others: the world is not as it seems. He worked through every imaginable scenario: consensus reality was variously a set of implanted memories, a drug-induced hallucination, a time slip, a covert military simulation, an illusion projected by mega-corporations or extraterrestrials, or a test set by God. His typical protagonist was conspired against, drugged, hypnotised, paranoid, schizophrenic – or, possibly, the only person in possession of the truth.

The preoccupation all too clearly reflected the author’s life. Dick was a chronic doubter, tormented, like René Descartes, by the suspicion that the world was the creation of an evil demon ‘who has directed his entire effort to misleading me’. But cogito ergo sum was not enough to rescue someone who in 1972, during one of his frequent bouts of persecution mania, called the police to confess to being an android. Dick took scepticism to a level that he made his own. It became his brand, and since his death it has been franchised across popular culture. He isn’t credited on Hollywood blockbusters such as The Matrix (in which reality is a simulation created by machines from the future) or The Truman Show (about a reality TV programme in which all but the protagonist are complicit), but their mind-bending plot twists are his in all but name.

As Kyle Arnold acknowledges early in his lucid and accessible study, it would be impossible to investigate the roots of Dick’s cosmic doubt more doggedly than he did himself. He was ‘his own best psychobiographer’…•


Some sort of survival mechanism allows us to forget the full horror of a tragedy, and that’s a good thing. That fading of facts makes it possible for us to go on. But it’s dangerous to be completely amnesiac about disaster.

Case in point: In 2014, Barry Diller announced plans to build a lavish park off Manhattan at the pier where Titanic survivors came ashore. Dial back just a couple of years before that to another waterlogged disaster, when Hurricane Sandy struck the city, and imagine such an island scheme even being suggested then. The wonder at that point was whether Manhattan was long for this world. Diller’s designs don’t sound much different than the captain of the supposedly unsinkable ship ordering a swimming pool built on the deck after the ship hit an iceberg.

In New York magazine, Andrew Rice provides an excellent profile of scientist Klaus Jacob, who believes NYC, as we know it, has no future. The academic could be wrong, but if he isn’t, his words about the effects of Irene and Sandy are chilling: “God forbid what’s next.”

The opening:

Klaus Jacob, a German professor affiliated with Columbia University’s Lamont-Doherty Earth Observatory, is a geophysicist by profession and a doomsayer by disposition. I’ve gotten to know him over the past few years, as I’ve sought to understand the greatest threat to life in New York as we know it. Jacob has a white beard and a ponderous accent: Imagine if Werner Herzog happened to be a renowned expert on disaster risk. Jacob believes most people live in an irrational state of “risk denial,” and he takes delight in dispelling their blissful ignorance. “If you want to survive an earthquake, don’t buy a brownstone,” he once cautioned me, citing the catastrophic potential of a long-dormant fault line that runs under the city. When Mayor Bloomberg announced nine years ago an initiative to plant a million trees, Jacob thought, That’s nice — but what about tornadoes?

For the past 15 years or so, Jacob has been primarily preoccupied with a more existential danger: the rising sea. The latest scientific findings suggest that a child born today in this island metropolis may live to see the waters around it swell by six feet, as the previously hypothetical consequences of global warming take on an escalating — and unstoppable — force. “I have made it my mission,” Jacob says, “to think long term.” The life span of a city is measured in centuries, and New York, which is approaching its fifth, probably doesn’t have another five to go, at least in any presently recognizable form. Instead, Jacob has said, the city will become a “gradual Atlantis.”

The deluge will begin slowly, and irregularly, and so it will confound human perceptions of change. Areas that never had flash floods will start to experience them, in part because global warming will also increase precipitation. High tides will spill over old bulkheads when there is a full moon. People will start carrying galoshes to work. All the commercial skyscrapers, housing, cultural institutions that currently sit near the waterline will be forced to contend with routine inundation. And cataclysmic floods will become more common, because, to put it simply, if the baseline water level is higher, every storm surge will be that much stronger. Now, a surge of six feet has a one percent chance of happening each year — it’s what climatologists call a “100 year” storm. By 2050, if sea-level rise happens as rapidly as many scientists think it will, today’s hundred-year floods will become five times more likely, making mass destruction a once-a-generation occurrence. Like a stumbling boxer, the city will try to keep its guard up, but the sea will only gain strength.•
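The arithmetic behind those storm odds compounds quickly. A quick check (Python) using the one-percent and five-times figures from the excerpt:

```python
# Odds of seeing at least one "100-year" flood over a 30-year span.
p_today, p_2050 = 0.01, 0.05   # annual chances, per the excerpt

for label, p in [("today", p_today), ("by 2050", p_2050)]:
    chance = 1 - (1 - p) ** 30   # at least one flood in 30 years
    print(f"{label}: {chance:.0%} over a 30-year mortgage")
# today: ~26%; by 2050: ~79%
```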


Before robots take our jobs, a more mechanical form of human will handle many of them. In fact, it’s already happening.

The new connectedness and tools have allowed certain types of employment to be shrunk, if not disappeared. It’s true whether your collar is blue or white, whether you have a job or a career, whether you’re a taxi driver or the passenger being transported to a fancy office.

“Meatware”–a term which perfectly sums up a faceless type of human labor pool–reduces formerly high-paying positions to tasks any rabbit can handle. It’s a race to the bottom, where there’s plenty of room, with the winners also being losers.

In Mark Harris’ insightful Backchannel article, the writer hired some Mechanical Turk-ers to explore the piecework phenomenon. The opening:

Harry K. sits at his desk in Vancouver, Canada, scanning sepia-tinted swirls, loops and blobs on his computer screen. Every second or so, he jabs at his mouse and adds a fluorescent dot to the image. After a minute, a new image pops up in front of him.

Harry is tagging images of cells removed from breast cancers. It’s a painstaking job but not a difficult one, he says: “It’s like playing Etch A Sketch or a video game where you color in certain dots.”

Harry found the gig on Crowdflower, a crowdworking platform. Usually that cell-tagging task would be the job of pathologists, who typically start their careers with annual salaries of around $200,000 — an hourly wage of about $80. Harry, on the other hand, earns just four cents for annotating a batch of five images, which takes him between two and eight minutes. His hourly wage is about 60 cents.

Granted, Harry can’t perform most of the tasks in a pathologist’s repertoire. But in 2016 — 11 years after the launch of the ur-platform, Amazon Mechanical Turk — crowdworking (sometimes also called crowdsourcing) is eating into increasingly high-skilled jobs. The engineers who are developing this model of labor have a bold ambition to atomize entire careers into micro-tasks that almost anyone, anywhere in the world, can carry out online. They’re banking on the idea that any technology that can make a complex process 100 times cheaper, as in Harry’s case, will spread like wildfire.•
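Harry’s pay is easy to check against the figures in the excerpt (Python; the two-to-eight-minute spread is taken as given):

```python
pay_per_batch = 0.04       # dollars per five-image batch
fast, slow = 2, 8          # minutes per batch

best = pay_per_batch * (60 / fast)    # $1.20/hour at top speed
worst = pay_per_batch * (60 / slow)   # $0.30/hour at the slow end
print(f"hourly wage: ${worst:.2f} to ${best:.2f}")

# The article's ~$0.60/hour sits mid-range; a pathologist's $80/hour
# is roughly 130 times as much.
print(f"pathologist multiple: {80 / 0.60:.0f}x")
```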


We were a web before there was a Web. Things didn’t begin going viral in the Digital Age, and human systems existed long before the Industrial Revolution or even agriculture. None of this is new. What’s different about the era of machines is the brute efficiency of data due to heightened computing power being applied to increasingly ubiquitous connectedness. 

Some more dazzling thoughts on “Dataism” by Yuval Noah Harari can be read in Wired UK, which presents a passage from the historian’s forthcoming book, Homo Deus: A Brief History of Tomorrow. Just two examples: 1) “The entire human species is a single data-processing system,” and 2) “We often imagine that democracy and the free market won because they were ‘good.’ In truth, they won because they improved the global data-processing system.”

Harari writes that humans are viewed increasingly as an anachronism by Dataists, who prefer intelligence to the continuation of the species, an “outdated technology.” Like the Finnish philosopher Erkki Kurenniemi, they doubt the long-term preservation of “slime-based machines.” I don’t know how widespread this feeling really is, but I have read theorists in computing who feel it their duty to always opt for greater intelligence, even if it should come at the cost of humanity. I think the greater threat to our survival isn’t conscious decisions made at our expense but rather the natural progression of systems that don’t necessarily require us.

An excerpt:

Like capitalism, Dataism too began as a neutral scientific theory, but is now mutating into a religion that claims to determine right and wrong. The supreme value of this new religion is “information flow”. If life is the movement of information, and if we think that life is good, it follows that we should extend, deepen and spread the flow of information in the universe. According to Dataism, human experiences are not sacred and Homo sapiens isn’t the apex of creation or a precursor of some future Homo deus. Humans are merely tools for creating the Internet-of-All-Things, which may eventually spread out from planet Earth to cover the whole galaxy and even the whole universe. This cosmic data-processing system would be like God. It will be everywhere and will control everything, and humans are destined to merge into it.

This vision is reminiscent of some traditional religious visions. Thus Hindus believe that humans can and should merge into the universal soul of the cosmos – the atman. Christians believe that after death, saints are filled by the infinite grace of God, whereas sinners cut themselves off from His presence. Indeed, in Silicon Valley, the Dataist prophets consciously use traditional messianic language. For example, Ray Kurzweil’s book of prophecies is called The Singularity is Near, echoing John the Baptist’s cry: “the kingdom of heaven is near” (Matthew 3:2).

Dataists explain to those who still worship flesh-and-blood mortals that they are overly attached to outdated technology. Homo sapiens is an obsolete algorithm. After all, what’s the advantage of humans over chickens? Only that in humans information flows in much more complex patterns than in chickens. Humans absorb more data, and process it using better algorithms. (In day-to-day language, that means that humans allegedly have deeper emotions and superior intellectual abilities. But remember that, according to current biological dogma, emotions and intelligence are just algorithms.)

Well then, if we could create a data-processing system that absorbs even more data than a human being, and that processes it even more efficiently, wouldn’t that system be superior to a human in exactly the same way that a human is superior to a chicken?•


A century ago in France, the term “luminary” might have applied to Georges Claude as aptly as to anyone. The inventor of neon lights, which debuted at the Paris Motor Show of 1910, the scientist was often thought of as a “French Edison,” a visionary who shined his brilliance on the world. Problem was, there was a dark side: a Royalist who disliked democracy, Claude eagerly collaborated with the Nazis during the Occupation and was arrested once Hitler was defeated. He spent six years in prison, though he was ultimately cleared of the most serious charge of having invented the V-1 flying bomb for the Axis. Two articles below from the Brooklyn Daily Eagle chronicle his rise and fall.


From February 25, 1931:


From September 20, 1944:



What perplexed me about Gawker during the last few years of its existence, and throughout its holy-shit Hulk Hogan trial, was that the principals on the inside of the company seemed tone-deaf at best and oblivious at worst. That allowed an emotional homunculus like Peter Thiel to use a short stack from his billions to drive the media company into bankruptcy.

In Matthew Garrahan’s Financial Times interview with Nick Denton, the former owner discusses why Thiel and others in Silicon Valley were so angered by the darts Gawker threw at them, stressing that insulation from outside criticism can be vital when building a corporation. Perhaps the same is true of those running an independent media empire?

An excerpt:

The appeal is likely to take at least a year to get to court, which means Denton and Thiel will not be burying the hatchet soon. And yet they have much in common. They are of similar age: Denton turned 50 last month, while Thiel will be 49 in October. They are both gay, tech-obsessed European émigrés (Thiel is from Germany; Denton from the UK) and they are both libertarians.

There the similarities end, Denton suggests. “Thiel’s idea of freedom is that you can create a society that is insulated from mainstream society … and imagine your own world in which none of the old rules apply.” He is alluding to Thiel’s interest in seasteading — the largely theoretical creation of autonomous societies beyond the reach of meddling national governments. “My idea of free society always had more of an anarcho-syndicalist bent,” he says. “If I was in Barcelona during the Spanish civil war [an anarcho-syndicalist] is probably what I would have been.”

Still, he says he understands the desire to operate beyond the restrictions of normal society, saying that such thinking is common in start-up culture. He points to Uber, the ride-sharing app, to underline the point. When its founders set out to launch a product that would up-end the personal transportation industry, they had to protect their vision from external doubters or naysayers. “You need to be insulated from the critics if you’re going to persuade people to join you, believe in you, invest in you.” Great companies are often based on a single idea, he continues. “And if someone questions that idea, it can undermine the support within the organisation for that idea.”

This, he says, explains Thiel’s animosity towards Gawker. Valleywag, a Denton-owned tech site that was eventually folded into Gawker.com, used to cover Silicon Valley with a critical eye and was a constant thorn in the side of its community of companies and investors — including Thiel.•


The robots may be coming for our jobs, but they’re not coming for our species, not yet.

Anyone worried about AI driving humans extinct in the short term is buying into sci-fi hype far too much, and anyone quipping that we’ll eventually just unplug the machines if they get too smart is underselling more distant dangers. But in the near term, Weak AI (e.g., automation) is far more of a peril to society than Strong AI (e.g., conscious machines). It could move us into a post-scarcity tomorrow, or it could do great damage if it’s managed incorrectly. What happens if too many jobs are lost all at once? Will there be enough of a transition period to allow us to pivot?

In a Technology Review piece, Will Knight writes of a Stanford study on AI that predicts certain key disruptive technologies will not have cut a particularly wide swath by 2030. Of course, even this research, which takes a relatively conservative view of the future, suggests we start discussing social safety nets for those on the short end of what may become an even more imbalanced digital divide.

The opening:

The odds that artificial intelligence will enslave or eliminate humankind within the next decade or so are thankfully slim. So concludes a major report from Stanford University on the social and economic implications of artificial intelligence.

At the same time, however, the report concludes that AI looks certain to upend huge aspects of everyday life, from employment and education to transportation and entertainment. More than 20 leaders in the fields of AI, computer science, and robotics coauthored the report. The analysis is significant because the public alarm over the impact of AI threatens to shape public policy and corporate decisions.

It predicts that automated trucks, flying vehicles, and personal robots will be commonplace by 2030, but cautions that remaining technical obstacles will limit such technologies to certain niches. It also warns that the social and ethical implications of advances in AI, such as the potential for unemployment in certain areas and likely erosions of privacy driven by new forms of surveillance and data mining, will need to be open to discussion and debate.•


“What hath God wrought?” was the first piece of Morse code ever sent, a melodramatic message which suggested something akin to Mary Shelley’s monster awakening and, perhaps, technology putting old myths to sleep. In his movie, Lo and Behold: Reveries Of A Connected World, Werner Herzog believes something even more profoundly epiphanic is happening in the Digital Age, and it’s difficult to disagree.

The director tells Ben Makuch of Vice that for him, technology is an entry point to learning about people (“I’m interested, of course, in the human beings”). Despite Herzog’s focus, the bigger story is events progressing in the opposite direction, from carbon to silicon.

In a later segment about space colonization, Herzog acknowledges having dreams of filming on our neighboring planet, saying, “I want to be the poet of Mars.” But, in the best sense, he’s already earned that title.


Not an original idea: Driverless cars are perfected in the near future and join the traffic, and some disruptive souls, perhaps us, decide to purchase an autonomous taxi and set it to work. We charge less than any competitor, use our slim profits for maintenance and to eventually buy a second taxi. Those two turn into an ever-growing fleet. We subtract our original investment (and ourselves) from the equation, and let this benevolent monster grow, ownerless, allowing it to automatically schedule its own repairs and purchases. Why would anyone need Uber or Lyft in such a scenario? Those outfits would be value-less.
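To see how quickly such an ownerless fleet might compound, here’s a toy model (Python; the vehicle cost and per-taxi surplus are invented for illustration, not a forecast):

```python
vehicle_cost = 30_000        # dollars per autonomous taxi (assumed)
surplus_per_year = 10_000    # per taxi, after maintenance and undercutting

fleet, bank = 1, 0.0
for year in range(1, 11):
    bank += fleet * surplus_per_year          # each taxi banks its surplus
    new_cars = int(bank // vehicle_cost)      # buy taxis when affordable
    bank -= new_cars * vehicle_cost
    fleet += new_cars
    print(f"year {year}: fleet={fleet}")
# With these numbers the fleet reaches a dozen taxis within a decade,
# compounding at roughly a third per year, with no owner required.
```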

In a very good Vanity Fair “Hive” piece, Nick Bilton doesn’t extrapolate Uber’s existential risk quite this far, but he writes wisely of the technology that may make rideshare companies a shooting star, enjoying only a brief lifespan, like compact discs, though minus the outrageous profits that format produced.

The opening:

Seven years ago, just before Uber opened for business, the company was valued at exactly zero dollars. Today, it is worth around $68 billion. But it is not inconceivable that Uber, as mighty as it currently appears, could one day return to its modest origins, worth nothing. Uber, in fact, knows this better than almost anyone. As Travis Kalanick, Uber’s chief executive, candidly articulated in an interview with Business Insider, ride-sharing companies are particularly vulnerable to an impending technology that is going to change our society in unimaginable ways: the driverless car. “The world is going to go self-driving and autonomous,” he unequivocally told Biz Carson. He continued: “So if that’s happening, what would happen if we weren’t a part of that future? If we weren’t part of the autonomy thing? Then the future passes us by, basically, in a very expeditious and efficient way.”

Kalanick wasn’t just being dramatic. He was being brutally honest. To understand how Uber and its competitors, such as Lyft andJuno, could be rendered useless by automation—leveled in the same way that they themselves leveled the taxi industry—you need to fast-forward a few years to a hypothetical version of the future that might seem surreal at the moment. But, I can assure you, it may well resemble how we will live very, very soon.•


Surveillance is a murky thing, almost always attended by self-censorship, quietly encouraging citizens to abridge their communication because maybe, perhaps, someone is watching or listening. It’s a chilling of civil rights that happens in a creeping manner. Nothing can be trusted, not even the mundane, not even your own judgment. That’s the goal, really, of such a system–that everyone should feel endlessly observed.

In a Texas Monthly piece, Sasha Von Oldershausen, a border reporter in West Texas, finds similarities between her stretch of America, which feverishly focuses on security from intruders, and her time spent living under theocracy in Iran. An excerpt:

Surveillance is key to the CBP’s strategy at the border, but you don’t have to look to the skies for constant reminders that they’re there. Internal checkpoints located up to 100 miles from the border give Border Patrol agents the legal authority to search any person’s vehicle without a warrant. It’s enough to instill a feeling of guilt even in the most exemplary of citizens. For those commuting daily on roads fitted with these checkpoints, the search becomes rote: the need to prove one’s right to abide is an implicit part of life.

Despite the visible cues, it’s still hard to figure just how all-seeing the CBP’s eyes are. For one, understanding the “realities” of border security varies based on who you talk to.

Esteban Ornelas—a Mexican citizen who was charged with illegal entry into the United States in 2012 and deported shortly thereafter—swears that he was caught because a friend he was traveling through the backcountry with sent a text message to his family. “They traced the signal,” he told me in his hometown of Boquillas.

When I consulted CBP spokesperson Brooks and senior Border Patrol agent Stephen Crump about what Ornelas had told me, they looked at each other and laughed. “That’s pretty awesome,” Crump said. “Note to self: develop that technology.”

I immediately felt foolish to have asked. But when I asked Pauling that same question, his reply was much more austere: “I can’t answer that,” he said, and left it at that.•

 


Some argue, as John Thornhill does in a new Financial Times column, that technology may not be the main impediment to the proliferation of driverless cars. I doubt that’s true. If you could magically make available today relatively safe and highly functioning autonomous vehicles, ones that operated on a level superior to humans, then hearts, minds and legislation would soon favor the transition. I do think driving as recreation and sport would continue, but much of commerce and transport would shift to our robot friends.

Earlier in the development of driverless, I wondered if Americans would hand over the wheel any sooner than they’d turn in their guns, but I’ve since been convinced we (largely) will. We may have a macro fear of robots, but we hand over control to them with shocking alacrity. A shift to driverless wouldn’t be much different.

An excerpt from Thornhill in which he lists the main challenges, technological and otherwise, facing the sector:

First, there is the instinctive human resistance to handing over control to a robot, especially given fears of cyber-hacking. Second, for many drivers cars are an extension of their identity, a mechanical symbol of independence, control and freedom. They will not abandon them lightly.

Third, robots will always be held to far higher safety standards than humans. They will inevitably cause accidents. They will also have to be programmed to make a calculation that could kill their passengers or bystanders to minimise overall loss of life. This will create a fascinating philosophical sub-school of algorithmic morality. “Many of us are afraid that one reckless act will cause an accident that causes a backlash and shuts down the industry for a decade,” says the Silicon Valley engineer. “That would be tragic if you could have saved tens of thousands of lives a year.”

Fourth, the deployment of autonomous vehicles could destroy millions of jobs. Their rapid introduction is certain to provoke resistance. There are 3.5m professional lorry drivers in the US.

Fifth, the insurance industry and legal community have to wrap their heads around some tricky liability issues. In what circumstances is the owner, car manufacturer or software developer responsible for damage?•


The introduction to Nicholas Carr’s soon-to-be-published essay collection, Utopia Is Creepy, has been excerpted at Aeon, and it’s a beauty. The writer argues (powerfully) that we’ve defined “progress as essentially technological,” even though the Digital Age quickly became corrupted by commercial interests, and the initial thrill of the Internet faded as it became “civilized,” in the most derogatory, Twain-ish use of that word. To Carr, what’s gained (access to an avalanche of information) is overwhelmed by what’s lost (withdrawal from reality). The critic applies John Kenneth Galbraith’s term “innocent fraud” to the Silicon Valley marketing of techno-utopianism.

You could extrapolate this thinking to much of our contemporary culture: binge-watching endless content, Pokémon Go, Comic-Con, fake Reality TV shows, reality-altering cable news, etc. Carr suggests we use the tools of Silicon Valley while refusing the ethos. Perhaps that’s possible, but I doubt you can separate such things.

An excerpt:

The greatest of the United States’ homegrown religions – greater than Jehovah’s Witnesses, greater than the Church of Jesus Christ of Latter-Day Saints, greater even than Scientology – is the religion of technology. John Adolphus Etzler, a Pittsburgher, sounded the trumpet in his testament The Paradise Within the Reach of All Men (1833). By fulfilling its ‘mechanical purposes’, he wrote, the US would turn itself into a new Eden, a ‘state of superabundance’ where ‘there will be a continual feast, parties of pleasures, novelties, delights and instructive occupations’, not to mention ‘vegetables of infinite variety and appearance’.

Similar predictions proliferated throughout the 19th and 20th centuries, and in their visions of ‘technological majesty’, as the critic and historian Perry Miller wrote, we find the true American sublime. We might blow kisses to agrarians such as Jefferson and tree-huggers such as Thoreau, but we put our faith in Edison and Ford, Gates and Zuckerberg. It is the technologists who shall lead us.

Cyberspace, with its disembodied voices and ethereal avatars, seemed mystical from the start, its unearthly vastness a receptacle for the spiritual yearnings and tropes of the US. ‘What better way,’ wrote the philosopher Michael Heim in ‘The Erotic Ontology of Cyberspace’ (1991), ‘to emulate God’s knowledge than to generate a virtual world constituted by bits of information?’ In 1999, the year Google moved from a Menlo Park garage to a Palo Alto office, the Yale computer scientist David Gelernter wrote a manifesto predicting ‘the second coming of the computer’, replete with gauzy images of ‘cyberbodies drift[ing] in the computational cosmos’ and ‘beautifully laid-out collections of information, like immaculate giant gardens’.

The millenarian rhetoric swelled with the arrival of Web 2.0. ‘Behold,’ proclaimed Wired in an August 2005 cover story: we are entering a ‘new world’, powered not by God’s grace but by the web’s ‘electricity of participation’. It would be a paradise of our own making, ‘manufactured by users’. History’s databases would be erased, humankind rebooted. ‘You and I are alive at this moment.’

The revelation continues to this day, the technological paradise forever glittering on the horizon. Even money men have taken sidelines in starry-eyed futurism. In 2014, the venture capitalist Marc Andreessen sent out a rhapsodic series of tweets – he called it a ‘tweetstorm’ – announcing that computers and robots were about to liberate us all from ‘physical need constraints’. Echoing Etzler (and Karl Marx), he declared that ‘for the first time in history’ humankind would be able to express its full and true nature: ‘we will be whoever we want to be.’ And: ‘The main fields of human endeavour will be culture, arts, sciences, creativity, philosophy, experimentation, exploration, adventure.’ The only thing he left out was the vegetables.•


There’s probably no reason to think prognosticating crime via computer will be more biased than traditional racial profiling and other less algorithmic methods of anticipating lawlessness, but it’s not certain it will be an improvement, either. In any system with embedded prejudice–pretty much all of them–won’t those suspicions of some be translated into code? It doesn’t need to be that way, but there will have to be an awful lot of skepticism and oversight to keep discrimination from taking a prominent place in the digital realm.

The opening of “The Power of Learning” at the Economist:

In Minority Report, a policeman, played by Tom Cruise, gleans tip-offs from three psychics and nabs future criminals before they break the law. In the real world, prediction is more difficult. But it may no longer be science fiction, thanks to the growing prognosticatory power of computers. That prospect scares some, but it could be a force for good—if it is done right.

Machine learning, a branch of artificial intelligence, can generate remarkably accurate predictions. It works by crunching vast quantities of data in search of patterns. Take, for example, restaurant hygiene. The system learns which combinations of sometimes obscure factors are most suggestive of a problem. Once trained, it can assess the risk that a restaurant is dirty. The Boston mayor’s office is testing just such an approach, using data from Yelp reviews. This has led to a 25% rise in the number of spot inspections that uncover violations.

Governments are taking notice. A London borough is developing an algorithm to predict who might become homeless. In India Microsoft is helping schools predict which students are at risk of dropping out. Machine-learning predictions can mean government services arrive earlier and are better targeted (see article). Researchers behind an algorithm designed to help judges make bail decisions claim it can predict recidivism so effectively that the same number of people could be bailed as are at present by judges, but with 20% less crime. To get a similar reduction in crime across America, they say, would require an extra 20,000 police officers at a cost of $2.6 billion.
 
But computer-generated predictions are sometimes controversial.•
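The Economist doesn’t publish the Boston model, but the restaurant-hygiene approach it describes is standard supervised learning: train on past inspection outcomes, then rank establishments by predicted risk. A minimal sketch (Python with scikit-learn; the features and data are invented stand-ins for the Yelp-review signals mentioned):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.poisson(3, n),        # hygiene complaints found in reviews
    rng.uniform(1, 5, n),     # average star rating
    rng.integers(0, 10, n),   # past violations on record
])
# Synthetic labels: violations loosely track complaints plus history.
y = (X[:, 0] + X[:, 2] + rng.normal(0, 2, n)) > 8

model = GradientBoostingClassifier().fit(X, y)
risk = model.predict_proba(X)[:, 1]
print("inspect first:", np.argsort(-risk)[:10])   # ten highest-risk spots
```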
   


More than six decades ago, long before Siri got her voice, Georgetown and IBM co-presented the first public demonstration of machine translation. Russian was neatly converted into English by an “electronic brain,” the IBM 701, and one of the principals involved, the university’s Professor Leon Dostert, excitedly reacted to the success, proclaiming the demo a “Kitty Hawk of electronic translation.” Certainly the impact of this experiment was nothing close to the significance of the Wright brothers’ achievement, but it was a harbinger of things to (eventually) come. An article in the January 8, 1954 Brooklyn Daily Eagle covered the landmark event.



Beginning in the 1960s, freelance speleologist Michel Siffre embedded himself in glaciers and underground caves in an attempt to understand the sensory deprivation astronauts might experience on long missions. It was dangerous, and not just psychologically. Today’s “missions to Mars,” test runs in isolation like NASA’s one-year HI-SEAS project, which just concluded in Hawaii, aren’t nearly as fraught. Only time will tell, however, if they’re an acceptable spacesuit dress rehearsal. The multinational HI-SEAS crew, having now “returned,” thinks the historic travel to Mars will be manageable, which it probably is, though it’s still a question whether humans need to be making such voyages right now.

From an un-bylined AP report:

HILO, Hawaii (AP) — Six scientists have completed a yearlong Mars simulation in Hawaii, where they lived in a dome in near isolation.

For the past year, the group in the dome on a Mauna Loa mountain could go outside only while wearing spacesuits.

On Sunday, the simulation ended, and the scientists emerged. 

Cyprien Verseux, a crew member from France, said the simulation shows a mission to Mars can succeed.

“I can give you my personal impression which is that a mission to Mars in the close future is realistic. I think the technological and psychological obstacles can be overcome,” Verseux said.

Christiane Heinicke, a crew member from Germany, said the scientists were able to find their own water in a dry climate.

“Showing that it works, you can actually get water from the ground that is seemingly dry. It would work on Mars and the implication is that you would be able to get water on Mars from this little greenhouse construct,” she said.•

