Science/Tech


We shouldn’t use the term “Sharing Economy” because there’s no actual sharing involved. And “Peer Economy” doesn’t quite fit either, since the workers aren’t treated like peers, far from it. But whatever the term, the Ubers and Lyfts and Airbnbs have made their way in the world by making up their own rules as needed and leaving until later any worry about preexisting legislation. It’s a willful attack upon convention, one that previously wrecked Napster but worked out wonderfully for YouTube. Google was likewise aided early in its development of driverless cars by simply going forward with highway testing and thinking about laws only when they arose in response.

It’s not so easy to reconcile one’s feelings about such corporate behavior. Many good things have come from these advances (e.g., smartphone hailing and payment for car service, the amazing archival video trove now available to us), but the ruined empires left behind also provided wealth that’s now forever lost. For instance: Despite Napster’s own demise, the record-industry model was toppled, which might seem like a good thing, but didn’t that model bring us so much joy across decades? It’s a complicated situation. From the Economist:

The tension between innovators and regulators has been particularly intense of late. Uber and Lyft have faced complaints that their car-hailing services break all sorts of taxi regulations; people renting out rooms on Airbnb have been accused of running unlicensed hotels; Tesla, a maker of electric cars, has suffered legal setbacks in its attempts to sell directly to motorists rather than through independent dealers; and in its early days Prosper Marketplace, a peer-to-peer lending platform, suffered a “cease and desist” order from the Securities and Exchange Commission. It sometimes seems as if the best way to identify a hot new company is to look at the legal trouble it is in.

There are two big reasons for this growing friction. The first is that many innovative companies are using digital technology to attack heavily regulated bits of the service economy that are ripe for a shake-up. Often they do so by creating markets for surplus labour or resources, using websites and smartphone apps: Uber and Lyft let people turn their cars into taxis; Airbnb lets them rent out their spare rooms; Prosper lets them lend out their spare cash. Conventional taxi firms, hoteliers and banks argue, not unreasonably, that if they have to obey all sorts of regulations, so should their upstart competitors.

The second is the power of network effects: there are huge incentives to get to the market early and grow as quickly as possible, even if it means risking legal challenges. Benjamin Edelman of Harvard Business School argues that YouTube owes its success in part to this strategy.•

After finishing this (often) funny book about an unfunny subject, I started on Martin Ford’s Rise of the Robots (excellent so far), which wonders if this new machine age will differ from the Industrial Age by not creating newer, better jobs to replace those disappeared by automation. So perhaps I’m thinking even more than usual about technological unemployment, especially in regard to the service sector, which, as Ford reminds us, is where most Americans now earn a living and which is most prone to robotization. It isn’t so much a fear stoked by where Deep Learning is right now as by where it may be a decade on. If it advances rapidly, how do we proceed?

In a Los Angeles Times piece, Jon Healey voices the same concern after attending the Milken Global Conference panel on robotics. The opening:

Sometimes I wonder if I’m in the very last generation of newspaper reporters.

After hearing Jeremy Howard talk at a Milken Global Conference panel on robotics this week, however, I’m wondering if I’m in the very last generation of workers.

Howard is chief executive of Enlitic, which uses computers to help doctors make diagnoses. His technology relies on something known as machine learning, or the process by which a computer improves its own capabilities. He’s also a top data scientist, which gives him a much better view of what’s coming than most people have.

This year, Howard said, machines are better than humans at recognizing objects in an image. Now here’s the scary part. Compared to where they were in November, Howard said, they are 15 times faster in recognizing objects while being more accurate and using fewer computational resources. In five years, they will be 10,000 times faster.

“We are seeing order-of-magnitude improvements every few months,” Howard said. Similar leaps are starting to appear in computers’ ability to understand written text.

In five years’ time, a single computer could be hundreds or thousands of times better at that task than humans, Howard said. Combine it with other computers on a network, and the advantage becomes even more pronounced.

“Probably in your lifetime, certainly in your kids’ lifetime … computers will be better than humans at all these things,” he said. And within five years after that, they will be 1,000 times better.

Gulp.•
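A quick sanity check, because exponential claims deserve one: if a 10,000-fold speedup really arrives over five years, the implied cadence is one order of magnitude roughly every 15 months, which is actually slower than the “every few months” pace quoted above. The arithmetic (mine, not Howard’s or the Times’s):

```python
import math

# If a 10,000x speedup lands in five years, how often must each
# tenfold improvement arrive? (Back-of-envelope, not from the article.)
target_speedup, years = 10_000, 5
tenfold_steps = math.log10(target_speedup)    # 4 orders of magnitude
months_per_step = years * 12 / tenfold_steps  # cadence implied by the claim
print(months_per_step)                        # 15.0 months per 10x
```

Either the five-year figure is conservative or the “every few months” pace won’t hold; both can’t be literally true.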


In a wonderful Backchannel piece, historian Leslie Berlin answers two key questions: “Why did Silicon Valley happen in the first place, and why has it remained at the epicenter of the global tech economy for so long?”

Sharing granular details (before the name “Silicon Valley” was popularized in 1971, the area was known as “Valley of the Heart’s Delight”) and big-picture items (William Shockley’s genius drew talent to the community, and his bizarre paranoia dispersed them), Berlin provides a full-bodied sense of the place’s past, something she says continues to be of interest to the latest wave of technologists.

The short answer to the two questions posed is that there was a confluence of technical, cultural and financial forces in this place in a relatively short span of time, and these same factors continue to sustain the area’s growth. (Oh, and immigration helps.) An excerpt from the “Money” section:

The third key component driving the birth of Silicon Valley, along with the right technology seed falling into a particularly rich and receptive cultural soil, was money. Again, timing was crucial. Silicon Valley was kick-started by federal dollars. Whether it was the Department of Defense buying 100% of the earliest microchips, Hewlett-Packard and Lockheed selling products to military customers, or federal research money pouring into Stanford, Silicon Valley was the beneficiary of Cold War fears that translated to the Department of Defense being willing to spend almost anything on advanced electronics and electronic systems. The government, in effect, served as the Valley’s first venture capitalist.

The first significant wave of venture capital firms hit Silicon Valley in the 1970s. Both Sequoia Capital and Kleiner Perkins Caufield and Byers were founded by Fairchild alumni in 1972. Between them, these venture firms would go on to fund Amazon, Apple, Cisco, Dropbox, Electronic Arts, Facebook, Genentech, Google, Instagram, Intuit, and LinkedIn — and that is just the first half of the alphabet.

This model of one generation succeeding and then turning around to offer the next generation of entrepreneurs financial support and managerial expertise is one of the most important and under-recognized secrets to Silicon Valley’s ongoing success. Robert Noyce called it “re-stocking the stream I fished from.” Steve Jobs, in his remarkable 2005 commencement address at Stanford, used the analogy of a baton being passed from one runner to another in an ongoing relay across time.•


Friend, is your home drone-proof? Are you keeping surveillance cameras and potential flying explosives at bay? Do you realize that soon enough they’ll be the size of a flea and you won’t even be able to see these invaders? Act now!

From an Economist report about the fledgling anti-drone industry:

Detecting a small drone is not easy. Such drones are slow-moving and often low-flying, which makes it awkward for radar to pick them up, especially in the clutter of a busy urban environment. “Defeating” a detected drone is similarly fraught with difficulty. You might be able to jam its control signals, to direct another drone to catch or ram it, or to trace its control signals to find its operator and then “defeat” him instead. But all of this would need to take place, as far as possible, without disrupting local Wi-Fi systems (drones are often controlled by Wi-Fi), and it would certainly have to avoid any risk of injuring innocent bystanders.

Bringing down quads

One company which thinks itself up to fulfilling the detection part of the process is DroneShield, in Washington, DC. This firm was founded by John Franklin and Brian Hearing after Mr Franklin crashed a drone he was flying into his neighbours’ garden by accident, without them noticing. He realised then how easily drones could be used to invade people’s privacy and how much demand there might be for a system that could warn of their approach.

DroneShield’s system is centred on a sophisticated listening device that is able to detect, identify and locate an incoming drone based on the sound it makes. The system runs every sound it hears through a sonic “library,” which contains all the noises that are made by different types of drone. If it finds a match, it passes the detected drone’s identity and bearing to a human operator, who can then take whatever action is appropriate.

Other ways of detecting drones are also under investigation.•
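The Economist doesn’t describe DroneShield’s matching algorithm, but the general shape of acoustic-signature lookup is easy to sketch. Below is a minimal, purely illustrative Python version, assuming a precomputed library of unit-normalized spectral fingerprints; every function name, band count and threshold here is my invention, not the company’s:

```python
import numpy as np

def fingerprint(audio):
    """Crude spectral fingerprint: mean magnitude in 32 coarse frequency bands."""
    spectrum = np.abs(np.fft.rfft(audio))
    bands = np.array_split(spectrum, 32)
    fp = np.array([band.mean() for band in bands])
    norm = np.linalg.norm(fp)
    return fp / norm if norm else fp  # unit-normalize so dot product = cosine

def identify_drone(audio, library, threshold=0.9):
    """Return the best-matching drone model, or None if nothing is close."""
    fp = fingerprint(audio)
    model, score = max(((name, float(fp @ sig)) for name, sig in library.items()),
                       key=lambda pair: pair[1])
    return (model, score) if score >= threshold else (None, score)
```

A real system would also have to estimate the bearing the article mentions, which takes a microphone array and time-difference-of-arrival math, well beyond a toy like this.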


Quantifying our behavior is likely only half the task of the Internet of Things, with nudging us the other part of the equation. I don’t mean just pointing us toward healthier choices we wouldn’t otherwise make (which is dubious even if salubrious) but placing us even deeper inside a consumerist machine.

Somewhat relatedly: Quentin Hardy of the New York Times looks at how the data-rich tomorrow may mostly benefit the largest technology companies. An excerpt:

This sensor explosion is only starting: Huawei, a Chinese maker of computing and communications equipment with $47 billion in revenue, estimates that by 2025 over 100 billion things, including smartphones, vehicles, appliances and industrial equipment, will be connected to cloud computing systems.

The Internet will be almost fused with the physical world. The way Google now looks at online clicks to figure out what ad to next put in front of you will become the way companies gain once-hidden insights into the patterns of nature and society.

G.E., Google and others expect that knowing and manipulating these patterns is the heart of a new era of global efficiency, centered on machines that learn and predict what is likely to happen next.

“The core thing Google is doing is machine learning,” Eric Schmidt, Google’s executive chairman, said at an industry event on Wednesday. Sensor-rich self-driving cars, connected thermostats or wearable computers, he said, are part of Google’s plan “to do things that are likely to be big in five to 10 years. It just seems like automation and artificial intelligence makes people more productive, and smarter.”


We were aware, more than seven decades ago, that the moon could be a landing pad, a rocket launcher and a nonpareil space observatory. Our failure to execute in this area is one of will, not knowledge. An article follows about the moon and its uses (including being an airport of sorts) from the December 29, 1940 Brooklyn Daily Eagle.

It’s certainly disingenuous that the UK publication the Register plastered the word “EXCLUSIVE” on Brid-Aine Parnell’s Nick Bostrom interview, since the philosopher, who’s become widely known for writing about existential risks in his book Superintelligence, has granted many interviews in the past. The piece is useful, however, for making it clear that Bostrom is not a confirmed catastrophist, but rather someone posing questions about challenges we may (and probably will) face should our species continue in the longer term. An excerpt:

Even if we come up with a way to control the AI and get it to do “what we mean” and be friendly towards humanity, who then decides what it should do and who is to reap the benefits of the likely wild riches and post-scarcity resources of a superintelligence that can get us out into the stars and using the whole of the (uninhabited) cosmos?

“We’re not coming from a starting point of thinking the modern human condition is terrible, technology is undermining our human dignity,” Bostrom says. “It’s rather starting from a real fascination with all the cool stuff that technology can do and hoping we can get even more from it, but recognising that there are some particular technologies that also could bring risks that we really need to handle very carefully.

“I feel a little bit like humanity is a bit like an infant or a teenager: some fairly immature person who has got their hands on increasingly powerful instruments. And it’s not clear that our wisdom has kept pace with our increasing technological prowess. But the solution to that is to try to turbo-charge the growth of our wisdom and our ability to solve global coordination problems. Technology will not wait for us, so we need to grow up a little bit faster.”

Bostrom believes that humanity will have to collaborate on the creation of an AI and ensure its goal is the greater good of everyone, not just a chosen few, after we have worked hard on solving the control problem. Only then does the advent of artificial intelligence and subsequent superintelligence stand the greatest chance of coming up with utopia instead of paperclipped dystopia.

But it’s not exactly an easy task.•


Last month, I blogged about a 1928 article which told of the demise of the great Norwegian explorer Roald Amundsen, so I would be remiss if I didn’t post a passage from “Moving to Mars,” Tom Kizzia’s wonderful New Yorker article investigating NASA’s attempts to understand the isolating effects of a potential Mars mission, which takes the 1898 Antarctic voyage of the Belgica, on which Amundsen was a crew member, as a seafaring parallel. That ship ran into major difficulties, among them the remoteness of the trip playing havoc with the minds of the discoverers. Kizzia visits a contemporary experiment in astronaut sequestration (and other pragmatic problems) in Hawaii (“Mauna Loa is our Martian mountain,” as it’s put), a federal study similar to what speleologist Michel Siffre attempted by his lonesome in the 1960s and ’70s. An excerpt:

A century after the Belgica’s return, a NASA research consultant named Jack Stuster began examining the records of the trip to glean lessons for another kind of expedition: a three-year journey to Mars and back. “Future space expeditions will resemble sea voyages much more than test flights, which have served as the models for all previous space missions,” Stuster wrote in a book, Bold Endeavors, which was published in 1996 and quickly became a classic in the space program. A California anthropologist, Stuster had helped design U.S. space stations by studying crew productivity in cases of prolonged isolation and confinement: Antarctic research stations, submarines, the Skylab station. The study of stress in space had never been a big priority at NASA—or of much interest to the stoic astronauts, who worried that psychologists would uncover some hairline crack that might exclude them from future missions. (Russia, by contrast, became the early leader in the field, after being forced to abort several missions because of crew problems.) But in the nineteen-nineties, with planning for the International Space Station nearly complete, NASA scientists turned their attention to journeys deeper into space, and they found questions that had no answers. “That kind of challenging mission was way out of our comfortable low-earth-orbit neighborhood,” Lauren Leveton, the lead scientist of NASA’s Behavioral Health and Performance program, said. Astronauts would be a hundred million miles from home, no longer in close contact with mission control. Staring into the night for eight monotonous months, how would they keep their focus? How would they avoid rancor or debilitating melancholy?

Stuster began studying voyages of discovery—starting with the Niña, the Pinta, and the Santa Maria, whose deployment, he observed, anticipated the NASA-favored principle of “triple redundancy.” Crews united by a special “spirit of the expedition” excelled. He praised the Norwegian Fridtjof Nansen’s three-year journey into the Arctic, launched in 1893, for its planning, its crew selection, and its morale. One icebound Christmas, after a feast of reindeer meat and cranberry jam, Nansen wrote in his journal that people back home were probably worried. “I am afraid their compassion would cool if they could look upon us, hear the merriment that goes on, and see all our comforts and good cheer.” Stuster found that careful attention to habitat design and crew compatibility could avoid psychological and interpersonal problems. He called for windows in spacecraft, noting studies of submarine crewmen who developed temporarily crossed eyes on long missions. (The problem was uncovered when they had an unusual number of automobile accidents on their first days back in port.) He wrote about remote-duty Antarctic posts suffering a kind of insomnia called “polar big eye,” which could be addressed by artificially imposing a diurnal cycle of light and darkness.

Bold Endeavors was a hit with astronauts, who carried photocopied pages into space, bearing Stuster’s recommendations on workload, cognitive impairment, and special celebration days. (He nominated the birthday of Jules Verne, whose fictional explorers headed to the moon with fifty gallons of brandy and a “vigorous Newfoundland.”) But historical analogies could take NASA only so far, Stuster argued. Before humans went to Mars, a final test should run astronauts through “high-fidelity mission simulations.” To the extent possible, these tests should be carried out in some remote environment, whose extreme isolation would bring to bear the stress and confinement of a journey to outer space.•


In one way or another, bigots are almost always the thing they hate.

It just isn’t usually as literal as the case of Hungarian politician Csanad Szegedi, a far-right anti-Semite until he discovered he was Jewish. That stunning revelation, which occurred three years ago, knocked him from his perch, forcing him to learn to walk again as an adult. Nick Thorpe of the BBC follows up with a report. An excerpt:

He comes across a bit like the American singer Johnny Cash. “Hello, I’m Csanad Szegedi.” And the schoolchildren of the Piarist Secondary School in Szeged hang on every word.

“I’m speaking to you here today,” says the tall, chubby-faced man, with small, intelligent eyes, “because if someone had told me when I was 16 or 17 what I’m going to tell you now, I might not have gone so far astray.”

As deputy leader of the radical nationalist Jobbik party in Hungary, Szegedi co-founded the Hungarian Guard – a paramilitary formation which marched in uniform through Roma neighbourhoods.

And he blamed the Jews, as well as the Roma, for the ills of Hungarian society – until he found out that he himself was one. After several months of hesitation, during which the party leader even considered keeping him as the party’s “tame Jew” as a riposte to accusations of anti-Semitism, he walked out.

Not a man to do things in half-measures, he has now become an Orthodox Jew, has visited Israel, and the concentration camp at Auschwitz which his own grandmother survived.•


Every now and then something truly expeditious occurs in technology or science (e.g., the leap forward in driverless-car development in the aftermath of the 2004 “Debacle in the Desert”). But progress is generally maddeningly slow, at least when viewed from the perspective of our own lifespans. So when Transhumanists promise a-mortality in two decades or people smart enough to know better argue that Moore’s Law will magically make everything just great by 2030, feel free to look askance.

Demonstrating this point: Sarah Zhang of Gizmodo scooped up an issue of Scientific American from a decade ago and measured the accuracy of the predictions on everything from stem cells to solar cells. It was not pretty. An excerpt:

I recently dug up the December 2005 issue of Scientific American and went entry by entry through the Scientific American 50, a list of the most important trends in science that year. I chose 2005 because 10 years seemed recent enough for continuity between scientific questions then and now but also long enough ago for actual progress. More importantly, I chose Scientific American because the magazine publishes sober assessments of science, often by scientists themselves. (Read: It can be a little boring, but it’s generally accurate.) But I also trusted it not to pick obviously frivolous and clickbaity things.

Number one on the list was a stem cell breakthrough that turned out to be one of the biggest cases of scientific fraud ever. (To be fair, it fooled everyone.) But the list held other unfulfilled promises, too: companies now defunct, an FBI raid, and many, many technologies simply still on the verge of finally making it a decade later. By my count, only two of its 16 medical discoveries of 2005 have resulted in a drug or hospital procedure so far. The rosy future is not yet here.

Science is not a linear march forward, as headlines seem to imply. Science is a long slow slog, and often a twisty one at that.•


Your new robot coworkers are darling–and so efficient! They’ll relieve you of so many responsibilities. And, eventually, maybe all of them. For now, factory robots will reduce jobs only somewhat, as we work alongside them. But eventually the band will be broken up, the machines going solo. Even the workers manufacturing the robots will soon enough be robots.

In a Technology Review article, Tom Simonite takes a smart look at this transitional phase, as robots begin to gradually commandeer the warehouse. He focuses on Fetch, a company that makes robots versatile enough to be introduced into preexisting factories. An excerpt:

Freight is designed to help shelf pickers, who walk around warehouses pulling items off shelves to do things like fulfilling online shopping orders. As workers walk around gathering items from shelves, they can toss items into the crate carried by the robot. When an order is complete, a tap on a smartphone commands the robot to scoot its load off to its next destination.

Wise says that robot colleagues like these could make work easier for shelf pickers, who walk as much as 15 miles a day in some large warehouses. Turnover in such jobs is high, and warehouse operators struggle to fill positions, she says. “We can reduce that burden on people and have them focus on the things that humans are good at, like taking things off shelves,” says Wise.

However, Wise’s company is also working on a second robot designed to be good at that, too. It has a long, jointed arm with a gripper, is mounted on top of a wheeled base, and has a moving “head” with a depth camera similar to that found in the Kinect games controller. This robot, named Fetch, is intended to rove around a particular area of shelving, taking items down and dropping them into a crate carried by a Freight robot.


I loved the Rem Koolhaas book Delirious New York, but I happened to be in Seattle in 2004 the week the Central Library he designed opened, and I wasn’t really enamored of it the way I am of many of his other works. It has an impressive exterior, but the interior felt like it was meant more to be looked at than utilized, though I guess that is the epitome of the modern library in a portable world, the best-case scenario, even: perhaps people will at least take a glance.

As his Fondazione Prada is set to open in Milan this month in a repurposed, century-old industrial space, the architect has become more focused on revitalization and preservation than on outré original visions. From a Spiegel Q&A conducted by Marianne Wellershoff:

Kultur Spiegel:

Does a building need to have a certain age or degree of prominence for us to recognize it as important?

Rem Koolhaas:

The idea of preservation dates back to the beginning of the modern age. During the 19th century, people essentially felt that something had to be at least 2,000 years old to be worthy of preservation. Today, we already decide during the planning stages how long a building should exist. At first, historical monuments were deemed worthy of preservation, then their surroundings, then city districts and finally large expanses of space. In Switzerland the entire Rhaetian Railway has been added to the list of UNESCO World Heritage Sites. The dimensions and repertoire of what is worthy of preserving have expanded dramatically.

Kultur Spiegel:

Were there structures in recent years that you think should have been better preserved?

Rem Koolhaas:

The Berlin Wall, for example. Only a few sections remain, because no one knew at the time how to deal with this monument. I find that regrettable.

Kultur Spiegel:

And what do you think of the concrete architecture of the 1960s, a style known as brutalism? Should it be protected or torn down?

Rem Koolhaas:

We should preserve some of it. It would be madness for an entire period of architectural history — that had a major influence on cities around the world — to disappear simply because we suddenly find the style ugly. This brings up a fundamental question: Are we preserving architecture or history?

Kultur Spiegel:

What is your answer?

Rem Koolhaas:

We have to preserve history.


In “Ancient DNA Tells A New Human Story,” the “Saturday Essay” at the Wall Street Journal, Matt Ridley explains how “low-cost, high-throughput DNA sequencing” has allowed prehistory to come into sharper focus. The facts don’t speak well of humans (we were not nurturers in the big picture), though they do prove what a polyglot species we actually are. There’s also a lot to reveal about the unusual course diseases may have traveled from the earliest societies to the modern ones. An excerpt:

It turns out that, in the prehistory of our species, almost all of us were invaders and usurpers and miscegenators. This scientific revelation is interesting in its own right, but it may have the added benefit of encouraging people today to worry a bit less about cultural change, racial mixing and immigration.

Consider two startling examples of how ancient DNA has solved long-standing scientific enigmas. Tuberculosis in the Americas today is derived from a genetic strain of the disease brought by European settlers. That is no great surprise. But there’s a twist: 1,000-year-old mummies found in Peru show symptoms of TB as well. How can this be—500 years before any Europeans set foot in the Americas?

In a study published late last year in the journal Nature, Johannes Krause of the Max Planck Institute for the Science of Human History in Jena, Germany, and his colleagues found that all human strains of tuberculosis share a common ancestor in Africa about 6,000 years ago. The implication is that this is when and where human beings first picked up TB. It is much later than other scientists had thought, but Dr. Krause’s finding only deepened the mystery of the Peruvian mummies, since by then, their ancestors had long since left Africa.

Modern DNA cannot help with this problem, but reading the DNA of the tuberculosis bacteria in the mummies allowed Dr. Krause to suggest an extraordinary explanation. The TB DNA in the mummies most resembles the DNA of TB in seals, which resembles that of TB in goats in Africa, which resembles that of the earliest strains in African people. So perhaps Africans gave tuberculosis to their goats, which gave it to seals, which crossed the Atlantic and gave it to native Americans.•


It would cost less to offer guaranteed paid work to unemployed Americans than to finance a social safety net, but there’s really no movement on either side of the aisle in Washington to aid the long-term unemployed, those left behind by the 2008 financial collapse and the growth of robotics. The problem has just been permitted to percolate.

In a Financial Times piece, Martin Wolf looks at two new titles about the haves and have-nots, Inequality: What Can be Done? by Anthony Atkinson and The Globalization of Inequality by François Bourguignon. Interesting that the acceleration of inequality is most marked in the U.S. and U.K. and has not been shared by all other industrialized nations. France, in fact, has seen disparity decrease during the same timeframe. An excerpt: 

Both authors agree that something should be done about inequality. Atkinson provides a number of arguments for concern over rising inequality within rich countries. Some argue, for example, that only equality of opportunity matters. To this he responds that successful personal outcomes are often merely a matter of luck, that the structure of rewards is often grossly unfair and that, with sufficient inequality of outcome, equality of opportunity must be a mirage.

Beyond this, argues Atkinson, unequal societies do not function well. The need to protect personal security or to incarcerate ever more people is likely to become a drag on economic performance and inimical to civilised life. If inequality becomes extreme, many will be unable to participate fully in their society. In any case, argues Atkinson, a pound in the hands of someone living on £10,000 a year must be worth more than it is to someone living on £1m. This does not justify complete equality, since the attempt to achieve it will impose costs. But it does mean that high inequality needs to be justified.

Atkinson goes far further, offering a programme of radical reform for the UK. It is not merely radical, but precise and (to the extent such a programme can be) costed. It starts from the argument that rising inequality “is not solely the product of forces outside our control. There are steps that can be taken by governments, acting individually or collectively, by firms, by trade union and consumer organisations, and by us as individuals to reduce the present levels of inequality.”

What about policy? At the global level, both authors recommend improved and more generous aid. Bourguignon adds that properly managed trade has much to offer developing countries. Within countries, both authors call for higher taxes on wealth and incomes, and for better regulation, particularly of finance. Also important, they agree, will be policies directly addressed at improving educational outcomes for the disadvantaged.

Thus policy makers should develop a national pay policy, including a statutory minimum wage set at the “living wage,” and should also offer guaranteed public employment at that rate.•


Goods and food made, served and delivered by humans will some day (and soon) be artisanal and specialized offerings, the same way some still buy handmade shoes at great expense while most of us hop around in the machine-manufactured kind. That’s right, the wealthy will say, an actual lady’s hands touched my carrots! How smart!

Seriously, almost all of us will eventually be replaced at work by robots, with almost every task that can be automated being automated, and there’s no economic plan in place to deal with that onrushing reality. How do we reconcile a free-market economy with a highly automated one? Of course, I’m just talking about Weak AI. What happens if something stronger comes along, which will likely occur if we go on long enough? As the song says, we’ll make great pets. From recent Steve Wozniak comments reported by Brian Steele at MassLive:

“I love technology, to try it out myself,” said Wozniak. “I’ve got at least 5 iPhones. … I have some Android phones.”

He imagined a world in which these kinds of devices would be able to teach our children for us.

“A lot of our schools slow students down,” he said. “We put computers in schools and the kids don’t come out thinking any better.”

Rather than just putting more gadgets and gizmos in the classroom, he said, each classroom needs to have fewer students, and kids who are further ahead than their peers should be nurtured, not forced to fall in line.

Dismissing the concern over giving artificial intelligence too much intelligence, he said that’s already happened.

“The machines won 200 years ago. We made them too important,” said Wozniak. “That makes us the family pet.”•


Terrorists dress the part now, aided by Hollywood editing techniques which help them satisfy expectations. And the rest of us also try to project an image virtually of who we want to be, if a less horrifying one. It’s neither quite real nor fake, just a sort of purgatory. It’s a variation of who we actually are–a vulgarization.

Here’s the transcription of a scene from 1981’s My Dinner with Andre, in which Wallace Shawn and Andre Gregory discuss how performance had been introduced in a significant way into quotidian life, long before Facebook gave the word “friends” scare quotes and prior to Reality TV, online identities and selfies:

Andre Gregory:

That was one of the reasons why Grotowski gave up the theater. He just felt that people in their lives now were performing so well, that performing in the theater was sort of superfluous, and in a way, obscene. Isn’t it amazing how often a doctor will live up to our expectation of how a doctor should look? You see a terrorist on television and he looks just like a terrorist. I mean, we live in a world in which fathers, single people or artists kind of live up to someone’s fantasy of how a father or single person or an artist should look and behave. They all act like they know exactly how they ought to conduct themselves at every single moment, and they all seem totally self-confident. But privately people are very mixed up about themselves. They don’t know what they should be doing with their lives. They’re reading all these self-help books.

Wallace Shawn:

God, I mean those books are so touching because they show how desperately curious we all are to know how all the others of us are really getting on in life, even though by performing all these roles in life we’re just hiding the reality of ourselves from everybody else. I mean, we live in such ludicrous ignorance of each other. I mean, we usually don’t know the things we’d like to know even about our supposedly closest friends. I mean, I mean, suppose you’re going through some kind of hell in your own life, well, you would love to know if your friends have experienced similar things, but we just don’t dare to ask each other. 

Andre Gregory:

No, it would be like asking your friend to drop his role.

Wallace Shawn:

I mean, we just put no value at all on perceiving reality. On the contrary, this incredible emphasis we now put on our careers automatically makes perceiving reality a very low priority, because if your life is organized around trying to be successful in a career, well, it just doesn’t matter what you perceive or what you experience. You can really sort of shut your mind off for years ahead in a way. You can turn on the automatic pilot.•


It doesn’t seem plausible to me that we’re on the cusp of a-mortality, no matter how many Transhumanists say they believe it to be so. My main disagreement with futurists is that they seem to always think the future is now, that any dream theoretically possible will soon be realized. Usually you have to work awhile to get there. 

But I’d be so happy my head would explode if Transhumanist Party presidential candidate Zoltan Istvan were included in the major debates with Hillary and Marco and Jeb, so that he could discuss robot hearts and designer babies. He has as much chance of winning the election as Ted Cruz but would be far more interesting to listen to.

Two questions follow from Roby Guerra’s new h+ interview with Istvan.

___________________________

Roby Guerra:

Zoltan, is knowledge the new food? Food for a new type of man of the year 2000 and beyond?

Zoltan Istvan:

The new way for human beings to move forward is via cyborgism, where we merge machine parts with the human body. This might include things like robotic hearts, artificial limbs, and mind reading headsets. These are the sorts of new technologies that will make up the modern human being moving forward.

___________________________

Roby Guerra:

If you were to get elected what would your practical policies be? In addition to supporting transhumanist projects?

Zoltan Istvan:  

The Transhumanist Party supports American values, prosperity, and security.

So the three primary things I would do if I became president are:

1) Attempt to do everything possible to make it so America’s amazing scientists and technologists have resources to overcome human death and aging within 15-20 years–a goal an increasing number of leading scientists think is reachable.

2) Create a cultural mind-set in America that embracing and producing radical technology and science is in the best interest of our nation and species.

3) Create national and global safeguards and programs that protect people against abusive technology and other possible planetary perils we might face as we transition into the transhumanist era.•


The quantified self certainly has its benefits, allowing us to detect illnesses early–perhaps eventually even anticipate them. We’ll have the ability to monitor our vitals and behavior whenever we like, but corporations may also have their telescope inside our bodies and minds. From Jacob Silverman at the Baffler:

This month, John Hancock Insurance—whose patriotic namesake might be disappointed that the company is now a wholly owned subsidiary of Canadian giant Manulife Financial—announced that it would distribute rebates to life insurance customers in exchange for access to their fitness monitor and location information.

IBM and Microsoft are marketing their cloud computing services to insurers, offering to crunch their data for them.

Car insurers like Progressive are discovering the value of real-time telematics data, culled from GPS units or special devices that can track whether you brake too hard. (Want to gag a little? Check out this British insurer using information from car computers to encourage motorists to “drive like a girl.”)

This is the first wave of insurance companies capitalizing on the explosion in personal data, and it looks to get worse. Trade publications are awash with rosy stories about the profits to be extracted from modifying premiums not just once or twice a year, but every day. Soon, rates will be adjusted in real time. As one insurance consultant told Forbes, “the healthier you get the lower your premiums go.” The corollary is that if you get sick or injured, or if you do anything that the insurer’s algorithms deem unhealthy, your premiums will increase.•
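To make the mechanics concrete, here’s a toy version of the daily premium adjustment the trade publications are salivating over. None of this is any insurer’s actual formula; the signals and weights are invented for illustration:

```python
BASE_MONTHLY_PREMIUM = 120.00  # dollars; hypothetical baseline

def adjusted_premium(daily_steps, hard_brake_events, resting_heart_rate):
    """Nudge the rate with a weighted score of tracked behavior (made-up weights)."""
    score = 0.0
    score -= min(daily_steps / 10_000, 1.0) * 0.10    # up to 10% off for activity
    score += hard_brake_events * 0.02                 # 2% penalty per harsh brake
    score += max(resting_heart_rate - 70, 0) * 0.005  # 0.5% per bpm above 70
    return round(BASE_MONTHLY_PREMIUM * (1 + score), 2)

print(adjusted_premium(12_000, 1, 64))  # 110.4: an active day, one hard brake
print(adjusted_premium(2_000, 3, 82))   # 132.0: the algorithm's bad day
```

The unsettling part isn’t the arithmetic; it’s that every input to a function like this is something a fitness band or dashboard dongle already reports.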


I’ve always traced the War on Drugs in the U.S. to the Nixon Administration, but British journalist Johann Hari, author of the new book Chasing the Scream, dates it to the end of Prohibition, particularly to bureaucrat Harry Anslinger, who later mentored Sheriff Joe Arpaio of Tent City infamy. He also reveals how intertwined the crackdown was (and is) with racism. No shocker there.

The so-called War has been a huge failure tactically and financially and has criminalized citizens for no good reason. All the while, there’s been a tacit understanding that millions of Americans are hooked on Oxy and the like, dousing their pain with a perfectly legal script. These folks are far worse off than pot smokers, who still run afoul of the law in most states. I’m personally completely opposed to recreational drug use, but I feel even more contempt for the War on Drugs. It’s done far more harm than good.

Matthew Harwood of the ACLU interviews Hari at Medium. The opening:

Matthew Harwood:

So Chasing the Scream, what’s with the title?

Johann Hari:

The most influential person who no one has ever heard of is Harry Anslinger, the man who invented the modern War on Drugs — way before Nixon, way before Reagan. He’s the guy who takes over the Federal Bureau of Prohibition just as alcohol prohibition is ending. So, he inherits this big government department with nothing to do, and he basically invents the modern drug war to give his bureaucracy a purpose. For example, he had previously said marijuana was not a problem — he wasn’t worried about it, it wasn’t addictive — but he suddenly announces that marijuana is the most dangerous drug in the world, literally — worse than heroin — and creates this huge hysteria around it. He’s the first person to use the phrase “warfare against drugs.”

But he was driven by more than just trying to keep his large bureaucracy in work. When he was a little boy, he grew up in a place called Altoona in Pennsylvania, and he had this experience that really drove him all his life. He lived near a farmer and his wife, and one day, he goes to the farmhouse, and the farmer’s wife was screaming and asking for something. The farmer sent little Harry Anslinger to the local pharmacy to buy opiates — because of course opiates were legal. Harry Anslinger hurries back and gives the opiates to the farmer’s wife, and the farmer’s wife stops screaming. But he remembered this as this foundational moment where he realized the evils of drugs, and he becomes obsessed with eradicating drugs from the face of the earth. So I think of him as chasing this scream across the world. The tragedy is he created a lot of screams in turn.

It leads him to construct this global drug war infrastructure that we are all living with now. We are all living at end of the barrel of Harry Anslinger’s gun. He didn’t do it alone — I’m not a believer in the “Great Man Theory of History.” He could only do that because he was manipulating the fears of his time. But he played a crucial role.

Matthew Harwood:

We here at the ACLU look at the drug war and see that it has a disproportionate impact on communities of color. You find, however, that this war was pretty racist from the beginning.

Johann Hari:

If you had said to me four years ago, “Why were drugs banned?” I would have assumed it for the reasons people would give today — because you don’t want kids to use them or you don’t want people to become addicted. What’s striking when you look at the archives from the time is that almost never comes up. Overwhelmingly the reason why drugs are banned is race hysteria.•


Tesla is officially no longer solely an EV company but a home-battery outfit as well, which could make for a smoother grid and be a boon for alternative energies. Elon Musk should be pleased, as should those early tinkerers who began repurposing his electric-car batteries for makeshift home conversions. Perhaps the biggest benefit, as Chris Mooney of the Washington Post astutely points out, is the ability to store wind and solar power. An excerpt:

“Storage is a game changer,” said Tom Kimbis, vice president of executive affairs at the Solar Energy Industries Association, in a statement. That’s for many reasons, according to Kimbis, but one of them is that “grid-tied storage helps system operators manage shifting peak loads, renewable integration, and grid operations.” (In fairness, the wind industry questions how much storage will be needed to add more wind onto the grid.)

Consider how this might work using the example of California, a state that currently ramps up natural gas plants when power demand increases at peak times, explains Gavin Purchas, head of the Environmental Defense Fund’s California clean energy program.

In California, “renewable energy creates a load of energy in the day, then it drops off in the evening, and that leaves you with a big gap that you need to fill,” says Purchas. “If you had a plenitude of storage devices, way down the road, then you essentially would be able to charge up those storage devices during the day, and then dispatch them during the night, when the sun goes down. Essentially it allows you to defer when the solar power is used.”•
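Purchas’s charge-by-day, dispatch-by-night idea reduces to a simple scheduling loop. A minimal sketch, with made-up hourly numbers standing in for real solar and load profiles:

```python
# Hour-by-hour (0-23) solar output and household load, in kW; all invented.
SOLAR = [0,0,0,0,0,1,3,5,7,8,8,7,6,5,3,1,0,0,0,0,0,0,0,0]
LOAD  = [1,1,1,1,1,2,2,3,3,3,3,3,3,3,3,4,5,6,6,5,4,3,2,1]

battery, capacity = 0.0, 10.0  # kWh stored / battery size
grid_draw = []
for solar, load in zip(SOLAR, LOAD):
    surplus = solar - load
    if surplus > 0:                        # midday: bank the excess
        battery = min(capacity, battery + surplus)
        grid_draw.append(0.0)
    else:                                  # evening: drain the bank first
        from_battery = min(battery, -surplus)
        battery -= from_battery
        grid_draw.append(-surplus - from_battery)

print(sum(grid_draw))  # kWh still needed from the grid after time-shifting
```

Same solar, same load; the battery just moves the daylight surplus to the hours after sundown, which is all “defer when the solar power is used” means.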


Even someone as lacking in religion as myself can be perplexed by Richard Dawkins’ midlife anti-theology mission to irk people of faith on chat shows and the like. In his proselytizing–and that’s what it is–he has the fervor of a particularly devout and curmudgeonly priest. It’s true that many a horrid act has been committed in the name of the father, but so have many been committed by those who believe (as Dawkins and I do) that we’re orphans. I don’t want to keep someone on an operating table (or the one doing the operating) from believing a little in magic at that delicate moment, even if it is rot. Trust in science, and say a prayer if you like.

But I wouldn’t let his noisily running a chariot over the gods make me deny his wonderful intellect and contributions to knowledge, from genes to memes. At Edge, the site’s founder and longtime NYC avant-gardist, John Brockman, has an engrossing talk with the evolutionary biologist about his “vision of life.” The transcript makes for wonderful reading.

Dawkins believes if life exists elsewhere in the universe (and his educated guess is that it does), it’s of the Darwinian, evolutionary kind, that no other biological system besides the one we know would work under the laws of physics. He also notes that we contribute in our own way to the amazing progress of life, even if our time on the playing field can be brutal and brief. As Dawkins puts it, “we are temporary survival machines” coded to be hellbent on seeing our genes persevere, even though life will eventually evolve in ways presently unimaginable to us. It will still be life, and that’s our gift to it. No matter what we personally feel is the main purpose of our existence, it’s actually that.

The opening:

Natural selection is about the differential survival of coded information which has power to influence its probability of being replicated, which pretty much means genes. Coded information, which has the power to make copies of itself—“replicator”—whenever that comes into existence in the universe, it potentially could be the basis for some kind of Darwinian selection. And when that happens, you then have the opportunity for this extraordinary phenomenon which we call “life.”

My conjecture is that if there is life elsewhere in the universe, it will be Darwinian life. I think there’s only one way for this hyper complex phenomenon which we call “life” to arise from the laws of physics. The laws of physics—if you throw a stone up in the air, it describes a parabola, and that’s it. But biology, without ever violating the laws of physics, does the most extraordinary things; it produces machines which can run, and walk, and fly, and dig, and swing through the trees, and think, and produce the whole of human technology, human art, human music. This all comes about because at some point in history, about 4 billion years ago, a replicating entity arose, not a gene as we would now see it, but something functionally equivalent to a gene, which because it had the power to replicate and the power to influence its own probability of replicating, and replicated with slight errors, gave rise to the whole of life. 

If you ask me what my ambition would be, it would be that everybody would understand what an extraordinary, remarkable thing it is that they exist, in a world which would otherwise just be plain physics. The key to the process is self-replication. The key to the process is that … let’s call them “genes” because nowadays they pretty much all are genes. Genes have different probabilities of surviving. The ones that survive, because they have such high fidelity replication, are the ones which we see in the world, the ones which dominate gene pools in the world. So for me, the replicator, the gene, DNA, is absolutely key to the whole process of Darwinian natural selection. So when you ask the question, what about group selection, what about higher levels of selection, what about different levels of selection, everything comes down to gene selection. Gene selection is fundamentally what is really going on. 

Originally these replicating entities would have been floating free and just replicating in the primeval soup, whatever that was. But they “discovered” a technique of ganging together into huge robot vehicles, which we call individual organisms.•
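That last idea, replicators with slight copying errors plus differential survival, is simple enough to watch run. A toy simulation, a cartoon of the logic with every parameter invented:

```python
import random

TARGET = [1] * 20  # stands in for "well suited to the environment"

def fitness(genome):
    return sum(bit == goal for bit, goal in zip(genome, TARGET))

pool = [[random.randint(0, 1) for _ in range(20)] for _ in range(100)]
for _ in range(2000):
    parent = max(random.sample(pool, 5), key=fitness)           # biased survival
    child = [bit ^ (random.random() < 0.01) for bit in parent]  # slight errors
    pool[random.randrange(len(pool))] = child                   # copy takes a slot

print(max(fitness(g) for g in pool))  # climbs toward 20 as selection acts
```

Nothing in the loop wants anything; the pool simply comes to be dominated by whatever copies itself best, which is Dawkins’ whole point.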


At the Gawker site Phase Zero, William M. Arkin conducted a very interesting Q&A with Harper’s Washington Editor Andrew Cockburn, who has just published a sadly timely book, Kill Chain, which focuses on the U.S. drone program. Although the author doesn’t believe military drones will become autonomous, he feels the bigger-picture machinery of the system already is. Remote war has been a dream pursued since Tesla, and now it’s a global reality. One exchange:

William M. Arkin:

The CIA’s drone program, the President’s drone program, Congressionally approved, not approved, tacitly accepted: almost every description of the drone program makes it sound like it isn’t the United States and its foreign policy. Is that the consequence of something unique to drones?

Andrew Cockburn:

It’s interesting, drones and covert foreign policy seem to go together. In Operation Menu, Nixon’s secret bombing of Cambodia, the B-52 flight paths were directed from the ground, as was the moment of bomb release. In other words, the B-52s were essentially drones. Maybe the drone campaign isn’t described as the foreign policy of the United States because there’s a tinge of embarrassment that we’re murdering people in foreign countries as a matter of routine.

Beyond that, maybe we should call it the drone program’s drone program, because it’s taken on a life of its own, a huge machine that exists to perpetuate itself. Just take a look at the jobs listed almost every day for just one of the Distributed Common Ground System stations at Langley AFB in the Virginia Southern Neck. On April 25, for example, various contractors (some of which you’ve never heard of) were asking for a “USAF Intelligence Resource Management Analyst,” a “Systems Integrator,” a “USAF Senior Intelligence Programs and Systems Support Analyst,” a “USAF ISR Weapons Systems Integration Support Analyst” a “DPOC Network Engineer,” whatever that is, and a few others. All high paying, all of course requiring Top Secret or higher clearances. Every so often we hear that the CIA drone program is going to be turned over to the military. I say, ‘good luck with that’ – is the CIA really going to obligingly hand over a huge chunk of its raison d’etre, and its budget, its enormous targeting apparatus? There’s a lot of talk about “autonomous drones,” which aren’t going to happen, but I think the whole system is autonomous, one giant robot that has become unstoppable as it grinds along, sucking up money and killing people along the way.•


The International Commission on Stratigraphy awards a golden spike–which is promptly driven into the ground–when scientists provide geological proof that a new epoch has begun. By that definition, are we in the Anthropocene, a human-driven age of climate turmoil? Perhaps more importantly: Does it matter that we establish the Anthropocene by measuring rock when signs of the deleterious human effect on the planet are manifest in many other ways?

In “Written in Stone,” an Aeon article, James Westcott wonders about the rush by some geologists to make the Anthropocene “official,” especially by measures that perhaps aren’t the most vital ones. An excerpt:

For a potentially epoch-making event, the press conference after the AWG’s first face-to-face meeting in Berlin in October was muted, and sparsely attended. And yet Jan Zalasiewicz, a paleobiologist at the University of Leicester and chairman of the AWG, had important news to impart. He reported a growing feeling within the group that a strong case for formalising the Anthropocene can be made when the AWG submits its report to the International Commission of Stratigraphy in 2016.

The AWG will recommend a start date for the Anthropocene in the early 1950s (relegating many of our parents and grandparents to an entirely different epoch). Why then? Well, the flurry of post-war thermonuclear test explosions left a radionuclide signature that has spread across the entire planet in the form of carbon-14 and plutonium-239, which has a half-life of 24,110 years. The early 1950s also coincides with the beginning of the Great Acceleration in the second half of the 20th century, a period of unprecedented economic and population growth with matching surges – charted by Will Steffen and colleagues in Global Change and the Earth System (2004) – in every aspect of planetary dominance, from the damming of rivers to fertiliser production, to ozone depletion.

The Anthropocene’s advocates have a huge buffet of evidence that human activity amounts to an almost total domination of the planet – one of the latest being new maps that show the extent to which the United States has been paved over. But their problem in terms of formalisation on the Geological Time Scale is that the Earth has only just begun to digest this deadly feast through the pedosphere (the outermost layer) and into the lithosphere (the crust beneath it). The challenge is to convince geologists accustomed to digging much further back in time that the evidence accumulating now will be significant, stratigraphically speaking, deep into the future. Geologists are being asked to become prophets. …

Whatever happens after the AWG submits its recommendation in 2016, anthropocenists are, ironically, selling their theory short by seeking a place on something as esoteric as the Geological Time Scale. The Anthropocene, in all its multi-faceted, Earth system-altering horror, is more serious than that. The hope of course is that if we can name a new epoch after us then it will finally be a truth universally acknowledged that humans have more power than they know how to handle, and we will be able to start picking up the pieces.•
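The “deep into the future” worry, at least for the bomb-spike marker, yields to ordinary half-life arithmetic: the fraction of plutonium-239 remaining after t years is 0.5^(t / 24,110). A quick check:

```python
# Fraction of plutonium-239 (half-life 24,110 years) remaining after t years.
T_HALF = 24_110
for t in (10_000, 24_110, 100_000):
    print(f"{t:>7} years: {0.5 ** (t / T_HALF):.3f}")
# ->  10000 years: 0.750 | 24110 years: 0.500 | 100000 years: 0.056
```

Even a hundred millennia on, more than a twentieth of the layer’s plutonium is still there, the sort of persistence stratigraphers can actually work with.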


Andrew McAfee, co-author with Erik Brynjolfsson of 2014’s great Second Machine Age, recently argued in a Financial Times blog post that the economy’s behavior is puzzling these days. It’s difficult to find fault with that statement.

Inflation was supposed to be soaring by now, but it’s not. Technology was going to make production grow feverishly, but traditional measures don’t suggest that. Job growth and wages were supposed to return to normal once the financial clouds cleared, though that’s been largely a dream deferred. What gives?

In a sequel of sorts to that earlier post, McAfee returns to try to suss out part of the answer, which he feels might be that the new technologies have created an abundance which has suppressed inflation. That seems certain to be a feature of the future as 3D printers move to the fore, but has it already happened? And has this plenty made jobs scarcer and suppressed wages? An excerpt:

In a Tweetstorm late last year, venture capitalist Marc Andreessen argued that technological progress might be another important factor driving prices down. He wrote: “While I am a bull on technological progress, it also seems that much of that progress is price deflationary in nature, so even extremely rapid tech progress may not show up in GDP or productivity stats, even as it = higher real standards of living.”

Prof [Larry] Summers shot back quickly, noting: “It is… not clear how one would distinguish deflationary and inflationary progress. The price level reflects the value of goods in terms of money, so it is hard to analyze without thinking about monetary and financial conditions.” This is surely correct, but is Prof Summers being too dismissive of Mr Andreessen’s larger point? Can tech progress be contributing to price declines?

Moore’s law — that computer processing power doubles roughly every two years — has made computers themselves far cheaper. It has also pretty directly led to the shrinkage of industries as diverse as encyclopedias, recorded music, film photography and standalone GPS devices. An intriguing analysis by writer Chris Goodall found that the “UK began to reduce its consumption of physical resources in the early years of the last decade.” Technological progress, which by its nature allows us to do more with less, is a big part of this move past “peak stuff.”

It’s also probably a big part of the reason that corporate profits remain so high, even while overall economic growth stagnates.
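The deflationary mechanism Andreessen is pointing at is just compounding run in reverse on prices: if the same dollar buys twice the computation every two years, entire product categories collapse into software. The arithmetic behind Moore’s law, in three lines:

```python
# Doubling every two years, compounded over n years.
def moore_multiple(years, doubling_period=2):
    return 2 ** (years / doubling_period)

print(moore_multiple(10))  # ~32x in a decade
print(moore_multiple(40))  # ~1,000,000x over four decades
```

Which is why encyclopedias, film photography and standalone GPS units could vanish into a phone without ever showing up as measured GDP growth.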


Oliver Sacks on a motorcycle in NYC, 1961. (Photo by Douglas White.)

I’ve read most of Lawrence Weschler’s books and gotten so much from them, particularly Seeing Is Forgetting the Name of the Thing One Sees and Vermeer in Bosnia. In a new Vanity Fair article, he uses passages from a long-shelved biography of his friend Oliver Sacks, the terminally ill neurologist, to profile the doctor in a way only a confidant and great writer can, revealing the many lives Sacks has lived, in addition to the public-intellectual one we’re all familiar with. Weschler is convinced that Sacks’ period of excessive experimentation with drugs when young led to his later scientific breakthroughs. An excerpt:

I had originally written him a letter, sometime in the late 70s, from my California home. Somehow back in college I had come upon Awakenings, published in 1973, an account of his work with a group of patients who had been warehoused for decades in a home for the incurable—they were “human statues,” locked in trance-like states of near-infinite remove following bouts of a now rare form of encephalitis. Some had been in this condition since the mid-1920s. These people were suddenly brought back to life by Sacks, in 1969, following his administration of the then new “wonder drug” L-dopa, and Sacks described their spring-like awakenings and the harrowing siege of tribulations that followed. In the book, Sacks gave the facility where all this happened the pseudonym “Mount Carmel,” an apparent reference to Saint John of the Cross and his Dark Night of the Soul. But, as I wrote to Sacks in that first letter, his book seemed to me much more Jewish and Kabbalistic than Christian mystical. Was I wrong?

He responded with a hand-pecked typed letter of a good dozen pages, to the effect that, indeed, the old people’s home in question, in the Bronx, was actually named Beth Abraham; that he himself came from a large and teeming London-based Jewish family; that one of his cousins was in fact the eminent Israeli foreign minister Abba Eban (another, as I would later learn, was Al Capp, of Li’l Abner fame); and that his principal intellectual hero and mentor-at-a-distance, whose influence could be sensed on every page of Awakenings, had been the great Soviet neuropsychologist A.R. Luria, who was likely descended from Isaac Luria, the 16th-century Jewish mystic.

Our correspondence proceeded from there, and when, a few years later, I moved from Los Angeles to New York, I began venturing out to Oliver’s haunts on City Island. Or he would join me for far-flung walkabouts in Manhattan. The successive revelations about his life that made up the better part of our conversations grew ever more intriguing: how both his parents had been doctors and his mother one of the first female surgeons in England; how, during the Second World War, with both his parents consumed by medical duties that began with the Battle of Britain, he, at age eight, had been sent with an older brother, Michael, to a hellhole of a boarding school in the countryside, run by “a headmaster who was an obsessive flagellist, with an unholy bitch for a wife and a 16-year-old daughter who was a pathological snitch”; and how—though his brother emerged shattered by the experience, and to that day lived with his father—he, Oliver, had managed to put himself back together through an ardent love of the periodic table, a version of which he had come upon at the Natural History Museum at South Kensington, and by way of marine-biology classes at St. Paul’s School, which he attended alongside such close lifetime friends as the neurologist and director Jonathan Miller and the exuberant polymath Eric Korn. Oliver described how he gradually became aware of his homosexuality, a fact that, to put it mildly, he did not accept with ease; and how, following college and medical school, he had fled censorious England, first to Canada and then to residencies in San Francisco and Los Angeles, where in his spare hours he made a series of sexual breakthroughs, indulged in staggering bouts of pharmacological experimentation, underwent a fierce regimen of bodybuilding at Muscle Beach (for a time he held a California record, after he performed a full squat with 600 pounds across his shoulders), and racked up more than 100,000 leather-clad miles on his motorcycle. And then one day he gave it all up—the drugs, the sex, the motorcycles, the bodybuilding. By the time we started talking, he had been pretty much celibate for almost two decades.•

