Science/Tech

You are currently browsing the archive for the Science/Tech category.

While it might make for an interesting bull session to predict what industry, nascent or developed, could create the first trillion-dollar company, such a development wouldn’t really be a good barometer of how well off we truly are. Is this company helping or hurting the environment? Is it a sign of greater wealth inequality? Are there other unintended consequences? In a Washington Post opinion piece, futurist Dominic Basulto mainly concerns himself with just the central question. An excerpt:

Artificial intelligence is one industry that could give rise to the first $1 trillion company – provided, of course, that sentient AI life doesn’t kill off humanity before it reaches that target. The promise of AI is that almost anything can be made more valuable by making it smarter. It’s no wonder that we’ve started to see an initial land grab of smart “machine learning” companies and talent. Google, for example, has spent more than $400 million to acquire DeepMind Technologies to ramp up its deep learning capabilities. For AI ever to produce a trillion-dollar company, though, machines will need to do more than just recognize patterns and crunch a lot of data. AI start-ups will need to create a fundamentally new way for humans to interact with machines, perhaps even a new way of learning or acquiring knowledge.

Another potential trillion-dollar industry is 3D printing, which promises a “second industrial revolution.” 3D printing has the potential to upend the way we buy, sell and make everything, turning every garage into a personal fabrication unit. If the first industrial revolution was all about mass production, the second industrial revolution will be all about mass customization. There are any number of companies today that hope to use 3D printing for rapid prototyping, design and small-batch manufacturing, but the real boost will come when there is a 3D printer on every desk and in every home. Think of a mega-ecosystem even bigger than the one Apple is building.

But that’s really just scratching the surface. Peter Diamandis and Steven Kotler in their new book Bold: How to Go Big, Create Wealth and Impact the World highlight a handful of technologies — including the Internet of Things, AI, robotics and synthetic biology — that have the potential to fundamentally change the way we live. Add to this list other futuristic tech favorites such as drones, asteroid mining, augmented reality, and nanotechnology and you have other prospects for the first $1 trillion company.•


Herman Kahn saw the glass as half full. The futurist and systems theorist (mentioned at the end of the post on 1970s Swinging Singles) thought that while nuclear war would be awful, it wouldn’t wipe out the entire species; there were varying degrees of awfulness. He was likely right, but his conversation about the end of much of humanity and his coining of the term “megadeath” made him one of the inspirations for Stanley Kubrick’s titular Dr. Strangelove character. It’s interesting that even someone so closely associated with nuclear weapons believed that solar was the future. The following is an odd and disapproving 1974 piece about him from something called IPS, which apparently was a press service run by the loony Lyndon LaRouche, one of the strangest figures of 20th-century American politics, who turned 92 last fall:

IPS INTERVIEW WITH HERMAN KAHN: WOULD YOU BUY A USED FUTURE FROM THIS MAN?

NEW YORK, N.Y., Nov. 24 (IPS)–In a recent interview with IPS, futurologist Herman Kahn confirmed that no one but a criminal psychotic could field “ideas” for the Rockefeller family. The portly Kahn, who is the founder and director of the Hudson Institute, gained notoriety during the late 1950s by calculating the number of “megadeaths” that could be expected as a result of nuclear war between the United States and the Soviet Union.

Also present at the interview was Professor Robert Mundell of Columbia University. Mundell organized the conference of bankers and economists held near Siena, Italy in September, exposed by IPS as a planning session for the Chileanization of Great Britain and Italy.

During the wide-ranging discussion, the following exchange took place: 

IPS:

If you reduce the level of energy “throughput” into the biosphere, as the Hudson Institute proposes, you tend to set into motion entropic processes which get out of your control. The result would be an ecological holocaust. For example, in Brazil, where levels of nutrition and health have been reduced, you see the spread of new types of plagues faster than you can find vaccines for them.

Herman Kahn:

Well, there are two basic types of population curves, the up and down curve, where you expand population, overgraze, and so forth, and then have to reduce population. Or there is the collapse curve, where you have famine and disease. I would divide the world up into four categories. 1.4 billion people are “rich”–include Portugal in this. 0.85 billion live in Communist Asia. 1.05 billion are “coping”–like Mexico and Brazil. That is, income is trickling down to at least one third of the population. People always misunderstand “trickle down” theories because they think wealth is supposed to get to the bottom. It never does … But there is no study which shows a correlation between hunger and disease. You won’t have plagues that kill half the world’s population. (At this point Mr. Kahn, who had ordered a Japanese dinner, paused to eat a raw squid.)

Robert Mundell:

No, we are about to have another plague. We have them every 300 years. You know, 1100, 1400, then–I forget the dates exactly.

Herman Kahn:

You mean the one in the eighteenth century where they all danced around?

Robert Mundell:

That was it. Anyway, every 300 years you wipe out half the world’s population.

Herman Kahn:

Oh, a cycle theory. I’m not sure about cycle theories…

Robert Mundell:

Well, anyway, I only make up these theories for fun.

Kahn, whose style is a blend of Jackie Gleason and Heinrich Himmler, turned his attention again to his raw fish, adding to the morning coffee stains on his shirt front.

Both these men, who have access to cabinet-level members of many governments, have been touted as leading minds of the capitalist class. In fact, “Fat Herman,” as he is known to friends, is something of a public-display item, next to whom the other psychotic “planners” of the Rockefeller faction are intended to look sane.

Among his most recent efforts are a study of Britain, calling into question the country’s existence by the year 1980, and the preparation of a four-year development program for the “radical” government of Algeria.•


I’m apparently the one person in the world who has no interest in Star Trek, the TV shows, the movies, any of it. Yes, I know, I ruin everything. But Leonard Nimoy’s passing is a real sadness. His gravitas was used to perfection not just for Mr. Spock, but also in the pseudoscience documentary series In Search of… and in one of my favorite movies, Philip Kaufman’s 1978 version of Invasion of the Body Snatchers, in which he played the bookish psychiatrist of your nightmares.


At the Forbes site, John Tamny, author of the forthcoming pop culture-saturated book Popular Economics, argues that robots will be job creators, not killers, and breathlessly asserts that the Digital Revolution will follow the arc of the Industrial one. Perhaps. But the decades of that potential transition could be very bumpy. Although, as I’ve said before, you wouldn’t want to live in a country left behind in the race to greater AI.

But robots or no robots, here’s one job that should be created: someone to design a site for Forbes that isn’t a complete piece of shit. It’s really like Web 1.0 over there. The opening of Tamny’s reasoning:

As robots increasingly adopt human qualities, including those that allow them to replace actual human labor, economists are starting to worry.  As the Wall Street Journal reported last week, some “wonder if automation technology is near a tipping point, when machines finally master traits that have kept human workers irreplaceable.”

The fears of economists, politicians and workers themselves are way overdone.  They should embrace the rise of robots precisely because they love job creation.  As my upcoming book Popular Economics points out with regularity, abundant job creation is always and everywhere the happy result of technological advances that tautologically lead to job destruction.

Robots will ultimately be the biggest job creators simply because aggressive automation will free us up to do new work by virtue of it erasing toil that was once essential.  Lest we forget, there was a time in American history when just about everyone worked whether they wanted to or not — on farms — just to survive.  Thank goodness technology destroyed lots of agricultural work that freed Americans up to pursue a wide range of vocations off the farm.

With their evolution as labor inputs, robots bring the promise of new forms of work that will have us marveling at labor we wasted in the past, and that will make past job destroyers like wind, water, the cotton gin, the car, the internet and the computer seem small by comparison.  All the previously mentioned advances made lots of work redundant, but far from forcing us into breadlines, the destruction of certain forms of work occurred alongside the creation of totally new ways to earn a living.  Robots promise a beautiful multiple of the same.•


Nothing is more amusing than a mainstream publication introducing the masses to an unsettling subculture, especially if we’re talking about the 1960s. The May 9, 1966 issue of Newsweek did just that with a sprawling piece about LSD, which alternates between interesting writing and a basic primer on the emerging youth revolution. There are quotes from British-born psychiatrist Humphry Osmond, who coined the term “psychedelic,” and the article wisely anticipates the coming of a pharmacological culture. Most of the article can be read here and here and here, though the last part is missing. The opening:

“As I was lying on the ground, I was looking up at the sky and I could sort of see through the leaves of the plant and see all the plant fluids flowing around inside of it. I thought the plant was very friendly and very, very closely related to me as a living thing. For a while, I became a plant and felt my spine grow down through the bricks and take root…and I raised my arms up and waved them around with the plant and I really was a plant!

“But toward the end I was watching Lois and I thought I saw the drug take hold of her in a bad way…Suddenly I was afraid. I looked down and Lois was miles and miles beneath me sort of as if I were looking at her from the wrong end of the telescope.”

The man who thought he was a plant is a 29-year-old Yale graduate. And he was looking at his wife through the wrong end of a telescope; his perceptions had been altered by a chemical called d-lysergic acid diethylamide.

‘Inner Space’: Largely unknown and untasted outside the researcher’s laboratory until recently, the hallucinogenic drug LSD has suddenly become a national obsession. Depending on who is doing the talking, it is an intellectual tool to explore psychic “inner space,” a new source of kicks for thrill seekers, the sacramental substance of a far-out mystical movement–or the latest and most frightening addition to the list of mind drugs now available in the pill society being fashioned by pharmacology. “Every age produces the thing it requires,” says psychiatrist Humphry Osmond of the New Jersey Neuro-Psychiatric Institute in Princeton. “This age requires ways of learning to develop its inner qualities.”

The new LSD subculture, for the moment at least, is mainly American and young. It has its own vocabulary: on college campuses, in New York’s Greenwich Village, Los Angeles’ Sunset Boulevard and San Francisco’s Haight-Ashbury District, the drug is called “acid” and its devotees “acid heads.” Users “turn on” and go on LSD “trips.” Some of the trips are contemplative affairs; but on others, hippies take off their clothes and turn on orgiastically. And as the young world turns on, the adult world–shocked and bewildered–turns off.•


I’m certain a drone carrying explosives will fly into the side of a very important U.S. building at some point in the foreseeable future. It needn’t be large to be deadly. Like 3D-printed guns, in the coming years these things will be cheap and ubiquitous. Drones are just one of the new global challenges of warfare, which is rapidly changing, with stateless ideological sects difficult to pinpoint and robots entering the scrum at an accelerating pace. The opening of David Sterman’s provocative New America essay “Will We Still Call It War?”:

When Army Chief of Staff General Ray Odierno says that today’s environment is the most uncertain in his 40 years in the army, it’s easy to see why. Wars are now less about land than ideology. Robots can kill.  A cold war with one enemy has given way to a world with myriad, inter-connected conflicts with no one the U.S. can call ally or enemy. Global warming has shifted the very nature of the environment upon which wars are fought.

Our increasingly complex conflict environment is part of what’s driving the contentious debate over the President’s proposed authorization for the use of military force (AUMF) against ISIS. How do we define our enemy, and the theatres of conflict, in a war that is metastasizing and changing every day? As Congress reviews the proposed authorization, it’s hard not to compare the present to the past – and to wonder about what the future holds. At New America’s Future of War Conference this week, Odierno’s lament helped frame the conversation: if so much has changed in his 40 years of service, what can we expect in the next 40 years?

First, there’s the spread of new technologies – like the proliferation of drones, combined with America’s deteriorating influence in the fields of drone technology and robotics. According to New America’s new World of Drones project, 85 countries have some form of militarized drone, three countries have used drones in combat, and more have considered it.

Dr. Missy Cummings, an associate professor at Duke and former Navy pilot, said the United States military has “lost the edge” in the field. Today, the Israelis lead the world in drone development, Amazon and Google lead the world in robotics, and her students can 3D print a drone in a weekend, she said. Cummings even “guaranteed” that U.S. forces would be struck by a 3D printed drone in the future. As other countries and even companies surpass or challenge the United States in the development of key technologies, the American capability to manage crises may decline.•


We’re dying people on a dying planet in a dying universe. Time itself will eventually collapse. That infuriates me. I’d like every beautiful person to live forever, and I want the same for most of the assholes as well. But, alas.

My unfair kneejerk reaction to cryonics enthusiasts is that they’re delusional, even a little selfish. But if you’re going to be selfish about something, shouldn’t it be lifespan?

One person who wasn’t selfish at all, just a dying 23-year-old neuroscientist who wasn’t ready to go, was Kim Suozzi. She dreamed of somehow continuing, because the alternative was so cruel and pointless. From Molly Lambert of Grantland:

Suozzi posted a video blog about her situation and canvassed Redditors for help fulfilling her dream of being cryonically preserved. “My prognosis looks pretty bleak at this point,” she wrote, “and though I am hoping to exceed the 6-10 month median survival, I have to prepare to die.” Suozzi’s interest in futurism was sparked by reading Ray Kurzweil’s The Age of Spiritual Machines in a cognitive science class at Truman State, which prompted her to also read Kurzweil’s The Singularity Is Near. Kurzweil’s books, beginning with his influential tome The Age of Intelligent Machines, forecast “the Singularity,” a hypothetical future event when artificial intelligence will surpass human intelligence. Kurzweil predicts 2045 as a soft date for that happening. Kurzweil is currently a director of engineering at Google.

Silicon Valley has become a hotbed for futurology, with adherents ranging from Elon Musk, whose company SpaceX has the long-term goal of colonizing Mars, to the founders of the nutrition substitute Soylent, whose winky slogan is “Free Your Body.” “Futurology” is an umbrella term that encompasses the beliefs of both kooky conspiratorial types with hilariously janky web domains and actual geniuses like world-renowned theoretical physicist Michio Kaku. Futurists share a common belief that the bleeding edge of science exists in a zone that might seem crackpot now but will prove prophetic later. Believers refer to such successes as splitting the atom as proof that all big scientific leaps were once considered impossible science fiction. Some credit sci-fi author H.G. Wells with founding the discipline because of his turn-of-the-20th-century predictions about the year 2000, some of which even came true. Futurology lumps together a wide array of disciplines, many of them related to the idea of transhumanism — the process by which humans will be integrated with AI through nanotechnology, cybernetic implants, and thought-controlled robotics. Some of the fields are in primitive stages, while others are moving along at a surprisingly rapid pace. The desire to overcome one’s meatbody and be uploaded into a permanent robotic avatar is part of transhumanism; “Free Your Body” could be the slogan. Merging with the machines seems a little like a Revenge of the Nerds fantasy about defeating the jocks by getting your superior intellect uploaded into a sweet new mecha. But not all futurology is optimistic — some predictions are for disaster. The hope of extending life is a central tenet, though.

In a sense, life extension is like a nonbeliever’s version of heaven, an atheist’s dream of eternal life facilitated by scientific innovation. To the faithful, death is just another disease that will eventually be overcome by the power of science and the intellectual capacity of the human mind.•


Technology has made a certain level of cinematic sophistication available to all, even terrorists. This lesson has clearly been processed by ISIS, which shoots its real-life snuff films to mirror the hard-R torture porn shown in multiplexes, aiming them at the youth quadrant, with sequels that seemingly never stop coming. From Jeffrey Fleishman in the Los Angeles Times:

The Islamic State’s production values have steadily improved since the network grew in Iraq and Syria; it now operates or has affiliates across North Africa and the Middle East. The group’s ranks have been bolstered by as many as several thousand recruits from Europe, which may be where the organization’s videographers learned their trade. The videos, including those showing the deaths of American, British and Japanese hostages, have been frequently released since last summer.

The most recent films unfold with almost surreal matter-of-factness, taking their time before death is carried out. Cameras pan and glance from different angles; anxiety builds. The executioners are masked and often dressed in black, including the militant who beheaded American hostage James Foley in August. In those videos and in the one in which 21 Coptic Christians were decapitated on the Libyan coast, the killers speak in English and relish in lurid exhibitionism.

The 22-minute video depicting the death of Jordanian pilot Lt. Moaz Kasasbeh, who was captured when his F-16 was shot down over Syria during a U.S.-led coalition bombing mission against Islamic State, was filmed amid war ruins. Militants dressed in fatigues and bracing Kalashnikovs stand guard. They seem as if regal sentinels in a perverted ideology to impose a primitive brand of Islamic law on what they see as a permissive and godless world.

Kasasbeh wanders bewildered down a hazy street that leads to a cage. The scene is interspersed with images showing the bodies of Syrians the Islamic State claims were killed by coalition missiles. Kasasbeh’s orange jumpsuit, reminiscent of those worn by suspected extremists held by the U.S. at Guantanamo Bay, appears soaked with accelerant. A short distance away, a militant holds up a torch and then touches it to the ground as fire — the camera lingers on wisps of white smoke — races toward the cage and Kasasbeh is engulfed.

“It’s horrific, but they know the power of storytelling and the importance of images,” said Robert Greenwald, president and founder of the Culver City-based Brave New Films, which has produced documentaries on the Iraq and Afghanistan wars. He added that the videos’ music, sound effects, camera angles and even costumes evoke suspense. “It really gives me pause to think about and to be concerned. It’s a level of sophistication that’s quite striking.”•


You might be soothed knowing that the mid-1960s’ fear that automation would cause widespread, imminent technological unemployment didn’t come to pass, but the worry may have been more premature than preposterous, more of a battle won in a war that’s ultimately unwinnable. In a riposte to a 1965 Fortune article written by Charles E. Silberman which derided the powers of computing, Edmund C. Berkeley, editor of Computers and Automation, argued that the future, with literate machines and driverless automobiles, would eventually arrive. We’re much closer to that tomorrow today. The opening:

In a 1965 issue of Fortune, Charles E. Silberman, in his article, “The Real News About Automation,” advances an interesting position. He states:

“Employment of manufacturing production workers has increased by one million in the last 3 1/2 years…This turn-around in blue-collar employment raises fundamental questions about the speed with which machines are replacing men…Automation has made substantially less headway in the United States than the literature on the subject suggests…No fully automated process exists for any major product in the U.S….Many people writing about automation…have grossly exaggerated the economic impact of automation…In their eagerness to demonstrate that the apocalypse is at hand, the new technocratic Jeremiahs…show a remarkable lack of interest in getting the details straight, and so have constructed elaborate theories on surprisingly shaky foundations…The view that computers are causing mass unemployment has gained currency largely because of a historical coincidence: the computer happened to come into widespread use in a period of sluggish economic growth and high unemployment…Full automation is far further in the future because ‘there is no substitute for the brewmaster’s nose’…Man’s versatility was never really appreciated until engineers and scientists tried to teach computers to read handwriting, recognize colors, translate foreign languages, or respond to vocal commands..We don’t have enough experience with automation to make any firm generalizations about how technology will change the structure of occupations…” and in essence he asserts that vast unemployment due to automation is not to be expected.

__________________

There are a number of important defects in Silberman’s argument, enough to make the whole argument unsound.

In the first place, Silberman makes a considerable point of the fact that he has investigated a number of situations where a large degree of automation was reported, and he has observed that a much smaller degree of automation was actually to be found there. For example he has found men still at work personally guiding the movement of engine blocks from one automated machine to another. From these instances he concludes that the threat of automation in producing unemployment has been grossly exaggerated.

Basically, this is the argument that because something has not happened yet, it is not going to happen. Of course, as soon as we express the argument in this form, it is obviously not true. I am reminded of what was being said about automatic computers in the early 1950’s by hardheaded business men: the machines would never be reliable enough or versatile enough to do any substantial quantity of useful business work.

Second, Silberman refers to man’s versatility, reading of handwriting, responding to vocal commands, etc. You will notice that he does not mention what would have been mentioned in this sentence if said some 15 years ago: “man’s uniquely human ability to think, to solve problems, to play games, to create”–because now it is abundantly clear that these abilities are being shared by the computer, the programmed automatic computer.

But the versatility area also of man’s capacities is rapidly being “threatened” by the computer, by such devices as the programmable optical reader, in which a computer applies clever programs to deciphering the precise nature of certain kinds of marks and thereby identifies them. A programmable film reader made by a firm in Cambridge, Mass., is able to read film at a speed 5000 times the rate that a human being can read it.

To assert that because of man’s versatility, the computer will not be able to compete with man is a silly argument, because there are no logical, scientific, or technological barriers to this accomplishment. Silberman asserts there will be a cost barrier: It may be many years before a computer can economically displace the human driver of a school bus at the rate of $4 an hour. But developments in microminiature, chemically-grown, circuits are so amazing, that we can look forward to the time when a programmed computer equal to the brains of most men can be produced for say $1000 apiece. Certainly there is nothing magical or supernatural about the brain of a man; and certainly once the process of chemically growing brains is understood, much better materials than protoplasm can be found for making them.

Third, even if “no fully automated process exists for any major product in the United States” at the present time, is there really very much difference between a process which used to require 100 men and now requires 5 or 3, compared with a process which used to require 100 men and now requires zero?•

 


I have far fewer concerns about Net Neutrality than I do about cable providers. We’re warned that innovation in the sector will be stymied now that throttling is illegal, but we seem to get electricity each day just fine. But even those who didn’t necessarily oppose the FCC’s decision can see some clouds in the commission’s bold call. Two such worried opinions follow.

____________________________

From Tim Harford at the Financial Times:

This kind of product sabotage is far older than the internet itself. The French engineer and economist Jules Dupuit wrote back in 1849 that third-class railway carriages had no roofs, not to save money but to “prevent the passengers who can pay the second-class fare from travelling third class”. Throttling, 19th-century style.

But imagine that a law was introduced stipulating “railway neutrality” – that all passengers must be treated equally. That might not mean a better deal for poorer passengers. We might hope that everyone would ride in comfort at third-class prices, and that is not impossible. But a train company with a monopoly might prefer to operate only the first-class carriages at first-class prices. Poorer passengers would get no service at all. Product sabotage is infuriating but the alternative – a monopolist who screws every customer equally – is not necessarily preferable.

Fast lanes and slow lanes are a symptom of this market power but the underlying cause is much more important. The US needs more internet service providers, and the obvious way to get them is to force cable companies to unbundle the “last mile” and lease it to new entrants.

Alas, in the celebrated statement announcing a defence of net neutrality, the FCC also specifically ruled out taking that pro-competitive step. The share prices of cable companies? They went up.•

____________________________

From Alex Pareene at Gawker:

Don’t get me wrong. Regulating broadband as a utility is (in my opinion) the correct policy. This is as close as Washington gets to a victory for the forces of “good.” I would just urge everyone to keep in mind that the forces of good in this instance won not because millions of people made their voices heard, but because the economic interests of a few giant corporations aligned with the position of those millions of people. And I say that not simply to be a killjoy (though I do love being a killjoy), but because if anything is to change, we mustn’t convince ourselves that actual victory for the masses is possible in this fundamentally broken system. Please don’t begin to believe that the American political establishment is anything but a corrupt puppet of oligarchy.

American politicians are responsive almost solely to the interests and desires of their rich constituents and interest groups that primarily represent big business. Casual observation of American politics over the last quarter-century or so should make that clear, but if you want supporting evidence, look to the research of Vanderbilt political scientist Larry Bartels, and Princeton’s Martin Gilens and Northwestern’s Benjamin Page. Gilens and Page’s conclusions are easily summed up: “economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while mass-based interest groups and average citizens have little or no independent influence.”

Political battles are won when the rich favor them. America’s rich have lately become rather progressive on certain social issues, and those issues have rather suddenly gone from political impossibilities to achievable dreams.•

 


The parallel story to yesterday’s post about an actual Hyperloop being constructed in California is that its setting is to be the utopic insta-city known as Quay Valley. Edens often end up biting, but perhaps some sustainable strategies will emerge from the planned development regardless of its outcome. From Anthony Cuthbertson at International Business Times:

In 2016, construction is to begin on one of the most ambitious engineering projects ever undertaken.

Ground will be broken on Quay Valley, a brand new city midway between Los Angeles and San Francisco that aims to be 100% solar-powered, entirely self-sustaining and connected by Hyperloop – the world’s most advanced transportation network.

Developed on 7,500 acres of private land straddling California’s Interstate 5 Freeway, Quay Valley is the brainchild of GROW Holdings (Green Renewable Organic and Water), which aspires to achieve what it calls “New Ruralism”.

This utopic vision for future living is described by GROW as “a model town for the 21st century” that can achieve complete sustainability using the latest technology in water preservation, renewable energy and organic farming.

“Citizens living there will basically not have an electricity bill,” Dirk Ahlborn, CEO of Hyperloop Transportation Technologies [HTT], told IBTimes UK.•


A lot of companies believe the Internet of Things won’t just rest in our pockets but will be the pockets themselves, that our apparel will quantify us, track us, commodify us. From Sara Germano at WSJ:

Under Armour Inc. has some out-there ideas for your clothes.

The athletic-gear company has been spending big to buy developers of apps to monitor personal fitness, aiming in the short term to sell more shirts and shoes. Longer term, Chief Executive Kevin Plank envisions a time when clothes themselves become the means to track movement and biorhythms.

“If we believe that our future is going to be defined by these hard pieces of glass or plastic that sit in our back pockets, you’re crazy,” Mr. Plank recently told investors. “It is going to convert into apparel.”

Under Armour sees its acquisition of fitness apps, including two deals announced this year, as establishing a beachhead within a community of people who want to be measured—a relationship that could pay dividends if connected clothing were to become a reality.

Futuristic gear has long appealed to Under Armour, though its track record is spotty. The company spent nearly $1 million developing high-tech racing suits for the 2014 U.S. Olympic speedskating team, but the garb was criticized and ultimately discarded by the skaters. A 2011 project to develop an electronic shirt that monitored biometrics evolved into just a heart-rate strap. The company then phased out the strap because production, which involved 13 different parties on five continents, was too difficult, Mr. Plank said.

From that experience, Under Armour has dropped any ambitions to develop hardware but it hasn’t given up on electronic clothes.•

The utopian dream of the Nakagin Capsule Tower, pod apartments built for Japanese salarymen bachelors in 1972, was over almost before the building was completed, and the tower is now in a state of disrepair. But with more people living single than ever before and urban real estate around the globe priced at a premium, micro living is more necessary than ever. From Franklyn Cater at NPR:

One kind of tiny community that many cities are saying yes to is micro apartments. A half-dozen buildings are now either built or in the works in the nation’s capital alone, and renters are snapping them up.

One popular building is the Harper, right in the middle of a bustling area of new restaurants and shops known as the U Street Corridor. The apartments, all between 350 and 450 square feet, aren’t formally called micro units by the property owners, Keener Management — the company calls them “studios” and “junior one-bedrooms.” But “micro” is the term of art that has taken hold in the real estate world for this kind of unit.

Julie Williams, 37, lives in a studio here — one of those 350 square foot spaces with a combined kitchen, bedroom and living area, roughly 11 by 13 feet. It also has a good-sized separate bathroom.

Williams, who works for the National Institutes of Health, says she pays $1,795 a month, including utilities. Williams saw her rent as a deal compared to neighboring buildings when she moved from a suburban condo – the efficiencies across the street, she says, start at $2,300. Now she reverse commutes to her suburban job.

“My social life now is a lot better,” she says. “Because I am single, I like knocking on my neighbors’ door and being like, ‘Hey Dericka. … Or she’ll knock on my door and be like, ‘I have a date, what should I wear?’ “

A key idea behind buildings like this is that people spend less time in their own apartments. There’s common space — think sharing economy, extra space when you need it. There’s a roof deck, a dining area that can be reserved, lounge with TV and Wi-Fi. This is where Julie Williams brings dates – not to her studio.•


The Space Race really started quite a bit before the success of Sputnik in 1957, with transistors at Bell Labs ten years earlier and WWII rocketeering before that. An example of technology wedded to space exploration prior to artificial satellites and moon landings can be seen in this brief article from the January 9, 1955 Brooklyn Daily Eagle.

Paul Krugman is continually taken to task for predicting in 1998 that the Internet would be no more important economically than the fax machine by 2005. Culturally, of course, this new medium has been a watershed event. But he had a point on some level: the Internet–and computers, more broadly–still disappoint from a productivity perspective. Either that or all conventional measurements are insufficient to gauge this new machine. At his Financial Times blog, Andrew McAfee, co-author with Erik Brynjolfsson of 2014’s wonderful The Second Machine Age, wonders about the confusing state of contemporary economics. An excerpt:

The economy’s behaviour is puzzling these days. No matter what you think is going on, there are some facts — important ones — that don’t fit your theory well at all, and/or some important things left unexplained.

For example, if you believe that technological progress is reshaping the economy (as Erik and I do) then you’ve got to explain why productivity growth is so low. As Larry Summers pointed out on the first panel, strong labour productivity growth is the first thing you’d expect to see if tech progress really were taking off and reshaping the economy, disrupting industries, hollowing out the middle class, and so on. So why has it been so weak for the past 10 years? Is it because of mismeasurement? William Baumol’s “Cost Disease” (the idea that all the job growth has come in manual, low-productivity sectors)? Or is it that recent tech progress is in fact economically unimpressive, as Robert Gordon and others believe?

If you believe that tech progress has not been that significant, however, you’ve got to explain why labor’s share of income is declining around the world.•


Dr. Eugenie Clark, an ichthyologist who specialized in sharks–even sleeping ones–just passed away at 92. She was not a fan of Jaws, the Spielberg blockbuster adapted from Peter Benchley’s novel. From her New York Times obituary by Robert D. McFadden:

For all her scientific achievements, Dr. Clark was also a figure of popular culture who used her books, lectures and expertise to promote the preservation of ecologically fragile shorelines, to oppose commercial exploitation of endangered species and to counteract misconceptions, especially about sharks.

She insisted that Jaws, the 1975 Steven Spielberg film based on a Peter Benchley novel, and its sequels inspired unreasonable fears of sharks as ferocious killers. Car accidents are far more numerous and terrible than shark attacks, she said in a 1982 PBS documentary, The Sharks.

She said at the time that only about 50 shark attacks on humans were reported annually and that only 10 were fatal, and that the great white shark portrayed in Jaws would attack only if provoked, while most of the world’s 350 shark species were not dangerous to people at all.

“When you see a shark underwater,” she said, “you should say, ‘How lucky I am to see this beautiful animal in his environment.’ ”•

_______________________________

“The big ones are the females.”


In a belated London Review of Books assessment of The Second Machine Age and Average Is Over, John Lanchester doesn’t really break new ground in considering Deep Learning and technological unemployment, but in his customarily lucid and impressive prose he crystallizes how quickly AI may remake our lives and labor in the coming decades. Two passages follow: The opening, in which he charts the course of how the power of a supercomputer ended up inside a child’s toy in a few short years; and a sequence about the way automation obviates workers and exacerbates income inequality.

__________________________________

In 1996, in response to the 1992 Russo-American moratorium on nuclear testing, the US government started a programme called the Accelerated Strategic Computing Initiative. The suspension of testing had created a need to be able to run complex computer simulations of how old weapons were ageing, for safety reasons, and also – it’s a dangerous world out there! – to design new weapons without breaching the terms of the moratorium. To do that, ASCI needed more computing power than could be delivered by any existing machine. Its response was to commission a computer called ASCI Red, designed to be the first supercomputer to process more than one teraflop. A ‘flop’ is a floating point operation, i.e. a calculation involving numbers which include decimal points (these are computationally much more demanding than calculations involving binary ones and zeros). A teraflop is a trillion such calculations per second. Once Red was up and running at full speed, by 1997, it really was a specimen. Its power was such that it could process 1.8 teraflops. That’s 18 followed by 11 zeros. Red continued to be the most powerful supercomputer in the world until about the end of 2000.

I was playing on Red only yesterday – I wasn’t really, but I did have a go on a machine that can process 1.8 teraflops. This Red equivalent is called the PS3: it was launched by Sony in 2005 and went on sale in 2006. Red was only a little smaller than a tennis court, used as much electricity as eight hundred houses, and cost $55 million. The PS3 fits underneath a television, runs off a normal power socket, and you can buy one for under two hundred quid. Within a decade, a computer able to process 1.8 teraflops went from being something that could only be made by the world’s richest government for purposes at the furthest reaches of computational possibility, to something a teenager could reasonably expect to find under the Christmas tree.

The force at work here is a principle known as Moore’s law. This isn’t really a law at all, but rather the extrapolation of an observation made by Gordon Moore, one of the founders of the computer chip company Intel. By 1965, Moore had noticed that silicon chips had for a number of years been getting more powerful, in relation to their price, at a remarkably consistent rate. He published a paper predicting that they would go on doing so ‘for at least ten years’. That might sound mild, but it was, as Erik Brynjolfsson and Andrew McAfee point out in their fascinating book, The Second Machine Age, actually a very bold statement, since it implied that by 1975, computer chips would be five hundred times more powerful for the same price. ‘Integrated circuits,’ Moore said, would ‘lead to such wonders as home computers – or at least terminals connected to a central computer – automatic controls for automobiles and personal portable communications equipment’. Right on all three. If anything he was too cautious.•
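
Worth pausing on the arithmetic Lanchester is gesturing at: a fixed doubling time compounds into enormous factors surprisingly quickly. Here is a rough sketch of the compounding (mine, not Lanchester’s, and purely illustrative):

```python
# Illustrative only: compounding a fixed doubling time, per Moore's observation.
def improvement_factor(years, doubling_time_years):
    """How much more powerful chips get if they double every `doubling_time_years`."""
    return 2 ** (years / doubling_time_years)

# Moore's 1965 extrapolation, roughly one doubling per year for a decade:
print(improvement_factor(9, 1))    # 512x: nine doublings, about the "five hundred times" quoted
print(improvement_factor(10, 1))   # 1024x: ten doublings
# The more familiar modern formulation, a doubling roughly every two years,
# compounds to about a million-fold over four decades:
print(f"{improvement_factor(40, 2):,.0f}x")  # 1,048,576x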

__________________________________

Note that in this future world, productivity will go up sharply. Productivity is the amount produced per worker per hour. It is the single most important number in determining whether a country is getting richer or poorer. GDP gets more attention, but is often misleading, since other things being equal, GDP goes up when the population goes up: you can have rising GDP and falling living standards if the population is growing. Productivity is a more accurate measure of trends in living standards – or at least, it used to be. In recent decades, however, productivity has become disconnected from pay. The typical worker’s income in the US has barely gone up since 1979, and has actually fallen since 1999, while her productivity has gone up in a nice straightish line. The amount of work done per worker has gone up, but pay hasn’t. This means that the proceeds of increased profitability are accruing to capital rather than to labour. The culprit is not clear, but Brynjolfsson and McAfee argue, persuasively, that the force to blame is increased automation.

That is a worrying trend. Imagine an economy in which the 0.1 per cent own the machines, the rest of the 1 per cent manage their operation, and the 99 per cent either do the remaining scraps of unautomatable work, or are unemployed. That is the world implied by developments in productivity and automation. It is Pikettyworld, in which capital is increasingly triumphant over labour. We get a glimpse of it in those quarterly numbers from Apple, about which my robot colleague wrote so evocatively. Apple’s quarter was the most profitable of any company in history: $74.6 billion in turnover, and $18 billion in profit. Tim Cook, the boss of Apple, said that these numbers are ‘hard to comprehend’. He’s right: it’s hard to process the fact that the company sold 34,000 iPhones every hour for three months. Bravo – though we should think about the trends implied in those figures. For the sake of argument, say that Apple’s achievement is annualised, so their whole year is as much of an improvement on the one before as that quarter was. That would give them $88.9 billion in profits. In 1960, the most profitable company in the world’s biggest economy was General Motors. In today’s money, GM made $7.6 billion that year. It also employed 600,000 people. Today’s most profitable company employs 92,600. So where 600,000 workers would once generate $7.6 billion in profit, now 92,600 generate $89.9 billion, an improvement in profitability per worker of 76.65 times. Remember, this is pure profit for the company’s owners, after all workers have been paid. Capital isn’t just winning against labour: there’s no contest. If it were a boxing match, the referee would stop the fight.•
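
The per-worker numbers in that last comparison are easy to check. A quick sketch using only the figures quoted above (note that the excerpt gives the annualized Apple profit as both $88.9 billion and $89.9 billion; the 76.65-times ratio corresponds to the latter):

```python
# Checking the GM-vs-Apple profit-per-worker comparison with the figures quoted above.
gm_profit_1960, gm_workers_1960 = 7.6e9, 600_000   # GM, 1960, inflation-adjusted per the excerpt
apple_profit, apple_workers = 89.9e9, 92_600       # Apple, annualized per the excerpt

gm_per_worker = gm_profit_1960 / gm_workers_1960   # roughly $12,700 of profit per employee
apple_per_worker = apple_profit / apple_workers    # roughly $971,000 of profit per employee

print(f"GM 1960: ${gm_per_worker:,.0f} per worker")
print(f"Apple:   ${apple_per_worker:,.0f} per worker")
print(f"Ratio:   {apple_per_worker / gm_per_worker:.2f}x")  # about 76.65x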


3D-printing pioneer Behrokh Khoshnevis is at odds with Ma Yihe, the CEO of Winsun, the Chinese company which last year printed 10 concrete houses in a single day; Khoshnevis feels his methods have been appropriated. Beyond this international squabble, the sector seems poised for a big future, with the ability to quickly build cities from scratch, turn out affordable housing for low-income dwellers, rebuild communities devastated by natural disasters and even erect space colonies. From Nicola Davison at the Guardian:

Khoshnevis, who is also working with Nasa on 3D-printed lunar structures, has no doubt that in the future, a large portion of cities will be printed. “I think in about five years you are going to see a lot of buildings built in this way,” he says.

He hopes the technology will help address a worldwide shortage of low-income housing. “I think it is a shame that at the dawn of the 21st century, about two billion people live in slums,” he says. “I think this technology is a good solution.”

He adds that 3D printing will encourage governments to build affordable homes because of savings in time and cost. A significant difference between traditional construction methods and 3D printing is efficiency. If in the future a London borough wished to build a public housing estate, for instance, they could hire a developer with a 3D printer. The printer would then be delivered to the site along with the construction material and architectural design on a flash drive. “They plug it in, hit a button and the buildings get built,” Khoshnevis says. “The nice thing about it is that we can build beautiful, dignified neighbourhoods – not cookie-cutter, box-like houses.”

Not all architects are convinced that 3D printing is good for architecture as a discipline.•


The decentralization of media in particular and technology in general is a threat to government, but the Internet of Things, while it has the potential to improve many aspects of life, can also hand leaders who wish to quell disquiet a powerful countermeasure. It’s a tension with no end in sight. From Ian Steadman at New Statesman:

Another classic example to cite here is Nest’s thermostat, which users can buy to replace their normal, boring one. It’s clever in that it learns from what you do to it – turn down the temperature at certain times of day, and on certain days of the week, and it’ll automatically build up a profile of your heating habits and adjust before you know you want to do it yourself. And it saves energy from learning not to heat empty homes! Put one of these in every home in the country and the environmental savings could be vast. What possible downside could there be?

Well, as author and digital rights activist Cory Doctorow explained to me when I interviewed him last year, imagine an Arab Spring-type situation in a country with very cold winters, universal Nest thermostat adoption and a dictator with no qualms about mass surveillance of web and mobile data communications. On the first day of a mass uprising the security services can stick fake mobile signal towers up around the public square of the capital and hoover up the unique identification addresses from the smartphones of every single protester there. (These towers exist, even here.) That night, as the temperature drops to its bitter coldest, every single protester finds their heating system remotely disabled. Hypothermia takes care of the dictator’s problem.

This is not science fiction. It’s entirely possible with existing technology, and only made unrealistic because that technology hasn’t reached universal rates of adoption. This is the upcoming Internet of Things, if we’re not careful.•


The first Hyperloop is slated to be built next year in California. It’s not to be a test track but a fully operational, though only five-mile, version of the nouveau transportation system designed by Elon Musk. From Alex Davies at Wired:

The Hyperloop, detailed by the SpaceX and Tesla Motors CEO in a 57-page alpha white paper in August 2013, is a transportation network of above-ground tubes that would span hundreds of miles. Thanks to extremely low air pressure inside those tubes, capsules filled with people zip through them at near supersonic speeds.

The idea is to build a five-mile track in Quay Valley, a planned community (itself a grandiose idea) that will be built from scratch on 7,500 acres of land around Interstate 5, midway between San Francisco and Los Angeles. Construction of the hyperloop will be paid for with $100 million Hyperloop Transportation Technologies expects to raise through a direct public offering in the third quarter of this year.

They’re serious about this, too. It’s not a proof of concept, or a scale model. It’s the real deal. “It’s not a test track,” CEO Dirk Ahlborn says, even if five miles is well short of the 400-mile stretch of tubes Musk envisions carrying people between northern and southern California in half an hour. Anyone can buy a ticket and climb aboard, but they won’t see anything approaching 800 mph. Getting up to that mark requires about 100 miles of track, Ahlborn says, and “speed is not really what we want to test here.”

Instead, this first prototype will test and tweak practical elements like station setup, boarding procedures, and pod design.•


Despite some instances of revisionist history, Bill Gates knew early on exactly how disruptive the Internet would be, and now he feels the same about Weak AI. And he’s not alone. The question is how quickly technological unemployment will spread. Could more than 30% of all jobs vanish within a decade without new ones to replace them? Or will it be a slower fade to black for the remnants of the Industrial Age? From Timothy Aeppel at the WSJ:

Microsoft co-founder Bill Gates, speaking in Washington last year, said automation threatens all manner of workers, from drivers to waiters to nurses. “I don’t think people have that in their mental model,” he said.

Robot employment

Gartner Inc., the technology research firm, has predicted a third of all jobs will be lost to automation within a decade. And within two decades, economists at Oxford University forecast nearly half of the current jobs will be performed with machine technology. 

“When I was in grad school, you knew if you worried about technology, you were viewed as a dummy—because it always helps people,” MIT economist David Autor said. But rather than killing jobs indiscriminately, Mr. Autor’s research found automation commandeering such middle-class work as clerk and bookkeeper, while creating jobs at the high- and low-end of the market.

This is one reason the labor market has polarized and wages have stagnated over the past 15 years, Mr. Autor said. The concern among economists shouldn’t be machines soon replacing humans, he said: “The real problem I see with automation is that it’s contributed to growing inequality.”•


Computer scientists long labored (and, ultimately, successfully) to make machines superior at backgammon or chess, but they had to teach that AI the rules and moves, designing the brute force of their strikes. Not so with Google’s new game-playing computer, which can muster a mean game of Space Invaders or Breakout with no coaching. It’s Deep Learning currently focused on retro pastimes, but soon enough it will be serious business. From Rebecca Jacobson at PBS Newshour:

This isn’t the first game-playing A.I. program. IBM supercomputer Deep Blue defeated world chess champion Garry Kasparov in 1997. In 2011, an artificial intelligence computer system named Watson won a game of Jeopardy against champions Ken Jennings and Brad Rutter.

Watson and Deep Blue were great achievements, but those computers were loaded with all the chess moves and trivia knowledge they could handle, [Demis] Hassabis said in a news conference Tuesday. Essentially, they were trained, he explained.

But in this experiment, designers didn’t tell DQN how to win the games. They didn’t even tell it how to play or what the rules were, Hassabis said.

“(Deep Q-network) learns how to play from the ground up,” Hassabis said. “The idea is that these types of systems are more human-like in the way they learn. Our brains make models that allow us to learn and navigate the world. That’s exactly the type of system we’re trying to design here.”•
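
For anyone curious what “learning from the ground up” means mechanically: at the core of DQN is Q-learning, in which the system is told only the score, never the rules, and gradually figures out which actions pay off. DeepMind’s version reads raw screen pixels with a deep convolutional network and adds tricks like experience replay; the toy sketch below is just the classical tabular update on a made-up five-state “game,” purely for illustration, and none of it is DeepMind’s code:

```python
import random
from collections import defaultdict

# Toy stand-in for a game: states 0..4, start in the middle, action 1 moves right,
# action 0 moves left. Reaching state 4 scores +1, falling to state 0 scores -1.
def step(state, action):
    nxt = state + (1 if action == 1 else -1)
    if nxt <= 0:
        return 0, -1.0, True      # game over, bad ending
    if nxt >= 4:
        return 4, 1.0, True       # game over, good ending
    return nxt, 0.0, False

alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
Q = defaultdict(float)                  # Q[(state, action)] -> estimated long-run score

for episode in range(2000):
    state, done = 2, False
    while not done:
        # Epsilon-greedy: mostly act on current estimates, occasionally explore.
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = max([0, 1], key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted best next value.
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in (0, 1))
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# After training, the learned greedy policy should choose "right" in every interior state.
print({s: max([0, 1], key=lambda a: Q[(s, a)]) for s in (1, 2, 3)})
```

The difference with Atari, of course, is that the state is a screen full of pixels rather than a tidy integer, which is why a deep network has to stand in for the lookup table.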


From the April 13, 1943 Brooklyn Daily Eagle:

Pasadena, Cal. — California Institute of Technology today reported successful transfusions of cow and horse blood to human beings.

So far, however, only one transfusion per patient is possible. The second may prove fatal.

Considerable progress is being made, according to Dr. Dan H. Campbell of the department of immunochemistry. Substitution of animal for human blood in transfusions may not be so far off, he said.•


Steve Jobs banned typewriters from Apple offices in 1981, no matter how advanced they were, and the NYPD may be very belatedly launching a similar initiative. It’s just stunning to realize that such old-school machines are still a staple in the city’s policing. From Azi Paybarah at Capital New York:

The New York Police Department would be forced to phase out its use of typewriters, under the terms of a bill being introduced tomorrow by Councilman Danny Dromm of Queens. …

Mayor Bill de Blasio and police commissioner Bill Bratton have made upgrading NYPD equipment a key part of their reforms of the department. In addition to giving every police officer an official email address for the first time, they are also equipping officers with smartphones and tablets, and the NYPD is aggressively using social media platforms like Twitter, Facebook and Instagram.•

 


I haven’t yet read Naomi Klein’s book, This Changes Everything: Capitalism vs. the Climate, the one that Elizabeth Kolbert took to task for not being bold enough. (Kolbert’s own volume on the topic, The Sixth Extinction, was one of my favorite books of 2014.) In an often-contentious Spiegel interview conducted by Klaus Brinkbäumer, Klein contends that capitalism and ecological sanity are incompatible and calls out supposedly green captains of industry like Michael Bloomberg and Richard Branson. An excerpt:

Spiegel:

The US and China finally agreed on an initial climate deal in 2014.

Naomi Klein:

Which is, of course, a good thing. But anything in the deal that could become painful won’t come into effect until Obama is out of office. Still, what has changed is that Obama said: “Our citizens are marching. We can’t ignore that.” The mass movements are important; they are having an impact. But to push our leaders to where they need to go, they need to grow even stronger.

Spiegel:

What should their goal be?

Naomi Klein:

Over the past 20 years, the extreme right, the complete freedom of oil companies and the freedom of the super wealthy 1 percent of society have become the political standard. We need to shift America’s political center from the right fringe back to where it belongs, the real center.

Spiegel:

Ms. Klein, that’s nonsense, because it’s illusory. You’re thinking far too broadly. If you want to first eliminate capitalism before coming up with a plan to save the climate, you know yourself that this won’t happen.

Naomi Klein:

Look, if you want to get depressed, there are plenty of reasons to do so. But you’re still wrong, because the fact is that focusing on supposedly achievable incremental changes like carbon trading and changing light bulbs has failed miserably. Part of that is because in most countries, the environmental movement remained elite, technocratic and supposedly politically neutral for two-and-a-half decades. We are seeing the result of this today: It has taken us in the wrong direction. Emissions are rising and climate change is here. Second, in the US, all the major legal and social transformations of the last 150 years were a consequence of mass social movements, be they for women, against slavery or for civil rights. We need this strength again, and quickly, because the cause of climate change is the political and economic system itself. The approach that you have is too technocratic and small.

Spiegel:

If you attempt to solve a specific problem by overturning the entire societal order, you won’t solve it. That’s a utopian fantasy.

Naomi Klein:

Not if societal order is the root of the problem. Viewed from another perspective, we’re literally swimming in examples of small solutions: There are green technologies, local laws, bilateral treaties and CO2 taxation. Why don’t we have all that at a global level?

Spiegel:

You’re saying that all the small steps — green technologies and CO2 taxation and the eco-behavior of individuals — are meaningless?

Naomi Klein:

No. We should all do what we can, of course. But we can’t delude ourselves that it’s enough. What I’m saying is that the small steps will remain too small if they don’t become a mass movement. We need an economic and political transformation, one based on stronger communities, sustainable jobs, greater regulation and a departure from this obsession with growth. That’s the good news. We have a real opportunity to solve many problems at once.•

