Science/Tech

You are currently browsing the archive for the Science/Tech category.

Neil Irwin of the “Upshot” blog at the New York Times suggests that American wage stagnation and the lag in hiring are being driven not by market conditions but by a mentality. An excerpt: 

“So any employer with a job opening should have no problem hiring. If anything, the ratio of openings to hiring should be lower than it was in the mid-2000s, not higher.

Here’s a theory to try to make sense of the disconnect: During the recession, employers got spoiled. When unemployment was near 10 percent, talented workers were lined up outside their door. The workers they did have were terrified of losing their jobs. If you put out word that you had an opening, you could fill the job almost instantly. That’s why the ratio of job openings to hires fell so low in 2009.

As the economy has gotten better the last five years, employers have had more and more job openings, but have been sorely reluctant to accept that it’s not 2009 anymore in terms of what workers they can hire and at what wage.

Yes, unemployment is still elevated, but workers aren’t in nearly as desperate a position as they were then. So to get the kind of talented people they want, employers are going to have to pay more (or offer better benefits or working conditions) than they would have not that long ago.”


Long before Silicon Valley, Victorians gave the future a name, recognizing electricity and, more broadly, technology as transformative, disruptive and decentralizing. We’re still borrowing from their lexicon and ideas, though we need to be writing the next tomorrow’s narratives today. From “Future Perfect,” Iwan Rhys Morus’ excellent Aeon essay:

“For the Victorians, the future, as terra incognita, was ripe for exploration (and colonisation). For someone like me – who grew up reading the science fiction of Robert Heinlein and watching Star Trek – this makes looking at how the Victorians imagined us today just as interesting as looking at the way our imagined futures work now. Just as they invented the future, the Victorians also invented the way we continue to talk about the future. Their prophets created stories about the world to come that blended technoscientific fact with fiction. When we listen to Elon Musk describing his hyperloop high-speed transportation system, or his plans to colonise Mars, we’re listening to a view of the future put together according to a Victorian rulebook. Built into this ‘futurism’ is the Victorian discovery that societies and their technologies evolve together: from this perspective, technology just is social progress.

The assumption was plainly shared by everyone around the table when, in November 1889, the Marquess of Salisbury, the Conservative prime minister of Great Britain, stood up at the Institution of Electrical Engineers’ annual dinner to deliver a speech. He set out a blueprint for an electrical future that pictured technological and social transformation hand in hand. He reminded his fellow banqueteers how the telegraph had already changed the world by working on ‘the moral and intellectual nature and action of mankind’. By making global communication immediate, the telegraph had made everyone part of the global power game. It had ‘assembled all mankind upon one great plane, where they can see everything that is done, and hear everything that is said, and judge of every policy that is pursued at the very moment those events take place’. Styling the telegraph as the great leveller was quite common among the Victorians, though it’s particularly interesting to see it echoed by a Tory prime minister.

Salisbury’s electrical future went further than that, though. He argued that the spread of electrical power systems would profoundly transform the way people lived and worked, just as massive urbanisation was the result of steam technology.”


The future usually arrives wearing the clothes of the past, but occasionally we truly and seriously experience the shock of the new. On that topic: The 1965 Life magazine piece “Will Man Direct His Own Evolution?” is a fun but extremely overwrought essay by Albert Rosenfeld about the nature of identity in a time when humans would be made by design, composed of temporary parts. Like a lot of things written in the ’60s about science and society, it’s informed by an undercurrent of anxiety about the changes beginning to affect the nuclear family. An excerpt:

Even you and I–in 1965, already here and beyond the reach of potential modification–could live to face curious and unfamiliar problems in identity as a result of man’s increasing ability to control his own mortality after birth. As organ transplants and artificial body parts become even more available it is not totally absurd to envision any one of us walking around one day with, say, a plastic cornea, a few metal bones and Dacron arteries, with donated glands, kidney and liver from some other person, from an animal, from an organ bank, or even an assembly line, with an artificial heart, and computerized electronic devices to substitute for muscular, neural or metabolic functions that may have gone wrong. It has been suggested–though it will almost certainly not happen in our lifetime–that brains, too, might be replaceable, either by a brain transplanted from someone else, by a new one grown in tissue culture, or an electronic or mechanical one of some sort. ‘What,’ asks Dr. Lederberg, ‘is the moral, legal or psychiatric identity of an artificial chimera?’

Dr. Seymour Kety, an outstanding psychiatric authority now with the National Institute of Health, points out that fairly radical personality changes already have been wrought by existing techniques like brainwashing, electroshock therapy and prefrontal lobotomy, without raising serious questions of identity. But would it be the same if alien parts and substances were substituted for the person’s own, resulting in a new biochemistry and a new personality with new tastes, new talents, new political views–perhaps even a different memory of different experiences? Might such a man’s wife decide she no longer recognized him as her husband and that he was, in fact, not? Or might he decide that his old home, job and family situation were not to his liking and feel free to chuck the whole setup that had been quite congenial to the old person?

Not that acute problems of identity need await the day when wholesale replacement of vital organs is a reality. Very small changes in the brain could result in astounding metamorphoses. Scientists who specialize in the electrical probing of the human brain have, in the past few years, been exploring a small segment of the brain’s limbic system called the amygdala–and discovering that it is the seat of many of our basic passions and drives, including the drives that lead to uncontrollable sexual extremes such as satyriasis and nymphomania. 

Suppose, at a time that may be surprisingly near at hand, the police were to trap Mr. X, a vicious rapist whose crimes had terrorized the women of a neighborhood for months. Instead of packing him off to jail, they send him in for brain surgery. The surgeon delicately readjusts the distorted amygdala, and the patient turns into a gentle soul with a sweet, loving disposition. He is clearly a stranger to the man who was wheeled into the operating room. Is he the same man, really? Is he responsible for the crimes that he–or that other person–committed? Can he be punished? Should he go free?

As time goes on, it may be necessary to declare, without the occurrence of death, that Mr. X has ceased to exist and that Mr. Y has begun to be. This would be a metaphorical kind of death and rebirth, but quite real psychologically–and thus, perhaps, legally.•

Like much of the pre-Internet recording-industry infrastructure, the Columbia House Music Club, a once-popular method of bulk-purchasing music through snail mail, no longer exists, having departed this world before iTunes’ unfeeling gaze, as blank and pitiless as the sun. Your penny or paper dollar will no longer secure you a dozen records or tapes, nor must you experience the buyer’s remorse of one who reflexively purchases media without heeding the fine print, which reveals that the relationship, as the Carpenters would say, had only just begun.

Of course, music pilfering didn’t start in our digital times with Napster, and Columbia was a prime target for those who loved a system that could be gamed. Via the excellent Longreads, here’s the opening of “The Rise and Fall of the Columbia House Record Club — and How We Learned to Steal Music,” a 2011 Phoenix article by Daniel Brockman and Jason W. Smith:

On June 29, 2011, the last remnant of what was once Columbia House — the mightiest mail-order record club company that ever existed — quietly shuttered for good. Other defunct facets of the 20th-century music business have been properly eulogized, but it seems that nary a tear was shed for the record club. Perhaps no one noticed its demise. After all, by the end, Columbia House was no longer Columbia House; it had folded into its main competitor and become an online-only entity years before.

A more likely explanation, though, is that a new generation of music fans who had never known a world without the Internet couldn’t grasp the marvel that was the record club in its heyday. From roughly 1955 until 2000, getting music for free meant taping a penny to a paper card and mailing it off for 12 free records — along with membership and the promise of future purchasing.

The allure of the record club was simple: you put almost nothing down, signed a simple piece of paper, picked out some records, and voila! — a stack of vinyl arrived at your doorstep. By 1963, Columbia House was the flagship of the record-club armada, with 24 million records shipped. By 1994, they had shipped more than a billion records, accounted for 15 percent of all CD sales, and had become a $500-million-a-year behemoth that employed thousands at its Terre Haute, Indiana, manufacturing and shipping facility.

Of course, most of the record clubs’ two million customers failed to read the fine print, obligating them to purchase a certain number of monthly selections at exorbitant prices and even more exorbitant shipping costs. At the same time, consumers plotted to sign up multiple accounts under assumed names, in order to keep getting those 12-for-a-penny deals as often as possible. Record clubs may have introduced several generations of America’s youth to the concept of collection agencies — and the concept of stealing music, decades before the advent of the Internet.•

 


In 1958, Disney played large-scale urban planner, imagining the world as an interconnected mototopia. Cantilevered skyways and transcontinental motorways and highway escalators, anyone? Nothing so fantastical was necessary, but we should have retrofitted highways and roads to be smarter, cleaner and safer long before driverless cars were even in the conversation; we never had the ingenuity or political will to do so.

That last 5% of perfecting autonomous vehicles may be more difficult than the first 95%, and driverless options will likely continue to be introduced incrementally rather than all at once, but if such a system is 100% realized, there will be all manner of ramifications. In a post on his blog, Brad Templeton, a consultant to Google’s driverless-car project, looks at the possible outcomes in such a brave new world. An excerpt:

“When I talk about robocars, I often get quite opposite reactions:

  • Americans, in particular, will never give up car ownership! You can pry the bent steering wheel from my cold, dead hands.
  • I can’t see why anybody would own a car if there were fast robotaxi service!
  • Surely human drivers will be banned from the roads before too long.

I predict neither extreme will be true. I predict the market will offer all options to the public, and several options will be very popular. I am not even sure which will be the most popular.

  1. Many people will stick to buying and driving classic, manually driven cars. The newer versions of these cars will have fancy ADAS systems that make them much harder to crash, and their accident levels will be lower.
  2. Many will buy a robocar for their near-exclusive use. It will park near where it drops them off and always be ready. It will keep their stuff in the trunk.
  3. People who live and work in an area with robotaxi service will give up car ownership, and hire for all their needs, using a wide variety of vehicles.
  4. Some people will purchase a robocar mostly for their use, but will hire it out when they know they are not likely to use it, allowing them to own a better car. They will make rarer use of robotaxi services to cover specialty trips or those times when they hired it out and ended up needing it. Their stuff will stay in a special locker in the car.”


An impediment to automation may be “robotic” humans willing to work for wages so low that it’s not cost-effective to replace them. From a peek inside a sprawling distribution center by Matt King in the Atlantic:

“Susan and her co-workers appeared in good spirits as the manager introduced them by name and told us how long they had been working at the company. About half of the workers had a mental or physical disability, a result of the company’s ‘inclusion’ program which mirrored similar efforts at other major retailers. In a news segment about a DC in South Carolina, one disabled worker said hers was ‘the coolest job in the world.’

These programs are viewed as leading examples of combined corporate and social success, but that success may be short-sighted. Pickers and low-skill jobs of the sort represent a pain point for DCs and the e-commerce executives who are managing their evolution. The jobs appear simple (one Amazon executive referred to the workers as like ‘robots in human form’), but the tasks are difficult to automate at scale: ‘Because products vary so much in size and shape and because of the way they sit on shelves, robotic manipulators still can’t beat real arms and hands,’ explains Erico Guizzo on Spectrum, the blog for the Institute of Electrical and Electronics Engineers (IEEE).

Unlike Susan and her co-workers, who were salaried and long-time employees of the company, a growing number of ‘pickers’ at DCs across the country are hired through staffing agencies and classified as ‘non-permanent’ or ‘temporary.’ This means no health care coverage or benefits, pay that’s usually barely above the minimum wage, and employment that can be voided at a whim when the workers are no longer needed.

This tenuous labor arrangement is partly the result of an honest fluctuation in the demand for these jobs: The biggest influx of DC workers occurs just before the holiday season, when online retailers conduct a majority of their annual business. But like retail jobs, the arrangement is also an acknowledgement of the underlying economic reality: The jobs are utterly low-skill, and there exists a large supply of unemployed Americans willing to do the work.

‘In a way, because low-wage jobs are so cheap, we haven’t seen as much automation as you could,’ Joseph Foudy, a professor of economics at NYU’s Stern School of Business, told me.”


We’ve been able to feed millions of images into social networks for “free,” armies of servers our seeming supplicants, but with facial-recognition software coming of age, the bill is nearly due. Will the surprising acceptance of surveillance online translate to the physical world? From Paul Rubens at the BBC:

“Imagine walking into a shop you’ve never been in before, to be greeted by name by a sales assistant you don’t know.

If you’re David Beckham or Lily Allen you may be used to this type of VIP treatment, but if your fame is more limited, or even non-existent then you might find this attention rather disconcerting.

Despite this, thanks to facial recognition software you don’t need to be a celebrity for sales assistants to know your name the moment you enter a shop.

That’s because companies such as Japanese technology giant NEC and FaceFirst, a California-based company, offer systems that use cameras placed at the entrances to shops to identify people as they come in.

If your face fits

When important existing or potential customers are spotted, a text message can be sent to appropriate sales staff to ensure they provide personal attention.

‘Someone could approach you and give you a cappuccino when you arrive, and then show you the things they think you will be interested in buying,’ says Joel Rosenkrantz, FaceFirst’s chief executive.

Before a system such as FaceFirst’s can be put into operation, it has to be loaded up with photos. So an obvious question to ask is where would they come from?”


It wasn’t a commercial triumph like the organ named for him, but Laurens Hammond’s “Teleview” projection system was a critical triumph in early 3D films. The set-up was installed in Manhattan’s Selwyn Theater in the early 1920s, and moviegoers were treated to screenings of The Man From Mars, a stereoscopic film made especially for Teleview, which was shown on a large screen and on individual viewing devices attached at each seat. It apparently looked pretty great. Alas, the equipment and installation were costly, and no other cinemas adopted the technology. An article about the apparatus, from the December 17, 1922 Brooklyn Daily Eagle, follows.


I’ll use the graph below, from a post by Andrew Sullivan at the Dish, as possible proof of my contention that although police body-cameras may not instantly bring about a higher degree of justice, the images will affect public consciousness, which may in turn be brought to bear on race and policing.


Andrew McAfee, co-author with Erik Brynjolfsson of The Second Machine Age, believes that Weak AI will destabilize employment for decades, but he doesn’t think species-threatening Artificial Intelligence is just around the bend. From his most recent Financial Times blog post:

“AI does appear to be taking off: after decades of achingly slow progress, computers have in the past few years demonstrated superhuman ability, from recognising street signs in pictures and diagnosing cancer to discerning human emotions and playing video games. So how far off is the demon?

In all probability, a long, long way away; so long, in fact, that the current alarmism is at best needless and at worst counterproductive. To see why this is, an analogy to biology is helpful.

It was clear for a long time that important characteristics of living things (everything from the colour of pea plant flowers to the speed of racehorses) were passed down from parents to their offspring, and that selective breeding could shape these characteristics. Biologists hypothesised that units labelled ‘genes’ were the agents of this inheritance, but no one knew what genes looked like or how they operated. This mystery was solved in 1953 when James Watson and Francis Crick published their paper describing the double helix structure of the DNA molecule. This discovery shifted biology, giving scientists almost infinitely greater clarity about which questions to ask and which lines of inquiry to pursue.

The field of AI is at least one ‘Watson and Crick moment’ away from being able to create a full artificial mind (in other words, an entity that does everything our brain does). As the neuroscientist Gary Marcus explains: ‘We know that there must be some lawful relation between assemblies of neurons and the elements of thought, but we are currently at a loss to describe those laws.’ We also do not have any clear idea how a human child is able to know so much about the world — that is a cat, that is a chair — after being exposed to so few examples. We do not know exactly what common sense is, and it is fiendishly hard to reduce to a set of rules or logical statements. The list goes on and on, to the point that it feels like we are many Watson and Crick moments away from anything we need to worry about.”


China has quietly surpassed the U.S. this year as the world’s largest economic power, and that’s not a situation likely to reverse itself anytime soon, even if that nation should suffer a large-scale financial downturn. But what is the significance of America being number two? From Joseph Stiglitz at Vanity Fair:

Now China is the world’s No. 1 economic power. Why should we care? On one level, we actually shouldn’t. The world economy is not a zero-sum game, where China’s growth must necessarily come at the expense of ours. In fact, its growth is complementary to ours. If it grows faster, it will buy more of our goods, and we will prosper. There has always, to be sure, been a little hype in such claims—just ask workers who have lost their manufacturing jobs to China. But that reality has as much to do with our own economic policies at home as it does with the rise of some other country.

On another level, the emergence of China into the top spot matters a great deal, and we need to be aware of the implications.

First, as noted, America’s real strength lies in its soft power—the example it provides to others and the influence of its ideas, including ideas about economic and political life. The rise of China to No. 1 brings new prominence to that country’s political and economic model—and to its own forms of soft power. The rise of China also shines a harsh spotlight on the American model. That model has not been delivering for large portions of its own population. The typical American family is worse off than it was a quarter-century ago, adjusted for inflation; the proportion of people in poverty has increased. China, too, is marked by high levels of inequality, but its economy has been doing some good for most of its citizens. China moved some 500 million people out of poverty during the same period that saw America’s middle class enter a period of stagnation. An economic model that doesn’t serve a majority of its citizens is not going to provide a role model for others to emulate. America should see the rise of China as a wake-up call to put our own house in order.

Second, if we ponder the rise of China and then take actions based on the idea that the world economy is indeed a zero-sum game—and that we therefore need to boost our share and reduce China’s—we will erode our soft power even further. This would be exactly the wrong kind of wake-up call. If we see China’s gains as coming at our expense, we will strive for “containment,” taking steps designed to limit China’s influence. These actions will ultimately prove futile, but will nonetheless undermine confidence in the U.S. and its position of leadership. U.S. foreign policy has repeatedly fallen into this trap.•


The whole world is a city, or becoming one, we’ve been told repeatedly, but a new Economist report pushes back at the idea, arguing that China, India and Brazil, three ascendant powers, are embracing the sprawl. Measures must be taken to ensure that the environmental costs of non-density are minimized. The opening:

“IN THE West, suburbs could hardly be less fashionable. Singers and film-makers lampoon them as the haunts of bored teenagers and desperate housewives. Ferguson, Missouri, torched by its residents following the police shooting of an unarmed black teenager, epitomises the failure of many American suburbs. Mayors like boasting about their downtown trams or metrosexual loft dwellers, not their suburbs.

But the planet as a whole is fast becoming suburban. In the emerging world almost every metropolis is growing in size faster than in population. Having bought their Gucci handbags and Volkswagens, the new Asian middle class is buying living space, resulting in colossal sprawl. Many of the new suburbs are high-rise, though still car-oriented; others are straight clones of American suburbs (take a look at Orange County, outside Beijing). What should governments do about it?

The space race

Until a decade or two ago, the centres of many Western cities were emptying while their edges were spreading. This was not for the reasons normally cited. Neither the car nor the motorway caused suburban sprawl, although they sped it up: cities were spreading before either came along. Nor was the flight to the suburbs caused by racism. Whites fled inner-city neighbourhoods that were becoming black, but they also fled ones that were not. Planning and zoning rules encouraged sprawl, as did tax breaks for home ownership—but cities spread regardless of these. The real cause was mass affluence. As people grew richer, they demanded more privacy and space. Only a few could afford that in city centres; the rest moved out.

The same process is now occurring in the developing world, but much more quickly.”

The standing desk, a truly bad idea, is not likely to be the furniture of tomorrow’s office. The Dutch firm RAAAF has come up with an alternative proposal that’s even battier. It’s ergonomics run amok. From “The Weirdest Proposal Yet for the ‘Office of the Future,’” a Wired piece by Margaret Rhodes:

“The designers are especially interested in supported standing, which standing desks don’t offer. Supported standing, like upright leaning, can engage the muscles—hopefully enough to prevent the drop in fat-burning enzymes that occurs during long periods of sitting—without tiring out the employee’s legs and lower back quite so much. The maze-like series of angled and tapered frames create an infinite number of leaning spots, for workers of any height. There are no fixed desks, so employees might find it natural to roam around and be active.

That feature is also one of the obvious impracticalities of ‘The End of Sitting.’ Without desks, how do staffers keep track of supplies, notes, or work documents? Without offices or conference rooms, how can people have meetings that don’t disrupt everyone else’s concentration? ‘The End of Sitting’ is both an art installation and an experiment, so it’s not actually concerned with answering those questions. Instead, Rietveld says this is ‘about showing a different way of thinking.’”

______________________________

“Sitting kills”:


From Kit Buchan at the Guardian, a little more about the Lowe’s robotic shopping assistant, OSHbot, one realized idea from the chain store’s Innovation Labs, and one which won’t be replacing human workers, not yet at least:

“According to [Innovation Lab’s Executive Director Kyle] Nel, OSHbot is the product of an extraordinary innovation scheme in which Lowe’s Innovation Labs ask published science-fiction writers to produce stories predicting futuristic scenarios for the store. Lowe’s then seek out what Nel calls ‘uncommon partners’ to help make the stories reality; in OSHbot’s case, the trendy Silicon Valley learning hub Singularity University and the startup robotics firm Fellow Robots.

OSHbot is a 4ft-something, pear-shaped character; limbless, with nothing but a vague green glow for a face, and a screen slanted in front like a starched pinny. ‘It’s basically a roving kiosk; we definitely didn’t want it to have arms or anything like that,’ says Nel. ‘But there’s still lots to figure out, for instance: what voice should the robot have? Should it be male, should it be female? There are so many things we can’t know until we try it.’

Nel is quick to clarify that OSHbot is not a replacement for human beings – rather it is there to ‘augment [the] store associates.'”


Ralph H. Baer, who just passed away, began dreaming of designing games for TV sets in 1951, but it wasn’t until 15 years later that he started to fully flesh out the idea, eventually creating the first home video-game system, the Odyssey. From his New York Times obituary by Douglas Martin:

Flash back to the sultry late summer of 1966: Mr. Baer is sitting on a step outside the Port Authority Bus Terminal in Manhattan waiting for a colleague. By profession, he is an engineer overseeing 500 employees at a military contractor. Today, a vision has gripped him, and he begins scribbling furiously on a yellow legal pad with a No. 2 pencil.

The result was a detailed four-page outline for a “game box” that would allow people to play board, action, sports and other games on almost any American television set. An intrigued boss gave him $2,000 for research and $500 for materials and assigned two men to work with him. For all three, as they plowed through prototype after prototype in a secret workshop, the project became an obsession.

In March 1971, Mr. Baer and his employer, Sanders Associates in Nashua, N.H., filed for the first video game patent, which was granted in April 1973 as Patent No. 3,728,480. It made an extraordinarily large claim to a legal monopoly for any product that included a domestic television with circuits capable of producing and controlling dots on a screen.

Sanders Associates licensed its system to Magnavox, which began selling it as Odyssey in the summer of 1972 as the first home video game console. It sold 130,000 units the first year.

Odyssey consisted of a master control unit containing all the electronic gear, two player control units that directed players on the TV screen, and a set of electronic program cards, each of which supported a different game. Plastic overlays that clung to the screen to supply color were included. To supplement the electronic action, a deck of playing cards, poker chips and a pair of dice were included.

But the guts of the device were what mattered: 40 transistors and 40 diodes. That hardware ran everything. Odyssey, often called the first home computing device, had no software.

Several months after Odyssey hit the market, Atari came out with the first arcade video game, Pong. Though Pong became better known than Odyssey and was in some ways more agile, Sanders and Magnavox immediately saw it as an infringement on their patent.•


In a piece at the Los Angeles Review of Books about Tyler Cowen’s Average Is Over, a meditation on meritocracy run amok, Guy Patrick Cunningham compares tomorrow’s potentially technologically divided society, a sci-fi-ish dystopia few people would find acceptable, to life in the Middle Ages. An excerpt:

“Though Cowen doesn’t see it, the future he lays out seems rife with obvious, intrinsic structural inequalities that will make it very hard for anyone born outside the elite to actually show enough ‘merit’ to rise into it. And when he breezily asserts, ‘The more that the high earners pull in, the more people will compete to serve them, sometimes for high wages, and sometimes for low wages,’ and that, ‘making high earners feel better in just about every part of their lives will be a major source of job growth in the future […] Better about the world. Better about themselves. Better about what they have achieved,’ it becomes hard not to see this as a new form of aristocracy — one where people born with certain advantages are able to leverage them even further than today’s wealthy. Certainly, a smart, capable aristocracy, one theoretically open to talented outsiders, but an aristocracy all the same.

Cowen is careful to note that this system ‘is not necessarily a good and just way for an economy to run,’ but he certainly sees it as a given. Interestingly, he is also keen to emphasize the autonomy of the individual in the hyper-meritocracy. This isn’t itself surprising. But Cowen’s efforts to square the system he anticipates with humanistic ideas about individual agency fall flat. When he defends the possibility of building third-world style slums in the United States, he insists, ‘No one is being forced to live in these places […] I might prefer to live there if my income was low enough.’ Cowen essentially defines choice down to the absence of force. But this is meaningless — after all, no one chooses to live in a slum, unless the alternative is homelessness. Choice only matters when there are real alternatives to pick from. When Cowen compares a hyper-meritocratic society to the Middle Ages, he does so merely to point out that it is possible for a deeply unequal society to remain stable over a long period of time. But the comparison brings to mind another thought instead — that the values that underlie hyper-meritocracy are as un-humanist as those of the Medieval period.”


Virtual Reality software developer Tony Parisi discusses at Medium how the technology–like all technologies–can be a tool or a weapon, depending on who’s wielding it. An excerpt:

Question:

What does the future of VR look like?

Tony Parisi:

Maybe we can help visualize climate change and figure out what to do about it. We can certainly teach better. And if we can teach better, then we can understand better. If we can simulate better, maybe we can understand other cultures, get a better sense of history, all those things are possible and going to be made better with VR if done well. Then, we can really help the world.

But it’s not going to solve everything: all of the problems we have as a planet or society. Not everything will be better in VR. I believe VR is like any of these other technological innovations. I believe it’s value neutral — it’s as good or bad as the people harnessing it as a technology, communications, and storytelling platform — and can ultimately be used for good or ill. I think we’re going to see abuses of it, surely. I think we’re going to see over-exuberance with what it can do. But that will all be tempered over time, and eventually the laws of the market and consumer attention will just shake it out and we will see VR wins in certain segments — for example, housing and real estate, retail, and travel all have phenomenal potential in VR.


Computer pioneer Clive Sinclair has been predicting since the 1980s that self-designing intelligent machines will definitely be the doom of us, but he’s not letting it ruin his day. Che sera sera, you carbon-based beings. As you were. From Leo Kelion at the BBC:

“His ZX Spectrum computers were in large part responsible for creating a generation of programmers back in the 1980s, when the machines and their clones became best-sellers in the UK, Russia, and elsewhere.

At the time, he forecast that software run on silicon was destined to end ‘the long monopoly’ of carbon-based organisms being the most intelligent life on Earth.

So it seemed worth asking him what he made of Prof Stephen Hawking’s recent warning that artificial intelligence could spell the end of the human race.

‘Once you start to make machines that are rivalling and surpassing humans with intelligence it’s going to be very difficult for us to survive – I agree with him entirely,’ Sir Clive remarks.

‘I don’t necessarily think it’s a bad thing. It’s just an inevitability.’

So, should the human race start taking precautions?

‘I don’t think there’s much they can do,’ he responds. ‘But it’s not imminent and I can’t go round worrying about it.’

It marks a somewhat more relaxed view than his 1984 prediction that it would be ‘decades, not centuries’ in which computers ‘capable of their own design’ would rise.

‘In principle, it could be stopped,’ he warned at the time. ‘There will be those who try, but it will happen nonetheless. The lid of Pandora’s box is starting to open.'”


In a post for the “Upshot” section of the New York Times, economist Tyler Cowen suggests a variety of ways technology may begin to reverse the income inequality it has lately helped grow. Many of the ideas are modest and incremental, but there’s one giant one: The rising fortunes of emerging powers like China may eventually also help enrich Americans when such nations lose interest in making knockoff Apple products and create original companies as innovative as Apple. An excerpt:

“A final set of forces to reverse growing inequality stem from the emerging economies, most of all China. Perhaps we are living in a temporary intermediate period when America and many other developed nations bear a lot of the costs of Chinese economic development without yet getting many of the potential benefits. For instance, China and other emerging nations are already rich enough to bid up commodity prices and large enough to drive down the wages of a lot of American middle-class workers, especially in manufacturing. Yet while these emerging economies are keeping down the costs of manufactured goods for American consumers, they are not yet innovative enough to send us many fantastic new products, the way that the United States sends a stream of new products to British or French consumers, to their benefit. 

That state of affairs will probably end. Over the next few decades, we can expect China, India and other emerging nations to supply more innovations to the global economy, including to the United States. This shouldn’t be a cause for alarm. It will lead to many good things.

Since the emerging economies are relatively poor, many of these innovations may benefit relatively low-income Americans.”


Batteries, based on chemical reactions, are immune to Moore’s Law, but there’s certainly room for great improvement, and Elon Musk is going all in on the devices as a way to make EVs more affordable. If he’s successful with his Gigafactory, the ramifications will go far beyond cars. Tesla batteries are already being repurposed by homeowners who’ve converted to solar, and we’re just at the beginning. From Mark Chediak at Bloomberg:

“Here’s why something as basic as a battery both thrills and terrifies the U.S. utility industry.

At a sagebrush-strewn industrial park outside of Reno, Nevada, bulldozers are clearing dirt for Tesla Motors Inc.’s battery factory, projected to be the world’s largest.

Tesla’s founder, Elon Musk, sees the $5 billion facility as a key step toward making electric cars more affordable, while ending reliance on oil and reducing greenhouse gas emissions. At first blush, the push toward more electric cars looks to be positive for utilities struggling with stagnant sales from energy conservation and slow economic growth.

Yet Musk’s so-called gigafactory may soon become an existential threat to the 100-year-old utility business model. The facility will also churn out stationary battery packs that can be paired with rooftop solar panels to store power. Already, a second company led by Musk, SolarCity Corp., is packaging solar panels and batteries to power California homes and companies including Wal-Mart Stores Inc.

‘The mortal threat that ever cheaper on-site renewables pose’ comes from systems that include storage, said Amory Lovins, co-founder of the Rocky Mountain Institute, a Snowmass, Colorado-based energy consultant. ‘That is an unregulated product you can buy at Home Depot that leaves the old business model with no place to hide.'”


  1. Did the millions of Americans newly receiving health insurance via the Affordable Care Act create well-paying jobs?
  2. Did the sanctions against Putin cause countries to buy products from the U.S. that they normally got from Russia, leading to our companies hiring more workers?


The most chilling words I heard all year were spoken by theoretical physicist David Kaplan near the conclusion of Mark A. Levinson’s documentary, Particle Fever, which focuses on the “awakening” and implementation of the Large Hadron Collider at CERN:

Super Symmetry could still be true, but it would have to be a very strange version of the theory. And if it’s the Multiverse, well, other universes would be amazing, of course, but it could also mean no other new particles discovered, and then a Higgs with a mass of 125 is right at a critical point for the fate of our universe. Without any other new particles, that Higgs is unstable, it’s temporary. Since the Higgs holds everything together, if the Higgs goes, everything goes. It’s amazing that the Higgs, the center of the standard model, the thing we’ve all been looking for, could also be the thing that destroys everything. The creator and the destroyer.

But, we could discover new particles and then none of that would be true.•


Something significant happened between the mind-boggling grand jury decisions in the Michael Brown and Eric Garner cases, and that was President Obama determining that police-officer body cameras needed to be deployed across the country. After the grand jury in the brutal Garner homicide, which was captured fully on tape, brought back no indictment, there were pundits who said this was proof that Obama’s initiative wouldn’t help in any meaningful way.

Perhaps. But Eric Garner’s contorted face and cries for mercy are not going to go away thanks to that footage, and those images and sounds have convinced a large number of conservative politicians and editorialists to take an unusual stand, calling on Eric Holder and Congress to further investigate the murder of a victim who will remind us of injustice on an infinite loop. From Ed O’Keefe at the Washington Post:

“House Speaker John A. Boehner (R-Ohio) said Thursday that he still has ‘unanswered questions’ about the recent deaths of Michael Brown and Eric Garner, two African Americans killed during confrontations with police officers.

‘Clearly both of these are serious tragedies that we’ve seen in our society,’ he said in response to a question at his weekly press conference. ‘I think the American people want to understand more of what the facts were. There are a lot of unanswered questions that Americans have, and frankly I have.’

Boehner said he wouldn’t rule out having House committees hold hearings into the matter. ‘I do think that the American people deserve more answers about what really happened here and was our system of justice handled properly,’ he said.

Boehner’s comments came a few hours after Rep. Cathy McMorris Rodgers (R-Wash.), the fourth-ranking House Republican, said she ‘absolutely’ thinks the House should hold hearings into the matter.”


Two years before piloting the flight that killed him and the great comic Will Rogers, aviator Wiley Post completed a ’round-the-world trip that was solo save for a helpful robot, an autopilot device fashioned by Sperry. It wasn’t like he could sleep comfortably while his “co-pilot” took over the controls, but it did allow Post to journey the long distance navigator-less. An article from the July 15, 1933 Brooklyn Daily Eagle, published just prior to the mission, follows.

