Science/Tech


For some reason, people long for their cars to fly. In the 1930s it was believed that Spanish aviator Juan de la Cierva had made the dream come true, although, in a dark coincidence, he died in an air accident in England just as his roadable flying machine was proving a success in Washington, D.C.

In 1920, the man from Murcia invented the Autogiro, a single-rotor-type aircraft which led several years later to his creation of an articulated rotor that made possible the world’s first flight of a stable rotary-wing aircraft. The American government licensed the technology and eventually turned out a working prototype of a flying car, hoping that suburbanites would soon soar to work from their backyards directly to helipads atop city office buildings. If they needed to nose down and drive on a highway, that would be possible.

The test was deemed a success on road and in sky (even though the machine was clearly more plane than automobile). Sadly, almost simultaneously with the triumphant run, Cierva was killed while a passenger aboard a commercial Dutch airliner that crashed in England.

The aerobile never became available to the public, probably owing to safety and cost concerns. One enterprising Miami hotel, however, purchased a roadable Autogiro and used it to fly guests to the beach, further enticing them by employing celebrity pilot Jim Ray, who had handled the D.C. test run.

An excerpt from an article about the overlapping test and tragedy, published in the December 13, 1936 Brooklyn Daily Eagle.

  • The D.C. demonstration of the Autogiro:


At Esquire, John H. Richardson profiles the brains behind Siri, Adam Cheyer and Chris Brigham, as they attempt (with other AI geniuses) to create a voice-based interface named Viv, which would “think” for itself and seamlessly bind together all of the disparate elements of modern computing, a move which, if successful, could fundamentally change information gathering and the entire media landscape. It might unleash entrepreneurial energy and, you know, enable mass technological unemployment. There’s plenty of hyperbole surrounding the project (and in the article), though Siri’s success lends credence to the possibility of the outsize ambition being realized. An excerpt:

BRIGHAM CAME UP WITH the beautiful idea, which makes its own perfect sense. Cheyer was always the visionary. When they met at SRI International twelve years ago, Cheyer was already a chief scientist distilling the work of four hundred researchers from the Defense Department’s legendary CALO project, trying to teach computers to talk—really talk, not just answer a bunch of preprogrammed questions. Kittlaus came along a few years later, a former cell-phone executive looking for the next big idea at a time when the traditional phone companies were saying the iPhone would be a disaster—only phone companies can make phones. An adventurer given to jumping out of planes and grueling five-hour sessions of martial arts, he saw the possibilities instantly—cell phones were getting smarter every day, mobile computing was the future, and nobody wanted to thumb-type on a tiny little keyboard. Why not teach a phone to talk?

Brigham, at the time just an undergrad student randomly assigned to Cheyer’s staff, looked like a surfer, but he had a Matrix-like ability to see the green numbers scroll, offhandedly solving in a single day a problem that had stumped one of Cheyer’s senior scientists for months. Soon he took responsibility for the computer architecture that made their ideas possible. But he also had a rule-breaking streak—maybe it was all those weekends he spent picking rocks out of his family’s horse pasture, or the time his father shot him in the ass with a BB gun to illustrate the dangers of carrying a weapon in such a careless fashion. He admits, with some embarrassment, now thirty-one and the father of a young daughter, that he got kicked out of summer school for hacking the high school computer system to send topless shots to all the printers. After the SRI team and its brilliant idea were bought by Steve Jobs and he made it famous—Siri, the first talking phone, a commercial and pop-culture phenomenon that now appears in five hundred million different devices—Brigham sparked international news for teaching Siri to answer a notorious question: “Where do I dump a body?” (Swamps, reservoirs, metal foundries, dumps, mines.)

He couldn’t resist the Terminator jokes, either. When the Siri team was coming up with an ad campaign, joking about a series of taglines that went from “Periodically Human” to “Practically Human” to “Positively Human,” he said the last one should be “Kill All Humans.”

In the fall of 2012, after they all quit Apple, the three men gathered at Kittlaus’s house in Chicago to brainstorm, throwing out their wildest ideas. What about nanotechnology? Could they develop an operating system to run at the atomic level? Or maybe just a silly wireless thing that plugged into your ear and told you everything you needed to know in a meeting like this, including the names and loved ones of everyone you met?

Then Brigham took them back to Cheyer’s original vision. There was a compromise in the ontology, he said. Siri talked only to a few limited functions, like the map, the datebook, and Google. All the imitators, from the outright copies like Google Now and Microsoft’s Cortana to a host of more-focused applications with names like Amazon Echo, Samsung S Voice, Evi, and Maluuba, followed the same principle. The problem was you had to code everything. You had to tell the computer what to think. Linking a single function to Siri took months of expensive computer science. You had to anticipate all the possibilities and account for nearly infinite outcomes. If you tried to open that up to the world, other people would just come along and write new rules and everything would get snarled in the inevitable conflicts of competing agendas—just like life. Even the famous supercomputers that beat Kasparov and won Jeopardy! follow those principles. That was the “pain point,” the place where everything stops: There were too many rules.

So what if they just wrote rules on how to solve rules?

The idea was audacious. They would be creating a DNA, not a biology, forcing the program to think for itself.
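To make the excerpt’s contrast concrete: below is a minimal, purely illustrative sketch (in Python, with invented capability names; nothing here is Viv’s or Siri’s actual code) of the difference between hand-wiring every function to an assistant and registering small capabilities that a naive planner can chain together at runtime.

```python
# Illustrative only: invented capabilities, not Viv's or Siri's real code.

# Siri-era approach, as the excerpt describes it: every function is
# linked to the assistant by hand-written rules.
HARD_CODED = {
    "weather": lambda city: f"forecast for {city}",
    "calendar": lambda date: f"events on {date}",
}

# Viv-style idea: declare what each small capability needs and produces,
# then let a planner chain them instead of anticipating every combination.
CAPABILITIES = [
    {"name": "geocode",  "needs": {"city"},   "makes": {"coords"}},
    {"name": "forecast", "needs": {"coords"}, "makes": {"weather"}},
    {"name": "flights",  "needs": {"city"},   "makes": {"fares"}},
]

def plan(have: set, want: str) -> list:
    """Naive forward-chaining: fire any capability whose inputs are
    satisfied, re-scan, and stop once the wanted fact is producible."""
    steps, facts = [], set(have)
    progress = True
    while progress and want not in facts:
        progress = False
        for cap in CAPABILITIES:
            if cap["needs"] <= facts and not cap["makes"] <= facts:
                steps.append(cap["name"])
                facts |= cap["makes"]
                progress = True
                break  # re-scan from the top with the new fact
    return steps if want in facts else []

print(plan({"city"}, "weather"))  # -> ['geocode', 'forecast']
```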


The instability of the Argentine banking system (and the expense of dealing with it) has led a growing number of citizens to embark on a bold experiment using Bitcoin to sidestep institutions, a gambit which would probably not be attempted with the same zest in countries with relative financial stability. But if the service proves to be a large-scale success in Argentina, will it influence practices in nations heretofore resistant to cryptocurrency? And will a massive failure doom the decentralized system?

In a New York Times Magazine article adapted from Nathaniel Popper’s forthcoming Digital Gold: Bitcoin and the Inside Story of the Misfits and Millionaires Trying to Reinvent Money, the author writes of this new dynamic in the South American republic, which is enabled by itinerant digital money-changers like Dante Castiglione. An excerpt:

That afternoon, a plump 48-year-old musician was one of several customers to drop by the rented room. A German customer had paid the musician in Bitcoin for some freelance compositions, and the musician needed to turn them into dollars. Castiglione joked about the corruption of Argentine politics as he peeled off five $100 bills, which he was trading for a little more than 1.5 Bitcoins, and gave them to his client. The musician did not hand over anything in return; before showing up, he had transferred the Bitcoins — in essence, digital tokens that exist only as entries in a digital ledger — from his Bitcoin address to Castiglione’s. Had the German client instead sent euros to a bank in Argentina, the musician would have been required to fill out a form to receive payment and, as a result of the country’s currency controls, sacrificed roughly 30 percent of his earnings to change his euros into pesos. Bitcoin makes it easier to move money the other way too. The day before, the owner of a small manufacturing company bought $20,000 worth of Bitcoin from Castiglione in order to get his money to the United States, where he needed to pay a vendor, a transaction far easier and less expensive than moving funds through Argentine banks.

The last client to visit the office that Friday was Alberto Vega, a stout 37-year-old in a neatly cut suit who heads the Argentine offices of the American Bitcoin company BitPay, whose technology enables merchants to accept Bitcoin payments. Like other BitPay employees — there is a staff of six in Buenos Aires — Vega receives his entire salary in Bitcoin and lives outside the traditional financial system. He orders what he can from websites that accept Bitcoin and goes to Castiglione when he needs cash. On this occasion, he needed 10,000 pesos to pay a roofer who was working on his house.

Commerce of this sort has proved useful enough to Argentines that Castiglione has made a living buying and selling Bitcoin for the last year and a half. “We are trying to give a service,” he said.

That mundane service — harnessing Bitcoin’s workaday utility — is what so excites some investors and entrepreneurs about Argentina. Banks everywhere hold money and move it around; they help make it possible for money to function as both a store of value and a medium of exchange. But thanks in large part to their country’s history of financial instability, a small yet growing number of Argentines are now using Bitcoin instead to fill those roles. They keep the currency in their Bitcoin “wallets,” digital accounts they access with a password, and use its network when they need to send or spend money, because even with Castiglione or one of his competitors serving as middlemen between the traditional economy and the Bitcoin marketplace, Bitcoin can be cheaper and more convenient than Argentina’s financial establishment. In effect, Argentines are conducting an ambitious experiment, one that threatens ultimately to spread to the United States and disrupt some of the most basic services its banks have to offer.
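For anyone puzzled by the phrase “entries in a digital ledger,” here is a toy sketch of the idea, stripped of everything that makes Bitcoin actually work (no keys, signatures, mining, or network consensus; the address names are invented):

```python
# Toy ledger: a "payment" is just two balancing entries. Nothing physical
# changes hands, which is why the musician handed over nothing in the room.
ledger = {"musician_addr": 1.5, "castiglione_addr": 0.0}

def transfer(ledger: dict, sender: str, receiver: str, amount: float) -> None:
    if ledger.get(sender, 0.0) < amount:
        raise ValueError("insufficient funds")
    ledger[sender] -= amount
    ledger[receiver] = ledger.get(receiver, 0.0) + amount

transfer(ledger, "musician_addr", "castiglione_addr", 1.5)
print(ledger)  # {'musician_addr': 0.0, 'castiglione_addr': 1.5}
```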


For more than a century, scientists have tried to coax cheap energy from the sun. In 1955, University of California “solar scientists” envisioned an abundance of healthy food and clean energy for Earthlings and space colonists alike, all at next to no cost. It never quite happened.

But the sun’s power is there for the taking, and it seems we’re much closer to stealing fire from the gods. From David Roberts at Vox:

Obviously, predicting the far future is a mug’s game if you take it too seriously. This post is more about storytelling, a way of seeing the present through a different lens, than pure prognostication. But storytelling is important. And insofar as one can feel confident about far-future predictions, I feel pretty good about this one.

Here it is: solar photovoltaic (PV) power is eventually going to dominate global energy. The question is not if, but when. Maybe it will happen radically faster than anyone expects — say, by 2050. Or maybe it won’t be until the year 3000, or later. But it’ll happen. …

One often hears energy experts talk about “distributed energy,” but insofar as that refers to electricity, it usually just means smaller gas or wind turbines scattered about — except in the case of solar PV. Only solar PV has the potential to eventually diffuse into infrastructure, to become a pervasive and unremarkable feature of the built environment.

That will make for a far, far more resilient energy system than today’s grid, which can be brought down by cascading failures emanating from a single point of vulnerability, a single line or substation. An intelligent grid in which everyone is always producing, consuming, and sharing energy at once cannot be crippled by the failure of one or a small group of nodes or lines. It simply routes around them.

Will solar PV provide enough energy? Right now, you couldn’t power a city like New York fully on solar PV even if you covered every square inch of it with panels. The question is whether that will still be true in 30 or 50 years. What efficiencies and innovations might be unlocked when solar cells and energy storage become more efficient and ubiquitous? When the entire city is harvesting and sharing energy? When today’s centralized, hub-and-spoke electricity grid has evolved into a self-healing, many-to-many energy web? When energy works like a real market, built on millions of real-time microtransactions among energy peers, rather than the crude statist model of today’s utilities?
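The “routes around them” claim is, at bottom, a property of any many-to-many mesh. A toy sketch, using an invented four-node grid, of how such a network finds an alternate delivery path when a node fails:

```python
from collections import deque

# Toy mesh grid: nodes are homes/substations, edges are lines.
GRID = {
    "a": {"b", "c"}, "b": {"a", "d"}, "c": {"a", "d"}, "d": {"b", "c"},
}

def route(grid: dict, src: str, dst: str, down: set = frozenset()) -> list:
    """Breadth-first search for a delivery path that avoids failed nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in grid[path[-1]] - seen - set(down):
            seen.add(nxt)
            queue.append(path + [nxt])
    return []

print(route(GRID, "a", "d"))              # ['a', 'b', 'd']
print(route(GRID, "a", "d", down={"b"}))  # b fails: ['a', 'c', 'd']
```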


The machines created by the American military to combat the enemy eventually manifest themselves stateside, whether that means the Internet (initiated by the Department of Defense in response to Sputnik’s success) or drones (perfected during our wrongheaded war in Iraq). These tools of hot and cold wars, when they begin to be used in earnest domestically, can be a blessing or a curse, and in the case of drones, they’re both.

Drones are incredibly useful tools, and they’re dangerous, able to deliver a bomb as readily as a breakfast burrito. While that means we should probably brace ourselves and start working immediately on safeguards, as much as that’s possible, it doesn’t mean the Federal Aviation Administration should strangle a fledgling industry. Even without federal approval for commercial drones, terrorists can do their damage quite well. They needn’t wait for regulations.

One other point about this increasingly automated society: we have to accept that certain jobs (delivery people, messengers, some hospital workers, bridge inspectors, wait staff, etc.) will largely disappear with the emergence of pilotless gizmos. How do we replace these positions with new ones? How do jobless people pay for those breakfast burritos that land softly on their doorsteps one fine morning?

In a Foreign Affairs piece, drone entrepreneur Gretchen West unsurprisingly chides the FAA for its sluggishness in addressing the governance of these new machines. The opening:

In the beginning, drones were almost exclusively the province of militaries. At first little more than remote-controlled model planes used in the World War I era, military drones advanced steadily over the decades, eventually becoming sophisticated tools that could surveil battlefield enemies from the sky. Today, the terms “drone” and “unmanned aircraft system” denote a vehicle that navigates through the air from point A to point B and is either remotely controlled or flies autonomously. While they vary in size and shape, such vehicles all feature a communications link, intelligent software, sensors or cameras, a power source, and a method of mobility (usually propellers).

Inevitably, drone technology spilled out from the military and into other parts of the public sector. In the United States over the last decade, federal researchers turned to drones for monitoring weather and land, the Department of Homeland Security started relying on them to keep an eye on borders, and police adopted them for search-and-rescue missions. Then came everyday consumers, who took to parks on the weekend with their often homemade creations. Outside government, drones were mostly flown for fun, not profit.

Until recently, that is. In the last several years, a new group of actors has come to embrace drones: private companies. Inspired by the technological progress made in the military and in the massive hobby market, these newcomers have realized that in everything from farming to bridge inspection, drones offer a dramatic improvement over business as usual. The potential for the commercial use of drones is nearly limitless. But in the United States, the growing drone industry faces a major regulatory obstacle: the Federal Aviation Administration (FAA) has issued overly restrictive rules that threaten to kill a promising new technology in the cradle.

SERIOUS BUSINESS

As more and more actors have invested in drone research and development, the vehicles themselves have become cheaper, simpler, and safer. Perhaps even more exciting are the changes in software, which has advanced at lightning speed, getting smarter and more reliable by the day: now, for example, users can fly drones without any guidance and set up so-called geo-fences to fix boundaries at certain altitudes or around certain areas. The economics are now attractive enough that many industries are looking to drones to perform work traditionally done by humans—or never before done at all.•
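The geo-fence idea is simple enough to sketch. A deliberately simplified illustration (flat-plane geometry and invented coordinates; real flight controllers use geodetic math and certified software):

```python
import math

# Simplified geo-fence check of the kind the article describes: a maximum
# altitude plus circular keep-out zones.
MAX_ALTITUDE_M = 120.0              # e.g. a regulatory ceiling
KEEP_OUT = [((0.0, 0.0), 500.0)]    # (center x/y in meters, radius in meters)

def inside_fence(x: float, y: float, alt: float) -> bool:
    """True if the drone is under the ceiling and clear of every zone."""
    if alt > MAX_ALTITUDE_M:
        return False
    return all(math.hypot(x - cx, y - cy) > r for (cx, cy), r in KEEP_OUT)

print(inside_fence(800.0, 0.0, 100.0))  # True: clear of zone, under ceiling
print(inside_fence(100.0, 0.0, 100.0))  # False: inside the keep-out radius
```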


A moonshot launched from an outhouse is a pretty apt description of the cratered Hewlett-Packard’s unlikely attempt to reimagine the computer. A semi-secret project called “the Machine” may be the company’s best shot–albeit a long shot–at recreating itself and our most-used tools all at once, expanding memory many times over with the aid of a fundamentally new operating system. From Tom Simonite at MIT Technology Review:

In the midst of this potentially existential crisis, HP Enterprise is working on a risky research project in hopes of driving a remarkable comeback. Nearly three-quarters of the people in HP’s research division are now dedicated to a single project: a powerful new kind of computer known as “the Machine.” It would fundamentally redesign the way computers function, making them simpler and more powerful. If it works, the project could dramatically upgrade everything from servers to smartphones—and save HP itself.

“People are going to be able to solve problems they can’t solve today,” says Martin Fink, HP’s chief technology officer and the instigator of the project. The Machine would give companies the power to tackle data sets many times larger and more complex than those they can handle today, he says, and perform existing analyses perhaps hundreds of times faster. That could lead to leaps forward in all kinds of areas where analyzing information is important, such as genomic medicine, where faster gene-sequencing machines are producing a glut of new data. The Machine will require far less electricity than existing computers, says Fink, making it possible to slash the large energy bills run up by the warehouses of computers behind Internet services. HP’s new model for computing is also intended to apply to smaller gadgets, letting laptops and phones last much longer on a single charge.

It would be surprising for any company to reinvent the basic design of computers, but especially for HP to do it. It cut research jobs as part of downsizing efforts a decade ago and spends much less on research and development than its competitors: $3.4 billion in 2014, 3 percent of revenue. In comparison, IBM spent $5.4 billion—6 percent of revenue—and has a much longer tradition of the kind of basic research in physics and computer science that creating the new type of computer will require. For Fink’s Machine dream to be fully realized, HP’s engineers need to create systems of lasers that fit inside fingertip-size computer chips, invent a new kind of operating system, and perfect an electronic device for storing data that has never before been used in computers.

Pulling it off would be a virtuoso feat of both computer and corporate engineering.•


Killing just got easier, as DARPA reports it has made great strides with “guided bullets,” which allow a novice to hit a long-range moving target every time. You will no longer murder the wrong person, just the ones you intend to shoot. No ammo wasted.

Video from February tests of the Extreme Accuracy Tasked Ordnance (EXACTO) program.

We may have already reached peak car in America (and in many other countries). Some of the next automobiles will likely aim for disruption the way ridesharing has, whether they’re EV community cars or driverless. Certainly urban planners want our cities to be less clogged and choked by them.

In “End of the Car Age,” a Guardian article, Stephen Moss smartly analyzes the likely shrinking role of automobiles in major urban centers. The writer sees a challenging but necessary transportation revolution approaching, with mobile information playing a large part in what comes next. The future may not be Masdar City, but it won’t be the Manhattan we’ve long known, either. The opening:

Gilles Vesco calls it the “new mobility”. It’s a vision of cities in which residents no longer rely on their cars but on public transport, shared cars and bikes and, above all, on real-time data on their smartphones. He anticipates a revolution which will transform not just transport but the cities themselves. “The goal is to rebalance the public space and create a city for people,” he says. “There will be less pollution, less noise, less stress; it will be a more walkable city.”

Vesco, the politician responsible for sustainable transport in Lyon, played a leading role in introducing the city’s Vélo’v bike-sharing scheme a decade ago. It has since been replicated in cities all over the world. Now, though, he is convinced that digital technology has changed the rules of the game, and will make possible the move away from cars that was unimaginable when Vélo’v launched in May 2005. “Digital information is the fuel of mobility,” he says. “Some transport sociologists say that information about mobility is 50% of mobility. The car will become an accessory to the smartphone.”

Vesco is nothing if not an evangelist. “Sharing is the new paradigm of urban mobility. Tomorrow, you will judge a city according to what it is adding to sharing. The more that we have people sharing transportation modes, public space, information and new services, the more attractive the city will be.”•


The bottom fell out of the commercial-TV economic model, so the new sets aren’t content just to sell you soap; they also want to eavesdrop to learn precisely what brand you prefer and the exact moment you feel dirty. You will be scrubbed.

It’s sort of like how Google has its helpful algorithms scanning your Gmail for keywords to match to advertising. And not only are your media preferences recorded (anonymously, supposedly) by the new televisions, but even your conversations may be monitored. From Dennis Romero at LA Weekly:

Your television could be recording your most intimate moments. 

Some people might actually be into that. This is L.A., after all.

But local state Assemblyman Mike Gatto says it just isn’t right. He and the Assembly Committee on Privacy and Consumer Protection have introduced a bill that would require “manufacturers to ensure their television’s voice-recognition feature cannot be enabled without the consumer’s knowledge or consent,” according to his office.

Last week the committee voted 11-0 in favor of the proposal. But is this really a problem, you ask? We asked Gatto the same question.

Not necessarily yet is the answer. But the lawmaker argues that we need to get ahead of this Big Brother initiative before it gets all up in our bedrooms.

“Nobody would doubt the bedroom is a place where we have a tradition of privacy,” he said. “You can imagine a future where they know your sex life.”

Samsung’s newest smart TVs take voice commands. Cool. But the sets’ privacy policy spelled out some Orwellian shenanigans:

Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.•


If you want to stop bubonic plague, killing as many cats and dogs as possible is probably not the most effective gambit. But that’s what the Mayor of London opted to do in 1665, putting down nearly a quarter-million predators of rats, which carried the lethal fleas. 

While we have a far greater understanding of epidemiology than our counterparts in the 17th century, we still probably accept some asinine ideas as gospel. In a Medium essay, Weldon Kennedy questions our faith in ourselves, naming three contemporary beliefs he feels are incorrect.

I’ll propose one: It’s wrong that children, who are wisely banned from frequenting bars and purchasing cigarettes, are allowed to eat at fast-food restaurants, which set them up for a lifetime of unhealthiness. Ronald McDonald and Joe Camel aren’t so different.

Kennedy’s opening:

In 19th century London, everyone was certain that bad air caused disease. From cholera to the plague: if you were sick, everyone thought it was because of bad air. It was called the Miasma Theory.

As chronicled in The Ghost Map, it took the physician John Snow years, and cost thousands of lives, to finally disprove the Miasma Theory. He mapped every cholera death in London and linked it back to the deceased’s source of water, and still it took years for people to believe him. Now miasma stands as a byword for pseudo-scientific beliefs widely held throughout society.

The problem for Snow was that no one could see cholera germs. As a result, he, and everyone else of the time, was forced to measure other observable phenomena. Poor air quality was aggressively apparent, so it’s easy to see how it might take the blame.

Thankfully, our means of scientific measurement have improved vastly since then. We should hope that any such scientific theory lacking a grounding in observable data would now be quickly discarded.

Where our ability to measure still lags, however, it seems probable that we might still have miasmatic theories.•


Comparing delivery drones to mobile phones seems an odd choice–or at least only half the answer.

Drones may soon be as ubiquitous as smartphones, as one analyst asserts in Lucy Ingham’s sanguine Factor-Tech piece, “Forget the Fear,” but they have the ability to be as destructive as guns, and perhaps more so. Is that a pizza or a book or a bomb that’s coming our way? Drones will likely be ever-present soon and will do a lot of good, but even if they’re closely regulated, it’ll be easy to rig up your own and deliver whatever you want to someone–even fear.

The piece’s opening:

Drones are set for mass proliferation, despite commonly voiced concerns about privacy and use, according to a leading British aviation safety expert.

Speaking at a panel discussion during SkyTech, a UAV conference held today in London, Gerry Corbett, UAS programme lead for the UK Civil Aviation Authority’s Safety and Airspace Regulation Group, said that people would have to get used to the presence of drones in urban areas.

“Society has to accept that we’re going to see a lot more of these flying around towns and cities,” he said.

“We’ll have to get used to them, much like we did with mobile phones.”

However, in order for this to happen, regulation will need to improve in order to ensure that the drones are safe.•


Recent law-school graduates are having a hell of a time securing jobs in their chosen profession. (According to the New York Times, only 40% of 2010 grads are currently employed in the field.) At Forbes, Reuven Gorsht wonders if things will soon get worse, whether middle-skill work, including law, will ultimately be largely automated. The opening:

Sarah is at the top of her game. She’s been climbing the ladder at one of the top legal firms in the country. If all goes according to plan, she will likely make partner in the next five years. She worked hard to get to where she is today, working part-time jobs to put herself through law school and using her incredible work ethic and smarts to win the trust of clients and colleagues through clerking and as an associate.

“Won’t lawyers be replaced by computers in the next 10 years?” I say to her. She rolls her eyes and brushes me off. “I’m serious,” I say. “There’s no way a machine can do what I’m doing! Sure, robots can eventually replace low-end and repeatable jobs, but they can never match my education, work experience and relationships with my clients,” responds Sarah.

Is Sarah correct, or should highly skilled and educated professionals like Sarah be worried about their jobs being automated and done by robots and machines?•


In America, the main difference between rich people and poor people is that rich people have money.

It sounds obvious, but think about it: Many of the behaviors said to be responsible for the financially challenged being in the state they’re in aren’t limited to them. Poor people are sometimes raised in broken homes, and such families face greater obstacles to success. But well-to-do people also divorce; they just have more money to divide. Some poorer folks drink and use drugs, which keeps them trapped in a cycle of poverty. True, but wealthier Americans suffer from all sorts of addictions as well and spend the necessary funds to get the help they need. There are people of lesser means who don’t work hard enough to thrive in school, but the same can be said for some of their wealthier counterparts. The latter just have families with enough money to create an educational path they don’t warrant based on their performance.

Pretty much any lifestyle blamed for poverty is lived by rich and poor folk alike.

You could say that if you’re coming from a less-privileged background you should be sure to avoid these habits because you can’t afford them like people with money can, and I suppose that’s true. But even if you stay on the straight path, we shouldn’t pretend we live in a perfect meritocracy which automatically rewards such clear-headed decisions. We all fall, but some have a net to catch them. Sometimes it’s been earned through hard work and luck, and often it’s woven from inherited money and connections.

From 

The director of Harvard admissions has said that being a ‘Harvard legacy’ – the child of a Harvard graduate – is just one of many ‘tips’ in the college’s admissions process, such as coming from an ‘under-represented state’ (Harvard likes to have students from all 50), or being on the ‘wish list’ of an athletic coach. For most applicants to Harvard, the acceptance rate is around 5 per cent; for applicants with a parent who attended Harvard, it’s around 30 per cent. (One survey found that 16 per cent of Harvard undergraduates have a parent who went to Harvard.) A Harvard study from a few years ago shows that after controlling for other factors that might influence admission (such as, say, grades), legacies are more than 45 per cent more likely to be admitted to the 30 most selective American colleges than non-legacies.

Preferential admission for legacies ought to be an anachronism, not least because it overwhelmingly benefits rich white students. Harvard’s admissions director defends the practice by claiming that legacies ‘bring a special kind of loyalty and enthusiasm for life at the college that makes a real difference in the college climate… and makes Harvard a happier place.’ That ‘special kind of loyalty’ can express itself in material ways. Graduates with family ties – four generations of Harvard men! – are assumed to be particularly generous, and they cut colleges off when their children don’t get in.•


From the October 29, 1889 Brooklyn Daily Eagle:


Putting ants to shame, Stanford University’s Biomimetics and Dexterous Manipulation Laboratory has produced tiny robots, which move like inchworms, that can haul 100 times their own weight. There are huge long-term implications for construction and emergency rescues, like the one we’re currently witnessing in Nepal. From Aviva Rutkin at New Scientist:

Mighty things come in small packages. The little robots in this video can haul things that weigh over 100 times more than themselves.

The super-strong bots – built by mechanical engineers at Stanford University in California – will be presented next month at the International Conference on Robotics and Automation in Seattle, Washington.

The secret is in the adhesives on the robots’ feet. Their design is inspired by geckos, which have climbing skills that are legendary in the animal kingdom. The adhesives are covered in minute rubber spikes that grip firmly onto the wall as the robot climbs. When pressure is applied, the spikes bend, increasing their surface area and thus their stickiness. When the robot picks its foot back up, the spikes straighten out again and detach easily.

The bots also move in a style that is borrowed from biology.•


When people defend CEO pay, they often argue that business leaders like Steve Jobs deserve every cent they get, without mentioning that such leaders are the most extreme outliers. There are also CEOs like probable Presidential candidate Carly Fiorina, who did a really cruddy job running Hewlett-Packard, collected a monumental golden parachute when she was fired and was safely out the door when thousands of employees lost their jobs. Okay, maybe she’s an outlier also, but your average company leader isn’t an innovator (that word) but a steward. And stewards are very overpaid.

Automation has allowed many of these corporate titans to reduce staff over the past decade and the practice will likely continue apace, but that blade has two edges, and the received wisdom of the importance of the CEO will likely be threatened by technology as well. 

From Devin Fidler at Harvard Business Review:

For the last several years, we have been studying the forces now shaping the future of work, and wondering whether high-level management could be automated. This inspired us to create prototype software we informally dubbed “iCEO.” As the name suggests, iCEO is a virtual management system that automates complex work by dividing it into small individual tasks. iCEO then assigns these micro-tasks to workers using multiple software platforms, such as oDesk, Uber, and email/text messaging. Basically, the system allows a user to drag-and-drop “virtual assembly lines” into place, and run them from a dashboard.

But could iCEO manage actual work projects for our organization? After a few practice runs, we were ready to find out. For one task, we programmed iCEO to oversee the preparation of a 124-page research report for a prestigious client (a Fortune 50 company). We spent a few hours plugging in the parameters of the project, i.e. structuring the flow of tasks, then hit play. For instance, to create an in-depth assessment of how graphene is produced, iCEO asked workers on Amazon’s Mechanical Turk to curate a list of articles on the topic. After duplicates were removed, the list of articles was passed on to a pool of technical analysts from oDesk, who extracted and arranged the articles’ key insights. A cohort of Elance writers then turned these into coherent text, which went to another pool of subject matter experts for review, passing them on to a sequence of oDesk editors, proofreaders, and fact checkers.

iCEO routed tasks across 23 people from around the world, including the creation of 60 images and graphs, followed by formatting and preparation. We stood back and watched iCEO execute this project. We rarely needed to intervene, even to check the quality of individual components of the report as they were submitted to iCEO, or spend time hiring staff, because QA and HR were also automated by iCEO. (The hiring of oDesk contractors for this project, for example, was itself an oDesk assignment.)

We were amazed by the quality of the end result — and the speed with which it was produced.•
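The “virtual assembly line” the excerpt describes is, structurally, a pipeline in which each pool of workers consumes the previous pool’s output. A minimal sketch with hypothetical stand-in stages; this is not iCEO’s actual system or any real platform’s API:

```python
# Hypothetical stand-ins for the micro-task pipeline the excerpt describes;
# not iCEO's real system or any real crowdsourcing platform's API.
from typing import Callable, List

Stage = Callable[[List[str]], List[str]]

def curate(items: List[str]) -> List[str]:   # e.g. a Mechanical Turk pool
    return sorted(set(items))                 # gather and dedupe sources

def extract(items: List[str]) -> List[str]:  # e.g. an oDesk analyst pool
    return [f"key insight from {i}" for i in items]

def write(items: List[str]) -> List[str]:    # e.g. an Elance writer pool
    return [f"paragraph: {i}" for i in items]

def run_pipeline(stages: List[Stage], payload: List[str]) -> List[str]:
    """Each stage's output is the next stage's input -- an assembly line."""
    for stage in stages:
        payload = stage(payload)
    return payload

report = run_pipeline([curate, extract, write],
                      ["article A", "article B", "article A"])
print(report)
```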


In “The World in 2025,” IKEA has a dozen predictions for how life will change a decade into the future. Perhaps by that year, IKEA will have hired copyeditors, because the piece surprisingly has lots of typos (which I’ve removed below). You expect that kind of slapdash nonsense from Afflictor but not from Scandinavian perfectionists! A quartet of the prognostications:

Our homes will become physically smaller

As populations age and we have fewer children, there will be a trend toward fewer people per household. Increasing real estate and transport costs in cities will favour denser living. Spaces will have to work harder in order to accommodate multiple uses by multiple people.

How might we create multifunctional spaces?

Computers will be everywhere

Even simple devices will be equipped with sensors, CPUs and transmitting devices, allowing for communication with the user, but also with each other, creating self-regulating systems.

How might we ensure that a computerised kitchen doesn’t lose its humanity?

‘Shopping’ will mean ‘home delivery’

Shopping will be seamless and impulsive. The physical act of going into a shop will be more about learning and exploration than purchasing. Instead, we will be able to purchase items digitally and have them delivered by robots, wherever we are, within minutes.

How might we integrate outside services into our kitchen behaviours?

Food will be more expensive

As populations grow, and as developing countries’ diets incorporate more meat, supply constraints will push the cost of food higher, by 40% according to some estimates.

How might we ensure that we make the most of what we use?•

It’s probably a fair bet that most people believe computers are already more intelligent than us. But even computationally it’s possible our smartphones will be smarter than us in five to ten years. Even if it hasn’t happened by then, it will happen. Something that was impossible a few decades ago, that would have cost billions if it had been possible, will soon be available at a reasonable price, prepared to sit in your pocket or palm.

As Ted Greenwald of the WSJ recently reminded us, smart machines don’t have to make us dumb. From automobiles to digital watches, we’ve always ceded certain chores to technology, but these new machines won’t be anything like the ones we know. They will be by far the greatest tools we’ve ever created. What will that mean, positive or negative? I’m wholeheartedly in favor of them, and even think they’re necessary, but that doesn’t mean great gifts aren’t attended by great challenges.

From Vivek Wadhwa at the Washington Post:

Ray Kurzweil made a startling prediction in 1999 that appears to be coming true: that by 2023 a $1,000 laptop would have the computing power and storage capacity of a human brain.  He also predicted that Moore’s Law, which postulates that the processing capability of a computer doubles every 18 months, would apply for 60 years — until 2025 — giving way then to new paradigms of technological change.

Kurzweil, a renowned futurist and the director of engineering at Google, now says that the hardware needed to emulate the human brain may be ready even sooner than he predicted — in around 2020 — using technologies such as graphics processing units (GPUs), which are ideal for brain-software algorithms. He predicts that the complete brain software will take a little longer: until about 2029.

The implications of all this are mind-boggling.  Within seven years — about when the iPhone 11 is likely to be released — the smartphones in our pockets will be as computationally intelligent as we are. It doesn’t stop there, though.  These devices will continue to advance, exponentially, until they exceed the combined intelligence of the human race. Already, our computers have a big advantage over us: they are connected via the Internet and share information with each other billions of times faster than we can. It is hard to even imagine what becomes possible with these advances and what the implications are.•
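The arithmetic behind the excerpt is easy to check. Taking its 18-month doubling period at face value, and assuming a 2015 starting point against Kurzweil’s 2023 target:

```python
# Taking the excerpt's premise at face value: capability doubles every
# 18 months. How much growth does that imply between 2015 and 2023?
months = (2023 - 2015) * 12
doublings = months / 18            # 5.33... doublings
growth = 2 ** doublings
print(f"{doublings:.1f} doublings -> ~{growth:.0f}x the 2015 capability")
# 5.3 doublings -> ~40x the 2015 capability
```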


We mostly eat horribly, so if it made us healthier to quantify our calories burned with a smartphone fitness app and use a nutrition app to plan our dinner, that would be good, wouldn’t it? I mean, even if we weren’t exactly the ones making the correct decision. In a Nautilus video, systems theorist David Krakauer speaks to the dark side of being governed by algorithms.


An excellent New York Times short-form video report, “Cheaper Robots, Fewer Workers,” by Jonah M. Kessel and Taige Jensen delves into the automation of labor in China, which claims it suffers a shortage of workers in some provinces and districts despite its immense population in the aggregate. Chinese firms say employees displaced by faster, cheaper machines are offered better positions, but that appears, unsurprisingly, not to be the case.

You probably wouldn’t want to live in a country left behind by robotics, but that doesn’t mean there aren’t great societal challenges for those nations that thrive in this new age.


I’m in favor of genetically modified foods, even if I have concerns about Monsanto and its ilk. Even without human-made climate change, we eventually would face a temperature shift threatening to agriculture. Let’s get started now (carefully and intelligently) on these experiments, especially since there are going to be more mouths to feed. 

In 2010, David Honigmann of the Financial Times had lunch with Stewart Brand, a strong proponent of GMOs, which were meeting with resistance in Europe, particularly France. An excerpt:

Food, I say, is central to French culture. He scoffs. “Socialised agriculture is OK?” He takes some fig jam with his cheese. France, I say, is full of small farmers, not dominated by agrochemical combines. “That’s fair.” None the less, he insists that “it will all go better with genetically engineered plants. And animals. And farmers.

“We’ve had 12 or 13 years of genetically engineered food in this country and it’s been great. My prediction is that in a couple of years we’ll see a soyabean oil that has Omega 3 fatty acids to cut down heart disease. Who would refuse that, any more than people refuse to take medicine?”

In the long run, he insists, opposition will die out. “IVF is the big example. I remember when that was an abomination in the face of God’s will. As soon as people met a few of the children, they realised that they were just as good as the ‘regular’ ones. My hope is that, unlike nuclear, which involves almost a theological shift, getting gradually used to genetic foods will be a non-issue.”•


In a Harvard Business Review piece, Brad Power looks at the timetable of AI entering the business place in earnest. In the “What’s Next” section, he handicaps the horizon, though I caution that he bases part of his ideas on a bold prediction by Ray Kurzweil, who is brilliant but sometimes wildly inaccurate. An excerpt:

As Moore’s Law marches on, we have more power in our smartphones than the most powerful supercomputers did 30 or 40 years ago. Ray Kurzweil has predicted that the computing power of a $4,000 computer will surpass that of a human brain in 2019 (20 quadrillion calculations per second). What does it all mean for the future of AI?

To get a sense, I talked to some venture capitalists, whose profession it is to keep their eyes and minds trained on the future. Mark Gorenberg, Managing Director at Zetta Venture Partners, which is focused on investing in analytics and data startups, told me, “AI historically was not ingrained in the technology structure. Now we’re able to build on top of ideas and infrastructure that didn’t exist before. We’ve gone through the change of Big Data. Now we’re adding machine learning. AI is not the be-all and end-all; it’s an embedded technology. It’s like taking an application and putting a brain into it, using machine learning. It’s the use of cognitive computing as part of an application.” Another veteran venture capitalist, Promod Haque, senior managing partner at Norwest Venture Partners, explained to me, “if you can have machines automate the correlations and build the models, you save labor and increase speed. With tools like Watson, lots of companies can do different kinds of analytics automatically.”

Manoj Saxena, former head of IBM’s Watson efforts and now a venture capitalist, believes that analytics is moving to the “cognitive cloud” where massive amounts of first- and third-party data will be fused to deliver real-time analysis and learning. Companies often find AI and analytics technology difficult to integrate, especially with the technology moving so fast; thus, he sees collaborations forming where companies will bring their people with domain knowledge, and emerging service providers will bring system and analytics people and technology. Cognitive Scale (a startup that Saxena has invested in) is one of the new service providers adding more intelligence into business processes and applications through a model they are calling “Cognitive Garages.” Using their “10-10-10 method” they deploy a cognitive cloud in 10 seconds, build a live app in 10 hours, and customize it using their client’s data in 10 days. Saxena told me that the company is growing extremely rapidly.•


How soon will it be until robots walk among us, handling the drudgery and making us all unemployed hobos? It probably depends on how much time the geniuses at MIT waste conducting Ask Me Anythings at Reddit. Ross Finman, Patrick R. Barragán and Ariel Anders, three young roboticists at the school, just did such a Q&A. A few exchanges follow.

____________________________________

Question:

  1. How far away are we from robo-assisted “personal care”?
  2. Given the chance, would either of you augment (with current and newly developed equipment) yourselves, and if so: to what extent?

Ross Finman:

1) Well… cop-out answer, but it depends. Fully autonomous health care robots that would fully displace human health care professionals are decades away. The level of difficulty in that job (and difficulty for robots is deviation) is immense. Smaller aspects can be automated, but as a whole, it will be a long time.

2) I would love to augment my brain with access to the internet. Hitting a problem and then taking the time to go and search online for the solution is so inefficient; if that could be done in thoughts, that would be awesome! Also, one of my friends is working on a wearable version of Facebook that could remind you when you know someone. It would avoid those awkward situations when you pretend to know someone.

____________________________________

Question:

What is currently the most challenging aspect of developing artificial intelligence? (i.e. What are the roadblocks to me getting a mechanical slave?)

Patrick R. Barragán:

This question is pretty general, and most people will have different answers on the topic. I think there are many big problems with developing AI. One is representation. It is hard, in a general way, to think about how to represent a problem or parts of a problem to even begin to think about how to solve it. For example, what are the real differences between a cup and a bowl, even if humans can easily distinguish them? There is a representation question there for one very specific type of problem. On the other end of the spectrum, how to deal with the huge amount of information that we humans get every moment of every day in the context of a robot or computer is also unclear. What do you pay attention to? What do you ignore? How much do you process all the little things that happen? How do you reuse information that you learned later? How do you learn it in the first place?

____________________________________

Question:

I remember reading this article about robot sex becoming a mainstream thing in 2050, according to a few robotics experts:

http://www.digitaljournal.com/article/256430

Do you agree or disagree with this assertion?

Patrick R. Barragán:

I guess it’s possible, but if that is where we end up first on this train, I would be surprised. The article that you linked to suggests that people have built robots for all kind of things, and it suggests that those robots work, are deployed, and are now solutions to problems. Those suggestions, which pervade media stories about robotics, are not accurate. We have produced demonstrations of robots that can do certain things, but those sorts of robots that might sound like precursors before we get to “important” sex robots don’t exist in any general way yet.

Also, I don’t know anyone who is working on it or thinks they should be.

____________________________________

Question:

Do you think programming should be a required course in all American schools? Do you believe everyone can benefit from knowing programming skills?

Ariel Anders:

Required? No. Beneficial? Definitely.

____________________________________

Question:

Will a computer be able to learn from its mistakes in the future?

Ross Finman:

Will humans be able to learn from their mistakes in the future?•


The upside to the financial crisis of a medium, say like magazines with their economic model tossed into the crapper by technological progress, is that publications are forced to reinvent themselves, get innovative and try offbeat things. In that spirit, the resuscitated Newsweek assigned Wikileaks editor (not “self-styled editor”) Julian Assange to review Luke Harding’s The Snowden Files: The Inside Story of the World’s Most Wanted Man.

And what a gleefully obnoxious pan he delivers, making some salient points along the way, even if it’s not exactly unexpected that he would be bilious toward traditional media in favor of alterna-journalists like himself. Additionally: Assange proves he is a very funny writer. You know, just like Bill Cosby.

An excerpt:

In recent years, we have seen The Guardian consult itself into cinematic history—in the Jason Bourne films and others—as a hip, ultra-modern, intensely British newspaper with a progressive edge, a charmingly befuddled giant of investigative journalism with a cast-iron spine.

The Snowden Files positions The Guardian as central to the Edward Snowden affair, elbowing out more significant players like Glenn Greenwald and Laura Poitras for Guardian stablemates, often with remarkably bad grace.

“Disputatious gay” Glenn Greenwald’s distress at the U.K.’s detention of his husband, David Miranda, is described as “emotional” and “over-the-top.” My WikiLeaks colleague Sarah Harrison—who helped rescue Snowden from Hong Kong—is dismissed as a “would-be journalist.”

I am referred to as the “self-styled editor of WikiLeaks.” In other words, the editor of WikiLeaks. This is about as subtle as Harding’s withering asides get. You could use this kind of thing on anyone.

Flatulent Tributes

The book is full of flatulent tributes to The Guardian and its would-be journalists. “[Guardian journalist Ewen] MacAskill had climbed the Matterhorn, Mont Blanc and the Jungfrau. His calmness now stood him in good stead.” Self-styled Guardian editor Alan Rusbridger is introduced and reintroduced in nearly every chapter, each time quoting the same hagiographic New Yorker profile as testimony to his “steely” composure and “radiant calm.”

That this is Hollywood bait could not be more blatant.•


New Yorkers listen to special radio broadcast of 1922 World Series.

Major League Baseball team owners, supposedly great champions of the free market, have often been baffled by basic economics, working against their own interests. They enjoy an antitrust exemption to stifle competition and receive tons of corporate welfare whenever they decide it’s time for a new ballpark, yet they vehemently opposed free agency, which made the game a 365-day-a-year sport, putting real fire in the Hot Stove League. They even engaged in collusion to artificially suppress player movement and salaries. That very movement they despised helped turn the owners from millionaires into billionaires, something which still seems lost on some in this exclusive club.

So, it’s no surprise that they strongly considered banning radio broadcasts of games nearly a century ago, fearing they would kill gate receipts. Thankfully William Wrigley intervened, realizing the promotional value. Today broadcast rights, even local ones, are worth billions, making them the most valuable aspect of team ownership.

From James Walker at the Conversation:

In the 1920s, teams that did broadcast games on the radio usually charged nothing for the rights, settling for free promotion of their on-field product. For Wrigley, who was accustomed to paying retail rates to advertise his chewing gum, the prospect of two hours of free advertising for his Chicago Cubs (over as many as five Chicago radio stations) was generous enough compensation. But the anti-radio owners, led by the three New York clubs (the Yankees, Giants and Dodgers), wanted to deny Wrigley his two-hour Cubs commercial.

Although he jealously guarded his control over World Series radio rights, MLB Commissioner Kenesaw Mountain Landis believed local radio rights were a league matter and left the decision to broadcast regular season games to the owners. At several NL and AL owners meetings in the late 1920s and early 1930s, the anti-radio forces proposed a league-wide ban on local broadcasts of regular season games.

Pro-radio clubs, led by Cubs’ President Bill Veeck, Sr, were adamant that the choice to broadcast belonged to his club. It was no more of concern to other clubs, he argued, than the decision whether or not to sell peanuts to the fans in the stands.

But to teams like the St Louis Cardinals, it was a concern: because the Cubs’ radio waves reached the Cardinals’ fan base, they were convinced that the broadcasts negatively influenced their own attendance numbers. The decision of whether or not to broadcast games, they reasoned, was not the Cubs’ alone to make.•

