Science/Tech

From the September 23, 1895 Brooklyn Daily Eagle:

Dr. Edward W. Burnette, a New York physician, has died from cancer of the face, contracted from a patient whom he had treated.•

Oy pioneers! Mars One, which is really unlikely to begin establishing a human colony on our neighboring planet in the next ten years, has just chosen 100 finalists who hope to die on another planet (perhaps sooner than later). One of the “lucky” potential astronauts, Hannah Earnshaw, a UK Ph.D. student who seems like a swell and idealistic person, writes at The Conversation about what will hopefully be a voyage of self-discovery rather than an actual voyage. An excerpt:

When I applied for Mars One, I applied to dedicate my life to the creation of a colony that will have enormous implications for the future of the human race. It’s in many ways a monumental responsibility, a life’s work much bigger than myself, and one for which I feel no qualms about the fact that it’s a journey from which there’s no coming back.

I feel very aware of the dreams of all those people who wished to travel into space, to colonise other planets – and I do so on their behalf, as well as for myself. I want to have lived my life doing something that wasn’t only what I wanted to do, but something that will have a lasting impact on our collective future.

I’m 23, and the past couple of years have been uncertain: stepping through the application for Mars One, even though I’ve made the shortlist of 100, I’m still unsure whether I’ll be selected. Hoping that I am suitable, but ultimately wanting the very best and most capable people to go, I have had to hold two possible futures in my mind.

In one, I complete my PhD, get a place of my own, pursue a career in research or maybe in politics. I get really good at playing piano, I find time to travel to Norway, Italy, Canada, and Japan, and maybe find a husband or wife.

In the other, I leave behind the possibilities of Earth for the possibilities of Mars. Alongside my crew I pioneer planetary scientific research and, as the founding member of a new civilisation, I plant the seeds of a diverse and generous society. I communicate our life to followers on Earth, help establish new policy through which humans explore and settle the stars ethically and responsibly… and maybe find a husband or wife.

Both futures hold so much potential that there will be a real sense of loss when I know which path I am on, but also a real sense of purpose.•

Newspapers, no longer wanted, are complimentary in hotels, whereas Wi-Fi, desperately desired, comes at a cost. Sounds intuitive, but it actually makes little sense. Chains and boutiques alike are probably better off giving the Internet away for free, using it for marketing and tracking (though, of course, I wish they wouldn’t). From the Economist:

Social media is the single biggest marketing tool these firms have. Not in the sense of setting up a corporate page, but because of guests sharing their experiences in real time with their friends. A report by the European Travel Commission found that about a quarter of leisure travellers turn to social media to check out hotels before booking. They place even more store by looking at travel-review websites like TripAdvisor. 

Hence, no matter how much revenue hotels are earning by squeezing guests, the opportunity cost of making access to the internet expensive is huge. According to Resonance, 24% of Americans update social media at least once a day while travelling. For 18-to-34-year-olds, that figure rises to 51%. An even higher proportion post photos. If customers are not sharing thoughts about hotels during their stay because they do not want to pay for Wi-Fi, firms such as Hilton are chopping their marketing off at the knees. Even more shortsightedly, they are left hoping that those guests who do begrudgingly stump up $19 for 24 hours’ Wi-Fi access are still going to write something nice about their room while waiting in the bar for their equally expensive Coco Locos to arrive.

The good news is that these are the dying days of paid-for Wi-Fi. “In the 19th century hotels charged extra if you wanted hot water for a bath,” says Chris Fair, president of Resonance. “In less than a decade, I suspect the idea of paying for internet access at a hotel will seem as ridiculous as the idea of paying for hot water seems to us now.” Some things never change, however. During every business revolution, there will always be those who adapt too late to survive.•

Because of computerized autopilot systems and a greater understanding of wind shears, flying has never been safer than it is right now. Boarding a domestic carrier in the United States is a particularly low-risk means of travel. But increasingly automated aviation can cause human pilots to experience skill fade, something which has alarmed Nicholas Carr, and now Steve Casner of Slate is concerned about two-pilot cockpits being halved. My assumption is that if accidents remain the rare exception, the automation process will continue apace. An excerpt:

Now that we’ve gone from four pilots to two, and with more automation on the way, you don’t need to be a mind reader to know what the industry is thinking next. The aircraft manufacturer Embraer has already revealed plans for a single-pilot regional jet, and Cessna has produced several small single-pilot jets. (I’m rated to fly this one.) And as my colleagues at NASA are busy studying the feasibility of large single-pilot airliners, a Delta Air Lines pilot made it look easy a few weeks ago when the other pilot was accidentally locked out of the cockpit. But should we be a little nervous about the idea of having just one pilot up there in the front office? The research says maybe so.

Studies show that pilots make plenty of errors. That’s why we have two pilots in the airline cockpit—to construct a sort of human safety net. While one pilot operates the aircraft’s controls, the other pilot keeps watch for occasional errors and tries to point them out before they cause any harm. NASA engineer Everett Palmer likes to sum up the idea with a quip: “To err is human, to be error-tolerant is divine.” Keeping the error-maker and getting rid of the error-catcher may not prove to be very error-tolerant.

Besides, automation doesn’t eliminate human error—it just relocates it. The engineers and programmers who design automation are humans, too. They write complex software that contains bugs and nuances. Pilots often speak of automation surprises in which the computers do something unexpected, occasionally resulting in accidents. Having only one pilot in the cockpit might compromise our ability to make sense of these technological noodle-scratchers when they pop up.•

Delivery drones may be delayed by legislation driven by fear of injuries and a court system unprepared for such nouveau liabilities, but new federal rules are likely to allow their use by farmers who want to keep a remote eye on their crops. From a Harvest Public Media piece:

On a breezy morning in rural Weld County, Colo., Jimmy Underhill quickly assembles a black and orange drone with four spinning rotors. The machine sits on a dirt patch right next to a corn field, littered with stalks left over from last year’s harvest.

Underhill is a drone technician with Agribotix, a Boulder, Colo.-based drone start-up that sees farmers as its most promising market. Underhill is in charge of training his fellow employees how to work the machine in the field.

As Underhill punches a few buttons on a remote control with two joysticks, the machine whirs to life. The quadcopter, a toaster-sized machine with four rotors, zips 300 feet into the air directly above our heads, pauses for a moment and then begins to move.

“So it just turned to the east and it’s going to start its lawnmower pattern,” Underhill said.

What makes the drone valuable to farmers is the camera on board. It snaps a high-resolution photo every two seconds. From there, Agribotix stitches the images together, sniffing out problem spots in the process, using infrared technology to look at plant health. Farmers hope that more information about their fields can lead to big savings in their bottom lines. Knowing what’s happening in a field can save a farmer money.•
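
The piece doesn’t say how Agribotix turns those infrared images into a plant-health map, but the standard approach in agricultural drone imaging is a per-pixel vegetation index such as NDVI, computed from the near-infrared and red bands of the stitched mosaic. Here is a minimal sketch of that idea in Python; the band values, the 0.4 threshold and the use of NDVI itself are my assumptions for illustration, not details from the article.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, computed per pixel.

    Healthy vegetation reflects strongly in the near-infrared band and
    absorbs red light, so (NIR - Red) / (NIR + Red) ranges from -1 to 1,
    with higher values indicating denser, healthier plants.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    with np.errstate(divide="ignore", invalid="ignore"):
        # Zero denominators (water, deep shadow, dead pixels) map to 0.
        return np.where(denom == 0, 0.0, (nir - red) / denom)

# Illustrative values standing in for two bands of a stitched field mosaic.
nir_band = np.array([[0.62, 0.55], [0.30, 0.58]])
red_band = np.array([[0.08, 0.10], [0.25, 0.09]])

index = ndvi(nir_band, red_band)
problem_spots = index < 0.4   # 0.4 is an arbitrary threshold for illustration
print(index)
print(problem_spots)
```

In a real pipeline this calculation would run over every pixel of the stitched mosaic, and the low-index patches would become the “problem spots” flagged for the grower.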

Producing an infinite bounty of healthy food and clean energy through “artificial photosynthesis” was the stated near-term goal of a group of University of California scientists featured in an article in the January 27, 1955 Brooklyn Daily Eagle. Even the dietary needs of space travelers were given consideration.

The 1960s video report embedded below about computers includes footage of American college students asking concerned questions about automation and the coming technological unemployment. No different than today, really. Luddite-ism is never the answer, though political solutions may be required. A couple weeks back, Newsweek referred to its 1965 cover story, “The Challenge of Automation.” An excerpt:

In 1965, America found itself facing a new industrial revolution. The rapid evolution of computers provoked enormous excitement and considerable dread as captains of industry braced themselves for the age of automation.   

Newsweek devoted a special edition to discussing “the most controversial economic concept of the age” in January 1965. “Businessmen love it. Workers fear it. The government frets and investigates and wonders what to do about it,” the report began. “Automation is wiping out about 35,000 jobs every week or 1.8 million per year.”•

DARPA, which put the Internet and its endless cat photos and trolls into our lives, would now like to implant a “modem” in our brains. The agency, under the direction of Dr. Arati Prabhakar, has announced its intention to dive ambitiously into biotechnology with an array of projects. The aforementioned cortical device is not just meant for the treatment of Alzheimer’s and Parkinson’s but also to make possible a form of Google Glass without the external hardware. From Peter Rothman’s breathless report at h+:

Dr. [Geoff] Ling portrayed DARPA’s ambitious goals and set out what was one of the clearest presentations of the proactionary principle which I have heard. But that was just the opening volley; DARPA is going full on H+.

Following the inspirational presentation by Dr. Ling, the individual program managers had a chance to present their projects.

The first Program Manager to present, Phillip Alvelda, opened the event with his mind blowing project to develop a working “cortical modem.” What is a cortical modem you ask? Quite simply it is a direct neural interface that will allow for the visual display of information without the use of glasses or goggles. I was largely at this event to learn about this project and I wasn’t disappointed.

Leveraging the work of Karl Deisseroth in the area of optogenetics, the cortical modem project aims to build a low cost neural interface based display device. The short term goal of the project is the development of a device about the size of two stacked nickels with a cost of goods on the order of $10 which would enable a simple visual display via a direct interface to the visual cortex with the visual fidelity of something like an early LED digital clock.

The implications of this project are astounding.

Consider a more advanced version of the device capable of high fidelity visual display. First, this technology could be used to restore sensory function to individuals who simply can’t be treated with current approaches. Second, the device could replace all virtual reality and augmented reality displays. Bypassing the visual sensory system entirely, a cortical modem can directly display into the visual cortex enabling a sort of virtual overlay on the real world. Moreover, the optogenetics approach allows both reading and writing of information. So we can imagine at least a device in which virtual objects appear well integrated into our perceived world. Beyond this, a working cortical modem would enable electronic telepathy and telekinesis. The cortical modem is a real world version of the science fiction neural interfaces envisioned by writers such as William Gibson and more recently Ramez Naam.

As soon as George Washington University economist Steve Rose looked at the numbers and determined that income inequality has actually decreased since the Great Recession, others in the field immediately pushed back. More certainly will. Whomever you believe, even the bearish on the topic think wealth inequality is still at dangerous levels. Of course, depending on how policy is enacted, it’s not destiny. From a post about Rose’s conclusions by David Leonhardt in the New York Times:

The income of the top 1 percent – both the level and the share of overall income – still hasn’t returned to its 2007 peak. Their average income is about 20 percent below that peak. Yet we have all become so accustomed to rising inequality that we seem to have lost the ability to consider the alternative. Maybe it’s because many liberals are tempted to believe inequality is always getting worse, while many conservatives are tempted to believe that the Obama economy is always getting worse.

The numbers, however, make clear that inequality isn’t destined to rise. Not only can economic forces, like a recession, reduce it, but government policy can, too. And Washington’s recent efforts to fight inequality – as imperfect and restrained as they’ve been – have made a bigger difference than many people realize.

The existing safety net of jobless benefits, food stamps and the like cushioned the blow of the so-called Great Recession. So did the stimulus bill that President Obama signed in 2009 and some smaller bills passed afterward. “Not only were low-income people protected – middle-income and some higher-income households had much lower losses because of these public policies,” Mr. Rose said. “For those who think government programs never work, maybe they need to think again.”

Before diving into the numbers on the government’s role, let’s start with the pretax statistics. These are the data on what’s happened before the government redistributes income through taxes and benefits.•

Here’s a real rarity: Walter Cronkite and Bill Stout of CBS News interviewing authors Robert A. Heinlein and Arthur C. Clarke about the future of space exploration on the day of the Apollo 11 moon landing. The two writers (and Cronkite) were inebriated by the excitement of the moment, believing we would in short order colonize the universe. Clarke thought travel to other planets would end war on Earth, which, of course, has not yet come close to occurring. Heinlein called for female astronauts, saying “it does not take a man to run a spaceship.” Both believed the first baby born in space would be delivered before the end of the twentieth century, and Heinlein was sure there would be retirement communities established on the moon in that same time frame. Go here to view the video.

A collection of brief notes about the potential future of AI from the “Emerging Risks” section of the Global Challenges Report, which outlines species-threatening possibilities:

1. The advantages of global coordination and cooperation are clear if there are diminishing returns to intelligence and a plethora of AIs, but less clear if there is a strong first mover advantage to the first group to produce AI: then the decisions of that first group are more relevant than the general international environment.

2. Military AI research will result in AIs built for military purposes, but possibly with more safeguards than other designs.

3. Effective regulatory frameworks would be very difficult without knowledge of what forms AIs will ultimately take.

4. Uncontrolled AI research (or research by teams unconcerned with security) increases the risk of potentially dangerous AI development.

5. “Friendly AI” projects aim to directly produce AIs with goals compatible with human survival.

6. Reduced impact and Oracle AI are examples of projects that aim to produce AIs whose abilities and goals are restricted in some sense, to prevent them having a strong negative impact on humanity.

7. General mitigation methods will be of little use against intelligent AIs, but may help in the aftermath of conflict.

8. Copyable human capital – software with the capability to perform tasks with human-like skills – would revolutionise the economic and social systems.

9. Economic collapse may follow from mass unemployment as humans are replaced by copyable human capital.

10. Many economic and social set-ups could inflict great suffering on artificial agents, a great moral negative if they are capable of feeling such suffering.

11. Human redundancy may follow the creation of copyable human capital, as software replaces human jobs.

12. Once invented, AIs will be integrated into the world’s economic and social system, barring massive resistance.

13. An AI arms race could result in AIs being constructed with pernicious goals or lack of safety precautions.

14. Uploads – human brains instantiated in software – are one route to AIs. These AIs would have safer goals, lower likelihood of extreme intelligence, and would be more likely to be able to suffer.

15. Disparate AIs may amalgamate by sharing their code or negotiating to share a common goal to pursue their objectives more effectively.

16. There may be diminishing returns to intelligence, limiting the power of any one AI, and leading to the existence of many different AIs.

17. Partial “friendliness” may be sufficient to control AIs in certain circumstances.

18. Containing an AI attack may be possible, if the AIs are of reduced intelligence or are forced to attack before being ready.

19. New political systems may emerge in the wake of AI creation, or after an AI attack, and will profoundly influence the shape of future society.

20. AI is the domain with the largest uncertainties; it isn’t clear what an AI is likely to be like.

21. Predictions concerning AI are very unreliable and underestimate uncertainties.•

From the September 18, 1912 Brooklyn Daily Eagle:

Olga Martin, 18 years old, the daughter of Charles Martin, a wealthy contractor of 469 Crescent Street, was operated on last night at the Lutheran Hospital for the removal of a breastpin which the young woman swallowed more than two years ago.

The operation was performed by the visiting staff of the hospital, including Drs. Harold L. Barnes, John Kepke, F.H. DeCoste and Raymond Westover. The patient was conscious during the entire proceedings and experienced no pain because of the use of cocaine.•

Like a lot of pioneers, John C. Lilly was controversial, whether working with hallucinogenics, dolphins or isolation tanks. On the latter topic, John Bryson of People magazine interviewed Lilly in 1976 about his experimentations with sensory deprivation. The opening:

Question:

What, precisely, is your so-called “isolation tank method”?

John C. Lilly:

The idea is to separate yourself from society through the solitude and confinement of a scientifically controlled tank. There should be only 10 inches of water, heated to 93° F—just right for maintaining the proper brain temperature—with enough Epsom salts so that your hands, feet and head all float. Lying on your back, you can breathe quite comfortably and safely, freed from sight, sound, people and the universe outside. That way you can enter the universe within you.

Question:

What is the origin of the technique?

John C. Lilly:

In 1954 there was an argument going on among neurophysiologists over whether or not the brain would sleep if all outside stimulation was removed. I was an eager young scientist pushing forward into regions of the unknown: the nervous system and the mind. The first year I used the tank, I proved that the notion the brain shuts off when removed from stimulation is sheer nonsense.

Question:

How many of these tanks are there in this country?

John C. Lilly:

I’d say more than 200, some at universities and research institutes but mostly in private hands.

Question:

Do you recommend the tank for everyone as a method of self-discovery?

John C. Lilly:

For most people, I think it would provide unique insights. Of course, there are exceptions. People with certain types of mental disorders should not use the method unless under professional supervision.

Question:

Isn’t it true that some people have had severe mental problems as a result of this experience?

John C. Lilly:

That is bull. In spite of the bad reputation of coerced sensory deprivation experiments, the tank method has rarely led to panic, fear or intense pain. We’ve had a few cases of spontaneous, reversible claustrophobia develop temporarily in a few people. We have had only good results with the tank.

Question:

Wasn’t one of those people your wife?

John C. Lilly:

Yes, she went into the tank one day and suddenly she had to get out. She scrambled up and pushed the lid of the tank so hard that the hinge broke. While lying there in the shallow water she had begun to recall her birth—the feeling of suffocation, the bright lights, the gasp of the first breath. It was too much for her. But there have been only one or two such incidents out of 450 people who have tried out the tank here.

Question:

Could the tank be used destructively for brainwashing?

John C. Lilly:

You can alter someone’s beliefs in any number of ways—hanging them up by their thumbs, putting them in isolation, feeding them various drugs. Yes, I suppose it could be used in that way. But the idea of using the tank to scare the hell out of somebody and coerce them is mostly just romantic nonsense.•

I’ll be perplexed if Yuval Noah Harari’s great book Sapiens: A Brief History of Humankind, just published in the U.S., doesn’t wind up on many “Best of 2015” lists at the end of the year. It’s such an amazing, audacious, lucid thing. Salon has run a piece from the volume. Here’s an excerpt about the seemingly eternal search for eternity:

The Gilgamesh Project

Of all mankind’s ostensibly insoluble problems, one has remained the most vexing, interesting and important: the problem of death itself. Before the late modern era, most religions and ideologies took it for granted that death was our inevitable fate. Moreover, most faiths turned death into the main source of meaning in life. Try to imagine Islam, Christianity or the ancient Egyptian religion in a world without death. These creeds taught people that they must come to terms with death and pin their hopes on the afterlife, rather than seek to overcome death and live for ever here on earth. The best minds were busy giving meaning to death, not trying to escape it.

That is the theme of the most ancient myth to come down to us – the Gilgamesh myth of ancient Sumer. Its hero is the strongest and most capable man in the world, King Gilgamesh of Uruk, who could defeat anyone in battle. One day, Gilgamesh’s best friend, Enkidu, died. Gilgamesh sat by the body and observed it for many days, until he saw a worm dropping out of his friend’s nostril. At that moment Gilgamesh was gripped by a terrible horror, and he resolved that he himself would never die. He would somehow find a way to defeat death. Gilgamesh then undertook a journey to the end of the universe, killing lions, battling scorpion-men and finding his way into the underworld. There he shattered the mysterious “stone things” of Urshanabi, the ferryman of the river of the dead, and found Utnapishtim, the last survivor of the primordial flood. Yet Gilgamesh failed in his quest. He returned home empty-handed, as mortal as ever, but with one new piece of wisdom. When the gods created man, Gilgamesh had learned, they set death as man’s inevitable destiny, and man must learn to live with it.

Disciples of progress do not share this defeatist attitude. For men of science, death is not an inevitable destiny, but merely a technical problem. People die not because the gods decreed it, but due to various technical failures – a heart attack, cancer, an infection. And every technical problem has a technical solution. If the heart flutters, it can be stimulated by a pacemaker or replaced by a new heart. If cancer rampages, it can be killed with drugs or radiation. If bacteria proliferate, they can be subdued with antibiotics. True, at present we cannot solve all technical problems. But we are working on them. Our best minds are not wasting their time trying to give meaning to death. Instead, they are busy investigating the physiological, hormonal and genetic systems responsible for disease and old age. They are developing new medicines, revolutionary treatments and artificial organs that will lengthen our lives and might one day vanquish the Grim Reaper himself.

Until recently, you would not have heard scientists, or anyone else, speak so bluntly. ‘Defeat death?! What nonsense! We are only trying to cure cancer, tuberculosis and Alzheimer’s disease,’ they insisted. People avoided the issue of death because the goal seemed too elusive. Why create unreasonable expectations? We’re now at a point, however, where we can be frank about it. The leading project of the Scientific Revolution is to give humankind eternal life. Even if killing death seems a distant goal, we have already achieved things that were inconceivable a few centuries ago. In 1199, King Richard the Lionheart was struck by an arrow in his left shoulder. Today we’d say he incurred a minor injury. But in 1199, in the absence of antibiotics and effective sterilisation methods, this minor flesh wound turned infected and gangrene set in. The only way to stop the spread of gangrene in twelfth-century Europe was to cut off the infected limb, impossible when the infection was in a shoulder. The gangrene spread through the Lionheart’s body and no one could help the king. He died in great agony two weeks later.

As recently as the nineteenth century, the best doctors still did not know how to prevent infection and stop the putrefaction of tissues. In field hospitals doctors routinely cut off the hands and legs of soldiers who received even minor limb injuries, fearing gangrene. These amputations, as well as all other medical procedures (such as tooth extraction), were done without any anaesthetics. The first anaesthetics – ether, chloroform and morphine – entered regular usage in Western medicine only in the middle of the nineteenth century. Before the advent of chloroform, four soldiers had to hold down a wounded comrade while the doctor sawed off the injured limb. On the morning after the battle of Waterloo (1815), heaps of sawn-off hands and legs could be seen adjacent to the field hospitals. In those days, carpenters and butchers who enlisted to the army were often sent to serve in the medical corps, because surgery required little more than knowing your way with knives and saws.

In the two centuries since Waterloo, things have changed beyond recognition. Pills, injections and sophisticated operations save us from a spate of illnesses and injuries that once dealt an inescapable death sentence. They also protect us against countless daily aches and ailments, which premodern people simply accepted as part of life. The average life expectancy jumped from around twenty-five to forty years, to around sixty-seven in the entire world, and to around eighty years in the developed world.

Death suffered its worst setbacks in the arena of child mortality. Until the twentieth century, between a quarter and a third of the children of agricultural societies never reached adulthood. Most succumbed to childhood diseases such as diphtheria, measles and smallpox. In seventeenth-century England, 150 out of every 1,000 newborns died during their first year, and a third of all children were dead before they reached fifteen. Today, only five out of 1,000 English babies die during their first year, and only seven out of 1,000 die before age fifteen.

We can better grasp the full impact of these figures by setting aside statistics and telling some stories. A good example is the family of King Edward I of England (1237–1307) and his wife, Queen Eleanor (1241–90). Their children enjoyed the best conditions and the most nurturing surroundings that could be provided in medieval Europe. They lived in palaces, ate as much food as they liked, had plenty of warm clothing, well-stocked fireplaces, the cleanest water available, an army of servants and the best doctors. The sources mention sixteen children that Queen Eleanor bore between 1255 and 1284:

1. An anonymous daughter, born in 1255, died at birth.

2. A daughter, Catherine, died either at age one or age three.

3. A daughter, Joan, died at six months.

4. A son, John, died at age five.

5. A son, Henry, died at age six.

6. A daughter, Eleanor, died at age twenty-nine.

7. An anonymous daughter died at five months.

8. A daughter, Joan, died at age thirty-five.

9. A son, Alphonso, died at age ten.

10. A daughter, Margaret, died at age fifty-eight.

11. A daughter, Berengeria, died at age two.

12. An anonymous daughter died shortly after birth.

13. A daughter, Mary, died at age fifty-three.

14. An anonymous son died shortly after birth.

15. A daughter, Elizabeth, died at age thirty-four.

16. A son, Edward.

The youngest, Edward, was the first of the boys to survive the dangerous years of childhood, and at his father’s death he ascended the English throne as King Edward II. In other words, it took Eleanor sixteen tries to carry out the most fundamental mission of an English queen – to provide her husband with a male heir. Edward II’s mother must have been a woman of exceptional patience and fortitude. Not so the woman Edward chose for his wife, Isabella of France. She had him murdered when he was forty-three.

To the best of our knowledge, Eleanor and Edward I were a healthy couple and passed no fatal hereditary illnesses on to their children. Nevertheless, ten out of the sixteen – 62 per cent – died during childhood. Only six managed to live beyond the age of eleven, and only three – just 18 per cent – lived beyond the age of forty. In addition to these births, Eleanor most likely had a number of pregnancies that ended in miscarriage. On average, Edward and Eleanor lost a child every three years, ten children one after another. It’s nearly impossible for a parent today to imagine such loss.

How long will the Gilgamesh Project – the quest for immortality – take to complete? A hundred years? Five hundred years? A thousand years? When we recall how little we knew about the human body in 1900, and how much knowledge we have gained in a single century, there is cause for optimism. Genetic engineers have recently managed to double the average life expectancy of Caenorhabditis elegans worms. Could they do the same for Homo sapiens? Nanotechnology experts are developing a bionic immune system composed of millions of nano-robots, who would inhabit our bodies, open blocked blood vessels, fight viruses and bacteria, eliminate cancerous cells and even reverse ageing processes. A few serious scholars suggest that by 2050, some humans will become a-mortal (not immortal, because they could still die of some accident, but a-mortal, meaning that in the absence of fatal trauma their lives could be extended indefinitely).•

Driverless cars are a goal of Uber and other rideshares, which would like to remove human hands from the wheel, but a former Google engineer wants to take things a step further and eliminate ownership as well. Mike Hearn presents a thought experiment and a utopian dream: What if the cars “own” themselves and are programmed to be ethical and use their small profits to upgrade themselves? From Leo Kelion of the BBC:

They would be programmed to seek self-improvement in order to avoid becoming obsolete. This would involve using earnings to hire human programmers to tweak their code.

After an update the cars could run the new software during half their pick-ups but not the other half, so as to determine whether to make the upgrades permanent.

Other costs would include paying to be refuelled, insured and maintained.

To ensure the system would scale up to meet demand, Mr Hearn suggests something a bit odd: the cars could club together with any surplus earnings they had to pay factories to build more of them.

“After it rolls off the production line… the new car would compete in effect with the existing cars, but would begin by giving a proportion of its profits to its parents.

“You can imagine it being a birth loan, and eventually it would pay off its debts and become a fully-fledged autonomous vehicle of its own.”

Death, too, is woven into the system, helping weed out clapped-out models.

“If there were too many cars and the human population drops, for example, then some of those cars could put themselves in long-term parking and switch themselves off for a while to see if things improve,” Mr Hearn says. “Or you could get immigrant vehicles driving to another city looking for work.

“Ultimately, they could just run out of fuel one day. They would go bankrupt, effectively, and become available for salvage.”

Since banks might struggle with this concept – at least at first – it’s proposed the vehicles use a digital currency like bitcoins for their transactions, since the “wallets” used to store and trade the digital currency are not restricted to people or organisations.

“Some people would find it creepy and weird, and they would refuse to do business with machines,” acknowledges Mr Hearn. “They would hate the idea of a machine being an economic equal to them – a modern Ludditism, if you like.

“But one interesting thing computers can do is prove to a third party what software they are running.

“And then it would be the most transparent business partner. You would have no risk of it ripping you off, no risk of misunderstandings, and some people would actually find that preferable.”•
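
Hearn’s scheme is easiest to grasp as a bookkeeping loop: each vehicle is an economic agent with a wallet, fixed operating costs, a “birth loan” owed to the cars that financed its construction, and a rule for parking itself when it can no longer cover its costs. The toy simulation below is my own sketch of that logic, not code from Hearn or the BBC; the fare, cost and loan figures are invented.

```python
from dataclasses import dataclass

@dataclass
class AutonomousTaxi:
    """Toy model of Hearn's self-owning car: it earns fares, pays its costs,
    repays its 'birth loan', and parks itself when it runs at a loss."""
    wallet: float = 0.0
    birth_loan: float = 5000.0     # owed to the "parent" cars that funded it
    daily_costs: float = 80.0      # fuel/charging, insurance, maintenance
    parked: bool = False

    def run_day(self, fares_earned: float) -> None:
        if self.parked:
            return
        self.wallet += fares_earned - self.daily_costs
        # Put surplus earnings toward the birth loan until it is repaid.
        if self.wallet > 0 and self.birth_loan > 0:
            payment = min(self.wallet, self.birth_loan, 50.0)
            self.wallet -= payment
            self.birth_loan -= payment
        # Too far in the red: switch off and wait, as Hearn's cars would.
        if self.wallet < -2 * self.daily_costs:
            self.parked = True

car = AutonomousTaxi()
for day, fares in enumerate([200, 40, 30, 20, 10, 180], start=1):
    car.run_day(fares)
    print(day, round(car.wallet, 2), round(car.birth_loan, 2), car.parked)
```

Run over a string of bad days, the car eventually switches itself off, which is the “long-term parking” behaviour Hearn describes; a fuller model would presumably add the A/B software trials, the factory payments and the bitcoin wallet itself.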


Rodney Brooks, the roboticist featured in Errol Morris’ great documentary Fast, Cheap and Out of Control, is interviewed by Joanne Pransky of Robotics Business Review about the future of AI. A few exchanges follow:

Joanne Pransky:

Let’s assume that your life is only 50 per cent complete. What groundbreaking challenges do you think you’ll be working on 25 and 50 years from now?

Rodney Brooks:

Twenty-five years from now: getting into and out of bed. Fifty years from now: going to the bathroom. I think robotics for eldercare and homecare are going to be important because of demographic inversion, and that’s going to be the big market for robots going forward. In one of my talks, I put up a picture of a Mercedes-Benz 2014 S-Class, and I asked the audience, “What is this?” And they say, “Oh, it’s a car. Oh, it’s a Mercedes”. And somebody said it’s an S-Class. I said, “It’s an eldercare robot”. Because what it’s going to do is let me drive much longer and safely, before my kids pry my keys from my “cold, dead hands”, so to speak. This is an example of a technology which is going to allow the elderly to have dignity and independence longer, and we baby boomers are going to be demanding those as we get older, as there aren’t going to be enough young people to serve our elderly needs.

Joanne Pransky:

If you could wave a magic wand, what technological item would you give to the world?

Rodney Brooks:

There’s two: a technological hand like a human, and object recognition like a child. We have image-based object recognition, but we don’t have the category recognition that a child can do.

Joanne Pransky:

How far away do you think we are from that vision recognition?

Rodney Brooks:

When I did my PhD on that topic in 1977, I thought we were a long way away and it’s still a long way away. We can now do vision a lot better using different techniques, but not in the same “general” way that people can do it. That may take a long time. We’ve had airplanes for over a hundred years. It’s only in the last few years that people have gotten model airplanes to land on branches. We are just understanding STOL (short takeoff and landing) now, which birds use all the time, for flying machines. That took a hundred years.

Joanne Pransky: 

And what do you think the future human–robot interface (HRI) will be like? Will it be directly in the brain, as other science fiction people state? Will it be with our eyes?

Rodney Brooks:

I saw my first touch screen probably around 1988/1989 at CMU and I thought, “That’ll never work.” When I go to some of the academic human robot interaction conferences, I like to characterize some of the papers as, “Well, we tested this variation on that variation, and 60 per cent of people preferred Method A, and the other two preferred Method B.” I think that’s “want-to-be” scientist stuff. It’s asking questions at the wrong level. I think we haven’t invented it. I think a university should be inventing wild HR interactions and seeing what sticks, instead of, “Oh, well, should it be displayed this way or should I have this?” They haven’t invented this interface yet, whatever it’s going to be. That’s what people should be doing, trying different things, most of which will fail. But everyone wants the paper that just gets accepted, just enough science. I don’t know what it’s going to be, but things will change.•

Marc Goodman, law-enforcement veteran and author of the forthcoming book Future Crimes, sat for an interview with Jason Dorrier of Singularity Hub about the next wave of nefariousness, Internet-enabled and large-scale. A question about the potential for peril writ relatively small with Narrow AI and on a grand scale if we create Artificial General Intelligence. An excerpt:

Question:

Elon Musk, Stephen Hawking, and Bill Gates have expressed concern about artificial general intelligence. It’s a hotly debated topic. Might AI be our “final invention?” It seems even narrow AI in the wrong hands might be problematic.

Marc Goodman:

I would add Marc Goodman to that list. To be clear, I think AI, narrow AI, and the agents around us have tremendous opportunity to be incredibly useful. We’re using AI every day, whether it’s in our GPS devices, in our Netflix recommendations, what we see on our Facebook status updates and streams—all of that is controlled via AI.

With regard to AGI, however, I put myself firmly in the camp of concern.

Historically, whatever the tool has been, people have tried to use it for their own power. Of course, typically, that doesn’t mean that the tool itself is bad. Fire wasn’t bad. It could cook your meals and keep you warm at night. It comes down to how we use it. But AGI is different. The challenge with AGI is that once we create it, it may be out of our hands entirely, and that could certainly make it our “final invention.”

I’ll also point out that there are concerns about narrow AI too.

We’ve seen examples of criminals using narrow AI in some fascinating ways. In one case, a University of Florida student was accused of killing his college roommate for dating his girlfriend. Now, this 18-year-old freshman had a conundrum. What does he do with the dead body before him? Well, he had never murdered anybody before, and he had no idea how to dispose of the body. So, he asked Siri. The answers Siri returned? Mine, swamp, and open field, among others.

So, Siri answered his question. This 18-year-old kid unknowingly used narrow AI as an accomplice after the fact in his homicide. We’ll see many more examples of this moving forward. In the book, I say we’re leaving the world of Bonnie and Clyde and joining the world of Siri and Clyde.•

Humans are the worst thing ever for other species, especially megafauna. When we began to appear on continents, they started to largely disappear. Some of it was unavoidable if we were going to settle all over the globe, since we needed to burn through tall grasses and forests to explore and establish ourselves. But plenty of it could be avoided, if we begin to realize that other creatures aren’t merely meat and target practice. E.O. Wilson has suggested the “Half-Earth Cure,” but first hearts and minds will have to be won. From Peter Aldhous’ Buzzfeed article “People Are Animals, Too”:

Tommy the chimpanzee got his day in court on Oct. 8, 2014. He was unable to attend the hearing in “person” — spending the day, like any other, in a cage at a used trailer sales lot in Gloversville, New York. But an hour’s drive away, in a courtroom in the state capital of Albany, Steven Wise of the Nonhuman Rights Project argued that Tommy should indeed be considered a person under New York state law. If so, Patrick and Diane Lavery of Circle L Trailer Sales could be summoned to determine whether they are imprisoning him illegally.

Central to Wise’s arguments in Tommy’s case, and to similar suits his organization has filed on behalf of other captive chimpanzees, is the assertion that apes are highly intelligent and self-aware beings with complex emotional lives. “The uncontroverted facts demonstrate that chimpanzees possess the autonomy and self-determination that are supreme common law values,” Wise told the five judges hearing the case.

It is a bold legal move — and so far unsuccessful. The court in Albany, like a lower court before it, rejected the idea that Tommy has legal rights of personhood. But Wise intends to fight on, taking Tommy’s case to the state’s ultimate arbiter, the New York Court of Appeals.

Events elsewhere in New York state stand in stark contrast to its courts’ willingness to consider the legal implications of the science of animal cognition. In March 2014, the Rip Van Winkle Rod and Gun Club in Palenville, a hamlet of some 1,000 people on the Hudson River, held the fourth installment of an annual festival that makes a competitive sport out of shooting down creatures that — judged by objective measures of their mental abilities — are arguably just as deserving of personhood as Tommy.

Those creatures are crows, targeted with abandon at the Palenville Crow Down. In recent years, members of the corvid family — including crows, ravens, jays and magpies — have been found to possess cognitive skills once thought to be the exclusive domain of people and the great apes. They make and use tools. They remember details about the past and plan for the future. They even seem to respond to one another’s knowledge and desires. “For all the studies that have been compared directly so far, the corvids seem to perform as well as the chimpanzees,” says Nicky Clayton of the University of Cambridge, in whose lab some of the most exciting discoveries have been made.

We gaze into the eyes of a chimp and see a reflection of ourselves. We glance at a crow and see an alien being that under some jurisdictions can be exterminated with impunity — bringing a sinister second meaning to the phrase “a murder of crows.” Such biases affect ordinary people and academic experts alike, skewing our understanding of what nonhuman intelligence looks like.•

In a Hollywood Reporter piece about the former pride of the peacock, Michael Wolff states the obvious–network news organizations are of little or no consequence apart from in some odd phantom sense–but he makes the case really well. An excerpt:

Maintaining the evening news was perhaps more useful for the corporate agenda than it was as a programming tool or journalistic function. Indeed, despite its 8 million or so nightly viewers and an estimated $200 million in annual ad revenue, Nightly News long has run against the currents of news programming and become quite an organizational sore thumb — a phantom power base that commanded a strange primacy in the corporate bureaucracy. Williams, mostly irrelevant to the overall NBCUniversal bottom line or to the news itself, was yet very powerful.

Each of the networks has, over the past decade or more, made tentative efforts toward disbanding the evening news or combining it with cable operations, or, in many variations of this deal discussion, partnering with CNN. In effect, everybody recognized that the nature of newsgathering had profoundly changed and that networks could not compete (or had no interest in competing) as all-purpose news organizations. But in the end, nobody wanted to take the PR hit for killing the news, or lose the PR advantage in having it.

Until Williams, once the ultimate PR asset, became the ultimate PR nightmare.•

Figuring out the final 5% of the autonomous-vehicle question will likely be more challenging than getting to that point, but there’s now a critical mass of technologists working on the remaining issues. From Hal Hodson at New Scientist:

SOME day soon, driverless podcars will cluster around our cities, waiting to pick us up on demand. There will be no steering wheel, no brake pedal; once seated, you can take a nap or watch a movie. This public facility will reduce traffic and carbon emissions. Not having to own a car will make transport cheaper for everyone.

Stop us if you’ve heard this one before.

Why are self-driving cars taking so long to show? For starters, essential technological and social changes needed to make them work might still be decades away. But they are on their way, thanks to some of the world’s largest companies. Google has been fine-tuning its autonomous cars for years, amassing hundreds of thousands of kilometres of test drives on Nevada’s roads. Last week the developers of the taxi app Uber announced a collaboration with Carnegie Mellon University’s Robotics Institute to develop technology for a self-driving taxi fleet.

“It’s a very big deal,” says Nidhi Kalra, an analyst at the RAND Corporation. “Nearly every auto-maker is pursuing this technology.”

Autonomous cars will confront the same problem that faces all robots designed to operate around people: social interactions are a key part of negotiating our world.•

If someone currently alive were to become a trillionaire, it’s probably as likely it would be Elon Musk as anyone. But the idea that the SpaceX founder will have established a city of 80,000 Earth immigrants on Mars within the next 25 years? I’d bet against that one. Brian Wang of Next Big Future thinks both outcomes are plausible. An excerpt:

Mars Colonial Transporter has been notionally described as being a large interplanetary spacecraft capable of taking 100 people at a time to Mars, although early flights are expected to carry fewer people and more equipment. The spacecraft has been notionally described as using a large water store to help shield occupants from space radiation and as possibly having a cabin oxygen content that is up to two times that which is found in Earth’s atmosphere.

The Mars colony envisioned by Musk would start small, notionally an initial group of fewer than ten people. With time, Musk sees that such an outpost could grow into something much larger and become self-sustaining, perhaps up to as large as 80,000 people once it is established. Musk has stated that an aspirational price goal for such a trip might be on the order of US$500,000, something that “most people in advanced countries, in their mid-forties or something like that, could put together [to make the trip].”

Before any people are transported to Mars, a number of cargo missions would be undertaken first in order to transport the requisite equipment, habitats and supplies. Equipment that would accompany the early groups would include “machines to produce fertilizer, methane and oxygen from Mars’ atmospheric nitrogen and carbon dioxide and the planet’s subsurface water ice” as well as construction materials to build transparent domes for crop growth.•

Vint Cerf, father of cat photos (and the rest of the Internet), is concerned that this century’s history is being preserved mainly online in bits that could go bust. While it might be less embarrassing if it all went away, it’s important for posterity that our selfies and tweets be accessible to future generations who want to understand (and mock) us. Cerf has a plan for preservation. From Pallab Ghosh at the BBC:

Vint Cerf is promoting an idea to preserve every piece of software and hardware so that it never becomes obsolete – just like what happens in a museum – but in digital form, in servers in the cloud.

If his idea works, the memories we hold so dear could be accessible for generations to come.

“The solution is to take an X-ray snapshot of the content and the application and the operating system together, with a description of the machine that it runs on, and preserve that for long periods of time. And that digital snapshot will recreate the past in the future.”

A company would have to provide the service, and I suggested to Mr Cerf that few companies have lasted for hundreds of years. So how could we guarantee that both our personal memories and all human history would be safeguarded in the long run?

Even Google might not be around in the next millennium, I said.

“Plainly not,” Vint Cerf laughed. “But I think it is amusing to imagine that it is the year 3000 and you’ve done a Google search. The X-ray snapshot we are trying to capture should be transportable from one place to another. So, I should be able to move it from the Google cloud to some other cloud, or move it into a machine I have.

“The key here is when you move those bits from one place to another, that you still know how to unpack them to correctly interpret the different parts. That is all achievable if we standardise the descriptions.

“And that’s the key issue here – how do I ensure in the distant future that the standards are still known, and I can still interpret this carefully constructed X-ray snapshot?”•
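
Cerf isn’t describing a shipping product, but the shape of the idea comes through in the quote: each snapshot bundles the content, the application, the operating system and a description of the machine they ran on, in a standardised form that any future cloud could interpret. Below is a minimal sketch of what such a manifest might look like; the field names, the version string and the JSON layout are my own assumptions, not a published standard.

```python
import hashlib
import json

def make_snapshot_manifest(content: bytes, app: str, os_name: str, machine: dict) -> str:
    """Bundle the four pieces Cerf describes (content, application, operating
    system, machine description) into one standardised, self-describing
    record that can be moved from one cloud to another."""
    manifest = {
        "format": "digital-vellum-manifest/0.1",   # hypothetical standard identifier
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "content_length": len(content),
        "application": app,                        # e.g. the program that wrote the file
        "operating_system": os_name,
        "machine_description": machine,            # enough detail to emulate the hardware later
    }
    return json.dumps(manifest, indent=2)

print(make_snapshot_manifest(
    content=b"Dear diary...",
    app="WordStar 4.0",
    os_name="MS-DOS 3.3",
    machine={"cpu": "Intel 8086", "ram_kb": 640, "display": "CGA"},
))
```

Whether such a snapshot could actually be replayed in the year 3000 depends on the hard part Cerf identifies: keeping the description standard itself intelligible over centuries.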

Trying to predict weather with precision is a fool’s errand, and even overall trends are tricky. In a new study published at Science Advances, scholars forecast decades-long drought for the American Southwest, beginning later this century. If it were to occur, the stress on humans, infrastructure and finances would be extreme. This, not Al-Qaeda, is a huge threat to us. From Suzanne Goldenberg at the Guardian:

The years since 2000 give only a small indication of the punishment ahead. In parts of Arizona, California, Nevada, New Mexico, Oklahoma and Texas, 11 of those years have been drought years.

As many as 64 million people were affected by those droughts, according to Nasa projections.

Those conditions have produced lasting consequences. In California, now undergoing its fourth year of drought – and the worst dry spell in 1,200 years – farmers have sold off herds. Growers have abandoned fields. Cities have imposed water rationing.

But future droughts could be even more disruptive, because they will likely drag on for decades, not years.

“We haven’t seen this kind of prolonged drought even certainly in modern US history,” Smerdon said. “What this study has shown is the likelihood that multi-decadal events comprising year after year after year of extreme dry events could be something in our future.”•

Unintended consequences aren’t necessarily a bad thing. The new batteries manufactured for EVs are beginning to be repurposed to power homes. If a good deal of that electricity can be created from solar, a major correction to environmental damage could be in the offing. From Ben Popper at the Verge:

Tesla didn’t ship nearly as many cars this quarter as it had projected, but CEO Elon Musk remained upbeat during today’s earnings call as he let some details slip about a brand new product. According to Musk, the company is working on a consumer battery pack for the home. Design of the battery is apparently complete, and production could begin in six months. Tesla is still deciding on a date for unveiling the new unit, but Musk said he was pleased with the result, calling the pack “really great” and voicing his excitement for the project.

What would a Tesla home battery look like? The Toyota Mirai, which uses a hydrogen fuel cell, gives owners the option to remove the battery and use it to supply electrical power to their homes. That battery can reportedly power the average home for a week when fully charged. Employees at many big Silicon Valley tech companies already enjoy free charging stations at their office parking lot. Now imagine if they could use that juice to eliminate their home electric bill.•

As Reality TV is the modern freak show, its anomalies wounding psyches rather than hunching backs, the Twitter evisceration of the lunkheaded is the contemporary auto-da-fé, the collective sacrifice of a few to atone for all of our sins. It’s not that the racist and sexist and generally offensive tweets are being sent out by angels who deserve employment security despite their public stupidity, but the crowd condemnation that is supposedly righteous may actually reveal some wrongheadedness, our process of socialization perhaps tainted by antisocial impulse. How else to explain the death threats that continue long after a career has been ruined? From “How One Stupid Tweet Blew Up Justine Sacco’s Life,” Jon Ronson’s New York Times Magazine article about one such lunkhead and the culture of condemnation:

In the early days of Twitter, I was a keen shamer. When newspaper columnists made racist or homophobic statements, I joined the pile-on. Sometimes I led it. The journalist A. A. Gill once wrote a column about shooting a baboon on safari in Tanzania: “I’m told they can be tricky to shoot. They run up trees, hang on for grim life. They die hard, baboons. But not this one. A soft-nosed .357 blew his lungs out.” Gill did the deed because he “wanted to get a sense of what it might be like to kill someone, a stranger.”

I was among the first people to alert social media. (This was because Gill always gave my television documentaries bad reviews, so I tended to keep a vigilant eye on things he could be got for.) Within minutes, it was everywhere. Amid the hundreds of congratulatory messages I received, one stuck out: “Were you a bully at school?”

Still, in those early days, the collective fury felt righteous, powerful and effective. It felt as if hierarchies were being dismantled, as if justice were being democratized. As time passed, though, I watched these shame campaigns multiply, to the point that they targeted not just powerful institutions and public figures but really anyone perceived to have done something offensive. I also began to marvel at the disconnect between the severity of the crime and the gleeful savagery of the punishment. It almost felt as if shamings were now happening for their own sake, as if they were following a script.

Eventually I started to wonder about the recipients of our shamings, the real humans who were the virtual targets of these campaigns. So for the past two years, I’ve been interviewing individuals like Justine Sacco: everyday people pilloried brutally, most often for posting some poorly considered joke on social media. Whenever possible, I have met them in person, to truly grasp the emotional toll at the other end of our screens. The people I met were mostly unemployed, fired for their transgressions, and they seemed broken somehow — deeply confused and traumatized.•
