Science/Tech


Baseball’s All-Star Game voting uses the latest technology: Paper ballots are carried by Pony Express to the General Store where they’re calculated on an abacus. Commissioner Selig then reads the results, which are recorded onto a wax cylinder and played from a talking machine over the wireless. It’s a big improvement over the era when Charles Lindbergh used to barnstorm American cities in his aeroplane and drop leaflets with the tabulations over ballyards. From Phil Mackey at ESPN:

“But do you want to know something completely archaic and silly?

Chris Colabello — one of baseball’s best run producers through the first 30 days this season — isn’t even on Major League Baseball’s All-Star ballot.

Go ahead and take a look for yourself.

Josh Willingham, despite having played only a handful of games due to injury, is on it. So is Pedro Florimon, whose slugging percentage (.173) is lower than his weight (180).

The Colabello omission is more of a knock on MLB’s often archaic thinking than it is on the Twins.

Here’s how the process works: During the early part of spring training, each MLB front office submits projected starters at each position. Twins assistant GM Rob Antony, who was in charge of this process for the Twins, listed Joe Mauer as the first baseman, Oswaldo Arcia, Aaron Hicks and Willingham as the outfielders, and Jason Kubel as the DH. This is what they projected at the time, and if not for injuries to Arcia and Willingham, it’s possible Colabello wouldn’t have nearly as many at-bats.

OK, that’s fine. But why can’t MLB adjust the ballot on the fly? Presumably because they already printed out millions of hanging-chad paper ballots to be distributed throughout ballparks in an era when two out of every three American adults owns a smartphone.

MLB can’t simply add Colabello to the online ballot?

‘Well no, that’s not the way we’ve always done it…’

We have apps on our smartphones that allow us to record high-definition videos, we have apps that allow us to cash checks, we have apps that allow us to make dinner and movie reservations, and we have apps that essentially replace TVs, radios and books.

Yet, if we want to send Colabello to the All-Star Game at Target Field, we need to write his name in the old-fashioned way…”


From the Asahi Shimbun, more perspective on Google’s recent interest in Japanese robotics:

“While the future plans of Google are not totally clear, the company apparently wants to incorporate all future-generation robotic technology. Google Chairman Eric Schmidt has written about a future in which each U.S. household owns several multifunction robots.

Norio Murakami, who once served as the head of the Japanese arm of Google, predicts that Google is seeking to develop computers that can serve as butlers in the home.

Those robots would find answers over the Internet to questions raised by its master as well as perform such tasks as cleaning and cooking.

In 2011, Google proposed technology that it called cloud robotics. Under that concept, robots in households and factories would be connected to a gigantic brain in cyberspace. That would mean nothing short of Google controlling the brains used in all robots.

The idea clashes somewhat with mainstream thinking in Japan, where robots have primarily been considered as a manufacturing tool.

Changing demographics also place greater expectations on robots.

Rodney Brooks, a co-founder of U.S.-based iRobot Corp., noted that many advanced nations face a growing population of senior citizens and a declining number of young people. He said robots hold the key for resolving manpower problems such as how to inspect and repair social infrastructure, especially in Japan.”


No one should confuse the challenges of abundance with those of poverty, but Qatar, which has no true winter but a good deal of discontent, is a great case study in human psychology. When the earth unexpectedly offers up everything we could ever want, does it become clear that what we need is something else? From Matthew Teller in BBC Magazine:

“From desperate poverty less than a century ago, this, after all, has become the richest nation in the world, with an average per-capita income topping $100,000 (£60,000).

What’s less well understood is the impact of such rapid change on Qatari society itself.

You can feel the pressure in Doha. The city is a building site, with whole districts either under construction or being demolished for redevelopment. Constantly snarled traffic adds hours to the working week, fuelling stress and impatience.

Local media report that 40% of Qatari marriages now end in divorce. More than two-thirds of Qataris, adults and children, are obese.

Qataris benefit from free education, free healthcare, job guarantees, grants for housing, even free water and electricity, but abundance has created its own problems.

‘It’s bewildering for students to graduate and be faced with 20 job offers,’ one academic at an American university campus in Qatar tells me. ‘People feel an overwhelming pressure to make the right decision.’

In a society where Qataris are outnumbered roughly seven-to-one by expatriates, long-term residents speak of a growing frustration among graduates that they are being fobbed off with sinecures while the most satisfying jobs go to foreigners.

The sense is deepening that, in the rush for development, something important has been lost.”


The Philosopher’s Beard has its facial hair in a knot over the prominence of New Atheism. The opening of an essay assailing the evangelical strain of the seemingly non-evangelical:

“The New Atheist movement that has developed from the mid 2000s around the ‘four horsemen of the apocalypse’ – Hitchens, Dennett, Harris, and Dawkins – and various other pundits has had a tremendous public impact. Godlessness has never had a higher public profile. How wonderful for unbelievers like me? Hardly. I am as embarrassed by the New Atheists as many Christians are embarrassed by the evangelical fundamentalists who appoint themselves the representatives of Christianity.

It has often been noted that the New Atheist movement has contributed no original arguments or ideas to the debate about religion. But the situation is worse than this. The main achievement of New Atheism – what defines it as a more or less coherent movement – is its promulgation of a particular version of atheism that is quasi-religious, scientistic, and sectarian. New Atheism has been so successful in redefining what atheism means that I find I must reject it as an identity. My unbelief is apathetic and simply follows from my materialism – I don’t see why I should care about the non-existence of gods. What the New Atheists call ‘rationality’ is an impoverished way of understanding the world that excludes meanings and values. At the political level, the struggle for secularism requires more liberalism, not more atheism.”

An excerpt from Richard C. Lewontin’s just-published New York Review of Books piece “The New Synthetic Biology: Who Gains?,” which looks at recent writing on a field that will not ultimately be contained by regulation and will be messy:

“In modern times Craig Venter, the head of the J. Craig Venter Institute, announced the creation of a living, functioning, self-reproducing artificial bacterial cell containing a laboratory-produced DNA sequence that, according to Laurie Garrett’s Foreign Affairs essay ‘Biology’s Brave New World,’ ‘moved, ate, breathed, and replicated itself.’

An element that was not yet present in the early-nineteenth-century interest in the artificial creation of life was the possibility of great financial profit. Biotechnology was still a century and a half in the future. Garrett characterizes Venter not only as the most powerful man in biotechnology but as the richest. The J. Craig Venter Institute has already worked with fuel companies and the pharmaceutical industry to create microorganisms that could produce new fuels and vaccines.

What did concern those in the nineteenth century who imagined the possibility of the artificial creation of life, a concern that is at the core of Shelley’s Frankenstein, is the nemesis that is the inevitable consequence of the creators’ hubris. We now face the same problem on a huge scale. In an interview in 2009, quoted by Garrett, Venter declared, ‘There’s not a single aspect of human life that doesn’t have the potential to be totally transformed by these technologies in the future.’ Not a single aspect! Does that mean he is promising me that I might literally live forever?

Nothing in history suggests that those who control and profit from material production can really be depended upon to devote the needed foresight, creativity, and energy to protect us from the possible negative effects of synthetic biology. In cases where there is a conflict between the immediate and the long-range consequences or between public and private good, how can that conflict be resolved? Can the state be counted on to intervene when a private motivation conflicts with public benefit, and who will intervene when the state itself threatens the safety and general welfare of its citizens? Garrett provides a frightening real-life example.

In 2011 two scientists, one from Erasmus Medical Center in Rotterdam and one from the University of Wisconsin, independently reported that they had turned a bird flu virus, H5N1, which could very occasionally be transmitted from birds to humans, causing their death in about 60 percent of cases, into a strain that could be directly passed easily between laboratory mammals. Were this virus then capable of infecting humans, a catastrophe would occur, judging from the infamous flu epidemic of 1918, which killed more than 50 million people, about 2.5 percent of the world’s population.”

___________________________

Frost-Venter, 2012:

 


Excerpts from two Sports Illustrated articles about Garry Kasparov tangling with Deep Blue: The first is a jokey piece supposedly written by the IBM chess computer itself after losing a six-game match to the Russian in 1996, and the second is a matter-of-fact declaration of the rise of the machines in 1997, when the laughter stopped for good.

________________________

From “I Was Just a Pawn!” February 26, 1996:

“When I was just a chip, my motherboard used to tell me, ‘D.B., if you can’t process something nice, don’t process it at all.’ But Mom never went through what I went through last week, when world chess champion Garry Kasparov humiliated me, an IBM supercomputer, four games to two.

All week I kept hearing these grandmasters and chess nerds saying, ‘Deep Blue’s advantage is that he doesn’t feel pressure, emotion or anxiety.’ Yeah, right. I’d like them to spend a week in my outlet. You want pressure? It took six years to build me, man! They had a five-person team doing nothing but programming me for this one match. I can consider 200 million moves in one second! Kasparov can do, what, one, maybe two? IBM does not put five guys on a project for six years and expect to lose. That’s how you end up at the employee Montessori, with kids sticking jelly doughnuts in your serial ports.

I can hear all the snickering around the office now. I hear the other mainframes calling me Deep Blue It and whispering about how, any day now, the guys in the white coats are going to come and give me the big drag-and-drop. I’ll tell you what: If I had coasters, I’d get over there and teach them all about megahurts.

Sure, I lost, but how come nobody ever mentions that no computer had ever won one single regulation game from a world chess champion before I did? I won the first game from Kasparov. Stick that in your hi-memory! And how about the fact that I wasn’t even in the room with Kasparov the whole match. I wasn’t! They made me stay in this crummy room in Yorktown Heights, N.Y., while some little guy in Philadelphia typed Kasparov’s moves into a desktop and fed them to me through a phone line. Let me ask you this: How good would Troy Aikman be if he had to read defenses from some Marriott 800 miles away? You talk about mo-dumbs.”

________________________

From “Tangled Up in Blue,” May 19, 1997:

“In his 12-year reign as world chess champion, Garry Kasparov has earned a reputation for both brilliance and aggressiveness. He is widely considered the greatest player in history. But on Sunday afternoon, after resigning the sixth and deciding game of his match with the IBM supercomputer known as Deep Blue, Kasparov sat slumped and glassy-eyed as he awaited questions in a midtown Manhattan ballroom. ‘He looks like a DMV photo,’ cracked international master Mike Valvo.

Kasparov’s capitulation shocked everyone, coming just one hour into a game that he needed to draw in order to tie the match. Things had gone much differently 15 months ago, when Kasparov defeated an earlier version of Deep Blue 4-2 in Philadelphia. But since then IBM’s computer scientists had enlisted the help of four grandmasters, and this latest teaming of technology and human intelligence threw Kasparov some curves. In Game 5, for example, no one anticipated that with one of Kasparov’s pawns poised to reach the last file and become a queen, Deep Blue would simply ignore it and launch an attack with its own king. That stunning shift of focus set up a perpetual check and forced Kasparov to offer a draw.

‘The computer will be unbeatable in five or 10 years,’ says Frederick Friedel, an expert on artificial intelligence and computer chess who served as one of Kasparov’s seconds. ‘Garry will understand much more about chess, but he will still lose because he will make mistakes.'”


Elon Musk recently stated that in the near term, only 90% of driving can be completely autonomous. Judging by a new post on the Google blog, that company is consumed by the other 10%. An excerpt:

“Jaywalking pedestrians. Cars lurching out of hidden driveways. Double-parked delivery trucks blocking your lane and your view. At a busy time of day, a typical city street can leave even experienced drivers sweaty-palmed and irritable. We all dream of a world in which city centers are freed of congestion from cars circling for parking (PDF) and have fewer intersections made dangerous by distracted drivers. That’s why over the last year we’ve shifted the focus of the Google self-driving car project onto mastering city street driving.

Since our last update, we’ve logged thousands of miles on the streets of our hometown of Mountain View, Calif. A mile of city driving is much more complex than a mile of freeway driving, with hundreds of different objects moving according to different rules of the road in a small area. We’ve improved our software so it can detect hundreds of distinct objects simultaneously—pedestrians, buses, a stop sign held up by a crossing guard, or a cyclist making gestures that indicate a possible turn. A self-driving vehicle can pay attention to all of these things in a way that a human physically can’t—and it never gets tired or distracted.

As it turns out, what looks chaotic and random on a city street to the human eye is actually fairly predictable to a computer. As we’ve encountered thousands of different situations, we’ve built software models of what to expect, from the likely (a car stopping at a red light) to the unlikely (blowing through it). We still have lots of problems to solve, including teaching the car to drive more streets in Mountain View before we tackle another town, but thousands of situations on city streets that would have stumped us two years ago can now be navigated autonomously.”

BBC is reporting that the Chinese firm WinSun has built giant 3D printers which can create 10 concrete houses in a day. The houses admittedly don’t make for glorious living, but it’s still impressive. An excerpt:

“The cheap materials used during the printing process and the lack of manual labour means that each house can be printed for under $5,000, the 3dprinterplans website says.

‘We can print buildings to any digital design our customers bring us. It’s fast and cheap,’ says WinSun chief executive Ma Yihe. He also hopes his printers can be used to build skyscrapers in the future. At the moment, however, Chinese construction regulations do not allow multi-storey 3D-printed houses, Xinhua says.”

From Joshua Green’s Businessweek profile of Boston Red Sox savior John Henry, a thumbnail sketch of the owner who initially married a Moneyball mentality to a big-market budget and has since overseen the franchise as it has remade itself into a relatively austere and even more analytical organization:

“For so prominent a figure, Henry is a bit of a mystery. He limits contact with the press and, when he does communicate, prefers e-mail. In person, he’s so reserved that it often appears as if he’s working out a difficult algebraic formula in his head. Which is what he may, in fact, be doing. ‘He’s the most mathematically talented person I’ve ever met,’ says Lucchino, the team’s co-owner and chief executive officer. ‘I think that element of the game very much appeals to him. And he’s a competitive guy. He wants to win. He wants to measure his success. When you put it all together, he’s got more dimensions than most baseball owners.’

As different as he may seem, Henry captures baseball’s current era. A mathematical whiz who made a fortune as a pioneering trader of commodities futures, he’s part of a wave of owners from the financial world that’s sweeping professional sports. In baseball, this includes Tampa Bay Rays owner Stuart Sternberg, a former Goldman Sachs partner, and Milwaukee Brewers owner Mark Attanasio, founder of the investment firm Crescent Capital Group. All are keenly attuned to the statistical revolution that has upended the game and compete as vigorously against each other as anyone on Wall Street. Last year, Henry shut down his commodities trading firm to concentrate on his many other endeavors. In addition to the Globe (where I’m a contributor), he and his partners own the English Premier League soccer team Liverpool and a stake in Nascar’s Roush Fenway Racing team. But just as his trading algorithms did, baseball has furnished him with the most spectacular payoffs.

Henry provides an especially good lens into how the game is changing and why the Red Sox appear poised for further success. It isn’t just that financial types are applying their smarts to baseball, it’s that baseball success has come to hinge less on signing expensive stars, as George Steinbrenner’s Yankees once did, and much more on making smarter bets than the competition on which young players will emerge as the next stars. Winning in baseball is becoming a lot like winning in futures trading.”


British Pathé has just dropped a huge trove of classic newsreels onto YouTube. One video is of American military veteran and Bronx native Christine Jorgensen (née George Jorgensen), who became world famous in 1952 for having changed genders with the use of hormone-replacement therapy. Thankfully, she was a quick-witted, confident person who could survive the attention. Below: the Pathé video and a couple of others from later in her “second” life.

1953:

1966:

1980s:

While I have plenty of concerns about technology, I don’t understand those who equate it with evil and biology with good. I’m not sure that biology doesn’t have a programmed endgame in mind for us that technology might, perhaps, counter. From E.O. Wilson’s 2005 Cosmos article, “Is Humanity Suicidal?”:

“Unlike any creature that lived before, humans have become a geophysical force, swiftly changing the atmosphere and climate as well as the composition of the world’s fauna and flora.

Now in the midst of a population explosion, this species has doubled in number to more than 6 billion during the past 50 years. It is scheduled to double again in the next 50 years. No other single species in evolutionary history has even remotely approached the sheer mass in protoplasm generated by humanity.

Darwin’s dice have rolled badly for Earth. It was a misfortune for the living world in particular, many of our scientists believe, that a carnivorous primate and not some more benign form of animal made the breakthrough.

Our species retains hereditary traits that add greatly to our destructive impact. We are tribal and aggressively territorial, intent on private space beyond minimal requirements and oriented by selfish sexual and reproductive drives. Cooperation beyond the family and tribal levels comes hard. Worse, our liking for meat causes us to use the Sun’s energy at low efficiency.” 

_________________________

“It’s doomsday”:

From “Robocopulation,” an Economist article about scientists trying to figure out polymorphism with the aid of rodent robots:

“‘How do robots have sex?’ sounds like the set-up line for a bad joke. Yet for Stefan Elfwing, a researcher in the Neural Computation Unit of Japan’s Okinawa Institute of Science and Technology (OIST), it is at the heart of discovering how and why multiple (or polymorphic) mating strategies evolve within the same population of a species. Because observing any species over hundreds of generations is impractical, Dr Elfwing and other scientists are increasingly using a combination of robots and computer simulation to model evolution. And the answer to that opening question? By swapping software ‘genotypes’ via infrared communications, ideally when facing each other 30cm apart. Not exactly a salty punchline.

Charles Darwin was intrigued by polymorphism in general and it still fascinates evolutionary biologists. The idea that more than one mating strategy can coexist in the same population of a species seems to contradict natural selection. This predicts that the optimum phenotype (any trait caused by a mix of genetic and environmental factors) will cause less successful phenotypes to become extinct.

Yet in nature there are many examples of polymorphic mating strategies within single populations of the same species, resulting in phenomena such as persistent colour and size variation within that population. Male tree lizards, for instance, use three different mating strategies correlated with throat colour and body size, and devotees of each manage to procreate.

Simulations alone can unintentionally overlook constraints found in the physical world, such as how far a critter looking for a mate can see. So the OIST team based their simulations on the actual behaviour of small, custom-made ‘cyber-rodent’ robots. This established their physical limitations, such as how they must align with each other to mate and the extent of their limited field of view.”
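The puzzle the OIST work explores, multiple mating strategies persisting in one population, is usually explained by negative frequency dependence: each strategy does best when it is rare. A minimal sketch of that mechanism follows. This is an illustrative toy model, not the cyber-rodent code; the strategy labels, payoff numbers, and mutation rate are all invented for the example.

```python
import random

def evolve(pop_size=100, generations=200, seed=42):
    """Toy model of polymorphism maintained by negative frequency
    dependence: a strategy's payoff falls as it becomes more common,
    so neither strategy drives the other extinct."""
    rng = random.Random(seed)
    # genotype: 'A' or 'B' (stand-ins for, say, roaming vs. waiting males)
    pop = ['A'] * (pop_size // 2) + ['B'] * (pop_size - pop_size // 2)
    for _ in range(generations):
        freq_a = pop.count('A') / len(pop)
        # fitness of each strategy declines with its own frequency
        fitness = {'A': 1.0 - 0.5 * freq_a, 'B': 1.0 - 0.5 * (1 - freq_a)}
        # fitness-proportional reproduction into the next generation
        pop = rng.choices(pop, weights=[fitness[g] for g in pop], k=pop_size)
        # rare mutation keeps variation from being lost to drift
        pop = [('B' if g == 'A' else 'A') if rng.random() < 0.01 else g
               for g in pop]
    return pop.count('A') / pop_size

freq = evolve()
```

Under pure directional selection one genotype would fixate; here the frequency-dependent payoffs pull the population back toward a mixed equilibrium, which is the pattern the tree-lizard example describes. The physical-constraint point in the article (limited field of view, alignment before mating) is exactly what a disembodied model like this one omits and the robots supply.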


I put up a post just a couple of weeks ago about Thomas Piketty’s Capital in the Twenty-First Century, and since then it’s quickly become an unlikely blockbuster, sold out in brick-and-mortar stores and ranked #1 on Amazon, the latest green shoot in the Occupy mindset which blossomed in these scary financial times. At Foreign Affairs, economist Tyler Cowen provides a well-written review of the work, which he finds impressive but (unsurprisingly) disagrees with in fundamental ways. The opening:

“Every now and then, the field of economics produces an important book; this is one of them. Thomas Piketty’s tome will put capitalist wealth back at the center of public debate, resurrect interest in the subject of wealth distribution, and revolutionize how people view the history of income inequality. On top of that, although the book’s prose (translated from the original French) might not qualify as scintillating, any educated person will be able to understand it — which sets the book apart from the vast majority of works by high-level economic theorists.

Piketty is best known for his collaborations during the past decade with his fellow French economist Emmanuel Saez, in which they used historical census data and archival tax records to demonstrate that present levels of income inequality in the United States resemble those of the era before World War II. Their revelations concerning the wealth concentrated among the richest one percent of Americans — and, perhaps even more striking, among the richest 0.1 percent — have provided statistical and intellectual ammunition to the left in recent years, especially during the debates sparked by the 2011 Occupy Wall Street protests and the 2012 U.S. presidential election.

In this book, Piketty keeps his focus on inequality but attempts something grander than a mere diagnosis of capitalism’s ill effects. The book presents a general theory of capitalism intended to answer a basic but profoundly important question. As Piketty puts it:

‘Do the dynamics of private capital accumulation inevitably lead to the concentration of wealth in ever fewer hands, as Karl Marx believed in the nineteenth century? Or do the balancing forces of growth, competition, and technological progress lead in later stages of development to reduced inequality and greater harmony among the classes, as Simon Kuznets thought in the twentieth century?’

Although he stops short of embracing Marx’s baleful vision, Piketty ultimately lands on the pessimistic end of the spectrum. He believes that in capitalist systems, powerful forces can push at various times toward either equality or inequality and that, therefore, ‘one should be wary of any economic determinism.’ But in the end, he concludes that, contrary to the arguments of Kuznets and other mainstream thinkers, ‘there is no natural, spontaneous process to prevent destabilizing, inegalitarian forces from prevailing permanently.’ To forestall such an outcome, Piketty proposes, among other things, a far-fetched plan for the global taxation of wealth — a call to radically redistribute the fruits of capitalism to ensure the system’s survival. This is an unsatisfying conclusion to a groundbreaking work of analysis that is frequently brilliant — but flawed, as well.”


Andy Warhol, that cyborg, was the messenger who got shot. He lived long enough, however, to participate in the early moments of the computer explosion, commissioned by Amiga to create a digital portrait of Debbie Harry. The fascinating visual artist Cory Arcangel has recovered some of Warhol’s other Amiga art. From Jonathan Jones at the Guardian:

“Thanks to the curiosity of Cory Arcangel – one of today’s most important artists working with digital technologies – a forgotten hoard of Warhol artworks has been rescued from old Amiga disks by students who ingeniously hacked into the defunct software.

The works Warhol created to commission in 1985 to help launch the Amiga 1000 computer are not earth-shattering in themselves. He essentially recreated some of his paintings as digital images.

But the meeting of Andy Warhol and a computer at the dawn of the digital age is hugely suggestive. Warhol, after all, is the man who flirted with being a machine. He wore a metallic silver wig and made paintings on a production line, with assistants silkscreening found photographs onto canvas.

This computer-like style was eerie. Yet it was not the real him. In reality, Andy Warhol was a talented draughtsman, a secret Catholic and a compassionate historian of his times. He pretended to be a machine because that was the best way he found to capture the way the world was changing. From canned soup to instant pictures, Warhol took the pulse of the age as America became a society of consumers and celebrity watchers. He portrayed reality so truly he seemed to invent it – as if one artist could create the celebrity age.

Warhol was a reporter who simply told the truth.”


George Dvorsky’s io9 post “This Could Be the First Animal to Live Entirely Inside A Computer” examines neuroscientist Stephen Larson’s attempts to create a virtual worm, which has massive implications for the future of medicine and so much else. An excerpt:

“To be fair, scientists have already simulated an entire living organism, namely the exceptionally small free-living bacterium known as Mycoplasma genitalium. It’s an amazing accomplishment, but the pathogen — with its 525 genes — is one of the world’s simplest organisms. Contrast that with E. coli, which has 4,288 genes, and humans, who have anywhere from 35,000 to 57,000 genes.

Scientists have also created synthetic DNA that can self-replicate and an artificial chromosome from scratch. Breakthroughs like these suggest it won’t be much longer before we start creating synthetic animals for the real world. Such endeavors could result in designer organisms to help in the manufacturing of vaccines, medicines, sustainable fuels, and with toxic clean-ups.

There’s a very good chance that many of these organisms, including drugs, will be designed and tested in computers first. Eventually, our machines will be powerful enough and our understanding of biology deep enough to allow us to start simulating some of the most complex biological functions — from entire microbes right through to the human mind itself (what will be known as whole brain emulations).

Needless to say we’re not going to get there in one day. We’ll have to start small and work our way up. Which is why Larson and his team have started to work on their simulated nematode worm.”


Considering the appalling way we treat animals, apart from a couple of cute ones we are very protective of, it’s worth pondering whether non-human creatures should have legal recourse. Historically, animals have taken part in court systems, though as defendants, not plaintiffs. From Charles Siebert’s New York Times Magazine article, “Should a Chimp Be Able to Sue Its Owner?”:

“Animals are hardly strangers to our courts, only to the brand of justice meted out there. In the opening chapters of [Steven] Wise’s first book, Rattling the Cage: Toward Legal Rights for Animals, published in 2000, he cites the curious and now largely forgotten history, dating at least back to the Middle Ages, of humans putting animals on trial for their perceived offenses, everything from murderous pigs, to grain-filching rats and insects, to flocks of sparrows disrupting church services with their chirping. Such proceedings — often elaborate, drawn-out courtroom dramas in which the defendants were ostensibly accorded the same legal rights as humans, right down to being appointed the best available lawyers — were essentially allegorical rituals, a means of expunging evil and restoring some sense of order to a random and disorderly world.

Among the most common nonhuman defendants cited by the British historian E. P. Evans in his 1906 book, The Criminal Prosecution and Capital Punishment of Animals, were pigs. Allowed to freely roam the narrow, winding streets of medieval villages, pigs and sows sometimes maimed and killed infants and young children. The ‘guilty’ party would regularly be brought before a magistrate to be tried and sentenced and then publicly tortured and executed in the town square, often while being hung upside down, because, as Wise explains it in Rattling the Cage, ‘a beast . . . who killed a human reversed the ordained hierarchy. . . . Inversion set the world right again.’

The practice of enlisting animals as unwitting courtroom actors in order to reinforce our own sense of justice is not as outmoded as you might think. As recently as 1906, the year Evans’s book appeared, a father-son criminal team and the attack dog they trained to be their accomplice were prosecuted in Switzerland for robbery and murder. In a trial reported in L’Écho de Paris and The New York Herald, the two men were found guilty and received life in prison. The dog — without whom, the court determined, the crime couldn’t have been committed — was condemned to death.

It has been only in the last 30 years or so that a distinct field of animal law — that is, laws and legal theory expressly for and about nonhuman animals — has emerged.”


Corporations don’t just nudge; they push hard. Trying to get us to consume products that are often injurious to us, they attack with constant messages to trigger our behavior. That’s considered freedom. But it’s stickier when governments try to influence us with sin taxes, default agreements and helpful reminders. That’s called a nanny state. Sometimes I like such initiatives (cigarette taxes) and sometimes I don’t, but they influence us less, and to healthier ends, than corporations do. From Cass Sunstein’s new Guardian article about nudging:

“The beauty of nudges is that when they are well chosen, they make people’s lives better while maintaining freedom of choice. Moreover, they usually don’t cost a lot, and they tend to have big effects. In an economically challenging time, it is no wonder that governments all over the world, including in the US and UK, have been showing a keen interest in nudging.

Inevitably, we have been seeing a backlash. Some people object that nudges are a form of unacceptable paternalism. This is an objection that has intuitive appeal, but there is a real problem with it: nudging is essentially inevitable, and so it is pointless to object to nudging as such.

The private sector nudges all the time. Whenever a government has websites, communicates with its citizens, operates cafeterias, or maintains offices that people will visit, it nudges, whether or not it intends to. Nudges might not be readily visible, but they are inevitably there. If we are sceptical about official nudging, we might limit how often it occurs, but we cannot possibly eliminate it.

Other sceptics come from the opposite direction, contending that in light of what we know about human errors, we should be focusing on mandates and bans. They ask: when we know people make bad decisions, why should we insist on preserving freedom of choice?”

Tags:

Americans have always viewed technology (and anti-technology) in romantic terms. In a New Atlantis piece, Benjamin Storey argues that Alexis de Tocqueville didn’t give tech in the U.S. short shrift but instead viewed it as a poetic impulse as much as an economic one. An excerpt:

“For Tocqueville, technology is not a set of morally neutral means employed by human beings to control our natural environment. Technology is an existential disposition intrinsically connected to the social conditions of modern democratic peoples in general and Americans in particular. On this view, to be an American democrat is to be a technological romantic. Nothing is so radical or difficult to moderate as a romantic passion, and the Americans Tocqueville observed accepted only frail and minimal restraints on their technophilia. We have long since broken many of those restraints in our quest to live up to our poetic self-image. Understanding the sources of our fascination with the technological dream, and the distance between that dream and technological reality, can help revitalize the sources of self-restraint that remain to us.

That Tocqueville presents much of his commentary on technology in the chapter of Democracy in America entitled ‘Of Some Sources of Poetry among Democratic Nations’ already indicates why his analysis of technology has been less well received than his analysis of town government or the tyranny of the majority. What, after all, does technology have to do with poetry? Wouldn’t Tocqueville have done better to offer a systematic analysis of ‘the material bases of American life,’ in the manner of an economic or industrial historian, as Garry Wills suggests?

To see what exactly poetry has to do with technological progress, we must first seek to understand Tocqueville’s account of the nature of poetry and the human need for it. We must then turn to his account of the appeal of the poetry of technology to the psychic passions of democratic man. Finally, we must consider his analysis of why democratic peoples would take an argument about the hard facts of economics or industry more seriously as a mode of understanding the question of technology than his own reflections on poetry. By doing so, we can understand something about our typical mode of self-understanding and the distinctive kind of blindness to ourselves to which we are most prone.”

Tags: ,

From the July 3, 1925 Brooklyn Daily Eagle:

“After his clever work as a plastic surgeon was completed, Dr. W.A. Pratt married his model, his patient and the ideal of his own recreating. Dr. and Mrs. Pratt have just returned to New York on the S.S. Columbus and are ‘married and very happy,’ as the culmination of a romance that began with the surgeon’s knife.

Dr. Pratt, who recently predicted ‘a perfect-featured nation’ when the skill of the plastic surgeon becomes more widely known and in demand, had been remoulding foreheads, chins, cheeks and noses for some time before he met the woman whose beauty was to make him forget his profession long enough to take a honeymoon.

‘A woman is only as charming as she is beautiful,’ he says. ‘It is only a question of time when ugly features will have disappeared from the human race.’

His story of falling in love with his model shows that his interest increased as the face under his skillful fingers became more and more lovely. ‘When the work was completed, I was wholly in love,’ he explained.”

Tags: ,

Edward Snowden’s participation in a recent dog-and-pony show about government surveillance with Vladimir Putin confirms what has long been apparent: he’s not the most astute fellow, nor one who thinks things through in advance of his actions. Russia under Putin isn’t just a place that spies on journalists but one where they mysteriously wind up dead. But even if Snowden is his own worst enemy, that doesn’t necessarily mean he’s an enemy of the state. In his new Foreign Affairs piece, “Live and Let Leak,” Jack Shafer acknowledges that whistleblowers can be dangerous but not nearly as dangerous as a government not held to account by them. An excerpt:

“With little or no public input, the U.S. government has kidnapped suspected terrorists, established secret prisons, performed ‘enhanced’ interrogations, tortured prisoners, and carried out targeted killings. After the former National Security Agency contractor Edward Snowden pilfered hundreds of thousands of documents from the NSA’s computers and released them to journalists last summer, the public learned of additional and potentially dodgy secret government programs: warrantless wiretaps, the weakening of public encryption software, the collection and warehousing of metadata from phones and e-mail accounts, and the interception of raw Internet communications.

The secrecy machine was originally designed to keep the United States’ foes at bay. But in the process, it has transformed itself into an invisible state within a state. Forever discovering new frontiers to patrol, as the Snowden files indicate, the machine molts its skin each season to grow ever larger and more powerful, encountering little resistance from the courts or Congress.”

Tags: ,

Freeman Dyson and his fellow scientists behind the 1950s Project Orion space-exploration plans had an ambitious timeline for their atomic rockets: Mars by 1965 and Saturn by 1970. But their dreams were dashed, collateral damage of the Limited Test Ban Treaty of 1963. Is it merely a dream deferred, though? The opening of Richard Hollingham’s new BBC article on the topic:

“Project Orion has to be the most audacious, dangerous and downright absurd space programme ever funded by the US taxpayer. This 1950s design involved exploding nuclear bombs behind a spacecraft the size of the Empire State Building to propel it through space. The Orion’s engine would generate enormous amounts of energy – and with it lethal doses of radiation.

Plans suggested the spacecraft could take off from Earth and travel to Mars and back in just three months. The quickest flight using conventional rockets and the right planetary alignment is 18 months.

There were obvious challenges – from irradiating the crew and the launch site, to the disruption caused by the electromagnetic pulse, plus the dangers of a catastrophic nuclear accident taking out a sizable portion of the US. But the plan was, nevertheless, given serious consideration. Project Orion was conceived when atmospheric nuclear tests were commonplace and the power of the atom promised us all a bright new tomorrow. Or oblivion. Life was simpler then.

In the early 1960s, common sense prevailed and the project was abandoned, but the idea of nuclear-powered spaceships has never gone away. In fact there are several in the cold depths of space right now.”

 _____________________________

“First time we tried it, the thing took off like a bat out of hell”:

Tags: ,

Charles Hatfield, rainmaker, 1915.

Of the things humans still can’t do, making it rain is one of the more perplexing. We should be able to manage that by now, right? You would think making nuclear power and traveling to the moon would be tougher. We finally may be making progress in this area. From Kurzweil AI:

“Researchers at the University of Central Florida’s College of Optics & Photonics and the University of Arizona have further developed a new technique to aim a high-energy laser beam into clouds to make it rain or trigger lightning.

The solution: surround the beam with a second beam to act as an energy reservoir, sustaining the central beam to greater distances than previously possible. The secondary ‘dress’ beam refuels and helps prevent the dissipation of the high-intensity primary beam, which on its own would break down quickly.

A report on the project, ‘Externally refueled optical filaments,’ was recently published in Nature Photonics.

Water condensation and lightning activity in clouds are linked to large amounts of static charged particles. Stimulating those particles with the right kind of laser holds the key to possibly one day summoning a shower when and where it is needed.”

Thanks to the wonderful Browser, I came across one of the three best pieces I’ve read so far this year (along with this one and this one), Roger Highfield’s eye-opening Mosaic article “The Mind Readers,” which focuses on the life that lingers beneath the surface when humans are rendered into a vegetative state. A passage:

“Half a century ago, if your heart stopped beating you could be pronounced dead even though you may have been entirely conscious as the doctor sent you to the morgue. This, in all likelihood, accounts for notorious accounts through history of those who ‘came back from the dead’. As a corollary, those who were fearful of being buried alive were spurred on to develop ‘safety coffins’ equipped with feeding tubes and bells. As recently as 2011, a council in the Malatya province of central Turkey announced it had built a morgue with a warning system and refrigerator doors that could be opened from the inside.

What do we mean by ‘dead’? And who should declare when an individual is dead? A priest? A lawyer? A doctor? A machine? [Adrian] Owen discussed these issues at a symposium in Brazil with the Dalai Lama and says he was surprised to find that they both agreed strongly on one point: we need to create an ethical framework for science that is based on secular, rather than religious, views; science alone should define what we mean by death.

The problem is that the scientific definition of ‘death’ remains as unresolved as the definition of ‘consciousness’. Much confusion is sowed by the term ‘clinical death’, the cessation of blood circulation and breathing. Even though this is reversible, the term is often used by mind–body dualists who cling to the belief that a soul (or self) can persist separately from the body. Today, however, being alive is no longer linked to having a beating heart, explains Owen. If I have an artificial heart, am I dead? If you are on a life-support machine, are you dead? Is a failure to sustain independent life a reasonable definition of death? No, otherwise we would all be ‘dead’ in the nine months before birth.

The issue becomes murkier when we consider those trapped in the twilight worlds between normal life and death – from those who slip in and out of awareness, who are trapped in a ‘minimally conscious state’, to those who are severely impaired in a vegetative state or a coma. These patients first appeared in the wake of the development of the artificial respirator during the 1950s in Denmark, an invention that redefined the end of life in terms of the idea of brain death and created the specialty of intensive care, in which unresponsive and comatose patients who seemed unable to wake up again were written off as ‘vegetables’ or ‘jellyfish’. As is always the case when treating patients, definitions are critical: understanding the chances of recovery, the benefits of treatments and so on all depend on a precise diagnosis.”

Brad Templeton, a consultant to Google’s driverless-car division, explaining why he thinks delivery robots, which will transport goods and not people, shouldn’t be governed by the same restrictions as autonomous cars:

“Delivery robots are world-changing. While they won’t and can’t carry people, they will change retailing, logistics, the supply chain, and even going to the airport in huge ways. By offering very quick delivery of every type of physical goods — less than 30 minutes — at a very low price (a few pennies a mile) and on the schedule of the recipient, they will disrupt the supply chain of everything. Others, including Amazon, are working on doing this by flying drone, but for delivery of heavier items and efficient delivery, the ground is the way to go.

While making fully unmanned vehicles is more challenging than ones supervised by their passenger, the delivery robot is a much easier problem than the self-delivering taxi for many reasons:

  • It can’t kill its cargo, and thus needs no crumple zones, airbags or other passive internal safety.
  • It still must not hurt people on the street, but its cargo is not impatient, and it can go more slowly to stay safer. It can also pull to the side frequently to let people pass if needed.
  • It doesn’t have to travel the quickest route, and so it can limit itself to low-speed streets it knows are safer.
  • It needs no windshield or wheel, and can be small, light and very inexpensive.

A typical deliverbot might look like little more than a suitcase-sized box on 3 or 4 wheels. It would have sensors, of course, but little more inside than batteries and a small electric motor. It probably will be covered in padding or pre-inflated airbags, to assure it does the least damage possible if it does hit somebody or something. At a weight of under 100 lbs, with a speed of only 25 km/h and balloon padding all around, it probably couldn’t kill you even if it hit you head-on (though that would still hurt quite a bit).

The point is that this is an easier problem, and so we might see development of it before we see full-on taxis for people.”
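Templeton’s safety claim invites a quick back-of-envelope check. The sketch below takes the robot figures from the quote (100 lbs, 25 km/h) and compares its kinetic energy against an illustrative passenger car (the 1,500 kg and 50 km/h car figures are my assumptions, not Templeton’s):

```python
# Back-of-envelope kinetic energy comparison: lightweight delivery robot
# versus a typical passenger car. Robot figures (100 lb, 25 km/h) are from
# the quoted passage; car figures (1,500 kg, 50 km/h) are illustrative.

LB_TO_KG = 0.4536       # pounds to kilograms
KMH_TO_MS = 1 / 3.6     # km/h to metres per second

def kinetic_energy_joules(mass_kg: float, speed_kmh: float) -> float:
    """KE = 1/2 * m * v^2, with speed supplied in km/h."""
    v = speed_kmh * KMH_TO_MS
    return 0.5 * mass_kg * v ** 2

robot_ke = kinetic_energy_joules(100 * LB_TO_KG, 25)   # ~1.1 kJ
car_ke = kinetic_energy_joules(1500, 50)               # ~145 kJ

print(f"Robot: {robot_ke:.0f} J")
print(f"Car:   {car_ke:.0f} J")
print(f"Car carries ~{car_ke / robot_ke:.0f}x the robot's kinetic energy")
```

A head-on hit at roughly 1 kJ is on the order of a jogging adult’s kinetic energy — painful, as Templeton concedes, but two orders of magnitude below a low-speed car collision, which is consistent with (though it doesn’t prove) the “couldn’t kill you” claim.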

Tags:

David J. Cord, who wrote the book on Nokia’s collapse (quite literally), just did an Ask Me Anything at Reddit about all things mobile. A few exchanges follow.

______________________________

Question:

What do you think is the future in mobile?

David J. Cord:

A new disruption will happen within about five years, maybe much sooner. Historically, a disruption occurs whenever the next generation of mobile technologies becomes fairly widespread. 4G is just starting to take off.

I think the next disruption could be wearable devices. But not Google Glass. Glass comes from Google’s existing business – primarily communication, search and location-based services. By definition, the disruption will come from out of the blue. It will either be a new player in the industry or a startup. But it won’t be Google, and it won’t be Apple or Samsung.

______________________________

Question:

Will we see a big improvement on mobile phone battery life any time soon?

David J. Cord:

No. The demands for power are increasing faster than battery technologies. I know there are some potentially big improvements that are being worked on, but it is difficult to commercialise them and make them financially viable. I suspect over the next few years battery life will either stay stagnant or even get worse.

______________________________

Question:

Are you able to discuss the privacy implications of the newest mobile devices – tracking by GPS, Google Glass and facial recognition software – and how you see that evolving?

I’m a Luddite with a flip phone who won’t go near anyone wearing Google Glass if I can avoid it.

David J. Cord:

Tracking is becoming all-pervasive, and the very concept of privacy is morphing into something entirely different. In some ways, this is brought about by the consumer: younger kids are much more willing to share extremely private information with their friends and the world at large. Meanwhile, technology is collecting more and more information.

One industry expert explained to me how pictures posted online could be used, and it was quite disquieting. Geotagging, information about time and place and habits are all collected.

Technology moves much faster than regulations, so it will be some time before we become used to the current state of privacy and what is allowed and what is not allowed. It will take public debate.

Question:

I dislike what the NSA is doing, but what the big consumer marketing companies are gathering on everybody is terrifying.

I won’t touch Facebook or Twitter, I go to great lengths to keep my information off Google (Google my real name in any permutation and nothing accurate will come up, thank God), but I feel like it’s a losing battle. Who wants to live in a glass fishbowl? <sigh>

David J. Cord:

There is a balance to consider. Do you want to be able to interact freely online? Do you need to use online communications for your job? Then you have to be willing to give up some privacy. Is privacy more important? Then you have to be willing to give up some ease of using online communications.

Everyone needs to decide what is more important for them.

______________________________

Question:

Do you think one day we will have phones / devices implanted into our bodies? Take Google Glass for example, and place the entire device in your head.

David J. Cord:

Yes, but it probably won’t be common as soon as some of the futurologists think. There are a lot of hurdles that need to be jumped first. There are technological challenges, as well as societal, health and regulatory. For instance, would this be considered a medical device and need FDA approval? It depends upon what it does, and what the regulators think it does.•

 

Tags:
