Science/Tech


From Joseph Stiglitz, on tax day, in the New York Times:

“Leona Helmsley, the hotel chain executive who was convicted of federal tax evasion in 1989, was notorious for, among other things, reportedly having said that ‘only the little people pay taxes.’

As a statement of principle, the quotation may well have earned Mrs. Helmsley, who died in 2007, the title Queen of Mean. But as a prediction about the fairness of American tax policy, Mrs. Helmsley’s remark might actually have been prescient.

Today, the deadline for filing individual income-tax returns, is a day when Americans would do well to pause and reflect on our tax system and the society it creates. No one enjoys paying taxes, and yet all but the extreme libertarians agree, as Oliver Wendell Holmes said, that taxes are the price we pay for civilized society. But in recent decades, the burden for paying that price has been distributed in increasingly unfair ways.

About 6 in 10 of us believe that the tax system is unfair — and they’re right: put simply, the very rich don’t pay their fair share.”


From “Civilization Vs. Human Desire,” a post at the Overcoming Bias blog, Robin Hanson explains why our wants drive the future better than planning ever could:

“So, if we could, we’d pick futures that transfer to us, honor us, preserve our ways, and act warm and moral by our standards. But we don’t get what we’d want. That is, we mostly don’t consciously and deliberately choose to change civilization according to our preferences. Instead, changes are mostly side effects of our each trying to get what we want now. Civilizations change as cultures and technologies are selected for being more militarily, rhetorically, economically, etc. powerful, and for giving people what they now want. This is mostly out of anyone’s control, and yes it could end very badly.

And yet, it is our unique willingness and ability to let our civilization change and be selected by forces out of our control, and then to tell us that we like it, that has let our species dominate the Earth, and gives us a good chance to dominate the galaxy and more. While our descendants may be somewhat less happy than us, or than our distant ancestors, there may be trillions of trillions or more of them. I more fear a serious attempt by overall humanity to coordinate to dictate its future, than I fear this out of control process.”


From Holman W. Jenkins, Jr.’s new Wall Street Journal interview with the ever-fascinating Ray Kurzweil:

“Mr. Kurzweil’s frank efforts to outwit death have earned him an exaggerated reputation for solemnity, even caused some to portray him as a humorless obsessive. This is wrong. Like the best comedians, especially the best Jewish comedians, he doesn’t tell you when to laugh. Of the pushback he receives from certain theologians who insist death is necessary and ennobling, he snarks, ‘Oh, death, that tragic thing? That’s really a good thing.’

‘People say, ‘Oh, only the rich are going to have these technologies you speak of.’ And I say, ‘Yeah, like cellphones.’

To listen to Mr. Kurzweil or read his several books (the latest: How to Create a Mind) is to be flummoxed by a series of forecasts that hardly seem realizable in the next 40 years. But this is merely a flaw in my brain, he assures me. Humans are wired to expect ‘linear’ change from their world. They have a hard time grasping the ‘accelerating, exponential’ change that is the nature of information technology.

‘A kid in Africa with a smartphone is walking around with a trillion dollars of computation circa 1970,’ he says. Project that rate forward, and everything will change dramatically in the next few decades.

‘I’m right on the cusp,’ he adds. ‘I think some of us will make it through’—he means baby boomers, who can hope to experience practical immortality if they hang on for another 15 years.”
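Kurzweil’s linear-versus-exponential point is easy to nod along with and easy to underestimate. As a rough illustration (my own numbers, with an assumed doubling time rather than anything from the interview), here is how quickly a linear extrapolation of computing price-performance falls behind an exponential one:

```python
# Illustrative only: contrasts a linear projection of computing price-performance
# with an exponential one. The 1.5-year doubling time is an assumed, Moore's-law-style
# figure, not a number taken from the Kurzweil interview.

def linear_projection(start, annual_increment, years):
    """Grow by a fixed amount each year (roughly how our intuition extrapolates)."""
    return start + annual_increment * years

def exponential_projection(start, doubling_time_years, years):
    """Double every `doubling_time_years` (roughly how information tech has behaved)."""
    return start * 2 ** (years / doubling_time_years)

if __name__ == "__main__":
    start = 1.0             # normalized price-performance today
    annual_increment = 1.0  # linear guess: add one "unit" per year
    doubling_time = 1.5     # assumed doubling time, in years

    for years in (5, 10, 20, 40):
        lin = linear_projection(start, annual_increment, years)
        exp = exponential_projection(start, doubling_time, years)
        print(f"{years:>2} years out: linear ~{lin:,.0f}x, exponential ~{exp:,.0f}x")
```

On those assumptions the two projections look similar for a few years and then diverge by many orders of magnitude, which is the gap Kurzweil says our brains are wired to miss.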


From Rachel Hardwick’s new Vice interview with Dennis M. Hope, a man who claims to “own the moon”:

Vice:

How did you end up owning and selling off chunks of the moon?

Dennis M. Hope: 

I started in 1980 when I was going through a divorce. I was out of money and thought maybe I could make some if I owned some property, then I looked out the window, saw the moon, and thought, Hey, there’s a load of property! So I went to the library, looked up the 1967 Outer Space Treaty and, sure enough, Article 2 states: ‘No nation by appropriation shall have sovereignty or control over any of the satellite bodies.’ Meaning it was unowned land.

Vice:

But how did you acquire it?

Dennis M. Hope:

I just filed a claim of ownership for the moon, the other eight planets and their moons, and sent it to the United Nations with a note stating that my intent was to subdivide and sell the property to anybody who wanted it. I told them that if they had a legal problem with it they should please let me know.

Vice:

Did they ever get back to you?

Dennis M. Hope:

They never responded. Shame on them! I’ve never had a challenge to my claim of ownership by any government on this planet, period. I’ve had a lot of people telling me I don’t have the right to do this, but that’s just their opinion.

Vice:

So how much land have you sold so far?

Dennis M. Hope:

Well, this is the only job I’ve had since 1995, which is when I started doing this full time. We’ve sold 611 million acres of land on the moon, 325 million acres on Mars, and a combined 125 million acres on Venus, Io, and Mercury.”


A Nicholas Thompson post at the New Yorker blog looks at the peculiar state of Stanford University, where administrators tacitly encourage students to drop out of school to create riches for themselves and their elders. It’s not so much a sign of the decentralization of modern education as it is a contemporary tale of a gold rush and the things that get lost in the haste. The post’s opening:

“Is Stanford still a university? The Wall Street Journal recently reported that more than a dozen students—both undergraduate and graduate—have left school to work on a new technology start-up called Clinkle. Faculty members have invested, the former dean of Stanford’s business school is on the board, and one computer-science professor who taught several of the employees now owns shares. The founder of Clinkle was an undergraduate advisee of the president of the university, John Hennessy, who has also been advising the company. Clinkle deals with mobile payments, and, if all goes well, there will be many payments to many people on campus. Maybe, as it did with Google, Stanford will get stock grants. There are conflicts of interest here; and questions of power dynamics. The leadership of a university has encouraged an endeavor in which students drop out in order to do something that will enrich the faculty.

Stanford has been heading in this direction for a while.”

 


From a post at the great Paleofuture blog which recalls a 1969 prediction about the polarizing potential of narrowcasting, a term that had yet to be coined:

“Imagine a world where the only media you consume serves to reinforce your particular set of steadfast political beliefs. Sounds like a pretty far-out dystopia, right? Well, in 1969, Internet pioneer Paul Baran predicted just that.

In a paper titled “On the Impact of the New Communications Media Upon Social Values,” Baran (who passed away in 2011) looked at how Americans might be affected by the media landscape of tomorrow. The paper examined everything from the role of media technology in the classroom to the social effects of the portable telephone — a device not yet in existence that he predicted as having the potential to disrupt our lives immensely with unwanted calls at inopportune times.

Perhaps most interestingly, Baran also anticipated the political polarization of American media; the kind of polarization that media scholars here in the 21st century are desperately trying to better understand.

Baran understood that with an increasing number of channels on which to deliver information, there would be more and more preaching to the choir, as it were. Which is to say, that when people of the future find a newspaper or TV network or blog (which obviously wasn’t a thing yet) that perfectly fits their ideology and continuously tells them that their beliefs are correct, Americans will see little reason to communicate meaningfully with others who don’t share those beliefs.”


Mantis, a two-ton robot with room for a passenger.

You have to have a lot of faith in humanity to be an anarchist. Have you met people? They’re awful.

The collapse of Wall Street, the sway of corporations that see us as consumers rather than citizens, grave concerns about our environment and the decentralization of communication have opened a door for anarchic movements in the form of Occupy Wall Street and beyond. If only I had more faith in people, the awful, awful people.

An excerpt from an excellent interview that Gawker’s Adrian Chen conducted with anarchist, author and scholar David Graeber:

Question:

One of the major themes of your book is that the current political structure is not at all democratic. I think among the people who would read your book, that’s kind of a given. But you go further in pointing out the anti-democratic nature of the Founding Fathers.

David Graeber:

Most people think these guys had something to do with democracy, but nobody ever reads what they actually said. What they said is very explicit: They would say things like ‘We need to do something about all this democracy.’

Question:

So as an alternative, you promote the model of consensus that Occupy used to organize, through its General Assembly.

David Graeber:

Yeah. What we wanted to do was A) change the discourse and then B) create a culture of democracy in America, which really hasn’t had one. I mean direct democracy, hands on, let’s figure out how you make this system together. It’s ironic because if you go to someplace like Madagascar, everybody knows how to do that. They sit in a circle and they do a consensus process. There is a way that you can do these things, that millions and millions of people over human history have developed and it comes out pretty much the same wherever they are because there are certain logical constraints and people being what they are.

Consensus isn’t just about agreement. It’s about changing things around: You get a proposal, you work something out, people foresee problems, you do creative synthesis. At the end of it you come up with something that everyone thinks is okay. Most people like it, and nobody hates it.

Question:

This is pretty much the opposite of what goes on in mainstream politics.

David Graeber:

Yeah, exactly. It’s like, ‘People can be reasonable, I didn’t think it was possible!’ And that’s something I’ve noticed, that authoritarian regimes, what they do is that they always come up with some way to teach people about political decision making that says people aren’t basically reasonable, so don’t try this at home. I always point out the difference between the Athenian Agora and the Roman Circus. When most Athenians gathered together in a big mass it was to do direct democracy. But here’s Rome, this authoritarian regime. When did most Romans get together in the same place? If they’re voting on anything it’s like thumbs-up or thumbs-down to kill some gladiator. And these things are all organized by the elite, right? So all the people who are really running things throw these games where they basically organize people into giant lynch mobs. And then they say, ‘Look, see how people behave! You don’t want to have Democracy!'”


A Hindustan Times report profiles the attempts of a Russian oligarch to create cyborgs by 2045, essentially defeating death. Even if he is successful (very unlikely), what he preserves won’t be exactly you or me. The opening:

“A Russian billionaire has unveiled plans to make humans immortal by converting them into ‘Terminator-style’ cyborgs – a creature that’s part human and part machine – within the next three decades.

Thirty-two-year-old mogul Dmitry Itskov has been pushing the project forward since

His ultimate goal is to transfer a person’s mind or consciousness from a living brain into a machine, with that person’s personality and memories intact, according to the website Digital Trends.

The so-called ‘Cyborg’ will have no physical form; it will exist in a network similar to the Internet and be able to travel at the speed of light all over the Earth, or even into space.”

 


There’s no better source for thought-provoking essays on the web than the remarkable Aeon site. There are several pieces each week that make me glad the Internet exists. The latest pair of examples are Leo Hollis’ exploration of future-proofing cities in an age of extreme weather and Jessa Gamble’s study of technological “remedies” for sleep.

“We do not, however, need to rely on speculation to imagine the impact of extreme weather events on the city. We have seen this scenario unfold before.

On Thursday, 13 July, 1995, the temperature in downtown Chicago rose to a record 104ºF (40ºC), the high point in an unrelenting week of heat. Combined with high humidity, it was so hot that it was almost impossible to move around without discomfort. At the beginning of the week, people made jokes, broke open beers and celebrated the arrival of the good weather. But after seven days and nights of ceaseless heat, according to the Chicago Tribune:
Overheated Chicagoans opened an estimated 3,000 fire hydrants, leading to record water use. The Chicago Park District curtailed programs to keep children from exerting themselves in the heat. Swimming pools were packed, while some sought relief in cool basements. People attended baseball games with wet towels on their heads. Roads buckled and some drawbridges had to be hosed down to close properly.

Only once the worst of the heatwave had passed were authorities able to audit the damage. More than 739 people died from heat exhaustion, dehydration, or kidney failure, despite warnings from meteorologists that dangerous weather was on its way. Hospitals found it impossible to cope. In a vain attempt to help, one owner of a meatpacking firm offered one of his refrigeration trucks to store the dead; it was so quickly filled with the bodies of the poor, infirm and elderly that he had to send eight more vehicles. Afterwards, the autopsies told a grim, predictable tale: the majority of the dead were old people who had run out of water, or had been stuck in overheated apartments, abandoned by their neighbours.

In response to the crisis, a team from the US Centers for Disease Control and Prevention (CDC) scoured the city for the causes of such a high number of deaths, hoping to prevent a similar disaster elsewhere. The results were predictable: the people who died had failed to find assistance or refuge. They had died on their own, without help. In effect, the report blamed the dead for their failure to leave their apartments, ensure that they had enough water, or check that the air conditioning was working.

These two scenarios offer a bleak condemnation of our urban future. Natural disasters appear to be inevitable, and yet we seem largely incapable of readying ourselves for the unexpected. What can we do to prepare, and perhaps prevent, coming catastrophe?”

“Work, friendships, exercise, parenting, eating, reading — there just aren’t enough hours in the day. To live fully, many of us carve those extra hours out of our sleep time. Then we pay for it the next day. A thirst for life leads many to pine for a drastic reduction, if not elimination, of the human need for sleep. Little wonder: if there were a widespread disease that similarly deprived people of a third of their conscious lives, the search for a cure would be lavishly funded. It’s the Holy Grail of sleep researchers, and they might be closing in.

As with most human behaviours, it’s hard to tease out our biological need for sleep from the cultural practices that interpret it. The practice of sleeping for eight hours on a soft, raised platform, alone or in pairs, is actually atypical for humans. Many traditional societies sleep more sporadically, and social activity carries on throughout the night. Group members get up when something interesting is going on, and sometimes they fall asleep in the middle of a conversation as a polite way of exiting an argument. Sleeping is universal, but there is glorious diversity in the ways we accomplish it.

Different species also seem to vary widely in their sleeping behaviours. Herbivores sleep far less than carnivores — four hours for an elephant, compared with almost 20 hours for a lion — presumably because it takes them longer to feed themselves, and vigilance is selected for. As omnivores, humans fall between the two sleep orientations. Circadian rhythms, the body’s master clock, allow us to anticipate daily environmental cycles and arrange our organs’ functions along a timeline so that they do not interfere with one another.

Our internal clock is based on a chemical oscillation, a feedback loop on the cellular level that takes 24 hours to complete and is overseen by a clump of brain cells behind our eyes (near the meeting point of our optic nerves). Even deep in a cave with no access to light or clocks, our bodies keep an internal schedule of almost exactly 24 hours. This isolated state is called ‘free-running’, and we know it’s driven from within because our body clock runs just a bit slow. When there is no light to reset it, we wake up a few minutes later each day. It’s a deeply engrained cycle found in every known multi-cellular organism, as inevitable as the rotation of the Earth — and the corresponding day-night cycles — that shaped it.

Human sleep comprises several 90-minute cycles of brain activity. In a person who is awake, electroencephalogram (EEG) readings are very complex, but as sleep sets in, the brain waves get slower, descending through Stage 1 (relaxation) and Stage 2 (light sleep) down to Stage 3 and slow-wave deep sleep. After this restorative phase, the brain has a spurt of rapid eye movement (REM) sleep, which in many ways resembles the waking brain. Woken from this phase, sleepers are likely to report dreaming.

One of the most valuable outcomes of work on sleep deprivation is the emergence of clear individual differences — groups of people who reliably perform better after sleepless nights, as well as those who suffer disproportionately. The division is quite stark and seems based on a few gene variants that code for neurotransmitter receptors, opening the possibility that it will soon be possible to tailor stimulant variety and dosage to genetic type.

Around the turn of this millennium, the biological imperative to sleep for a third of every 24-hour period began to seem quaint and unnecessary. Just as the birth control pill had uncoupled sex from reproduction, designer stimulants seemed poised to remove us yet further from the archaic requirements of the animal kingdom.” 


Truck-platooning, in which a single driver leads a convoy of automated delivery vehicles, is being tested in Japan. From Steven Ashley at the BBC:

“Imagine cruising down a three-lane highway and rounding a bend to find four trucks rolling along in single-file. They are all traveling close together – perhaps too close – but otherwise everything seems normal.

Yet as you pass the trailing truck, you look up through the sun roof to see the driver on a mobile phone. He should know better, you think as you slide by. Passing the next one, the driver appears to be sipping a cup of coffee and you could swear that he’s watching TV. That can’t be right, but you power on regardless. Then, coming alongside the third, there seems to be no driver at all. You must be mistaken, you tell yourself, as the truck stays in lane and otherwise rides as per usual.

By the time you glance up at the lead truck, you glimpse a driver concentrating on the road. Perhaps your mind was playing tricks on you after all.

Or maybe not. In February this year, a similar line-up of four large trucks circled an oval test track in Tsukuba City, Japan to help get so-called ‘truck platooning’ technology ready for real-world use.”


Roger Ebert was one of the best newspaper writers ever–lucid, interesting, prolific, intelligent, inviting–in the same league as Breslin, Hamill or Royko. My interaction with him was minimal: I interviewed the critic once by phone and spoke to him another time at the Toronto festival about the Jessica Yu film, In the Realms of the Unreal, which we both loved. He was naturally argumentative and cantankerous but remarkably generous and open-minded and egalitarian and warm. And he was steadfastly progressive in regards to women and minorities, to people who didn’t have the kind of platform he had carved for himself. Ebert was truly the King of All Media, and I’m constantly amazed at how such an ink-stained wretch found his way not only through the world of television but through all areas of the new communication platforms. 

The odd thing is that outside of his early years, Ebert had pretty lousy, hit-or-miss taste in film. He wasn’t a blurb whore like, say, Jeffrey Lyons (who used to loudly mock Ebert’s appearance in vicious, personal terms at New York screenings). He just lost his critical compass by the late 1970s. Sometimes Ebert’s aforementioned progressive politics seemed to get in the way of his critical eye: He disliked Blue Velvet in part because of how Isabella Rossellini’s character was treated, and he named Eve’s Bayou, a good film, the best film of 1997, the same year that saw the release of Boogie Nights, Fast, Cheap & Out of Control, L.A. Confidential, etc. But often he just seemed to make odd choices (e.g., hating Jim Jarmusch’s Dead Man) that someone with his intelligence shouldn’t. 

But if Ebert’s taste faltered, his writing and soul never, ever did. He was an amazing guy who left the world a better, smarter place because of his presence. He was loved and will be missed.

In the New York Times, David Carr, who is Ebert’s equal as a writer, examines the Chicagoan’s empire-building skills. The opening:

“At journalism conferences and online, media strivers talk over and over about becoming their own brand, hoping that some magical combination of tweets, video spots, appearances and, yes, even actual written articles, will help their name come to mean something.

As if that were a new thing.

Since Roger Ebert’s death on Thursday, many wonderful things have been said about his writing gifts at The Chicago Sun-Times, critical skills that led to a Pulitzer Prize in 1975, the first given for movie criticism. We can stipulate all of that, but let’s also remember that a big part of what he left behind was a remarkable template for how a lone journalist can become something much more.

Mr. Ebert was, in retrospect, a very modern figure. Long before the media world became cluttered with search optimization consultants, social media experts and brand-management gurus, Mr. Ebert used all available technologies and platforms to advance both his love of film and his own professional interests.

He clearly loved newspapers, but he wasn’t a weepy nostalgist either. He was an early adopter on the Web, with a CompuServe account he was very proud of, and unlike so many of his ink-splattered brethren, he grabbed new gadgets with both hands.

But it wasn’t just a grasp of technology that made him a figure worthy of consideration and emulation.

Though he was viewed as a movie critic with the soul of a poet, he also had killer business instincts. A journalist since the 1960s, he not only survived endless tumult in the craft, he thrived by embracing new opportunity and expanding his franchise at every turn.”


The Orion Project, the brainchild of Freeman Dyson and other scientists more than five decades ago, aimed to make far-flung space travel possible in the short term via nuclear-fueled rockets, but it became collateral damage of nonproliferation treaties. NASA is now trying to make travel times brief with a new fusion engine. From Iain Thomson in the UK Register:

“NASA, and plenty of private individuals, want to put mankind on Mars. Now a team at the University of Washington, funded by the space agency, is about to start building a fusion engine that could get humans there in just 30 days and make other forms of space travel obsolete.

‘Using existing rocket fuels, it’s nearly impossible for humans to explore much beyond Earth,’ said lead researcher John Slough, a UW research associate professor of aeronautics and astronautics, in a statement. ‘We are hoping to give us a much more powerful source of energy in space that could eventually lead to making interplanetary travel commonplace.’

The proposed Fusion Driven Rocket (FDR) is a 150-ton system that uses magnetism to compress lithium or aluminum metal bands around a deuterium-tritium fuel pellet to initiate fusion. The resultant microsecond reaction forces the propellant mass out at 30 kilometers per second, and would be able to pulse every minute or so and not cause g-force damage to the spacecraft’s occupants.”
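To get a feel for why a 30-kilometer-per-second exhaust velocity matters, it helps to run the quoted figure through the standard rocket arithmetic. This is a back-of-the-envelope sketch of mine, not anything from the Register piece: the chemical-engine comparison value and the propellant fractions are assumptions.

```python
# Back-of-the-envelope check on the quoted 30 km/s exhaust velocity.
# Assumptions (mine, not the article's): standard gravity for the Isp conversion,
# ~4.5 km/s as a typical chemical-rocket exhaust velocity, and illustrative
# propellant mass fractions.
import math

G0 = 9.80665  # standard gravity, m/s^2

def specific_impulse(exhaust_velocity_ms):
    """Specific impulse in seconds for a given effective exhaust velocity."""
    return exhaust_velocity_ms / G0

def delta_v(exhaust_velocity_ms, propellant_fraction):
    """Tsiolkovsky rocket equation: achievable delta-v for a propellant mass fraction."""
    mass_ratio = 1.0 / (1.0 - propellant_fraction)
    return exhaust_velocity_ms * math.log(mass_ratio)

if __name__ == "__main__":
    fusion_ve = 30_000.0   # m/s, the figure quoted in the article
    chemical_ve = 4_500.0  # m/s, assumed for comparison

    print(f"Specific impulse: fusion ~{specific_impulse(fusion_ve):,.0f} s, "
          f"chemical ~{specific_impulse(chemical_ve):,.0f} s")
    for frac in (0.3, 0.5):
        print(f"Propellant fraction {frac:.0%}: "
              f"fusion delta-v ~{delta_v(fusion_ve, frac) / 1000:.1f} km/s, "
              f"chemical ~{delta_v(chemical_ve, frac) / 1000:.1f} km/s")
```

With half the ship’s mass as propellant, the quoted exhaust velocity works out to roughly 20 km/s of delta-v, several times what a chemical stage of the same mass fraction could deliver, which is what makes the 30-day Mars estimate arithmetically plausible.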


From Oliver Burkeman’s Guardian column about the Helsinki Bus Station Theory, which stresses the importance of persistence, rather than originality, in creativity:

“There are two reasons this metaphor is so compelling – apart from the sheer fact that it’s Finland-related, I mean. One is how vividly it illustrates a critical insight about persistence: that in the first weeks or years of any worthwhile project, feedback – whether from your own emotions, or from other people – isn’t a reliable indication of how you’re doing. (This shouldn’t be confused with the dodgy dictum that triggering hostile reactions means you must be doing the right thing; it just doesn’t prove you’re doing the wrong one.) The second point concerns the perils of a world that fetishises originality. A hundred self-help books urge you to have the guts to be ‘different’: the kid who drops out of university to launch a crazy-sounding startup becomes a cultural hero… yet the Helsinki theory suggests that if you pursue originality too vigorously, you’ll never reach it. Sometimes it takes more guts to keep trudging down a pre-trodden path, to the originality beyond.”


Robots, not humans, should be calling balls and strikes at every Major League Baseball game. It should have been this way for years. There was a sad reminder of the reluctance to automate this aspect of umpiring on Sunday night when Marty Foster ended the Tampa Bay-Texas game with one of the worst ball-and-strike calls imaginable.

Of course, we won’t be seeing the game utilizing computers to greater effect anytime soon. Commissioner Bud Selig and his inner circle have shown a shocking incompetence in regard to most of the key issues facing the game–instant replay, home-plate collisions, the untenable stadium situations of the A’s and the Rays, the Mets’ festering ownership crisis–procrastinating rather than acting. These failings have been papered over by the sport’s runaway profits, which have everything to do with the explosion of regional cable and its hunger for a quantity of family-friendly events that defy time-shifting, and little to do with anything in particular that Selig has done.

What are the arguments for not using software to call strikes? There is a tradition of catchers framing pitches, purists will say, which will be lost. A small sacrifice that will eliminate larger issues. You don’t want that much variance in the execution of the rules of any sport, with the egos of the least-important “participants” taking center stage. A loose application of rules also opens up the possibility of officials tilting games for illicit purposes (not the case with the Foster call, of course). And inconsistent outcomes due to human error are among the reasons why boxing has seen such a decline. (Knowledge of the impact of head injuries has been just as damaging, thinning the ranks of talent.) Professional basketball’s referee scandal of a few years back occurred because the rules of the game allow for far too much interpretation. That needn’t be the case with baseball.

The human element will be lost, the stalwarts argue, not acknowledging that baseball is not some pastoral pastime but a multibillion-dollar industry, and one that could easily afford to ensure its integrity were it not for the lethargy and myopia of its highest ranks.•


The opening of “The Rise of the Small and Narrow Vehicle,” Brad Templeton’s blog post about how car design will be transformed when (if) we eventually live in a world of driverless, automated taxis:

“Many of the more interesting consequences of a robotic taxi ‘mobility on demand’ service is the ability to open up all sorts of new areas of car design. When you are just summoning a vehicle for one trip, you can be sent a vehicle that is well matched to that trip. Today we almost all drive in 5 passenger sedans or larger, whether we are alone, with a single passenger or in a group. Many always travel in an SUV or Minivan on trips that have no need of that.

The ability to use small, light vehicles means the ability to make transportation much more efficient. While electric cars are a good start (at least in places without coal-based electricity) the reality is today’s electric cars are still sedans and in fact are heavy due to their batteries. As such they use 250 to 350 watt-hours/mile. That’s good, but not great. At the national grid average, 300 wh/mile is around 3000 BTUs/mile or the equivalent of 37mpg. Good, and cleaner if from natural gas, but we can do a lot more.

Half-width vehicles have another benefit — they don’t take up much room on the road, or in parking/waiting. Two half-width vehicles that find one another on the road can pair up to take only one lane space. A road that’s heavy with half-width vehicles (as many are in the developing world) can handle a lot more traffic. Rich folks don’t tend to buy these vehicles, but they would accept one as a taxi if they are alone. Indeed, a half-width face-to-face vehicle should be very nice for 2 people.”
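Templeton’s aside that ‘300 wh/mile is around 3000 BTUs/mile or the equivalent of 37mpg’ compresses a couple of conversion steps. Here is one way the arithmetic plausibly works out, using assumed round numbers (mine, not the post’s) for the grid’s heat rate and the energy content of gasoline:

```python
# Reconstructing the Wh/mile -> BTU/mile -> mpg-equivalent conversion.
# Assumed round numbers (not from the post): ~10,000 BTU of primary fuel burned
# per kWh of electricity delivered by the average US grid, and ~114,000 BTU of
# energy in a gallon of gasoline.
BTU_PER_DELIVERED_KWH = 10_000     # average grid heat rate, assumed
BTU_PER_GALLON_GASOLINE = 114_000  # energy content of gasoline, assumed

def btu_per_mile(wh_per_mile):
    """Primary energy per mile, counting the fuel burned back at the power plant."""
    return (wh_per_mile / 1000.0) * BTU_PER_DELIVERED_KWH

def mpg_equivalent(wh_per_mile):
    """Miles per gallon of gasoline that would use the same primary energy."""
    return BTU_PER_GALLON_GASOLINE / btu_per_mile(wh_per_mile)

if __name__ == "__main__":
    for wh in (250, 300, 350):
        print(f"{wh} Wh/mile -> ~{btu_per_mile(wh):,.0f} BTU/mile, "
              f"~{mpg_equivalent(wh):.0f} mpg-equivalent")
```

On those assumptions, 300 Wh/mile comes out near 3,000 BTU/mile and roughly 38 mpg-equivalent, close to the figures in the post; the exact number shifts with whatever heat rate you assume for the grid.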


From a new WBUR interview with cell-phone inventor Martin Cooper, whose invention was inspired by Star Trek communicators, a passage about his very hopeful prognostication for the future of the cell:

“Cooper sees other revolutions coming as a result of cell phone technology.

‘Just suppose that you could do a physical examination, not every year, which people do and which is almost worthless, but every minute, because you’re connected, and because we have devices that you can put on your body that measure virtually everything on your body. If you could be sensing your body all the time and anticipate a disease before it happens,’ Cooper said.

A computer would process the data, Cooper said, and detect illness and disease before they took hold. It could then instruct a patient on what to do to stop the illness.

‘If you extrapolate that thought, we are going to eliminate the concept of disease. And I think that’s going to happen within the next generation or two,’ he said.

In addition to health care, he sees changes in education, as learning tools become more mobile and students are able to spend more time out in the world learning.

‘If we don’t blow ourselves up, this is going to be a really wonderful world,’ Cooper said.”


I guess the essential question for Time Inc. and Time magazine in particular is this: No matter how good a job the current editors and managers do, can it be enough for the brand to survive, let alone thrive? There has always been amazing talent there (and still is), but a monolith has trouble adapting, regardless of how much cash on hand or how big a head start it has. The company’s first foray into digital in the mid-1990s, Pathfinder, was a huge flop because editors at the various publications were reluctant to give away their content. And as media decentralization has broadened, with cable TV giving way to the Internet, where everyone has a channel or 200 of their own, a behemoth with large fixed costs has trouble keeping the “barbarians” at the gates. Rick Stengel has proven a very good choice as Time’s editor at the moment of digital do or die, but has too much terrain already been ceded? Is the war even winnable?

From “Running Out of Time,” Joshua Macht’s very good Atlantic article with a very bad title:

“The newsweekly’s long slide has been blamed on pretty much everything from lack of investment to the AOL merger. And of course there’s the notion that the newsweekly category itself is simply no longer viable–that in the age of the Internet, the weekly rhythm is just too long. But then why the success of The Week or the Economist? For Time, the challenge wasn’t just the weekly print cycle; it was the weekly print cycle plus a crushing load of fixed costs. It’s expensive to support a model that demands reporters around the world, big name columnists, and massive distribution. The high costs mean that there’s virtually no room for Time to stumble.

Unfortunately, the brand would fall hard. I joined Time magazine in the summer of 2002 just after the bursting of the dot-com bubble. The largest project during my tenure as editor and general manager of Time.com was to digitize the entire archive going back to March of 1923 – which pushed me deep into Time magazine lore. I tracked the early days when the magazine first took flight to the WWII era when Time could sell more than 400,000 copies in a week even with some little-known Italian general on the cover.

It kept on growing after that. At its zenith the brand could reach more than 20 million people around the world each week. Time practically defined what it meant to be mass media. It was a brand for pretty much everybody. Television and then cable news (CNN in particular) eventually began to chip away at its position, and Time went through struggles and repeated attempts at reinvention through the years. But it took the arrival of the Internet to truly endanger it.”


The opening of E.O. Wilson’s new WSJ essay, in which he acknowledges that scientists aren’t as adept at math as they are thought to be, a mistaken belief that has thinned the discipline’s ranks:

“For many young people who aspire to be scientists, the great bugbear is mathematics. Without advanced math, how can you do serious work in the sciences? Well, I have a professional secret to share: Many of the most successful scientists in the world today are mathematically no more than semiliterate.

During my decades of teaching biology at Harvard, I watched sadly as bright undergraduates turned away from the possibility of a scientific career, fearing that, without strong math skills, they would fail. This mistaken assumption has deprived science of an immeasurable amount of sorely needed talent. It has created a hemorrhage of brain power we need to stanch.”


There are no UFOs and there never were, so it’s good that there aren’t nearly as many reports these days of people seeing them. Is that because we’re more rational now? Perhaps, but we don’t seem very rational with our politics and conspiracy theories. Is it because cameras are everywhere and all-knowing? Maybe. Because we trust more now in technology and science than in religion? Could be. 

In his new Aeon essay, “Seeing Is Believing,” Stuart Walton tries to understand why there are no more aliens in the sky, no more ghosts in the machine. An excerpt:

“UFO sightings reached their spate roughly within a decade of the release of Steven Spielberg’s spellbinding film Close Encounters of the Third Kind (1977). One good reason to believe there were never any UFOs is that nobody sees them any more. Once, the skies were refulgent with alien craft; now they are back to their primordial emptiness, returning only static to the radio telescopes, and offering the occasional meteor shower to the wondering eye.

It isn’t only flying saucers that have receded into history. They are being followed, more gradually to be sure, by a decline in sightings of ghosts, recordings of poltergeists, claims of psychokinesis and the rest, as is regularly attested by organisations such as the Society for Psychical Research in London and the UK-wide research group Para.Science. Many of those with a vested interest in the supernatural industry naturally resist this contention, but there is far less credulity among the public for tales of the extraordinary than there was even a generation ago. The standard explanation attributes this to growing skepticism. But, as is only fitting for the paranormal, it might be that there are more mysterious forces at work.”

________________________

1973: “For weeks now there have been reports of sightings of UFOs in many parts of the country.”


I’m still not completely convinced people will be any more willing to give up the wheels of their cars than they are their guns, but development of driverless cars continues apace. From an article by Rick Montgomery at AP, some questions being asked as driving becomes increasingly automated:

“The question has sped beyond whether or not technology will ever let motorists read a magazine en route to work — which techies say is a reality nearer than you think.

Rather, society has begun to ask: Do we really want this?

Computer engineer Don Wunsch voices an emphatic yes.

‘The days of human drivers deserve to be numbered,’ said Wunsch, a professor at the Missouri University of Science & Technology in Rolla. ‘Humans are lousy drivers. It’s about time computers take over that job.’

Others note that the rush to make autos fully autonomous, and conceivably far safer, promises to run into huge societal bumps.

In a transportation center such as Kansas City, how many truckers won’t be needed in 2025? How will insurance companies react when hands-free accidents happen — and nobody disputes they will — or roadside sensors go awry?

Will systems navigating 21st-century vehicles reach obsolescence and need costly upgrades every few years, like today’s smartphones? And, perhaps the most critical question, who will make certain these innovations will make travel less deadly?

‘You have these brand new capabilities coming to the market at a time of grossly inadequate funding’ of federal safety regulators, said Clarence Ditlow of the Washington-based Center for Auto Safety, a watchdog group.

Only after risky ‘experimentation on the road,’ he said, will the public’s overall safety in a driverless world be known.”


The opening of Brad Templeton’s article about the future of brain enhancement, which suggests that chimpanzees may become post-simian–something we’ve tried before in more modest ways–before we become post-human:

“Once this chimp brain is created, it will cause a flurry of research. Quite possibly, the ‘software’ part of the brain will be published and made available for others to work on, and the hardware will be readily available too.

Indeed, the software of this chimp brain might be made available for free distribution. An ‘open source’ ape, for all to experiment on.

And they will experiment on it. Once again, even if a human brain is similarly available, moral and legal considerations will limit what experimentation can be done, while actions on the ape-brain will probably not be nearly as limited.

Apes however are remarkably similar to humans. As you may know, chimps share 98% of our DNA. In addition, we have made intensive study of the ways in which they are different, and we will attempt to learn more.

Thus some of the first experiments on this artificial chimp brain will be to enhance it along the lines that humans and chimps are different. Humans are not so qualitatively different in our brains from chimps, though the few differences have a magnified effect in our capabilities. We have more of certain types of brain structures, and some of our structures are larger and have more neural connections. There is no component of a human brain not found in a chimp brain. Experimenters will quickly try to see what happens if you modify those aspects of the working chimp brain. They will also ‘graft’ information from post-mortem and live scans of human brains, where available.

If the artificial chimp brains ‘run’ much faster than biological ones, they will be able to perform these experiments quickly. They may be able to have their computer play out a thousand different experimental scenaria, each playing out years of biological scale time — perhaps in just a day of real time. They will quickly learn what works and what doesn’t, what enhances and what doesn’t. And there will be many of them.

I think quite quickly they could create a chimp brain capable of human level intelligence or beyond. She may then need training or ‘rearing’ by real human parents, but she will be a very quick and supremely capable learner. All this will happen much more quickly than the ethical changes to occur which would allow scientists to do similar experiments on human based brains.”


From Edward Luce’s new Financial Times profile of Harvard philosopher Michael Sandel, a passage about the marketization of morality:

“I ask him about his latest book, What Money Can’t Buy: The Moral Limits of Markets, in which he argues that the US and other countries are turning from market economies into market societies, as Lionel Jospin, the former French prime minister, once put it. Sandel argues that we live in a time of deepening ‘market faith’ in which fewer and fewer exceptions are permitted to the prevailing culture of transaction. The book has infuriated some economists, whom he sees as practitioners of a ‘spurious science.’

He has been at loggerheads with the profession for many years. In 1997, he enraged economists when he attacked the Kyoto protocol on global warming as having removed ‘moral stigma’ from bad activity by turning the right to pollute into a tradeable permit. Economists said he misunderstood why markets work. Sandel retorts that they know the price of everything and the value of nothing. To judge by his sellout lecture tours, he has clearly tapped into a larger disquiet about the commodification of life.

Which countries are the least receptive to his concerns about market fundamentalism? ‘China and the US – no question,’ he replies instantly. ‘In other parts of east Asia, in Europe and in the UK and in India and Brazil, it goes without arguing that there are moral limits to markets and the question is where to locate them. In the US and China, there are strong voices who will challenge the whole idea of there being any limits.’”

 _________________________

“What’s your answer, smartypants?” asks TV’s best talk-show host.


Greg Lindsay co-authored the smart book Aerotropolis, which I’ve blogged about before. In the New York Times, he has a new article about workplaces arranging their space to overcome “structural holes,” hoping to “engineer” serendipity and creativity. An excerpt:

“ONE reason structural holes persist is our overwhelming preference for face-to-face interactions. Almost 40 years ago, Thomas J. Allen, a professor of management and engineering at M.I.T., found that colleagues who are out of sight are frequently out of mind — we are four times as likely to communicate regularly with someone sitting six feet away from us as we are with someone 60 feet away, and almost never with colleagues in separate buildings or floors.

And we get a particular intellectual charge from sharing ideas in person. In a paper published last year, researchers at Arizona State University used sensors and surveys to study creativity within teams. Participants felt most creative on days spent in motion meeting people, not working for long stretches at their desks.

The sensors in the A.S.U. study were supplied by Sociometric Solutions, a spinoff company of the M.I.T. Media Lab’s Human Dynamics Laboratory that uses ‘sociometric badges’ to measure workers’ movements, speech and conversational partners. One discovery, says Ben Waber, a co-founder of the company and a visiting scientist at M.I.T., was that employees who ate at cafeteria tables designed for 12 were more productive than those at tables for four, thanks to more chance conversations and larger social networks. That, along with things like companywide lunch hours and the cafes Google is so fond of, can boost individual productivity by as much as 25 percent.

‘If you just think of serendipity as an interaction with an unintended outcome, you can orchestrate pleasant surprises,’ says Scott Doorley, a creative director at Stanford University’s Institute of Design.”


We’re better and worse than ever in America. We’re haves, but we’re have-nots. Technology has gleefully led to a decentralization of media power, and science is accomplishing what was once sci-fi. Hollywood can barely keep up with the world, let alone predict it, as the Dream Factory has been outpaced by reality. However, we’re also deeply narcissistic, most often using our amazing technology merely like mirrors, distracted by our distorted sense of self, so much so that we seem unable to address a political and economic system that has run aground. And the mainstream culture is irredeemably stupid, all “housewives” and halfwits. If you want to see how much we’ve slipped in that regard, just have a look at an issue of People magazine from the 1970s, when pop singers and public intellectuals shared the pages. You can’t blame the People people for a culture that has become saturated with stupidity. They’re just taking the pulse.

But could a revolution of some sort wipe all the inequity away? I doubt it, because we love being entertained by clever toys and reflecting pools. Of course, if personal power and narrowcasting become political and writ large, that could change. Occupy Wall Street, despite being derided as a short-lived, fashionable failure, actually framed a Presidential election. But can such a movement do more than frame? Can it break the glass?

From “A Practical Utopian’s Guide to the Coming Collapse,” David Graeber’s new Baffler article:

“Normally, when you challenge the conventional wisdom—that the current economic and political system is the only possible one—the first reaction you are likely to get is a demand for a detailed architectural blueprint of how an alternative system would work, down to the nature of its financial instruments, energy supplies, and policies of sewer maintenance. Next, you are likely to be asked for a detailed program of how this system will be brought into existence. Historically, this is ridiculous. When has social change ever happened according to someone’s blueprint? It’s not as if a small circle of visionaries in Renaissance Florence conceived of something they called “capitalism,” figured out the details of how the stock exchange and factories would someday work, and then put in place a program to bring their visions into reality. In fact, the idea is so absurd we might well ask ourselves how it ever occurred to us to imagine this is how change happens to begin with.

This is not to say there’s anything wrong with utopian visions. Or even blueprints. They just need to be kept in their place. The theorist Michael Albert has worked out a detailed plan for how a modern economy could run without money on a democratic, participatory basis. I think this is an important achievement—not because I think that exact model could ever be instituted, in exactly the form in which he describes it, but because it makes it impossible to say that such a thing is inconceivable. Still, such models can be only thought experiments. We cannot really conceive of the problems that will arise when we start trying to build a free society. What now seem likely to be the thorniest problems might not be problems at all; others that never even occurred to us might prove devilishly difficult. There are innumerable X-factors.

The most obvious is technology. This is the reason it’s so absurd to imagine activists in Renaissance Italy coming up with a model for a stock exchange and factories—what happened was based on all sorts of technologies that they couldn’t have anticipated, but which in part only emerged because society began to move in the direction that it did. This might explain, for instance, why so many of the more compelling visions of an anarchist society have been produced by science fiction writers (Ursula K. Le Guin, Starhawk, Kim Stanley Robinson). In fiction, you are at least admitting the technological aspect is guesswork.”

