A footnote now to the last quarter century of polarized American politics is the fact that Maya Angelou supported Clarence Thomas during his bruising, controversial Supreme Court nomination process. She believed he could be “saved” or “won over” or something, that he would become a generous soul. How’d that turn out? Did this sense of unity, as she termed it, really help African-American children? Did Citizens United or the near-dismantling of the Affordable Care Act help them? As always, there’s a real danger in having a big-picture view when interpreting individual people.

An excerpt from her 1991 New York Times op-ed:

In these bloody days and frightful nights when an urban warrior can find no face more despicable than his own, no ammunition more deadly than self-hate and no target more deserving of his true aim than his brother, we must wonder how we came so late and lonely to this place.

In this terrifying and murderous season, when young women achieve adulthood before puberty, and become mothers before learning how to be daughters, we must stop the rhetoric and high-sounding phrases, stop the posing and preening and look to our own welfare.

We need to haunt the halls of history and listen anew to the ancestors’ wisdom. We must ask questions and find answers that will help us to avoid falling into the merciless maw of history. How were our forefathers able to support their weakest when they themselves were at their weakest? How were they able to surround the errant leader and prevent him from being co-opted by forces that would destroy him and them? How were they, lonely, bought separately, sold apart, able to conceive of the deep, ponderous wisdom found in “Walk together, children . . . don’t you get weary.”

The black youngsters of today must ask black leaders: If you can’t make an effort to reach, reconstruct and save a black man who has been to and graduated from Yale, how can you reach down here in this drug-filled, hate-filled cesspool where I live and save me?

I am supporting Clarence Thomas’s nomination, and I am neither naive enough nor hopeful enough to imagine that in publicly supporting him I will give the younger generation a pretty picture of unity, but rather I can show them that I and they come from a people who had the courage to be when being was dangerous, who had the courage to dare when daring was dangerous — and most important, had the courage to hope.

Because Clarence Thomas has been poor, has been nearly suffocated by the acrid odor of racial discrimination, is intelligent, well trained, black and young enough to be won over again, I support him.

The prophet in “Lamentations” cried, “Although he put his mouth in the dust . . . there is still hope.”•


Remember Sy Sperling, who wasn’t only the President of Hair Club for Men, the rug company, but also a client? Elizabeth Parrish is reportedly the Sy Sperling of gene therapy, a significantly bolder endeavor.

The CEO of BioViva says she’s volunteered herself as Patient Zero, the first to receive the company’s experimental, regenerative daily injections to reverse aging. Parrish is a completely healthy 44-year-old person, apart from gradually dying like the rest of us. I’m highly skeptical, of course, that such therapies will be successful, widely available and affordable in a handful of years–or at any point in the foreseeable future–though if our species survives long enough, I think they’ll become reality and make our current state of medicine seem as barbaric to future people as surgery sans anesthesia is to us.

In a Reddit AMA, Parrish was unsurprisingly asked many questions about the cosmetic aspect of the research (regrowing hair lost to baldness, looking younger, etc.), but the paramount goal is clearly longer and healthier lives. A few exchanges follow.

_____________________________

Question:

  1. What criteria did you use in picking patient zero? How old were they? Did they have any medical conditions which would be fixed by age reversal?
  2. Suppose you’ve proven to have cured aging with this first patient. How soon before I’m cured also?
  3. How soon will you be confident enough that your treatment is working? At the one year mark? The full 8 years?
  4. In the talk that you gave in May, you said that it is your wish to distribute this cure for free. How will you and your team accomplish that?

Elizabeth Parrish:

  1. I am patient zero. I will be 45 in January. I have aging as a disease.
  2. We are working as hard as we can to bring it to the world as quickly and safely as possible.
  3. We will evaluate monthly and within 12 months we will have more data.
  4. We will work with governments and insurance providers.

Question:

Are you patient zero because it would be unethical to ask someone else to be patient zero? Because it seems to me that the researcher shouldn’t be the patient unless there’s no other option.

Elizabeth Parrish:

It was the only ethical choice. I am happy to step up. I do feel we can use these therapies in compassionate care scenarios now but we will have to work them back into healthier people as we see they work as preventive medicine.

Question:

How do you feel about being patient zero? Are you apprehensive at all?

Elizabeth Parrish:

I am happy to be patient zero. It is for the world, for the sick children and sick old people. My life has been good. I understand the risks but I research how people die and I am happy to say that today I do not know how I will die now. Tomorrow or in the long future I was up for a change.

_____________________________

Question:

Can you control the aging reversal to determine a preferred age?

Elizabeth Parrish:

We cannot control the aging reversal to a specific year today, that will come in the future. It is hypothesized that you will not reverse in physical appearance to less than a young adult. We see this in mice as well.

_____________________________

Question:

I have only read titles about anti-aging therapy and don’t really know what it’s all about. What are the actual expected results in layman terms? Does a 50 year old individual start looking younger, regaining muscle growth potential, higher testosterone levels, etc or does it just influence a subset of factors?

Elizabeth Parrish:

If you don’t look younger we have failed. Aging is one of the most visual diseases on the planet and includes things that we all know like wrinkles and grey hair, but also brain atrophy, muscle wasting and organ damage.

_____________________________

Question:

How did you administer the treatment? Injection? Where? How many?

Elizabeth Parrish:

Doctors do it by injections in various parts of the body.

_____________________________

Question:

What are your thoughts on accessibility to anti-aging therapy and what is BioViva doing to ensure ethical and fair access to its tech?

Elizabeth Parrish:

Our goal is to build laboratories that will have the mission of a cGMP product at a reduced cost. Gene therapy technology is much like computing technology. We had to build the super computer which cost $8 million in 1960. Now everyone has technologies that work predictably and at a cost the average person can afford. We need to do the same with these therapies. What you will get in 3-5 years will be vastly more predictable and effective than what we are doing today and at a cost you or your insurance can cover.

_____________________________

Question:

When do you think an ageing treatment will be available to the general public?

Elizabeth Parrish:

If the results are good we hope to have something to the general public, that is cost acceptable, in 3-5 years.•

 


From the July 20, 1901 Brooklyn Daily Eagle:


 

10 search-engine keyphrases bringing traffic to Afflictor this week:

  1. jack nicholson interview about the shining
  2. kurt vonnegut on geraldo rivera show
  3. judi mcguire roller derby
  4. bernstein woodward with william f buckley
  5. sissy spacek andy warhol
  6. women futurists
  7. will robots be pets in the future?
  8. the airport of the future will recognize you
  9. peter georgescu on wealth inequality
  10. people who think they live inside a reality show

This week, after much chaos, the GOP finally found someone the extremist wing of the party would accept as Speaker.


 

  • Donald Trump is the most insincere person Tom Junod has ever met.
  • In China, gene-editing is being used to design pet-sized micropigs.
  • A ghost city is planned for the New Mexican desert to test technologies.
  • Some studies suggest “digital amnesia” is a real problem.

In an h+ opinion piece, futurist Harry J. Bentham says many true things about synthetic biology, a sector of science that could go a long way toward creating resource abundance and medical miracles.

That said, I have two disagreements with him: 

  1. Bentham’s contention that businesspeople are hampering synthbio’s development out of greed, focusing instead on manufacturing trifling products to make a quick buck, seems off the mark. Let’s face it: Plenty of people have no affinity or talent for this type of work. But more than at any time in history, many major American technology companies are driven not only by profits but also by impact. In fact, “changing the world” is the new coin of the realm. I doubt that in a different age Google would be trying to create a purely private Bell Labs (the original was essentially a government-sponsored monopoly), as it is with Google X, with many projects aimed at helping health and the environment. Other such companies are sponsoring R&D in similar ventures, also hoping for breakthroughs. Whether they’ll be successful in landing these moonshots is another matter, but they are trying.
  2. While synthetic bio holds great promise and will likely be necessary at some point for the survival of humans, saying it has “no adequate risk” if it’s utilized isn’t accurate.

From Bentham:

Although discovery and invention continue to stun us all on an almost daily basis, such things do not happen as quickly or in as utilitarian a way as they should. And this lack of progress is deliberate. As the agenda is driven by businessmen who adhere to the times they live in, driven more by the desire for wealth and status than helping mankind, the goal of endless profit directly blocks the path to abolish scarcity, illness and death.

Today, J. Craig Venter’s great discoveries of how to sequence or synthesize entire genomes of living biological specimens in the field of synthetic biology (synthbio) represent a greater power than the hydrogen bomb. It is a power we must embrace. In my opinion, these discoveries are certainly more capable of transforming civilization and the globe for the better. In Life at the Speed of Light (2013), that is essentially Venter’s own thesis.

And contrary to science fiction films, the only threat from biotech is that humans will not adequately and quickly use it. Business leaders are far more interested in profiting from people’s desire for petty products, entertainment and glamour than curing cancer or creating unlimited resources to feed civilization. But who can blame them? It is far too risky for someone in their position to commit to philanthropy than to stay a step ahead of their competitors.•


As I wrote recently, Donald Trump is an adult baby with no interest in actually being President.

He entered the race impetuously, seeking attention to satisfy his deep and unexamined psychological scars, enjoyed an abundance of cameras over the summer when cable stations needed inexpensive content and focusing on the fascist combover was even cheaper than renting a Kardashian. Now with the new fall TV shows debuting, he’s growing restless, hoping his political program will get cancelled, reliably mentioning an exit strategy in every interview. Even the one-man brand himself is probably in disbelief that his prejudiced bullshit and faux populism have catapulted him for this long over his fellow candidates, weak though they are, a one-eyed racist in the land of the blind.

His continuing campaign is comeuppance richly earned by the GOP, with its bottomless supply of shamelessness, the party’s statistical leader going rogue not so much in policy but in language, stripping away the Gingrich-ish coding from the mean-spirited message meant to appeal to the worst among us and within us. He’s muddied the waters and now wants to swim ashore.

From Maggie Haberman at the New York Times:

In interviews this week, Mr. Trump insisted he was in the race to win, and took aim at “troublemakers” in the news media who, he said, were misrepresenting his remarks. “I’m never getting out,” he insisted Friday on MSNBC.

Mr. Trump keeps noting that he still leads in every major Republican poll and is in a political position that others would envy, and he says he will spend the money to keep his candidacy alive. But he conceded in another interview: “To me, it’s all about winning. I want to win — whereas a politician doesn’t have to win because they’ll just keep running for office all their life.”

He said he had not contemplated a threshold for what would cause him to get out of the race. And he noted that his crowds were even larger than those of Senator Bernie Sanders, the Vermont independent who is drawing thousands to rallies in seeking the Democratic nomination.

While Mr. Trump still leads major national polls and surveys in early voting states, that lead has recently shrunk nationally, and the most recent NBC News/Wall Street Journal poll showed his support eroding in New Hampshire, the first primary state. His recent comments have lent credence to the views of political observers who had long believed the perennially self-promoting real estate mogul would ultimately not allow himself to face the risk of losing.

“Even back in the summer, when he was somewhat defying gravity, somewhat defying conventional wisdom, it seemed to me there would be a moment when reality sets in,” said Rob Stutzman, a Republican political strategist who is based in California. “He would not leave himself to have his destiny settled by actual voters going to the polls or the caucuses.”

Mr. Stutzman was skeptical that Mr. Trump would be willing to endure the grind of a campaign needed to amass enough delegates to make him a factor at the Republican convention in July.•


Jonathan Franzen, beloved man of the people, is the latest subject of the Financial Times “Lunch with the FT” feature, dining with Lucy Kellaway as his latest novel, Purity, is released.

I haven’t read the new one yet, though I certainly enjoyed The Corrections and Freedom. With Franzen, of course, the work may ultimately be the whole story, but not while he’s alive. His reputation as a bothersome man, one who irks people, is part of every profile, and he doesn’t seem to run from the characterization. He wasn’t exactly incorrect to find fault with Oprah’s meshuganah galaxy of faux doctors, victim-porn and automobile giveaways, but a more politic person probably would have taken the gobs of money he made from the association and absconded quietly into the night. Other skirmishes since then seemed similarly avoidable, but Franzen wouldn’t be Franzen if he didn’t visibly recoil from the democratic, unchallenging standards of much of American culture, letting us know exactly what he thinks of us. Again: I wouldn’t say he’s really wrong.

An excerpt:

While he has been talking we have each been given a large white bowl with a pair of tiny, shrivelled pastries in them and a jug of tepid, cloudy liquid on the side. Franzen eats his without comment, and I ask: does he understand why he makes people quite so cross? “Well, I have to acknowledge the possibility that I’m simply a horrible person.”

He recites the line with a practised irony. Evidently he acknowledges no such possibility at all.

“My other answers would all be sort of self-flattering, right? Because I tell the truth; people don’t like the truth.”

He tells me about a piece he wrote in the New Yorker in March about climate change and bird conservation in which he managed to alienate everyone, including bird watchers. “I pointed out that 25 years after humanity collectively tried to reduce its carbon emissions, they reached an all-time high last year; further pointed out that the people who say we still have 10 years to keep the average temperature from rising more than 2 degrees Celsius are, charitably, deluded or, uncharitably, simply lying. And, therefore, maybe we should rethink whether we want to be putting such a large percentage of our energies into what is essentially a hopeless battle.”

His idea of himself as a truth-teller is only partly why people find him so aggravating. There is something about the man himself, and his variety of superior maleness, that also annoys.•


Now five decades old, the thought experiment known as the Trolley Problem is experiencing new relevance due to the emergence of driverless cars and other robotized functions requiring forethought about potential moral complications. Despite criticisms about the value of such exercises, I’ve always found them useful, and including them in the conversation about autonomous designs surely can’t hurt.

Lauren Cassani Davis of the Atlantic looks at the merging of a stately philosophical scenario and cutting-edge technology. An excerpt about Stanford mechanical engineer Chris Gerdes:

Gerdes has been working with a philosophy professor, Patrick Lin, to make ethical thinking a key part of his team’s design process. Lin, who teaches at Cal Poly, spent a year working in Gerdes’s lab and has given talks to Google, Tesla, and others about the ethics of automating cars. The trolley problem is usually one of the first examples he uses to show that not all questions can be solved simply through developing more sophisticated engineering. “Not a lot of engineers appreciate or grasp the problem of programming a car ethically, as opposed to programming it to strictly obey the law,” Lin said.

But the trolley problem can be a double-edged sword, Lin says. On the one hand, it’s a great entry point and teaching tool for engineers with no background in ethics. On the other hand, its prevalence, whimsical tone, and iconic status can shield you from considering a wider range of dilemmas and ethical considerations. Lin has found that delivering the trolley problem in its original form—streetcar hurtling towards workers in a strangely bare landscape—can be counterproductive, so he often re-formulates it in terms of autonomous cars:

You’re driving an autonomous car in manual mode—you’re inattentive and suddenly are heading towards five people at a farmer’s market. Your car senses this incoming collision, and has to decide how to react. If the only option is to jerk to the right, and hit one person instead of remaining on its course towards the five, what should it do?

It may be fortuitous that the trolley problem has trickled into the world of driverless cars: It illuminates some of the profound ethical—and legal—challenges we will face ahead with robots. As human agents are replaced by robotic ones, many of our decisions will cease to be in-the-moment, knee-jerk reactions. Instead, we will have the ability to premeditate different options as we program how our machines will act.•


With the cloud an extension of our brains and our devices portals into it, we definitely have access to dramatically more information than ever before. No disputes there. Until we develop some way, organic or not, to increase the elasticity of human memory without compromising other faculties, we have to prioritize what we remember ourselves and what we “outsource” to machines.

Except that isn’t usually a conscious process, so we may not really be deciding what’s stored in us and what’s placed elsewhere. To me, that’s still preferable to life before the deluge of data, with whatever is lost more than made up for by the windfall of knowledge, even if the way we prioritize it is transformed.

One caveat: It’s a more complicated situation if the actual process of memorization is being degraded, not just altered, by reliance on our external “memory banks.” Is the kind of muscle memory an elite athlete builds no longer being developed in our ability to remember because of the new normal?

I don’t notice it in myself yet. Sure, I’ll reach for something that’s surprisingly no longer there, but my warehouse of facts seems the same in quantity. The inventory is just different, more fitfully filed, though the contents still seem valuable. But I find myself continuing to check, never quite trusting the system.

From Sean Coughlan at BBC:

The survey suggests relying on a computer in this way has a long-term impact on the development of memories, because such push-button information can often be immediately forgotten.

“Our brain appears to strengthen a memory each time we recall it, and at the same time forget irrelevant memories that are distracting us,” said Dr. [Maria] Wimber.

She says that the process of recalling information is a “very efficient way to create a permanent memory.”

“In contrast, passively repeating information, such as repeatedly looking it up on the internet, does not create a solid, lasting memory trace in the same way.” …

The study from Kaspersky Lab, a cybersecurity firm, says that people have become accustomed to using computer devices as an “extension” of their own brain.

It describes the rise of what it calls “digital amnesia,” in which people are ready to forget important information in the belief that it can be immediately retrieved from a digital device.


Filippo Tommaso Marinetti and his fellow Futurists were sexist and fascistic and militaristic, qualities hardly unique to them in the Italy of the first half of the twentieth century. Some of their ideas were insane (sleep was to be abolished) and some neutral (tin neckties, after all, are no dumber than any other kind), but a few were worth thinking about.

One such political thought: The Futurists believed automation would eventually eliminate poverty and inequality, something that’s possible if not inevitable. A less important though interesting cultural idea: Machines and industrial sounds should be used to create dance music. It proved prophetic, if not initially appreciated. Their plan for reinventing boxing never came to fruition, however, as you can read in the following article about a Futurist exposition in Rome from the July 16, 1933 Brooklyn Daily Eagle.


Stephen Hawking’s answered some of the Reddit Ask Me Anything questions that were submitted a few weeks back. Some highlights: The physicist hopes for a world in which wealth redistribution becomes the norm when and if machines do the bulk of the labor, though he realizes that thus far that hasn’t been the inclination. He believes machines might subjugate us not because of mayhem or malevolence but because of their sheer proficiency. Hawking also thinks that superintelligence might be wonderful or terrible depending on how carefully we “direct” its development. I doubt that human psychology and individual and geopolitical competition will allow for an orderly policy of AI progress. It seems antithetical to our nature. And we actually have no place setting standards governing people of the distant future. They’ll have to make their own wise decisions based on the challenges they know and information they have. Below are a few exchanges from the AMA.

________________________

Question:

Whenever I teach AI, Machine Learning, or Intelligent Robotics, my class and I end up having what I call “The Terminator Conversation.” My point in this conversation is that the dangers from AI are overblown by media and non-understanding news, and the real danger is the same danger in any complex, less-than-fully-understood code: edge case unpredictability. In my opinion, this is different from “dangerous AI” as most people perceive it, in that the software has no motives, no sentience, and no evil morality, and is merely (ruthlessly) trying to optimize a function that we ourselves wrote and designed. Your viewpoints (and Elon Musk’s) are often presented by the media as a belief in “evil AI,” though of course that’s not what your signed letter says. Students that are aware of these reports challenge my view, and we always end up having a pretty enjoyable conversation. How would you represent your own beliefs to my class? Are our viewpoints reconcilable? Do you think my habit of discounting the layperson Terminator-style “evil AI” is naive? And finally, what morals do you think I should be reinforcing to my students interested in AI?

Stephen Hawking:

You’re right: media often misrepresent what is actually said. The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. Please encourage your students to think not only about how to create AI, but also about how to ensure its beneficial use.

________________________

Question:

Have you thought about the possibility of technological unemployment, where we develop automated processes that ultimately cause large unemployment by performing jobs faster and/or cheaper than people can perform them? Some compare this thought to the thoughts of the Luddites, whose revolt was caused in part by perceived technological unemployment over 100 years ago. In particular, do you foresee a world where people work less because so much work is automated? Do you think people will always either find work or manufacture more work to be done? 

Stephen Hawking:

If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

________________________

Question:

I am a student who has recently graduated with a degree in Artificial Intelligence and Cognitive Science. Having studied A.I., I have seen first hand the ethical issues we are having to deal with today concerning how quickly machines can learn the personal features and behaviours of people, as well as being able to identify them at frightening speeds. However, the idea of a “conscious” or actual intelligent system which could pose an existential threat to humans still seems very foreign to me, and does not seem to be something we are even close to cracking from a neurological and computational standpoint. What I wanted to ask was, in your message aimed at warning us about the threat of intelligent machines, are you talking about current developments and breakthroughs (in areas such as machine learning), or are you trying to say we should be preparing early for what will inevitably come in the distant future?

Stephen Hawking:

The latter. There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime. When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.

_____________________

 Question:

I am a biologist. Your fear of AI appears to stem from the assumption that AI will act like a new biological species competing for the same resources or otherwise transforming the planet in ways incompatible with human (or other) life. But the reason that biological species compete like this is because they have undergone billions of years of selection for high reproduction. Essentially, biological organisms are optimized to ‘take over’ as much as they can. It’s basically their ‘purpose’. But I don’t think this is necessarily true of an AI. There is no reason to surmise that AI creatures would be ‘interested’ in reproducing at all. I don’t know what they’d be ‘interested’ in doing. I am interested in what you think an AI would be ‘interested’ in doing, and why that is necessarily a threat to humankind that outweighs the benefits of creating a sort of benevolent God.

Stephen Hawking:

You’re right that we need to avoid the temptation to anthropomorphize and assume that AI’s will have the sort of goals that evolved creatures do. An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.•

 


In the Backchannel piece, “Our Robot Sky,” journalist/novelist J.M. Ledgard and architect Lord Norman Foster combine efforts to propose a series of inexpensive cargo droneports around the world, especially in fraught places in desperate need of life-saving supplies. Cheap, pilotless robot planes could handle the deliveries. The first such droneport is proposed for Rwanda and would cost roughly what a new gas station would. 

From Foster’s section:

The droneport, where the sky touches the ground, is the critical element for a cargo drone route. No one has created rules for this new type of building. The opportunity to do just that is why I chose to support Redline as the very first project of the Norman Foster Foundation. Jonathan approached me and said, “Look, Norman, you’ve built the biggest airport in the world, now could you build the smallest.” The strange thing is that in ten years time the sum total of all these droneports in Africa will be bigger than the biggest airport in the world.

Our droneport holds to Buckminster Fuller’s maxim of doing more with less. It is grounded in detailed and first-hand study of isolated communities in Africa by Narinder Sagoo, a partner at the firm who has taken the lead on the project. It is very much informed by two previous projects: our 2012 Lunar Habitation for ESA, which binds lunar regolith by use of robots, and Narinder’s Sierra Leone school project, which introduces kit forms in combination with labour and intensive use of locally available materials.

Redline droneports should be affordable, clean energy civic buildings, with a strong visual presence.•


Because of the knowledge they’ve acquired and the many unique experiences they’ve had, some people represent a loss to the culture that can’t be made up when they die. It just becomes a hole where something useful was. Pete Seeger was like that, someone who knew the Great Depression, the Civil Rights Movement, HUAC, etc. He was steadfastly on hand for all the major American movements until he wasn’t anymore.

Jane Goodall is that kind of person, too, though thankfully she’s still alive. What a sensational, uncommon life. The primatologist sat for an interview with Philip Bethge and Johann Grolle of Spiegel discussing her conservation efforts, the refugee crisis in Europe and the sometimes atrocious behavior of chimps (and humans). An excerpt about the latter topic:

Spiegel:

Chimps can have a very dark side as well. Did it come as a shock to you when you first became aware of it?

Jane Goodall:

Absolutely! It was horrifying. First, we observed this brutal attack on a female which ended in the killing of her baby. Chimps are brutal, and it is so deliberate. The males go out and get near the boundary of their territory. And they walk very silently trying hard not to make any noise. They will climb into a tree and stare out over hostile territory for hours. They are waiting for the right opportunity. And then they attack.

Spiegel:

Is this comparable to warfare?

Jane Goodall:

It can be. We observed what I call the four-year war. It all started when a big chimp community split into two because there were too many males. About seven males left with some females and babies. However, they didn’t go beyond the range which previously they shared but took up the southern part of it. When relations got completely cold between the two groups, the original group began systematically moving back into the territory they had lost.

Spiegel:

Killing the others?

Jane Goodall:

Yes, every single one. We observed six murders ourselves, and circumstantial evidence showed that the same thing happened to the seventh male. It was horrible.

Spiegel:

Are they intentionally cruel? Do they want to inflict pain?

Jane Goodall:

I thought about this a lot. But I came to the conclusion that being evil is something that only humans are capable of. A chimp would never plan to pull another’s nails out. The chimps’ way of aggression is quick and brutal. I compare them to gang attacks.

Spiegel:

Do you think the chimpanzees’ emotional world is comparable to ours?

Jane Goodall:

In many ways, yes.•


______________________________

Who, after all, was Jerzy Kosinski? I wonder, after a while, if even he knew.

Like a lot of people who move to New York to reinvent themselves, Kosinski was a tangle of fact and fiction that couldn’t easily be unknotted. He was lauded and reviled, labeled as brilliant and a plagiarist, called fascinating and a fraud. The truth, as usual, probably lies somewhere in between. In essence, he was much like the shadowy, misunderstood, paranoid characters from his own literature. One thing known for sure: He was a tormented soul, who ended his life by suicide in 1991, a plastic bag pulled over his face until he suffocated. He was a regular correspondent of sorts for David Letterman not long before that. Here he is, in 1984, at the 23:35 mark, talking about overcoming his fear of drowning.

_______________________

A spate of game-related deaths among high school football players early this season, combined with the reported marked decline in youth participation, makes me think that Super Bowl C in 2066 will be played, if at all, by robots. (It should be noted that while the drop-off in youth football is associated with growing knowledge of brain-injury risk, all American youth sports have declined in the age of the smartphone.) The game’s partially cloudy tomorrow hasn’t stopped the reporters at Wired and Sports Illustrated from pooling their talents for a mixed-media look at the future of the NFL, wondering what it will be like when America’s most popular single-game sports event reaches the century mark. There are considerations of players using cutting-edge technology, data-driven exercise, even gene editing, though I haven’t yet come across anything on concussion prevention.

_______________________

Baseball playoff season begins at the same time Henry Kissinger receives a biographical treatment, so here’s a video that mixes those two seemingly disparate subjects.

Like the first President he served, former Secretary of State Henry Kissinger became quite a baseball junkie, especially in his post-Washington career. At the 15:40 mark of this episode of The Baseball World of Joe Garagiola, we see Kissinger, who could only seem competent when standing alongside that block of wood Bowie Kuhn, being honored at Fenway Park before the second game of the sensational 1975 World Series. During the raucous run by the raffish New York Mets in the second half of the 1980s, both Nixon and Kissinger became fixtures at Shea Stadium. Nixon was known to send congratulatory personal notes to the players, including Darryl Strawberry. It was criminals rooting for criminals.


Two thoughts about the intersection of human and artificial intelligence:

  1. If we survive other existential risks long enough, we’ll eventually face the one posed by superintelligence. Or perhaps not. That development isn’t happening today or tomorrow, and by the time it does machine learning might be embedded within us. Maybe a newly engineered version of ourselves is the next step. We won’t be the same, no, but we’re not meant to be. Once evolution stops, so do we.
  2. The problem of understanding the human brain will someday be solved. That will be a boon in many ways medically, but there’s some question as to whether this giant leap for humankind is necessary to create intelligent, conscious machines. The Wright brothers didn’t need to simulate the flapping wings of birds in creating the Flyer. Maybe we can put the “ghost” in the machine before we even fully understand it? I would think the brain work will be done first because of the earnest way it’s being pursued by governments and private entities, but I wonder if that’s necessary.

From Ariana Eunjung Cha’s Washington Post piece about Paul Allen’s dual brain projects:

Although today’s computers are great at storing knowledge, retrieving it and finding patterns, they are often still stumped by a simple question: “Why?”

So while Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana — despite their maddening quirks — do a pretty good job of reminding you what’s on your calendar, you’d probably fire them in short of a week if you put them up against a real person.

That will almost certainly change in the coming years as billions of dollars in Silicon Valley investments lead to the development of more sophisticated algorithms and upgrades in memory storage and processing power.

The most exciting — and disconcerting — developments in the field may be in predictive analytics, which aims to make an informed guess about the future. Although it’s currently mostly being used in retail to figure out who is more likely to buy, say, a certain sweater, there are also test programs that attempt to figure out who might be more likely to get a certain disease or even commit a crime.

Google, which acquired AI company DeepMind in 2014 for an estimated $400 million, has been secretive about its plans in the field, but the company has said its goal is to “solve intelligence.” One of its first real-world applications could be to help self-driving cars become better aware of their environments. Facebook chief executive Mark Zuckerberg says his social network, which has opened three different AI labs, plans to build machines “that are better than humans at our primary senses: vision, listening, etc.”

All of this may one day be possible. But is it a good idea?•

 


A rose is a rose is a rose, but if you purchase an e-book written by Gertrude Stein, it isn’t what it seems.

When we bought literature in paper form, the “hardware” and “software” were ours for as long as we held onto the physical item, but virtual books (and articles, films, etc.) are leasing agreements that depend on all sorts of infrastructure remaining intact. And to quote the title of the most famous book by another author, Chinua Achebe, things fall apart. When the next wave of companies destabilizes today’s tech giants, Amazon and Google and Facebook and Apple, will there be a hole in the culture that’s difficult to fill?

From “When Amazon Dies,” an excellent piece on the topic by Adrienne LaFrance at the Atlantic:

Increasingly, the purchase of digital works is treated like the purchase of software, which has gone from something you buy on a disc to something downloadable with an Internet connection. “You might think you’re buying Microsoft Office, but according to your user agreement you’re merely leasing it,” [media studies professor Siva] Vaidhyanathan said. “You can think of music and video as just another form of software. There is a convergence happening.”

That convergence is built for a streaming world, one that’s driven by an expectation of instant gratification. “One of the things we’re doing increasingly is opting for convenience over dependability. And we’re doing it somewhat thoughtlessly,” Vaidhyanathan told me. “We have to recognize that it is temporary. Anything that is centrally collected in a server somewhere on Earth is ephemeral. Even if Amazon doesn’t go out of business in 20 years, Amazon will not exist as we know it in 100 years.”•



It was in 2012 that I first put up a post about CITE, an insta-ghost town planned to be built in the New Mexican desert for the express purpose of testing technologies, and I still can’t say I fully get the concept. Is a discrete soundstage city really required when driverless cars are tested on public streets and highways? Wouldn’t it just be better for tech firms to make agreements with small urban areas for pilot programs? That would seem a truer test.

From Kieron Monks at CNN:

In the arid plains of the southern New Mexico desert, between the site of the first atomic bomb test and the U.S.-Mexico border, a new city is rising from the sand.

Planned for a population of 35,000, the city will showcase a modern business district downtown, and neat rows of terraced housing in the suburbs. It will be supplied with pristine streets, parks, malls and a church.

But no one will ever call it home.

The CITE (Center for Innovation, Testing and Evaluation) project is a full-scale model of an ordinary American town. Yet it will be used as a petri dish to develop new technologies that will shape the future of the urban environment.

The $1 billion scheme, led by telecommunications and tech firm Pegasus Global Holdings, will see 15-square-miles dedicated to ambitious experiments in fields such as transport, construction, communication and security.•

As we move into the future, a lot more food production traditionally done out of doors will be moving inside, in labs and “vegetable factories,” away from the fickle and increasingly frightening climate. There’s no reason most of the work can’t be automated.

The Japanese firm Spread is currently developing just such an indoor farm that will be fully computerized and robotized. From Sarah Fecht at Popular Science: 

Robots will be the farmers of the future. A company in Japan is building an indoor lettuce farm that will be completely tended by robots and computers. The company, named Spread, expects the factory to open in 2017, and the fully automated farming process could make the lettuce cheaper and better for the environment.

Spread already tends several large indoor farms, which have a multitude of environmental benefits. The plants can be grown hydroponically without exhausting soil resources. Up to 98 percent of Spread’s water will be recycled, and the factory won’t have to spray pesticides, since the pests are outdoors. Artificial lighting means the food supply won’t rely on weather variables, and the lighting can be supplied through renewable energy.

Currently Spread grows about 7.7 million heads of lettuce a year, and sells them at about the same price as regular lettuce. It sounds like the company is hoping to increase its production and lower its prices by making their growing process even more automated.•


From the March 3, 1889 Brooklyn Daily Eagle.


In her Guardian column, Jemima Kiss describes driverless cars as a “hard sell for Google,” and I agree with part of her reasoning.

Regulators and entrenched market interests will certainly be a chore to appease. The fallout will be huge, policy-wise, and auto-insurance companies, after all, aren’t not-for-profits and stand to be destabilized out of business. But I think Kiss’ concern about motorist fear of the new technology is overstated. Some may enjoy continuing to drive, but it won’t be because of apprehension about robocars. That worry will quickly abate.

Of course, the bigger point is that Google won’t be alone in trying to change the course of the future and make automobiles truly automatic; Apple, Tesla, Uber, Big Auto and companies in China and Japan will be doing likewise. Google’s biggest challenge will be ensuring it’s one of the companies in the winner’s circle.

An excerpt:

The hard sell for Google will be winning over generations of people who feel safer being in control of their vehicle, don’t know or care enough about the technology, or who simply enjoy driving. Yet most people who try a demo say the same thing: how quickly the self-driving car feels normal, and safe. As the head of public policy quipped, “perhaps we just need to do demos for 7 billion people”. Google’s systems engineer Jaime Waydo helped put self-driving cars on Mars while she worked at Nasa; it may well be that regulation and public policy prove easier there than on Earth.

And before it can get to the public, Google has to get through the regulators. In taking on the auto industry, Google has some mighty pitched battles ahead, not least the radical changes it implies for the insurance industry (who will find the number of accident claims dropping sharply), car makers (who will become partners with Google to equip their autonomous cars) and the labour issues of laying off a whole class of drivers, from cabs to haulage.•


As long as people smoke, it’s difficult to argue that free markets are chastened by free wills. It’s an addiction almost always begun in adolescence, yes, but plenty of smokers aren’t even trying to quit despite the horrifying health risks. So we tax cigarettes dearly and run countless scary PSAs, trying to curtail the appetite for destruction and push aside the market’s invisible hand offering us a light.

The cost, of course, goes beyond the rugged individual, transferred onto all of us whether we’re talking about lung cancer or obesity or financial bubbles. Sooner or later, we all pay.

In a NYRB piece about Phishing for Phools, a new book on the topic by economists George A. Akerlof and Robert J. Shiller, professional noodge Cass R. Sunstein finds merit in the work, though with some reservations. One topic of note touched on briefly: the micro-marketing of politicians aided by the “manipulation of focus.” An excerpt:

Akerlof and Shiller believe that once we understand human psychology, we will be a lot less enthusiastic about free markets and a lot more worried about the harmful effects of competition. In their view, companies exploit human weaknesses not necessarily because they are malicious or venal, but because the market makes them do it. Those who fail to exploit people will lose out to those who do. In making that argument, Akerlof and Shiller object that the existing work of behavioral economists and psychologists offers a mere list of human errors, when what is required is a broader account of how and why markets produce systemic harm.

Akerlof and Shiller use the word “phish” to mean a form of angling, by which phishermen (such as banks, drug companies, real estate agents, and cigarette companies) get phools (such as investors, sick people, homeowners, and smokers) to do something that is in the phisherman’s interest, but not in the phools’. There are two kinds of phools: informational and psychological. Informational phools are victimized by factual claims that are intentionally designed to deceive them (“it’s an old house, sure, but it just needs a few easy repairs”). More interesting are psychological phools, led astray either by their emotions (“this investment could make me rich within three months!”) or by cognitive biases (“real estate prices have gone up for the last twenty years, so they’re bound to go up for the next twenty as well”).

Akerlof and Shiller are aware that skeptics will find their depiction of human beings as “phools” to be inaccurate and impossibly condescending. Their response is that people are making a lot of bad decisions, producing outcomes that no one could possibly want. In their view, phishing for phools “is the leading cause of the financial crises that lead to the deepest recessions.” A lot of people run serious health risks from overeating, tobacco, and alcohol, leading to hundreds of thousands of premature deaths annually in the United States alone. Akerlof and Shiller think that it is preposterous to believe that these deaths are a product of rational decisions.•

 


Orson Welles’ 1938 radio drama The War of the Worlds played upon our concerns about what is out there (as well as what is in here), but our fears aren’t unfounded. Making contact with life more intelligent than us could be an existential risk, a situation that keeps Stephen Hawking awake at night.

But one thing scarier than discovering alien life in the universe might be not finding any. If no one else could make a go of it, our days would seem to be numbered. If outer space is a ghost town, isn’t it likely we’ll soon be ghosts as well?

In the Atlantic’s science section, Tom Chmielewski writes of the protocol for proceeding should we make contact. An excerpt:

Since the first exoplanet was identified in 1992, astronomers have confirmed the existence of nearly 1,900 planets beyond our solar system. The sheer number of planets increases the statistical probability that Earth-like planets will be found. Some estimate that there are around 140 habitable planets in our stellar neighborhood within 33.6 light years of Earth. Many astronomers estimate that we’ll find a life-bearing planet within 25 to 30 years, or maybe tonight, if we know what to look for.

The upcoming 100YSS symposium will focus on both the pragmatic and more theoretical elements of such a discovery: How do we find Earth 2.0? How do we confirm evidence of life? If we find evidence of intelligent life out there, how do we announce it to the world? How will the people of Earth 1.0 react?

“How do you finally decide, ‘Eureka, we found it?’” said Mae Jemison, a former NASA astronaut and the principal for 100YSS. “What are the compelling signs of finding another planet outside of our solar system that indisputably is terrestrially evolved, with earth-like evolved lifeforms? …  What would happen if we could identify it [as Earth 2.0]? How does that change us?”•

 


Vivek Wadhwa, who wisely looks at issues from all sides, has written an excellent Singularity Hub article analyzing which technologies he believes will impact global politics in the next two decades.

In the opening, he asserts something I think is very true: an ascendant China isn’t really scary, but that state in steep decline would be. He further argues in that first paragraph that fossil fuel is in its dying days, something that probably needs to be true if China, with some of the world’s highest cancer and air-pollution rates, is to remain stable. A nation of 1.3 billion will only cough and choke for so long. Solar and wind can’t arrive soon enough for that country, and for us all, though oil-dependent nations unable to transition will be destabilized.

An excerpt about 3D printers:

In conventional manufacturing, parts are produced by humans using power-driven machine tools, such as saws, lathes, milling machines, and drill presses, to physically remove material to obtain the shape desired. In digital manufacturing, parts are produced by melting successive layers of materials based on 3D models — adding materials rather than subtracting them. The “3D printers” that produce these use powdered metal, droplets of plastic, and other materials — much like the toner cartridges that go into laser printers. 3D printers can already create physical mechanical devices, medical implants, jewelry, and even clothing. But these are slow, messy, and cumbersome — much like the first generations of inkjet printers were. This will change.

In the early 2020s we will have elegant low-priced printers for our homes that can print toys and household goods. Businesses will use 3D printers to do small-scale production of previously labor-intensive crafts and goods. Late in the next decade, we will be 3D-printing buildings and electronics. These will eventually be as fast as today’s laser printers are. And don’t be surprised if by 2030, the industrial robots go on strike, waving placards saying “stop the 3D printers: they are taking our jobs away.”

The geopolitical implications of these changes are exciting and worrisome.•

 


Nicholas Carr’s The Glass Cage, a must-read if you want to understand all sides of this new machine age, is now out in paperback. I like Carr’s thinking when I agree with him, and I like it when I don’t. He always makes me see things in a fresh way, and he’s a miraculously graceful writer. Carr put an excerpt from the book, one about the history of automation, on his blog. Here’s a smaller section from that:

Automated machines existed before World War II. James Watt’s steam engine, the original prime mover of the Industrial Revolution, incorporated an ingenious feedback device — the fly-ball governor — that enabled it to regulate its own operation. The Jacquard loom, invented in France around 1800, used steel punch cards to control the movements of spools of different-colored threads, allowing intricate patterns to be woven automatically. In 1866, a British engineer named J. Macfarlane Gray patented a steamship steering mechanism that was able to register the movement of a boat’s helm and, through a gear-operated feedback system, adjust the angle of the rudder to maintain a set course.

But the development of fast computers, along with other sensitive electronic controls, opened a new chapter in the history of machines. It vastly expanded the possibilities of automation. As the mathematician Norbert Wiener, who helped write the prediction algorithms for the Allies’ automated antiaircraft gun, explained in his 1950 book The Human Use of Human Beings, the advances of the 1940s enabled inventors and engineers to go beyond “the sporadic design of individual automatic mechanisms.” The new technologies, while designed with weaponry in mind, gave rise to “a general policy for the construction of automatic mechanisms of the most varied type.” They opened the way for “the new automatic age.”

Beyond the pursuit of progress and productivity lay another impetus for the automatic age: politics.•

