Russ Roberts

You are currently browsing articles tagged Russ Roberts.


Discussion of the ideas in David Gelernter’s new book, The Tides of Mind: Uncovering the Spectrum of Consciousness, which just landed in my mailbox, forms the crux of the latest episode of EconTalk with Russ Roberts. The computer scientist talks about the variety of cognizance that fills our days, an idea he believes is lost in the unexamined acceptance of the binary labels “conscious” and “unconscious.” He thinks, for instance, that we operate at various levels of up- or down-spectrum consciousness, which permits us to function in different ways.

Clearly the hard problem is still just that, and the creativity that emerges from consciousness–often the development of new symbols or the successful comparison and combination of seemingly disparate thoughts–isn’t yet understood. Someday we’ll comprehend the chemical reactions that enable these mysterious and magnificent syntheses, but for now we can enjoy, if not understand, them. In one passage, the author wonderfully articulates the creative process, the parts that are knowable and those that remain inscrutable. The excerpt:

David Gelernter:

You also mention, which is important, the fact that you have a focused sense when you are working on lyrics or writing poetry, let’s say. And I’ve argued, on the other hand, that you need to be well down-spectrum in order to get creativity started. That is, you can’t be at your creative peak when you’ve just got up in the morning: your attention is focused and you are tapping your pencil; you want to get to work and start, you know, getting through the day’s business at a good clip. It’s not the mood in which one can make a lot of progress writing poetry. But that’s exactly why–that’s one of the important reasons why creativity is no picnic. It’s not easily achieved. I think it’s fair to say that everybody is creative in a certain way. In the sort of daily round of things we come up with new solutions to old problems routinely. But the kind of creativity that yields poetry that other people value, that yields original work in any area, is highly valued, is more highly valued than any other human project, because it’s rare. And it’s rare not because it requires a gigantic IQ (Intelligence Quotient), but because it requires a certain kind of balance, which is not something everybody can achieve. On the one hand–it’s not my observation; it’s a general observation–that creativity often hinges on inventing new analogies. When I think of a new resemblance and an analogy between a tree and a tent pole, which is a new analogy let’s say that nobody else has ever thought of before, I take the new analogy and can perhaps use it in a creative way. One of a million other, a billion, a trillion other possible analogies. Now, what makes me come up with a new analogy? What allows me to do that? Generally, it’s a lower-spectrum kind of thinking, a down-spectrum kind of thinking, in which I’m allowing my emotions to emerge. And, I’m allowing emotional similarity between two memories that are in other respects completely different. 
I’m maybe thinking as a graduate student in computing about an abstract problem involving communication in a network like the ARPANET (Advanced Research Projects Agency Network) or the Internet, in which bits get stuck. And I may suddenly find myself thinking about traffic on a late Friday afternoon in Grand Central Station in Manhattan. And the question is–and that leads to a new approach. And I write it up; and I prove a theorem, and I publish a paper. And there’s like a million other things in the sciences and in engineering technology. But the question is: Where does the analogy come from? And it turns out in many cases–not in every case–that there are emotional similarities. Emotion is a tremendously powerful summarizer, abstractor. We can look at a complex scene involving loads of people rushing back and forth because it’s Grand Central Station, and noisy announcements on [?] to understand, loudspeakers, and you’re being hot and tired, and lots of advertisements, and colorful clothing, and a million other things; and smells, and sounds, and–we can take all that or any kind of complex scene or situation, the scene out your window, the scene on the TV (television) when you turn on the news, or a million other things. And take all those complexities and boil them down to a single emotion: it makes me feel some way. Maybe it makes me happy. It’s not very usual to have an emotion as simple as that. But it might be. I see my kids romping in the backyard, and I just feel happy. Usually the emotion to which a complex scene has boiled down is more complex than that–is more nuanced. Doesn’t have a name. It’s not just that I’m happy or sad or excited. It’s a more nuanced; it’s a more–it’s a subtler emotion which is cooked up out of many bits and pieces of various emotions.
But the distinctive emotion, the distinctive feeling that makes me feel a certain way, the feeling that I get when I look at some scene can be used as a memory cue when I am in the right frame of mind. And that particular feeling–let’s say, Happiness 147–a particular subtle kind of happiness which is faintly shaded by doubts about the coming week and by serious questions I have about what I’m supposed to do tomorrow morning but which is encouraged by the fact that my son is coming home tonight and I’m looking forward to seeing him–so that’s Happiness 147. And it may be that when I look out at some scene and feel Happiness 147, that some other radically different scene that also made me feel that way comes to mind–looking out at that complex thing and I think of some abstract problem in network communications, or I think of a mathematics problem, or I think of what color chair we should get for the living room, or one of a million other things. Any number of things can be boiled down in principle, can be reduced, can be summarized or abstracted by this same emotion. My emotions are so powerful because the phrase, ‘That makes me feel like x,’ can apply to so many situations. So many different things give us a particular feeling. And that feeling can drive in a new analogy. And a new analogy can drive creativity. But the question is: Where does the new analogy come from? And it seems to come often from these emotional overlaps, from a special kind of remembering. And I can only do that kind of remembering when I am paying attention to my emotions. We tend to do our best to suppress emotions when we’re up-spectrum. We’re up-spectrum: We have jobs to do, we have work to do, we have tasks to complete; our minds are moving briskly along; we’re energetic. We generally don’t like indulging in emotions when we are energetic and perky and happy and we want to get stuff done. Emotions tend to bring thought to a halt, or at any rate to slow us down. 
It tends to be the case as we move lower on the spectrum, we pay more attention to emotions. Emotions get a firmer grip on us. And when we are all the way at the bottom of the spectrum–when we are asleep and dreaming–it’s interesting that although we–often we think of dreaming as emotionally neutral except in the rare case of a nightmare or a euphoria dream, and neither of those happen very often–we think of dreams as being sort of gray and neutral. But if you read the biological[?] literature and the sleep-lab literature, you’ll find that most dreams are strongly colored emotionally. And that’s what we would expect. They occur at the bottom of the spectrum. Life becomes more emotional, just as when you are tired you are more likely to lose your temper; you are more likely to lose your self-control–to be cranky, to yell at your kids, or something like that. We are less self-controlled, we are less self-disciplined; we give freer rein to our emotions as we move down spectrum. And that has a good side. It’s not good to yell at your kids. But as you allow your emotions to emerge, you are more likely to remember things that yield new analogies. You are more likely to be reminded in a fresh way of things that you hadn’t thought of together before.•



Pedro Domingos’ book The Master Algorithm takes on many issues regarding machine learning, but as the title implies, it wonders chiefly about the possibility of a unified theory enabling an ultimate learning machine, which, the author recently told Russ Roberts of EconTalk, could perhaps figure out as much as 80% of any problem posed. Can’t say I’m expecting its development in my lifetime.

In one section of the interview, there’s a technical and philosophical exchange between host and guest about creating infantile robots that can grow and learn experientially as human babies do–gradually, with small steps becoming giant leaps. Two points about this section:

  • I believe Domingos is right to say that philosophers who believe “standard models of biology, chemistry, and physics cannot explain human consciousness” are getting ahead of themselves. No one should be shocked if the keys to consciousness are located via knowledge developed within current frameworks. I think that’s actually the likely outcome. We’re not at some sort of “end of science” moment.
  • Machines could theoretically someday possess the type of complicated emotions humans have, or maybe they won’t. It may not matter for some practical purposes. After all, a plane can fly without being a bird. Roberts’ consternation about a sort of robot consciousness sans emotions seems like a visceral and romantic concern on his part, but such a scenario could have profound implications. Not to say that emotions are a fail-safe from destruction–sometimes they get the best of us–but it does seem they’re essential in the long term to truly complex growth, though it’s impossible (for now) to be sure.

The exchange:

Russ Roberts:

So, I’m going to read a somewhat lengthy paragraph that charmed me, from the book. And then I want to ask you a philosophical question about it. So here’s the passage:

If you’re a parent, the entire mystery of learning unfolds before your eyes in the first three years of your child’s life. A newborn baby can’t talk, walk, recognize objects, or even understand that an object continues to exist when the baby isn’t looking at it. But month after month, in steps large and small, by trial and error, great conceptual leaps, the child figures out how the world works, how people behave, how to communicate. By a child’s third birthday all this learning has coalesced into a stable self, a stream of consciousness that will continue throughout life. Older children and adults can time-travel–aka remember things past, but only so far back. If we could revisit ourselves as infants and toddlers and see the world again through those newborn eyes, much of what puzzles us about learning–even about existence itself–would suddenly seem obvious. But as it is, the greatest mystery in the universe is not how it begins or ends, or what infinitesimal threads it’s woven from. It’s what goes on in the small child’s mind–how a pound of gray jelly can grow into the seat of consciousness. 

So, I thought that was very beautiful. And then you imagined something called Robby the Robot, that would somehow simulate the experience of a child and learn from it, in the same way a child learns. So, talk about how Robby the Robot might work; and then I’ll ask my philosophical question.

Pedro Domingos:

Yes. So, there are several approaches to solving the problem of [?]. So, how can we create robots and computers that are as intelligent as people? And, you know, one of them, for example, is to mimic evolution. Another one is to just build a big knowledge base. But in some ways the most intriguing one is this idea of building a robot baby. Right? The existence proof of intelligence that we have as human beings–in fact, if we didn’t have that we wouldn’t even be trying for this. So, the idea of–so the path, one possible path to (AI) artificial intelligence, and the only one that we know is guaranteed to work, right? Is to actually have a real being in the real world learning from experience in the same way that a baby does. And so the ideal is the robot baby is–let’s just create something that has a brain–but it doesn’t have to be at the level of neurons, it’s just at the level of capabilities–that has the same capabilities that the brain, that the mind, if you will, that a newborn baby has. And if it does have those capabilities and then we give it the same experience that a newborn baby has, then two or three years later we will have solved the problem. So, that’s the promise of this approach.
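Domingos’ robot-baby path, learning from raw experience by trial and error, can be caricatured in a few lines of code. This is purely an illustrative sketch, not the architecture he has in mind: the two “actions,” their fixed success patterns, and the running-average update are all invented for the example.

```python
# Toy trial-and-error learner, in the spirit of a "robot baby" discovering
# what works by experience. Everything here (actions, reward patterns) is
# invented for illustration.

# Hidden "world": each action has a fixed pattern of successes (1) and
# failures (0) that the learner can only discover by trying.
outcomes = {"babble": [0, 0, 1, 0, 0], "point": [1, 1, 0, 1, 1]}

estimates = {action: 0.0 for action in outcomes}  # learned value of each action
counts = {action: 0 for action in outcomes}       # how often each was tried

# Exploration phase: alternate between the actions, updating a running
# average of the reward each one has produced so far.
for trial in range(10):
    action = list(outcomes)[trial % 2]
    reward = outcomes[action][counts[action] % 5]
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

# Exploitation: prefer whatever experience says works best.
best = max(estimates, key=estimates.get)
print(best)  # → point
```

The learner ends up valuing “point” at roughly 0.8 and “babble” at roughly 0.2, purely from its own trials; nothing about the world was programmed into it directly, which is the appeal of the approach Domingos sketches.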

Russ Roberts:

So, the thought, the philosophical thought that I had as I was down in the basement the other day with my wife and we were sorting through boxes of stuff that we don’t look at except once a year when we go down in the basement and decide what to throw out and what to keep. And one of the boxes that we keep, even though we never examine it, except when we go down to the basement once a year to go through the boxes, is: It’s a box of stuffed animals that our children had when they were babies. And we just–we don’t want to throw it out. I don’t know if our kids will ever want to use them with their children–if they have children; we don’t have any grandchildren but I think we imagine the possibility that they would be used again. But I think something else is going on there. And if our children were in the basement with us, going through that, and they saw the animal or the stuffed item that they had when they were, say, 2 and a half or 3 years old, that was incredibly precious to them–and of course has no value whatsoever to them right now–they would have, just as we have, as parents, they would have an incredible stab of emotional reaction. A nostalgia. A feeling that I can’t imagine Robby the Robot would ever have. Am I wrong?

Pedro Domingos:

I don’t know. So, this is a good question. There are actually several good questions here. One is: Would Robby the Robot need to have emotions in order to learn? I actually think the answer is Yes. And: Will it have those emotions? I think at a functional level we already know how to put the equivalent of emotions into a robot, because emotions are what guide us. Right? We were talking before about goals, right? Emotions are the way evolution in some sense programmed you to do the right things and not the wrong ones, right? The reason we have fear and pleasure and pain and happiness and all of these things is so that we can choose the right things to do. And we know how to do that in a robot. The technical term for that is the objective function–

Russ Roberts:

Stimulus,–

Pedro Domingos:

Or the utility function. Now, whether at the end of the day–

Russ Roberts:

But it’s not the same. It doesn’t seem the same. Maybe it would be. I don’t know. That’s a tough question.

Pedro Domingos:

Exactly. So, functionally, in terms of the input-output behavior, I think this could be indistinguishable from the robot having emotions. Whether the robot is really having emotions is probably something that we will never know for sure. But again, we don’t know if animals or if even other people have the same emotions that we do. We just give them credit for them because they are similar to us. And I think in practice what will happen, in fact, this is already happening, with all of these chatbots, for example, is that: If these robots and computers behave like they have emotions, we will treat them as if they have emotions and assume that they do. And often we assume that they have a lot more emotions than they do because we project our humanity into them. So, I think at a practical level [?] it won’t make that much difference. There remains this very fascinating philosophical question, which is: What is really going on in their minds? Or in our minds, for that matter. I’m not sure that we will ever really have an answer to that.
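Domingos’ identification of emotion with an objective (or utility) function can be made concrete with a small sketch. This is my own illustration, not code from the book or the interview: the state features (“energy,” “danger,” “novelty”) and the weights are arbitrary stand-ins for the many things an emotion summarizes.

```python
# Minimal sketch of an objective/utility function steering an agent,
# in the sense Domingos describes. Features and weights are invented.

def utility(state):
    """Collapse a complex state into a single score, as an emotion would."""
    return 2.0 * state["energy"] - 3.0 * state["danger"] + 1.0 * state["novelty"]

# Candidate actions, each returning the state it is predicted to produce.
def rest(state):
    return {**state, "energy": state["energy"] + 1, "novelty": 0}

def explore(state):
    return {**state, "danger": state["danger"] + 1, "novelty": 2}

def choose(state, actions):
    """Pick the action whose predicted outcome scores highest."""
    return max(actions, key=lambda action: utility(action(state)))

start = {"energy": 0, "danger": 0, "novelty": 0}
best = choose(start, [rest, explore])
print(best.__name__)  # → rest (utility 2.0 beats explore's -1.0)
```

Changing the weights, say penalizing boredom instead of danger, changes which action “feels” best, which is all the objective function does: it ranks outcomes so the agent can choose, whatever the inner experience may or may not be.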

Russ Roberts:

I’ve raised the question recently on the program about whether consciousness is something which is amenable to scientific understanding. Certain philosophers–David Chalmers, Thomas Nagel–claim, and they are both atheists, that models of evolution and the standard models of biology, chemistry, and physics cannot explain human consciousness. Have you read that work? Have you thought about it at all?

Pedro Domingos:

Yeah. And I think that–I disagree with them at the following level. I think if you fast forward to 50 years from now, we will probably have a very good and very satisfying model of consciousness. It will probably be using different concepts than the ones that people have from the sciences right now. The problem is that we haven’t found the right concepts to pin down consciousness yet. But I think there will come a point at which we do, in the sense that all the psychological and neural correlates of consciousness will be explained by this model. And again, for practical purposes, maybe even for philosophical purposes that will be good. Now, there is, I think, what is often called the hard question of consciousness. Which is: At the end of the day, because consciousness is a subjective experience, you cannot have an objective test of it. So in some sense once you get down to that hard core, consciousness is beyond the scope of science. Unless somebody comes up with something that I don’t quite imagine yet, I think again what will probably happen is that we will get to a point, probably not in the near future–it will be decades from now–where we understand consciousness well enough that we are satisfied with our understanding and we don’t ask ourselves these questions about it any more. And I can find analogies in the history of science where similar things that used to seem completely mysterious–like, life itself used to be completely mysterious. And today it’s not that mysterious any more. There’s DNA (Deoxyribonucleic acid) and there’s proteins and there’s what’s called the central dogma of biology. At the end of the day, the mystery of life is still there. It’s just really not that prominent on our minds any more because we feel like we understand, you know, the essence of how life works. And I think chances are the same thing will happen with consciousness.•



There’s no denying Kevin Kelly is a techno-optimist, something his new book, The Inevitable, speaks to. The Wired cofounder, who returned to Russ Roberts’ podcast, EconTalk, to promote the title, said three years ago when guesting on the program: “We’re constantly redefining what humans are here for.” He’s further developed his thinking on that topic this time around.

I agree with Kelly and Roberts that our new tools and systems (Deep Learning, AI, etc.) will make us better off in the long run (though it will be complicated), but I’m concerned about the near- and medium-term, when industries will likely rise and fall with disquieting regularity and financial headaches may find those who aren’t, say, successful authors or research fellows at the Hoover Institution.

Roberts briefly puzzles over people concerning themselves with technological unemployment at a time when the U.S. unemployment rate hovers around 5%. I don’t think it’s Trumpian to say that percentage doesn’t quite speak to the number of citizens struggling or to long-stagnating wages. Wikipedia and smartphones are wonderful, but they’re not quite a substitute for a degree of economic security.

Two exchanges are embedded below.


Russ Roberts:

Just to play pessimist for a minute: We think about artificial intelligence, for example, today–and you mention both these kinds of things in your book–is it really that exciting that our thermostat gets to know us? Is it really that exciting that my car beeps at me when I’m going out of my lane or can parallel park–which is great for my 16-year-old worried about his driver’s license test? But these are not transformative applications.

Kevin Kelly:

Yeah. This, too, seemed at first very invisible. Well, you might not recall, but in the 1920s or something Sears Roebuck, the mail-order catalog company, was selling the Home Motor. And the Home Motor was this immense, 15-pound motor that was going to sit in the center of your home and automate all the appliances and whatnot in your home. That industrial revolution thing worked because it became invisible–we don’t have the big motor turning everything; we have like 50 motors in our homes that became invisible. So, to some extent, this stuff is working because we don’t see it. Because it’s not something that is visible. And it succeeds to the extent that it transforms while we don’t see it. So, that’s one thing. And the second thing I would say about that is that we’re sitting on this huge wave of the First Industrial Revolution, which brought this incredible prosperity to us all. In the agricultural, hunter-gatherer era we had to do everything with human muscle or with animal muscle, animal power. We invented something called ‘synthetic artificial power.’ And we harnessed fossil fuels, and carbon fuels, to give us additional power that we couldn’t produce ourselves. And all that we see is basically a result of this artificial power. So when you drive down the road in your car, you have 250 horses working for you at that moment. Just turn a little knob, you’ve got 250 horses powering you down the road to do whatever you want to do. And then we distributed that power through a grid to every home and farm in the country; and so farmers could employ that artificial power to do all kinds of things; and factories could use that artificial power. And everything that we have built around us was because of the artificial power that we made. Well, now, we’re going to do the same thing with artificial intelligence.
So, instead of–in addition to having 250 horses driving you down the road, you are going to have 250 minds–which we are going to get from AI, from artificial intelligence. And we’re also going to put that onto a grid and distribute it around the country so that any farmer could just purchase as much artificial power and artificial intelligence as they want, to do things. And just as that artificial power was this incredibly transformative, incredibly progressive, incredibly powerful platform to give us all that we enjoy now, these artificial minds that we are going to get on top of the artificial power are going to transform us in an equal way: it’s going to touch everything that we do. And I think actually it will transform us more than that first Industrial Revolution did.•


Russ Roberts:

A lot of people worry about the impact of artificial intelligence on employment. We’ve talked about this–it’s now becoming a recurring theme. And of course it’s ironic we’re having this theme when unemployment in the United States is 5%. But, put that to the side. I think people are legitimately worried about what might be replaced by what. And you talk about it at length. I just wondered about two points you make. You talk about the fact that there are jobs that we didn’t know we wanted done. I’m going to read a little excerpt here:

Before we invented automobiles, air-conditioning, flat-screen video displays, and animated cartoons, no one living in ancient Rome wished they could watch cartoons while riding to Athens in climate-controlled comfort. One hundred years ago not a single citizen of China would have told you that they would rather buy a tiny glass slab that allowed them to talk to faraway friends before they would buy indoor plumbing, but every day peasant farmers in China without plumbing purchase smart phones. Crafty AIs embedded in first-person-shooter games have given millions of teenage boys the urge, the need, to become professional game designers–a dream that no boy in Victorian times ever had. In a very real way our inventions assign us our jobs. 

You want to add anything to that?

Kevin Kelly:

I think maybe I’d say it this way: Our jobs into the future will be to invent jobs that we can automate and give to the robots. So, we’re on a kind of a path, on an escalator–we’re going to keep inventing new things that we desire to do; we’ll figure out how to do them, and once we figure out how to do them we’ll automate them–basically giving them to the AIs. So, in a certain sense our job is to invent jobs that we can automate. And I think that part of inventing jobs may be our job–a human job–for a while, because we have better access to our latent desires than AIs do. Although eventually even perhaps that job is at least assisted by AIs.

Russ Roberts:

I’m going to read another quote which says what you just said, but it’s so beautiful. You say,

When robots and automation do our most basic work, making it relatively easy for us to be fed, clothed, and sheltered, then we are free to ask, “What are humans for?” Industrialization did more than just extend the average human lifespan. It led a greater percentage of the population to decide that humans were meant to be ballerinas, full-time musicians, mathematicians, athletes, fashion designers, yoga masters, fan-fiction authors, and folks with one-of-a-kind titles on their business cards. With the help of our machines, we could take up these roles; but of course, over time, the machines will do these as well. We’ll then be empowered to dream up yet more answers to the question “What should we do?” It will be many generations before a robot can answer that.•


In a recent episode of EconTalk, host Russ Roberts invited journalist Adam Davidson of the New York Times to discuss, among other things, his recent article “What Hollywood Can Teach Us About the Future of Work.” In this “On Money” column, Davidson argues that short-term Hollywood projects–a freelance, piecemeal model–may be a wave of the future. The writer contends that this is better for highly talented workers and worrisome for the great middle. I’ll agree with the latter, though I don’t think the former is as uniformly true as Davidson believes. In life, stuff happens that talent cannot save you from, that the market will not provide for.

What really perplexed me about the program was the exchange at the end, when the pair acknowledges being baffled by Uber’s many critics. I sort of get it with Roberts. He’s a Libertarian who loves the unbridled nature of the so-called Peer Economy, luxuriating in a free-market fantasy that most won’t be able to enjoy. I’m more surprised by Davidson calling Uber a “solution” to the crisis of modern work, in which contingent positions have replaced full-time posts in the aftermath of the 2008 financial collapse. You mean it’s a solution to a problem it’s contributed to? It seems a strange assertion given that Davidson has clearly demonstrated his concern about the free fall of the middle class in a world in which rising profits have been uncoupled from hiring.

The reason why Uber is considered an enemy of Labor is because Uber is an enemy of Labor. Not only are medallion owners and licensed taxi drivers (whose rate is guaranteed) hurt by ridesharing, but Uber’s union-less drivers are prone to pay decreases at the whim of the company (which may be why about half the drivers became “inactive”–quit–within a year). And the workers couldn’t be heartened by CEO Travis Kalanick giddily expressing his desire to be rid of all of them before criticism intruded on his obliviousness, and he began to pretend to be their champion for PR purposes.

The Sharing Economy (another poor name for it) is probably inevitable and Uber and driverless cars are good in many ways, but they’re not good for Labor. If Roberts wants to tell small-sample-size stories about drivers he’s met who work for Uber just until their start-ups receive seed money and pretend that they’re the average, so be it. The rest of us need to be honest about what’s happening so we can reach some solutions to what might become a widespread problem. If America’s middle class is to be Uberized, to become just a bunch of rabbits to be tasked, no one should be satisfied with the new normal.

From EconTalk:

Russ Roberts:

A lot of people are critical of the rise of companies like Uber, where their workforce is essentially piece workers. Workers who don’t earn an annual salary. They’re paid a commission if they can get a passenger, if they can take someone somewhere, and they don’t have long-term promises about, necessarily, benefits. They have to pay for their own car, provide their own insurance, and a lot of people are critical of that, and my answer is, Why do people do it if it’s so awful? That’s really important. But I want to say something slightly more optimistic about it, which is that a lot of people like working for Uber or working for a Hollywood project for six months, because when it’s over they can take a month off or a week off. A lot of the people I talk to who drive for Uber are entrepreneurs, they’re waiting for their funding to come through, they’re waiting for something to happen, and they might work 80 hours a week while they’re waiting, and when the money comes through or when their idea starts to click, they’re gonna work five hours a week, and then they’ll stop, and they don’t owe any loyalty to anyone, they can move in and out of work as they choose. I think there’s a large group of people who really love that. And that’s a feature for many people, not a bug. What matters is–besides your satisfaction and how rewarding your life is emotionally in that world–your financial part of it depends on what you make while you’re working. It’s true it’s only sort of part-time, but if you make enough, and evidently many Uber drivers are former taxi drivers who make more money with Uber, for example, if you make enough, it’s great. So it seems to me that if we move to a world where people are essentially their own company, their own brand, the captain of their own ship rather than an employee, there are many good things about that as long as they have the skills that are in demand that people are willing to pay for. Many people, unfortunately, will not have those skills. 
It’s a serious issue, but for many people those are enormous pluses, not minuses.

Adam Davidson:

Yes, I agree with you. Thinking of life as an Uber driver with that as your only possible source of income, I would guess that might be tough. Price competition is not gonna be your friend. Thinking about a world where you have a whole bunch of options, including Task Rabbit, and who knows what else, Airbnb, to earn money in a variety of ways, that’s at various times and at various levels of intensity, that strikes me as only good. If we could shove that into the 1950s, I think you would have seen a lot more people leaving that corporate model and starting their own businesses or spending more time doing more creative endeavors. That all strikes me as a helpful tool. It does sound like some of the people who work at Uber have kind of been jerks, but it does seem strange to me that some people are mad at the company that’s providing this opportunity. It is tough that lots of Americans are underemployed and aren’t earning enough. That’s a bad situation, but it is confusing to me that we get mad at companies that are providing a solution.•


Charter cities don’t work very often, probably because top-down design is antithetical to human nature, trial and error needing to be a more gradual and granular process. The stately pleasure-dome may work for Kubla Khan but not so much for you and me. Some academics love placing these planned utopias at the heart of bull sessions, building this city or tearing down that one in their heads. It can be disquieting to listen to, even if the intentions are good. In a new EconTalk episode, host Russ Roberts and NYU economist Paul Romer had such a talk. Two excerpts follow.

__________________________

Paul Romer:

You can think of a charter city as a kind of a zone, but a big one, big enough to encompass an entire city. One of the questions that you confront when you propose new zones is: What fraction of existing zones have succeeded, in any sense? Most zones fail. And so we have to ask, Why is that? It could be that starting a zone is kind of like starting a startup firm: even if you do it right there’s a high probability that it won’t succeed. But you keep doing it because the ones that do succeed are worth enough. But I think there’s another problem with zones around the world, which is that they fail in ways that you could have predicted when you started them, because they took this form that I’m calling a ‘concession zone.’ So, what’s the difference? A concession zone is a zone where you do something differently as a kind of a concession, a gift to some favored party. So, you give a tax holiday or some other kind of favored treatment to people who get those favors through mechanisms that are pretty easy to forecast. The test of whether something is a reform, or a reform zone, is: Do you want it to extend to the rest of the country, and do you want it to last forever? So, for example, a tax holiday which is just for firms in a zone and just for a finite amount of time is clearly a concession. There’s no sense that this is something you’d want to extend to every firm in the country and extend forever, because typically they have no plan for how they would recover the tax revenue that they’d give up that way. So the thing to ask, in small or big zones all over the world, is: Are governments using these to try out reforms that they want to spread throughout the rest of the country and have last forever, or are they just using them to give some concessions? And if they are just giving some concessions, the ex ante probability that the zone will do anything good for the country is very low.

Russ Roberts: 

Now, the way I originally understood the idea of a charter city is you have a system–you have a country, excuse me–where the governance of the country is failing in some dimension, and it’s very difficult under that situation for the government to credibly commit to reforming itself. And what a charter city would do is import, essentially, the institutions of a different country, which is more likely and more credibly able to make promises about property rights, the rule of law, say, crime. And in this way you could encourage foreign investment, or any kind of investment, in that city that you wouldn’t be able to attract if you were stuck under the governance of the host country. That idea is only one kind of charter city or one kind of reform, correct? Because you’re really talking about something more like a laboratory where trial and error could be used to assess effectiveness. 

Paul Romer: 

Yeah. I think the general concept here is that you use the decision to opt in to a new geographic area as an opportunity to implement reforms of any sort, any type of reform, that might be controversial if you tried to implement it on a group of people who were already in a particular location. Think of it as a way to try something new without any coercion–try something new where the people who live under this new regime choose voluntarily to be part of it. And the thing that you try to do differently or try to do new can take many different forms; and different countries at different stages of development might try many different kinds of reforms or just innovations in their systems of rules. So, the one you were describing, where the reform you want to undertake is one where you import government services from outside, I think that’s in practice a very important possible type of reform for poor countries. But the more general concept would allow many different types of reforms. You could even consider a new reform zone/city in the United States where you might do something like say, well, every vehicle in this city has to have autonomous control instead of driver control. Or you might say, we’re going to ban any use of gasoline and diesel and just rely on natural gas and build the infrastructure for that. So, there are things you can try in the new setting that would be very difficult, from a technical point of view and a political point of view, to try in an existing setting; and we might learn a lot that generalizes from running an experiment like that.

Russ Roberts: 

Well, what’s exhilarating about it is it allows the choice of a city to be similar to my choice of, say, music player. Right? Nobody sticks me with a music player. I go out and choose the one I want. I choose the phone I want. I choose the kind of house I want to live in, and I choose the books I want to read. I can choose the government I want but the costs of that choice are very different, right? 

Paul Romer: 

Yeah.

Russ Roberts: 

Because I can move.

Paul Romer: 

Yeah. When I teach about cities these days I tell students to think of cities as intermediate entities between the nation and a business. So, I don’t think a city is identical to a business. And I think there are some city functions that we couldn’t privatize to a corporate governance accountability kind of model. Policing is the test case on this. I think very few people would actually voluntarily choose to go someplace where there’s a police force and a judicial system that could lock you up that’s run by a corporate entity. And I think that doesn’t change whether it’s a nonprofit or a for-profit corporate entity. So, what we’re doing is using some of the same mechanisms for cities, like choice by consumers or users–we’re using choice, but it’s on an entity which is still likely to have some form of government that’s subject to some form of political accountability. And what this reform-zone idea does is more fully exploit the possibilities of this thing that lies between the nation and the business.•

__________________________

Russ Roberts:

So, if you say to me, ‘Hey, we’re starting this new town. It’s fabulous. It’s going to have driverless–this is the town that me and 17 other people would want to live in. It’s got driverless cars, natural gas fuel, no minimum wage laws–a whole range of, say, attractive things. It’s got clean air; it’s fabulous.’ But then you say: ‘But where is it?’ ‘Well, it’s in the middle of Nebraska.’ ‘But I don’t want to live in the middle of Nebraska.’ So in a way, all the good spots have been taken in the United States. That’s why there are cities there already. So, one of the challenges of thinking about Shenzhen and India and China, where the population is growing so fast: it’s going to be very appealing sometimes to leave a city for a new place. It’s a little more challenging in a country like the United States to imagine where this magical city of Oz would be.

Paul Romer:

Yeah. Well, I think we have to use a little bit of imagination. I’m mostly being facetious here, but one thing I tell people, having visited Long Beach, California just once, is that we should think about Long Beach as a teardown. You know, it’s a really ugly city, but in a beautiful location.

Russ Roberts:

Uuuh, uhhh, yeah–

Paul Romer:

We ought to just tell them to tear down the whole city. And then if you build like a Manhattan in Long Beach–if you could get like Manhattan densities and street activity and excitement, with California weather, man, that would be a successful real estate project.•

Tags: ,

Gurgaon in India is a libertarian wet dream, a high-tech private city that grew from nothing, knows little or no regulation and has no infrastructure. You want your sewage taken away? Hire a firm to do it. You want to start a business? Don’t worry about pesky rules. You want roads to drive on? Not so much. You’re worried about environmental protection? Wait, what? It’s the free market extrapolated to an extreme.

On the latest EconTalk episode, Russ Roberts and fellow economist Alex Tabarrok have a lively discussion about this private city and others that favor “voluntary action.” While the guest is enthusiastic about Gurgaon as something of a model for the next global cities, he acknowledges its faults (“The roads situation is terrible, a disaster”), though he often waves these shortcomings away by saying they’re no worse than in other parts of India. The idea of using some sort of Disneyland approach to building private cities on a large scale, which Tabarrok seriously suggests, seems a little cuckoo to me. An excerpt:

Alex Tabarrok:

Between just 2015 and 2030, in India, the urban population is expected to increase by over a quarter of a billion people. So, just think about that. What that means is that during the next 15 years, even taking into account the reduced infrastructure in India, India is going to need on the order of a new Chicago every single year for the next 15 years. At least. And then continuing on into the future. So we have, around the world, massive increases in the urban population. And most of this is happening in the developing world. And the developing world, of course, is struggling with corruption and with poor governments and with a lack of information. And you know, we just can’t expect governments to work very well in these countries. So how are we going to plan? We can hope, right, that cities will be planned and laid out and the sewage lines will be planned for the future and everything will be divided neatly. You know, the way an urban planner in theory would do it. But that’s just not realistic. So, what can we expect? Are there other ways of doing this? And Gurgaon is one possible alternative route, which involves, you know, leaving a whole lot to the private sector.

Russ Roberts:

When you talk about that increasing urbanization, say, in India, the most likely way that’s going to happen is that the existing cities in India are going to get larger. And they are going to have increasing stress on their current infrastructure systems, which are not very effective, from what I understand, already. And so, the likely result of this urbanization and population growth is going to be muddling through with a big set of imperfections. It seems to me China is taking a different approach. China is saying: We need a bunch of new cities. So they are just building them. They are building cities out in the middle of relatively nowhere, from scratch. With lots of buildings, lots of infrastructure, from the top down. And I did read today–I didn’t get to click through on the tweet, but somebody tweeted that the Chinese, some Chinese officials were bidding in auctions to keep land prices high in some of the cities that they are worried about. This is not likely to be a successful strategy for creating value. But China has taken a different approach. It might be a lot better.

Alex Tabarrok:

So, current urban areas are certainly going to grow. But there’s also no question that we’re going to need entirely new cities–in India and China and elsewhere. And just look at the United States. Even in the United States, which has long been majority-urbanized, we’ve seen the growth of what are really essentially new cities. Houston, for example, has grown in the past 50 years from 100,000 to, you know, several million people. And so forth. Think about the Industrial Revolution in Great Britain: the creation of new cities like Birmingham and so forth. It’s not just London getting bigger, in other words–although that happened as well. So, I want to put China aside for a minute, and maybe come back and talk about that. But I want to stay on Gurgaon for a little bit longer, because I want to talk about what has worked and what hasn’t.

Russ Roberts:

Yeah, go ahead.

Alex Tabarrok:

So, fire protection in Gurgaon works really well. What has happened is these private developers buy a chunk of land, and within that chunk of land you have excellent infrastructure, excellent delivery of services. The developers will build office parks, and within the office parks you have sewage. But the sewage doesn’t go anywhere. Once it leaves the office park–well, sometimes it will go to a small treatment plant. You’ll also have electricity–electricity 24 hours a day, but provided with diesel. Which is inefficient; you don’t get all the economies of scale. You do get excellent fire protection. It’s pretty interesting: Gurgaon has India’s only private fire department. And it’s the only fire department really in all of India which has equipment that can reach the top of these skyscrapers.

Russ Roberts:

Good idea.

Alex Tabarrok:

Yeah, exactly. The public system is a complete disaster. You also have delivery of transportation. These private firms hire taxis–sort of like Uber, but a totally private system–to ferry their workers all over the city.

Russ Roberts:

Yeah. By the way, it’s important to mention: we’ve had some discussions of private buses here, in Chile, with Mike Munger. But of course many firms in Silicon Valley outside of San Francisco bus workers into their companies and have major, significant private bus companies.

Alex Tabarrok:

Exactly. It’s very similar.

Russ Roberts:

They are running them themselves. I don’t think they are hiring them out. But they are not public.

Alex Tabarrok:

Exactly. It’s very similar to that.•

Tags: ,

Google’s translated text isn’t perfect, but it’s far better than I could do. Of course, those algorithms aren’t conscious of their accomplishment, while I’m aware of my shortcoming. Erasing that distinction isn’t a bridge too far, but it’s going to take a long time to cross. EconTalk host Russ Roberts did an excellent podcast this week with cognitive scientist Gary Marcus about the future of AI. A couple of excerpts follow.

________________________________

Russ Roberts:

Now, to be fair to AI and those who work on it: I don’t know who, but someone made the observation–and it’s a thoughtful observation–that any time we make progress–well, let me back up. People say, ‘Well, computers can do this now, but they’ll never be able to do xyz.’ Then, when they learn to do xyz, they say, ‘Well, of course. That’s just an easy problem. But they’ll never be able to do what you’ve just said’–say–‘understand the question.’ So, we’ve made a lot of progress, right, in a certain dimension. Google Translate is one example. Siri is another example. Waze is a really remarkable direction-generating GPS (Global Positioning System) thing for helping you drive. They seem sort of smart. But as you point out, they are very narrowly smart. And they are not really smart. They are idiot savants. But one view says the glass is half full; we’ve made a lot of progress. And we should be optimistic about where we’ll head in the future. Is it just a matter of time?

Gary Marcus:

Um, I think it probably is a matter of time. It’s a question of whether we are talking decades or centuries. Kurzweil has talked about having AI about 15 years from now–a true artificial intelligence. And that’s not going to happen. It might happen in a century. It might happen somewhere in between. I don’t think that it’s in principle an impossible problem. I don’t think that anybody in the AI community would argue that we are never going to get there. I think there have been some philosophers who have made that argument, but I don’t think that the philosophers have made that argument in a compelling way. I do think eventually we will have machines that have the flexibility of human intelligence. Going back to something else that you said, I don’t think it’s actually the case that the goalposts are shifting as much as you might think. So, it is true that there is this old thing that whatever used to be called AI is now just called engineering, once we can do it.

________________________________

Russ Roberts:

Given all of that, why are people so obsessed right now–this week, almost, it feels like–with the threat of super AI, or real AI, or whatever you want to call it, the Musk, Hawking, Bostrom worries? We haven’t made any progress–much. We’re not anywhere close to understanding how the brain actually works. We are not close to creating a machine that can think, that can learn, that can improve itself–which is what everybody’s worried about or excited about, depending on their perspective, and we’ll talk about that in a minute. But, why do you think there’s this sudden uptick, spike in focusing on the potential and threat of it right now?

Gary Marcus:

Well, I don’t have a full explanation for why people are worried now. I actually think we should be worried. I don’t understand exactly why there was such a shift in the public view. So, I wanted to write about this for The New Yorker a couple of years ago, and my editor thought, ‘Don’t write this. You have this reputation as a sober scientist who understands where things are. This is going to sound like science fiction. It will not be good for your reputation.’ And I said, ‘Well, I think it’s really important and I’d like to write about it anyway.’ We had some back and forth, and I was able to write some about it–not as much as I wanted. And now, yeah, everybody is talking about it. I don’t know if it’s because Bostrom’s book is coming out, or because there’s been a bunch of hype–AI stories that make AI seem closer than it is–so it’s more salient to people. I’m not actually sure what the explanation is. All that said, here’s why I think we should still be worried about it. If you talk to people in the field, I think they’ll actually agree with me that nothing too exciting is going to happen in the next decade. There will be progress and so forth, and we’re all looking forward to the progress. But nobody thinks that 10 years from now we’re going to have a machine like HAL in 2001. However, nobody really knows, downstream, how to control the machines. So, the more autonomy that machines have, the more dangerous they are. If I have an Angry Birds app on my phone and I’m not hooked up to the Internet, the worst that’s going to happen if there’s some coding error is maybe the phone crashes. Not a big deal. But if I hook up a program to the stock market, it might lose me a couple hundred million dollars very quickly–if I had enough invested in the market, which I don’t. But some company did in fact lose a hundred million dollars in a few minutes a couple of years ago, because a program with a bug that is hooked up and empowered can do a lot of harm. 
I mean, in that case it’s only economic harm; and [?] maybe the company went out of business–I forget. But nobody died. But then you raise things another level: if machines can control the trains–which they can–and so forth, then machines, either deliberately or unintentionally–or maybe we don’t even want to talk about intentions–if they cause damage, can cause real damage. And I think it’s a reasonable expectation that machines will be assigned more and more control over things. And they will be able to do more and more sophisticated things over time. And right now, we don’t even have a theory about how to regulate that. Right now, anybody can build any kind of computer program they want. There’s very little regulation. There’s some, but very little regulation. It’s, in little ways, like the Wild West. And nobody has a theory about what would be better. So, what worries me is that there is at least potential risk. I’m not sure it’s as bad as, like, Hawking said. Hawking seemed to think it’s like night follows day: they are going to get smarter than us; they’re not going to have any room for us; bye-bye humanity. And I don’t think it’s as simple as that. That the world will eventually have machines that are smarter than us, I take for granted. But they may not care about us; they might not wish to do us harm–you know, computers have gotten smarter and smarter, but they haven’t shown any interest in our property, for example, or our health, or whatever. So far, computers have been indifferent to us.•

Tags: , ,

Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies is a somewhat dry read with a few colorful flourishes, but its ideas have front-burnered the existential threat of Artificial Intelligence, causing Stephen Hawking, Elon Musk and other heady thinkers to warn of the perils of AI, “the last invention we will ever need to make,” in Bostrom-ian terms. The philosopher joined a very skeptical Russ Roberts for an EconTalk conversation about future machines so smart they have no use for us. Beyond playing the devil’s advocate, the host is perplexed by the idea that superintelligence can make the leap beyond our control, that it can become “God.” But I don’t think machines need be either human or sacred to slip from our grasp in the long-term future, to have “preferences” based not on emotion or intellect but merely on the result of deep learning that was inartfully programmed by humans in the first place. One exchange:

“Russ Roberts: 

So, let me raise, say, a thought that–I’m interested if anyone else has raised this with you in talking about the book. This is a strange thought, I suspect, but I want your reaction to it. The way you talk about superintelligence reminds me a lot of how medieval theologians talked about God. It’s unbounded. It can do anything. Except maybe create a rock so heavy it can’t move it. Has anyone ever made that observation to you, and what’s your reaction to that?

Nick Bostrom:

I think you might be the first, at least that I can remember.

Russ Roberts: 

Hmmm.

Nick Bostrom: 

Well, so there are a couple of analogies, and a couple of differences as well. One difference is we imagine that a superintelligence here will be bounded by the laws of physics, which can be important when we are thinking about how it might interact with other superintelligences that might exist out there in the vast universe. Another important difference is that we would get to design this entity. So, if you imagine a pre-existing superintelligence that is out there and that has created the world and that has full control over the world, there might be a different set of options available to us humans in deciding how we relate to that. But in this case, there are additional options on the table in that we actually have to figure out how to design it. We get to choose how to build it.

Russ Roberts:

Up to a point. Because you raise the specter of us losing control of it. To me, it creates–inevitably, by the way, much of this is science fiction, movie material; there’s all kinds of interesting speculations in your book, some of which would make wonderful movies and some of which maybe less so. But to me it sounds like you are trying to question–you are raising the question of whether this power that we are going to unleash might be a power that would not care about us. And it would be the equivalent of saying, of putting a god in charge of the universe who is not benevolent. And you are suggesting that in the creation of this power, we should try to steer it in a positive direction.

Nick Bostrom: 

Yeah. So in the first type of scenario which I mentioned, where you have a singleton forming because the first superintelligence is so powerful, then, yes, I think a lot will depend on what that superintelligence would want. And the generic [?] there, I think, is not so much that you would get a superintelligence that’s hostile or evil or hates humans. It’s just that it would have some goal that is indifferent to humans. The standard example being that of a paper clip maximizer. Imagine an artificial agent whose utility function is, say, linear in the number of paper clips it produces over time, but which is superintelligent, extremely clever at figuring out how to mobilize resources to achieve this goal. And then you start to think through: how would such an agent go about maximizing the number of paper clips that will be produced? And you realize that it will have an instrumental reason to get rid of humans, inasmuch as humans might try to shut it off. And it can predict that there will be many fewer paper clips in the future if it’s no longer around to build them. So that would already create a side effect, an incentive for it to eliminate humans. Also, human bodies consist of atoms–a lot of juicy[?] atoms that could be used to build some really nice paper clips. And so again, as a side effect, it might have reasons to transform our bodies and the ecosphere into things that would be more optimal from the point of view of paper clip production–presumably, space probe launchers that are used to send out probes into space that could then transform the accessible parts of the universe into paper clip factories or something like that. If one starts to think through possible goals that an artificial intelligence can have, it seems that almost all of those goals, if consistently maximally realized, would lead to a world where there would be no human beings and indeed perhaps nothing that we humans would accord value to. 
And it only looks like a very small subset of all goals, a very special subset, would be ones that, if realized, would have anything that we would regard as having value. So, the big challenge in engineering an artificial motivation system would be to try to reach into this large space of possible goals and take out ones that would actually sufficiently match our human goals, that we could somehow endorse the pursuit of these goals by a superintelligence.”

Tags: ,

I was disappointed when I first played the new EconTalk podcast, which featured host Russ Roberts interviewing Capital in the Twenty-First Century author Thomas Piketty; I simply couldn’t understand the guest due to his French accent (or my American ears). Thankfully, the program is transcribed, and it makes for a fascinating read. The libertarian host and his politically opposed guest go at it in an intelligent way on all matters of wealth creation and distribution.

One argument that Roberts makes always galls me because I think it’s intellectually dishonest: He says that really innovative people (e.g., Steve Jobs and Bill Gates) deserve the huge money they make, implying that most of the wealth in the country is concentrated among such people. That’s not so. They’re outliers, extreme exceptions invoked to argue a rule.

There are also the Carly Fiorinas of the world, who run formerly great companies like Hewlett-Packard into the ground and make a soft landing with a ginormous golden parachute just before thousands of workers are laid off. If you want to say she’s equally an outlier, feel free, but the majority of CEOs in the U.S. aren’t great innovators. They’re stewards being compensated like innovators, collecting generous “royalties” on someone else’s ideas.

One excerpt from the show on this topic:

“Russ Roberts:

I’m just trying to get at the mechanics, because I think it matters a lot for why inequality has risen. So, for example, if somebody has gotten wealthy because they’ve been able to be bailed out using my tax dollars, then I would resent that. But if somebody is wealthy because they’ve created something marvelous, then I don’t resent it. And my argument is that when we look at the Forbes 400, or the top 1%, for many of the people in there, their incomes, their wealth, have risen at a greater rate than the economy as a whole not because they are exploiting people, not because of corporate governance, but because of an increase in globalization that allows people to capture–make more people happy. Make more people–provide more value. My favorite example is sports. Lionel Messi–the great soccer player, the great footballer–makes about 3 times what Pele made in his best earning years, 40 years ago. That’s not because Messi is a better soccer player. He’s not. Pele, I think, is probably a better soccer player. But Messi reaches more people, because of the Internet, because of technology and globalization. You can still argue that he doesn’t need $65 million a year and you should tax him at high tax rates. But I think as economists we should be careful about what the causal mechanism is. It matters a lot.

Thomas Piketty:

Oh, yes, yes, yes. But this is why my book is long–because I talk a lot about this mechanism. And I talk a lot about the entrepreneur, and the reason there is a lot of entrepreneurial wealth around; my point is certainly not to deny this. My point is twofold. First, even if it was 100% entrepreneurial wealth, you don’t want to have the top growing 4 times faster than the average. Even if there were complete mobility from one year to the other, you know, it cannot continue forever; otherwise the share of the middle class in national wealth goes to 0%, and you know, 0% is really very small. So that would be too much. And point number 2 is that when you actually look at the dynamics of top wealth holders, you know, it’s really a mixture: you have entrepreneurs, but you also have sons of entrepreneurs; you also have ex-entrepreneurs who don’t work any more but whose wealth is rising as fast, and sometimes faster, than when they were actually working. It’s a very complicated dynamic. And also be careful, actually, with Forbes’s rankings, which are probably even underestimating the rise of top wealth holders; you know, there are a lot of problems accounting for inherited diversified portfolios. It’s a lot easier to spot people who have created their own company and who actually want to be in the ranking, because usually they are quite proud of it, and maybe rightly so, than to spot the people, you know, who just inherited the wealth. And so I think this data source is very biased in the direction of entrepreneurial wealth. But even if you take it as perfect data, you will see that you have a lot of inherited wealth. You know, look: I give this example in the book, which is quite striking. The richest person in France, and actually one of the richest in Europe, is Liliane Bettencourt. Actually, her father was a great entrepreneur. 
Eugene Schueller founded L’Oreal, number 1 cosmetics in the world, with lots of fancy products to have nice hair; this is very useful, this has improved the world welfare by a lot.

Russ Roberts: 

Pleasant. It’s nice.

Thomas Piketty:

The only problem is that Eugene Schueller created L’Oreal in 1909. And he died in the 1950s, and you know, she has never worked. What’s interesting is that her fortune, between the [?], between 1990 and 2010, has increased exactly as much as that of Bill Gates. She has gone from $5 to $30 billion, while Bill Gates has gone from about $10 to $60 billion. It’s exactly the same proportion. And you know, in a way, this is sad, because of course we would all love Bill Gates’ wealth to increase faster than that of Liliane. Look, why would I–I’m not trying to–I’m just trying to look at the data. And when you look at the data, you see that the dynamics of wealth that you mention are not only about entrepreneurs and merit; it’s always a complicated mixture. You have oligarchs who are seated on a big pile of oil, and you know, I don’t know how much of it is their labor and talent, but some of it is certainly direct appropriation. And once they are seated on this pile of wealth, the rate of return that they are getting by paying tons of people to make the right investments with their portfolio can be quite impressive. So I think we need to look at these dynamics in an open manner. And when Warren Buffett says, ‘I should not be paying less tax than my secretary,’ I think he has a valid point. And I think the idea that we are going to solve this problem only by letting these people decide how much they want to give individually is a bit naive. I believe a lot in charitable giving, but I think we also need collective rules and laws in order to determine how each one of us is contributing to tax revenue and the common good.

Russ Roberts:

Well, the share contributed by the wealthy in the United States is relatively high. You could argue it should be higher. As you would point out, I don’t really have a model to know what that would be. But the real question for me is the size of government. If there’s a reason for it to be larger, if money can be spent better by the government, that would be one thing. And again, the other question is what should be the ideal distribution of the tax burden.”

Tags: ,

Go here to listen to a really good Econtalk discussion between economists Russ Roberts and Mike Munger about the sharing economy. Uber and Airbnb certainly provide improved offerings (though not always a lower price), but they also skirt tax and regulatory rules. It’s pretty clear that consumers want a peer-to-peer economy, but there are consequences for those who’ve adhered to traditional regulations. What if you spent a million dollars on a NYC taxi medallion a few years ago only to find out the value of your purchase has cratered (which hasn’t happened yet but potentially could) because of Uber and Lyft and the like? These companies have improved the transportation market, they’ve innovated ways for consumers to connect to cabs, but they aren’t playing by the rules.

So here’s the question: What happens to all parties when the rules have changed in practice but not (yet) on paper? Munger thinks New York will ban Uber, but it’s hard to believe those market forces will be constrained for very long. Nor should they be, really. One passage from the discussion:

Russ Roberts:

We should explain. A medallion is–

Mike Munger:

A license.

Russ Roberts:

It’s a license that allows you to, in the case of a cab company, to pick up a stranger on the street who is raising his hand, saying, ‘Taxi’. There has always been an out for limos. You can always call a limo service to your house. I don’t think they need the same–they don’t have the exact same regulatory structure. But certainly, it is against the law in almost every city in America to cruise around and offer to pick up somebody who is raising his or her hand looking for a taxi and act like a taxi. And what Uber has done is be a little bit different. Sort of like that, but a little bit different. And that’s what the regulatory issue is.

Mike Munger:

Yeah. It’s much harder for the police. You don’t have to raise your hand, now. You just press a button on your phone unobtrusively. And the police don’t know. For all they know, it’s your friend picking you up at the airport.

Russ Roberts:

But, I think you exaggerate slightly. So, the medallion–now medallions have sold recently for as much as a million dollars.

Mike Munger:

In New York.

Russ Roberts:

In New York. Despite the Chicago story. So, there are people who are still investing in the right to be a taxi cab driver, either because they think that Uber is not as important as we do, or they think that Uber will be stopped and shut down and will not be a competitive force.

Mike Munger:

I predict that Uber will be stopped and shut down.

Russ Roberts:

Okay, I’m going to go against you there. I’m going to disagree with you. It is under tremendous regulatory pressure. Pittsburgh just announced–

Mike Munger:

I just meant in New York. In New York City. I just think that the people who made that, are making a good bet. It’s too easy to make a sting operation.

Russ Roberts:

Okay. We’ll see. But I do think that–the question isn’t that–I don’t think that Uber is illegal right now. It’s a gray area. Pittsburgh has just ruled that it must comply with the Pittsburgh Utility Council’s, or Pennsylvania Utility Council’s regulations. In Europe there’s tremendous pressure to shut down Uber, not allow them. But remember, there is tremendous pressure from riders. Who like it. And I think–I want to make sure we make something clear here. There are two aspects to this attractiveness of Uber. One of them–I don’t think it’s so much the price. I don’t think the price is that much different. I think it’s the convenience and power of it, on a calm, normal day; and I think it’s its ability to change price on the fly, using a fairly sophisticated algorithm.

Mike Munger:

But the taxi companies can mimic all of that. They’ll do it within a month. It’s easy to do. If that were the reason, that’s easy to do. It’s basically open-source software.

Russ Roberts:

I don’t know about that. Um, you are suggesting then that the cab company doesn’t offer me a web, a phone-based opportunity to hail a cab because they don’t need to? Because they have a monopoly?

Mike Munger:

Yeah.

Russ Roberts:

I don’t know. I think the software is what gives Uber its comparative advantage.

Mike Munger:

It’s interesting that the taxi companies are so awful at this. So, if nothing else, Uber may force the taxi companies to improve the way that you connect with a taxi. But I think the cost advantage is really a problem, because it actually raises a lot of questions about the nature of due process. Suppose that we don’t take any action and the value of these medallions falls to zero. Are we obliged to offer compensation, because we in effect made a regulatory decision that is a taking? This property right, this medallion, had significant value. We made a choice, without due process, that said we are going to reduce the value of this medallion to zero. Are we obliged to compensate?

Russ Roberts:

Who is ‘we’?

Mike Munger:

The state. Just like we would if we were taking your land under eminent domain to build a road.”

Tags: ,

Technological positivist Marc Andreessen was Russ Roberts’ guest on a really good installment of the EconTalk podcast. The Netscape founder and venture capitalist sees the world as moving in the right direction in the macro, perhaps giving short shrift to those sinking in the short-term and mid-term turmoil that attends transformation. Notes on myriad discussion topics.

  • Google. Andreessen details how one of the most powerful companies on Earth had plenty of luck on its way to market dominance and its position as a latter-day Bell Labs. The search giant could have collapsed early on or been purchased, with Larry Page and Sergey Brin winding up as, say, Yahoo! middle managers. (“A fate worse than death,” as the host cleverly sums it up.) The guest recalls a fellow venture capital player calling the chief Google guys the “two most arrogant founders” he’d ever met.
  • Jobs lost to automation. The guest believes that with the delivery of smartphones into the hands of (eventually) seven billion people, we’re at the tipping point of an economic boom and great job creation. He doesn’t qualify his remarks by saying that we’re in for rough times in the short run with jobs because of robotics. Andreessen also doesn’t address the possibility that we could have both an economic boom and a jobs shortfall.
  • Bitcoin. He’s over the moon for the crypto-currency, saying it’s as revolutionary as the personal computer or the Internet. That seems like way too much hyperbole.
  • MOOCs. Andreessen points out that good universities will never be able to expand to meet a growing global population, so online courses will be essential if we’re to avoid a disastrous educational collapse.
  • Political upheavals. The one cloud the Netscape founder sees on the horizon is a barrage of political upheavals that will destabilize sections of the globe at times.
  • Journalism. Andreessen is sanguine about the future of journalism, believing that companies will adjust to post-monopolistic competition. He points out formerly profitable things about newspapers (classified ads, sports scores, movie times, etc.) that have been cannibalized by the Internet without guessing what will replace them for those faltering companies. If his argument were that nothing need replace them, that these erstwhile powerful news corporations are no longer necessary since news distribution is now diffuse, I think that would be a stronger argument than suggesting that all but a few such companies are salvageable.•

 

Tags: , , ,

The EconTalk podcast episode that Russ Roberts did with David Epstein, author of The Sports Gene, which I encouraged you to listen to last year, wound up tied for best show of 2013 in a listener vote. If you missed it and want to catch up, go here.

In the latest program, Roberts interviews Moises Velasquez-Manoff, author of An Epidemic of Absence, which examines whether what’s purported to be a sharp spike in autoimmune diseases and allergies in America has been caused by our fervent efforts to cleanse ourselves of parasites and worms. The Food and Drug Administration is considering treatments in which these organisms would be purposely introduced into patients. The host and guest discuss an underground scene that isn’t waiting for FDA approval, in which medicalized hookworms and such are being injected into the sick who wish to gamble on this counter-intuitive medicine.

As a layman, it’s difficult to process any of this without thinking about the recent furor about immunizations in which junk science convinced some citizens that inoculations caused autism. And even more recently, the supposed advantage of breast feeding over bottle feeding, which has since been largely debunked, changed actual childcare policy in New York City. You have to wonder how much the increase in allergies and autoimmune diseases is the result of better statistical information about the incidences of these illnesses. And even if the rise is legitimate, there obviously could be a multitude of causes.

Listen to the podcast here. An excerpt about the so-called “worm therapy” underground:

Russ Roberts:

So, let’s talk about the hookworm underground and how it got started. Tell us what it is, this phenomenon of people injecting themselves deliberately with various types of parasites and why did anyone start to think that was a good idea?

Moises Velasquez-Manoff:

Yes. Well, back up. So, in the 1990s, people started thinking about some of the parasite questions I’ve been talking about. Mostly because they understood the immunology. And they understood that parasites suppress the immune system. And they began–and they noticed also some populations that were parasitized, these diseases were far less prevalent. So they began to think: Well, how about we deliberately introduce parasites as a way to cure some of these diseases? It’s an outrageous idea. But then a gastroenterologist named Joel Weinstock, who is now at Tufts U., developed a parasite, and medicalized it so it was in theory safe. The parasite is native to pigs. And the reason he chose this parasite is it cannot reproduce sexually in humans. So that you give it to the person and no one else gets it. That’s the idea. The context, the historical context, is: we spent lots of money in this country getting rid of parasites. The last thing you want to do is reintroduce them to the population, right?

Russ Roberts:

And you talk about how, when people would suggest these transmission mechanisms for allergies and autoimmune problems, the outrage that many in the medical profession, in the fields of science had to the idea that there was something beneficial about this scourge that we had eliminated.

Moises Velasquez-Manoff:

Yeah!

Russ Roberts:

It’s hard to–it’s difficult to accept. It’s emotionally unpleasant. But intellectually, it’s deeply disturbing. It’s like being told: Oh, we always were told to wash our hands, that that’s good for you. And doctors really should wash their hands. But it turns out maybe, sometimes, dirty hands are good for you. That’s horrifying.

Moises Velasquez-Manoff:

Right.

Russ Roberts:

As you say, it’s outrageous. So, what happened with this pig worm?

Moises Velasquez-Manoff:

So, he developed it–this is actually in testing right now for FDA (Food and Drug Administration) approval; and I should point out that some of the results–the early results were amazing. They were so impressive. It was like 3 dozen people and a 75% remission rate for Crohn’s Disease. It was unbelievable. And now it’s in testing. And some of the results have been very lackluster, so far. So we don’t really know if it works yet. But in any case, a bunch of underground people are reading this science. I mean, this is published in reputable journals. It makes sense to a certain kind of mindset that’s kind of ecologically and holistically oriented.

Russ Roberts:

And if you have a chronic disease, you’d love to try something different, if whatever you’ve been trying isn’t working reasonably. Right?

Moises Velasquez-Manoff:

Absolutely. I mean, I think actually at some point it’s a rational–it’s a very rational choice.”

Tags: , , ,

The guest on a very good episode of Russ Roberts’ EconTalk this week was Cornell economist Robert Frank. One highlight late in the show was a debate about smoking bans. The host, a non-smoker, argued against them, while the guest, who began smoking as a teenager, spoke for them. I go along with smoking bans not because it makes me deeply sad whenever I see someone with a cigarette (though it does), but because employees in, say, bars shouldn’t be exposed to secondhand smoke. And while they have the freedom to not work in such an environment, that right is limited by opportunity. I’ll pay more to supplement health insurance for smokers, but I don’t want my health or anyone else’s to be compromised by a smoker’s behavior. That’s why I’m not in favor of a ban on large sodas. While people who down gigantic sugary drinks are harming themselves and costing us more in healthcare, you’re not going to catch diabetes from them. Education is the best way to reverse that problem.

The other highlight, though that’s admittedly an odd word choice given the dire subject, was Frank’s chillingly straightforward description of climate models in response to a question about a carbon tax. The whole planet is essentially a chain smoker. An excerpt:

“If you read the climate science literature, though, I think there’s less ambiguity here than many believe. The science is inexact; that’s the first thing that the climate scientists themselves will stress. They have no idea really where this is going exactly. What we know, though, is that every estimate that’s come in has been dramatically more pessimistic than the one from a year ago. And the best simulation model that we have, the MIT Global Climate Simulation Model, in a recent set of simulations estimated that by the end of this century, by 2095, not even quite the end of the century, there is a 1 in 10 chance that we are going to see an increase of average global temperature by more than 12 degrees Fahrenheit. And if that happened, 1 in 10, the model is uncertain, so it could be 1 in 5, it could be 1 in 3, it could be 1 in 20–we don’t know–but let’s take their estimate at face value, 1 in 10, then we get 12 degrees increase. All the permafrost melts; all the methane, the billions of tons of methane are released into the atmosphere, each ton 50 times more powerful than CO2 as a greenhouse gas. That’s essentially the end of life as we know it on the planet.”

Tags: ,

Vanity Fair journalist Nina Munk is this week’s guest on a very good EconTalk podcast with Russ Roberts. Munk wrote a 2007 article, “Jeffrey Sachs’ $200 Billion Dream,” which looked at the passion and plans of the End of Poverty author. She then decided to follow Sachs’ work in a long-term way, and things got complicated.

If Munk didn’t exactly come to praise the economist, she didn’t think she would end up burying him–but that’s pretty much what happened. Her resulting book on the topic, The Idealist, is a story of good intentions run aground as it pertains to the Sachsian method of sustainable development in impoverished African communities. Munk acknowledges the Millennium Villages Project isn’t an abject failure as a charity, but believes it isn’t a success in its stated aspiration to find a poverty-fighting formula. Munk doesn’t seem to be attempting to demonize anyone (although she does accuse Sachs of “emotional blackmail”) but is trying to make sense of the naivete and folly and mistakes.

I like Roberts, though I find self-serving his suggestion that idealists who try and fail are crueler than Libertarians who oppose activism. 

Listen here and read a Vanity Fair Q&A about the book here. See some excerpts from Sachs’ recent Ask Me Anything at Reddit.

Tags: , ,

If you’re fascinated by all things bees, including Colony Collapse Disorder, Russ Roberts conducted a recent interview on EconTalk with Wally Thurman on the subject. Many questions are answered, though I’m still not sure how much I should be worried about the great bee die-off interrupting the food supply in the U.S., where wild bees aren’t a factor. A Guardian article by Damian Carrington states it’s a paramount concern in the UK. The opening:

“The UK faces a food security catastrophe because of its very low numbers of honeybee colonies, which provide an essential service in pollinating many crops, scientists warned on Wednesday.

New research reveals that honeybees provide just a quarter of the pollination needed in the UK, the second lowest level among 41 European countries. Furthermore, the controversial rise of biofuels in Europe is driving up the need for pollination five times faster than the rise in honeybee numbers. The research suggests an increasing reliance on wild pollinators, such as bumblebees and hoverflies, whose diversity is in decline.

‘We face a catastrophe in future years unless we act now,’ said Professor Simon Potts, at the University of Reading, who led the research.”

Tags: , ,

There’s a very good EconTalk episode this week with host Russ Roberts being joined by Northwestern economist Joel Mokyr. The guest is an optimist about the transformative powers of technology, and two areas of the conversation particularly interested me: 1) Economic production and growth may be slowing down by most measures because those measures are inefficient and outdated at gauging the value of recent tech advancements, and 2) We are reaching an epoch in which the “death of distance” is becoming a reality because of connectivity and we may be returning to a pre-Industrial Revolution, home-centered society.

On the first count, Mokyr comments that those who decry that plane travel hasn’t speeded up in decades as a sign that we’ve stagnated technologically are giving short shrift to airline passengers being able to use laptops, tablets, smartphones and wi-fi to do work during their trips.

From a Mokyr essay at PBS.org: “Yet today, once again, we hear concerns that innovation has peaked. Some claim that ‘the low-hanging fruits have all been picked.’ The big inventions that made daily life so much more comfortable — air conditioning, running cold and hot water, antibiotics, ready-made food, the washing machine — have all been made and cannot be matched, so the thinking goes.

Entrepreneur Peter Thiel’s widely quoted line ‘we wanted flying cars, instead we got 140 characters’ reflects a sense of disappointment. Others feel that the regulatory state reflects a change in culture: we are too afraid to take chances; we have become complacent, lazy and conservative.

Still others, on the contrary, want to stop technology from going much further because they worry that it will render people redundant, as more and more work is done by machines that can see, hear, read and (in their own fashion) think. What we gained as consumers, viewers, patients and citizens, they fear, we may be about to lose as workers. Technology, while it may have saved the world in the past century, has done what it was supposed to do. Now we need to focus on other things, they say.

This view is wrong and dangerous. Technology has not finished its work; it has barely started.”

Tags: ,

The above quote, not a fact obviously but an educated guess, was made by Princeton economist Angus Deaton during this week’s excellent EconTalk podcast. Host Russ Roberts and his guest talk about the topics covered in Deaton’s recent book, The Great Escape: Health, Wealth, and the Origins of Inequality: longevity, income disparity and the argument over whether investment in developing nations has made a real difference.

Great little facts about the hidden reasons why we live longer. Example: In the early part of the 20th century, hotels didn’t change sheets between guests, which helped bacteria to thrive. There’s also discussion about how lifespans continue to grow with a Moore’s Law steadiness despite predictions to the contrary.

What’s left unsaid is that damage to the environment or some calamity of disease or meteorite strike could halt progress in the quantity and quality of life. What are the odds of that? Are we prepared to prevent such doom?

Tags: ,

Russ Roberts of EconTalk did an interesting interview with security expert Bruce Schneier in the days between the Boston Marathon bombings and the Snowden leaks. Schneier suggested back then that the NSA might be using its Utah data center to spy on all Americans, but he couldn’t say conclusively. I’m not nearly as informed as Schneier is, but I thought it was definitely going on. And I don’t know that new legislation will ever make it go away, not with the ever-improving tools we have at our disposal. Just a couple more of the interesting topics from the podcast:

  • Google could in theory use its search capacity to try to tip an election. If it willfully returned more negative articles about one candidate over many months, it might have some influence. And it wouldn’t be illegal, any more than it is for Fox News to slant the news in favor of conservatives. It’s not mentioned on the show, but there are market forces that might prevent this from happening. Whereas Fox has a niche (if very profitable) audience, Google’s “audience” is every person, and it can’t alienate a large section of them. Still, not impossible.
  • Corporate spying on American citizens is driven by many of the same forces that led to our economic collapse. Managers within corporations may be enticed by short-term bonuses to cross lines, not worrying about the big picture of the company because of their own personal goals. Despite Mitt Romney’s claim, corporations are not people but are run by many people who have conflicting goals.

 

Tags: ,

Another very good EconTalk episode hosted by Russ Roberts is this one from early 2013 with Kevin Kelly of Wired fame. It was prompted by the writer’s article for the magazine, “Better Than Human” (a title not of his choosing nor to his liking). Most interesting to me was Kelly’s idea that this century is one of identity crisis for our species, that the things we thought we were meant to do (chess, manufacturing, etc.) have been taken from our domain, so we’ll have to figure out what our role should be, reassess what our purpose truly is. Listen here.

Tags: ,

Just as good as Russ Roberts’ EconTalk episode with David Epstein is his recent show with economist Tyler Cowen, whose new book, Average Is Over, looks at life in a more-autonomous future. The guest sees the coming years being increasingly meritocratic, though with merit having shifted from those who are great to those who are great at interfacing with machines. On that point is an exchange about freestyle chess, in which a human and computer team up to challenge another computer. Cowen points out that the best human players usually don’t fare too well in these competitions, and are often outdone by lesser players who are superior at knowing when to trust their non-human partner. Cowen guesses at future population distribution in the U.S. and how cities will change, and explains why he thinks income inequality is rising at the same time that crime rates are falling. He’s optimistic about life in 50-70 years, but believes the next few decades will be a painful mix of positives and negatives.

I doubt we’ll ever really be a meritocracy. Even if we were, the idea that a small number of us, 15% or so, will flourish and have tremendous advantages while the rest will be second-class citizens with very nice toys and tools just makes me sad. Even if it means that we’re wealthier in the aggregate, I still feel depressed about it. Beautiful cities where poor people can’t afford to live don’t sound Utopian to me. Listen here.

Tags: ,

David Epstein, author of The Sports Gene, is the subject of an excellent EconTalk interview by Russ Roberts. Among other things, he provides solid evidence to undermine the 10,000-hour mishegas, and explains why competitive female runners seem to be getting slower and why Tibetan monks living at high altitudes don’t make for great marathoners the way Kenyan athletes do. Listen here.

 

Tags: ,