Cathy O’Neil


Considering the best-case scenario for how many days I might have left in my life and how many books I want to read—not even counting the ones yet to be published that will lengthen that list—there’s no doubt I’ll fall well short of crossing off every title. That could be viewed as a blessing: at least I’ll never run out of reading material. It’s also a curse. What if there were another way?

In her concerned Bloomberg View opinion piece “What If We Could Upload Books to Our Brains?” Cathy O’Neil pivots off a podcast discussion between Neil deGrasse Tyson and Ray Kurzweil, two guys who just won’t give it a rest. An excerpt:

Ray Kurzweil:

Computers are getting smaller and smaller. We’ll have nano-robots the size of blood cells that have computers in them. They’ll go into the brain through the capillaries and communicate with our neurons. We already know how to do that. People with Parkinson’s disease already have computer connections into their brain. My view is that we’re going to become a hybrid, partly biological, partly non-biological. However, the non-biological part is subject to what I call the Law of Accelerating Returns. It’s going to expand exponentially. The cloud is expanding exponentially. It’s getting about twice as powerful every year. Our biological thinking is relatively fixed. I mean, there’ve been a few genetic changes in the last few thousand years, but for the most part it hasn’t changed much, and it’s not going to expand because we have this fixed skull that constrains it and it actually runs on a very slow substrate that’s a million times slower than electronic circuits.

Neil deGrasse Tyson:

Then why invoke the brain-machine connection at that point? You’ve got the machine.

Ray Kurzweil:

Because it’s a much faster interface. Our fingers are very slow.

Neil deGrasse Tyson:

The world is going too slow for you. You want to speed it up.

Ray Kurzweil:

I mean, it is. How long does it take you to read The Brothers Karamazov? It takes months.

Neil deGrasse Tyson:

So you’re suggesting that you can get these nanobots the size of your neurosynapses and one that will be pre-loaded with War and Peace and will somehow inject it into your neurosynaptic memory banks and then you’re done, you’ve got it. Just like in the Matrix, they would load memory programs into you.

Ray Kurzweil:

We will connect into neocortical hierarchies in the cloud. Some of that could have preloaded knowledge.•

A couple of things: 1) Even if your lips move, it should not take months to read The Brothers Karamazov. 2) Feel free to toss Kurzweil’s “Law of Accelerating Returns” onto a pile of e-waste, as he’s often wildly optimistic in these matters.
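For scale, take his “twice as powerful every year” premise at face value (it’s his assumption, not a measured fact); the compounding is what does all the work in his predictions. A minimal sketch of the arithmetic:

```python
# Kurzweil's premise: capacity doubles annually, i.e. grows as 2**n.
# The growth rate is his claim, not an observed one.
for years in (1, 10, 20, 30):
    print(f"{years:2d} years -> {2**years:,}x")

#  1 years -> 2x
# 10 years -> 1,024x
# 20 years -> 1,048,576x
# 30 years -> 1,073,741,824x
```

Stretch the doubling period even slightly and those milestones slide out by decades, which is one reason his timelines keep slipping.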

If his track record holds, these technologies are much further off than he allows, meaning we likely won’t be the ones making decisions about this brave new world, if humans get to make them at all. I assume Kurzweil means that the result of volumes being uploaded into our descendants’ wetware would be different from what happens when they’re fed into a computer: that these future people wouldn’t just absorb the information as data but would be capable of analysis and criticism, as if they’d actually sat and read the books.

It would be akin to swallowing a pill dinner instead of eating food. Of course, that way of taking nourishment would cause jaws, mouths, teeth, throats and stomachs to change, likely for the worse. You’d have to think parts of our brains might similarly go slack if we were plugging them into a library, painlessly absorbing whole shelves at a time.

When futurists talk about carbon-silicon hybrids as necessary to evolve and save the species, they’re actually talking about perpetuating some form of life, more than specifically “human life.”

From O’Neil:

What if humans could upload all the great classics of literature to their brains, without having to go through the arduous process of reading? Wonderful and leveling as that may seem, it’s a prospect that I’m not sure we should readily embrace.

A while ago, I listened to an interview with futurist Ray Kurzweil on astrophysicist Neil deGrasse Tyson’s radio show StarTalk. Kurzweil described (starting at 10:30) how our brains might someday interface directly with non-biological forms of intelligence, possibly with the help of nano-bots that travel through our capillaries.
 
Given how much faster this interface would be than regular reading, he went on, we’d be able to consume novels like “The Brothers Karamazov” in moments, rather than the current rather clumsy form of ingestion known as reading, which, he said, “could take months.”
 
At this point Tyson interjected: Are you saying we could just upload War and Peace? Yes, Kurzweil answered: “We will connect to neocortical hierarchies in cloud with pre-loaded knowledge.”

This snippet of conversation has baffled and fascinated me ever since.•


A big problem with data analysis is that when it goes really deep, it’s not so easy to know why it’s working, if it’s working. Algorithms can be skewed, consciously or not, to favor some and keep us in separate silos, and the findings of artificial neural networks can be mysterious even to machine-learning professionals. That’s a huge issue, since we already base so much on silicon crunching numbers and are set to bet the foundations of our society on these operations. Another problem: making neural nets more transparent may come at the cost of their efficacy. Two pieces on the topic follow.


The opening of Aaron M. Bornstein’s Nautilus essay “Is Artificial Intelligence Permanently Inscrutable?”:

Dmitry Malioutov can’t say much about what he built.

As a research scientist at IBM, Malioutov spends part of his time building machine learning systems that solve difficult problems faced by IBM’s corporate clients. One such program was meant for a large insurance corporation. It was a challenging assignment, requiring a sophisticated algorithm. When it came time to describe the results to his client, though, there was a wrinkle. “We couldn’t explain the model to them because they didn’t have the training in machine learning.”

In fact, it may not have helped even if they were machine learning experts. That’s because the model was an artificial neural network, a program that takes in a given type of data—in this case, the insurance company’s customer records—and finds patterns in them. These networks have been in practical use for over half a century, but lately they’ve seen a resurgence, powering breakthroughs in everything from speech recognition and language translation to Go-playing robots and self-driving cars.

As exciting as their performance gains have been, though, there’s a troubling fact about modern neural networks: Nobody knows quite how they work. And that means no one can predict when they might fail.•
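To make that explanation gap concrete, here is a minimal sketch (scikit-learn with invented synthetic data, not anything from Malioutov’s actual project) contrasting a model whose logic can be printed and read with a neural network whose “logic” is nothing but arrays of weights:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for "customer records": two numeric features, binary label.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# A transparent model: the fitted tree dumps out as readable if/then rules.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["feature_1", "feature_2"]))

# An opaque model: all there is to inspect are the weight matrices.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)
print([w.shape for w in net.coefs_])  # [(2, 16), (16, 1)] -- numbers, not reasons
```

The tree sacrifices some accuracy for rules a client can audit; the network does the reverse, which is the trade-off both excerpts circle around.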


From Rana Foroohar’s Time article about mathematician and author Cathy O’Neil:

O’Neil sees plenty of parallels between the usage of Big Data today and the predatory lending practices of the subprime crisis. In both cases, the effects are hard to track, even for insiders. Like the dark financial arts employed in the run up to the 2008 financial crisis, the Big Data algorithms that sort us into piles of “worthy” and “unworthy” are mostly opaque and unregulated, not to mention generated (and used) by large multinational firms with huge lobbying power to keep it that way. “The discriminatory and even predatory way in which algorithms are being used in everything from our school system to the criminal justice system is really a silent financial crisis,” says O’Neil.

The effects are just as pernicious. Using her deep technical understanding of modeling, she shows how the algorithms used to, say, rank teacher performance are based on exactly the sort of shallow and volatile type of data sets that informed those faulty mortgage models in the run up to 2008. Her work makes particularly disturbing points about how being on the wrong side of an algorithmic decision can snowball in incredibly destructive ways—a young black man, for example, who lives in an area targeted by crime fighting algorithms that add more police to his neighborhood because of higher violent crime rates will necessarily be more likely to be targeted for any petty violation, which adds to a digital profile that could subsequently limit his credit, his job prospects, and so on. Yet neighborhoods more likely to commit white collar crime aren’t targeted in this way.

In higher education, the use of algorithmic models that rank colleges has led to an educational arms race where schools offer more and more merit- rather than need-based aid to students who’ll make their numbers (thus rankings) look better. At the same time, for-profit universities can troll for data on economically or socially vulnerable would-be students and find their “pain points,” as a recruiting manual for one for-profit university, Vatterott, describes it, in any number of online questionnaires or surveys they may have unwittingly filled out. The schools can then use this info to funnel ads to welfare mothers, recently divorced and out-of-work people, those who’ve been incarcerated or even those who’ve suffered injury or a death in the family.

Indeed, O’Neil writes that WMDs [Weapons of Math Destruction] punish the poor especially, since “they are engineered to evaluate large numbers of people. They specialize in bulk. They are cheap. That’s part of their appeal.” Whereas the poor engage more with faceless educators and employers, “the wealthy, by contrast, often benefit from personal input. A white-shoe law firm or an exclusive prep school will lean far more on recommendations and face-to-face interviews than a fast-food chain or a cash-strapped urban school district. The privileged… are processed more by people, the masses by machines.”•
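The snowballing O’Neil describes is a feedback loop, and its dynamics are easy to demonstrate. A toy simulation (plain Python, invented numbers; a sketch of the mechanism, not her model): two neighborhoods with identical offense rates, where the department “follows the data” by shifting patrols toward whichever area recorded more arrests:

```python
# Two neighborhoods with the SAME underlying rate of petty violations.
true_rate = 0.05               # identical actual offense rate in both areas
patrols = {"A": 60, "B": 40}   # A starts with slightly more policing

for year in range(1, 5):
    # Recorded arrests scale with patrols, not with residents' behavior.
    arrests = {area: n * true_rate for area, n in patrols.items()}
    hot = max(arrests, key=arrests.get)   # the apparent "hot spot"
    cold = "B" if hot == "A" else "A"
    shift = min(10, patrols[cold])        # move 10 patrols toward the hot spot
    patrols[hot] += shift
    patrols[cold] -= shift
    print(f"year {year}: patrols={patrols}")

# Drifts from 60/40 to 100/0 in four years: the initial skew feeds itself,
# even though the two neighborhoods behave identically. The records reflect
# the policing, not the crime -- and each extra arrest then follows residents
# into credit checks and job screening.
```

The loop runs silently, which is what makes O’Neil’s comparison to the subprime crisis apt: the effects are hard to track, even for insiders.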
