We are really excited tonight to
engage you in a conversation about the critical need to
preserve our humanity. And how we might be able to do that, given that we are careening at
light speed into the digital age. We’re thrilled to be partnering
again with the Institute for Cross-Disciplinary Engagement, or ICE,
the good ICE, at Dartmouth College. This is part of their series of very
rich dialogues between a scientist and a humanist, organized by Marcelo Gleiser,
the Director of ICE, and Amy Flockton, the Assistant Director. Marcelo Gleiser is a theoretical
physicist and a cosmologist and a leading proponent of the belief
that science, philosophy, and spirituality are complementary
expressions of humanity’s need to embrace mystery and the unknown. He and his work received
a huge honor last week, when it was announced that Marcelo was
the winner of the 2019 Templeton Prize, which is an award honoring
an individual who has made an exceptional contribution to
affirming life’s spiritual dimension, whether through insight,
discovery, or practical works. Since it was established in 1972,
by the Templeton Foundation in the UK, the award has recognized Nobel Peace Prize
laureates, dissident intellectuals, and spiritual luminaries like
the Dalai Lama, Mother Teresa, and Bishop Desmond Tutu,
and now, Marcelo Gleiser. So please join me in giving
very hearty congratulations and a warm welcome to Dr. Marcelo Gleiser.>>[APPLAUSE]>>Hi, everyone, thanks for coming. I think we are up for an amazing night. We have two spectacular speakers here, and
the topic couldn’t be more timely, right? I mean, we are all immersed into this
world where digital technology is sort of serving us but also
controlling us in somewhat scary ways. And we are gonna talk
about that a lot tonight. But before we do that, I have a few sort
of more practical things to ask you. First of all,
you must have sat on a piece of paper. That piece of paper is a survey that
our funders ask us to have filled out so we have more money to do this again and
again and again. And in fact, with the Museum of Science,
I hope, right? So please take a second, it will take you 30 seconds to fill it out. And it’s super important, because they
want some kind of quantitative data of what people feel like when we do
these events, these public dialogues, as we call them. The other thing is
the rundown of the evening. So the way it’s gonna work is first,
when I’m done speaking, we’re gonna have a 90 second
video about the institute. So you’re gonna see me awkwardly
on the screen over there talking about this stuff. But still, so you understand what
we’re doing and why we’re doing it, which is the most important part. And then I’m gonna invite our speakers, they’ll each have about
15 minutes to talk. And then the three of us
will engage in hopefully a spirited conversation about
the important topics of the night. And then we’re gonna open to Q&A for
you to ask questions. And Lisa and James will be running around,
hopefully not too fast, but will be running around with
microphones so that you can speak and be heard, okay? And after a while, we’re gonna wrap it up. We can’t be done by 8:30 anymore, Lisa, because we’re late. So you have to give us a few more minutes,
but still around 8:30 or so, okay? All right, so
if you could roll the movie please. [MUSIC] The world is a complex place,
a network of flowing information and changing patterns, where forces known and unknown generate the most sublime beauty
and the most terrifying destruction. The world inspires wonder and doubt, and
we humans try to make sense of it all, creating stories, theories,
symphonies, and poems. I am Marcelo Gleiser,
Director of the Institute for Cross-Disciplinary Engagement
at Dartmouth, or ICE. On behalf of all of us at ICE and
our partners, I invite you to be a part of our institute, to be a part
of this essential conversation. What is the nature of reality? What is the future of humanity? Will machines think? Will and should we become immortal? Is there free will? Are we alone in the universe? Can science be a path
towards spirituality? ICE was created to
address these issues and establish new bridges between
different ways of knowing. Our mission is to overcome
old bigotries and facilitate a constructive dialogue between
intellectuals and the general public, creating a community of citizens concerned
with the common good, engaging experts, promoting public dialogue, and
offering open access courses. One thing is certain, the hardest
questions ask for different viewpoints, for a cross-disciplinary approach, for intellectual openness. The sciences and the humanities need
one another now more than ever. And we need them both. [MUSIC] I’ll now introduce our speakers,
so I’ll start with Jaron. So Jaron Lanier was named by Wired magazine as one of the 25 most influential people in technology
in the last 25 years, so 25 on 25. He is a real renaissance man. He is a computer scientist,
he’s a composer, he’s an artist. He’s a writer that addresses many,
many topics from high technology in business to social impact of technology,
philosophy of consciousness and information, Internet politics,
and the future of humanism. I don’t know if maybe some of you
were here when we did our past event, where we talked about
transhumanism in the spring, no, in the fall last year, I guess,
I can’t remember when it was. Jaron’s second book
Who Owns The Future is a critical and insightful perspective on big data. Who owns the data, what it all means for
our society, and the quest for a sustainable digital economy. Jaron looks at the large patterns shaping
the digital world, such as the 2008 financial crisis, the NSA surveillance,
and the implementation of HealthCare.gov. Who Owns The Future remains an
international best seller and was declared the most important book of 2013 by
Joe Nocera in the New York Times and was on the Amazon 2013
Best Books of the Year list. It has also been awarded Harvard’s
2014 Goldsmith Book Prize. The impact of Who Owns The Future was
celebrated prominently in Europe when Jaron was awarded the 2014
Peace Prize of the German Book Trade, one of the highest literary
honors of the world. And of course, he published last
year the book, Ten Arguments For Deleting Your Social
Media Accounts Right Now, which is gonna be certainly some of
the topics that he’s going to discuss. And our other speaker is Sue Halpern. She’s a scholar residence
in Middlebury College, a long time contributor to the New York
Review of Books, and now a staff writer at the New Yorker Halpern is the author
of seven books of fiction and non fiction, most recently
Summer Hours at the Robbers Library. In addition to the New York Review,
Halpern has written on science, technology and social issues for
the New York Times and Rolling Stone and many other publications. She’s the director of the Middlebury
fellowship in narrative journalism and a recipient of the Guggenheim and
Echoing Green fellowships. She has a doctorate
from Oxford University. So now please help me welcome
our speakers for tonight. [APPLAUSE]
>>[APPLAUSE]>>And we’re going to start with Jaron. [APPLAUSE]
>>Hello, hello. And you would be the audience,
who needs no introduction.>>[LAUGH]
>>Well, I want to share a thought I’ve been playing with,
I’m not sure if it’s correct. But I will share it with you, and
if it starts to feel worthwhile, it’ll go into a book. So you can help me think about it. So it starts with a story that’s
rather chilling to my blood, and it took place not far from here,
in Hartford, Connecticut. Last year, there was a contest where
bright high school students around Connecticut competed to be able to
ask questions of a few figures, including me, who were considered to
be worth talking to [LAUGH] and so they competed in small groups and I was in a theater in Hartford and
I met with the top team and they collectively decided the first question
that they wanted to ask me was this. If AI is gonna surpass us,
if we’ll have no jobs, if we won’t be needed,
why did our parents have us? Why are we here? Now, I guess, the first thing that fled through my
mind is, I love talking to teenagers. And every time I talk to
teenagers I’m prepared for all kinds of garbage coming
out of their mouths. I’m prepared for weird, under-handed, very
brilliant insults that put me in my place.>>[LAUGH]
>>I’m prepared for cynical stuff and depression. I’m prepared for massive confusion. I’m prepared for pent up rage,
I’m prepared for experiments with unfortunate
identities that will soon pass. I’m prepared for all of that,
I’ve never heard this before in decades. I think this is new and I’ll tell you what I tried to
assemble in my head to tell them, although I don’t know if I
did a good enough job in it. Here’s what I said. There’s this way we talk
about technology that has as its origins not rigorous
philosophical debate, not intuitive sensibilities,
not cultural tradition but rather raw marketing and fundraising. Now, this started very close
geographically to where we are right now. It started at MIT. I was there, very young. And my mentor was the most wonderful
mentor to me, the most brilliant and sweet man, who I bet some people in
this audience knew, named Marvin Minsky. Anybody know Marvin? So I can’t express how much I adore
Marvin and how much he meant to me. But the favorite thing that Marvin and
I did together was argue. And from when I was a teen, I would
tell him, but this AI stuff is just a ridiculous way of thinking about computers, and we’d have this great argument. And he’d always say, wow, it’s so nice to
have a kid who doesn’t just agree with me. But when it came time to go to DARPA for
funding, he’d say, okay, now it’s time to play along. [LAUGH] Now we agree. And the thing is,
you show up at the funding agency, and you say, we’re building this giant brain
that’ll surpass people, and if you’re not on board with it some other giant brain
will belong to the communists or whatever. And they’re like, my god, my god! Here! Make the giant brain.>>[LAUGH]
>>And so that is approximately the same thing that Google’s doing to
the institutional investors now. And Facebook. It’s approximately the same thing we’re all doing to make these spectacular market caps, the big tech companies. And I’ll own my part of it,
at Microsoft we do it too. And it’s an incredible story. If you’re really making god,
who’s gonna talk back to you? And if you can kind of prove it,
if you can kinda say, hey, my algorithm’s running everything. We’re running politics,
we’re running dating, we’re running your damn rides around town. That’s it. Whoever has the biggest
computer runs everything. Buy in or be left out. But does that really tell us what
the right way is to think about computers? So I’ve had this feeling for
a really long time. And I’ll tell you another story. The last time I saw Marvin when he was
frail before he passed away recently, I was walking to his house in Brookline
with another old student of his. And I was never formally a student of his; I worked for him as a researcher as a kid. But this student of his was saying,
Marvin is very frail. Don’t argue with him,
don’t do the old AI argument. And I showed up, and Marvin smiling,
he said, can we argue?>>[LAUGH]
>>[LAUGH] Like we did, it was so great, it was so great, so
let me present the position that I will take with Marvin to present this counter
perspective, and it goes like this. There are qualities we perceive in humans,
in ourselves and in others that we really have
never succeeded in defining well. These include the sense of consciousness, the sense of free will,
the sense of self-awareness. All of these things, we talk about them,
we have to make decisions about them, sometimes life and
death decisions in medicine. But do we really know
what these things are? And, if we are honest with ourselves,
we don’t. We do not have a consensus, rigorous
definition of any of these things. And therefore, to talk about any and
by the way, here, I’ll say some. I hope that that’s not
a controversial statement. There are some who would think it’s
controversial because they don’t even think we should use the terms
if we can’t define them. They’d say consciousness isn’t a thing. There is no self-awareness. This is just determinism. It’s an illusion. But then I say, what’s having the illusion?>>[LAUGH]
>>You know? Like illusion is the one thing that
isn’t reduced if it’s an illusion. Right?
It’s an exceptional object. So [LAUGH]
when we talk about anything related to these in machines, we should be careful. And even with intelligence.
of intelligence in people but we don’t really know. We have this correlation of different test
results that we call intelligence and I think it has some utility. It was created by people with good
intentions but, I mean, I’ve had students who didn’t test well as being intelligent
who then made fantastic contributions, and we all have. So we know that that’s not rigorous. We know that that’s not a final
understanding of what’s going on in a brain. So given that there’s
some wiggle room here, is there any way to decide
how to think about computers? Or should we just think of them as on
the way to surpass us, as some kinda better version of us? Well, I think the criterion
has to be pragmatic. If we think about computers
in a certain way and it makes our engineering
sloppier, it makes outcomes worse. Maybe then that’s not the right
way to think about computers. And so what I’d argue is that the AI
way of thinking about computers is pragmatically bad, because it makes for
lesser engineering and poor outcomes. Now, that’s not a popular thing to say,
cuz all the money’s flying to AI. If you wanna encourage your kid on
a major, hey, get your PhD in AI. Your ticket’s written at that point. It is the popular paradigm right now. I’ll try to approach why I think
this is wrong in two quick ways and then I’ll be done. No, I’m gonna play some
music too on that [LAUGH]. The first way,
I’ll revisit the Turing test. You remember the Turing test? So in serious debating circles about consciousness, the Turing test has long been considered sloppy and obsolete. But I’ll use it anyway, cuz I think the whole thing is sloppy and obsolete. In the Turing test you have first the man and the woman trying to fool a judge with slips of paper. Then you replace one of them with the computer, and the idea is that if the judge can’t distinguish them, then the computer has been elevated in some way. It’s achieved some sorta human equivalency, and what other test could there be? And I should say, if you read Turing’s original remarks,
they’re not actually quite like that. They’re very interesting and spiritual and have a very different quality
than the usual telling. But I’m just telling the canonical
version that we all know. So here’s what I’d observe about that. Since we’re just asking whether
the judge can discriminate, there’s another possibility,
which is that maybe the judge lowered his or
her discrimination to the point that the judge looked stupid enough
to be unable to tell them apart. So the judge could have become stupid, or the other person behind the curtain
could have become stupid so as to make themselves
indistinguishable from the computer. So I wanna point out that there’s a two
out of three chance that it’s human stupidity and not machine intelligence that is being [LAUGH] detected
when the Turing test is passed. Now, [LAUGH] I think this is exactly what
happens, this is exactly what happens when an algorithm tells you who to date
and what degree to get and all this stuff, you subtly lower the context in order
to make the algorithm seem valuable. When you say the algorithm works, you’re also saying that the whole human element of psyching out your opponent never mattered, and it was only the moves. When in fact it used to be a holistic pursuit, and you’ve dumbed yourself down. Do you see it now?
like that, but this brings in an economic component, way back going into the 50s, even before I knew Marvin. There’s a classic story that he assigned some grad students to just do natural language translation. Here’s a Chomskyan model and
here’s some dictionaries. And you should just be able to
get natural language translation. Of course, that didn’t work. So instead what worked is,
many years later, in the 90s, initially scientists at IBM realized
you could do it with Big Data. And if you have a sufficiently
large corpus of examples, you could do statistical correlations. And you could get something usable. A giant mash-up of examples. We’ve gotten better and better at that. Google and
providers of it now, and the result has been this huge loss
of employment for translators. And I think that’s a very interesting
thing to look at in detail because actually a small minority of them
have done well, but overall, they’ve seen their careers decimated,
very similar to the pattern we see in recording musicians,
investigative journalists, photographers. Same story but here’s the trick. Language is alive, every single day
brings news, brings pop culture, brings social media activity and you have to replenish your example set for
the translations to work on a daily basis. So how do you do it? What you do is,
you steal without notice or permission tens of millions of phrase
examples every single night from all over the globe, from people who
don’t know it’s being done to them. From amateurs who are adding subtitles
to videos, all kinds of people. So we’re saying, hey, you’re a buggy whip, you’re obsolete,
our electronic brain has surpassed you. But we need you. We need to steal from you. We need your data. It’s dishonest. It’s cruel. It’s not sustainable. And if you extend it to everything
if everything’s gonna become AI, it becomes total economic ruin based on an absurdity, based on a lie. So anyway,
this is what I told the students. This was my answer. I was gonna say, there is a way to
reframe this, where there is no AI. AI isn’t a thing. What there is, is a future with a lot
of great big computers and networks. And you can look forward to a future of
providing incredible data through it. Helping other people through this thing,
but don’t believe in the AI. And all of a sudden, the future clarifies. So that’s what I told them,
I don’t know if it got through. All right, I’m gonna play a bit of
music too. Anybody know what this is? It’s a xun. It’s from China. It’s very ancient. This particular one is
a very piercing version. And I am going to use it as a ritual
instrument to purge the MIT, Harvard, and greater Boston
community of bad AI mythology. [MUSIC]>>[APPLAUSE]
>>Are you good?>>That’s good.
>>I am.>>We’ll talk more. Don’t worry. We’ll have plenty of time. That was awesome. Thank you.
So yeah, please.>>[APPLAUSE]>>Hi, thank you. [LAUGH] I did not bring my instrument, so you’re just gonna have to make do with me. So the last time I shared the stage with Jaron, it was appropriately the 50th anniversary of the National Endowment for the Humanities, a government program that the Trump
administration has been trying unsuccessfully to get rid of in
each of its last proposed budgets. And the last thing I said that day
was that we should stop calling our phones smart. I wasn’t being glib,
calling phones smart is being glib. It’s mindlessly ceding an essentially human attribute to an inanimate object imbued with powers created and
deployed by human ingenuity. So if the question that we’re dealing with
here today, one of the questions is, how do we preserve our humanity in the midst
of increasingly invasive technologies? The first thing to consider is our
language, adjectives matter and analogies matter. They’re not neutral, they can be
co-opted and they can co-opt us. So I’m a journalist, as you now know. And I write a lot about Facebook and Google, two companies that I am
sure all of you know a lot about. You probably have a Facebook page
unless you’ve read Jaron’s last book. But I have no doubt that you use
the Google Search engine and most of us know that these are the two biggest
advertising platforms on the planet. The two biggest advertising platforms
that have ever been invented and we think we know what that means. That they take our data, and they enable marketers to target
us with ever more specific information, things for us,
things that are more personal. And that’s true, they do that for
products and they do that for politics. And what they’re doing is reducing
the truth of who we are by what we buy or what we’re interested in,
by where we live, our zip code, by our marital status,
by who we know. All the while they’re suggesting
that this is in our best interest, because it’s personal even if it’s not. A couple of years ago I participated
in a bit of an experiment. I allowed the champions of psychographic targeting, at the Cambridge University Psychometrics Centre, to determine
who I was based on my Facebook likes. And so
they had access to my Facebook page and then at the end they were
gonna tell me who I was. And who I was,
according to the good people of Cambridge, was a gay male conservative Republican.>>[LAUGH]
>>So that’s silly, but as we saw in the 2016 election,
it’s also really dangerous. And one of the enduring benefits of that uniquely human endeavor, journalism, is that we’re now aware of this. But a contiguous danger of all
of these massive advertising operations is that we’ve come to
accept that our primary economic function in a digital world is
as suppliers of data points. At Davos one year, the conversation was
that data was going to be the new oil, which is to say
the commodification of you and me. So to answer in a very dark way
the other question raised by Marcelo, what does it mean to be human
in a technological age? It means that you and I are the grease
that lubricates this machine. And for the most part,
we’re fine with that. So fine that instead of challenging it,
some of us are buying into schemes to get paid for letting companies like
Nike have access let’s say to our Fitbits. And one rationale of this is
that they’re gonna do it anyhow, they’re gonna take the data for free. Like for instance, insurance companies
are now using it, taking it for free, and then, say, taking a picture that
you’ve posted on Instagram and using that to raise your rates. This is not just the point of, if you’re not paying for the product, you’re the product; it’s what’s implicit in that statement. It’s an acceptance of a very strange distortion of capitalism. Before, the distinction was between workers
and the owners of the means of production. But in the digital economy we’ve become
wholly owned subsidiaries of apps and platforms. We’ve essentially handed over
ownership of ourselves, and I’m really sorry Marcelo, this is a dark
answer to what I think was supposed to be an inherently optimistic question. But as you know, when we were talking back there, we were all getting very depressed, so sorry.>>[LAUGH]
>>But when Marcelo first raised the question and asked me to participate
in this conversation, it occurred to me that the operative word in that
question was meaning rather than human. We’re all human beings, right? So we’re all human so that’s not the nut
that he was asking to have cracked. But meaning is a whole other order of
magnitude, it’s really loaded. And it’s been parsed by philosophers and people who study philosophy, and religion, and a host of other intellectual and uniquely human inquiries. And in the past, really for the longest
time when we asked that question. What does it mean to be human? We defined ourselves in
opposition to animals. Not always to other animals,
but to animals as other. So we were rational, we could think,
cogito, ergo sum. We used symbolic language, we could look ahead and plan. We used and made tools, we made music. We developed systems of belief,
we had a soul. Technology has changed that dichotomy. Now we find ourselves in the position
of having to distinguish ourselves from machines. I’m not talking about the singularity, that very strange desire to achieve immortality by downloading the contents
of one’s brain onto a chip implanted into some
kind of robotic device. That vision continues to capture
the imagination of people who would like to stick around long
enough to watch the sea levels rise, the sixth extinction segue into the seventh, and witness, if not get caught up in,
whatever resource wars are in the offing. It’s always seemed futuristic,
and it remains so, even as we’ve watched advances in neural implants and
their success in helping, say, people who have spinal cord injuries operate
robotic arms just with their minds. And the development of chips designed
to supersede damaged hippocampi in people with Alzheimer’s disease. But we’ve always imagined the future
to be cleaved from the present. We’ve imagined it to be a place very
different from the one that we inhabit. When in fact, the future is constantly
being attained incrementally. Obviously, there are breakthroughs
that demarcate a then and a now. But for the most part,
we move forward inexorably. So the singularity may seem to be
far in the distance still, but the reason we’re having this conversation
today is because humans are merging with technology in ways both obvious and
not so obvious. But with enough velocity and
substance that we have to pause to ask this fundamental question
about preserving our humanity. You know this picture,
you might be in this picture, someone’s walking down the street and
they’re just looking at their phone, or you’ve got people sitting at a table and
all of them are staring at their devices. It’s such a common image now that it’s become a cliche. There was a study last year by
the accounting firm Deloitte, that found that people checked their
phones on average 52 times a day. And if you’re a millennial,
it’s around 86 times. A study from the UK found
that people were spending about 24 hours a week online,
so a full day online. But they also found a sizeable number of
people spending 40 hours or more online. I am actually telling you
nothing that you don’t know. In fact, some of you might be
looking at your phones right now or itching to look at your phones right now. And this is by design which is to say,
the designers deliberately sought to engage neurobiology when
they wrote their code. Dopamine is a very powerful
neurotransmitter, as all addicts know. Social media, in some instances, makes people more social, in that it facilitates a kind of human interaction, but to a large extent those interactions do not make
us happier or more fulfilled. There’s a phenomenon known as status
envy that has been associated with passive viewing of Facebook, and
status envy has been linked to depression. Just last month the Pew Research Center
reported that most US teenagers see anxiety and depression as a major
problem among their peers. So to recap, social media while connecting
us to each other through the mediation of our devices, can’t achieve the basic affective rewards of a physical community. But it does make us more
connected to our devices. Then there’s memory which we’ve
outsourced especially to Google. It’s not that memory is a uniquely human
activity, but scholarship, say, and the serendipitous pursuit of knowledge are. The last book I wrote, which Marcelo
mentioned, was about a library. And in the past year or so, I have
spent a lot of time in libraries and a lot of time talking about libraries. And I cannot tell you how many
times I have heard that libraries are now obsolete since we have Google. And here’s the argument which I lifted
from a blog post on TechCrunch, which by the way I read regularly, which
is why the data scientists, the brilliant people at Cambridge University assumed
I was male which should tell you a lot. Okay, so here it goes. Here’s what they said. It’s hard not to imagine a future when
the majority of libraries cease to exist, at least as we currently know them. Not only are they being rendered
obsolete in a digital world, the economics make even less sense. The Internet has replaced the importance
of libraries as a repository for knowledge. And digital distribution has replaced
the role of a library as a central hub for obtaining the containers
of such knowledge, books. And digital bits have replaced the need
to cut down trees to make paper and waste ink to create those books. And so, the writer concludes this
is evolution, not devolution. So I am now going to beg to differ but
with a caveat. It’s a noticeable human
disposition to favor the future, to look to the future with hope and typically assume that the future will
deliver us to some place better than the place we are now, and that this
is progress, and that progress is good. I’m not saying that this is 100% true and with climate change bearing down on us,
it may become less true. But even there, one might attribute
complacency to an underlying belief that there will be a fix for
this problem in the future. We are, after all an ingenious species. And so when we ask, how can we retain
our humanity as we become more and more digitized? As we conflate automation with autonomy, we have to be aware that we’re
gonna be accused of many things. Being a Luddite, for example, or anti-progressive, or a dope. Be prepared for that, but resistance
is not the same thing as rejection. And it becomes more and more important
to add some resistance, some friction to what may feel like the inevitable
slide to total digital connectivity. The 5G world that we’ve been told
is coming soon and coming fast. As you may have read, I think this weekend
even in the New York Times, the community that appears to be leading this resistance
is actually located in Silicon Valley, where parents are forbidding their
children from using phones and iPads. And they’re trying really hard to
get them into Waldorf schools, where they are compelled to spend time outdoors.
us to this point after all. But it may be a better idea to pay
attention to them and follow their lead. Unfortunately though,
it looks like the coming digital divide may not be between those who have fast
Internet and those who have no Internet. But between those who have the luxury
of disconnecting and those who have no choice but to view the world or have the
world mediated by a digital connection. Just last week,
a man was handed an iPad and told by a disembodied doctor on
the screen that he was going to die. So much for the laying on of hands. And last fall, over 100 students walked
out of their Brooklyn High School to protest their school’s reliance
on a Facebook designed and funded educational platform, that require them to sit in front of their
computers all day teaching themselves. I would venture a guess that there is no one at BBNN who sits in front of a computer all day teaching themselves.
by the students likely made more people question the assumptions
about the primacy of technology. It also highlighted a new kind of
inequality that many of us might have not seen coming. Those students seems to me,
were on to something else. They’re actions suggested that
the question being raised here is actually the answer. As a species, we will retain our humanist,
our capacity for wonder and exploration, our skepticism, our humor,
our connections to each other by demanding that technology is built from
the start taking all of that into account, or that it is not built if it can’t. This is a choice, and
we declare our humanness by our agency. Just as adjectives matter, design matters. And in the meantime, there’s poetry. Thank you.>>[APPLAUSE]
>>Thank you so much.>>[APPLAUSE]
>>Okay, so now we will talk a little bit about this. Try not to depress everybody.>>[LAUGH]
>>[LAUGH]>>I mean, we were struggling back there. So what are we gonna do because we’ve
started really thinking about this, it really becomes kind of difficult
to kind of see the bright future. So let’s, first of all, just for
the benefit of everyone, it would be good, maybe,
Jaron, if you could define the differences between soft AI,
which is what everybody’s talking about, machine learning, and
the real Marvin Minsky hard AI. Marvin, when I was a postdoc at Fermilab,
Marvin came and gave one of his talks and
did his Society of Mind talk there. And during the questions,
I was 28 or something, I said, but wait a second, if you really
believe machines are going to think, are they also going to develop psychoses,
schizophrenia, are they going to become crazy? And his answer, he didn’t even hesitate: of course. And then I said, so does that mean
you’re gonna have machine therapists? Which are gonna be other machines
taking care of those machines. And he’s like, very possibly. So anyway, the point is,
it’s important, I think, to distinguish between the intelligence
that we are seeing everywhere when you go to San Francisco,
AI, machine learning, etc., etc. And what Nick Bostrom, for example,
is talking about when he talks about superintelligence and us being wiped out
by a very intelligent computer. Or not.>>How old is he? Wait, is this on?>>Yeah.
>>I’ll disagree with you.>>Good.
>>[LAUGH] Neither means a thing, they’re both points of rhetoric. They’re both aesthetics for
how to think about modernity. They’re both storytelling for how we think about the future. And they function hand in hand. So you can’t, I mean,
Bostrom’s stuff doesn’t mean anything, soft AI doesn’t mean anything. It’s just sort of one way to interpret
what you’re doing with code. You can absolutely do everything that you
call AI and think of it not as AI, and I think you’ll be a better engineer for
it.>>Right, right.
>>So I don’t think the distinction
between soft and hard AI is useful. In fact,
I think it’s a damaging distinction because it lets the AI off the hook. So there’s a lot of ways of talking about
AI that just reinforce the AI mythology. If you say, AI is dangerous,
it reinforces that AI is even a thing. If you say, well,
there’s a difference in soft AI and hard AI then it lets you call all
the really lunatic people hard AI, but then you still have the soft AI, which is
a thing but the AI is not a thing at all. Like there’s this other point of
view where none of it’s anything. It’s just a way of thinking about code, it’s sort of like, it’s just a way of
thinking, it’s like saying I’m a romantic. I’m an AI believer, it’s a romantic
attitude towards computers essentially, it’s not absolutely false, but
I just think it’s pragmatically damaging. So, I just don’t think any form
of AI is even a thing at all.>>Okay.>>All right,
I do think the algorithms function. I’ve sold a machine
vision company to Google. I’ve played the game,
it’s great, it’s lucrative. Love the algorithms,
really interested in them. Actually, if you just think of it as math,
it’s gotten really interesting lately. I think it’s great, but
the AI mythology is totally rejectable. But the change is happening. I mean the market forces
push it very hard.>>But the market force is
always based on human belief. It’s what people think of, the price of something is set
by the human perception of it. So, it’s not actually, AI is not a thing. It’s a cultural change that people
raising money through the mythology of AI have convinced everybody else to buy into. That is happening.>>Okay.>>And algorithms are getting interesting
and computers are getting faster and all that, that’s all real, that’s true.>>Yeah.>>But, you know it’s a little
bit like a gestalt thing, you can switch foreground and background, there’s this transformation
you can experience in your head where you see all the same stuff you see all the
same results you read all the same papers, look at all the same math, look at all the
same things but there’s just no AI there.>>Right.
>>And when you can experience that reversal, it’s like so good all of a sudden
>>[LAUGH]>>computer science comes back as an actual rigorous field that’s
an actual science again. It’s no longer an alchemical quest for
the divine.>>Good.>>It’s like this other,
you get rid of all the crap and you can actually be empirical again. It’s a wonderful release to
actually see this clearly. And all you have to do is not
believe in AI as a thing.>>Good.>>[LAUGH]
>>I’m good with that.>>[LAUGH]
>>It’s good. Because when you talk,
you mention consciousness and subjectivity as things that we
don’t even know how to define. Which you probably meant, or not, that you really can’t put that
stuff in an algorithm right now.>>No, no, I’m saying something
a little different than that. I’m saying that we don’t know if we can. It’s really important. To really dig into this, you have to,
if you want to be formal and rigorous, you have to take a skeptical
position about all of this. You can’t say, I believe that
consciousness is a special thing. Because you can’t defend that either,
that’s why I’m one of the, the right way to criticize
the mythology of AI or the sense that AI is even anything at all,
is on a pragmatic basis and to recognize that there’s really no,
it’s, I mean because it is a religion. It’s like it’s like going to somebody
who deeply adheres to Zoroastrianism or whatever it is and saying,
what a bunch of nonsense. Like you can’t. That’s ridiculous. If somebody really believes
that machines are coming alive, I’m not going to criticize their religion. What I am going to do is I’m gonna say, maybe religion isn’t the way we should be
deciding who gets healthcare, or whatever. I mean maybe that is not
the right system to apply there. And that’s totally reasonable. And every functioning society depends
on being able to draw those lines.>>Yeah, it’s the rapture of the nerds.>>[LAUGH]
>>The rapture of the, was that mine? Somebody wrote that.
>>Yeah, it was Mark O’Connell said that in his book To Be A Machine.>>Okay.>>The whole transcendence,
trans-humanism, like the rapture of the nerds.>>But the thing is, you can’t,
there’s a danger in saying the transhumanists or whatever, those are the crazy ones, but the normal AI people at respectable places,
even they’re also crazy. There’s absolutely no
reason to believe in AI. And I’m talking about my direct
colleague who I adore, I mean but I just think this is a crazy belief. This is one of these thought structures
like believing in the deep state or something like that there are these things
people can come to believe in that aren’t really necessary beliefs.>>They’re a way of organizing
the world that’s really optional. And if it’s not helping, get rid of it.>>So that’s optimistic.>>[LAUGH]
>>I mean, really? Right?>>You know, it’s funny. I ran out of time because
I tend to go on too long. So I didn’t actually get to
the thing where I told you I had a new idea, so maybe somebody can ask me what my new
idea actually is and I’ll tell you.>>Yes, yeah.>>[LAUGH]
>>So meanwhile, right, so I mean, how do you see, you talked about
resistance, right? So how do you see that
happening in practice? Now I have one opinion about this, but I would love to hear how do you
see this actually gaining force? Apart from the movement that is coming
in from inside the companies. What about people just like us?>>Yeah, so, we were talking about this a
little bit back there before we got really depressed, so
it seems to me there are three places where there’s some
resistance in the system. And the first one was the one we
were talking about most directly, which is that there’s a resistance coming
from inside the tech companies themselves. There’s a group called,
it’s like the Tech Workers Alliance or something like that,
I can’t remember what the name is. Which is very important, obviously,
because they are actually doing the work. And they’re putting up some resistance
to partly the market forces, and the PR forces that are pushing
them in a particular direction. We have government regulations,
not very much in this country, but there has been a little bit of that,
some talk of that. There is at the moment the possibility that the FTC will have a little more kind of power to regulate, to levy fines. There’s a big fine, potentially billions of dollars that’s
going to be in the offing for Facebook, for having gone and
not abided by a consent decree to not steal people’s data. And then the third one is
having this conversation, is just the consciousness
that people have. And then sort of essentially
voting with their fingers, i.e., stopping, don’t use that product,
don’t do this, don’t do that. And as people stop doing those things, they stop having the power
that they have right now. So I think it’s coming from
these different directions and I think the fact that it’s a conversation
at all is a form of resistance.>>Right.>>[LAUGH]
>>No, that’s exactly right, do you want to ask
questions to one another? Do you have?>>I have a question for you.>>Yeah.>>When you say that AI is
kind of like a conception, and it’s a rhetorical device,
and it doesn’t exist. I think I understand what you’re saying, my guess is that not everyone knows
exactly what you mean by that.>>I think you’re right about that.>>Yeah.>>It’s very hard to get the point across.>>Right, so can you be more specific and
sort of explain? Because it seems to me that
one of the things that we’re all inundated with is
the rhetorical kind of marketing. So that’s one thing, but then we’re looking at self
driving cars and robotic surgery. Or whatever the things are,
that we’re being told that the reason why we can have these things now
is because now we have better AI. So, why do we have these things now or why are we getting these things now and
we didn’t have them before?>>Well, so fly-by-wire technology in aviation has existed for a long time. As soon as it got called AI, we decided
we didn’t have to train the pilots to use it anymore, because it was AI.>>[LAUGH] And then the plane crashed.>>And then the planes crashed, and now
this whole series of planes is grounded. And one of America’s important
industrial companies is in deep trouble. So that’s AI, that’s what it is.>>Okay, [LAUGH].>>[LAUGH]
>>But people buy it like crazy.>>Yeah, no, so the thing is,
so I’m mostly a musician. And one of the things,
if you play in a jazz club and somebody else is playing during their set, you always clap from
behind to try to create the emotional contagion to get the audience to clap,
it’s your duty. It’s like you’re promoting the idea
that being in a jazz club is cool. That listening to music is cool, and
that you’re an enthusiastic audience, and what’s on stage is great. And you all contribute to that for
each other. And we all do that in the tech world, and it’s this constant bombardment every
single day with stories about AI. Google’s little search page
animation was like, we’ll be Bach.>>Yes.
>>We’ll simulate Bach, why would they do that? Because it’s reinforcing Google’s stock
value, it’s retelling the mythology and pounding you on the head with it. In every single story, every single day,
there’s this story again, and again, and again, and it’s a constant. And people in the industry know they have
to keep on doing it, it is our value.>>So one of the things that was really,
did all of you notice this kind of very interesting Google
doodle a couple of days ago? Where there was a treble staff,
and you could write a melody? [COUGH] And then there was a kind
of magic thing that was going on. And it said, we’re going to look at, I don’t know,
it was like 306 pieces by Bach. And we’re going to harmonize your
melody to sound like what Bach would sound like if Bach
had written your melody. I don’t know if you guys saw it,
it was fun, it was actually really, really fun to do. But when I was doing it,
I was thinking a couple of things. One of the things I was thinking was,
if you’re Bach, you’ve invented this,
you’ve figured this out. Even if the algorithms that
Bach had in his head came from listening to lots of other music
prior to Bach, it was Bach’s. And the thing that was so disturbing about
this thing that I suddenly was Bach, was that I had nothing to do with it. It had nothing to do with creativity. And one of the problems
with machine learning, or with AI, or
with this kind of way of proceeding is that it doesn’t allow for the
Stockhausens or the Jackson Pollocks. It doesn’t allow for a kind of paradigm
break with what’s happened before. Everything that is going to happen has
already happened in some fashion already. And it was chilling at the end,
I was like, wow, I’m Bach, and wow, I’m actually not Bach, [LAUGH].>>[LAUGH]
>>Right, I think that’s the breaking point, right? To me, I see,
whenever a machine can become a new Bach, that creates a completely
different array of composing. That is actually resonating with
human emotions in very deep ways->>No, no, no, my friend->>No?>>[LAUGH]
>>It’s already happening?>>No, it’s not going to happen.>>There’s no Bach achievement meter,
you can’t go->>No, of course, but that’s a metaphor.>>You can only do Turing test like
things, it’s all in your discrimination. So what will have happened at that point
is your perception of Bach will have become degraded enough that
it will seem true to you. It’s very important to realize that
that’s an equally valid interpretation. I’m not saying one is absolutely true and
the other is absolutely false, I’m saying that there’s no truth
value to what you just said. It’s a cultural preference
that you’re expressing, and you have to get to the point of seeing
that, that it’s not a real thing.>>No, I get it, no,
I’m not talking about the machine, a machine copying Bach to perfection,
I don’t really care about that. I think that’s actually horrible, I’m
talking about the new, amazing composer.>>That’s what I’m talking about, too.>>You think that’s going to happen?>>No, no, no, no.>>No, it can’t happen.>>No, no, no,
what you’re saying is an absurd statement, you’re saying something that
can have no truth value.>>Okay.>>You’re expressing a preference and
treating it as something that’s a rigorously evaluatable claim, and
it isn’t, and I need you to see that.>>How do you read those-
>>And I know it’s hard, because you’re bombarded every single day
with rhetoric that indicates that that’s a meaningful thing to say. But it isn’t, see, the thing is,
art is subjective, right? And so radicalness, interest, all these
things are things that you perceive. And so if you’ve been hypnotized
to perceive that a program is this new radical artist, it’s an entirely possible thing for you to
perceive there is no absolute truth to it. Art has to be something that’s
experimental and intuitive, and it can’t be proven, and there isn’t any absolute error,
and you have to believe in your own interiority and accept that,
ultimately, I guess, a little bit, if you’re gonna perceive art at all, you have to
perceive it from kind of a mystical place. And if you’re willing to perceive that in
an algorithm, it basically is the same thing as writing a check to Google, which
becomes an economic ripoff. I don’t know how to get this across, it seems so
obvious. Let me get it across, because everybody
else is hypnotized, you’re all crazy.>>[LAUGH]
>>Actually I’m saying exactly the same thing you are. Maybe you are hypnotized by your rhetoric because-
>>[LAUGH]>>I’m saying exactly the same thing, because I’m saying that, yes,
art speaks to us in a very intuitive way, as you like to say, which I love,
but there is Bach, Beethoven and Brahms and they are different from
Salieri and other minor composers. There is something about them that
speaks highly to what they created. And so what I’m saying is that
I would like to see a machine become the next B in this
sequence of three guys. And if you’re saying that finding that
machine is going to mean degrading my sense of what Bach,
Beethoven, and Brahms are, I think you’re mistaken, because I think
that’s degrading the human spirit.>>Well see-
>>I adore Bach and Beethoven, and I’ve played them furiously. And Marvin Minsky used to improvise
in Bach’s style beautifully, and in other styles, and all that. The thing is, though,
in a way AI is inheriting a certain kind of a thing that happened
that was also a mistake. So, look as you know you have to
raise money for your organization and musicians are all about the hustle. There’s always like, how do you raise
money and so going back to Bach and Beethoven, they had to impress
some flatter patrons and there was this whole economy that
was built around creating a kind of a perception of a supernatural
status of the great composers which funded the whole enterprise and
we’re all in on it and that’s great. But the truth is Salieri wasn’t all that bad. He’s kind of interesting.>>[LAUGH]
>>He was very famous, yeah.>>Absolutely.>>Actually, there’s so many lesser known
composers in history that are astonishing and there’s so much music out there
in the world that’s so incredible and there’s nothing wrong with
worshiping Beethoven or Bach. I mean it’s easier because they’re
dead, but I’m just saying that we have a tendency, in our aesthetics, to
create this sort of superhuman status in artists, and that was an economic ploy, and
in a sense, it was sort of like an early prototype of what Google and
Facebook do today with machines. And as you know, so
I do think the situation is a little confusing but I-
>>Let me, can I try a totally
different attack on this?>>Yeah, yeah.
>>So I do this sometimes with undergraduates
who I have the same argument with. And so they’ll say, well, somebody will
say well if a machine can start to write music, that’s great, cuz then as
music consumers we have better music, we don’t have to pay the damn musicians.>>[LAUGH]
>>And all this, and so I said well okay, let me apply
this to a different area of life. What if Google and Facebook told you hey,
we have these AI’s out here and these AI’s are having better sex with
each other than humans ever could. Would you say great, then they can
have the good sex we don’t need it.>>That’s impossible.>>No, wait, wait, wait.>>[LAUGH]
>>Something just went right here.>>Whoever is judging that doesn’t
have any idea about what sex is.>>[LAUGH]
>>Yeah. Well, substitute sex for music.>>Right.>>Seriously, I mean, like as soon
as you take this consumer attitude about music then AI can
be a great musician. But there’s a whole attitudinal
frame that you have to adopt.>>Exactly, good, that’s it,
that’s the point.>>That’s the convergence point. That’s precisely right.>>[LAUGH]
>>Stop.>>[LAUGH]
>>Wait, I can’t keep talking.>>So that restores the faith
in humanity right there, right?>>[LAUGH]
>>I was trying very hard to kind of get there, so it works.>>[LAUGH]
>>We hope. So there is something about
us that’s special, right?>>Well.>>So, you should say something.>>Yeah, yeah, please,
you said well, come on, bring it on.>>What do you mean [LAUGH] what
do you mean like we’re special?>>I mean we have-
>>We’re different, we’re different.>>Yeah, very, deeply different, right?>>Yeah, and we’re not machines.>>Yeah. So, yeah.
>>But we run the danger of being turned
into machines by hypnosis.>>By AI that doesn’t exist.>>No, by market forces that are hypnotizing
us to converge and use more and more machines and
push the button to buy more tennis rackets because we like tennis and
all that kind of stuff.>>But
we like the racket better than them. Now we’re just playing it virtually,
we’re not actually playing tennis.>>Yeah.
>>Now we’re just thinking about it.>>Anyways.>>One concluding remark and
we have to move on, yes.>>Hey, can I tell you the quick
idea I ran out of time for?>>Please, yes yes.
>>[LAUGH]>>I was gonna tell you this idea but I ran out of time cuz I do tend to go on,
it’s a problem. The idea is, I have been thinking
about this wave of horrible people, all over the world, and all these different
situations, which have something in common, which is the angry young man who feels
he’s not getting enough attention and just doesn’t exist until
he shoots a bunch of people or turns into a total jerk and
he gets that way online. We saw that with the shooter
in the New Zealand Mosque, we’ve seen it in Charlottesville,
we’ve seen it in so many places and all these people have
this quality in common. And if you read their rhetoric
there’s a fascinating thing. They all go on and on and on about this
thing they called replacement theory, which is this idea that white people
are being replaced by non white people. But the thing is,
if you look at the Islamic extremists, the ISIS people,
they also were talking the same way. And they have the same way
of saying we’re true Islam, we’re being replaced by these fake people. And there’s a sense of these fake
doppelgangers coming to take over. In Charlottesville, the neo-Nazis chanted,
you will not replace us. Jews will not replace us. And I was thinking about
this replacement thing. And I have a theory that one part of
what’s going on that’s a little different this time than what happened in the ’30s
and every other time in history. I think there is this feeling that people
are wondering if they’re obsolete. They’re wondering if there
is any place for them. Where I live in Silicon Valley or
if you live here, there’s a place for you because you are close
to the big computers. You’ll be one of the big computer tenders. But if you’re out there somewhere in
the world and you’re far from the big computers, I think it feels like
you’re becoming obsolete, that there’s no place for you, that modernity isn’t your
friend, modernity doesn’t care about you. There’s no future. And I wonder if this replacement feeling
of resentment towards other humans is actually a displaced feeling of resentment
towards our modern moment itself. If I was going to put it in some really striking way to get a
headline out of a newspaper or something, I’d say, maybe AI is actually the thing
that people are afraid of being replaced by, and that’s what they really
are shooting at because they’re bombarded every single day with this rhetoric
that they’re going to be surpassed. And that’s the thought.>>It’s a very good thought, yeah.>>[LAUGH]
>>On that note, we have about 20 minutes or so
for questions from the audience. Please be brief and to the point so
that everybody can->>Unlike me.>>Yeah.
>>[LAUGH]>>[LAUGH] [INAUDIBLE]>>Check, there you go.>>Thank you to the panel, I learned a lot. I’m also a big fan of a TV
show called Jeopardy.>>[LAUGH]
>>And one of the iterations on the show was to pair two human beings who
are the two Jeopardy geniuses of the ages against a, quote,
unquote, machine named Watson. Inevitably, Watson buzzed in
before the two human beings. Inevitably, Watson came up
with the correct question. So my conclusion was that Watson is
smarter than the two human beings, am I correct?>>No.>>Well, how about Watson processes
faster than those human beings, but what does that have
to do with intelligence?>>The correct thing is that IBM should
be paying wages to all the people who contributed data to that algorithm because
they programmed this thing for IBM. What it is is it’s a human-created
thing that a corporation pretended wasn’t made by people
in order to not pay the people. And once you see it that way,
then it all starts to make sense. That’s the ground truth. And that actually could
be a wonderful thing. There’s nothing wrong with that. It’s not anything in itself,
it is unpaid labor precisely.>>But even if it were paid, what then?>>If it were paid, it would be
like John Henry and the railroad. It would just be saying, well, a whole
factory of people making machines and working together cannot
outdo one guy with a hammer. Which is true, but it doesn’t mean that those people
in the factory shouldn’t be paid. The difference is that
these people aren’t paid. I really think understanding this in
terms of economics is the most clear way to get it. So if there’s a way for us to work
together to solve problems that we couldn’t do before using computers and
networks, that’s great. I’ve devoted my life to that, so
many people in computer science have. I worked really hard on this
internet thing trying to get it to work a long time ago. That would be fantastic, if the Jeopardy
thing is a demonstration of doing that, that’s fantastic. But the AI component of it is
just a way of not paying people. It’s just a way of pretending that
the people who actually did the work don’t exist. And that will destroy the civilization,
by definition. If the civilization refuses to acknowledge
its members, it implodes into nothingness. And that’s precisely what Watson is doing, with all due respect to IBM that
needs the brand recognition. I want them to be successful, but
I just wish they’d find a different way.>>[LAUGH]
>>Another question right here [INAUDIBLE]>>What concerns me most, Sue, I think you used the word addiction. You were talking about people looking
at their phones, and we all do. It’s a variable reinforcement schedule,
which, in psychology, is the strongest way to get
people to do something. So that concerns me quite a bit. What concerns me more is that now, and
I think we’ve all had this experience, you go to a restaurant and
kids of every age, every age down to two,
have a screen in front of them so that they don’t disrupt the other
diners at the restaurant. Instead, their parents give
them a big kiddy screen, here, go play with this, from the earliest age. That concerns me a great deal. I’m happy my kids were the last
generation not to grow up with a screen in their hand from
the time they could hold it. What should we do about that?>>Well, what should we do, more crayons,
>>[LAUGH]>>And more coloring books, coloring books are coming back.>>Yeah.>>One thing I keep thinking about,
I live in Vermont. And I live in a place where
we do not have connectivity. And I was thinking the other day that,
for the longest time, those places have been considered
to be kind of like bad. You can’t get online. Pretty soon, they’re gonna be
like the oasis of our society. People are gonna wanna move to these
places because they won’t have that problem with their children. This is a choice that people make. It’s easy, it’s sort of
the equivalent of taking the TV that people used to plop
their kids down in front of. And now we have that, so I don’t know. I’m with you, my daughter is
too old now for that, happily. She’s over there. [LAUGH]
>>Father of a 12-year-old daughter in the Bay Area, so
deal with it all the time. My daughter’s friends who have parents
who work at the tech companies, the cliche is true. They are forbidden, like if they’re gonna
come over to our house, the parents will call and say, can you please make sure
they don’t get in front of a screen? So I took a different tack with her. I did two things, and neither of
them will scale for anybody else, but I’ll tell you what I did. One thing is I took her to all
the companies so she could see them. So she’s been to Twitter,
and Snap, and Facebook. And there’s just like all these little
cubicles of socially-awkward nerds.>>[LAUGH]
>>And there’s this weird vibe, and she’s like, ew.>>[LAUGH]
>>So for all this information transparency,
if it were genuinely two-way, it would solve itself, but it isn’t. If kids could really see behind the
curtain, but of course, there’s no way to get a billion kids to visit these places,
so that doesn’t scale, but it works. And the other thing,
[LAUGH] at Microsoft, we have a dying or almost dead platform
called the Windows Phone. So I gave her a Windows Phone. And very little works, and
none of the surveillance stuff works. And it’s a great solution cuz they
can still do the basic stuff, but everything else is broken. And it’s kinda cool, so
it like sorta solves the problem.>>[LAUGH]
>>So those are my two little sneaky things. She’s not listening right now.>>[LAUGH]
>>Next question here to your left.>>Hi, my name’s Darian, so
I’m an AI researcher at a startup here.>>[LAUGH] You don’t exist.>>[LAUGH]
>>I don’t exist.>>Wait,
you said you’re an AI researcher at MSR?>>No, no, at a local startup here.>>Local startup, okay, great.>>And so I think I get what
you were getting at, which is, when we think about AI, we have a tendency
to think of it as a monolithic being. In reality, when you’re working
with the foundational research, it’s really a set of curated models
looking at very specific small problems. And that painting it as a single
monolithic construction is problematic in many ways, right? [INAUDIBLE] problematic, from our point
of view as a startup, because people have these crazy expectations for
what our product can and can’t do. But I think in your point, also in
terms of like framing human meaning and understanding, it’s problematic
because we elevate it to a level of coherence that perhaps it doesn’t have. And so I think that’s what I understand
from how you’re framing this. The question I have for you is, I guess,
getting back to your student’s original question, which is,
what is the purpose of human life? Even if we were to remove AI
as a referent of human meaning, I think it still fails to understand
the fundamental problem, which is that human meaning is something that’s
a deeply subjective and a personal thing. The way in which we define meaning for
ourselves is something that is an internal conversation that we have with ourselves. And that if we don’t look at,
I think one of the challenges I have, I come from an Eastern culture,
I’m a Hindu. And we don’t often have an individualist
way of framing our identity, is that in the US and
in Western civilization, we frame human value as
essentially an external thing. So it’s the value throughout the society
or it is the things that you do or the things that you produce. And I agree that in itself is problematic,
but changing the rhetoric from AI to something
else doesn’t get at that problem I think. And then the question
that I have then is like, where are your points of resistance and
what is your intervention space? So it’s either a deeply personal thing
that you need to do on your own, but that feels like getting at
global things where you could have systems that allow you to basically
render human life meaningless and allow for state violence and
allow for other things. Or if you have, if you’re looking at
it as an engagement with these larger institutions, you run into a collective
action problem where you don’t have enough mass to make meaningful change. So I don’t really know where to, I
guess frame the conversation and where to frame I guess
our intervention here.>>Well,
I suggest economics as the solution. So in your startup, look at all your
training data, I don’t know what you do, but there was somebody involved
in producing every bit that came into whatever
algorithm you used. There aren’t any angels or
aliens who’ve shown up, who are providing us with corpora for
our research. Pay those people and
suddenly you’ll have a feeling for where the line of humanity is. If you don’t pay them, you’re living in
a dreamland where you’re pretending and that’s dangerous. Pay them and
then you’ll start to get a feeling for who they are, what they do, what it means,
and that’s the way to acknowledge people. And I don’t quite buy your East-West
distinction, just in the sense that when you really go into the Western literature
there’s a lot of mystical stuff. It’s not that different. And honestly, a lot of Eastern cultures
are kind of exploitative in a similar way. There’s a precedent in the caste
system to what’s going on right now. We’re creating a caste system with
the tech companies taking over the world. And it’s based on how close
you are to the computer. So I’m a Brahmin, and
in this world a reluctant one. And so anyway, those are some thoughts. But pay the people who are responsible for
your data existing on every level, even if they were measured passively. And then think about your
philosophy again, and I think you’ll notice that it’s shifted.>>I worry about paying. I mean, I’m not opposed to getting paid. But I worry that it lets
the companies off a little too easy. So you take their data, you pay them,
and then you do creepy things with it. But you’ve already exonerated yourself,
you’ve paid them. So at what point, so how do you-
>>Yeah, I have an answer for that.>>Good.
[LAUGH]>>The point is when people can collectively bargain and have enough power
to charge enough that it cancels out the creepiness,
that it’s too expensive to be creepy. That’s the point where it’s fixed. So this is a whole other thing about what
the new economics theory has to be like. But in order to correct for the disasters
that we’re creating in the world, we have to have the people
who supply the data that runs everything be able to collectively
bargain to get enough money for it. [COUGH] And that should equalize to
the point where it undoes creepiness.>>Okay, but that’s every
single person in this room and every person walking down the street.>>That’s correct.>>Right. It’s hard to organize unions. How do we organize all of us?>>Well, we have this thing
called the Internet that makes organizing people better.>>No, not really.>>It’s like, no, no, no, I actually think this whole thing
could turn around. I really, I’m gonna.>>[LAUGH] Wow, it’s getting
really optimistic now.>>[LAUGH]
>>I’m gonna try to be like, no, I am gonna try to be optimistic about
this, cuz I remember the ideas from the start of it. And I remember how they went bad, and
the good ideas haven’t been disproven yet, they’ve never been given an honest chance yet. And I still think we can at least try. I mean, the problem is that there’s such an
astonishing centralization of power around everything on the Internet, where there’s
just a few companies that are kind of running everything, despite the illusion
of socialist openness for everybody. The idea space that’s been explored
is minuscule, despite all this talk about the radical Silicon Valley and all the
startups; everybody is almost the same. There’s actually massive conformity.
but I’m really hopeful that by testing alternate ideas we can come
up with systems that work better. One of the problems right
now is that there’s an incredible encrustation
of complacency and fatalism. Everybody believes it about everything; every
single article about Facebook is, yeah, Facebook is horrible, it’s destroying
the world, and nothing can be done. It always ends up with that last line;
it’s always, we’re stuck with this, our world is shit forever,
sorry for the language. And I simply will not go there,
I just refuse to go there.>>Okay.>>Okay, we’re gonna have our last
question because we were kinda hitting a wall over there.>>Hello, my name is Will.>>Hey.>>I’m a friend of Jared’s.>>I love this man.>>[LAUGH] It’s great to be here. I had a question that was kind
of answered, in a sense, already, and Sue made a really
amazing point about choices. I’m a musician, an artist; I
travel the world a lot and have studied a lot of indigenous cultures. I lived abroad for many years studying
different types of music and science and technology, and life, etc. And I find it interesting that in
those places, when this process began, we talked about what we know about our
thoughts and our ideas, like love and compassion. I found those things easier, more tangible,
in places that didn’t have the technology. They were cultures that
sit around the fire, or have conversations with
their grandparents often.
the way my children are growing up. My grandparents and
great grandparents were around, so I got to see my mother get chastised
by my grandmother which was fantastic.>>[LAUGH]
>>[LAUGH]>>Because there was a time growing up where I thought she was the queen and
I realized she wasn’t. But that order created another kind
of concept in my mind of order, and information, and age, and respect. So it was very tangible and
really organic. And I guess, with all of you being
familiar with the technology, and familiar with being a parent, and
being into things like music and films, where do you think that
bridge can be created between both of those things, without
losing the plot on one or having the other one sound really
ancient on the digital side? It seems like this is new, what’s
happening, cuz I have children as well, and I brought them up on vinyl and going to
the library and getting library cards. And for every book my daughter downloads, she has to read two
actual hardcover books. She can look up a book, and she can understand what a glossary
means and use a dictionary, etc. So where’s the bridge between
the technology side and the actual, I’ll call it the analog or organic side?>>Go.>>[LAUGH]
>>[LAUGH]>>[LAUGH] This guy has to listen to me too much already, you should take it.>>Gosh. Where’s the bridge? You’re the bridge. You just told us you were the bridge. I mean, that’s the thing. I mean, the cavalry isn’t coming for you. You are the cavalry, and
you figured it out. And I actually think that for
a while in there, we were so enamored of this stuff that
we didn’t really have much skepticism. We didn’t really question it.
we’re not rejecting it completely. And so,
I think that this is what we do now. We’re trying to figure this out and
I’m not sure it scales. I don’t think we say,
this is the right way to do it. I think we figure out
the right way to do it. And as I said before,
the thing that worries me is that there is gonna be
this new divide with people who can’t stop being
connected in some fashion. But I think that, hopefully,
we’ll be able to figure this out. We’re gonna have to figure this out,
I think, almost one by one. But I think that we are, and
by having conversations like this, that’s where it starts. And you say what you do, and
someone thinks, that’s a really good idea. And as a journalist,
as someone who writes for a living, I hear what you’re saying, and
then I think, maybe I’ll write about that. And then more people hear about that. And Jared’s right, we have the Internet. And the Internet, for all of its problems
and faults, is a remarkable megaphone. So it could be, in a sense,
its own undoing. And I wanna leave it there.>>Like watching the video
of this wonderful evening.>>[LAUGH]
>>It’s gonna be on the web for everybody to see. And I totally agree with you, 100%. In my family, I have five kids,
the oldest one works for Google, actually. And he’s very well paid, and he hates
his job, which is very interesting. But the point being that I think,
>>[CROSSTALK]>>[LAUGH]>>But the family, you mentioned the family. I think that’s absolutely important. And if you don’t have a family, you can always create a space where
people are really open, no screens. And what we do in my house, for example,
is every Monday, Wednesday, and Friday, we have the philosophy dinners, where
I’ll pose a philosophical question. And it’s the 12- and the 7-year-old,
and that’s it.>>[LAUGH]
>>And we go, and they all have a voice, right? And we listen to one another, no Wikipedia, no cheating,
just humans looking other humans in the eye. And I think this is
enlightened resistance: to remember the human condition, how we have a body that feels and touches
and is emotional, like we all are tonight. And man,
there is no machine that can do that.>>Right.>>That’s my final word.
>>Well, that sounds like
the perfect place to stop.>>[APPLAUSE]
>>So as part of our resistance, we should drop a note with our
ideas of how we’re gonna resist or how [INAUDIBLE] lives,
and share those ideas. I think that’s brilliant,
it’s really brilliant. Thank you all so much,
this has been an amazing evening, and it’s part of our Museum of
Science adult programs.
you can sign up over there. And thank you so much for coming, and we hope to see you again
in the very near future. Good night.>>[APPLAUSE]