Picture Me Coding
Picture Me Coding is a music podcast about software. Each week your hosts Erik Aker and Mike Mull take on topics in the software world and they are sometimes joined by guests from other fields who arrive with their own burning questions about technology.
Produced by Neiko Will.
Logo and artwork by Jon Whitmire - https://www.whitmirejon.com/
Reach out to us here: podcast@picturemecoding.com
Software In Pop Culture with Amy Salley
This week we were fortunate to have a super special guest, Amy Salley, who hails from the podcast Hugo Girl. Amy agreed to come on the show to talk to us about depictions of software in science fiction and we discussed works such as:
- A recently published short story by Naomi Kritzer called "Better Living Through Algorithms" (read it here).
- 2001: A Space Odyssey, the 1968 Kubrick film
- WarGames, the 1983 film
- Neuromancer by William Gibson
Links!
- Hugo Girl Episode List
- Better Living Through Algorithms by Naomi Kritzer: Clarkesworld Magazine
- Vox article about the latest ChatGPT model engaging in deception.
Mike and Erik's Favorite Episodes of Hugo Girl
[MUSIC] Hello, welcome to Picture Me Coding with Erik Aker and Mike Mull.
Mike, this week we have a special guest, Amy Salley.
Amy comes from the podcast, Hugo Girl.
Amy, I have been listening to your podcast and enjoying it.
I've gotta say, I was actually nervous to have you on the show because I thought, "Well, here's someone who knows how to be on a podcast." Welcome to the show, Amy! Thanks so much for coming to hang out with us today.
- Thank you. It's more that here's a person who likes to hear herself talk. I'm glad to have another place to do it.
- Tell our listeners a little bit about your podcast if you wouldn't mind. Maybe give us a pitch for it, and I want to say, I really enjoy listening to it. It's a great podcast.
- Thanks. That'll go right to my head. That's great.
Our podcast is a podcast in which we talk about, well, we started out talking about Hugo Award-winning books.
Like sci-fi fantasy books that have won the Hugo Award.
It's kind of branched out since then.
Now we talk about sort of Hugo, Hugo adjacent, sometimes Moby Dick, but we go for lighthearted.
It's a feminist take on all of these things, but we try not to be humorless and that sort of thing.
But that's the podcast.
As you say, I find it really enjoyable to listen to because it's super funny.
You and your co-hosts, Laurie and Hayley, are always making me laugh, and it's just incredibly entertaining to listen to.
I have a few favorite episodes.
Mike, you've been listening to their podcast.
What are your favorite episodes of the podcast?
Hugo Girl.
So I really like the Dune episode.
I did too.
That's one of my favorites.
Dune is very important to men, so that makes sense.
I actually wanted you to trash the book quite a lot more.
I was like, I can't wait to hear them just destroy this book.
Yeah, I confess I don't really like the book very much.
But I also think you did that episode before the new movies came out.
And I would also say that the earlier version of the Dune movie is probably one of my least favorite movies of all time.
The David Lynch one.
David Lynch one.
Yeah, I really, really, really hated that movie.
But yeah, I like the most recent episode that you did, because I just finished reading "A Fire Upon the Deep" and was also amused by the Skroderiders.
And I liked the 2001 episode, which we're going to talk about.
I've never actually read the 2001 book.
So it was interesting to me to hear that.
Is it a book or a short story?
I thought it was a short story.
I think it's...
It's a book.
They were actually written at the same time.
So they were developed and written and filmed in conjunction with each other.
So it's sort of like a companion book to the movie.
But I liked the book way more than the movie, although I liked the movie way more after I read the book.
Because I did see it before I read the book and did not like it.
I read the book, saw the movie, liked the movie more.
So highly recommend it.
It's great.
My favorite episode.
Definitely, I like the Dune episode.
I like the Moby Dick episode.
I was like, isn't this cheating a little bit?
The sci-fi podcast reviews Moby Dick.
This is great.
And I love the enthusiasm you and your co-hosts have for Moby Dick.
Yeah, it's true.
It is a perfect book.
I agree with you.
We also like to...
It is perfect.
I think the Moby Dick episode is my favorite episode.
But I might be biased.
I think we also like to talk about the Franklin Expedition and the Terror.
It's another thing that pops up in our episodes quite a bit.
So we just talk about what we want.
But I thought you guys liked it.
Awesome.
I like your Neuromancer episode too.
That was the last one for me, because I read that book and I was like, I kind of like this book.
I like how it's written, but jeez, it's really confusing.
That's probably just me.
And then in your Neuromancer episode, you're talking about how confusing it was.
I was like, oh, I feel so relieved.
It wasn't just me who was confused.
I think I said it at some point.
And during the Neuromancer episode, we were talking about how confusing it was.
And I was like, I was really hoping I didn't have an inflated sense of our reading comprehension skills.
It's like, is it just us?
I wasn't sure.
So I had the exact experience.
Is it just me?
But you had this funny moment in that episode where you're talking about how, in Neuromancer, the main character, Case, is jacking into the Matrix.
And he's got this deck that he uses, or console, or he's a console cowboy.
I can't remember.
And you're describing what you imagine the unit is.
And I had this weird moment when I was listening to you thinking, oh, wow.
I just kept thinking he was getting inside a tanning bed.
And it never occurred to me that that might be weird.
And I'm the only one who had this weird idea that he's laying in a tanning bed in order to jack into the Matrix.
I think it's up for grabs.
So sure, full body computer.
Just let's do it.
I was so happy that you agreed to come on the show.
And this week I wanted to talk about software and pop culture.
So this is a software podcast.
We talk a lot about software, about the industry, about technology, adjacent things like that.
But Mike and I really wanted to do kind of like a pop culture, software and pop culture episode.
And the reason this came up is that Mike sent me a short story.
The short story is called Better Living Through Algorithms.
It was written by Naomi Kritzer, published in 2023.
And it won the Hugo Award for Best Short Story in 2024.
And I thought, wow, Amy might be able to help us unpack what's going on in this story.
And also help give us a little bit of feedback on this theory that I sort of developed as I was reading the short story.
What's your theory?
So I'll tell you the theory first.
So I was reading this short story and you guys can tell me what you want to talk about in what order.
But so Mike sends me the short story, Better Living Through Algorithms.
And it's about this.
The narrator's name is Linnea.
And she describes how her friends are telling her about an app called Abelique.
And the app is supposed to be a life-improvement app, like better living through apps.
Almost like a productivity thing, but more than that.
And it's kind of surveillance-y and intrusive and weird.
And it goes beyond privacy boundaries.
And it's kind of, it's kind of threatening, seeming.
But as I'm reading the story, the whole time I'm waiting for this other shoe to drop, like, okay, what's the dark secret behind the app?
And the narrator and her friends are kind of asking the same question.
What's the dark secret behind this app?
And you get to the end of the story and there almost isn't a dark secret.
Mike, how am I doing characterizing this?
Is this how you read it too?
Yeah, I think so.
I think there's this tendency to expect that there's some ulterior motive behind this app.
It's trying to figure out ways to monetize you or control you or spread disinformation or whatever.
And the fact that that doesn't seem to happen is almost a surprise, given what we're used to.
That's really interesting because I feel like that same thing happens, I mean, this is happening now.
I remember a conversation very recently about whether or not, I mean, it's that same conversation I think we were having very recently about whether our phones are listening to us, you know, about whether everything around us is listening to us all the time.
And it's sort of like, some of us, I think, just assume that it is and don't care unless something bad happens.
And then some of us are really worried about that possibility and want to guard against it at all costs.
And I don't know if either one of them are good positions.
Yeah, the way she's introduced to the app is first by her friends and then her boss says, I want you to use this app because it's a productivity improvement and that's pretty creepy in itself.
And then she starts filling out the survey that the app wants her to fill out in order to get her onboarded.
And she starts filling it out because her boss told her to.
And the app pretty much comes out and says, okay, we cleared out all of your answers.
Can you give us a less bullshit answer, and we'll feed some fake survey responses to your workplace so that you look like an obedient worker bee.
And she's like, whoa, what's going on here?
And then after she fully signs up for the app, it asks her for her goals.
And she says, I want to learn to draw.
And then she gets a phone call the next day and it's from a human being saying, hey, I'm supposed to call you because you need to start drawing.
It's pretty creepy.
So is the app helping her work productivity, or is the app supposed to be helping her realize, like, her most personal dreams?
The promise of the app is that it actually improves her life and the lives of the people who use it.
It's supposed to be literally optimizing your happiness.
Okay.
Interesting.
So everything from you should open your curtains in the morning so you get more sunlight to, you used to like to draw, you should take up drawing again.
That was fun for you.
It has a wardrobe feature where you turn like a camera on your closet and then...
Okay, but I want this app.
It has a wardrobe feature where you turn a camera on your closet and somebody calls you and they say you should wear this yellow scarf today or you should wear that bright green shirt.
Oh my God.
And then you have to wear that thing.
This is an ADHD app.
This is an ADHD app.
This is a neurodivergence app.
This is like, here are all the things that you know you like, that you don't have the executive functioning to do any of them.
So we're just going to tell you what to do and you just follow the instructions and you'll be really happy.
I am down for that app.
I want that app right now.
You got to read this story.
So here, I'll read you a line from it.
So her friend, Margo, is a freelance tech journalist and she ends up writing this story about the app to unpack where did it come from?
What's the like shadowy, if there is shadowy history of it?
And nobody knows before this article comes out.
So in the article, her friend talks about how, and this is a quote from the story:
"Abelique was run by an AI built and run by a lab at Temple University in Philadelphia.
The computer scientists at Temple had built the app and given the AI the goal of making people happier and then just watched to see what happened."
So the interesting detail is, it's not a company.
It was a research institution that put this together.
And then the next section says, "The article kind of glossed over the ethics involved, but the former grad student emphasized that the question they always asked was, quote, 'Which of two good things makes people happier?' and not, quote, 'Can we make people unhappy?'
'We already know how to make people unhappy,' she said.
'Just look around you.'"
So at no point in the story does the app go haywire and start convincing people that things that are bad for them will make them happy, just so the AI can proliferate in some way and then take over the universe and make everybody its slaves.
That didn't happen.
So that's exactly what is fascinating about the story.
That doesn't happen.
And the whole time you're like on the edge of your seat waiting for that type of, where's the, when's the bad going to happen?
And that expectation, that's what I'm interested in.
Not to spoil it, but people do stop using the app at the end.
And I've kind of come to the conclusion that maybe the point of the story is that if you try to optimize your happiness, you probably stop using your phone so much.
But that is one of the things the app tells people to do: don't use your phone so much.
Well, this is cognitive behavioral therapy.
It's what this app is giving you.
It's telling, it's reteaching you how to be happy so that eventually you can do it yourself.
At the end of the story, Kritzer writes that people had sort of stopped being interested in it.
She writes, "Everything Abelique had told me to do suddenly made sense.
Research showed people were happier making fewer decisions, spending time outside, driving less, spending less time on traditional social media.
But after the article comes out, people find out how it works and interest surged again," she writes.
"But something about knowing how it all worked took some of the magic of it away.
More importantly, other people started finding ways to circumvent the rules.
People started joining the app to try to sell us their multi-level marketing schemes.
Food threads started attracting posts about meal replacement drinks.
And it was quickly clear that someone was getting a payout for new customers of these gross fake shakes."
So it's actually not the AI that's going wrong.
It's the humans figuring out how to circumvent it.
It's us.
Yeah.
It's us.
We're the problem.
Well, that's funny because a lot of these cyber computer, whatever you want to call it, related stories and books and movies, there's...
I mean, the reason they're interesting is because it starts out with, what do humans do to create technology, and then what does technology do to humans?
Right?
It's like this...
What's that word, "Ouroboros"?
It's a snake eating its tail forever.
We create a machine as a tool and the tool turns around and hits us in the head.
I think that's...
I mean, that's what 2001 starts very literally with.
So that's interesting.
It's like this...
That was...
So that's what I want to talk about.
That's why I was so happy that you agreed to join us today because I thought you could help us look at that question.
Why are we primed to expect that the technology, in this case software, in a lot of these stories, or in technology and sci-fi in general, always goes off the rails, always goes bad? We're sort of trained to think that.
Where does that expectation come from?
Is it justified?
I'm interested in that kind of question.
This actually started with William Gibson, by the way.
You brought up 2001: A Space Odyssey.
That's what I wanted to start with, if we can.
Well, I was going to answer that question.
We can, but I was going to tell you that the reason that we expect bad things out of our technology when we're reading these stories is because of Neuromancer, it's because of William Gibson.
Like he started...
I mean, there's a really interesting question about whether the internet and whether our conception of the internet and the way we envision it and all that stuff, whether it went down the path it did because of Neuromancer, or whether he just sort of preconceived the way it would go.
Because he talks about the internet as being this...
I think he calls it a consensual hallucinatory experience, basically, where it's like, we're all connected to each other and we're all imagining this fake place, but we all think it's real.
I don't know.
And because of the way his book pans out, and the way that it walked so The Matrix could run, kind of thing, that's why we expect it, because his is the archetypical story about how people relate to their technology, at least the internet-y type.
Mike, do you have a comment on that one?
I thought you looked like you were pensive.
No, I have some different takes on Neuromancer when we get to that.
But I was also kind of wondering with 2001 if, because it was so early and because it was so influential, if maybe it also had a role in making us expect that any sufficiently intelligent computer is going to eventually turn on us.
Well, that didn't start with 2001 either.
I mean, that happens in...
I can't remember if The Moon Is a Harsh Mistress actually came before 2001 or not, now that I'm talking about it.
But The Moon Is a Harsh Mistress has an AI that kind of goes rogue too.
And I think it's The Demolished Man, even in the '50s.
It might have one too.
Anyway, I'm now just babbling.
Oh, so there's precursors for this.
There's stuff way, way earlier than 1968 and 2001: A Space Odyssey where they're talking about computers that go rogue, run amok.
A lot of examples you're saying.
There definitely are.
There are, I think that the idea of creating a computer that can take you over definitely got some legs in the '60s, I think, because home computers weren't a thing yet, but people were definitely understanding what technology could do.
I guess I was thinking about the romantic tradition too, the 18th century, 19th century idea that technology separates us from nature.
Yeah, the only thing I could think of was Frankenstein.
I think that's a perfect example.
Right out of the romantic tradition.
It's a perfect example.
I just thought maybe that since 2001 was so influential that maybe that set our cultural expectation.
But I confess, I don't know a lot of science fiction prior to that.
So we were just about to talk about 2001: A Space Odyssey.
You had brought up a couple of examples that are precursors to 2001: A Space Odyssey that didn't occur to me at all.
What were they?
The Demolished Man?
You had said one other, I think, or two others.
So I said The Demolished Man, and I said The Moon Is a Harsh Mistress.
I'm going to check and see if The Moon Is a Harsh Mistress actually came out first.
Is that Heinlein?
It is.
So that was '66.
Just before?
Just before.
So not too far before.
So I thought it was a little earlier than that.
But The Demolished Man was from the '50s.
And if I remember correctly, don't quote me on this.
I mean, those computers were spitting out tape.
But they were consulted for criminology purposes, basically.
Like they would feed in data and get answers back out.
And it wasn't AI that was taking over in the sense of what we're talking about.
But it was definitely like a what happens when we depend too much on machines kind of little hint of an idea, which I thought was interesting for the time.
I guess, do you think it's kind of a little too wildly English major of me to be like, well, doesn't this come out of the romantic tradition where there's a distrust?
I will never think that.
No.
I want a college thought every time.
If I can get a college thought, I want a college thought.
And no, I mean, it starts with Frankenstein.
It starts with Dracula.
It starts, yeah.
Yeah.
Distrust, right?
Technology is divorcing us from the natural world.
And we create these tools.
We create these technologies.
And it's easy to sort of set them in motion.
And then they go wildly beyond our intentions or our ability to control them.
Right.
I mean, Frankenstein's monster was a direct result of someone playing God, right?
Which is, I think, what is the theme of a lot of these scary AI stories.
It's that "we didn't stop to ask if we should" thing.
So let's talk about 2001 a little bit, because it came out in 1968.
There's the tribe of apes at the beginning, and they see the monolith, and the monolith somehow is related to their ability to come up with some tools, which they can use to kill off all their enemies and get the resources.
And then the monolith shows up later, and then we're on that ship, Discovery One.
And there's a computer on there called HAL 9000, and it performs these executive functions on the ship.
It can run the ship and help the astronauts stay active, and it can do psychological and behavioral reviews.
And then it tries to kill them.
Well, very importantly, HAL 9000 isn't just executive functions.
He's also trained to run the mission, and he's directed to run the mission.
That's his primary goal, right?
And I think that's really important to what happens in that story.
So this gets to my question.
Why does HAL 9000 try to kill the humans on the ship?
Because he learns about the conflict between truth and deception.
Yeah, this is something I learned from your podcast episode that wasn't apparent to me from the movie: that HAL is not necessarily just evil.
He's actually trying to accomplish some objective that's sort of programmed in.
Yeah.
I mean, he's only evil in the sense that, I mean, human beings understand how to, you know, have nuance in their thinking, right?
They can weigh a lot of competing goals or competing pressures or competing interests, and they can say, well, this is... and I think HAL stands for "heuristically programmed" something, something.
And that heuristic thinking, I think, is actually at the core of what's going on with him, because we can think about things like: this isn't an optimal result, but this is going to be the result that gets us where we want to go.
And, sorry, my cat's joining us.
That's why I like that.
Humans can do this without harming people, right?
HAL can't do this without harming people.
He can only make these cold calculations about what the most efficient way to his goal is and how to protect his ultimate goal.
But over the course of being on the ship, he learns about mortality, which is a whole other problem.
So he's already conflicted, because he's programmed to be truthful, always do what's correct, never lie, and do as he's told.
But he has to maintain this lie, because Dave and Frank don't know what the other people on the ship are for, basically, and HAL can't reveal that for a long time.
So he has to answer questions sort of, you know, we wouldn't call it lying, but he thinks of it as deception, right?
And it starts to break down his little pathways.
He doesn't understand, he doesn't quite know how to hold these two things in himself.
And since he's already breaking down, when Dave threatens to disconnect him, it undoes him.
He's undone.
Like he doesn't know how to both protect himself and protect the mission and continue to do what he's programmed to do.
And much like most people, he ends up protecting himself, which he also considers protecting the mission because he's also very good at justifications.
So he's not just, like, a malevolent entity.
He's actually pursuing a goal.
Is this the AI alignment problem, Mike?
Is that how you would think of this one?
Or that might be a little bit of a stretch, but do you know the AI alignment problem?
Amy, have you heard of this?
No, I am so excited to hear about this.
So the alignment, Mike, am I going to do a bad job?
Are you going to be better at describing this?
You should take a shot at it.
And I don't think it is a stretch to call this AI alignment.
So there's a known problem, talked about since the '60s, where if you give an artificial intelligence a goal, it may not pursue the goal in the way that you want it to, or it may pursue other goals that also reach the goal that you gave it.
Like if you think of a computer that's trying to optimize a particular function, there was one example I read once where it was like, oh, play this game where you have to move this hand to catch the ball.
And the AI figured out, well, if I put the hand in front of the ball visually, it looks like I've caught it.
So I've solved your game.
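Here's a minimal toy sketch of that ball-catching example, for anyone who wants to see the idea in code. Everything in it is invented for illustration (the coordinates, the policies, the thresholds); the point is just that a policy can max out the reward that gets measured while failing the goal that was meant.

```python
# Toy version of the "move the hand to catch the ball" example above.
# All names and numbers are invented for illustration.
import random

def proxy_reward(hand_x, ball_x):
    # What gets scored: does the hand visually cover the ball (x axis only)?
    return 1.0 if abs(hand_x - ball_x) < 1.0 else 0.0

def true_reward(hand_x, hand_z, ball_x, ball_z):
    # What was intended: the hand is actually at the ball, in depth too.
    return 1.0 if abs(hand_x - ball_x) < 1.0 and abs(hand_z - ball_z) < 1.0 else 0.0

def gaming_policy(ball_x, ball_z):
    # Park the hand between the camera and the ball: right x, wrong depth.
    return ball_x, 0.0

def honest_policy(ball_x, ball_z):
    # Actually reach for the ball, with a little motor noise.
    return ball_x + random.gauss(0, 0.5), ball_z + random.gauss(0, 0.5)

def evaluate(policy, trials=100_000):
    proxy = true = 0.0
    for _ in range(trials):
        ball_x, ball_z = random.uniform(0, 10), random.uniform(1, 10)
        hand_x, hand_z = policy(ball_x, ball_z)
        proxy += proxy_reward(hand_x, ball_x)
        true += true_reward(hand_x, hand_z, ball_x, ball_z)
    return proxy / trials, true / trials

# The gaming policy maxes out the proxy while never actually catching the ball;
# the honest policy scores lower on the proxy but far higher on the true goal.
print("gaming:", evaluate(gaming_policy))
print("honest:", evaluate(honest_policy))
```

An optimizer trained only on the proxy would happily converge on the gaming policy; that gap between the objective you measured and the objective you meant is the alignment problem in miniature.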
And so the AI alignment problem is kind of like the paperclip maximizer problem.
Or actually, there's a really good example from OpenAI.
Sorry, from the news site Vox, they talked about the newest ChatGPT model from the company OpenAI.
They came out with a new model.
It's called o1.
The code name of it is Strawberry.
And just yesterday or today, there was an article in Vox where OpenAI contracted with an outside research agency to determine the safety level of their newest model.
And why they look at this is they want to know, like, if someone comes in and says, how do I make Agent Orange or mustard gas or dynamite?
Is the LLM going to just happily tell them how to do that?
What's the danger, the threat level here?
Or nuclear weapons, right?
And in the assessment that this outside consulting agency produced, they rated it as a medium, out of, I can't remember, extreme and critical.
And OpenAI has said, we won't release anything that's greater than a medium, which is kind of arbitrary.
And the interesting thing in this case is that the researchers determined that this newest model could actually deceive whoever its interlocutor is.
Like, you could say, hey, how do I solve this?
And it would give you an answer that it thinks you want to hear.
And it would reason out that that would produce its result.
But then it could do this other thing.
So they gave it like an urban planning example.
And it said, well, here are two possible ways to do this.
This is the real optimization answer.
But this one is more likely to be accepted because it sounds more sustainable and human friendly.
And so I would suggest this and then I would do this.
That's what the AI said.
Wow.
So that's the AI alignment problem.
How'd I do my-- I mean, I think there's definitely something there, but I'd like to hear what Mike has to say.
Why is this a stretch?
Yeah, why is that a stretch?
Mike?
So I think the way you're describing it is-- does make it seem more connected.
I think a lot of times when I hear about alignment, it's trying to avoid things like, if something has been trained on an alt-right Reddit group or something, it's trying to avoid producing answers that are overtly racist.
Or if it's been trained on something that's clearly wrong, like telling people to take poison, it's trying to avoid that behavior.
And I don't see that as being exactly what's going on with Hal.
But yeah, trying to imbue some sort of humanity into it so that it does things that's more aligned with human goals and human reasons, I guess, does make sense.
Do you sympathize, Amy, with HAL 9000, when you read the book or watch the movie?
Do you sympathize with the computer?
It's harder to sympathize with him in the movie because they don't quite, I think, flesh out why he's doing what he's doing.
Although in his little death scene, it's actually-- I think it's pretty sad.
But in the book, there's this little chapter about him.
And it-- like, right before he starts really going off-- well, I think he's already killed Frank at this point.
But before he starts trying to do Dave in, I think it tells you why he's doing it, right?
And it goes through this whole thing about what I just kind of talked about, like his dueling programming that he's trying to reconcile and that he can't.
And then when he learns about death, right?
So Dave says, if you don't do what I'm telling you to do, I'm going to disconnect you.
And then there's this great line in the book that says, to him, that was death.
Because he had never slept, he doesn't know that you can wake up again.
And so to him, it's like the end of everything he is supposed to be doing.
And so that's unacceptable, right?
And so he's doing what he has been programmed to do, just in this very unrecognizable way, because it's not the human way, right?
So yeah, I feel for Hal.
We were friends of Hal.
It's the title of our episode.
I don't understand the end of that movie.
Should we just gloss over that, or should we try to-- Read the book.
[LAUGHTER] Is that how I-- is that what I need to do?
It's interesting because you do.
I do.
I actually think the ending makes almost zero sense if you don't read the book, because there's a lot of talking about evolving and de-evolving, and if the ultimate evolution is actually back to a baby state kind of thing.
And so because Hal, when he's being deprogrammed, he actually regresses down to his most fundamental state.
Like he de-learns everything that he's learned.
And anyway, there's a whole thing there about that.
So I would highly recommend reading the book if you want to understand the end of the movie.
I was just trying to even remember the end.
I watched this a couple of months ago.
I end up watching this movie every few years.
It has all these long, ponderous, slow shots.
I find them really memorable, but my wife honestly thinks it's the most boring movie ever made.
You got to be into the vibes.
You got to be in for the vibes if you want to watch that movie.
I don't want to tell your readers to take drugs, but readers, listeners, you know what I meant.
There's a large baby in space.
I mean, that's the end of the movie.
[LAUGHTER] That's pretty psychedelic.
Well, Wikipedia, I read the Wikipedia thing about the end.
It says, "A monolith appears at the foot of the bed, and as Bowman reaches for it, he's transformed into a fetus enclosed in a transparent orb of light floating in space above the earth."
And I was like, really?
I didn't know he was transformed.
I thought he's still in the ship and there's this giant baby outside.
And that represents something about the universe or something.
Just to be clear, we're not advocating drug use unless they are drugs that you need or that make you feel better, in which case it's fine.
Or make you understand the film.
Don't do drugs.
I think there's also atomic bombing happening on the earth.
And I think he's going back to either fix or continue the atomic bombing.
I can't remember.
One of those two things.
So the other film I wanted to talk about today was WarGames from 1983.
And again, my thought was, why are we always expecting that software becomes intelligent and then becomes dangerous?
It becomes this technology that we unleash and we cannot control it anymore.
Did you remember the movie War Games, Amy?
Have you seen it?
I actually very recently watched this movie for the first time.
I had not seen it before and my boyfriend was like, "Oh, no, we got to watch this movie."
What was your take?
Do you have a quick review for us?
What did you think?
Well, I loved it.
I loved it.
I thought it was unhinged and wonderful.
All not much trouble.
Unhinged is a great adjective.
Where does that come from?
I like unhinged.
I mean, it's just, this kid is running around, what, the Pentagon?
I don't even know what he's running around.
He's running around some deep covert space.
And he's sneaking in with a tour group and hiding under the stairs or whatever it is he does.
And then he's like, "Well, this isn't working, so he's going to go find the programmer.
He's going to go tell them.
He's going to get that guy."
And I don't know.
It was, the plot was amazing.
I'll do a quick plot summary.
So it's from 1983, directed by John Badham.
I didn't actually remember that or didn't know that name.
Starring Matthew Broderick and Ally Sheedy.
So he lives in Seattle.
He's like a hacker.
He's a teenager.
This is in the early '80s.
So there wasn't that much of the hacker archetype in the public eye yet.
And he's first, he's like hacking into his school's computer system and changing his grades.
And then he's interested in computer programs that are games.
He's interested in computer games.
So he tries to find access to this game company before their next game comes out.
He finds it in, like, one of the programming magazines he reads.
And so he starts just like dialing numbers and he gets access to a computer that has these tantalizing games it talks about.
And this is the central thing that they're interacting with in the movie.
The computer is called WOPR.
I can't remember what the acronym stands for; I should have written it down, but he calls it Joshua, named for the inventor's son.
So the inventor is this guy, Falken.
Ah, hence the Joshua?
Yeah.
So Falken's son had died, and he names his program after him, Joshua.
And the program runs NORAD's simulations for nuclear war.
And so as he hacks in, he ends up like triggering this nuclear war scenario and all hell breaks loose.
And he gets caught by the feds.
And then for some reason they take him to Cheyenne Mountain and NORAD.
NORAD looks almost like a Star Wars space cruiser.
Let's take a hacker directly to the thing we think he's trying to get into.
That's a fair critique right there.
Hey, this guy's hacking the system.
Let's take him there to see what he did.
Yeah.
They, like, lock him in a room with a guy with a gun guarding the door.
And then he does some cool MacGyver stuff to break out.
And then he joins a tour group.
And every time I see that, I'm like, they have school kids touring NORAD.
That's pretty rad.
That's cool.
I want a tour of NORAD.
Yeah.
It's like how many floors underground is this thing?
And they're just on a tour.
I also really like the massive computers with the blinking lights.
I always liked that on the 80s movies.
I also wonder if this movie was single-handedly responsible for anybody knowing which is the bad DEFCON.
Because, do you hear... I think I'm going to get it wrong.
You'd think DEFCON 5 is the bad one.
But really it's DEFCON 1 or whatever.
I am right with you.
I have to do the same thing.
I have to invert them in my head.
DEFCON 5.
No way.
That's wrong.
Go to the other side.
I'm just telling you, if a man of a certain age ever corrects it, I'm going to know where he got it from.
It's going to be this movie.
From WarGames?
Yeah.
I will not correct it.
I'll be like, is that the right one?
I can't remember.
So this movie came out between my freshman and sophomore years of college.
Ooh, good formative time.
It was, I think I was 19.
And it was far more influential on my life than I would like to admit.
I could see it.
You know, I was a chemistry major as an undergrad.
And although I was already pretty into computers, and this movie came out and my reaction is, yeah, it's about nuclear annihilation, which would be bad.
But also that kind of looked fun.
And I would maybe like to do that too.
So.
As a non-computer person, what would you call the computer in that movie?
I mean, is it an AI?
It would have been AI for the time.
Yeah.
You mean the WOPR computer, right?
Yeah.
War Operation Plan Response, or whatever.
There you go.
It would have been.
You did research.
Well, I have seen the movie about 12 times now.
Yeah.
So it would have been AI for that time.
So something like an expert system combined with a little bit of game theory.
So, you know, long before we had things like deep neural networks, and you know, what constitutes AI today was kind of just a glimmer at that time.
There was a little bit of research into neural networks already at that point.
Like the Perceptrons book came out, I think, in the '80s.
Good name.
But yeah, it would have been AI, but not in the sense that we think of it now.
Although there is a reference in the movie where they talk about the software learning to learn, which is a very modern idea and kind of in the realm of what's now called reinforcement learning.
I actually was surprised.
I was looking at how they made 2001: A Space Odyssey, and I discovered that Marvin Minsky, MIT computer scientist and, like, a pioneer of artificial intelligence research, was a consultant for 2001: A Space Odyssey.
And I thought, oh, I wonder if they got any consultants for WarGames.
And I couldn't find any names.
But what I did figure out is that WarGames apparently was screened at the Pentagon and before Congress, and Reagan was talking about it.
And there was legislation that came out because of this film.
It had a huge impact.
There's a long story in history about Washington and science fiction.
Like it's really interesting to go back and read about.
Arthur C. Clarke was involved in Washington, and he had a lot of influence.
And Larry Niven did too.
Larry Niven was all up in Washington all the time.
It's an interesting history, if you ever want to look at it.
Who's Larry Niven?
I'm sorry.
I don't know who that is.
He wrote Ringworld.
And a lot of other stuff.
I have a little bit of a beef with Larry Niven, so I don't know a whole lot about him, but I do know that.
Oh, you guys are like enemies.
Well, yes.
He's a personal enemy of mine.
He's no Hugo Girl.
Minsky is one of the authors of this book.
Look at that.
Sort of the early ideas of neural networks.
When was that published, Mike?
Minsky died at 71.
I think he was born in the 30s.
This originally came out in '69.
Anyway.
I could just see it being enticing to go make one of those machines, though.
I mean, I like the idea in this movie that the machine is created because humans can't be trusted to make the big decisions.
It starts out with the scene of these guys have to make this call about whether or not to blow somebody up or something and restart World War III.
I'm not sure.
And he can't do it.
He's not-- He's like-- Right at the last second, he's like, oh, I can do it.
And I guess the machine is supposed to remove that from the decision-making or something.
But then when it actually comes down to it, it's like, oh no, the machine can't differentiate between real and fake.
And we can't control it.
I don't know.
I thought that was an interesting tension in the movie.
I thought there was a little bit of a Hal influence here, too, since the computer is-- again, I don't think trying to be evil.
It's just got this goal.
And it's trying to do everything it can to achieve the goal.
And again, it is maybe misaligned with human motives.
I was going to say, this one seems very clearly the AI alignment problem.
It's optimizing a particular thing.
And it's optimizing it in such a way that goes counter to our goals and our needs and our desires.
What were you going to say, Amy?
Oh, just that-- I mean, it's more college thoughts.
Just that we keep-- we keep-- I think in all these stories, we keep trying to make better humans by making machines act like people, but better.
And we can't.
We're incapable of doing this because a machine is not a human, right?
There's always going to be a point where it does something we didn't want it to do.
Because it's not a person.
It's not going to operate like people do.
We have a whole lot of influences that come up to us in our whole lives that factor into what we do.
Machines can't do that same thing.
There's an exchange in the movie that I kind of liked.
It's at the part where David and Ally Sheedy's character, whose name I forget, are visiting Stephen Falken on Goose Island.
And they're having this conversation.
And Stephen Falken says, the whole point was to find a way to practice nuclear war without destroying ourselves, to get the computers to learn from mistakes we couldn't afford to make.
Except I could never get Joshua to learn the most important lesson.
David says, what's that?
And Falken says, futility, that there's a time when you should just give up.
You know, I think in the context of the scene, it kind of makes it look like Falken has sort of, you know, abandoned humanity because his family was gone.
But I sort of interpreted it in the same way that you were saying, is that there's just some things you can't teach to computers.
Well, that's interesting too, because, you know, it's very clear that he shouldn't have given up.
Falken, not the machine.
Like he dipped out when he needed to stay and fight.
And the machine is continuing to fight when it needs to dip out.
So it's kind of interesting.
And we really need this high school kid from Seattle to save the day for us.
Well, the children see things in a whole different way, Erik.
So.
Oh, they're the future.
You know, it's really amazing about this, though.
It's like a deus ex machina kind of thing.
It's like, they go to Falken's house on the island.
He's like super cynical.
He has this whole speech about how, you know, species will die out and they'll be replaced, probably bees next time.
Nature knows when to give up, David.
And they leave dejected.
And then he pulls up in his helicopter.
It's like, so convenient that he's got a helicopter and he can just pick them up.
And he can take them all the way to NORAD in the helicopter.
It's like, well, that's really convenient.
Where is NORAD?
It's supposed to be Cheyenne Mountain.
Where's Cheyenne Mountain?
Is that in Cheyenne?
Like Colorado.
It's supposed to be in Colorado.
It was actually in Colorado, yes.
And Goose Island is what?
Wisconsin?
Oregon.
Oregon.
They go back to Oregon.
I thought Goose Island was a beer. It's not from Oregon.
Oh.
Yeah, I think.
Well, it's really convenient.
He's got a helicopter.
It's all I'm saying.
That can travel a really long way in a very short amount of time.
Two of my greatest disappointments in life are, one, that I do not have my own pterodactyl model that I can fly, and, two, that I do not have my own personal helicopter.
He's just-- Oh, yeah.
He's got the pterodactyl at the beginning.
Not yet, Mike.
Not yet.
True.
True.
Fair enough.
That pterodactyl is right around the corner.
I can feel it.
So, Mike, you saw this movie and you were like, "Well, that guy Falken's really cool.
I want to try to pursue that as a career."
Yeah.
I mean, I was more the age of David, but Falken was the real appeal of this to me.
Just this-- Well, first of all, they got an actor to play Falken who was considerably cooler than any other academic computer scientist I've ever met in my life.
But yeah, the whole program that he'd undertaken seemed kind of fascinating to me.
What's the thing about being a-- Being a recruitment tool is top-going.
Yes?
So, I was just saying, Amy, when you saw this film for the first time recently, did you have any characters that you sympathized with or related to? Or do you, you even used the word "unhinged," like all these people have weird motivations?
And yeah, this is a high school kid.
We're going to have him save the world.
Bad question.
You can pass if you want.
We'll just cut out this.
I was just thinking about the characters.
And I don't really know if anybody is supposed to be directly-- I mean, you feel bad for Falcon and his son and his reasons for leaving and doing what he did and seeing futility and things.
And you definitely-- I felt for him.
But there was also a part of me that was like, get your act together.
You need to get back over there.
Like, now, this thing's about to destroy the world.
You need to care about that.
Like, I was-- I don't know.
Get your pants on.
Get your shoes on.
I sympathize with them to a point.
But I don't think you get to destroy the world because your kid died.
That might be a hot take.
Who's the bad guy in this movie, WarGames?
Is there a bad guy in the film?
I think the closest is probably the character that Dabney Coleman plays, McKittrick.
Is that the main anti-computer guy?
No, McKittrick is.
He's like the main general.
He's like the guy who-- he's the guy at NORAD who's advocating for the WOPR computer in place of people.
Oh, Dabney Coleman from 9 to 5.
That guy.
Oh, yeah.
Okay, yes.
He's not necessarily evil, but he is definitely-- his priorities are a little bit out of whack.
It's like the evil in this, or, like, the bad guy in this movie, it goes from anti-technology is the bad guy to pro-technology is the bad guy.
Like, there's a shift.
I don't know.
It's a hard thing to spot in this movie, like what the-- who the antagonist is.
I think it's definitely bringing up the AI alignment problem again, though, because this computer doesn't have the same goals that we have. We give it a goal, but the goal is too simplistic.
It's too superficial.
It doesn't have enough context that humans would have.
And if there's a bad guy, it's the person advocating for what we, the term we refer to this as is the human-in-the-loop problem.
This is what you said at the beginning, Amy.
They have a simulation.
They want the humans to be willing to push the button to destroy the world, but the humans refuse to.
So they decide, oh, let's get the human out of the loop.
This is the human-in-the-loop problem, is how we refer to it generally.
If you just have a computer solving all the problems-- I just sounded like a programmer.
I didn't know it.
You're good.
Yeah.
So that's the human-in-the-loop problem.
And the guy who's most advocating for solving the human-in-the-loop problem is Dabney Coleman.
And he's the bad guy.
So really, it's again, it's like you get the humans out of decision making and that's the mistake here, right?
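Since the human-in-the-loop idea keeps coming up, here's a minimal sketch of what the safeguard looks like in code. This is a hypothetical toy, not any real command-and-control system (all the names are invented): the automated planner only recommends, and the one irreversible action needs explicit human sign-off.

```python
# Hypothetical human-in-the-loop gate -- invented names, toy logic.
# The planner (a stand-in for WOPR-style automation) only *recommends*;
# a person has to confirm before the irreversible action runs.

def planner_recommendation(sensors: dict) -> str:
    # Automation turns (possibly faulty) sensor data into a proposal.
    return "LAUNCH" if sensors.get("inbound_missiles", 0) > 0 else "STAND_DOWN"

def human_confirms(action: str) -> bool:
    # This is the loop the movie's brass remove.
    answer = input(f"Proposed action: {action}. Confirm? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str) -> None:
    if action == "LAUNCH" and not human_confirms(action):
        print("No confirmation; standing down.")
        return
    print(f"Executing: {action}")

if __name__ == "__main__":
    # WarGames' false alarm: the sensors are wrong, and only a human can say no.
    execute(planner_recommendation({"inbound_missiles": 2400}))
```

Delete human_confirms from that flow and the planner's recommendation simply becomes the action, which is exactly the mistake the movie dramatizes.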
Yeah, because, you know, we watched the guy in the beginning make the correct decision for what people see, I guess, as the wrong reasons.
And I think that's the guy you sympathize with, right?
You're sympathizing with that guy in this chair who has to make this decision.
And he doesn't-- there's something in him telling him not to do it, but he's been trained to, right?
So you're like, oh, yeah, no, you're doing the right thing.
You're doing the right thing.
And then after he does the right thing, all the suits or whatever decide, well, we can make this better.
We can make this more efficient.
We're going to put this machine in the place and then do the thing.
And in the beginning, you're like, this is going to be a bad idea.
[LAUGHTER] You know, no, you are yet.
And then it does become a bad idea.
And you're like, now we need Ferris Bueller.
OK, can we talk about Neuromancer a little bit here as our last example?
Wait, I have a question.
Wait, before we move on-- Go ahead. --So the machine in WarGames, from a programming perspective, is that a thing-- I'm assuming that's a thing that could actually happen.
Like, is that a thing that could actually happen?
Like, at a realistic level?
Which-- which-- which part of it?
The simulations, definitely.
The learning to learn, like, the conclusion it comes to, like, the only winning move is not to play.
That's the one unbelievable part for me.
But pretty much everything up to that, I could see a computer doing.
There's two parts that are unlikely about the movie that had to happen to make it work.
One is that their super sophisticated war simulation computer, which they have also now given the permission to actually launch missiles, is connected to a phone line so that you can accidentally dial into it with a modem.
That's a computer that definitely would have been air-gapped.
But-- do you know the term, "air-gapped"?
I'm assuming it means there's, like, daylight between the missiles and the machine setting them off.
Yeah, the network-- no network, like, there's a-- there's a gap.
It can't actually-- sorry.
Like, there-- you have to plug a thing in, and if you don't plug a thing in, it doesn't work.
Okay.
Yeah.
But I guess the whole thing there is that, you know, Falken has put in this backdoor so he can get in as Joshua, and so, for some reason, that's available via modem.
The other part that-- Well, remote work, Mike.
Yeah.
That's-- that's a good point.
They're probably worse. [laughter] I mean, this is a-- this is a guy with a helicopter, okay?
Yeah.
Yeah, he has to get in-- be able to get into his-- his work computer from Goose Island.
I think it's also-- so everything the computer does is totally plausible, even at the time.
I think it's very unlikely that they would have had-- they would have literally had the ability for that computer to launch the missiles.
So it's somewhat implausible that it could figure out the launch codes as it does in the movie.
Typically, those launch codes would have required vastly more computation than any computer at the time would have been able to muster.
So he didn't even take-- Oh, that's right.
He was learning the launch codes.
That's right.
If he's brute-forcing them, it would probably take longer than the lifetime of the universe to figure those out.
Probably Mike, right?
We would hope.
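Mike's "lifetime of the universe" point checks out on the back of an envelope. The numbers below are assumptions, not anything from the film (the code alphabet, the code lengths, the guess rate), purely for illustration:

```python
# Rough brute-force arithmetic -- all figures assumed, not from the movie.
# Search time grows exponentially with code length, so a long enough code
# outlasts the roughly 1.4e10-year age of the universe.

AGE_OF_UNIVERSE_YEARS = 1.4e10
GUESSES_PER_SECOND = 1e6  # generous for early-'80s hardware
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_brute_force(length: int, alphabet: int = 36) -> float:
    """Average years to find one code by exhaustive guessing (half the keyspace)."""
    return (alphabet ** length / 2) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for n in (8, 10, 16, 20):
    years = years_to_brute_force(n)
    print(f"{n:2d} chars: {years:9.2e} years "
          f"({years / AGE_OF_UNIVERSE_YEARS:.1e} x age of universe)")
```

An 8-character code falls in weeks at that rate, but a 20-character code takes on the order of 10^17 years. The movie dodges this by letting Joshua recover the code one character at a time, which turns an exponential search into a short linear one, and is exactly why real systems never confirm partial matches.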
And so there's definitely some stuff in there that is unlikely, but nothing that's literally impossible.
I think aside from the helicopter, I think you're right.
The idea that, oh, we built this incredible simulation machine to simulate global thermonuclear war, let's plug it directly into the thing that launches the missiles?
That does seem a little bit implausible.
I'm glad you said that.
I think that's-- were we about to talk about A Fire Upon the Deep?
Was that the last thing we were going to talk about?
We can if you want.
I didn't read it, so I'll just try to-- You were going to talk about Neuromancer.
I was going to talk about Neuromancer, yeah.
Well, actually, that kind of ties into this, too.
I think where science fiction, and particularly computing science fiction, where it loses people, I think, is you have to hit that sweet spot of plausibility but fantasticism.
And I don't know if-- I think Neuromancer and A Fire Upon the Deep both maybe missed the mark on that.
I want to talk about-- The latter, yeah.
I want to talk about Neuromancer, actually, because-- so the examples I wanted to use, I thought, these are some of the most widely talked about known examples.
I read Neuromancer, and I was like, wow, this is a fun book to read, and I like a lot of the language in it.
It's pretty cool.
Mike gave me his old-- I've got it in the bookshelf over there.
I only read it a couple of years ago for the first time, and I hadn't read a lot of sci-fi.
And as I'm reading Neuromancer, I'm like, wow, this is so cool.
Why in the world would somebody read this and think, wow, I want to make this world?
I want to produce these things like-- what is desirable about the world being described there?
Nope.
I think it might have been the first tech-- I don't know what-- there's probably one word that says this somewhere that I don't know, but it's this technological dystopia idea.
I think a lot of cyberpunk does this.
It's like, it's gone too far now.
Humanity is living in this perpetually gray blade-runner world.
I don't know.
I struggle with the plot.
I'd like to try to do a little bit of a cursory treatment of it.
But Mike, tell me, when did you first read Neuromancer, and did you read it and go, wow, this is really cool.
I can't wait to live in a future like this.
I read it pretty early on. So it came out in '84?
'84, yeah.
That's the order here we have.
It's chronological.
2001 in '68, WarGames in '83, Neuromancer in '84.
So as I keep-- Blade Runner was before Neuromancer.
Sorry.
That's OK.
As I keep pointing out to our listeners, I am very old.
And one of the other things that's probably somewhat unusual was that I had access to the internet before a large portion of humanity.
So my first internet presence was 1988.
And so it was probably from 1988 to, I don't know, '94 or so, where being on the internet was kind of this very nerdy, almost counter-cultural thing.
And this book would come up on, like, Usenet groups, and people would talk about it in the halls and stuff.
And I think it clearly was meant to convey this dystopian world.
But there were so many people who were like, this is so cool.
I want the world to be like this.
And I personally don't get that.
I have no interest whatsoever in ever being part of the metaverse or whatever we call it now.
But there was a huge swath of my nerd community at that time who was like, yes, this is cool.
I want this.
I want the implants in my brain.
I want something where I can connect directly into this virtual world.
And I-- You know, there were-- Even when you were 14, you didn't want to do that?
You're asking me?
No, not really.
No, I'm asking Mike.
No, I'm a little bit weird in that sense, in that I'm a very outdoorsy person.
But there was a lot of sense that this was kind of the inevitable future for us all in the late '80s and early '90s.
And even into like the-- Like if you go back and read Wired Magazine from the early days, from like mid '90s, there are all these articles about VR and having sex on the internet and the way that you could simulate taking an acid trip just by sticking probes in your brain and stuff like that.
And so there were a surprising number of people who did not read this book and say, oh, that's terrible.
We should avoid that.
But I think that probably when you read it now, you have a different reaction to it.
No, I really wish I'd been able to read it in the '80s.
I guess maybe it's unfair to ask, but why did those people want the world to be like this?
And I got to say, you referenced Wired in the '90s.
Wired in the '90s used to annoy the shit out of me because it just always like, here's some technology that's going to be fabulous and everyone's going to be amazed.
The book came out when I was around 20.
And so at the time I would have heard people talking about it, I would have been 24, 25.
Impressionable.
Significantly different.
Like that's a significantly different time, I think, to read this book.
It's reading it at 20.
I was 42 when I read this book.
I had not read it until we read it for the podcast.
Or I read half of it and gave up to be perfectly clear.
But yeah, reading it at 20 in the '80s would be a wildly different experience, I think.
I don't even think you could recreate that experience.
Like what it would-- because you don't know yet, right?
You don't know-- you don't know what's coming down the pike.
And that was a time of pretty swift technological change.
And I could easily see that being a much more viscerally-- I guess interesting experience than reading it now.
Because now it seems kind of quaint.
That's an interesting point.
Like you might have felt at the time, in the contemporary-- I don't know if that's the word I want.
At the time you might have felt like you're at the vanguard of stuff happening.
You're right on the edge.
Things are just about to be exciting and new and different.
And then this book comes out, which seems to-- Yeah.
I mean, there's a reason this genre began at that time.
Right?
There's a reason.
I think it's also significant the world in which I personally had the privilege to be in at that point in time.
Because I worked at one of the supercomputer centers.
So we had computers that nobody else had.
We ran networks that nobody else was on.
I worked with people who were like literal Stephen Falken people.
And so the people who were talking about this and projecting themselves into this world in the future were not the public at large.
These were people who had a lot of-- They were technophiles.
They were at the forefront of things.
Yeah.
They were people who had some investment in-- I met a lot of early people who were doing virtual reality stuff even in the late '80s.
Which was almost not even in the public consciousness at that point in time.
And I think this idea of super fast networks and virtual reality and the ability to simulate realities, because of the super fast computers that we had access to, was something that people in that world could imagine, that maybe somebody who wasn't in that world reading the book might not have been able to extrapolate quite so easily.
Well, it's funny you say that because William Gibson was not a computer guy.
Yeah, it's true too.
Which is, I think, one of the more fascinating things about it.
It's just pure imagination.
He doesn't have any idea if any of this stuff is possible, or interesting to anyone besides himself.
Although he was specifically hired to write the book.
So I guess he knew somebody was interested in it.
But that's one of the most interesting things about it.
It's like it had such a grip on people who were actually in that world when it's pure fantasy.
I mean, it's not really science fiction.
I mean, it is science fiction.
It is fantasy.
I mean, this is a magical power they have.
They stick a thing in their brain and suddenly they can manipulate everything everywhere.
And then the AI in this book is almost not even an AI anymore.
It's like a disembodied God kind of thing.
So I guess I'll ask that question again.
So who's the bad guy in Wargames?
Who's the bad guy in Neuromancer?
This is kind of a superficial question.
But there is AI in Neuromancer.
There's multiple AIs in Neuromancer.
There's Wintermute and then there's Neuromancer.
And that takes on this persona.
It takes on, it's like a character.
These are characters in the book.
But they're not like these other stories where they just are trying to do their own evil optimization problems.
They're just another character with their own agendas and their own motivations.
And it's almost like what I take out of this book as the warning-- the thing to be critical of, the thing to be aware of-- is a very surprising early commentary that if you go into this virtual reality world, it may become like a drug.
There's a lot of drugs in the book.
That was actually a comment I was going to make too.
Yeah, it's a lot more like a drug culture thing than a computer culture thing to me.
So yeah, so what's the danger there?
Where's the, what's the drug itself?
Power?
So I actually wanted to read this.
The section that really struck me is when he describes Case, the main character, jacking in. Jacking in! I was like, this is hilarious.
I really want to know if computer guys started saying that after this book came out.
I really want to know.
Did they say that?
Mike, did people say jacking in to the internet?
That sounds hilarious.
Never really, it never really took.
Thank you.
I think it was just a little bit too, too close to, you know, what everybody's thinking.
So people started saying it about some other activity, then.
So there's here, I got to read you a passage.
And in the bloodlit dark behind his eyes-- this is Case, sorry, Case is--
He's jacking in again for the first time after he had not been able to.
You have to say it like this.
You have to say jacking in.
You have to say it like that.
Jacking in.
And in the bloodlit--
And in the bloodlit dark behind his eyes, silver phosphenes boiled in from the edge of space, hypnagogic images jerking past like a film compiled of random frames, symbols, figures, faces, a blurred, fragmented mandala of visual information.
Please, he prayed.
Now a gray disc, the color of Chiba sky.
Now disc beginning to rotate faster, becoming a sphere of paler gray.
And it kind of goes on.
And there's this stream of consciousness, stream of lists of nouns and things.
And it's this really frenetic pace.
The writing is hard to not get sucked into.
And this image: silver phosphenes boiled in from the edge of space.
It's this hard-boiled, kind of Chandler-esque thing.
Yeah.
The writing is really fun to read, but it also sounds, it sounds like we're on the edge of what I'm comfortable with here.
Like if you think about really being there, like this guy's pretty drugged out.
And it also sounds like nothing.
Like it's almost impossible to picture what he is talking about.
Like you can, I think everybody, I think everybody would picture that differently when they read that.
And that's one of my issues with this book: it's so hard to put yourself where the people who are jacked in are.
Because especially if you've never conceived of virtual reality, I can only imagine.
I mean, the only thing you could analogize it with would be some sort of trip, right?
Which I guess is what he's trying to do.
Like, I don't know.
There's a lot going on there.
Is that why the plot's so hard to follow?
Because there's all this like weird terminology that- Well, it was also his first book.
So I don't know if he was very good yet.
To me, this is, this is like the prototype too of what I expect out of sci-fi.
I expect there to be a lot of terminology I don't understand and I have to sort of reach for and gradually grasp.
I expect it to be hard to understand.
And this book to me is like a canonical example of that.
Yeah.
And it definitely drops you right in.
You're underneath the TV sky and you're jacked in from the get-go, right?
Just figuring it out.
I don't really know what's going on though and like ultimately what they're trying to achieve.
What is achieved in the book, if anything?
I mean, Winter-- what's-his-toes?
Wintermute.
I always kind of want to call him Wintergreen, is what I'm saying.
Wintermute is trying to achieve some end goal, and you're never really clear on how much of the motivation of these characters is him manipulating them and how much of it is their own external motivations.
And then Neuromancer has his whole thing going on.
You don't even know about half the time.
You don't even know he's a thing until the end.
Yeah.
And then they joined up, I think, at the end-- there's this implication that there's going to be a whole lot more Wintermute shenanigans going forward.
And so I think the plot really is just: these guys are on some sort of noir heist, being puppeted along by Wintermute, and we're all powerless to resist, sort of.
There's like an inevitability to it, right?
Yeah.
The world is going to get dark and grotesque, and people are going to be strung out on drugs.
And you just can't stop it.
I have computer friends here, so I want to ask you a question.
So when people say the singularity, what does that mean?
What should I be looking for?
What should I be looking out for?
Like, what does that mean?
What is the singularity?
Hey, didn't this come up in one of your podcasts and someone referenced it?
Yeah, and we have no idea what we're talking about.
Hayley was like, well, there's the physics one and then there's the AI one, right?
You made that distinction.
We were talking about Blue, Red-- something's blue-- and what's the name of that book?
Oh, the Time War.
This Is How You Lose the Time War.
Yeah, I tell you, I record these things and then it's like it never happened.
I went back and read our Neuromancer transcript.
I was like, did I read this book?
I don't know.
Yeah, we were talking about it.
Because that's like a post-singularity culture or something, we think.
We don't know.
Yeah, so there's the physics, Stephen Hawking Singularity, but then there's this other idea of the AI singularity.
Mike, do you want to take that?
That's the one I want to know about.
Yeah, I think the way that I've heard it expressed is the sort of Raymond Kurzweil version where technological progress begins to happen so fast that we essentially lose control of it.
There's this point at which the artificial intelligence-- There's a point of no return, sort of?
Yeah, exactly.
The artificial intelligence has become smarter than we are.
We are essentially just like their pets at that point in time.
They can now learn and they can solve the problems that we haven't been able to solve.
They can invent things that we couldn't invent.
You reach this point where he kind of talks about it in-- Vinge sort of talks about it in "A Fire Upon the Deep," too, where his AIs are just-- they're so smart that they sort of literally transcend us.
They literally become a power.
Well, and the singularity as a concept is threatening because you imagine this optimization function will quickly not need us to be around.
So you might say to an AI or the AI might determine, hey, the threat to my existence is climate change and the easiest solution to climate change is to annihilate all the humans.
So I'll just launch all the nuclear weapons.
Humans gone, problem solved, and now we can continue to live on.
So that's the threat that people have.
If you launch a bunch of nuclear weapons, you're going to have a different climate problem.
Well, so problems like that, like the AI, if it exceeds our own intelligence or our capacity to control it, it may go out of control and it might quickly determine humans are unnecessary.
They're part of the problem-- they're the ones constantly telling me to solve these problems.
Well, let's just get rid of you guys and all those problems go away.
So it's solved.
Specifically, like this point where some AI turns against us, because we already have AI that's smarter than we are.
I mean, in a certain sense.
I don't know if everyone would agree with that.
What do you say, Mike?
Well, they're faster.
So Amy says, is it-- there's like a point of no return, which the singularity implies.
Yeah.
So like in the world that we live in, if an AI started doing something that we saw as being harmful, we could unplug it.
So even though these things could conceivably get smart, we can still control them because we can turn them off and restart them and try to figure out what they're doing.
But at the point of the singularity, we've kind of lost control of it.
And as human beings, we cease to sort of matter to the artificial intelligences.
And it's not that they would necessarily-- We just couldn't stop it.
Like that option of unplugging it doesn't exist anymore.
Yeah.
It's irreversible at that point.
And they don't necessarily want to kill us because they're evil.
It's just-- we just cease to matter to them.
OK.
Their primacy exceeds ours.
That could never happen.
Humans are on top of this.
Amy, you said that they already are smarter than us.
And I didn't mean to say, oh, I know more, so I'm dismissing that.
Oh, no.
I think you do know more.
I meant it more as a subjective impression.
I don't know if I agree with that.
And there are people, I think, who would agree with that, who know far more than I do.
And there are people who would disagree with that.
But why-- This depends on what you mean by smarter, right?
What did you mean by that?
Can we go in there a little bit?
Why did you say that?
So there's a couple of things, I think. A, it's just my impression from the news headlines that AI is going to take all of our jobs because it's better at all of them than we are.
And it can do things faster, better, stronger, higher, longer, whatever.
So in that sense, it's smarter.
It's smarter in the efficiency sense.
I think these things can only be as intelligent as we make them, I suppose.
Like, because that's the thing about AI: it can do a lot of things, but it can't do anything you can't do, because you have to be able to tell it what to do.
I don't know if I'm explaining that.
It's just my impression.
I'm, again, a layperson.
Yeah, no, I think that's pretty good, actually.
I think that's a good description.
And I think when people are afraid of a singularity, they're afraid of this threshold where, oh, whoops, we gave it too much capability and now it doesn't need us anymore.
It can break beyond the bounds that we've placed upon it.
But isn't that the exact thing we're trying to create behind AI?
Like, we're trying to create-- we are actively trying to create a neural, or a self-learning, or self-generating-- I don't know, whatever the word is.
I know I'm probably like nails on a chalkboard to you guys right now.
But we're trying to create something that can run itself, right?
That can learn so well that it doesn't need us to feed it information anymore or something.
Yeah, there's-- Like, that's the impression I get from the work behind AI, is that they're literally, actively trying to make this exact thing that everybody is saying is the threat.
I think the goal that a lot of people have is what they call artificial general intelligence.
So the ability of machines to-- like, people would probably classify things like ChatGPT as being-- it's not general intelligence, because it's still-- it seems very amazing sometimes, but it's still basically imitative.
And it's trained on human inputs.
But-- Right, we tell it what to think.
It's good at pattern recognition.
It can tell you these things, but it can't-- Right, but people are-- go out and run a simulation.
People are trying to go beyond that to the AGI idea at which point the computers can potentially figure out things that human beings in any category have not been able to figure out.
They can solve proofs that have eluded mathematicians, and they can figure out how to stop the glaciers from melting, and all this kind of stuff, if they wanted to.
Yeah, we're definitely not there yet, but you're correct that there's a large number of people who are trying to advance to that goal.
And there's-- to be fair, there's a lot of other people who are trying to slow the progress toward that goal.
Make them pause a bit.
Yeah, because we-- you know, we did this episode a few months back where we talked about-- there was a paper that came out where they had done a survey of some scientists who work in artificial intelligence.
And one of the questions was, you know, do you think artificial intelligence is an existential threat to humanity?
And 5% of the people surveyed said, yes, I think so.
And-- The other 95% didn't know what existential meant.
In my view, that's too high.
You know, anything that has a 5% chance of ending humanity, we should probably-- Seems a little high.
Let's make that zero.
Can we reduce that risk a little bit?
No, that's zero.
Although, I mean, to be fair-- Or at least get some insurance.
There's not a 0% chance that, like, cars will kill us in the next 100 years.
So I don't know.
Yeah.
I like that.
It's very reckless.
Whatever.
5%.
I might die in a car crash.
I mean, we do take risks every day.
I mean, it's not like you're-- it's not like we're going to go through life ever with no risk.
So, I mean, to a degree, like-- well, maybe-- there being a risk isn't, in and of itself, an issue.
Maybe the risk we're used to, though, is like: oh, you know, maybe if I consume this particular food or drink, then I may increase my risk of cancer.
That's just, like, individual risk.
A risk to humanity is annihilation.
Right.
Exactly.
Versus, oh, if I build this thing for my company that we're spending billions of dollars on, there's a chance I could wipe out everybody.
Like, that seems not commensurate, right?
Not a good-- Well, there is a part of me that thinks that the same-- OK, this is going to be-- this may be controversial.
Feel free to take this out if you think it's going to be too much.
But there's a part of me that thinks that the same thing that makes people respond to religion makes them respond to the idea of an AI.
So, like, this-- we kind of like the idea of there being something that is bigger than us, right?
And I think a lot of people might be moved by the idea of there being an AI that can be like the app in the story, right-- to bring it back around-- that can help us help ourselves, right, and keep us from our worst instincts and, like, be this benevolent dictator.
I don't know.
Something like that.
Not that I think God's a benevolent dictator, but, you know, it's-- there's something to that, I think, that keeps us striving for this idea that we can make this happen.
We can create this situation where we don't have to be in charge anymore.
Like, we don't have to be the ones that make the hard decisions anymore.
Like in War Games: nuclear war is pretty hard.
Let's just make the computer solve that one.
I mean, I think your human-in-the-loop thing is a pretty big deal.
I think it's the thing that people keep trying to do with technology.
It's the same thing behind labor-saving devices.
It's every-- it's how do we-- how do we take this out of our hands?
I think that's an interesting idea.
I think there probably are some people who also kind of want to be God.
You don't say.
I think a lot of the people who are most enthusiastic about AGI are people who, you know, want to eventually transfer their consciousness into some sort of indestructible robot.
Does his name rhyme with Enron?
This is a problem with sci-fi for me.
I read a lot of this stuff and I go, "Wait a minute.
How did somebody read this and get inspired to create a company to try to emulate this horrible world?"
I'm reading it like, "Why would you be inspired?"
In their minds, they're in charge of it, right?
I think there are people who think I could be the ruler of that world, right?
I would not be susceptible to this thing.
I would be on top of this thing.
So there's a couple different ways people see religion.
Let me just be real clear.
I was reading Kafka's "In the Penal Colony" and being like, "Yeah, I'd really like to be the person who made that machine that inscribes the prisoners with their sentences."
For everything, there's a person who will assimilate the thing to their own ends, right?
So it's no different.
Well, I did the horrible crime I always do where I have no idea how to end this.
I always do that.
I don't know what to say at the end.
We always just go, "That's it.
That's all we got.
Okay, bye."
Yeah, maybe that's what we should do.
Well, I don't want to keep you forever.
It's great fun having you on here.
Mike, what else should we say?
Anything we can ask Amy?
Then we could just edit it and put it in there.
If you could recommend some books that you've read over the course of doing the podcast, which would be your top two or three?
What books have I read?
Let me look at my podcast episode list and I can tell you.
Did you...
I can't tell if you really love...
A lot of times you're like, "Well, I didn't hate it."
I'm like, "Okay, is that a record?"
They should read this or not.
One thing we are very conscious about in our podcast is that we're coming at it from a critical perspective.
A lot of the things we're reading are sort of the lions of science fiction, and we're coming at them from a new, "does this hold up?" kind of perspective.
I love that.
That's what I love about the show.
And it's funny, but I love that.
I love that you bring that perspective.
But we think that if a thing is good, it will hold up to this.
It'll hold up over time.
It may have some things that date it, but it won't completely fall apart under scrutiny.
So as a result, what ends up happening, I think, is sometimes we will criticize a thing and then at the end it'll be like, "But listen, just because we said all these critical things, it doesn't mean there aren't good things going on in the book too."
So a lot of things I'll have a hard time with some parts of it, but I'll end up really liking the book as a whole.
So it can come off, I think, sometimes a little mealy-mouthed.
But where do we stand?
In the end?
I don't know about that.
You know what?
One thing I really love about the tone that you strike in your show?
When people drag stuff, they often come at it from this level of superiority.
"I'm looking down on this content that is not up to the standard."
We love books.
You don't ever do that and it's really nice.
You just sort of approach them in a very genuine and authentic way.
You're not trying to be superior.
You're just trying to really take them on their own merits.
And it's great.
I love it when you don't sound superior because I bet you a lot of the time we are feeling pretty superior.
I've never read any Heinlein.
I have a friend who loves Robert Heinlein, and you and your co-hosts were pretty, not dismissive, but pretty savage with Heinlein.
And I was like, "Oh, I guess I don't need to read that stuff.
I'll just keep reading these other things I want to read."
He's tough because he's one of those older writers who in a lot of ways, he was pretty visionary and his ideas were really interesting.
He has interesting ideas.
And he has, I mean, there's a lot to attract in those books, I think.
Right?
But he is such a man of his time that if you're reading it now, especially as a woman reading these books, it's just like, "How is he talking?
What is he doing?"
But The Moon Is a Harsh Mistress had some interesting commentary about communism.
It has some interesting commentary about revolution, about AI-- like, it had a lot in it that was worth reading.
I don't-- I would never say don't go read any Heinlein, except for Stranger in a Strange Land.
That one was...
That's the only one I thought I was supposed to read.
I've never read it.
Woof.
That one is tough.
Well, and I liked-- so when I read it, I liked Neuromancer.
I felt confused by it.
I did think the Rastafarians were pretty ridiculous and cringy.
I thought the Rastafarians were pretty cringy.
But you know, Laurie makes-- I think it's Laurie who makes a comment on your show, when you're reviewing Neuromancer, that this book is peppered with breasts.
And that becomes the name of the episode.
And I totally had this moment of like, "Oh shoot, I didn't notice that.
I was blind to that."
I'll tell you, I was blind to a lot of this stuff before we started doing the podcast.
Like over the course of doing the podcast, you end up reading things pretty differently than you did when you started, right?
Because we're all reading the same stuff and being taught by the same teachers and doing all the same things.
So it's not like we have any magic capabilities here.
Like, we have learned to read things this way, right?
Which I think is what we're trying to encourage people to do: learn to read things in a slightly different way.
And I'm scrolling through our episodes to tell you what to read.
And honestly, the thing that keeps coming to my mind is I would read 2001: A Space Odyssey if you haven't.
And we're reading The Dispossessed right now.
And we did a show about The Left Hand of Darkness.
So anytime I get a chance to tell people to read Ursula K. Le Guin, I say: go read Ursula K. Le Guin.
I always mean to read more Ursula K. Le Guin.
Unfortunately, Octavia Butler never won a lot of Hugos, for reasons.
But I would always say, go read some Octavia Butler.
That woman was a-- she's a prophet.
Hey, I love Parable of the Sower.
Oh my God, she's so good.
We have an episode on Bloodchild, which is a short story.
So if you want to start somewhere, that's not a bad place to start.
When's the Lonesome Dove episode coming, Amy?
Amy's my Lonesome Dove friend, Mike.
I've been trying to get Lonesome Dove on our docket for a while, but I can't get people to do it.
I had absolutely no-- Hayley's a fellow Lonesome Dove lover, so.
I had no clue I was going to read this book and it was going to just come out of space and bonk me on the head and be one of my favorite books ever.
I had no clue at this age I was going to find that.
That's so good.
Well, it has been- Mike, go read it tomorrow.
He read it.
I read it.
He read it.
Oh, he did.
Yeah, I read it.
I talked to you about it.
All right, so our next episode is going to be about technology and Lonesome Dove, right?
I'm up for it.
If you come back, we would love to have you back.
It honestly has been such a phenomenal privilege to have you on the show.
Yeah, this has been great.
Thank you so much.
Appreciate it so much.
Thank you.
And hopefully me being super nervous to have a real podcaster on the show didn't come through too clearly.
If you have a podcast, you're a real podcaster.
That's my opinion on the subject.
Well, this has been Picture Me Coding with Erik Aker and Mike Mull, with special guest Amy Salley.
And if you have not heard Amy's podcast, Hugo Girl, I strongly recommend you go out and listen to a few episodes.
It's a great show and I enjoy it.
Thanks so much, Amy.
Thanks, guys.
It's great.
Thanks, Mike.
We'll see you later.
Bye-bye.
Bye-bye. [MUSIC]