Picture Me Coding

Dijkstra's Diss Track

Erik Aker and Mike Mull Season 1 Episode 32

This is part II of our Standing on the Shoulders of Giants episode about Edsger Dijkstra, the greatest philosopher of our field.

Instead of using social media, Dijkstra would dash off hot takes on his typewriter or with his pen, and in this episode we cover several of his opinions and essays. You'll hear him say things like "Java sucks" and that it's "cowardly to call our errors 'bugs'."

Here are some links to the material we discussed:

[MUSIC PLAYING] Hello, welcome to Picture Me Coding with Erik Aker and Mike Mull.

Mike, last week we were talking about Dijkstra.

Here we are again.

I want to jump right into it.

Our Dijkstra conversation has gone so long because we're trying to cover so much Dijkstra material.

Now I wanted to continue talking about Dijkstra.

Are you up for continuing to talk about Dijkstra with me?

I could talk about Dijkstra all day long.

Well, I don't have all day.

I've got to go do some yard work after this.

Here's what I want to do.

I would like to play for you, as I did last time, a Dijkstra quote.

Now, this one has liberal use of Dijkstra's pauses.

So you got to put your patient listening hat on.

You have that hat much more than I do.

One final request to you, having seen how we can convince ourselves that programs indeed are totally correct.

Please realize that if you have written a program and it's not correct, that it's a little bit cowardly to say that your program had a bug.

To call errors "bugs" is a very primitive, animistic attitude.

It suggests that the bug has a life of itself.

That you're not totally responsible for it.

The mean little bug crept in behind your back at the moment you were not looking.

Well, this is not true.

If the program is not correct, you made an error.

And my request, my prayer, so to speak, is that you stop using the term "bug" for program errors, but call them what they are-- errors.

Unless we change our language and call an error an error, programming and computing science have not yet matured.

Thank you for your attention.

What do you think?

You've been chastised by Dijkstra for calling your errors "bugs."

I agree with his argument.

I think I will probably continue to use the word "bug" just because it would be hard to change at this point.

What do you think his argument is there?

How would you characterize his argument?

Well, I guess what it brings to mind when he was talking about it is at least the history of the term "bug" supposedly goes back to literal bugs crawling into relays and computers.

And in that context, you can think of the bug as being something that was independent of the design of the computer.

It was literally something that, as he says, had a life of its own.

Animistic.

Right.

So I agree with his argument that there's a tendency in the programming world to think of bugs as something that we couldn't have avoided and that just showed up in our code, and we had nothing to do with it.

So at least logically, I like his argument.

I just think it's going to be pretty hard to actually change people's use of language at this point.

Now, I have to admit that sometimes when I say, there's a bug in the program at work, it feels like I'm getting away with something a little bit.

Like I'm avoiding a little too much responsibility.

Like there is a mistake in there, and I wrote it in.

It is true that sometimes the mistake is a communication error.

Somebody tried to ask-- somebody asked me to do a thing.

I implemented exactly what they asked me, and then they come back and go, oh, I'm sorry.

I said the wrong thing, or I communicated the request incorrectly.

That happens sometimes too.

But it is an error inserted in there.

There's this perspective from Dijkstra, which is very negative about inserting personification into programming.

And he blames von Neumann for this in his essay On the Cruelty of Really Teaching Computing Science.

He's saying, we should not describe programs by saying, this guy wants this, and you send this info to that guy over there.

He finds this really awfully imprecise language.

He wants to use a more formal language to describe what programs are capable of.

He wants to talk about things like domain and ranges for functions.

What are the inputs and outputs?

What are the states-- what's the state space for a function and a computation?

He wants to really drastically restrict how we talk about what our programs do in order to aid our ability to reason about them.

This theme seems to come up again and again in his papers.

And personifying, adding personification, or talking about bugs is a way of fuzzily describing what happens in the program as if it exists outside of us.

I don't know this for sure, but I think that maybe-- first of all, he did have a mathematical and physics background, so I think it was probably somewhat preordained that he was going to think about programs that way.

But I also think maybe at the time that he was working, the problems that he was working on were all pretty abstract and weighty.

It's hard for me now to imagine somebody who's spending three hours a day writing CSS is going to apply a lot of mathematical reasoning to that.

It seems like much more a matter of craft and taste than formal reasoning.

Or I'm going to go to my PostgreSQL database and I'm going to retrieve a string which I'm going to show on a website.

Oh, whoops, the string was not encoded using UTF-8.

Yeah.

Yeah, I guess that's what we would have called accidental complexity.

You mentioned last week when we were talking about structured programming.

I mentioned Haskell and Rust as expression-based languages.

All expressions evaluate to a value.

There's no way to get outside of that.

And as a result, you're coerced into writing in a way that more strictly adheres to the ideas of structured programming.
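The contrast Erik is describing can be sketched even in Python, which is statement-oriented but does have a conditional expression; the function names here are made up for illustration:

```python
# Statement-oriented style: 'parity' is assigned as a side effect of
# control flow, and the 'if' itself produces no value.
def parity_stmt(n):
    if n % 2 == 0:
        parity = "even"
    else:
        parity = "odd"
    return parity

# Expression-oriented style: the conditional itself evaluates to a
# value, closer in spirit to how 'if' works in Haskell or Rust,
# where every expression must produce something.
def parity_expr(n):
    return "even" if n % 2 == 0 else "odd"
```

In an expression-based language there is no statement form to fall back on, which is what pushes you toward the single-entry, single-exit shapes that structured programming asks for.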

You said, well, it's kind of like the tools that we use.

They really-- they change how we think about solving problems.

And Dijkstra says exactly that.

He says it in "How Do We Tell Truths that Might Hurt?" from 1975.

He writes, the tools that we use have a profound and devious influence on our thinking habits and therefore on our thinking abilities.

And Dijkstra had been offering hot takes all the way back since the '60s.

He's like a total profuse issuer of hot takes.

In that same essay, he writes, "The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offense."

Yeah, that's very-- Dijkstran, is that how you'd describe that?

Maybe.

He doesn't sound like he was a mean dude, but he said some mean things occasionally.

So-- Yeah.

Good thing he's not our boss.

A little bit of a worry of the toxic work environment there.

One thing he writes in this paper that really appeals to me, as you can imagine, coming from more of a literature and philosophy background.

He says, "Besides a mathematical inclination, an exceptionally good mastery of one's native tongue is the most vital asset of a competent programmer."

Now, why do you think he would say that?

That's a good question.

I know that he, in other places, says that he is very against the idea of programming in natural languages.

He thinks that's a ridiculous idea.

I think we talked about this in the past on a previous episode.

Yeah, the Wittgenstein episode.

The essay is called On the Foolishness of Natural Language Programming.

Yeah, so why is he saying here an exceptionally good mastery of one's native tongue is the most vital asset outside of mathematics for a competent programmer?

What I would assume he means by that is that being able to describe things precisely and unambiguously in your native language is probably an asset when it comes to trying to put those things into code as well.

Do you think that's true?

It's not clear to me that it's true in terms of being able to write code.

It does seem like it's probably true in terms of being able to communicate with people who want you to write code and need to describe what they want it to do.

I know that for me in particular, one of my habits over the years is that I ask a lot of questions of people when they're asking me to do things and probably to the point that most people would be a little bit bashful about asking the questions because they seem almost dumb.

The questions you're asking, they seem almost dumb.

The questions I'm asking, yeah, I ask very basic questions.

And my experience has been that very often when I ask those questions, there are other people wondering the same things.

But they tend not to want to ask them because they seem so basic.

But yeah, I don't know.

I mean, we've had this discussion in kind of a different mode before where it seems to us like there are people who really like programming because they really like language.

And there are other people who like programming because they really like machines.

And I don't know.

I think maybe there is some connection between your facility with your natural language and your ability to write elegant and interesting code.

It's hard to dismiss when Dijkstra says that.

This is one of those statements where I think it's probably true, but it makes me feel a lot better about myself because I'm OK with natural language.

And I was trained as a writer.

And so I'm reluctant to be like, yes, I'm going to tell everybody this because it's really just a therapeutic statement from this incredibly inspiring figure, Dijkstra.

He also says in this essay something that is extremely hard to disagree with, simplicity is a prerequisite for reliability.

So of course, we talked a lot about complexity.

Dijkstra is very interested in complexity.

And he's interested in it because he wants things to be correct and he wants them to be reliable.

That one's hard to argue with, so I won't make you.

Yeah, I think almost every engineer agrees with that.

Although I shared that quote with you recently from-- I've been rereading this book called The Eighth Day of Creation, which is about the history of molecular biology.

Very long, but very good book.

And there's a part at the end of that where he's talking about how people have used this phrase less is more quite often, but he makes this argument that less is sometimes disastrously less.

And I think there is a little bit of-- in the programming world, there's a little bit of that as well.

There's this Einstein quote too about things should be as simple as possible, but no simpler.

Hemingway, too-- it's hard to achieve that simplicity.

It takes extreme effort sometimes.

Yeah, and I think there is a potential for making things so simple that they are no longer actually correct.

Over simplifying.

You brought up the essay On the Cruelty of Really Teaching Computing Science.

This one's a little longer.

Is this an essay or a letter?

I read this.

I didn't quite get-- who's the audience for this?

Yeah, I think it's sort of an essay/letter.

This is the one where the PDF that I found for it is actually handwritten.

Handwritten, right.

This is 1988.

Dijkstra does not want to use a computer in 1988.

Yeah, I think sort of his overall thesis here is that computer science is hard, and you have to learn hard things to master it.

And we shouldn't be unrealistic about how we need to go about teaching it and that it's not going to necessarily be fun.

Well, I want to ask, how does he think we should go about teaching it?

But first, the things that emerged in this essay for me that I noticed, he's again talking about reasoning.

And he's again saying, this is the type of reasoning we should avoid.

One, reasoning by analogy.

Two, personification.

Not useful.

Three, operational reasoning.

Now, that one seemed a little more controversial to me.

Operational reasoning, if I understand correctly, is just simply saying, first this happens, then that happens.

And trying to hold in my brain all of the operations in order.

And Dijkstra is like, yeah, that's not going to do it either.

There's another one of his funny/mean quotes in this work that I enjoy, which I'm going to read it.

A number of these phenomena have been bundled under the name software engineering.

As economics is known as the miserable science, software engineering should be known as the doomed discipline.

Doomed because it cannot even approach its goal since its goal is self-contradictory.

Software engineering, of course, presents itself as another worthy cause, but that is eye-wash.

If you carefully read its literature and analyze what its devotees actually do, you will discover that software engineering has accepted as its charter, "How to program if you cannot."

[LAUGHTER] I just really enjoy that paragraph.

I don't know that I agree with it.

Well, characterize it for me again.

What do you think he's saying there?

So I think what he's trying to get at is that programming is an act of reasoning.

It's an act of formal reasoning.

And you can't really solve that with methodology.

You are going to eventually have to think about what you are doing.

There's not any series of steps or tools that are going to eliminate the reasoning process.

Wait a minute.

You're saying I can't just open the Gang of Four Design Patterns book, look in the index for my answer, pull it out, and just type that out, and I'm good?

Well, that's what he's saying.

Yeah, that's what he's saying.

I don't have the lines in front of me.

He was not a fan.

There's this quote that you say is probably apocryphal about him saying that object-oriented programming is such a bad idea that it could only have come out of California.

Which is potentially offensive to us in two different ways.

It's spoken like a Dutch Texan, I guess.

Yeah.

I think you wouldn't like it.

I don't know his precise reasons other than the obvious things about, you know, state.

And apparently Dijkstra did say object-oriented programs are offered as alternatives to correct ones, which is a pretty good line.

He wasn't crazy about the idea of trying to mimic physical things in code.

Well, I think it's also, as you say, state is one and related to state side effects.

There's a famous letter that he wrote in 2001 when he was no longer teaching.

He had retired from the University of Texas and in their computer science curriculum, they were switching from using Haskell for introductory programming courses to Java.

And he writes this letter, which has been passed around a lot in the Haskell world, where he's like, this is horrible.

You're inflicting Java on these undergraduates.

Terrible.

And you could see these themes that exist throughout his career that go all the way back to structured programming.

He's interested in thinking about programs as a kind of mathematics.

If I write a function, it's easier to reason about it if I have a known domain and range for the function.

If I know the possible inputs for it and the possible outputs for it, my ability to kind of draw a mental boundary around it is greatly improved.

And this relates to his idea of talking about state spaces, not just his idea.

What's the state space for the program?

What are all the possible states that the program could allow that it could get into?

This gets difficult when you're thinking about writing a parser, for example.

I've got an infinite possible number of strings I could take.

So state space is massive, right?

But even so, if I'm taking input and producing output, it improves my ability to reason about it.
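That boundary-drawing idea can be sketched in Python with type hints and a deliberately tiny state space; the traffic-light example here is ours, not Dijkstra's:

```python
from enum import Enum

class Light(Enum):
    RED = "red"
    YELLOW = "yellow"
    GREEN = "green"

# The domain and range are explicit in the signature: Light -> Light.
# The whole state space is three values, so every possible input can
# be enumerated and checked, and nothing outside the mapping can occur.
def next_light(current: Light) -> Light:
    transitions = {
        Light.GREEN: Light.YELLOW,
        Light.YELLOW: Light.RED,
        Light.RED: Light.GREEN,
    }
    return transitions[current]
```

A parser's state space is vastly larger, as Erik says, but the principle is the same: once you know the set of possible inputs and outputs, you can reason about the function as a closed object.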

This is a functional programming, very Haskell perspective.

He writes, "Functional programs are much more readily appreciated as mathematical objects than imperative ones."

And then he talks again about tooling.

He says, "It is not only the violin that shapes the violinist.

We are all shaped by the tools we train ourselves to use.

And in this respect, programming languages have a devious influence.

They shape our thinking habits."

Sometimes I wonder if this is true.

Does the language you use shape how you solve programming problems?

I think it clearly does.

I think you and I have both experienced-- you more so than I-- this thing where, having worked with Haskell and Rust, you start thinking about programs in languages like Python in that way as well.

So I think it definitely -- you know, the thing that you're used to and the thing that you've had success with probably carries over to things that are not exactly the same.

I do complain loudly when I review a pull request that has a huge amount of side effects and very few inputs and outputs.

When you've got functions that are not returning things, I'm like, "What's going on here?"

I've got to go somewhere else to try to understand what this is doing.

Please help me out.
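A hypothetical sketch of the contrast Erik is complaining about, with invented names: the first function mutates shared state and returns nothing, so a reviewer has to go hunt elsewhere to see what it did; the second makes its inputs and outputs explicit:

```python
# Side-effecting style: to know what changed, you must go read
# 'totals' somewhere else in the codebase.
totals = {}

def record_sale(item, amount):
    totals[item] = totals.get(item, 0) + amount  # mutates shared state, returns nothing

# Input/output style: everything the function does is visible in its
# parameters and its return value; the caller's data is left untouched.
def with_sale(totals, item, amount):
    updated = dict(totals)
    updated[item] = updated.get(item, 0) + amount
    return updated
```

The second version costs a copy, but it gives you exactly the known-inputs, known-outputs boundary that makes the function easy to reason about in isolation.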

In the lecture that I pulled those two quotes from, he talks about this example program where he's got these red and blue dots and he's connecting them with lines, and he's saying we want to reason about an algorithm where we know that we can draw lines connecting every red and blue dot with no intersecting lines.

And he says there's a finite state space in this program.

It has n factorial states, n being the number of dots.

We can ensure termination.

If execution of the algorithm never visits the same state again, it will terminate.

And that's this surety that I like.

Dijkstra's surety is about we use mathematical reasoning and we can be sure that we are not violating invariants.

We're speaking in terms of invariants.

We're speaking in terms of state spaces, domain and range.

And these are much more specific things than best practices and updating some state variable as a side effect over here.

We are enforcing a kind of mathematical rigor on what we're doing and it will greatly improve our ability to know that our program is correct, that it's reliable.
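A toy illustration of that termination-by-invariant style of reasoning (our example, not Dijkstra's dots): Euclid's algorithm halts because its state is drawn from a finite set and strictly shrinks on every step, so no state can ever be revisited.

```python
# A miniature of Dijkstra's termination argument: the loop state (a, b)
# is drawn from a finite set (non-negative integers bounded by the
# inputs), and b strictly decreases on every iteration, so the loop
# can never revisit a state and must halt.
def gcd(a, b):
    while b != 0:
        a, b = b, a % b  # invariant: gcd(a, b) is unchanged by this step
    return a
```

The invariant (the gcd of the pair never changes) plus the termination argument (b strictly decreases) together prove the program correct, without ever tracing an execution step by step.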

In this letter where he talks about Haskell and Java, he again rejects operational reasoning, and he says Java allows and kind of encourages thinking, do this, then that, do this, then that.

But I do wonder all languages do that to some degree.

He's really taking aim at us software engineers.

He hates our title and he hates that we spend days saying well first I'll do this and then that and then we'll produce that.

Yeah, I especially find that interesting in Java since my experience with Java has always been about trying to keep threads.

That doesn't sound very structured, my friend.

Yeah, there was something I was reading earlier today-- I can't remember where I saw it-- but there was something about how it's silly to make computers reason like humans, which at the time he said it probably seemed like a completely plausible idea and now seems a little bit old-fashioned.

So he might have been the guy we used to think about how we're writing clean code, how we're writing code we can comprehend.

And in the future there may be a lot of code which is black box.

So even Dijkstra's algorithm-- we'll just get a computer to write it.

It'll dump it out and then I'll run that.

So he has this line responding to somebody asking what the impact of research in artificial intelligence will be.

And he says, can you research something that is not science?

I feel that the effort to use machines to try to mimic human reasoning is both foolish and dangerous.

It is foolish because if you look at human reasoning as is, it is pretty lousy.

Even the most trained mathematicians are amateur thinkers.

Instead of trying to imitate what we are good at, I think it is much more fascinating to investigate what we are poor at.

It is kind of maybe hard to reconcile that with all that is happening right now in the AI space.

It is a lot more productive than you would imagine if you were predicting purely based on Dijkstra's perspective.

I guess what he maybe missed was if you can make computers think like humans but make them do it much faster and even better than humans do, then it is probably a worthwhile thing.

It might be dangerous as he said.

It would be interesting to go back and ask Dijkstra, hey Dijkstra, imagine I have a program that can predict the next 100 words in any language.

I am not telling you that this is a way of thinking.

Is that a useful program?

What would he say?

Yeah, I would be interested to hear his entire perspective on things like deep learning.

I would have to think that something like transformer models would just seem like a horror show to him.

Yeah, an abomination.

Yeah, I can imagine that too because you can't reason about it.

Yeah, it is like a huge, extremely complex black box that you really can't determine all the consequences.

You forfeit your ability to describe invariants, probably, with that stuff.

Yeah.

All right, so this has been part two of our talk about Edsger Dijkstra.

There is an S in the first name: Edsger Dijkstra.

I learned a lot by looking at Dijkstra's career and his work, and I still find it inspiring.

And I would recommend people who haven't read some of the stuff we talked about, go out and read it.

Read the structured programming notes-- worth the read, honestly, worth the read.

Read Dijkstra's algorithm paper, read the mutual exclusion paper.

What do you think, Mike?

What would you have people read listening to this?

Well, this sounds a little slew, but I think I'd probably have people start with something like Wikipedia.

The bio.

I just think, I think it's important to kind of understand his place in the early years of computer science because, you know, he had, he had lots of novel ideas.

Dijkstra's algorithm is still something that we're using today.
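For listeners who haven't seen it, here is a minimal sketch of Dijkstra's shortest-path algorithm using a priority queue; the graph format is an assumption for illustration (adjacency lists of (neighbor, weight) pairs, with weights assumed non-negative, as the algorithm requires):

```python
import heapq

def dijkstra(graph, source):
    # dist holds the best-known distance from source to each node.
    dist = {source: 0}
    queue = [(0, source)]  # (distance-so-far, node)
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, weight in graph.get(node, []):
            candidate = d + weight
            if candidate < dist.get(neighbor, float("inf")):
                dist[neighbor] = candidate
                heapq.heappush(queue, (candidate, neighbor))
    return dist
```

The correctness argument is pure Dijkstra: the invariant is that when a node is popped with its current best distance, that distance is final, because all remaining queue entries are at least as large and all edge weights are non-negative.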

His early work on concurrency is still something that's relevant today.

And I think it's really important to understand all this stuff on structured programming.

But, you know, as you say, he's also kind of like the philosopher of our field.

And if you look at his, his biography and his history, you'll see also that he had a big influence on the development of programming and of computer engineering at sort of a cultural level.

And he was in meetings where they were talking about the crisis in software engineering and how things were too complex.

And we needed to figure out ways to make software more manageable.

So, you know, he wasn't just sitting on the sidelines saying mean things about object-oriented programming.

He was really involved in, you know, developing the field.

And it's also kind of interesting that he wasn't one of these guys who went to MIT or Stanford.

And so he didn't come out of the same sort of pedigree that we think of for early computer scientists.

All right.

Thanks so much, Mike.

Thanks for talking to me today about Dijkstra's work.

I hope that people are, if they haven't read this stuff, are inspired to go out and check it out.

Definitely.

We'll see you next week, Mike.

Have a good weekend.

Bye-bye.

See you next time.

[MUSIC PLAYING]
