Picture Me Coding

The Ethical Engineer

March 20, 2024 · Season 2, Episode 28
Erik Aker and Mike Mull

In our industry there are myriad examples of companies behaving unethically, so what power do we have as software engineers to uphold ethical principles and resist bad behavior? In this episode Mike and Erik look at adopting a software engineering code of ethics, relying primarily on the ACM's Code of Ethics as a foundational set of guidelines for ethical and professional behavior.

Also, we're not really known for this but it's worth mentioning that this episode may include some hot takes? I guess I would say that these takes are at least a little warmer than usual.

Mike's music choice this week is The Past is Still Alive by Hurray For The Riff Raff, while Erik talked about his travails with DNS. 

Oh, by the way, if you want to email us, the email address is working now(!) and we can be reached over at podcast@picturemecoding.com


Transcript


[MUSIC] Hello, welcome to Picture Me Coding with Erik Aker and Mike Mull.

Hello, Mike.

How are you doing?

>> Hey there, I'm doing pretty well.

>> How was the camping last week?

Camping was good?

>> It was excellent, yeah.

I went to Pinnacles National Park, which is the last national park in California that I had not yet visited, and had a pretty good experience.

Saw some California condors, which was kind of my goal.

And so I was pretty excited about that.

>> That's pretty cool.

That's a cool place.

They got the caves there and some bats.

>> They got caves.

They got quite a few interesting trees.

And it's a small park, but very beautiful, especially in the spring when it's nice and green.

>> And they have a swimming pool.

Great for a national park to have a swimming pool.

>> They do have a swimming pool at the campground though it was not open when I was there.

>> That's a bummer.

That's a bummer.

Did you listen to music this week?

>> I'm always listening to music.

The album I was going to talk about this week is the latest album from "Hurray for the Riff Raff," which I think is also one of the great band names of all time.

I think the band is really mostly this one person, Alynda Segarra.

They've had this kind of interesting background of, you know, hopping trains and living on the street and that kind of stuff.

>> Crust punks, gutter punks.

>> Sort of like that, yeah.

Anyway, they have an album out called "The Past is Still Alive," which I've been listening to a lot.

It's one of those albums that kind of sneaks up on you.

It's sort of a fairly conventional folk rock, but every time you listen to a song, you hear something and you're like, "Wait, what did they say?"

So, yeah, it's a good album.

I've been listening to it over and over.

I think you would like it.

It's not extreme metal or weird classical stuff, so.

>> I haven't been listening to music this week.

I guess I'll tell you a story instead.

So you were on vacation this week and I thought, "Wow, you know, I'll go and I'll check our email address for our podcast."

The last few episodes, we've ended and you've said, "You know, please send us an email if you have comments or questions about this topic."

And I thought, "Wow, we haven't gotten an email yet."

That's surprising with the hundreds of people listening to us every week.

Not a huge audience.

Big enough that one of them would probably want to reach out.

And.

>> At least to complain.

>> At least to complain, yeah.

Tell us.

We don't know what we're talking about.

So I thought that's a little weird.

So I did a typical debugging thing and I thought, "Wow, I wonder if the email doesn't work."

So I sent an email and I got this response within a few seconds.

And the response was, "Address not found.

Your message wasn't delivered to podcast@picturemecoding.com because the domain picturemecoding.com couldn't be found.

Check for typos or unnecessary spaces and try again."

I was like, "Well, that's weird.

The website works."

But then I looked below and it says, "DNS error: DNS type 'MX' lookup of picturemecoding.com responded with code NOERROR. DNS type 'MX' lookup of picturemecoding.com has no relevant answers."

And then it went through IPv6 and A records.

And it was like, you don't have any of these.

I was like, "What?

That's weird."

And so then I went and I look and we actually are using Cloudflare for DNS.

I like to use Cloudflare for DNS.

And I did a thing where, when I set up the domain, I created a CNAME for our website.

And you know what?

I never made any MX records.

So those DNS records, they didn't even exist.

So somebody tried to send us an email, to guys who sort of sound like they know what they're talking about some of the time, and they got this message, which was like, "That domain doesn't exist."

Those guys don't know what they're talking about.

Obviously, they have failed to configure DNS for, let's say, the third time in their careers.

It's always DNS, right?

I think the worst part of this case is I didn't make the...

I didn't do the wrong record.

It's not like I got bitten by caching, DNS caching.

I just flat out did not make MX records, right?

It didn't occur to me to make a record for the email address.

It works now.

If someone sends this email right now, it actually works.

That's my story for you, sweet.

Yeah, I guess it's another one for the embarrassment bucket.

Whoops.

Sorry.

If you have sent us an email and you got the messages that our DNS records are not present, that's on me.

I apologize.

I have fixed that.

They exist now.

So send us all the email you want.

I think.
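As an aside for anyone who wants to check this sort of thing themselves, here's a minimal sketch of an MX lookup in Python. It assumes the third-party dnspython package is installed (pip install dnspython), and the domain below is just ours as an example.

    # Minimal sketch: check whether a domain has MX records (assumes dnspython is installed).
    import dns.resolver

    def check_mx(domain: str) -> None:
        try:
            answers = dns.resolver.resolve(domain, "MX")
        except dns.resolver.NoAnswer:
            # The domain resolves, but there are no MX records -- mail to it will bounce.
            print(f"No MX records for {domain}; email will bounce.")
            return
        except dns.resolver.NXDOMAIN:
            print(f"{domain} does not exist at all.")
            return
        for record in sorted(answers, key=lambda r: r.preference):
            print(f"{domain} MX {record.preference} {record.exchange}")

    check_mx("picturemecoding.com")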

Mike, this week I wanted to talk about ethics, ethics in our field.

I feel like this is a conversation that I do not hear come up much.

It does come up, but maybe not amongst practitioners like us.

I feel like I don't hear it as much.

What ethics should we uphold and how should we go about upholding them?

Now, here's how I want to start with you.

I say let's just take it for granted that we agree we would like to act ethically.

We are responsible citizens.

We want to code with our conscience, maybe.

We want to be ethical contributors to our society.

So I think the next question is, well, what code of ethics should we adopt?

And how do we figure out what behavior to pursue and what to avoid?

What do you think?

Ethics, you know, if you're talking to me about ethics today, are we going to get mad at each other?

I think that's a possibility.

Yeah.

I guess we probably see eye to eye on most of this stuff.

I would agree that's not really talked about much.

Seems like it used to be talked about more.

I used to hear a lot about this; for instance, Computer Professionals for Social Responsibility used to be a bigger thing, I think.

Is that a band?

Yeah.

No, I've got a good band name.

But I guess my other question about this is I'm okay with taking it for granted that we should act ethically, but I think one question that comes up for me is what are we accomplishing by acting ethically?

So the code of ethics that we looked at, the ACM code, I think the tone I get from that is that it's our responsibility to act ethically because it's to the benefit of society and to the benefit of the engineering profession.

But I'm left wondering what's in it for me.

See, that gets to the part of the discussion I think.

I was hoping to just leapfrog over that.

There's famous texts about ethics, Aristotle, Kant, Mill.

I never read J.S. Mill, John Stuart Mill.

The Nicomachean Ethics is an Aristotle text that's assigned a lot in college.

Kant and the categorical imperative, these are pretty famous works.

And there's this attempt to justify ethical behavior to ground it in for Kant, something like reason.

But I think it's almost easier for us to just be like, yeah, let's not talk about that.

Let's just say maybe we are imbued with a conscience.

Right?

What's in it for you?

Don't feel bad at the end of the day and drown your sorrows in whiskey.

Feel like, yeah, I contributed.

I did something okay.

I have this vague recollection that Aristotle's argument was that if you act virtuously, it's going to make your life better.

And I sort of think that's true, but I don't know that I could make that argument to another person who disagreed with me.

Let's say that you go to a holiday party at the end of the year and you talk to some people who you like and respect.

Maybe acquaintances, maybe friends, maybe family members.

And they're saying, so Mike, how did your work go this year?

And you got two options.

You can feel really bad or you can feel like just sort of like, okay, you get to choose.

That's the goal of ethics.

What's in it for you?

Don't feel bad.

Okay, I'll buy that.

All right.

So let's move on.

We have given some ground, some justification for having an ethos.

We're going to try to behave ethically.

The questions now are, how do we do that?

So you mentioned the ACM, the Association for Computing Machinery.

They have a code of ethics that they published.

And I look at it, I think, okay, this is as good as any.

Let's use this one.

So there's four sections in the ACM code of ethics.

We'll give a link to it here.

It is pretty interesting.

The first part is not too long.

Four sections, first part, ethical behavior.

I'd like to focus mostly on that.

Second is what they call professional conduct.

The third is professional leadership principles.

There's a few surprising things in there.

And number four is how to use the code.

So ethics, professional conduct, professional leadership principles.

And I think as a software engineer, if I go and work for a company, generally I'm not going to be completely independent, unless I'm a consultant, I guess.

I go and work for a company.

I'm a member of an organization.

And it already gets a little bit cloudy potentially.

I'm a contributor to the organization.

Maybe the organization is performing unethically.

And how does the responsibility for that fall on my shoulders?

But I want to look at that question through some examples.

First, I want to look in a little bit more detail at the ACM code of ethics.

Actually, the first section.

So the first section has stuff that is probably unsurprising.

Ethical behavior is the first section, right?

They give, they have numbers for all these different sections.

There are seven in the first section on general ethical principles.

Here they are.

1.1 contribute to society and to human well-being.

1.2, number two, avoid harm.

Number three, be honest and trustworthy.

Number four, be fair and take action not to discriminate.

Number five, respect the work required to produce new ideas, inventions, creative works, and computing artifacts.

Number six, respect privacy.

And number seven, honor confidentiality.

And actually not to dig too far into that last part there, but honest and trustworthy, respect privacy, and respect confidentiality, honor confidentiality.

Those kind of all seem very similar.

Which one of these leaps out to you?

Well, two things.

I think the last couple of them, possibly number four as well.

I mean, I agree that they are principles of being an ethical engineer, but I also think there are things that are very often codified in compliance programs.

So a lot of companies are going to have security programs that put controls and conditions around confidentiality and privacy.

And you are probably subject to labor laws that prevent you from discriminating.

Does that mean, are you saying we don't need an ethical code?

Because the lawyers have figured this one out.

Well, I think we do need an ethical code, but that suggests to me a little.

The fact that there are laws around these things suggests to me that people have not historically acted all that ethically.

The avoid harm one, I think, is probably the one that catches your attention first.

I guess first it has this sort of Hippocratic oath quality to it.

But it also is extremely broad and kind of vague.

I want to read that one in particular.

In this document, harm means negative consequences, especially when those consequences are significant and unjust.

Examples of harm include unjustified physical or mental injury, unjustified destruction or disclosure of information, and unjustified damage to property, reputation, and the environment.

This list is not exhaustive.

Well-intended actions, including those that accomplish assigned duties, may lead to harm.

And here is an interesting point.

When that harm is unintended, those responsible are obliged to undo or mitigate the harm as much as possible.

So you can't just say, whoops, it was an accident.

They're saying you're still on the hook for it.

Avoiding harm begins with careful consideration of the potential impacts on all those affected by decisions.

When harm is an intentional part of the system, those responsible are obligated to assure that the harm is ethically justified.

There's a lot in here, right?

Harm, this word is repeated over and over again, and there's a weird little escape hatch.

When you're intentionally causing harm, it has to be ethically justified.

What?

Yeah, that's, I don't know, maybe, like for instance, there are occasionally medical trials where you give people drugs that you know have bad side effects, but it's ethically justified because those people have a terminal disease and it's kind of their last hope.

Yeah, I guess those are neither, neither of those are really software examples.

Okay, so we're talking software.

All right, let's do some real examples.

That might make this a little bit easier.

So the ones that leap out to me contribute to society and to human well-being, also pretty broad.

Avoid harm, be honest and trustworthy.

That's one, two, and three.

Be fair, take action not to discriminate.

That's four, and respect privacy.

That's six.

So there's a couple there that I wanted to skip over.

So here's some examples.

Now, the cool thing about our field is there are lots of examples of violations of the ACM ethical code.

So we could probably have multiple episodes on this topic.

So I started thinking about trying to find a specific example, sorry, for these exact ethical principles.

So the first one I was thinking about, respect privacy.

Okay, well, this will be a respect privacy one.

There was a story, it was in 2012.

I remember there was a blog post that became kind of famous.

An Uber employee did a bunch of data analysis.

They tracked the frequency of probable one-night stands in cities in the US by day and by neighborhood.

They correlated late night and next day trips.

And in the blog post, they talked about this as a, quote, "Ride of Glory."

People were kind of pissed about this.

This seems like potentially a violation of respecting privacy.

So here's my question.

If you're, this is 2012, if you're an engineer at this organization, this is Uber, and someone comes to you and they say, "Hey, I want to try to do a model which predicts the likelihood that someone is involved in a one night stand.

Can you aggregate a bunch of data for me to produce that?"

What are you going to say?

Oh, and to make it harder, we will give you a bonus for this.

How big is the bonus?

That's the question, right?

Go ahead, go ahead.

Yeah, this is super gross.

And I can't imagine that it would have much of a practical purpose.

That's why it's an easy one, you're right.

That's a great point.

Yeah, I mean, I think it's probably a little bit glib to say I would just turn this down because it seems dubious because I think there are probably a lot of cases where if somebody comes to ask you to do this type of work, you may not be in a position to decline it.

Yeah, right.

That's what makes this hard.

The burden falls on us, right?

Well, I have a job.

I got to earn the money.

I'm in this organization.

I got to pass the buck.

Yeah, I mean, I think you can't really make an ethical justification for this, even if you could make some sort of hand-wavy explanation that this has a business purpose.

I don't think there's really an ethical justification for it.

But if you're a lower level employee being asked to do something by a powerful boss, I don't know how you turn it down either.

So you perform the task even though you know it is an unethical task because you don't have the power to say no.

Yeah, and I think in this particular case, again, I think this is weird and unjustified, but probably nobody's going to die.

Yeah, I thought this was a great example to start with because it's easy to imagine being a little bit self-aware if this request comes to you.

It's easy to imagine saying, "Nah, I'm not going to do that.

That's pretty whack."

Yeah, I mean, I would be more like I would be very reluctant to do something that revealed information about somebody's location.

That seems dangerous potentially, right?

Yeah, I even saw something recently about somebody telling women who post their Strava runs and stuff onto Instagram or whatever to make sure you don't tell people, "I'm going to be in this location every Wednesday," or something like that.

Yeah, that seems like honor confidentiality or respect privacy.

Yeah.

So here's a different one.

This one is on the ethical principle of being honest.

This one's a little fuzzy because it may not be solely on the software engineers.

You know, there's a lot of shared responsibility here.

Tesla, one of those companies that's always in the news.

They made these claims that their self-driving mode is just a stepping stone to fully autonomous driving.

At the same time that they made these claims, they turned around and they told the California DMV, "No, no, no, no, this is not autonomous driving.

It will never be autonomous driving."

Now, maybe this is a marketing thing, but there is an honesty thing here, right?

This goes against the ACM ethical principle 1.3, be honest.

If you're working on this self-driving mode at Tesla and they're selling it as fully autonomous driving to the public and talking to regulators and saying, "It's never going to be fully autonomous driving," is there an ethically compromised role that you're playing as a contributor to this?

I feel that there is.

I mean, clearly the people who are making these claims are probably not the engineers.

Yeah, marketing.

But the engineers probably know that the claims are false.

Again, it's a question of what is the action you take?

You can be a whistleblower, probably lose your job, or you can justify it to yourself.

I remember seeing a, I think it was a radio story or a TV story about this where it might not have been Tesla, but it was another company that was making somewhat unsubstantiated claims about their autonomous driving systems.

And people were saying, "This is unethical.

Shouldn't you take this off the market?"

And one of the responses was, "I would feel wrong not pursuing this because I think autonomous driving will ultimately be much more safe than having humans on the road."

And so it would be irresponsible of me to not pursue this fully autonomous mode.

That sounds a little bit like a rationalization to me, but there are probably people who make that claim in earnest.

Yeah, it's kind of a moonshot, but at the same time, if I'm sort of dishonestly saying this is partly autonomous driving, it's going to be autonomous driving, maybe I inflate people's belief that they can rely on this thing.

And there's a chance that this lie does harm.

There was a story in 2022: the U.S. Department of Justice launched this investigation because there were reports of dozens of crashes involving this Autopilot feature in Teslas.

It's possible that people are like, "Oh yeah, this is almost autonomous driving.

I can rely on it."

And boom, crash.

Yeah, it seems very dangerous to give people the impression that it is autonomous driving.

But again, I have questions about what is the actual responsibility of the engineer in this case.

That's one where it's a little trickier.

All right, moving on.

I thought maybe for 1.1, I'm going to read 1.1.

I read 1.2, which was avoid harm.

1.1 is contribute to society and to human well-being.

That might sound pretty vague, pretty abstract, pretty broad.

And they have this parenthetical kind of comment, acknowledging that all people are stakeholders in computing.

Hmm.

This principle, the ACM Code of Ethics says, concerns the quality of life of all people and affirms an obligation of computing professionals, both individually and collectively, to use their skills for the benefit of society, its members, and the environment surrounding them.

This obligation includes promoting fundamental human rights and protecting each individual's right to autonomy.

An essential aim of computing professionals is to minimize negative consequences of computing, including threats to health, safety, personal security, and privacy.

When the interests of multiple groups conflict, the needs of those less advantaged should be given increased attention and priority.

So it's like, what is the social good here?

But when I think what's contributing to human well-being, the good of society, I immediately think, well, what about these addictive interfaces that our industry has built?

You know, the guy who made Infinite Scrolling, he famously has gone on TV and talked about how guilty he feels about Infinite Scrolling, or features like on YouTube where you play a video and then they auto-play the next one.

Their goal is to maximize the amount of attention that they receive from users.

Recent studies have determined that people who are heavy users of social media, they are more likely to be anxious and depressed and lonely.

But these addictive interfaces may not be contributing to the good of society.

Do you think the person who made Infinite Scrolling has acted in an ethically compromised way?

This particular thing is tricky to me.

So my personal take is that YouTube has definitely gotten to a point where they do some dubious things.

I'm a YouTube user and I get videos that are clearly things that are not stuff that I would choose to see.

And it seems particularly dangerous when you have unsupervised children watching YouTube.

Oh yeah, my kid, no YouTube at all, not allowed.

But on the other hand, I worked for a number of years in doing music recommendations.

And if I think about how I'd been an employee at YouTube and somebody had come to me and said, we're going to do this feature where when you watch a video, it's going to automatically, when that finishes, it's going to automatically take you to another video based on your recommendations, based on your taste.

I probably would have thought, oh, that seems like a good business thing.

Yeah, I mean, it may not be that beneficial to the end user, but it's probably something good for the business, you know, more engagement, potentially more ads.

I don't see anything wrong with that per se.

Seems like a decent thing to pursue.

I think where it becomes murky is where you stop recommending things that are based on the user's taste and you start recommending things that seem to be getting high engagement regardless of what the user has done historically, which seems to be what YouTube and other social media sites have started to do.

Right, engagement is the way in which you rank things that people see.
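To make the distinction concrete, here's a toy sketch, with made-up items and weights, of the difference between ranking by a user's own taste and ranking by global engagement. It isn't how any real platform's ranker works, just an illustration of the knob being described.

    # Toy illustration (made-up data): taste-based vs. engagement-based ranking.
    from dataclasses import dataclass

    @dataclass
    class Item:
        title: str
        user_affinity: float      # how well this matches this user's history (0-1)
        global_engagement: float  # how much attention it gets from everyone (0-1)

    def rank(items: list[Item], engagement_weight: float) -> list[Item]:
        # engagement_weight = 0.0 ranks purely on the user's taste;
        # engagement_weight = 1.0 ranks purely on what keeps people watching.
        def score(it: Item) -> float:
            return (1 - engagement_weight) * it.user_affinity + engagement_weight * it.global_engagement
        return sorted(items, key=score, reverse=True)

    catalog = [
        Item("cat video", 0.9, 0.4),
        Item("outrage clip", 0.1, 0.95),
        Item("how-to video", 0.6, 0.3),
    ]
    print([it.title for it in rank(catalog, 0.0)])  # taste first: cat video on top
    print([it.title for it in rank(catalog, 1.0)])  # engagement first: outrage clip on top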

Well, TikTok is an interesting example.

TikTok has gone way more into this strategy.

As soon as you show up, unlike other platforms, you don't even have to follow anybody.

They just start showing you stuff.

And as soon as you respond, as soon as you engage in any tiny little way, they eagerly start loading buckets more of the stuff on you.

And then I hear the ACM say our software should contribute to society and to human well-being.

Maybe people are getting some enjoyment out of this.

Maybe I'm overly stodgy and finicky and critical, but I don't see how a platform that, beyond any other consideration, wants as much eyeball attention as possible and is willing to do any algorithmic thing to get it is contributing to human well-being.

That seems ethically compromised to me.

If you think to yourself, this person really likes cat pictures, let's show them more cat pictures.

I don't know that I would have necessarily thought of that as being unethical in and of itself.

But I don't think that's what's happening here.

I think this person likes cat pictures.

Let's try to get them to view cat videos for 12 straight hours or longer.

It does not matter what happens to the person.

We need them here as long as possible.

Yeah, I think in retrospect, you can see that that's probably bad.

But I don't know that I would have necessarily thought of it as being unethical when it was proposed.

I think where it does get unethical, obviously unethical to me, is this person has gone through 17 cat videos.

Let's show them something about how vaccines are going to implant a chip in your head or something like that.

Exactly.

That's where it gets pretty clearly unethical to me.

But I don't know as an engineer if I would have immediately thought that showing them an infinite stream of cat videos is going to be bad for them.

In retrospect, it seems pretty obvious that that's bad.

It seems pretty obvious.

I just feel like, if I have a platform, it's okay if you're not on the platform all day.

And maybe I get less VC funding as a result of that.

But that's all right.

I'll survive.

I've still got a modest business.

But you know what?

That's why I don't run one of these businesses most likely.

Yeah, that's probably the only reason.

You have a good example that you brought.

This is ACM 1.3, be honest and trustworthy, Volkswagen.

Do you want to tell me about the example you had?

Yeah, this is from a, I don't know.

It's probably been 10 years or so now.

But the reason I like this example is because it's a case where an engineer specifically took an action that was unethical and that they probably knew was unethical.

And the story is that there was a system in these Volkswagen and Audi automobiles such that they could detect when they were being emissions tested.

These are the diesel motors, right?

I think that's right.

Yeah.

So this engineer put in code that, yeah, I assume into some sort of embedded system that could detect when it was being emission tested.

And so it would alter the behavior of the engine while it was being emission tested, such that it would pass the test.

But when it was not being emission tested, it would work in a different way, such that emissions were above what they should have been by law.

This is a great example because it is clearly unethical behavior.

Yeah, it was unethical.

From the beginning, there was no question about it.

Then it was something that an engineer did.

I mean, to be fair, even if he was not prompted by management, certainly management knew about it at some point and decided to let it go because, you know, I'm sure these systems require quite a bit of testing and you're moving them from vehicle to vehicle.

And so I don't think this was a secret by the time it was revealed.

But nonetheless, it was a case where an engineer consciously made a dishonest decision.

And there's "avoid harm" potentially in there, right?

You're polluting the air as a result of this.

Yes.

In addition, there's "be honest and trustworthy."

It's not honest and trustworthy.

Yeah, it's definitely, it's not something where an airplane's going to fall out of the sky, your car's going to burst into flames, but it is something that's causing harm.

Yeah.

This is, we're avoiding something that I find pretty hard here.

And you're avoiding it less than I am.

I work at, let's say I work at this company, someone comes to me and they're like, "Hey, can you write a function which detects if the car is being emissions tested?"

And I say, "Okay, why?"

And they say, "Because we want to reduce emissions during tests."

And my response is what?

Kinda no?

Yeah.

I quit.

What's my response there?

It is, it is very tricky.

I think maybe you could get away with, "I don't believe that meets the compliance program or the laws that are in place to govern our behavior in this industry."

You start sending angry emails or you flash your ACM card.

When I joined ACM, they gave me a card.

It was like that moment in The Jerk where he gets the phone book and he says, "Yes, I am somebody now."

That's how I felt.

Yeah.

I mean, I think you can make, in some cases like this, even if you're less powerful than the person that's asking for it, I think you can probably make the case that this is something that's eventually going to be picked up in an audit.

It's something that's going to hurt our reputation.

It's something that's going to cost us business.

And so I think we shouldn't do it.

You can make a business case that acting unethically has negative consequences.

I think you can.

Again, I think it's probably a little bit, it's a little bit disingenuous to say that anybody could get away with that.

I'm sure there are cases where the pressure to do something that is unethical is just, you know, when it's you or your job, the pressure to do something is probably hard to overcome.

I imagine it seems like a bit of a cop out to say, well, this could have negative consequences for the company.

It's a thing I might argue, but deep down, the reason I don't want to do it, it's a shitty thing to do.

I don't want to do shitty things.

I want to go to the Christmas party and be like, yeah, I feel good about what I did this year.

I don't feel bad.

I want to go home to my family and be like, yeah, I acted honestly.

I was trustworthy.

I avoided harm.

Yeah, I agree with that.

But I can also sort of envision a scenario where the boss says, we want to implement this feature and you say, I think that's unethical.

It will make me feel bad.

And they say, well, the government standards for emissions are unrealistic.

And it's going to cost the company money if we don't do this and people will lose their jobs.

And there's always a counterargument that may be unrealistic or a rationalization.

But I don't know how you respond to those sorts of arguments.

There's always some kind of justifying bullshit you could say is what you're saying.

Okay, here's a different one.

The AI conversation comes up a lot.

It's possible to build models to do all kinds of stuff.

There was a study, I think it was 2023.

And the study was: if you apply to a job, employers are less likely to call back job applicants with Black-sounding names versus white-sounding names.

Identical resumes, the name is different.

They're less likely to call back black applicants based on the name.

Now, you work on a system that maybe automates resume ingest.

You pull in resumes, and you're using all of these features in your model that feed into the decisions the human decision makers make.

At some point, this study comes out and you realize, oh, wait a minute.

Hey, name can't be a feature, because if we just use that and we base our prediction on existing human behavior, what we're doing is encoding this bias.

There's this existing human behavior, which has a bias in it.

If I produce a model which tries to predict that human behavior, then I'm actually producing a model which encodes the bias.

This is in ACM 1.4.

This is the first example we have of this one, which is be fair.

Take action not to discriminate.

Now, if you're working on deploying this model, do you have an ethical obligation to fight back against these types of encoded bias?

This is a super loaded question, isn't it?

Yeah, I think you do.

I think the difficulty with problems like this is that it may not be immediately obvious that the bias exists.

I think this is why a lot of companies and platforms have sprung up around this: you make the assumption that there's some unfairness in your model, and there are techniques now that try to look for that kind of bias.

For instance, it probably makes perfect sense when you're first building a model to use, say, the zip code or something as a feature.

But there's a clear bias there because it's assuming that whatever thing you're trying to model for is uniformly distributed across zip codes, which is generally not true.

Zip codes can be a proxy for demographics, right?

Yeah, exactly.
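As a very rough sketch of how you might check for this kind of encoded bias after the fact, here's a small disparate-impact calculation. The column names and data are made up, and the four-fifths threshold is only a crude heuristic, not a real fairness audit.

    # Rough sketch (made-up columns/data): compare a screening model's selection rates by group.
    import pandas as pd

    def selection_rates(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
        # Fraction of each group the model selected for a callback.
        return df.groupby(group_col)[selected_col].mean()

    def disparate_impact_ratio(rates: pd.Series) -> float:
        # Lowest group rate divided by highest; values under ~0.8 are a common red flag.
        return rates.min() / rates.max()

    df = pd.DataFrame({
        "group":          ["A", "A", "A", "A", "B", "B", "B", "B"],
        "model_selected": [1,   1,   1,   0,   1,   0,   0,   0],
    })
    rates = selection_rates(df, "group", "model_selected")
    print(rates)                          # A: 0.75, B: 0.25
    print(disparate_impact_ratio(rates))  # 0.33... -- worth investigating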

All right, that's a tougher one.

But I tried to frame it in a way where you would be like, yes, I would try to resist that.

But your answer is interesting.

You're saying there's other subtle stuff in there.

It's easy to make that mistake is what you're saying.

I think it's easy to make that mistake and not realize it.

I think when a lot of these biases were initially found, I don't think the people who had built and deployed these models were attempting to be sinister.

Right.

It's not like TikTok or YouTube or Facebook where they are attempting to be sinister is what you're saying.

Yeah, I don't know that for sure.

I mean, there may have been cases where people knew exactly what was going to happen and they were okay with it.

But I think in a lot of cases, it wasn't something people thought of before they started to see the behavior of the model in the real world.

So I saved ACM 1.2 Avoid Harm for last.

And in order to find examples of Avoid Harm, I just looked up a whole bunch of stories about Facebook; a whole episode could be about Facebook.

Facebook is the company about which peers in our industry, people I know, have most commonly said, "Never, never, never, never would I ever work there."

And the reputation it has is as an ethically compromised company.

The whistleblower Frances Haugen a few years ago said that they have, quote, "moral bankruptcy."

So here are some great examples of Avoid Harm.

And the question is, if you're in this massive organization, I don't know how many tens of thousands of software engineers they employ, including, purportedly, people who do barely anything, right, like in Silicon Valley, the guys on the roof who are drinking their Slurpees.

I'm going to assume they've left all of this unethical behavior behind when they changed their name to Meta though.

So.

No, no, no.

Okay.

I doubt it.

I doubt it.

I don't think changing the name is a...

Okay.

So the big one was from 2021.

They had this internal study.

This was leaked.

And in their own data, they discovered that, quote, "Instagram makes teenage girls feel worse about their bodies and they blame the platform for anxiety, depression, and suicidal thoughts."

So, ACM 1.2, you work on a platform like Instagram and it is harming members of the population of people who use it.

And you know this based on data that you've aggregated, data that you've collected.

What do you think?

I have no question at all that the company has done quite a few dubious things.

Again, I sound like a broken record here, but I am unclear where the responsibility of the engineer starts and ends here.

I think it's: don't go work there if there are so many examples of these.

This is an organization that does not care about the well-being of, like, the chattel they're harvesting data from.

Yeah, I think that's probably an easier choice to make now.

Again, I can imagine cases where you get hired at some point by Facebook to be, say, a react developer or something like that.

Yeah, right.

Right.

Or early Instagram before Facebook bought it, a Django app.

That could have been us.

Yeah, and you think, okay, this is maybe not making the world a better place, but it's just people posting pictures of their vacation and connecting with friends from high school or whatever.

It's at least harmless.

And then one day you wake up and you realize, oh crap, this is a Russian PSYOps platform.

I should probably not be contributing to this.

Oh crap, teenagers are committing suicide at higher rates because of this platform.

Right.

That I am contributing software to.

Maybe five years ago, you could say, well, I'm going to quit my high-paying Facebook job because I can get a high-paying Netflix job tomorrow, but is that still the case?

The other issue I have that I wonder about is Meta does a lot of things that are somewhat unrelated to social media.

So, you know, like for instance, they at one point employed, I think it was Simon Peyton Jones.

So, you know, somebody was working on, people were working on Haskell within Facebook.

There are people doing static analysis.

There are people doing interesting data science stuff.

And while I think now you can look at the company as a whole and say, I'm concerned about their ethics.

I don't want to work there.

I think there was maybe a time when that was less certain.

And if you knew you were doing stuff that was unrelated to the actual social media function of the company, maybe it was less clear that that was a bad choice.

I just don't know.

You don't get a pass for contributing to Haskell.

In 2012, they got in trouble because they did this study where the data scientists put their thumb on the scale of what the timeline of Facebook users would see.

Deliberately to try to promote happiness or sadness in the people using Facebook.

Right?

So they deliberately tried to make people happy or sad and see if they had the outcome that they were trying for.

Like if they could manipulate the emotions of their users.

I mean, that's pretty freaking nefarious, right?

This is like, do we have the power to make people feel sad?

Let's run a study for a week.

Are you up for it, Mike?

That was in 2012.

I mean, this sounds like sophistry, I guess, but I think even in this case, as an engineer, you might have thought, okay, this is a fairly legitimate experimental design.

I'm not attempting to do harm here.

I'm just doing sort of an A/B test to see what happens.

And it's, I don't know, it's not like a psych study on a university campus where you recruit undergraduates and you go, hey, look, you know, like you're coming in this building, you're signing the forms.

Okay, we're going to do some weird stuff and we're going to see how it works.

This is, you have no clue that they're running this massive science experiment on you.

That's shitty behavior.

Yeah, I agree.

But, you know, again, web companies do this all the time to test out different user interface features.

And I think it's conceivable at least that the engineer or the data scientist or whomever who was responsible for this was simply told, we're going to do this A/B test and see what happens.

And they were unaware of the fact that, you know, the company was going to choose to pursue the more negative thing because that seems to improve engagement.

So that's the last one here.

Engagement-based ranking, which has resulted in massive amounts of disinformation, has been connected to genocide in parts of the world.

I mean, that's a violation of avoiding harm.

It's a violation of contributing to the good of society.

I mean, Facebook is, it's like they wrote the code of ethics at the ACM to say, do not run a company like Facebook.

Yeah.

Do not work there.

That's what they're saying.

I guess on the plus side, they never had a slogan like don't be evil.

So that's true.

They can't be blamed for going against their "don't be evil" because they never had that.

Yeah.

So at least they're not directly defying the principles they were founded on.

I mean, when you say, well, it's common to A/B test stuff, and I don't want to pin this on you, there's a way in which you might be saying, well, everybody does this.

Yeah.

I feel a Godwin's law invocation coming up here.

But I am not in any sense claiming that what they did was good.

I'm just saying, I'm trying to imagine myself as the engineer in these scenarios.

You should do that.

Yeah.

That's the hard question.

You're right.

You're right.

I think it's very easy and potentially a little self-righteous to say, oh, I knew this was unethical, and if it had been me, I would have simply refused to do it, or I would have simply left my job.

Agreed.

It is easy to say that.

Yes.

I have not been presented with these choices very much in my career.

I think the worst cases that I've seen in my own experience have been, you know, maybe some misrepresentation by the leaders of the company, you know, kind of the fake it till you make it sort of thing.

Frances Haugen, the Facebook whistleblower, said, "I've seen a bunch of social networks and it was substantially worse at Facebook than what I had seen before.

Facebook over and over again has shown it chooses profit over safety."

It's hard not to look at these examples and to come to the same conclusion.

There are a lot of examples of Facebook where they clearly do not care about avoiding harm as an organization.

So it may be that if you go there, it's incumbent upon you to be aware of the reputation of the organization.

And by actually going there now, after these stories have come out, you're aware that you're choosing the unethical organization and that you're likely to be asked to engage in unethical behavior.

In that particular company's case, I think it's very hard at this point in history to go there.

Although the current job market, if they were hiring and you needed a job, it's probably something you could justify to yourself.

So we didn't talk about the later parts of the ACM code of ethics.

They, in the third section, they talk about leadership principles.

I don't have examples for these, but I just thought there were a couple of interesting ones.

3.6, they say use care when modifying or retiring systems.

So it's possible like you could retire a system and it might have a negative effect on your users, and that's like a leadership principle.

Interesting.

They also say recognize and take special care of systems that become integrated into the infrastructure of society.

You made this point about a small company versus a large company, and they're talking about that here.

Even the simplest computer systems have the potential to impact all aspects of society when integrated with everyday activities, such as commerce, travel, government, healthcare, and education.

So when things get big, when they start becoming integrated everywhere, you might have even more ethical responsibility than you had before.

You look at a game like Pokemon Go, millions of people are playing Pokemon Go.

Let's say you're a software engineer for Pokemon Go.

Now this is hypothetical.

This does not exist.

I'm just pulling something out of my brain randomly out of a hat here.

And they say to you, "Hey, some of our users are using Pokemon Go to meet up and engage in criminal behavior."

And maybe they're really profitable for us, so, as software engineers on this platform, let's make it easier for them to meet up and do crimes.

What do you say as a developer?

I'm really intrigued by the Pokemon Gang.

Oh, you're getting distracted.

The example's not good enough.

It's supposed to be like a clear yes.

Yes, it's so ethically compromised.

I would have to say no.

Yeah, I mean, there's this sort of classic historical argument about, are the engineers responsible for the ways that people use the things that they build?

I don't have a good answer for that.

I think yes, you are.

But sometimes it's a little bit difficult to get the genie back in the bottle.

Ah, that seems like a bit of a cop out to me.

Infinite scrolling.

If I didn't build it, somebody else would build it.

Yeah, maybe.

Let that other person be the ethically compromised one.

It's kind of my take, Darryl.

Maybe the clearest example is the one that you raised.

Volkswagen, we're going to know when our system is being emissions tested.

We're going to write code to determine that we're being emissions tested.

And then we're going to have fewer emissions.

We're going to suppress the combustion in the engine.

And when it's tested, we will actually deliberately lie about what our system does on purpose.

You had to have software engineers build that feature.

I think there's a whole other episode also about the current pursuit of more and more sophisticated LLMs and the pursuit of AGI, I guess you would call it.

A year ago, there was this idea that we should slow down and try to figure out what we're doing and put guardrails in place, which seems to have evaporated.

To your argument before, this is one of those things which is probably going to be incorporated into society as a whole.

And there's only a handful of companies right now that really have the horsepower to be able to do this stuff.

And they're in deep competition with each other.

So we seem to just be sort of hoping that the ethical concerns will work themselves out at this point in time.

Okay, so we're going to end there.

Feel free to send us an email.

Now that we have MX records for our email address, the email will actually arrive.

Email address is podcast@picturemecoding.com.

If you have stories where you felt like you had to engage in ethically compromised behavior as a software engineer, share them with us.

We'll read them.

We set up the MX records so that it will only accept positive emails.

If you send us mean negative emails, those will still bounce.

Oh yeah, right.

We have an AI which actually runs our DNS.

And what it does is it reads the email and forwards the packets on to the appropriate server.

That sounds pretty advanced.

Hey, we should build that as a company.

Yeah.

Dynamic DNS.

We're only going to receive traffic we want to receive and everything else gets DNS failure.

Just launched six startups right there.

All right, Mike.

Thanks so much for hanging out with me.

Glad to see you again.

This has been Picture Me Coding with Erik Aker and Mike Mull.

We will see you again next week.

See you next time.