Picture Me Coding

What the Hell is Edge Computing?

February 28, 2024 Erik Aker and Mike Mull Season 2 Episode 26

In which Mike and Erik try to understand what "edge computing" means and whether it's meaningful or marketing speak. The topic comes up because Mike decides to run an LLM on his laptop and it turns out that when he's doing this he's participating in edge computing!

For music this week, Erik is listening to the Baroness album Stone while Mike has been into Brittany Howard's What Now.




[MUSIC PLAYING] Hello.

Welcome to "Picture Me Coding" with Erik Aker and Mike Mull.

Hi, Mike. How are you doing?

Hey there. Doing pretty well.

I have a working bathroom in my house again.

That's a nice feature of a house, a working bathroom.

That's a good thing to have, yes.

Yeah.

Not going to shave my face until we get bathroom number two, and then we're done. Two bathrooms, then we're done.

That's pretty fancy.

Two bathrooms?

I know.  I probably shouldn't crow about it. Not everybody has two bathrooms.  My house has two, and they were both down for like three weeks. 

[LAUGHS] What have you been listening to this week?

The album that I've been listening to a lot is an album that I did not know I needed, but I think I did. It's the latest album from Brittany Howard, formerly the front woman of the Alabama Shakes.

How were you on the Alabama Shakes? You were a little cool on them?

I confess I did not love them.

Yeah, same.

There were moments where I was like, wow, this could be really good, but I just couldn't fall in love with it, same.

I kind of expected to as well, given the type of music and the origin of it, but never fell in love with it.

But I really liked her last two solo albums. And this one in particular is kind of this interesting combination of soul and R&B and funk stuff. And there's a track that's pretty much just a dance track, and there's one song that sounds like a sort of long lost Prince track. And so it's been a lot of fun to listen to a little bit out of my normal comfort zone, which I like. Nice change of pace, especially for February. Typically don't get a lot of really strong releases in February of the year, because people are waiting towards summer for the big albums.

I never thought about it in terms of the calendar of music. I didn't realize February was a lower month for things getting released.

Yeah, I don't know exactly how it works.

It just seems like with the big artists, they wait closer to the summer or later in the year so that they get into the Grammy discussion and stuff like that.

Oh, OK, stuff we don't care about.

I've been listening to the Baroness album.

Last year, I talked about their EP a little bit and a couple of songs on the EP.

The EP shows up on the album.

The album's called Stone.

And my high praise for it is it sounds just like every other Baroness album, which is great.

You gotta like that.

Yeah, more of the same.

We've got this sound, and we're going to keep on producing it for you because you love it.

I love it.

Yeah, I always say that about one of my favorite doom bands, Monolord.

I know what I'm going to get with a Monolord album.

They're not going to throw in a love ballad or something.

You get that box in the mail.

You already know what's in the box.

And you already know that you love it.

Is it good music to listen to when you're having your bathrooms remodeled?

Yeah, it's hard to listen to stuff when you're having your bathrooms remodeled.

Maybe that's why.

Just want something loud to drown out construction noises.

And dust, man, there's a lot of dust from construction.

I think that's the part I'm looking forward to going away, getting away from.

No more dust.

I haven't noticed that.

This week, you wanted to talk about edge computing.

And I have a curiosity about this, which I will share with you in a little bit here.

Why do you want to talk about it?

Partially because it's kind of in the news a lot recently.

As I mentioned to you, it's kind of zeitgeisty.

There's these sort of two almost contrasting things going on where people are building these giant AI models that require tons of compute and are almost essentially centralized.

But then there's also this trend toward edge computing, which is kind of interesting.

To be perfectly honest, when I started looking at it more closely, I don't think it is what I thought it was.

So I just thought it would be an interesting topic to discuss both.

It's both interesting technically and kind of in the news, so to speak.

You don't think it is what you thought it was.

Can we start with that?

What did you think it was?

So I had kind of two vague notions of it.

One is that I worked in a lot of places that used CDNs, Content Delivery Networks, for things like audio and video, just getting it closer to the customer so that when they pull that media, it doesn't take so long to get to them.

Yeah, we sometimes would call that stuff static assets.

Did you refer to that as static?

It's static because it doesn't change.

If I have a picture, it's not going to really change.

If somebody wants to download that picture, great.

I can get it close to their computer.

Yeah, generally true.

It's kind of caching that's closer to the user.

I mean, I think it does in some cases change, which is why it's not permanent.

But anyway, I guess that was kind of my thought about edge computing that and to a certain extent, people doing things like putting computation into IoT devices and things like that.

Well, from my understanding, I think we're going to get a little deeper here.

That's not so far off, though.

That's what you originally thought.

Same with me.

When I hear edge computing, I think, oh, like Cloudflare.

I have respect for Cloudflare.

I use their services; they're a very popular service provider.

Cloudflare is primarily known as a CDN, but they've become like a cloud computing company now.

They've got a serverless platform, and they've got-- you could do DNS, and you can do quite a lot with just Cloudflare.

Yeah, I think if somebody had asked me, who does edge computing, Cloudflare is probably the company that would have first come to mind for me.

And they have a phenomenally cool internet speed test page.

If you ever need to speed test your internet, look up Cloudflare, speed test, it is definitely the coolest one you will ever see.

Good to know.

I do that quite frequently since my home internet provider is also a cable company, which I will not name.

So we've already covered a few things here.

We talked about CDNs.

And you mentioned IoT.

We talked about caching, static assets.

Edge computing, though, it sounds vaguely like-- I think when I first started hearing about this, I thought, well, OK, I understand what Cloudflare does.

How are we going to put computations way out there on their nodes at the CDN edge points?

That's what I thought.

But then I've also heard people talk about it being like, no, no, no, this is like client computing.

We've run all the computations at the very, very beginning of whatever our network is.

Yeah, what kind of inspired me also to look more into this is I started playing around with trying to figure out how to run LLMs locally on my own hardware.

And so I was playing around first with the Transformers library from Hugging Face.

It lets you-- you can download-- I was using the Llama 2 model at that point and trying to do some code generation stuff.

And then I also came across this tool called Ollama, which is pretty nifty if you haven't checked it out.

Can we-- let's get back to that.

But can we try to define some terms first?

So when we think of edge computing, that's contrasted with something.

So what should we really be calling the edge?

What are the computers or things that we call the edge?

And what are those in contrast to non-edge computing?

Yeah, good point.

So clearly networks don't have edges in the same way that knives have edges.

Because this is a giant graph.

Everybody's connected.

Everybody else is what you're saying.

Yeah, I think my take on what the edge means is the parts of that graph, parts of that network graph where you don't have any more leaf nodes, so to speak.

But-- Like what would be an example there?

Well, so that's where it gets kind of tricky.

So my laptop would be an example.

There's nothing really downstream from my laptop or, say, my phone.

Sweet.

So you're doing local development.

You're actually doing edge computing.

Yeah, but some people would call those end points so they don't qualify as edge.

Whereas other people say, OK, the edge means where the internet sort of terminates for you.

So my laptop isn't part of that.

My phone isn't part of that.

But there's some sort of gateway or back home network from a cell tower or something.

And those things are kind of at the edge.

Wait a minute.

Wait a minute.

Cell tower?

So if I'm standing at the edge of a lake and the lake is the internet, and I've got my phone in my hand and I throw it into the lake as one would, then that's where I'm interacting with the edge.

So my phone connects to my router.

My router talks to my ISP.

And then my ISP gets me onto the so-called information superhighway.

Remember that term?

So the edge is-- it's probably past my ISP, right?

If we're talking like Cloudflare, for example, they've got data centers.

And maybe those data centers are just one stop away from my ISP.

Am I oversimplifying?

Is this making more sense?

That's where I think of as edge.

Yeah.

I probably would have, too.

So I looked up a bunch of definitions from different places, IBM and Dell and Cloudflare.

OK.

Well, hit me with some of those then.

So they all are slightly different.

So here's one from Lenovo.

Lenovo is really big into this idea now.

And they say edge computing is computation that happens outside the cloud or outside the data center.

OK.

Lenovo-- so they have a service called ThinkEdge, which they describe as Edge as a Service, which is kind of weird.

Edge as a Service.

Right.

So basically, they sell these servers that kind of sit on the edge of the internet, but they don't sit in like a normal-- like most companies will have a server room in their building or something.

Right.

They're selling this hardware, which is something that you could say deploy on an oil drilling platform or a factory floor.

And they're calling that edge computing because it's connected to the internet on one side, but it's also connected to a bunch of sensors in your facility.

And so you need to be able to protect it from the elements.

You need to be able to protect it from physical tampering and things like that because it's probably not in a locked room with access controls like you would have in an office building.

OK.

OK, kind of I'm getting it.

But at the same time, it sort of sounds like we might be inventing a term in order to sell more boxes.

It does sound a little bit like a marketing term.

It does sound like that.

Non-technical term.

And it's kind of a little bit cheating to say edge computing is computing that's not in the cloud.

But IBM says edge computing is a distributed computing framework that brings enterprise applications closer to data sources, such as IoT devices or local edge servers.

So that's a little bit tautological there.

Edge computing is a framework that brings applications to local edge servers.

Exactly.

So I did not find that definition particularly helpful.

It's not illuminating.

Thanks, IBM.

Sounds a little bit like marketing speak.

It does.

Yeah, so we are verging on the edge of-- we're verging on the brink of: do we believe this term has valence?

[LAUGHS] One thing I have noticed-- well, let me move on to Cloudflare's version of it.

OK, OK.

And let me just tell you, I'm a fan of Cloudflare's work.

So I probably-- I'm primed to trust their definition, however marketing buzz-speaky it is.

The thing I like about Cloudflare's definition is that they sort of confess that it is not precise.

But anyway, here-- OK, the term.

Here goes.

For internet devices, the network edge is where the device or the local network containing the device communicates with the internet.

The edge is a bit of a fuzzy term.

For example, a user's computer or the processor inside of an IoT camera can be considered the network edge.

But the user's router, ISP, or local edge server are also considered the edge.

The important takeaway is that the edge of the network is geographically close to the device.

OK.

So geographically close sounds like what a CDN specializes in.

Right.

OK, that's kind of cool.

Now, I have heard this in association with IoT.

So you know all those cool gadgets-- the internet-connected dishwashers and toothbrushes and doorbells and baby cameras and stuff?

Yeah.

I think of that as edge computing.

We take that edge and we turn it into a giant wave and we overload servers with it.

Yeah.

So I can't figure out if-- if you had an IoT device that had like a Raspberry Pi or an Arduino in it, I think that would be considered edge computing to some extent.

Yeah, OK.

OK.

But I also think there's this idea that you have-- you know, maybe you have a bunch of sensors on a factory floor and they're streaming a bunch of data.

But to get sort of an overall picture of the health of your factory, you need to do some computation.

And so I think the idea is that all of those devices stream into this edge server and then the edge server does the machine learning work or the-- OK.

I like that because, man, we would be pretty wasteful to send all that data into some cloud service data center.

Yeah, it would be wasteful.

And there are situations where you may need to make decisions based on that data that need to be made faster than the latency that you can get by sending it to a cloud server and doing a computation there.

So I think that's the value proposition, as the business people say.

But I'm still a little bit unclear on why that's an edge server.

Like, for instance, why does that server need to communicate with the outside world at all if it's doing the number crunching?

So isn't that just a local server at that point?

Yeah, in a data center, right?

Yeah.

I mean, another way to think about this is, what if I want to run my own cloud in my own company?

Is that edge computing?

Good question.

Yeah.

So I guess all we've basically established is that the term is a little bit vague and as such is somewhat vulnerable to marketing.

But Cloudflare owned up to that.

It's a fuzzy term.

So Cloudflare has some interesting products.

And actually, my website-- my website's ericaker.com-- it's a SvelteKit site.

My site's just Markdown, really, so I could have just used a static site generator.

But I like building stuff.

And I don't write on it very often.

It's just a basic blog.

It's a typical tech blog.

There's only a dozen posts on there now.

And it used to be a Haskell server that had an elastic search back end.

And that was just for fun.

I built that for fun.

And I wanted something a little simpler.

So I decided I'll deploy this on Cloudflare.

And I'll use their product, which is called Cloudflare Pages.

Cloudflare Pages, I'd like to know more about how it works.

I have to admit I haven't read too deeply into it.

But they do talk about it as edge computing.

With Cloudflare, you can run a full stack application that's like front end, back end.

You can talk to data stores.

And somehow, magically, it runs in their CDN nodes, close to the users who are accessing it, which sounds a little magical.

If I have a service that I deploy into the cloud, I know specifically I've got it in this cloud region, US West 1, US East 2.

And then I might have something like a geographic load balancer.

And a user comes into Florida.

They hit that load balancer.

And the Florida load balancer says, you're in Florida.

So you're going to go to US East 2.

You're going to hit this service in US East 2.

Now, this starts getting complicated.

Because if we're reading and writing data, we're using a PostgreSQL database, it's probably going to end up going to a single server.

Obviously, you could scale, you could cluster these things.

But the kind of straight ahead, common way to do this is, I've got a server.

And I've got really one that you're going to write to.

So either you're going to write to the one in the West, or you're going to write to the one in the East.

So someone's writes are going to be slow.

These are kind of the common problems.

I think this is geographic load balancing.

So it's something a little bit hand-wavy and magical when I start thinking about what Cloudflare is doing.

I can run a full stack application.

I can talk to a database.

And I don't have to solve these types of problems somehow.

Sort of mysteriously, it all just magically works.

Now, for my side, I don't have a database.

It's all static.

It gets built as part of this SvelteKit application.

And you merge a pull request and a GitHub Action runs.

And it just magically sprinkles itself up to Cloudflare.

And then with Cloudflare, there's no server that I ever SSH into.

It just serves the stuff as part of its CDN.

It feels very nice.

And it kind of would be nice if this is what cloud deployments became at some point in the future.

There's no conversation about virtual machines.

I'm not dedicating resources to it.

They just-- they take all of that off my plate for me.

When you say edge computing, I have this dream that, maybe that's what we're going to be doing someday.

We're going to be writing applications.

And we just sort of pushed them out into the dust of these CDNs.

And the CDNs figure out the hard stuff.

But below the surface, there are going to be some interesting distributed systems problems, I imagine.

What happens to my data?

Yeah, as an old PC guy, from somebody who came out of the early days of personal computing, the idea of having on-premise servers or localized servers is kind of appealing.

The idea of sort of putting an application on that and then it somehow magically becomes available to the outside world is also kind of appealing.

But I don't know if that's exactly where the trend is going.

Yeah, I don't know either.

So IoT, Internet of Things, that was pretty big buzzword, I would guess maybe 2015, 2016.

And suddenly it was like, IoT is going to be huge.

It's going to be everywhere.

And I remember there was a sales rep from MongoDB who came to the team that I was working with, that company I was at before, a very large company.

We used MongoDB there.

No comment on MongoDB.

But the sales rep came and they said, hey, we have the NFL using our services.

And they have sensors all over their stadia.

And they have sensors on the ball.

And they have sensors on the players.

And so the players who are playing football smash into each other.

They can see how fast they're going and what the force of the hit is.

And they were collecting-- he said something like-- and this was years ago, so I probably misquoted.

But he said something like, we're taking 30,000 data points a second.

This is like IoT devices.

And they're all getting streamed into these data stores that are like at the stadium.

And then they figure out all that data, make it available for analytics or whatever.

IoT was a big deal.

We were going to have internet connected toilets, and toothbrushes, and toasters, and all kinds of crap that probably nobody really wants.

What happened to IoT?

Is it still going to be a big thing?

I think it's still going to be a big thing.

I think it is a fairly big thing.

I feel like it's made more inroads in industry.

Factories.

Yeah, factories.

And I've seen software that works in mines.

That was the other example I was thinking of.

Well, there's the one that Lenovo uses of things like oil rigs and so forth.

And then I think in the home, there are certain things that are super popular, those doorbell cameras.

Yeah, people like those.

And I know people with Nest thermostats, and Nest smoke detectors, and things like that.

I just see those as surveillance devices.

I get a lot of surveillance from my phone, and I get a lot of ads as a result of being constantly surveilled by my phone.

I don't want my fridge to do it too.

We were in Home Depot, and every refrigerator had a screen in it.

And it was like, here's the weather today.

And I just kind of loudly said to my wife, I really don't want my fridge to be on the internet.

And there were two people who started snickering around me.

Yeah, I'm in that camp as well.

I always think about that old joke about how people who are technologically savvy don't even like to have printers in their houses.

Yeah, exactly.

I don't trust it.

I keep the gun near the printer in case it acts up.

My spouse bought-- she picked up really cheaply a couple of these tiny Alexa devices.

And basically, they're there to keep our dog company when she's at home alone.

So keep the dog on the internet.

Yeah, so she asks it to play music when we leave so that the dog isn't totally lonely.

But I swear to God, they are spying on us.

Oh, they surely are.

They definitely are.

I can't remember the specific example, but I remember having a discussion about something relatively obscure with one of my kids or something.

And sure enough, started getting ads related to that.

And I was like, there's no way that's an accident.

Were you getting ads for formal methods or something like that?

Oddly enough, there are not a lot of internet ads for formal methods.

But yeah, it was something weird like, I don't know, some weird type of cheese or something.

I want to go back to your LLM example.

But the last complaint about IoT was they were notorious.

These devices were notorious for having virtually no security because there's no web interface that people would log into.

They didn't present a platform for a user.

They would ship with default passwords and garbage like that.

So it's just like, I just see these devices for my house anyway, as I'm highly suspicious of them.

Yeah, I've been very resistant to cameras, especially.

I just feel like those are inevitably going to get hacked.

The other one that I've had people-- having worked on in my house, I've had this pitch to me a couple of times of, you need electronic door locks.

And I'm like, absolutely not.

Yeah, I have the same response.

Why would I want something that can fail to lock my door?

Yeah, if you can pick my deadbolt, more power to you.

Yeah, exactly.

Yeah, I understand it's not 100% infallible.

I get that.

I understand that the field of locksmithing exists.

I understand that.

I'm comfortable with this technology.

I want to keep it.

I want to go back to your LLM zone.

This is what got you started on Edge Computing.

We've done a great job touring through all of the fuzzy, like, ambiguity of how people use the term Edge Computing.

We may have indicted Edge Computing as a marketing term, but maybe not.

I think Cloudflare is doing some interesting stuff.

That's my summation so far.

Tell me about the LLM stuff, because I was really surprised when you said, I want to talk about Edge Computing because it relates to LLMs.

How?

How does it?

So obviously, training the things is difficult to do locally.

I mean, you could do it, but you'd probably have to go out and buy a bunch of NVIDIA A100 or something like that.

But I simply wanted the use case of, I've got a pre-trained model, and I want to do what they now call inferencing.

So I want to take the model and ask it questions, but I don't want to send my data to Google or to Amazon.

Microsoft.

OpenAI.

OpenAI.

Not because I'm particularly paranoid that I'm doing highly sensitive things.

I just wanted to see how plausible it was.

And so as I said, I started messing around initially with the Transformers library from Hugging Face, and now I'm looking at this Ollama tool, which I really like.

Can you tell me how it works?

What are you actually running?

So you download-- what do you download?

Is it a massive-- I mean, these things talk about how many billions of parameters they have, right?

What are you actually-- what are you downloading, and what are you running?

Yeah, so the models that I'm using-- I've been playing with the Llama 2 model, which has, I believe, 7 billion parameters.

And just the last couple of days, I started playing with what I believe are called the Gemma models, G-E-M-M-A, from Google, which are kind of their open source--

I think people are referring to them as the open source version of the Gemini models, but I don't know for sure if there's really any relationship between the two things or not.

But yeah, so essentially what happens is you start these tools, and they download a bunch of parameters and save them locally.

And then you ask it questions through whatever interface.

In the case of the Transformers library, you're actually writing your own code.

Ollama gives you basically kind of like a shell prompt, kind of like you were sitting at a ChatGPT window or something.

And you can ask it various questions, and then it uses the parameters to go out and do a computation and generate the series of tokens.

Is it all on your machine, though?

It's all on your machine, yeah.

Once the parameters have been downloaded, everything happens locally.
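Once the weights are on disk, the whole loop can run offline. As a minimal sketch of that-- assuming the Ollama daemon is running on its default port, 11434, with a model already pulled (the model name and prompt here are just examples)-- Python's standard library is enough to talk to Ollama's local HTTP API:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local port

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST for Ollama's /api/generate endpoint (non-streaming)."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one complete JSON response instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

def generate(model: str, prompt: str) -> str:
    """Send the prompt to the local Ollama daemon and return the generated text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.load(resp)["response"]

# Example (requires a running daemon and a pulled model, e.g. `ollama pull llama2`):
#   print(generate("llama2", "Write a Python function that reverses a string."))
```

With "stream" set to False, Ollama returns a single JSON object whose "response" field holds the generated text; nothing in the exchange ever leaves the machine.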

And I'm running this on my Mac M2, and it's using the GPUs a little bit.

But it's a little slow.

What do you mean, slow?

Minutes?

Yeah, for the Transformers stuff, it's taking minutes.

But it depends a lot on-- there's certain parameters that you can set, temperature, and certain probabilities and so forth that determine how quickly you get an answer back.

But it's between minutes and probably 20 or 30 seconds with Ollama for the types of questions that I'm asking, which are mostly simple code generation type problems.

What's the number of tokens you're getting back?

I mean, if you're running it locally, you're not getting charged by the token anymore.

So you could potentially generate a huge amount of content back, but then does the accuracy decline?

The experiments I've done so far have been pretty small.

And you can set the number of tokens that you want to get back so you can limit it to, say, 512 or whatever.

But yeah, I mean, there's two potential benefits to running things locally.

One is that other than the cost of the machine and your labor, you're not paying for a token like you would with one of the cloud models.

And also, you can generate more content and potentially use more context than you might with the OpenAI interface or something like that.
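Those knobs-- the output-length cap and the sampling temperature-- are ordinary generation parameters. A small sketch of the same local-inference idea using Hugging Face's transformers pipeline; the model name is illustrative, and this particular model also requires accepting Meta's license on the Hugging Face Hub:

```python
def generation_kwargs(max_new_tokens: int = 512, temperature: float = 0.7) -> dict:
    """Knobs that bound how much a local run generates, and how long it takes."""
    return {
        "max_new_tokens": max_new_tokens,  # hard cap on generated tokens
        "temperature": temperature,        # higher = more random sampling
        "do_sample": temperature > 0,      # greedy decoding at temperature 0
    }

def run_local(prompt: str, model_id: str = "meta-llama/Llama-2-7b-chat-hf") -> str:
    """Run one generation entirely on local hardware via the transformers pipeline.

    Heavy: downloads weights on first use and needs `pip install transformers torch`.
    """
    from transformers import pipeline  # imported lazily; it's a large dependency
    generator = pipeline("text-generation", model=model_id)
    out = generator(prompt, **generation_kwargs(max_new_tokens=256))
    return out[0]["generated_text"]

# Example:
#   print(run_local("Write a Python function that reverses a string."))
```

Since nothing is metered per token, the cap is really there to bound latency on your own hardware rather than cost.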

So this is possibly something people are calling edge computing.

Now, I can imagine a deployment scenario where-- because I don't necessarily need a bunch of these things to agree, because they're a little bit slower to produce answers-- I can imagine a scenario where I run a bunch of these all over the place for-- maybe I've got a website and you just want to interact with it.

I don't want to have just one.

They can maybe get overloaded.

It's not that quick to respond.

Users understand.

They submit a question or a prompt, and the answer will come back bit by bit.

So maybe the edge here is I run a bunch of these all over the place that are closer to my users.

Or that's not what we're talking about.

You're talking about I'm running this on my machine.

It is a large computation that's running on my machine.

Yeah, I'm running it on my laptop.

But if you start doing searches for these things or related things, you will hit links that talk about edge computing.

And I think they are doing precisely what you say, which is take code generation as an example.

Say that you're a software engineering firm and you are really intrigued by what you can do with GitHub Copilot, but you are reluctant to upload your code to Microsoft.

And that may be unreasonable paranoia, but keeping things local does have its appeal.

So you could buy a fairly beefy server and run that in your data center or somewhere in your facility.

And what you'll see is that there are certain companies, Lenovo being one of them, that are starting to sell hardware specifically for this sort of use case now.

So specifically for AI inferencing, meaning using a pre-trained model to get answers-- rather than buying a server to train a model, you just want to get answers, but you want to get them without having to send your input, your questions, to a commercial cloud model.

So maybe the smart home of the future, everybody has one of these Lenovo servers.

They've got a server rack in a closet somewhere.

It's got a pre-trained model in it.

And they just ask it questions throughout their lifetime there-- whoever the people are who inhabit that house.

They're asking their trained model questions.

And over time, every house gets slightly weirder as the model gives different answers and develops a flavor for the house.

Yeah, or another possibility is you have a bunch of cameras at your door and you're running a local edge server that does image processing and does facial recognition.

Jeez.

There's a level of paranoia involved in this example.

Sorry, go ahead.

Well, right, but the upside to this is that you're doing facial recognition on this local-- on this server in your house that doesn't require you to send that data up to one of the clouds.

So wait a minute.

Can you make a model that can detect solar leasing sales people at my door?

My dog is very good at detecting solar leasing sales people and barking at them until they go away.

But could you make a model in case-- I mean, my dog's getting old.

Yeah, in fact, that's probably-- there's probably a startup out there right now developing AI guard dogs that will recognize sales people coming to your front door and bark.

It's the no soliciting sign of the future.

You know what they do?

They carry a clipboard.

You see someone with a clipboard, you think, oh, they must have some official business.

And I have been trained-- my own model is, oh, there's someone with a clipboard.

Get away from my door.

Yeah, clipboard, shirt with a logo on it.

Any of those things pretend I'm not home.

Sometimes a high vis vest.

But you could also see it having your neighbor Bob is at the door or your daughter Sally's at the door and seems to have lost her key or that kind of thing.

Yeah, magically unlock it using the magical locks that we don't even want to have.

Right.

In high school and college, I hung out with a lot of people who were in the hardcore straight edge scene-- a lot of bands, a lot of friends were in hardcore bands.

And they would argue about what straight edge meant.

Similar.

I think it's the whole concept of the edge.

People think, oh, that's a defined line.

You step over that line and that's the edge.

This is a bad analogy.

I immediately started thinking of the U2 guitarist also.

Oh, the edge.

That was a good Minor Threat.

OK, well, we did a great job muddying the waters on this one.

Yeah, I guess if anybody out there knows what edge computing is, feel free to drop us a line.

Yeah, send us an email, podcast@picturemecoding.com.

This has been Mike and Erik on Picture Me Coding.

Thanks so much for tuning in.

We will see you next week.

See you next time.

Bye-bye.

Bye.

[MUSIC PLAYING]