Containerd with Phil Estes
You're listening to DevOps and Docker Talk, and I'm your host, Bret Fisher.
These are edited, audio-only versions of my YouTube
Live show, which you can join every Thursday at bret.live.
This podcast is sponsored by my Patreon members.
I'd like to thank all the paid supporters that make this show possible.
You can get more info and follow my updates on all the content
and open source I'm creating at patreon.com/bretfisher.
And as a reminder, all the links for this show, the topics we discuss,
as well as the links I've already mentioned, are available
on the podcast website at podcast.bretfisher.com.
Hey listener.
If you get anything out of these shows, I'd love a review or
even just a rating in iTunes or your podcast player of choice.
If you don't know, podcast players often recommend new podcasts to people; it's
kind of how people discover stuff, and the way they discover it is through the rating system.
And I don't have a lot of ratings right now.
Last I checked on iTunes, I think I only have
five or so ratings, which is crazy considering I get over 10,000 downloads a month.
So if you could do me that solid, I would really appreciate it,
so we can get more listeners access to this content.
Thank you so much.
This episode is from September 2021, when I had friend and repeat guest Phil Estes on the show.
Now, if you don't know Phil, he used to be a big deal at IBM, and now he's a big deal at AWS.
He's a principal engineer with the container compute team.
A lot of his open source work revolves around Docker, runtime security, and now containerd.
So we get an update on the containerd project itself,
plus other projects related to containerd like nerdctl and Lima.
Then we wrap up the show talking about OpenSSF, a new industry
consortium of companies all focusing on security in cloud native.
So they've got some neat tools and some other deliverables for us in the community to
take advantage of, and it's all open source, which is great.
I think we could definitely use more friendly, easy-to-use,
up-to-date security tools for us mere mortals.
And along the way, we answered a ton of questions from people in chat.
Again, if you've never seen the live show, it's on Thursdays;
you can show up at bret.live and ask my guests questions.
And that's how this show gets created.
For the show today, I'm very excited to welcome a friend of mine,
a Docker Captain, and so many other things: Phil Estes. Thanks for coming.
Hey, great to be here, Bret.
As usual.
And this is your third time on this show, so thanks again for being here.
We always talk about containerd.
We're definitely going to get into that, but we've got some
other topics I'm excited to get into today as well.
So we've got what I feel is a jam-packed hour of updates and conversations.
Phil, give us a quick update.
You recently moved to AWS, and you're on the cloud container team.
Yeah.
Yeah.
Basically a year ago I was starting the interview process, thinking
about other options in my career, and I had some friends at AWS.
Long story short, I joined the container organization here in January.
So I guess I'm another one of those COVID era great resignation people.
I think a lot of folks assumed that after 20-some years at
IBM, I'd just stick it out there for the rest of my career.
But yeah, a lot of things came together to give me some ideas to try something new.
And the container org here has ownership of services that I think a lot of
people who use AWS will recognize: ECS, Fargate, EKS, and the registry, ECR.
So, it's a fairly large organization.
It also has some of the open source and Linux pieces in it.
So it's an awesome organization for the things that I care about.
containerd is heavily depended on by several of those services.
So it's given me the freedom to keep working on containerd
and bring some of my open source work here to AWS.
So it's been a great eight months so far; a lot of smart people here, and yeah, a lot of cool
ideas around using containerd and continuing to push the envelope in managed container offerings.
Honestly, to me that speaks not so much to AWS themselves,
but to the fact that containers won; they're just kind of everywhere, ubiquitous.
I mean, in a lot of places you may not even know you're running in containers.
I'd never really thought about the fact that now I'm going
to have to bug you with every AWS container question.
Because yeah, I don't think a day goes by that I'm not either dealing with a student
or a client or my own stuff on AWS, in some shape or form of a container thing.
Yeah, pervasive.
Yeah.
It's definitely a huge shift career-wise,
but like I said, I think it's pushing me in new ways.
There are ways at my age to get stuck in the mud, so to speak.
And so it's forcing me to learn new things. At AWS, I think it's well known that they've got this
service team model of two-pizza teams, and whether or not that's always exactly how it operates,
of course, what it means is that there's a lot of ownership and autonomy
that I never experienced at IBM, which again is just run in a very different way.
And so it's interesting because people are operationally
minded, they're thinking about the business.
I guess in some ways I could have chosen to work for a startup
and had a similar chance to explore all the aspects of operating a business.
But in a sense, AWS does that at the service team level.
So that's been good for me, getting exposed to things that I just wasn't in my former role.
Yeah, it sounds exciting.
For those that haven't seen you on the other two shows (one of which was, I think,
the only time, or maybe one of two in all my years, that I've had a guest physically here;
we did one where you were actually here on a little vacation trip), and for those
that maybe don't know about your involvement with containerd, give us a brief recap.
Sure.
The super cool OGs were using Docker in 2013, or who knows when.
I think if you worked for any cloud vendor or anyone related to that space,
2014 was the year that everybody said: okay, we've got to figure out
this Docker thing, if they hadn't already been involved.
And so at the time, from an IBM perspective, we were looking
at how to understand the huge explosion of Docker and containers.
So I got involved with the Docker project and became a maintainer there, and then
containerd and runc and OCI were all formed soon after.
containerd was spun out of Docker, and I won't go into huge detail, but in late 2016 the idea
was: let's make containerd a more full-fledged container runtime, let's add registry interactions,
so you could use it standalone or with Kubernetes.
And by spring 2017 it had been donated to the CNCF.
So it made sense for me, since that's what IBM wanted to use, and obviously since then
almost every cloud provider's managed offering is choosing to kind of skip
integrating at that Docker layer, which we know has a lot of great developer UX.
And obviously a lot of people have been talking about desktop and other components of that.
But if you're running a managed service, containerd is a great, more nimble
choice; it's got a Go API, so that kind of integration makes
a little more sense than trying to run Docker for your managed service.
So since 2017/2018 there's been huge growth in the consumption of containerd, used
in a lot of interesting ways by a lot of clouds. Yeah, it's been an interesting road.
Yeah.
For those that haven't heard me say it a lot on this show,
I'll just repeat and echo what you said there.
containerd is silently taking over the world of runtime management
because Docker is being used less and less itself on servers.
As we all update our Kubernetes clusters to newer releases that maybe prefer
containerd, or if we're using a distribution that automatically uses containerd.
And the Docker team has started to focus on developer tooling,
which I applaud; they're going back to their roots.
Not that they don't make a container engine, but they're now focusing on
a container engine for humans, versus containerd, which is
a lower level that's meant to be used by other APIs or other robots.
But we're here today to actually talk about how you can now use containerd locally,
and the different ways you can do that.
So we're gonna get into that in a second.
And by the way, you're going to be at KubeCon, so you're doing a containerd talk?
Yeah, there'll be several maintainers from containerd doing our usual intro slash deep-dive talks.
So yeah, I hope to be there.
Yeah.
For those that are able to make it in person or are gonna watch it virtually:
I think you guys, every KubeCon, or at least once a year,
have a containerd update kind of thing.
Yeah.
And for those of you that are not fluent in all of the layers of tooling, you
probably are using containerd today in some fashion and didn't even know it;
Docker Desktop and anything that Docker has is using containerd underneath.
If you're using Kubernetes, either Kubernetes is using
Docker, which uses containerd, or it's using containerd directly.
It might be using CRI-O, but that's usually just OpenShift.
So yeah, for a lot of us, it's amazing how you can just be a hobbyist sort of interested
in a project, maybe looking to do it professionally, and then suddenly
you realize that, I don't know, half a million devices are running your code.
No, not half a million: half a billion or a billion devices are running your code.
If you think about it, every place that's ever had Docker or
Kubernetes installed is essentially running lines of your code.
Yeah, That's scary.
I mean, in a sense it's crazy, because I feel like I went through 20 years of programming
where I couldn't claim anything even close to that.
And then to get involved in Docker, then containerd: it's pretty wild to think of the usage
explosion of those software stacks compared to anything I'd worked on in those 20 years.
Yeah, for sure.
And we thank you for your code.
But Marcos comes in with: in your opinion, Phil,
what's the area where containers still have some good opportunity to grow?
Yeah.
So I may cheat a little bit and say: separate from container runtimes.
I think the image format, or I may even step back from image
format and just say sort of the idea of artifacts and a registry.
I feel like that aspect is getting a lot of interest and focus these days.
If you join the OCI calls, there's a lot of people trying to figure out representing everything
from SBOMs to Helm charts, to OPA, I just lost what you call the rules for Gatekeeper.
Ooh.
I don't know.
Yeah.
Anyway, there are all these things where I think we've got this idea
of push and pull and storage and tagging and immutability.
I think there's a ton of room to grow there, because a lot of
people are finding it's really easy to shove things in a registry.
But I think right now we're at the very early edge of that: there are registries
with the flexibility to let you hack around and push all kinds
of different objects that don't look like container image configs and layers and blobs.
But I think what will necessarily have to happen is that there'll have to be some kind of
structure around that, to know what it is and how registries handle the life cycle of it.
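For a concrete taste of that hacking-around, here's a hedged sketch using the ORAS CLI, one open source tool in this space, to push a non-image artifact to an OCI registry; the registry address, file name, and media type are all made up for illustration:

```shell
# Push an arbitrary file to a registry as an OCI artifact
# (hypothetical local registry and custom media type):
oras push localhost:5000/my-artifacts/policy:v1 \
  policy.rego:application/vnd.example.policy.v1+text

# Pull it back the same way you'd pull an image
# (-a allows all media types, since this isn't a normal image):
oras pull -a localhost:5000/my-artifacts/policy:v1
```

It works because registries mostly just store manifests and blobs, which is exactly the life-cycle question Phil raises: the registry has no idea what that blob actually is.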
Signing and signatures are another aspect of it.
So I think that's an area where we'll see a lot of advancement in the next few years.
Container runtimes, I think we're already seeing it as far
as breadth of application; we see them all over the place.
I mean, everything from IoT to high performance computing to everything else.
So yeah, I'm not always great with crystal ball things, but with images I definitely think
we've gotten so used to this nice idea of an image reference and a tag and push and pull.
I think we'll see that model spread.
I mean, already Homebrew is using GitHub's container registry for their blob delivery
when you brew install a package.
Really? That's cool.
Yeah.
I've recently started to love GitHub Container Registry; I've been spending a lot of time with it.
Having all that stuff, I'm very bullish on GitHub: Codespaces, GitHub Actions, you name it.
I'm into all those things.
So yeah, bringing the container images to the code really speaks to me. I wish Docker
would have done it first, but having GitHub do it is just the evolution of the industry:
images as a natural extension, as a default artifact store for everything
you're doing in code or related to code.
I love it.
I love every part of it.
And those like you and I, we're both a little gray-haired, we've been around
a while, and we know it was way worse back then.
It was just stuff everywhere and decentralized and just confusing and so much complexity.
And yeah, the secure supply chain stuff to me is really interesting, because I very
quickly fall off that cliff of "this is too complicated, no one's gonna use it," and I just walk away.
And traditionally security tooling has had a high barrier to entry.
And so anytime you talk to me about built-in features, like when containerd
dropped the feature for encrypting images, image stores, local image caches or
whatever: I haven't used it, but I was excited that it was a built-in feature.
It would also be exciting if it was just on by default.
I don't know why, but I'm always excited when security advancements become
defaults, like the whole HTTPS-by-default movement, which I was a huge fan
of, because making security optional means 90% of people aren't going to do it.
So whenever we hear about things like secure supply chain in registries,
or container runtimes only running images that meet certain requirements: I heart that.
We've got more questions for you.
When will containerd run natively on macOS 10, well, now macOS 11?
Yeah.
There are multiple answers to that.
The code actually compiles on macOS; in fact, we have a GitHub Actions check
for that on every PR to make sure we don't break the build.
Because there has been some use of the client on macOS; just
like with Docker Desktop, where the docker command is compiled and running there.
But I assume when people ask that, they're wondering: okay, that's great, but what
does it mean if I can't run containers there, because there's not a Linux kernel
or some set of isolating capabilities in the macOS kernel?
If you look at the containerd repo, I'm not good at memorizing PR numbers off the
top of my head, but if you just search for macOS or BSD, there've been a couple
recent initiatives. One is FreeBSD support.
And secondly, there is a set of PRs, in fact that very top one.
So there's the Darwin snapshotter; if you know Linux architecture and
OS tuples, Darwin is the OS name for macOS because of the kernel name.
So that's a snapshotter. There's been a lot of discussion
about how to do mounts properly, because the overlay feature that most
of you rely on, whether you know it or not, isn't available in the Darwin kernel.
So anyway, you can search through those PRs.
There's a lot of interesting work being done to actually have
rudimentary container support natively on macOS.
So it's coming; there are people who are interested in it and pushing it forward.
So I think you'll actually see that.
That's neat.
Are we talking about putting a Mac executable binary in a container image
and being able to run it directly on Mac?
That definitely could be one use case.
Yeah.
I'm now wishing I had done my pre-show reading to remember. On
Linux you have runc, and runc is what's actually starting your process.
In this case, the submitter of that PR also has a tool.
I believe it's called runu, or something like that, which is the isolator that he's put together.
And I thought, okay, it had some kind of Linux compatibility. Someone in the chat
probably has the freedom to go confirm that or correct me between now and the end of the show.
But yeah.
Because yeah, it's been a question since the invention of Docker: hey, can I run Mac apps on this?
And when you get into the layers of abstraction and the little tiny VMs
these things are all running in now, it's like, what's really on my Mac?
And so for those that are watching that are maybe not really up on all
the layers of this technology: I've not seen an OCI-compliant
image and runtime run a Mac-built binary directly on a Mac kernel.
Doesn't mean we can't do it.
I don't pretend to understand all the complexities and nuances, but it's
just not been a thing that Docker themselves has ever tried to solve, that I'm aware of.
And off the top of my head, I can't remember anyone attempting to do that
in open source anyway. But we have Windows binaries.
So no reason why not, architecturally I guess, but it would be work, lots of work.
And you've got to ask yourself, is that necessary?
But that's a great question.
At all parts of the stack this question comes up, from people that are
interested in the lower levels, kernel API calls, stuff like that.
And then there are just casual users that are like: why do I need Docker Desktop to run Docker?
When I do a brew install docker, why don't I get the engine, and why doesn't it work?
Because by default that's only gonna install the client, the command
line, which expects you to have a Linux engine running somewhere else.
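For anyone who hits that, here's a rough sketch of pointing the brew-installed client at an engine elsewhere; the host name is a placeholder:

```shell
# The docker CLI is just a client; aim it at a Linux engine over SSH:
export DOCKER_HOST=ssh://me@my-linux-box
docker ps

# Or manage it as a named context instead of an environment variable:
docker context create my-linux-box --docker "host=ssh://me@my-linux-box"
docker context use my-linux-box
```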
Maybe, if there's any advancement there in the future, you and I will have to have a special show
where we just break the news for everyone.
It's not outside the realm of possibility.
I would have thought that GUIs in containers were sort of an
ugly hack on the side that only Jessie and other badass hackers could do, but we
just had Nuno, fellow Captain, the Corsair, on the show like a month ago, talking about Windows
11 and WSL2, and using GUIs from inside Linux, GUIs from inside containers.
And it worked.
He ran a browser.
I think he even had video and sound working.
Every time I think something's not possible, not going to happen, it ends up happening.
So let's see what we got for next question.
BSOM has been a trending topic recently.
Just wondering where the focus is at.
I think you probably mean SBOM; either that, or you're talking about
something that I know nothing about.
A software bill of materials, I think, is probably what you're asking about.
It's not directly in my line of sight, but one of the things I mentioned to Bret as we were
preparing: I was part of the advisory council for the OpenSSF, which was founded last year.
So, the Open Source Security Foundation, which has a set of working
groups focused on various aspects of open source security.
Good things you probably already knew were going on from a Linux Foundation
perspective, many of them are going to be consolidated underneath the
OpenSSF, which is still basically in bootstrapping mode, but its formal
budget and governance will all be hammered out in the next few months.
Anyway, one aspect of course of open source security is supply chain security.
And software bills of materials, some of the formats around them, how to store them,
and how to associate them with container images in the container world:
there are people heavily involved in those discussions.
In fact, it even comes up in the OCI.
There are people who are involved in that from VMware who are
working on some of the standards and specifications around that.
Again, for people that haven't heard that term: in the manufacturing space, a
bill of materials is a well-understood concept, in that you can create an exact list of
everything that went into my car or my microwave. A software bill of materials is the same idea.
Think of it this way: this container image has some application, but it also has 30 libraries from
some distro that were pulled in when I created it, plus these other binaries and tools.
And so you can think of an SBOM as a document that helps you know
exactly what versions of those things went into your container image.
So for those that are working on improved image scanning and tracking of CVEs, this is
another aspect of helping solve some of those challenges in supply chain security.
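For a concrete feel, here's a quick hedged sketch with syft, one open source SBOM generator in this space; the image name is just an example:

```shell
# Generate a software bill of materials for a container image:
syft alpine:3.14

# Emit machine-readable output you could store alongside the image
# (syft also supports SPDX and CycloneDX formats; check its docs):
syft alpine:3.14 -o json > alpine-sbom.json
```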
So again, there are people smarter than me working on
SBOM technology, but I think you'll hear more about it.
I don't know how many people follow that space, but the White House put out an
executive order a few months ago that has a section of initiatives that
will definitely directly affect anyone who provides software to the U.S. government:
they'll need to think through some of these topics, like how to provide an SBOM, how to handle
supply chain security in your pipelines, and how you source the binaries
that end up in something that you deliver to a government agency.
And people have obviously dealt in that space before, anyone who's done FIPS 140 or some
of the other government security standards. But again, I think we'll be hearing
more about this, because obviously security matters to more than just the U.S. government.
Look at any of the big news stories of the last several months on security issues.
This will become more and more important for
any business, and SBOMs I think will be part of that.
Very cool.
I'm glad you gave that little summary on it too,
'cause a lot of us are just getting into that space, myself included.
This last year I've been on a project that's related to SOC 2 or whatever, and FIPS 140,
and I'm dealing with Contour and Envoy needing to use FIPS-compliant BoringSSL.
Oh yeah.
Yeah.
Getting into that space and having to deal with a lot of those requirements
is not always fun, but worthwhile in some cases; I always learn something.
So, next question: ECS, does it use containerd?
So yeah, ECS today is still built around Docker.
Yeah.
There are plenty of discussions about, you know, now that the other managed
offerings from AWS rely only on containerd, can we do that for ECS?
Yeah.
I think that'll be an ongoing discussion, but in ECS you're managing your EC2
instances, and there's an expectation by many of those users that the Docker client is there.
They're maybe running other software against it.
It's a little bit trickier, thinking about how Fargate and
ECS differ in where your responsibility ends and AWS's starts.
So in Fargate, because we take responsibility for the
AMI that's running and all the other components there,
it was easy for us to say, well, we're going to use containerd to run your tasks.
In ECS, you may be very much hands-on and expecting some
of the Docker workflows to operate as they do today.
So it's a little harder to make the case that, oh, actually, Docker is out;
there, figure out how to do what you were doing in containerd.
Yeah.
And you're right; full disclosure, right, if you're running Docker,
then it is running containerd for you.
So technically the answer is always yes.
But that's a great point you're making, because I just realized
I have a customer that's running Jenkins on ECS for their worker nodes.
And it expects the Docker daemon to be there.
If it just suddenly switched to something else, all the downstream tooling would have to
change to be flexible enough to either use containerd directly or use the Docker API itself.
Yeah.
And Docker still works. I recently found out, actually, related to the
Docker stuff, that they're no longer supporting, like, a Red Hat paid version.
I think this is a tangent, but Docker Inc.
doesn't directly support the engine runtime on Windows Server.
That's actually Mirantis now.
And they don't support Red Hat, even though you can install it.
They used to offer a paid version; I remember way back in the day, Solomon would say: paid
versions of Docker for paid versions of OSes. But they don't have a Red Hat version anymore
either, and evidently I guess they didn't get a lot of requests, so they just didn't bother
with it once they did the split.
So I didn't know that.
And I've had an interesting internal dilemma around this: Docker loves the engine.
That was their first invention.
But yet it's not running everywhere anymore;
Docker themselves are running it in fewer places than they were three years ago.
And I don't know how to feel about that, because they're getting out of the server game;
they're wanting to be developer tooling and all this stuff.
But at what point do they say, well, you know, for the Docker engine on Linux servers,
we're maybe not going to support as many variants, or we're not going to bother?
Because I don't think they've ever done BSD, for example, and that may never happen.
It's just an interesting world now, where we're maybe not always going to use Docker
everywhere, and we're going to have to learn other tooling, which is cool.
Like, we can talk about nerdctl today.
And containerd has its own little lightweight command line tool, which is pretty cool.
I actually was going to do a talk at DockerCon, or KubeCon rather,
on shifting from the Docker CLI.
Yeah.
We just kept canceling the workshop, because when they weren't doing in-person
ones, we didn't want to do a virtual workshop that had to be recorded.
We're like, well, how good can a recorded workshop be when you can't
ask questions and interact with people and help them with their problems?
So, long story longer: yes but no, that was the answer.
Yeah.
To your point, what's interesting about your statement, that Jenkins needs
this or this tool has that expectation of the Docker socket:
because containerd has become fairly standard in a lot of managed service offerings,
we've seen a lot of vendors figure out a path forward that doesn't
rely on the Docker socket or any expectation that it is the Docker engine.
And so I think as that continues, there'll be an easier path someday
for ECS to say, oh, by the way, here's containerd instead of Docker.
So we'll see.
But I think, with everything you said those last few minutes, people are recognizing
that the Docker socket isn't the only way to do container runtime things;
every vendor's having to figure out how to support CRI-O and containerd and Docker.
And I think we'll see a world in which it's easier to switch
runtimes and still have all your tools and your expectations work.
Yeah, for sure.
And Docker themselves are doing that, because their latest work this last year and a
half on Docker Compose, making it a Go binary and adding it as a plugin to the
Docker command line, means it now deploys to clouds. People give me a lot of questions around
the future of Swarm: you know, managing Docker servers remotely, how do I deploy,
Docker Machine is deprecated, so how do I create machines that easily enable
Docker remotely, and all this stuff.
And my message has started to be that Docker is getting out of the game of remote sockets directly.
That's not the model anymore; they're adding
these ECS and Azure ACI integrations, and hopefully more plugins, to talk to the cloud.
The cloud API sits in the middle between your
local Docker and whatever thing eventually runs the container.
Instead of Docker having to talk to Docker, Docker is
just talking to other APIs in the cloud or on servers.
And I love that, because I find it way more flexible.
It seems a much more open and flexible model for scaling to all the
things, especially when you now have less than a hundred employees in the company.
You can't do it all, can't build it for everything, so use everyone else's tools.
Another good question.
A little one we could spend forever on.
Is there any disruptive work being done in the CRI or OCI space?
Have you seen anything lately that you might share from some of the meetings and whatnot?
Yeah.
I'm trying to decide what counts as disruptive.
Anything significant that's different, yeah.
'Cause for a lot of us, once the standards were created, like the
Dockerfile was standard and then the image format was standard,
we're like, ah, okay.
And we all moved on.
We haven't really kept up.
Yeah.
I hear about other types of artifacts, the registry standard now, stuff like that.
Yeah.
So two things that may be interesting.
One is of interest to me, although I've been bad about
not getting as involved as I probably could or should be.
So, I did the initial user namespaces work in the Docker
Engine, and containerd supports user namespaces in a similar way.
And then Akihiro and some Red Hat folks have made huge strides in
rootless: rootless Docker, rootless runc, rootless containerd, rootless CRI-O.
What hasn't happened is connecting the dots from all of that work to Kubernetes in
a formal way, so that through the CRI you could say: this pod,
I want it rootless, or using this user namespace range.
So it has effective root, but it's not really root; all the same user namespaces concepts.
That KEP, a Kubernetes Enhancement Proposal,
has kind of sat through a lot of versions of Kubernetes.
And that's what I mean; I feel like it's something I may be able to help with, given my background.
And I just haven't been able to find the time.
But there are some people starting to move it forward.
One of the big kernel gaps that I would say has kept user namespaces
in this weird spot for years is the idea that my file
system mappings don't naturally follow my user ID mappings.
And so therefore I have to play games with chown and
chmod to figure out how to make these things line up.
Well, Christian Brauner from Canonical finally got idmapped mounts into the kernel,
so it's a merged feature of the Linux kernel.
And now we have a PR in containerd, and I assume CRI-O may have something similar, that
will allow you to use these idmapped mounts to get this natural alignment:
hey, my container runs in this weird ID namespace of 100,000 to 164K,
and now my container image will have that same ID mapping.
And I don't have to do a bunch of weird things to get the file system in the right state.
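To make that mapping concrete, here's a rough sketch of the classic pre-idmapped-mounts approach, Docker's userns-remap, which is where the chown/chmod games come from; the user name and range match Docker's common defaults but are assumptions here:

```shell
# Grant an unprivileged user a subordinate UID/GID range (100000-165535):
echo "dockremap:100000:65536" | sudo tee -a /etc/subuid /etc/subgid

# Enable remapping in /etc/docker/daemon.json:
#   { "userns-remap": "default" }
# After restarting the daemon, root (UID 0) inside a container is UID 100000
# on the host, so files the container writes are owned by 100000 on disk,
# which is exactly the misalignment idmapped mounts fix automatically.
```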
So I hope features like that will finally push us over the
hill of getting Kubernetes support for user namespaces.
And that involves the CRI, involves the runtimes, involves kernel dependencies.
And just like anything else with tentacles reaching
everywhere, it takes a while to get all those pieces to the right place.
But I think that's an interesting bit of CRI work that's taking some people several
years to get moving.
Then on the OCI side, like you said, Bret, there's a sense in which: okay,
we standardized these things, we can all relax now, sit back, and celebrate.
And there hasn't been a ton done to push things to the
next level: what's the next hurdle, or the next kind of breakthrough?
We're actually hoping to have an OCI summit. Well, we are having an OCI summit;
months ago we were really hoping for our first big
in-person OCI get-together next week in Seattle, but it's going to be hybrid for sure.
But one of the topics: someone wrote a set of proposals that were loosely termed OCI v2.
Not that it was really a new spec or anything;
it was mostly a collection of ideas.
We've depended on this way of doing things: we have these layers that are tar.gz'd,
they have this config object. What are the next big ideas
and better ways to store file system contents?
And so it's actually on our agenda next week to try and get some
people who care about those distinct topics to see, can we get some
working groups going that actually take these to the next step?
I would definitely call some of those very disruptive ideas: thinking about,
it's been really great to get to where we are, but what are some things
that are keeping us from taking containers to a new set of capabilities?
Yeah, and semi-related to that, for those of you that run a docker build
command: a large portion of my consulting work is just related to
Dockerfiles and Docker builds, and I've learned to love BuildKit.
And I know it's not the only build tool out there, but I think
some of the other tools even use BuildKit underneath.
And one of the things I'm starting to see is that
we don't really think about versions of Dockerfiles, right?
Obviously the Dockerfile and the image it produces are, I think,
technically two different standards, right?
I'm not sure the image standard includes the Dockerfile standard.
I don't actually know.
Yeah.
But you know, recently I think Docker announced the heredoc syntax as
an experimental option in BuildKit for improving Dockerfile use, especially
now that we're all getting pretty complex Dockerfiles when it comes to multi-stage
builds, instead of dumping everything in a gigantic RUN command with a bunch of
backslashes and double ampersands and all this stuff.
I feel like, okay, we've had this idea for seven years, and you're starting to
see little factions create other ideas about how to do this.
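As an aside, here's roughly what that heredoc syntax looks like; at the time it was gated behind the labs channel of the Dockerfile frontend, so treat the syntax line as the part to check against current docs:

```dockerfile
# syntax=docker/dockerfile:1.3-labs
FROM debian:bullseye

# One readable heredoc instead of a run-on chain of && and backslashes:
RUN <<EOF
apt-get update
apt-get install -y --no-install-recommends curl ca-certificates
rm -rf /var/lib/apt/lists/*
EOF
```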
Recently I just saw a tool, I think with both a cloud service and an open source
component, that's basically re-imagining what the Dockerfile could be: a different way
to build the same compliant image, but from a totally different build object or whatever.
And what I'd hate to see is a fracture of the ecosystem, right?
Where we're all now like, well, I'm building with this
unique tool, and that's the only tool that can build it.
But we do need to evolve, and the same thing with the image, right?
There are things that I want to do in images that
just don't really work today, the way they are.
And the idea of sometimes moving volumes around with
the image format, it seems hacky to me, but I do it.
And I actually have a couple little simple tools to help people with that.
I think there's so much room for growth, and it's just exciting to think about
five years from now, how much more the image standard will be the standard artifact for everything.
I love the fact that brew is using it, and I'd love to
see other packaging tools use that as the standard.
I'm imagining a day where, whether you're using npm or Composer or Bundler or
whatever, in the background the artifacts are really just OCI-compliant images.
That's like a dream state for me, because as someone who deals
with the supply chain of all these various tools, each with their
own little quirks and limitations, getting us to settle on something would be awesome.
So we have a ton of questions,
why don't you bring us up to speed?
We talked a little bit about the Docker announcement, Docker Desktop going paid.
There was a question in here asking about the future of
Docker: you know, are they dead because they're going paid?
You know, I think they're actually on the path to survivability.
I think that honestly, the Docker Desktop thing, even though a lot of us didn't
necessarily like having to pay for software that we'd been using for free,
is probably the best approach for them.
Long-term, if they can get enough people to pay for it, because I don't
know that they had another path to profitability.
We all want a Docker that exists; we don't want it to not exist.
So that's my short answer to that.
But for you, Phil: we talked about the idea of Lima
and nerdctl, seeing some of the contributions that are there.
I'm going to, I always mess up his name, Akihiro.
Thank you.
Akihiro, who's just a mainstay in the low-level container
community; a ton of PRs and open source work there.
Thank you.
Thank you to all of you that are volunteering on the containerd project.
But you've now got other stuff; can you tell me a little bit more about that?
We've got Lima, we've got nerdctl, all these things
that are scoped inside of the containerd organization.
Yeah. I'll first say, from a governance perspective, since you brought that up: we
ended up creating a concept of core sub-projects and non-core sub-projects.
The difference is that containerd the engine, and the snapshotters and sub-projects
that are incorporated directly into the container runtime, are all core projects in the
sense that our support statements and stability statements all apply across those projects.
But obviously there's a lot of interesting work that people are doing that,
from time to time, just makes sense to associate with containerd.
And so the stargz-snapshotter was one of our first ones, although we actually had a
Rust implementation of ttrpc, which is a lightweight gRPC implementation
that is used by the containerd shims to talk back to the engine.
The Rust implementation came from, I believe, a cloud provider in China
that obviously has backend services in Rust they wanted to have talk over ttrpc.
Anyway.
So there are these other components that maybe the core containerd community didn't write or develop.
But we think they're valuable to be in the sphere of containerd.
And so we call those non-core sub-projects.
They're still maintained; all the containerd maintainers have
to agree and approve, based on the governance, to include them.
So nerdctl and Lima, or nerdctl specifically; I think Lima is still a standalone thing
Akihiro created.
If you think about it, Lima is some of those pieces of Docker Desktop
that people don't talk about as much; I think on your show a few weeks ago,
and in the list you made on Twitter, all these things: how the VM gets created,
what's inside of it, how it talks back to the host.
Lima handles all of that aspect so that you could have nerdctl on macOS and have
that same feeling of, I'm able to interact with my container runtime;
I don't even have to think about the fact that it's inside of a VM.
nerdctl specifically, like you're showing here, is just basically
a client for containerd that gives you the same UI/UX as Docker.
So all the Docker pull, push, run, ps, the various query commands.
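For a feel of how close it is, here's a hedged sketch; the image name and port are just examples, and nerdctl build assumes the buildkitd service is running:

```shell
# nerdctl mirrors the Docker UX, so muscle memory mostly carries over:
nerdctl pull nginx:alpine
nerdctl run -d --name web -p 8080:80 nginx:alpine
nerdctl ps
nerdctl logs web
nerdctl build -t myapp .   # backed by BuildKit
```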
The cool thing is, because Akihiro built it around containerd, some of the other
features that have associated themselves with containerd you get for free.
So his rootless mode support, the encrypted container layer work that has a containerd plugin,
and I think he also mentioned the stargz-snapshotter: these can all be built into nerdctl.
So instead of a complicated "how do I get this piece working
with containerd," you get it prepackaged with nerdctl.
So you get the Docker-like syntax, and you also get some containerd features that are cutting edge.
So yeah, I take no credit; Akihiro, like you said, has done some amazing work over the years.
And we were starting to get a lot of people saying: well, the ctr tool,
which is our little added debug client, just isn't enough; I'd like to use
containerd more as my container runtime, but I need more Docker-like features.
And so nerdctl was really the answer.
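For anyone curious what that gap feels like, here's a rough contrast with the low-level ctr client, which typically needs root and fully qualified image references:

```shell
# ctr: fine for debugging, but no port publishing, no Compose, no build:
sudo ctr images pull docker.io/library/redis:alpine
sudo ctr run --rm -t docker.io/library/redis:alpine demo
```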
Yeah, like building.
Yeah, exactly.
It includes BuildKit. When you look at other things like Podman and CRI-O and other
tooling ideas, and maybe those are bad examples, but they usually keep building separate.
So much of what we've all been used to is image management,
running containers, managing containers, and then building those images, all together;
from the Docker days, we're spoiled by being on one command line, one tool,
a lot of us doing it all day long.
I mean, not everyone's going to be into one single tool that does it all, but it's kind of
neat that this is technically a bunch of separate tools just wrapped up in a main tool.
And technically, I guess that's really what Docker is doing in the
background anyway, 'cause it's BuildKit inside of Docker.
neat that this is technically a bunch of separate tools, but are just wrapped up in a main tool.
And technically, I guess that's really what Docker is doing in the
background anyway, cause it's build kit under, in inside of Docker.
But I love this project.
I saw it before the Docker Desktop changing licensing and all, I saw him promoting this on Twitter.
I think.
I instantly thought it was fantastic and I haven't really used it
because Docker Desktop is my go-to my jam, but I have set up Lima.
Lima is a Mac specific, but that manages the VM and auto helps.
It helps me get uh, nerdctl and everything else there.
And I have played around with it and it does as advertised it works.
And I can only, I can see that, that the functionality and the edge cases
that may be Docker Desktop are still better at can slowly be solved over time.
For those of you that missed that episode.
If you just go back on this show and in YouTube, if you go back in my videos, I don't know, maybe
a month or three weeks when they announced it, we did a whole show on like, if you're on WSL2 on
windows, if you want to try to avoid Docker Desktop, or if you not so much that you want to avoid
it, but if you're you can't pay for it or your company won't pay for it and you're forced to not.
How to get Docker just to run directly WSL2.
If you're on Linux, you just run Docker and nothing's changed, because Docker's
licensing doesn't change for the engine or the command line.
It's just for Docker Desktop.
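A rough sketch of that direct-engine route, inside a WSL2 distro or any Linux box (Docker's convenience script, shown here, is the quick way; check their docs for your distro):

```shell
curl -fsSL https://get.docker.com | sh
sudo service docker start    # WSL2 often lacks systemd, so use the service wrapper
docker run --rm hello-world
```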
And then on Mac, Lima specifically is a Mac-focused thing to get you that Linux machine,
until the wonderful utopia of us just running everything directly on Mac.
So, we are running out of time, but another topic that I wanted to get into is the OpenSSF stuff.
Can you give us a high level real quick of what that's about?
And then we'll do some last-minute questions.
Yeah.
Like I said, it's a foundation underneath the Linux Foundation,
similar to OCI or CNCF; it was just put together last year.
Founding members: Google, GitHub, Microsoft, IBM, and maybe one or two others that I'm forgetting,
with this idea that open source security, the secure supply chain, and the topics
around secure development hygiene, et cetera, are all just critical in today's world.
And therefore we need a place where people can collaborate together.
So those six bullets are kind of the main working groups that exist today.
So if you're interested in tooling, or best practices, or how critical
projects like OpenSSL get security funding, these are all active working
groups that have leadership and regular participation across the industry.
And again, it's fairly new, but out of it are coming
interesting projects: the Scorecards project, Allstar. The blog has some
interesting updates on what's happening within the OpenSSF.
I think this is a great place to get involved for people who care about open source security.
It's fairly young, and I think that there'll be plenty of ways for people to get further involved.
And yeah, it's been interesting.
I got involved because IBM asked me to be a representative, and even after
I joined AWS, AWS said, yeah, definitely continue your involvement there.
It's a set of initiatives that every enterprise company is going to care about,
whether or not they formally join or participate.
Joining OpenSSF is obviously a decision for each company to make, but again, I think it will
be a place where hopefully a lot of good collaborative work will happen around these topics.
Like SBOMs and secure supply chain. And OpenSSF has definitely gotten more interest, with people
wondering how to get involved, because of the White House executive order from a few months
ago; companies realize they'll have to have answers for this. And yeah, you can get a cool...
And they have swag.
Yeah.
Yeah, I'm excited about this, especially the fact that part of it isn't just policy;
it's also going to result in some tooling, because I think for a lot
of us on the front lines, the tooling is how we implement those standards.
And current tooling is either paid-only, or it's adhering to whatever that
vendor's standard is, not necessarily a common set of practices.
The conversation recently, in one of our little chats in the Discord
community, was around security scanners that are incorporating availability rules:
like complaining about a Compose file as a security concern
because you don't have CPU limits or memory limits.
And my argument was like, well, I consider that an
availability thing; I don't really consider that a security thing.
And I don't like it when security tools start to tell me
about other things that aren't directly security related.
Cause I like to keep my security stuff very focused and very important,
and I don't want it to get muddled with a bunch of other random stuff.
And we went down a great discussion of the pros and cons of
that, and maybe how it's actually better anyway that it's there.
And I just like seeing organizations like this, a group of
people from all sorts of backgrounds at different companies, coming together and actually
iterating on ideas about how we're maybe going to implement stuff like this.
Allstar, for those of you that didn't see it: the Allstar tool
is something that was just a topic a couple of months ago
with a client of mine that has dozens and dozens of repos in GitHub.
How do they standardize on making sure that each repo has common
patterns and practices and settings, not just security-related but
largely security-related, to make sure that everyone's doing the right things?
They need more bots and automation to basically figure all that out.
And Allstar is now going to solve part of that problem.
And yeah, it's pretty exciting stuff.
So I put the link in chat for everyone: go check out OpenSSF.
It's a pretty new thing, and I'm excited to see some
of the stuff that comes out of those working groups.
All right.
I don't know if, do you have a few minutes left for rapid-fire questions?
Yeah.
Why is containerd not enforcing rootless by default?
That's a good question.
One of the things is that we don't see ourselves, containerd,
as the arbiters of the default configuration.
So again, if you're using Kubernetes, whoever is providing you
that Kubernetes platform is the one who configures containerd;
we see ourselves usually wrapped in some other tool or some other delivery mechanism.
So, if you think about nerdctl: nerdctl is actually providing rootless
out of the box for containerd, and it's doing that setup.
The other aspect to think about there is that rootless
requires userspace networking capabilities with slirp4netns and
some other packages it depends on, like a FUSE-based userspace snapshotter.
So containerd making that decision for you, and thereby enforcing potential performance
penalties, is not really something I think containerd the project wants to do;
that's more of a runtime configuration at install.
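A hedged sketch of opting in yourself; the setup script ships alongside nerdctl per its rootless docs, and it expects newuidmap/newgidmap and slirp4netns to be installed:

```shell
# Run as your normal user, not root:
containerd-rootless-setuptool.sh install
nerdctl run -d -p 8080:80 nginx:alpine   # everything runs as your unprivileged user
```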
Yeah.
It's like an install decision, not a runtime decision.
I could probably draw an analogy between that and choosing to install
your npm dependencies globally or in the current directory; it's an install-time
decision based on how you want to scope your binaries and stuff.
So yeah, that's a great point. I like that answer.
Next question: is there any plan to integrate containerd with sigstore,
which I'm not aware of, to verify and validate images before running them?
I think sigstore verifies the image signature at K8s administration
control, not at the container runtime. Admission control, sorry.
At admission control, not at runtime.
yeah.
So yeah, sigstore and cosign, these are newer tools in the container signing space.
We've had Red Hat's signature signing model;
we've got Notary from Docker, now a CNCF project.
We've done no integration at the containerd runtime level with signing capabilities.
People that want that capability tend to implement it in Kubernetes admission control.
IBM has an open source one that some people use, and I'm sure OPA and some of the tools
in that sphere let you say: I only use images from this registry, or
they have to have signatures, or they have to be signed by this entity.
So again, all that to say, we have not. I mean, containerd is pluggable,
so if someone wants to write a validation of signatures post-pull, that's possible.
And we've had some very high-level discussions about it, but again, image signing is a
fast-moving space, with cosign and sigstore getting a lot of airplay right now.
Notary's been around for years; Red Hat has had GPG-based
signing for several years now in OpenShift.
I think for those communities, if they want runtime-based
signature validation, that could be a contribution to containerd, but it's not
really something we're implementing ourselves, because there's not a single signing solution today.
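To ground what cosign's flow looks like, a hedged sketch with a placeholder image name (flags per cosign's docs at the time; double-check against current releases):

```shell
cosign generate-key-pair                            # writes cosign.key / cosign.pub
cosign sign --key cosign.key registry.example.com/myapp:1.0
cosign verify --key cosign.pub registry.example.com/myapp:1.0
```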
Man, honestly, I can't wait for that stuff.
'Cause one of the things that always got me excited about Docker itself, and then Swarm
when it came out, was that there were so many security decisions made by default, in a good way:
reduced kernel privileges and AppArmor and all these things by default, and then Swarm
with its default mutual authentication, secure out of the box,
and secure tunnels for the control traffic. Just such a smooth implementation.
It was highly opinionated and very narrowly scoped.
So when you get into the Kubernetes world, just in the latest release
there's all this talk now, finally, about using the additional security
profiles of Linux inside of Kubernetes, which I'm very excited about.
But I'm looking forward so much to the day where user namespaces,
rootless mode, and no sudo privileges are the default.
You just go down the list: signed by default, sort of checkbox
signing through all your tools, so that when you're installing your
nodes, they know this is the only place they can run images from.
And the image registry already knows how to deal with signing.
And just that whole pipeline, it's a ton of tooling.
It's a ton of decisions that have to be made.
I get it, but I'm just very much looking forward to the day where running
a distributed architecture with images from multiple locations has
the same checks and balances that we had with a simple local docker run,
because obviously the scale of all that really magnifies the problem.
But I'm excited to see that space.
And we're going to have yet another talk; maybe I should get with you
after this to find some of the people and players in that industry and get
them on the show to talk about some of that more security-focused stuff.
All right.
I said rapid fire and I'm not obeying my own rules.
So: is Nexus, Artifactory, or GitLab good for a container registry, or an S3 bucket?
I will just say I have not seen anyone use an S3
bucket in a business that makes money on containers.
They all use a cloud registry.
If they're not using Docker Hub, they're either using
a cloud registry or running their own, like Harbor.
I don't know what OpenShift uses.
I just know Harbor a lot.
And Artifactory, GitLab, Nexus: all great registries.
I'm taking this one on my own instead of asking Phil, but...
Yeah.
If there is a way to use an S3 bucket, I'm pretty sure it's not as easy as all those.
I mean, some of those tools may end up using an S3 bucket for storage of layers or metadata,
but that's combining two different topics, because something has to front it.
And actually, I think you brought up Marcos's comment earlier.
I mean, Jérôme showed off a fun hack where you could actually, you know, front a pull from S3.
As with anything else, it's: how much do you want to assemble something yourself, versus using
a tool that someone's validated is registry distribution API compliant, et cetera?
Like I said, there's plenty to choose from: there's Harbor,
there's cloud registries, there's Artifactory, et cetera.
Kraken, is Kraken one? That's the distributed registry, I think.
Sounds familiar.
Yeah.
Anyway, there are lots of options and I recommend all of
them, and I think probably all of them have an S3 driver.
So they're storing stuff in S3; they're just wrapping the front with the registry API.
Otherwise your client tool has to be able to translate and make assumptions
about S3 and where you're putting things in S3.
Yeah.
So yeah: don't do S3 directly.
I don't recommend it.
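To show what "wrapping the front with the registry API" means in practice, here's a minimal sketch of a CNCF Distribution registry config fronting S3; the bucket and region are placeholders, and credentials would come from your environment or IAM:

```yaml
version: 0.1
storage:
  s3:
    region: us-east-1
    bucket: my-registry-blobs
http:
  addr: :5000
```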
What resources (docs, talks, books, et cetera) would you
recommend for someone trying to get started with containerd?
We talked a while ago about the KubeCon updates from the containerd team.
So if you search YouTube for KubeCon and containerd, you'll find years' worth of intros and deep dives.
I'd say all those are good, because they give you in 30 minutes what it is.
Yeah.
I'll be in some of those; Akihiro, Derek, Michael Crosby, others too.
So that's a pretty good starting point.
Documentation-wise, we've actually been looking to have more
people try and help us improve containerd's documentation.
Our website is really light on docs, as you can see.
And so...
Go, people! You want to get involved in open source? You just heard an ask.
So yeah, we would love people to get more involved in that aspect.
Yeah, and I like the videos.
But also remember that for most people it's not necessary to know
containerd, because there's usually something else, like Kubernetes,
in front of it that's controlling it for you.
In some cases, maybe you do need to know how to install it so that Kubernetes can
take advantage of it, but that just depends on your distribution and how you're using Kubernetes.
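For the cases where you do install it yourself, a rough sketch on a Debian/Ubuntu node; the package name and sed one-liner are assumptions to check against your distro and the Kubernetes docs:

```shell
sudo apt-get install -y containerd
containerd config default | sudo tee /etc/containerd/config.toml
# Kubernetes docs recommend the systemd cgroup driver for the runc runtime:
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
```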
Alright, next one: does the host and container runtime combination matter somehow? Would it make any
compatibility differences with environments like CentOS with CRI-O versus Debian with containerd?
I dunno, I guess the word compatibility was in there,
so this is a good time to say: the great news today is
we're using those OCI standards for runtime and image distribution, such
that you can build an image with docker build and run it on CRI-O.
You can run it on containerd.
You could run it in a Kubernetes cluster that has some nodes (I don't know why
you'd do this) using CRI-O as the CRI runtime and some using containerd.
So mixing and matching distros and engines
thankfully is not a compatibility issue, because of the OCI standards.
And you can use Kaniko or ko or BuildKit; you can build a hundred
different ways, and all the various engines will run that image.
So yeah, I assume that's what the question was asking about.
Yeah, he said, thanks.
That person said thanks very much for your answer.
Yeah.
And yeah, I agree with that.
We're going to do the last question, from Marcos: do you see serverless,
Firecracker, and containers going through parallel tracks in the short to mid term?
Maybe Marcos will have to tweet at me to make sure I understand the question.
So, I don't know if listeners know this: Firecracker is a Rust-based virtual
machine monitor for lightweight VMs. People may have heard of Kata Containers, or gVisor,
which is not the same thing because it's actually a userspace kernel; but for anyone
who's read or talked about sandboxes, Linux kernel based containers are
one way to think about isolating a process.
Lightweight virtualization is another; Intel started this discussion years ago
with Clear Containers, and Kata came out of that.
And now AWS has brought out Firecracker, and some of
the Fargate infrastructure is built on Firecracker.
AWS has been public that Lambda also uses Firecracker as the isolator of choice.
So Firecracker has both container tie-ins as well as pure serverless ones.
That's where I'm not entirely sure about the question.
I guess what I'm saying is there's crossover in those spaces; people are thinking
about and solving similar problems with lightweight virtualization, both for containers
and also for process-based serverless: go run this task for me, go run this function.
So yeah, I think Firecracker and Kata and others in that space definitely have some
proponents, but it's another place where it seems like we're very
early in how it all plays out and how we end up integrating things like that
into services: pure serverless function offerings and container offerings.
I think there's still a lot of maturing that will happen in that space.
Yeah, I have not had a chance to use Firecracker myself,
but I've been very interested in it and how it plays out.
Yeah, all the ideas around running VMs and containers
side by side, and just k8s'ing everything: k8s the world.
This has been a great conversation.
I wish we could talk for two more hours; this would be the
epic long show, and we would go down tons of rabbit holes.
But I'm not going to do that to you.
And I want to say thank you so much.
I'm excited that you're at AWS.
Maybe in the future, once you've had some time there, we'll have something to talk about
regarding the stuff AWS is doing in open source, 'cause that's also a topic I would love
to get out there: just talking about some of the efforts AWS has in that space.
And the fact that you get to focus more on containerd is also exciting,
'cause I know that's what all our topics have been about for five years now.
And maybe someday we're going to have a show where we bring the whole crew back:
we'll have Nirmal and all the other Captains that are hanging out, and do something
like that. It would be a lot of fun to have Marcos and everybody in the same room.
Thank you so much, Phil, for being on the show again.
You can find him on Twitter and follow him for containerd updates, stuff from
that team, and all the other exciting stuff happening in the container world at AWS.
Awesome.
Thanks for having me, Bret.
See you.
Thanks so much for listening and I'll see you in the next episode.