Rancher Labs and Kubernetes with Darren Shepherd

Darren Shepherd, co-founder of Rancher Labs, joins Bret to talk about Rancher's latest projects and all things Kubernetes.

You're listening to DevOps and Docker
Talk, and I'm your host, Bret Fisher.

These are edited audio only versions
of my YouTube Live show that you

can join every Thursday bret.live.

This podcast is sponsored
by my Patreon members.

I'd like to thank all the paid
supporters that make this show possible.

You can get more info and follow my
updates on all the content and open source

I'm creating at patreon.com/bretfisher.

And as a reminder, all the links for this show, the topics we discuss, as well as the links I've already mentioned, are available on the podcast website at podcast.bretfisher.com.

Bret: This week is another show from our
2020 archive that sadly we should have

totally launched the month we had it.

So it was recorded around this time in the autumn of 2020. Basically a year ago, I had Darren Shepherd, the CTO and co-founder of Rancher Labs, on the show for our second, now annual, catch up on everything that Rancher Labs is doing.

Now on this show, if you think of the context of what it was a year ago, we were talking about the recent Docker announcements about Docker Hub limits, which of course now in 2021 we've all moved on beyond.

And Docker's made a bunch of announcements about new features and paid Docker Desktop and all sorts of other stuff.

But a year ago we were catching up with Darren on all the projects they're working on, including RKE and Rancher and k3s, and all the other stuff they're working on like Rio. And there's just a bunch that we talk about.

So I think this show is still really great. A year later, I listened to the whole thing and I think it's super great, what Rancher is doing in the space. And if you're at all new to Kubernetes, we talk about the different Kubernetes distributions, how Rancher plays with that, and where it's best to use Rancher: the Rancher GUI that we all just call Rancher, as well as the Rancher Kubernetes distribution, which are technically different things, and where Rio fits.

And then we get into a great discussion of the gap between Docker Compose, which I've talked about in other live shows recently here in 2021, and how it's getting a sort of reboot, a whole new life and a bunch of new features. And so Darren and I talk about what an excellent product Compose is, how there's a gap between it and Kubernetes, and some of the ideas they have for solving that.

So it's sort of a great retrospective on where we were a year ago in comparison to where we are now. So enjoy this year-old but still extremely relevant conversation with Darren Shepherd, the CTO and co-founder of Rancher Labs.

Thank you so much for being here, Darren.

Darren: Yeah.

Thanks.

I'm excited to be here again,

Bret: This is like our annual thing in the fall. It's like the tradition now. Since we have you for the second time in two years, might as well say it is a tradition.

So Darren has been on the show before and we basically run through all of Rancher Labs' projects and products, and it's a great rundown. Of course, being the co-founder and often the person who came up with the idea in the first place, Darren is a great person to talk to.

So I'm glad to have you.

I have some questions of my own that I need answers to that I'm hoping Darren can shed some light on. You can also, by the way, follow him on Twitter. He's constantly tweeting, and it's always good stuff.

Darren: For instance, like
I had one complaint about a

Bret: lot of stuff.

Yeah, that's true.

Well, and I like your honesty.

Your complaining isn't just complaining for complaining's sake, though. You complain about legitimate things.

You had a tweet today actually that I
wanted to ask you about, cause like we're

going to go random on the topics today.

"Aware of a Docker registry service that supports assigning a custom domain?" I'm not sure what you meant by that, but it was an interesting one.

Darren: Yeah, well, because we're looking for where to host our images. Because of the recent changes with Docker Hub and the limiting and whatnot, we're running into a significant problem: our users, our open-source users, are all getting rate limited and we haven't been able to find a decent solution.

We're looking at our options.

So one of the things is we would like to have our own domain, like a Rancher registry or something like that. But if I go with, like, ECR or GCR, can I basically white label it? Cause we don't want to run it ourselves.

But what we're finding out, and this is actually a really tricky one, is the amount of data that we're serving up in images today is absolutely ridiculous. We had no idea the amount of bandwidth that is coming out of Docker Hub today.

Yeah.

Bret: Because they don't, they
haven't exposed that before.

Yeah.

They haven't given us
those tools to see it.

So you, yeah.

Darren: I guess. We've been working with Docker directly in trying to figure out a solution of what makes sense.

We have no problem with paying
for anything, but you know,

whatever solution makes sense.

So that tweet was just cause I'm legitimately looking for what are the best ways for people to run registries, and also where the bandwidth cost won't kill me.

Right.

Bret: Right.

Well, that's a discussion that we've been continually having with the Captains group too, because a lot of us support a lot of, I mean, all of us got to be Docker Captains because we were largely focused on open source, and Docker's original engine was open source and that was the focus. And we were all very concerned as well about the announcements earlier this year.

And we were confused by them as well. I feel like Docker is trying to do the right thing, but obviously it's a very expensive thing. So we all sympathize with them, cause it's super expensive and, you know, people are paying for it. But yeah, it's tough: how do you satisfy all these needs of open source?

It's a

Darren: tough problem.

Yeah.

And like, I totally
understand what they're doing.

And I mean, because it's like, they've
basically been giving away free storage

and free bandwidth for quite a while now.

And we all knew that had to end at some point. So yeah, we're in kind of a little bit of a strange situation. I think our situation might be a bit more unique, because we are an open source project.

Yeah.

So there's

Bret: Not an official, yeah. You're not an official image, but you're a big open source provider, and

Darren: Yeah. The amount of content and bandwidth we're pushing is really large, but we also don't have a specific customer base.

Well, I mean, we have our customers,
but our users are, 10 times a hundred

times more than our customers.

So yeah.

So it's hard to say like, well, everyone
just use a Docker account and yeah.

So it's been tricky.

Yeah,

Bret: That's true. And for those of you listening who are like, what are they talking about with this rate limiting stuff: Docker Hub is putting in new rate limits and other limits to basically ease us into the idea that a lot of us are gonna need to pay $5 a person or whatever for some things, so that we're not doing unlimited hosting and unlimited storage and unlimited bandwidth. Their website and their blog cover it, so just go to blog.docker.com, which will actually have a bunch of those blog posts.
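For anyone who wants to see where they stand, Docker's blog posts describe a way to read your current pull allowance from the registry's rate-limit headers. A rough sketch of that check (the ratelimitpreview/test repository is the one Docker documented for this purpose; requires curl and jq):

```bash
# Fetch an anonymous pull token for the repository Docker documented for rate-limit checks.
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" \
  | jq -r .token)

# A HEAD request against the manifest returns ratelimit-limit / ratelimit-remaining
# headers (see Docker's docs for whether the check itself counts toward the limit).
curl -sI -H "Authorization: Bearer $TOKEN" \
  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
  | grep -i ratelimit
```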

If you're interested in this
show, we've talked about it a lot.

So if you go back and just search Docker
Hub at bret.live that'll take you to

YouTube, and then you can search on the
previous shows and we've probably had

three or four shows just talking about
this for like 50 minutes or something.

But I was interested in that tweet. Anyway, random topic.

Before we get to the questions, let's start off with your main product, Rancher. So Rancher Labs is the company and Rancher is one of the products. What is Rancher and how does it relate to Kubernetes?
I guess?

Cause people often ask me like
Rancher versus Kubernetes and

that's like a weird question.

Yeah.

Darren: Yeah.

So fundamentally, what the product is, well, I want to clarify: everything we do, everything, is completely open source. It's free to download. The open source bits are the same ones we give to our customers. So it's freely available. But our main product, the one we sell support for, is basically defined as a multi cluster manager.

So it's going to help you
deploy and manage Kubernetes.

And as for what that Kubernetes is, we're not very particular about what flavor of Kubernetes you get. That's what differentiates Rancher. We really view Kubernetes as a commodity. So maybe you're getting EKS or GKE or AKS, or you want to set it up yourself through kubeadm, or you use one of our distributions: RKE is our distribution for enterprise, k3s is our distribution for edge.

So Rancher is going to help you deploy those distributions or manage them, do the security setup, GitOps, deploy applications. The way we view it is, we want Rancher to be your portal to the Kubernetes world. We're the entry point. If you're just starting off today and you want to deploy a cluster, Rancher is really great: spin up Rancher, it makes it really easy to deploy a cluster, and you can get running.

If you're a large enterprise or
you're more mature, Rancher will

scale up to those use cases too.

And yeah.

Bret: Yeah.

I like it.

That's a really you're
well-practiced at that.

All right.

So related to that: usually when I get someone that comes in, they're asking, okay, Rancher is this thing that I can use to deploy Kubernetes, and it's one of the ones I recommend to get started with, especially versus trying to do vanilla Kubernetes, as we call it. For those out there: the Kubernetes ecosystem has lots of different distributions, and you can do just the pure open source upstream, but to me that's a little bit like building your own Linux kernel. It's really only for the people that know what they're doing. And so in my courses, I talk about Rancher as one of the ways for deploying on a server, and it makes a lot of these decisions for you and puts it together. But sometimes when people are getting started, they hear about something else called RKE. How is Rancher different than that? Or how are they related?

Darren: Yeah, so RKE is our distribution of Kubernetes. And RKE is going to be very similar to, I mean, any other distribution that somebody offers, like EKS or AKS. So where does RKE fit? Cause the way we view it, as I mentioned, we kind of view Kubernetes as a commodity; you can get it anywhere.

So you should really be running the Kubernetes version that is best for your environment. So if you're on Amazon, EKS is probably going to be the best thing. But if you're on premise, running your own servers, you need some options, you need an answer there, and that's where RKE comes in: we give you a distribution that's going to be good for the data center, on prem. And then also, very similar to that, we have k3s, which is another distribution. So RKE is just one component in the bigger picture.

It is not required, it's optional. A lot of people like RKE, we have a lot of people using it. But we don't try to fight and say RKE is the best distribution ever, because, well, we just give you one because you need one, and if you get one some other way, that's fine. Our goal is really to just help you manage Kubernetes, yeah.

Right,

Bret: right.

I like it.

And, back to the Rancher part, you just recently had the 2.5 release. And I got a lot, yeah, a lot of emails about that asking me to read stuff that I didn't read. So do you have a summary of what 2.5 did for us here in

Darren: 2020.

Yeah.

So 2.5 was a major transition for us. It's leading us into probably what will be a 3.0 release. There's a lot that actually changed under the hood, but it's really solidifying the story that we've had in the company from the very beginning of, like, computing everywhere. We really want to enable you to be able to deploy applications anywhere. It doesn't really matter where it is; you can have a consistent approach whether it's on premise, cloud, edge, whatever.

So our 2.5 release strengthens our ability there: we have better integration now with k3s, expanding into the edge. We have better integration with EKS, so you can manage the Amazon cloud better. But one of the major features that I personally worked on, that I really care a lot about, is the integration of Fleet. Fleet is our GitOps at scale solution. This is a GitOps solution that will scale up to a million clusters. It really came out of our work in the edge space, but it's been directly integrated into Rancher. So now when you install Rancher, you have a built-in GitOps solution that works great for one cluster, works great for a hundred clusters, and also works great for a million.

Bret: Right. Fleet. Did that get released this year? That was a 2020 thing, right?

Yeah.

Darren: Yeah.

It was a 2020 thing.

So we announced the
project earlier this year.

I think it was around like
maybe March or something.

So we announced that, and when it first started, it actually had no direct integration with Git. I just gave a talk on this at this edge conference we did yesterday with Microsoft and Amazon.

I went into great detail on the architecture and whatnot, because the key thing about Fleet is the multi cluster capability: how to deploy to a lot of clusters, how to manage the configuration, and how to do that at scale.

And so Fleet is a very powerful multi cluster management engine. Then you layer Git on top of it and it becomes an excellent GitOps tool. But Fleet is a project you can just use directly by itself, and then it's really nicely integrated into Rancher if you want a GUI. You have to register clusters with Fleet, so all the clusters under management in Rancher automatically get registered with Fleet. You can just add a Git repo and it starts deploying software through that. So you can use it by itself, but, you know, it's even better with Rancher.
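As a concrete sketch of "just add a Git repo": Fleet watches a GitRepo custom resource. The field names below are from the fleet.cattle.io v1alpha1 API as documented around this time, and the repo URL is hypothetical, so double-check against current Fleet docs before using it.

```bash
kubectl apply -f - <<'EOF'
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: my-app
  namespace: fleet-default          # namespace Rancher uses for downstream clusters
spec:
  repo: https://example.com/my-org/my-app-config   # hypothetical repo
  branch: main
  paths:
    - manifests                     # deploy everything under this path
  targets:
    - clusterSelector: {}           # empty selector targets all registered clusters
EOF
```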

Bret: Yeah.

Nice.

And anytime... I mean, GitOps is starting to be a buzzword, a little bit like DevOps, where, what does it mean?

Darren: Right.

So yeah,

Bret: No, go ahead. I was gonna say, like, how do you define GitOps, or what does Fleet do in a GitOps way, necessarily?
Yeah.

Yeah,

Darren: Because, I mean, it is a bit of a buzzword or whatever. But basically, when I talk about GitOps, I'm specifically talking about the Kubernetes flow: everything you define in Kubernetes is a manifest, it's a desired state. So you just have to do kubectl apply and that becomes something. So it's a very logical, simple thing: well, what if I just put those manifests in Git, and when I change it, when I update it in Git, you just automatically apply it to the cluster. So GitOps with Kubernetes is a no brainer. It's like, oh, it's so simple.
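Stripped to its essentials, the flow Darren is describing looks something like this (a minimal sketch with a hypothetical repo, not how Fleet or Flux are actually implemented):

```bash
# Desired state lives in Git as plain Kubernetes manifests.
git clone https://example.com/my-org/cluster-config.git   # hypothetical repo
cd cluster-config

# Applying the repo contents reconciles the cluster toward whatever Git says;
# re-running after every change (or on a loop) is the whole idea.
kubectl apply -f manifests/
```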

And that's why you see projects like Flux, which are really good. It's a very simple approach and it works really well. The complexity is just how you do this at scale. But the basic concepts are so simple.

And so when I talk about GitOps, that's all I really care about there. And there's definitely a lot of problems with GitOps too. There's just never a silver bullet for all these things, but it has a lot of benefits, that's clear. I mean, I'll tell you some of the downsides, things we tried to fix with Fleet.

As people scale up with GitOps, well, they thought they would have all this centralized control and everything, but they end up with a million Git repos and different branches and people doing PRs, and they're just like, well, what the heck's going on? What's the real state of my environment? And that was one of the driving factors behind Fleet, where we're like, well, I can't just hook every cluster directly to a Git repo, because I don't have enough centralized control and visibility. Especially in an enterprise: how do I know what the heck's going on?

So Fleet works as kind of a centralized GitOps manager. You can go to one place and get visibility of everything, and then also control, like RBAC and stuff, if you're in there.

Right.

Bret: So does that replace
something like Argo or flux?

Darren: Yeah, well, Flux is fundamentally not multi cluster. I mean, maybe the newer Flux stuff; they're doing a lot with the GitOps Toolkit and maybe some of that's moving towards multi cluster, but Flux v1 was not multi cluster. Argo is multi cluster, and I know a lot of our users and customers are very successful with it. Fleet plays in the same space, so it overlaps with it, maybe competes with it to a certain degree. I don't really like to compete with other open source projects, but the architecture of Argo did not work to get to the scale we needed.

Argo is a push model, and we know firsthand the complexities of that because Rancher 2 is actually written as a push model, and that will currently scale to a couple thousand clusters. Whereas with Fleet, we needed to get to a million. So we had to flip it: the architecture of Fleet is really what we call a two-stage pull architecture.

Bret: No, I never thought about that, with Argo being push. I had Viktor Farcic on two or three weeks ago, something like that, and our whole show was about Argo CD and GitOps. So for those of you watching, again, bret.live takes you to YouTube. You should go back in the videos a couple of weeks and you'll see a guy named Viktor, and we're talking about Argo. So if you wanna get more into GitOps, which is my favorite thing of 2020, GitOps-ing all the things in Kubernetes.

Sadly, for those of you that are Docker and Swarm fans, I don't have any tools yet that anyone has really created that automate that, although it wouldn't be terribly hard; I just haven't seen it. So maybe you could be the one, maybe you on the internet, you could be the one to open source it.

Not necessarily Darren, because he's probably already got a list of 50 things that he needs to make into open source projects, because he's constantly coming up with new stuff. We talked about that last time, actually, if I remember the question, where I was saying something like, what's your backlog, the list of things where you've said, I'm just gonna make a little personal project, and then it becomes a product that everybody wants. So what's the next one? What's on your list that you haven't gotten to yet because you don't have any time?

Hey, time to work

Darren: on our stuff.

I mean, there's so many things. Cause basically, if you follow me on Twitter, I complain about everything. But the way I kind of view it is, I don't like to just complain for complaining's sake. I legitimately want to solve the problems too. So if I'm complaining about something, it's most likely cause I'm also working on it, or I'm using it and I'm trying to make it better. And I just have a massive backlog of ideas or things or whatever.

The one thing that's been fun as Rancher has scaled as a company, we're up to, I don't know, I think we're approaching 300 people, it's like 250 or something like that. Yeah, it's gotten big. I think we're getting better at, like, I can basically prototype and come up with an idea and then hand it over to a team who can actually support it, because it's the maintenance of software that kills you.

I mean,

Bret: Even just one project. Like, how many projects in open source on GitHub have thousands of users and there's one person, more or less, with a couple of PRs?

Darren: You know? Yeah. Well, I know. And then it's like, oh, there's 300 contributors. And then you go to the Insights tab and you go, oh, those were all doc commits. There's only two guys working on it.

Bret: Yeah.

99.9% of all commits are from one person, and then one commit per person for the other 299 people or whatever. Yeah. I mean, that's tough.

I mean, open source is great, but for those of you out there that have ever made any projects in open source, I'm sure you sympathize with us: just be nice. Like, just the rule

Darren: is yes, please.

Because the reason

Bret: we haven't replied to your thing or fixed your thing isn't because we're ignoring you. It's because there are a thousand other things that we're also trying to do, and it's, yeah.

And

Darren: it goes to, it goes both ways too,
because it's like, as when I first got

into the industry, I was a Docker user.

And so I was the really annoying Docker
user complaining about everything.

And then we ended up creating
Rancher and now we have a

decent following or whatever.

And so now I have users complaining at me.

And so now, being in this position, I also realize, as a maintainer or whatever, it's important to be nice, just give everyone the benefit of the doubt. There's no reason to get upset.

Bret: That's generally the rule on the internet, but especially in open source on GitHub. And I think about it every morning, cause I myself, with my courses, have been lucky enough to have like five or six repos with some thousands of stars or something. And none of it's really that useful to people other than to learn, or to use the tooling to learn different parts of Swarm and Kubernetes.

And so luckily they're not, there's
no production stuff in there.

Right.

But I probably wake up every morning and think about that one repo that someone asked about yesterday, that I haven't done anything with in three months, and that I should do something about. So for those of you out there that have ever asked: we're probably thinking a lot more about your stuff than you realize. It's just a matter of managing all the incoming.

Darren: so yeah, there's a lot
of great questions coming in.

I know we're going to
get to some of these.

Yeah.

Bret: All right, so I'm just gonna
start doing some rapid fire for you.

We're looking to move our Azure Docker Enterprise Swarm to either Docker Kubernetes, so Mirantis Kubernetes, or to AKS, but we want great monitoring and management. How would Rancher apply to either change?

Darren: Okay.

Docker Kubernetes. Oh, with Mirantis.

Bret: I think he means the Docker Enterprise that's Mirantis now, or AKS. How would Rancher relate to that?

Darren: Yeah.

Yeah.

So if your strategy is, like, you're in Azure and you're staying there, you're happy with that cloud or whatever, I'm always going to say go with the cloud distribution. You're by far going to get the best experience; it's going to be integrated the best with all their services and whatnot. So I would definitely say if you're on Azure, then AKS is going to be your best option. Still, check out Rancher to also manage it, because we add a lot of functionality on top. But in terms of distributions, yeah, in the cloud you really can't beat what AKS, EKS, GKE are doing.

Bret: Yeah.

All right.

But could Rancher manage the AKS?

Darren: Yeah. And where Rancher would add value to that is, we will help you manage all of your clusters. We actually partner a lot with Microsoft and have a lot of really successful customers and deployments on Azure.

And so what Rancher is gonna help you
do is like, come up with a strategy

of how you manage multiple clusters,
how you do authentication and

authorization across those clusters.

So let's say what you can do is you
can create an automation flow, like

GitOps or Terraform or whatever, which
is then interacting with Rancher,

which is then deploying the clusters.

And then Rancher will add on
top of it, like monitoring.

Also, if you want to go with like
more Prometheus oriented, I mean,

Azure is going to give you their
kind of integrated monitoring.

But if you want to go with kind of Prometheus and that approach, which is quite nice because it's all container native and all that stuff.

So it's like Rancher
will help you with that.

Yeah.

The way we look at it is like, you
know, what kind of, once you get

the Kubernetes base in there, now
you can just start dealing with

kind of a pure Kubernetes world.

And then Rancher is going to help you with that, so we can hook up the GitOps pipelines and things like that.

Bret: Okay.

We've got a question on, I don't
even know how to pronounce this,

I guess this is the question.

Kyverno versus OPA.

Oh, wow.

Darren: Yeah.

Bret: Goodness.

So let's get some skills out there.

Someone knows what they're talking about.

Darren: Yeah.

Serious.

Yeah, so I think it's pronounced Kyverno; I'm going to go with that pronunciation. That's the great thing about open source: you never know how to say anything. So I'm actually a big fan of Kyverno.

OPA has got the mindshare; it very quickly became the de facto solution of, oh hey, OPA is the answer for everything. OPA is nice because it's a policy engine that's going to work for anything.

And then you have Gatekeeper, which is going to be the Kubernetes specific integration. So if you're looking for a solution for enterprise, for policy authoring and enforcement across anything, OPA is your way to go. If you're focused more on Kubernetes and you just care about Kubernetes, Kyverno is significantly easier to use. I very much like it. So it just depends on your approach.

Cause to me, the biggest downside of OPA is Rego. I don't know how you say it, but it's their policy language. It's derived from Datalog, which is derived from Prolog, so it's a different way of viewing everything. It's more of a query language, querying and evaluating data. So it's a little hard to grasp at first.
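To make the contrast concrete: where Rego is its own query language, a Kyverno policy is just Kubernetes YAML. A minimal sketch of a validation policy requiring a team label on Pods, using the kyverno.io/v1 schema roughly as it looked around this time (verify the exact fields against current Kyverno docs):

```bash
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: enforce   # reject non-compliant resources (audit mode also exists)
  rules:
    - name: check-team-label
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Pods must carry a team label."
        pattern:
          metadata:
            labels:
              team: "?*"             # any non-empty value
EOF
```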

So I'm a big fan of Kyverno. I don't think at this point in the policy space anybody should be making a large bet of just saying, this is the only solution going forward.

Yeah.

So I honestly would say we'll
just start with Kyverno.

And if that doesn't, if you need to extend past it, then try

Bret: that. The challenge with any of these tools, I think, right? Like in the container world, I think the only thing we've standardized on is the Dockerfile. Like, the image format is the image format.

Darren: Yeah.

The longest-lasting thing out of all of this is going to be the OCI image spec, basically. Yeah. That was a great one.

Bret: Yeah.

I mean, I know I have tried neither of those. I've only dipped my toe into admission controllers, into admission control.

I'm always curious about
other people's experiences.

And so that's not to keep plugging
that discord chat, but there's a

lot of smart people in there that
keep asking questions and they're

like, what do you think of this?

And I'm like, I don't have
time for all of the things.

So maybe I need to have Darren on the show more to handle these Kubernetes questions. That's why I ask everybody's opinion on there, like, what do you guys think? And I go jump into the Docker Captains room, which is always a great Slack to be in, cause there's a lot of experience in that room as well; they're mostly senior DevOpsy sysadmin people.

Yeah.

Personal Docker registry equals Nexus. Is that, versus Nexus?

Darren: I think they're probably saying that's a good option. I mean, Nexus is a decent option for a Docker registry. I'll say, for our use case, I don't have as much experience with Nexus. I've seen a lot more of, like, JFrog Artifactory being deployed. That's probably the one I see the most if you're not going with a hosted cloud vendor or whatever.

Bret: Because that's paid, right? As a

Darren: Paid, yeah. It's pricey. And our biggest thing is running the operations of that at the scale that we need. I mean, we're pushing over a petabyte of data a month in terms of our images.

Bret: Yeah.

I mean, starting in 2017, when I started doing DockerCon talks about production and how to get to production faster, project planning, basically I would have these couple of slides on things you should outsource: do not run your own registry. If you can avoid it in any way, avoid it. It's an HTTP storage system. It's boring. Why do you want someone constantly managing garbage collection of images and bandwidth problems and storage problems? That's just,

Darren: Yeah, it's a REST API on top of S3. I mean, it's just another object storage; we just called it a different thing. So no, please don't run them yourself. If you can just get it as a service, that's much better.

Bret: Like, how can we measure production grade Rancher k3s?

Darren: So yeah, the beauty of k3s is it is still just Kubernetes. We did a lot to make it small and easy and whatnot, but it's still Kubernetes. So how you're going to monitor and manage it is really no different than Kubernetes in general. And because k3s has actually gotten so popular, we're now seeing first-class support for it from ecosystem partners. I think Sysdig now officially supports it. So you can run all those same tools; just run those and you'll get a Kubernetes cluster. Prometheus, Datadog, Sysdig, all the regular solutions, basically however you want to monitor it. So if you hook up k3s to Rancher, you can deploy, you know, Prometheus through Rancher and then you get the normal monitoring there.

Yeah.
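Since k3s is conformant Kubernetes, the stock monitoring tooling installs the same way it does anywhere else. A sketch using the community kube-prometheus-stack Helm chart (chart and repo names as published by the prometheus-community project; the release and namespace names are just examples):

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Installs Prometheus, Alertmanager, and Grafana with Kubernetes-aware defaults.
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```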

Bret: Yeah.

Compliant, or what is it, conformant?

Darren: Conformant. Yeah.

Bret: There were a couple of comments about GitHub running a Docker registry, but to my knowledge, GitHub still requires auth on all of its registry stuff. And we're all doing this open source stuff; the challenge is doing it without auth.

Darren: There's a question here: how did you come up with the idea of replacing etcd with SQLite? That's a super technical one, but that's a fun story.

Bret: Yeah.

I've been watching a little
bit of the drama as it unfolds.

Darren: Cause, ever since... so basically Rancher got into managing Kubernetes basically as soon as Kubernetes started getting users. And our biggest hassle of managing Kubernetes has always been, and still basically is, etcd, because it's a persistent system. Not to harp too much: etcd in the earlier days actually had some reliability issues or whatever; these days it's pretty rock solid.

But fundamentally, at the end of the day, it's a persistent system. And so from the very beginning we were like, well, why do I need etcd? Why can't I run on top of something else? Cause our customers don't want to run a persistent system; they would much rather get it as a service.

And so the motivation for swapping out
etcd with something else was always

that I'd like to be able to run it on
RDS because I can just pay for RDS and

I don't have to worry about operations
or if I'm in an enterprise I have a

database team and they just do it for
me, so I don't have to worry about it.

So we actually started this work, I mean, like five years ago. We contracted Glider Labs to develop a prototype of replacing etcd, you know, doing the shim of getting etcd to run on top of a relational database. And we basically proved that the idea was feasible, but the timing wasn't quite right.

So it was this pet project of mine after that for many years, where I just kept hacking away at a database solution for Kubernetes. And it wasn't until k3s came along that we found a perfect use case for it, because we could swap out etcd with SQLite, and then it made Kubernetes much more lightweight, less intensive on the disk, much better for, you know, edge solutions.
And so that's where it came from.

It's like people actually
don't realize that.

Really the amount of effort
that went into creating that.

And so that code is actually it's
in a project called kine, which is

a project that I really would like.

It's part of k3s you know,
k3s is a CNCF project now.

So that project is
hosted with k3s in CNCF.

But that project, I would like to see
graduate, like move upstream to be a

Kubernetes project, because you can
run Kubernetes on top of any database.

And I had mentioned Fleet; we only talked a little bit about Fleet and how you scale Fleet to a million clusters. The only way to do that is you have to run Kubernetes on top of a relational database, because etcd as the data store just won't hold up.

So this is a super technical component, something I've worked on for many years. I'm actually pretty darn proud that it works. And I do see,

as the cloud distributions like EKS and those things become more commonplace, that all of the cloud vendors are basically gonna move away from running etcd, cause they're going to run something different. It's just too costly to run etcd. We haven't actually done it yet, but it's theoretically possible to add a driver to kine to support something like DynamoDB, and then it would be ridiculously cheap to run Kubernetes.
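In practice, this is what the kine shim looks like from the k3s side: the default install uses embedded SQLite, and the --datastore-endpoint flag points the same cluster at an external SQL database instead of etcd. The connection string below is hypothetical; the supported formats are in the k3s datastore documentation.

```bash
# Default: single-node k3s backed by an embedded SQLite file (no etcd at all).
curl -sfL https://get.k3s.io | sh -

# Or point the Kubernetes API at an external MySQL-compatible database (for example RDS);
# kine translates the etcd API onto SQL under the hood.
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://k3s:password@tcp(db.example.internal:3306)/kubernetes"
```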

Bret: Yeah.

That's a really good one.

And I'm going to have to
like make a clip of that.

Yeah.

That'll be its own YouTube clip: etcd and SQLite. Yeah. And all the SQLs.
Yeah.

Talking about that.

All right, so there's a conversation about Darren's argument against GitHub registry, and sfxworks pointed out that GitHub recently allowed for public pulls to require no authentication, which I did not know. Yeah, that's really...

Darren: There are some issues with GitHub registry. I mean, actually, I think it's great. That probably should be your default option. One of the legs up that GitHub has is that you're already controlling, you know, teams and authentication and authorization and roles and all that stuff. And so integrating GitHub registry, I mean, your Docker registry, into GitHub, I think, is a great solution.

I would recommend that for most people
it only just becomes more of a problem

when you're like enterprise and you
care about, oh, you know, where your

data is stored and stuff like that.

But whatever it's Microsoft
they'll, you know, they'll

figure out some answer there too.

So I have nothing actually against it. For our use case, there's two requirements we were looking for: can I put a custom domain on it, which I don't think I can do. And even though it's unlimited use for open-source projects, I think there's still actually a limit; I don't think the bandwidth that we're pushing will actually... But we are actually investigating it; that's one of the options on our plate. Yeah, cause we still have,

Bret: I'm sorry, you were saying you don't think, even though GitHub claims unlimited bandwidth for open source, like if it was a petabyte, they might reach out. Yeah. There's still a

Darren: kind of like an abuse limit, you know? So I don't think it's actually completely unlimited, cause, yeah, I mean, it's somewhat abusive, which I understand, the amount of bandwidth that we're doing.

Yeah.

Yeah,

Bret: For sure. And I think some people, myself even, made this mistake probably years ago when I didn't quite understand the difference, between containers as an artifact and that concept, when I was first getting started. But there's a huge difference between source code storage, Git pulls and Git pushes and stuff, and container image registry storage. I mean, images are many factors more storage and bandwidth, in general, than code.

People sometimes think, GitHub allows all this for free, why can't all this be free? And it's like, well, it's actually quite different to store this stuff. That's why all these companies, like npm, had to go make an enterprise model, because npm as a free thing is quite expensive, I'm sure. But that's still just open source code; there's not a lot of compiled binaries in there, even though technically I think you can do it.

Darren: Yeah.

And that's why, you know, it makes sense: the only place I think right now you can actually get free unlimited downloads is GitHub. But they have the scale of being Microsoft, so they're eating the costs in different ways. But, you know, they'll have to offset that somehow.
Yeah.

And

Bret: People are asking questions and talking about auth, and can't everybody just auth. And I think the thing for me is, it isn't that we all can't just auth to go do pulls of images. It's that until now, none of us have had to for anything that was public. So for any open source, none of us have had to do npm auth or Bundler auth or any of these other things for anything other than private repos. And it's a matter of: now everyone and every server and every CI solution and all this stuff has to deal with auth on a thing that we never had to deal with it on.

I don't think it's a matter
of, no one can do it.

It's just, it's a new thing,

Darren: It's a significant problem, because that's introducing auth where people didn't have it before. I mean, our customers, if I have a thousand clusters under management or whatever, that's not a minor thing.

Bret: Right?

Exactly.

Yeah.

There's lots of chances
for that to go wrong.

Yeah.

All right.

Moving on from that.

What are the container
technologies supported by Rancher?

Darren: All of the above. I mean, across Rancher's products and projects and whatnot, we go down as low as RancherOS and k3OS, which are our operating systems, so we're looking at the kernel level. k3s and RKE2, somebody actually had a question about RKE2, those are bundling containerd, so we're at the CRI container runtime level. We have Longhorn, which is storage.
Yeah.

And then there's Rancher, Fleet, Git. So our projects cover most of the space. The only thing we don't do directly is going to be monitoring, logging, those types of things; you use things like Prometheus or Fluent Bit. But we still support them.
There are supported versions you can get through Rancher and whatnot. But as I introduced Rancher in the beginning, we really would like to be your one-stop shop. We are kind of your gateway into the Kubernetes ecosystem and the CNCF universe. So while every project is not a Rancher project, you're going to go through Rancher to get that.

Right.

Bret: Right.

And all of these, or most of these, are all focused around Kubernetes, right? Like all of your products are pulling in that direction.

Darren: Yes. So when we announced Rancher 2.0, which was, I don't know, three or four years ago, I can't remember the timing, that was a very big shift in our company, and we went a hundred percent in on Kubernetes. Before that we were still supporting Docker Swarm and Mesos and those solutions, but we went completely all in.

And that's been a challenge for me personally, because Kubernetes is definitely more difficult than, let's say, what Swarm was doing. But that's where things like k3s came from: our attempts at trying to make this easier for people. Running a k3s cluster is really no different than running a Swarm cluster these days; it's about that easy. Now, deploying your apps on Kubernetes is still harder than, let's say, Swarm, but hopefully these things will get solved over time.

Bret: yeah.

Yeah.

Now that Swarm is continuing to get support, will Rancher bring support

Darren: back? No, absolutely not. We have zero interest in Swarm, unfortunately.

I'm sure you have a good amount of Swarm fans in your audience, and I was always a fan of Swarm and the simplicity and what they did, but we don't really view it as strategic in any way. I actually wasn't even quite sure, so I'm assuming that means Mirantis said they're going to continue... who said they're supporting Swarm?

Yeah.

Bret: Mirantis, yeah. And for those of you watching, if you're a Swarm fan or interested in Swarm, just go back, let me say this a thousand times a day, bret.live, go back in the show list. Earlier this year we had a whole Swarm show where we talked about 2020 updates. But essentially Mirantis right out of the gate said, we're going to give it two years of enterprise support. Cause, you know, they bought 700 Docker Enterprise customers, essentially, when they bought the majority of the company, and as far as I know, close to 90% of those were all running Swarm as their primary orchestrator. They could also have been running Kubernetes, but they were also running Swarm. So Swarm is primary for most of their customers.

And so Mirantis, I think, was doing that to sort of calm the fears, right, of the customers bailing after the purchase, cause they're worried. But since then they have continually restated it.

Oh no.

We're adding now we're
going to add CSI support.

We're going to add jobs.

Jobs is already up in upstream
Moby, but the release cycle and

feature set is still very slow.

It's not like there's a team of 20; as far as I know, it's still like a person and a half, that's the developer team on Swarm. It's not going to in any way catch up or ever be the thing that Kubernetes is.

Right.

So

Darren: Yeah. So the thing with Swarm is, yeah, we had an orchestrator called Cattle that was in Rancher 1, which was very simple, and people were very successful and the people who used it really liked it. And when we came out with Rancher 2, which was all Kubernetes, it was just nowhere near the same simplicity.

And it was painful. It made me sad, like, well, we had delivered this thing that was so easy to use before, and now we've made Kubernetes easier to administrate, but using it is still hard. But I do honestly believe it's the right move for our users, our customers. Yeah, the stuff is a little harder to use, but the ecosystem around Kubernetes is incredible.

And if you continue with Swarm, just make sure you have a good exit strategy, because there's just no ecosystem there.

Yeah, yeah. I was just going to say, for Kubernetes, if you want a really simple solution of, how do I just basically deploy applications: Rio plus k3s. Rio is a project from Rancher, and Rio plus k3s is a super, super easy way to run containers in production. Rio is a layer that we add on top, which is going to give you automatic certificates and load balancing and canary deployments. It will pull apps directly from Git, it does some auto scaling. So if you're really looking for that very simple, you know, almost closer to a Heroku-like experience, but you're running it yourself, that's what Rio is.

Bret: Yeah, you could run Rio without Rancher and just run it on k3s?

Darren: Yeah.

Yeah.

You can just spin it up. Cause Rio is just a Kubernetes project. So you can just launch k3s, k3s is really a no brainer to run, and then run Rio on top of that. Rio is just a CLI; it will basically install and deploy itself, and you can hook up your apps from GitHub, and then it'll deploy and run applications and they'll scale and you get routing. There's load balancing, and Rio is a pretty fun little project.
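For anyone who wants to try the combination Darren describes, the rough shape is below. The k3s install line is the project's standard one-liner; the rio commands reflect the project's README around this time (Rio was still experimental and has since been archived), so treat them as a sketch rather than a current how-to.

```bash
# A one-line single-node Kubernetes cluster.
curl -sfL https://get.k3s.io | sh -

# Grab the rio CLI from the project's GitHub releases, then bootstrap Rio into the cluster.
rio install

# Run a workload; Rio wires up routing, certificates, and scaling around it.
rio run -p 80:8080 nginx
rio ps    # show services and their generated endpoints
```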

Bret: Yeah.

Yeah.

My answer to students and a lot of my consulting clients is, if you haven't made an orchestrator choice, you probably should choose Kubernetes today. When people go through my courses, they're introduced to Docker, Compose, and then registry and some things like that, and Hub, and they go through those basics. And then, to whet their appetite on what orchestration is, because it's already built into Docker, they learn Swarm a little bit; there's a couple of hours of stuff in there.
Yeah.

And then for a lot of people, they just stop there and it's good enough for them. And that's great. I still actually run my own website on Swarm because it still works. My website is just a website and it has a couple of things: I think a Let's Encrypt proxy in front of it, or, I'm sorry, a Traefik proxy running Let's Encrypt in front of it, and then a storage driver from an old project that doesn't even exist anymore, REX-Ray, I think. But that works for me and I haven't changed it in like three years.

But if I did it all again today, you know, if I needed to run three or four websites and I had some nodes on DigitalOcean or something, and I just wanted to have a little hobby setup, I might do k3s or Rio. Does Rio run on top of... I mean, it just requires Kubernetes? Yeah. Any Kubernetes. Yeah.

So

Darren: Yeah, like even DigitalOcean. If you run Rio on top of DigitalOcean, that's a pretty sweet setup, because DigitalOcean is a no brainer to spin up a cluster.

Yeah.

The biggest problem I see in the ecosystem right now is this gap between basically Docker Compose and Kubernetes: there's just not a good path forward, because Docker Compose is so good. It just doesn't get enough credit for what it's done. It's this great tool for people to understand containers and spin up environments and stuff. I know Docker has some solutions, but they really haven't been adopted much for running Compose in production, because it's a super hard problem.

I mean, I've worked with this a lot: Compose is great for developers. It's not great for operations. It's not great for enterprise. So how do you bridge that gap? This is something I've looked at a lot. Rio was our first attempt at trying to bridge that gap; I don't think we actually succeeded there. We are actively working on kind of a Rio 2 where we're trying to address this major gap that I think exists.

Bret: Yeah.

Yeah.

Especially when it comes to... you go from 200 lines of Compose to 2,000 lines of manifests just to do essentially the same thing. You get more bells and whistles and you've probably got more stuff going on there, but you throw in the ops responsibilities and suddenly you have thousands of lines or something instead, and you've separated it from one big Compose file into small little manifests,
Darren: But it's the separation of concerns. Compose gives you, like, oh, I can assemble my application, and it makes sense for the developer. But then how do you add all the production bits? Cause that's what Kubernetes is; all those bells and whistles are all for production, and a developer doesn't care about that at all. And they shouldn't; it's a bunch of details that are just annoying. Right. And so, how do you keep that simplicity but then create effectively an artifact that you can move forward? And yeah.

Yeah.

I mean, I really hope next year, when we do our annual talk, I'll be talking about how we've addressed this problem. This is something that bugs me a lot.
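For listeners who haven't hit this wall yet, here is the shape of the gap being discussed: a service that is a few lines of Compose fans out into several Kubernetes objects, and mechanical converters like kompose only do the translation, none of the production layering Darren wants to generate. A sketch with a hypothetical image name:

```bash
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    image: ghcr.io/example/web:1.0    # hypothetical image
    ports:
      - "80:8080"
EOF

# kompose translates the Compose file into separate Kubernetes manifests
# (a Deployment, a Service, and so on), but adds no ingress, certificates,
# policy, or mesh configuration.
kompose convert -f docker-compose.yml
ls *.yaml
```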

Bret: Yeah.

And it's definitely the next frontier, I think. And it's also challenging because it gets lost in the messaging: people are learning containers, learning Kubernetes, and then this is the kind of problem they have once they're ready to go to production. They fall off this cliff: it's reasonably easy for a developer, and then suddenly, holy crap. And that cliff is a rough one.

And we struggle the same way in the training world too, because we're trying to figure out how to train people on these things. It's like, okay, I'm going to teach you about this, but before you know this, you've got to know this, and before that, you've got to know this. And now you need to know 10 things just to get your app into production.

Darren: Yeah. And it's even more complicated in enterprise, because what's happening is the IT team is like, okay, Kubernetes is the answer. So they come up with their Kubernetes strategy and they're like, okay, now I have Kubernetes, and now I'm going to give it to my development team. And the development team is like, how do I use that? And then people end up building all these custom paths and layers on top of it. So you have problems coming from both sides: people start with Docker Compose and they understand containers, but they don't know how to run it in production. And then in an enterprise it's like, well, we already decided on Kubernetes, and now we're forcing that on you, and yeah.

Have fun with that. As much as Docker was all about the developer, like empowering the developer, Kubernetes unfortunately is all about core IT. That's why it's been so successful: enterprise has adopted it. But to a certain degree, I don't think it's been very good for the developer. It kind of put the ball back in IT's court, you know, in terms of control. Developers before were like, I want Docker and I'm so much more agile with it, and they're pushing it. And then IT is like, wait, hold on, give me two years... Kubernetes! Now you deal with it.

So yeah,

Bret: I'm a huge Compose fan, and with every one of my consulting clients, I almost insist that the first part of the project is we Compose this thing. And it usually ends up with developer happiness, and then we start talking, and then it's a completely different conversation to talk about server clusters and Kubernetes and shifting to the manifests.

And the only thing I haven't been able to come up with, watching over and over again, multiple times a year with these projects, seeing the ebb and flow of that, is: if there was only something in Helm or whatever where basically the developer gives it the compose file. It doesn't necessarily do a translation, cause we've got those converters, but basically, what if it takes that compose and then applies the admission controllers, the ingress, all the other things on top of it, not changing it, you know? It's like, yeah.

I'll tell

Darren: you exactly the project that we're working on, cause this is pretty much what you said: we spun it out of Rio. It's called Dolly right now; it's kinda hard to find because it's in one of our developers' repos. But we're specifically trying to do exactly that: what if I take a compose file and effectively turn it into a Helm chart? So I can do the live development, I can do docker-compose up, I can develop my application, I get all the cool flow of Docker Compose, basically. But then when it goes to deploy, I can basically turn that into a Helm chart. And turning it into a Helm chart, I also then need that layer that you're talking about: how do I inject my production bits, like, how do I add my Istio configuration and my OPA, all the stuff that you need to be compliant for your organization or whatever.

What we saw was that, with Rio specifically, Rio has a runtime aspect to it, and the downside of Rio was: developers like Rio, it resonates with them, but they can't convince core IT, the people running Kubernetes, that you need to run the Rio runtime, because they have to get the operations team to buy into it. And then they're like, oh, well, what is this? Cause Rio comes with a service mesh built into it. And so they're like, oh, it's Istio, and now you need to know Istio and all this other stuff. Or it's Linkerd; it's Linkerd by default, I'm a big Linkerd fan.

But yeah, basically, how do you create, from the developer side, an interface the developers like, but that can turn into basically a standard artifact that can then be handed over the wall to operations? As much as dev and ops are supposed to be together, it still is that way. So how do you create a standard artifact that can move forward, so that the people who are like, I understand Kubernetes and I don't care about your developer world, know how to deal with it? That's the kind of solution we're looking at right now. So anyways,

Bret: yeah.

Like Helm and some other bits to help with that?

Darren: Helm. Helm is one of my favorite projects to hate these days. But I love it, yeah.

I mean, Helm has become... like Fleet: if you look at Fleet internally, Fleet is actually built on top of Helm. Even though you don't have to author Helm charts, what we do is, when we pick up the manifests from Git, we dynamically turn them into a chart and then use Helm to deploy them. So when you look at your cluster, everything is defined as a Helm release. So you have good auditing; you know what everything is.
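One practical upside of everything landing as a Helm release, as Darren describes, is that the cluster carries its own audit trail you can query with the stock helm CLI (the release and namespace names below are hypothetical):

```bash
helm list --all-namespaces                # every release installed, whoever installed it
helm history my-app -n my-namespace       # revision history for one release
helm get values my-app -n my-namespace    # the values it was actually deployed with
```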

There's a lot of advantages to putting things into a Helm chart. The downside of Helm, and what everyone always complains about, is the templating and authoring the charts. And then there's complexities around CRDs and whatnot, but that's all misdirected anger.

Yeah.

Bret: Well, and that's the thing, right? When you do what you just mentioned: a developer learning Docker is one thing, and then learning the Compose format, and then they're like, okay, cool. But then to get it into production, if you're expecting that same developer or team to now learn the manifests, oh, and now you need to learn Helm charting or Kustomize or whatever. And it's a little bit exhausting; there's very few people that I know that can be a good developer and then also try to manage all these other things.

So I think it's totally legitimate
to say that let's keep those

ops focused things in ops.

And it's not that we can't share in it.

This is all infrastructure as code.

Then everyone should be
able to see this stuff.

But it's definitely a hard area that
I think we're going to be seeing more

and more people trying to deal with.

Hopefully, I mean, at some point
it's gonna have to be added to my

courses because as the community grows
and my students mature, they're all

having to deal with these issues too.

And I'm gonna have to come up
with something and that's been

the problem too, with Kubernetes.

It's like, you make a course that people
want me to make a service mesh course.

I'm like, okay.

Which one?

, I can't do a service mesh
course on all of them.

And I can't do a CI course on all the CIs

Darren: like, well, the service mesh
is just as complicated as Kubernetes

if you're going to go with Istio.

But yeah.

So yeah.

Oh I, you the last one anyway.

Anyways, keep going.

Oh,

Bret: okay.

That's okay.

We'll come back to it.

I'm going to, I'm going
to do some rapid fire.

See if we can't k3s up on some of the questions. RKE2?

Darren: so that's a savvy user who knows about RKE2.

Cause that's not something we've announced. In 2.5, we announced something called RKE Government, I think that's what it's called, but it's technically RKE2 in preview. We've tailored it specifically for the government sector, which has higher security requirements.

But what RKE2 is, is the next version of RKE that will be coming out.

Like it'll be more fully supported
in our next version of Rancher.

What would be like a 2.6?

And it's really exciting, because what we've done with RKE2 is we've revamped RKE on, it's almost like the same platform as k3s. It acts and behaves a lot like k3s, it's just as simple to interact with as k3s, but instead of choosing a bunch of lightweight, embedded options the way k3s does, RKE2 comes with your full, integrated, enterprise-grade fatness.

It doesn't use any embedded components; everything's run separately.

But it's super easy to get up and running.

It's really easy to administer so
I'm really excited about that project.

And it's consolidating our approaches down. Basically, RKE2 covers cloud and data center, that's where we see it, and then k3s has edge.

But no matter where you run, we basically have the same approach. So RKE2 and k3s will be the same approach in how you interact with them, but they have different capabilities, more tailored to the environment where they're running.

Bret: Yeah.

Okay.

Next one: is the next level of the Rancher Academy course coming? I enjoyed the level one operator one.

Darren: So the training, we I
don't know when we launched this.

I mean, it's relatively new.

I think it's like maybe
in the last six months.

But it's been a huge success.

I mean, a lot of people signed up, have
gone through this training, really enjoyed

it, I really, you know, it's funny.

When it comes to training, when we said, hey, we're going to start doing certification or whatever, internally I kinda barfed on the idea, because I'm not a big fan of certification. But the thing that's great about this is it's not so much about the certification, about getting a paper.

Like, I don't think that proves anything; it's about learning the content or whatever. Because somebody asked, this was on Twitter, I think, we were discussing what's the value of getting the CNCF certification for Kubernetes, I think it's called the CKA, is that what it is?

And my answer was, we actually have seen a lot of value in the certification. It's your way to take, like, you've got a competent employee, a smart person, they don't necessarily know a lot about Kubernetes.

It's just a quick start for
them to understand and learn

everything about Kubernetes.

And so that's what this is. It's not about getting certified, like that paper means something or whatever; it's about teaching you the content.

And you can have your people.

Your team members can
just go through this.

And so it's been very successful.

So when is level two coming? I can't say directly, like, I don't know. I know they're working on it, because it's been very successful.

So they're going to be adding more
content to that, but the exact schedule,

I don't know, but I'm glad you enjoyed

Bret: it.

Yeah.

Thanks, David.

And for those of you that haven't heard
about this so there's a link in the chat.

So the way that this works, cause this isn't Docker Mastery, right?

Those of you that are students.

So this expects you to know
Docker and Kubernetes first.

So my recommendation is if you're
interested in Rancher and I

definitely recommend that you
check it out, cause it's one of

those things that you should know.

If you're going to do Kubernetes,
right, you probably need to learn

a couple of distros at least.

And this is definitely the, one
of the ones I would recommend.

So do the Docker Mastery course from me and then do the Kubernetes Mastery course from me.

So those will get you the fundamentals and
then you're going to learn a distribution.

And this is a.

I've checked it out.

I've actually watched some of the
videos and I would recommend this.

This is great.

While we're there look up Adrian's channel

I should have this link somewhere.

Bret: Oh, I'm not signed in.

That's why it's not finding it, because it's like, who are you?

I'm showing you container ships.

Oh, I'll find it later.

I'll put it in the, I'll
put it in the notes anyway.

I dunno why he's not showing up on my end.

Let's go back to the questions.

Bret: Longhorn CLI, another insider question. When can we expect to see the Longhorn CLI?

Darren: Oh, I get it.

So the direction of Longhorn
has been, I mean, cause there is

technically a Longhorn CLI, but
this CLI interacts with the engine,

which is not what most people want.

So the direction of Longhorn has really been to try to just make it a first-class, really good Kubernetes solution. So, like, a storage class.
I'm not aware that we have a date or a plan to deliver a first-class CLI.

So I can't, yeah, I don't have a
great answer there, but I think

it's a worthwhile thing to to do.

We are scaling up that team.

Longhorn is now, I think we said it was GA, but it's officially supported through Rancher, so we have customers and stuff on it.

And we're continuing to expand in that space as more people are willing to put persistent workloads on it and find the use cases where Longhorn makes sense.

But yeah, not a great answer for that one.

Bret: Right.

That's okay.

All right, so I have the URL we should go
check out Adrian's channel if you've ever

watched any of those Kubernetes videos.

. Let me just, this is another
channel you should watch.

In fact he just had one two days ago. He's live almost every day.
And

Darren: yeah, he works.

He does it every morning.

Maybe.

I don't know.

Bret: Yeah.

It seems like that.

But if you like this channel, you
should definitely check out his channel.

I put the link in chat. He's talking about all this Rancher stuff and just cloud native, containers, all that stuff.

That's great.

Darren: Yeah.

He's got a site.

It's a cncn.io.

His main site of, all of his content and
what's going on with what Adrian's up to.

Yeah, it's some pretty impressive content.

I don't know where he finds all
these to be perfectly honest.

Bret: Yeah.

He's good stuff.

And sorry, if people didn't correlate that: he is in the videos of the Rancher training. So if you do the Rancher Academy, you will be seeing his face.

Yeah.

All right.

k3s, where we talked about k3s, and Turing Pi.

Darren: Oh, yeah.

Turing Pi just announced the CM4, Compute Module 4.

They're going to be
releasing that next year.

That's a cool, that's a cool platform.

I personally, I don't have one.

I haven't played with them.

Alex Ellis is really the one to follow for all of your Raspberry Pi Kubernetes stuff.

He's got some great projects, like k3sup, that make it really easy to deploy Kubernetes on these clusters and stuff.

So he's really the one to follow if you want to know more about that stuff.

So yeah, I'm excited for the new one.

I'll probably end up buying the new one that's built on the CM4.

Bret: Yeah.

Yeah.

This is the repo. Alex Ellis has actually been on the show, I think he was on earlier this year. He's a friend of mine, and all of his stuff is great.

Yeah.

And he's always looking for people
to help out with the open source.

So like if you're wanting to get into
open source and actually commit some

code, we're still in the month of code for DigitalOcean, by the way, the hackathon, Hacktoberfest. I've been talking about that all month long.

So he's always looking for people
to put in some PRS and all of

his projects are really great.

He also is really good at making great CLIs for his tools.

So making them easy, very similar to the Docker experience.

Darren: OpenFaaS is his project too, or, I mean, he was the creator of OpenFaaS. There's a good community around OpenFaaS.

And

Bret: yeah, he's worked hard at it
for years and I respect the effort

cause he's got a lot of hustle.

Darren: Yes.

Bret: It takes a lot of hustle to make open source projects a success.

Darren: Not easy to do what he's doing.

Bret: Yeah.

A question on any good public registry for helm charts, similar to Docker Hub. I do know that there's a discussion around registries being able to store, and they do already, Docker has experimented with this, storing things like compose files, helm charts, other stuff.

Yeah.

Darren: Yeah.

So that's like the OCI
artifacts work that's going on.

Because like, when you look at
the Docker registry, it's really

just a, it's just a content store.

It's a content addressable store.

And so you can put anything in there, and in fact you can today. That's a problem that Docker Hub has, people storing movies in there.

Yeah.

Yeah, of course.

If you can store it, people abuse it.

Yeah.

But oh, what was the actual question?

Bret: No, that's okay.

Public registry for chart.

Darren: Yeah, so there's two of them that I know of. There's some work from the CNCF, well, that's more of an aggregator for things, so there's Artifact Hub that the CNCF is doing.

And then I can't remember what it's called, but JFrog just put out one, I wish I could remember the name, but JFrog has one too.

They're aggregating all the content
or whatever, but I'll tell you,

this is a major problem in my mind
that even if you aggregated all the

helm charts, the majority of the
helm charts out there are not good.

And so the quality, like, this is one of the biggest problems with helm today: the quality of the charts is very poor.

And this is something at Rancher we're investing a lot in. Today you have some upstream, let's say Linux, software, but then that gets repackaged by a distribution like Ubuntu. And then you can very easily install it in Ubuntu, and they maintain that package, make sure it's easy, and it's integrated with other things. That doesn't really exist today for helm.

And so we're trying to go down that route with Rancher right now: how can we start maintaining a set of helm charts that are based off of the upstream, but are well curated? Because, like, every chart today wants to install Prometheus, for example. But how do we make it so every chart can basically use the Prometheus chart that we also support, or whatever?

And so that was one of the things we actually started. This effort with Rancher 2.5 is starting to figure out how we can curate a set of charts, just so these things are easily installed.

Like you can install and upgrade them.

That's like the biggest things.

So yeah, so that's, I think a
gap in the ecosystem right now.

And I honestly think people's view of what a Kubernetes distribution is today is really just, you get Kubernetes. But there's really room for a distribution to be very similar to what a Linux distribution is, which is really a kernel and a set of user space packages.

Well, this situation is a little different, where we're saying Kubernetes is the kernel and you can get that anywhere, but you need a distribution of the userspace stuff, all the CNCF projects.

So this is an effort we're investing
in on the Rancher side of, you know,

how do we create a kind of curated
set of like all the CNCF projects?

So they just easily install and you
can get charts that you just, you

don't have to be an expert to use them.

Bret: Yeah.

I just now realized, I was looking at this while you were talking, that the Helm Hub repo actually redirects over to artifacthub.io. So that is the same thing.

And I know about OperatorHub, which is not technically helm specific, but that's where I end up searching for things sometimes, when people's helm charts eventually leaned into an operator style.

So yeah, I don't know how much of this...

Darren: Yeah. OperatorHub is a weird thing.

It's hard.

I don't know.

I've done some blogs and articles and interviews about operators and the complexity of that kind of paradigm, or marketing term. So I'm not personally a big fan of OperatorHub.

Bret: Yeah.

Yeah.

I mean, yeah, just because it's on there
doesn't mean that it's quality, right.

Like that's true of anything
in the internet yeah.

Is there a convenient way to
install Rancher on existing

k3s hub cluster, sorry.

Is there a convenient way to install
Rancher on an existing k3s cluster?

Darren: Yeah, so the simplest way to get going with Rancher, I always just throw people to our GitHub page, cause it has direct links into the docs for what to install.

So there's two ways to install Rancher.

One is you can do a Docker container. That's the very simple, I-have-nothing-but-Docker option, but that gives you not a production grade thing, so it's not really the best. The other is to install it as a helm chart.

And the helm chart is
fairly straightforward.

You need to read the docs.

But the docs we have out there are quite good, because you need to understand the TLS, what options you want to do for TLS. So if you have a k3s cluster and you want to install Rancher, it's really just installing that helm chart. You're going to download the helm CLI and then run helm.
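(The helm route looks roughly like this; the hostname is a placeholder, cert-manager is typically installed first if you use the default Rancher-generated certificates, and the docs cover the other TLS options.)

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com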

Bret: Yeah.

And I'm trying to think, is it k3sup? I feel like Alex has a utility that basically gives you an easy CLI way to deploy helm charts.

Darren: Yeah, so it started in k3sup and then it moved into its own project called arkade.

I don't know if k3sup still does it, I don't really know, but there's another project called arkade,

which is basically a wrapper around
helm charts and it makes it really

easy to install a bunch of projects.

So check out arkade.

That's a great way to
install a bunch of apps.

Yeah.

arkade, with a K.
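(A quick sketch of the arkade flow; the app names are just examples, so check the project README for the current list.)

curl -sLS https://get.arkade.dev | sudo sh   # install the arkade CLI
arkade install openfaas   # wraps the upstream helm chart with sane defaults
arkade get kubectl        # it can also fetch common CLI tools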

Bret: yeah.

I'm glad you remembered that.

It's like an easy CLI for deploying popular things, essentially, in a repeatable way.

A comment, not a question: thank you for providing specific detailed instructions for air gapped installs. Those of us without internet access are often forgotten and ignored.

Darren: thanks.

Thanks.

That means a lot because we put
a lot of effort into air gap.

I mean, yeah. A lot of people do not pay attention to how to address that, and it's not actually that easy.

Oh yeah.

Bret: Yeah.

Documentation is hard.

And there's also this effect, I don't know if it's a natural effect, that once you've read and written enough documentation, it's hard for you to understand where the gaps are, unless you have an easy way for people to constantly point them out. Thumbs up, thumbs down buttons like they have in the Docker docs don't really help.

Without the context of
like, what is this missing?

What are we needing here?

And everybody wants to put in a
GitHub issue for docs or whatever.

So it's, I think it's always a challenge.

What do you think about distroless images? Is it okay to use them, as long as Kubernetes SIG Release suspended the kubectl debug feature?

Darren: Wow.

Suspended.

What does that mean?

Cause I'm looking forward to the kubectl debug thing getting more leverage, but I mean, there was some hitch in there.

Maybe

Bret: Maybe not in 1.20, maybe they're postponing it again.

Darren: Yeah.

For people that don't know what the debug feature is, it's really quite cool.

For an existing pod, you can spin up a sidecar, run a container in that existing pod.

And so if you had something like distroless, which has no shell, you can't exec into that.

Yeah.

With the debug, you could spin up like a tools container and then run some debugging things there.
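(A rough sketch of what that looks like; the pod and container names are placeholders, and it assumes a cluster and kubectl version with ephemeral containers enabled.)

# attach a throwaway debugging container to a running pod
kubectl debug -it my-app-pod --image=busybox --target=my-app-container
# the ephemeral container can target the app container's processes, so you can
# poke at a distroless app that has no shell of its own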

I think distroless is a great way to go.

Yeah.

The maintaining, like the CVE scanning and all that stuff, is really just a pain. So the less you can have in the container, the better. So I think distroless is a great approach.

Scratch, if you can do it. If you're doing that, I find basically the only people successful with scratch images are Go developers, right?

Bret: Right, there's not been a lot of C++ developers yet in container land. So it's mostly Go people that are doing the static binaries. Right.

But I love netshoot. So for those of you thinking about this distroless thing: distroless is going to help you make your apps into a smaller container image, but it's not for day one.

So that's why it's not even
taught in my courses yet, because

it's typically a maturity thing.

Like, you've got to get good at just troubleshooting containers with all the stuff in them, including shells, before you take the next step of trying to remove a lot of that unnecessary stuff, unnecessary meaning your app doesn't need it.

And that's the challenge: a lot of this stuff requires an iterative approach to your learning, as well as your deployment.

So most of the customers, in fact, none
of the customers I've ever worked with

over the last five years on day, one
of deploying things into production

have ever done like a scratch or a
distroless image because they're ops

people, they're troubleshooting people.

Everyone needs to learn how these things work, right? Those lean approaches to these things make it even harder.

So yeah, the debug command is exciting.

The ephemeral container idea. There's this thing I love, Nicolaka's netshoot repo. He's got a great network troubleshooting utility.

It's basically just the hodgepodge
of all the Linux utilities for

troubleshooting distributed systems.

I recommend and use that a lot.

So that would be my go-to once
this debug command works now.
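(Typical usage looks something like this; the pod name is a placeholder.)

# throwaway troubleshooting pod on the cluster network
kubectl run tmp-shell --rm -it --image nicolaka/netshoot -- bash
# or, once kubectl debug is available, attach netshoot to an existing pod
kubectl debug -it my-app-pod --image nicolaka/netshoot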

Darren: Yeah.

And I think you make some excellent points. You could call it, like, distroless is very advanced.

Yeah.

Yeah.

I mean, honestly the majority,
the vast majority is going to

be, Ubuntu or like red hat based.

Bret: Right, right.

All right.

So we've got these questions answered, or... we're running over. People got some great questions today, but there's a lot. Darren brings in the question: when do you plan to fully utilize the new UI in Rancher, GA?

Darren: Yeah.

Yeah.

So, Rancher 2.5. Well, we actually introduced it as an experimental feature in 2.4, and 2.5 right now is a split between, we have an old UI and a new UI.

And so basically the next release of
Rancher is going to complete the new UI.

So you should be able to accomplish everything in the new UI then, but we will not get rid of the old UI, probably for another release after that.

So that's the plan right now that new UI
that we put in there is very ambitious.

There's a lot of work, and honestly, we're still refining the user experience. It's a balance between trying to get a lot of the power and flexibility of what Kubernetes provides, but making it easier.

And we're still actively working
on a lot of the things there.

For the next release, we usually allot about six months between the major releases.

So you're looking at like,
basically another six months

before that UI will be done.

And then a year before
we delete the old one.

Bret: Okay.

Will KubeVirt be coming to the RKE line?

Darren: Well, I mean, KubeVirt already works with it; I don't think there's any problem with running KubeVirt with RKE.

We are doing some specific things.

You'll see some announcements
from us around KubeVirt.

So we are working on some projects
specifically in the KubeVirt space.

But those are not really quite ready
for prime time, what we're doing there.

But that use case of basically running VMs, there's still a good market there. Where I see KubeVirt playing is in the OpenStack space, where OpenStack was trying to go; KubeVirt is a replacement for OpenStack.

And it's it could possibly be a
replacement even for VMware, if you're

simplifying your infrastructure,

Bret: yeah.

Again, advanced, right?

You definitely want to be a well oiled Kubernetes team before you start

layering all this stuff on top, right?

Yeah.

Does Rancher provide
enterprise support for k3s?

Curious to know,

Darren: Yeah.

Yeah, we do.

I mean, that's the short answer.

Bret: Easy answer.

Can we run KubeVirt on top of k3s

Darren: yeah, it should work.

I don't see any problem there.

It's still just Kubernetes. As long as you have VT-x support on your hosts.

Yeah.

Hm.

Bret: Does it make sense to use k3s for normal servers instead of RKE? If yes, why do I need RKE, what do I need it for?

Darren: So k3s just optimizes more for smaller scale setups.

So there's a lot of people who
use k3s basically for everything.

There's really not a lot of downsides, except for the fact that it does not have great cloud provider integration.

You're going to have to roll that yourself
by bringing an out of tree cloud provider.

So that's where it gets a little more tricky, but at this point you can find a lot of information on the internet already about adding a cloud provider to k3s, like DigitalOcean or AWS or something.

So it's really just the components and stuff we choose. They're great for the user, but sometimes for, I don't know, enterprise needs and whatnot, they want things to be separate.

So I'll tell you, we run all of our production Kubernetes instances on k3s because they're very easy to deploy and orchestrate. k3s can actually scale massively far. We scaled it to like 5,000 nodes, whatever the max for Kubernetes is.

Yeah.

So you can get pretty far with it.

Bret: Okay.

Can I orchestrate Longhorn with Rook.

Darren: No, I don't think
there's any integration there.

Yeah.

No, we haven't really worked on that, because Longhorn itself is quite easy to run. It's just basically a helm chart.

And then once it's running, you just interact with it as a storage class, or there's a GUI for it too, you can just point and click. So that really covers it for us.

Yeah.

Bret: This is a can of worms,
but maybe we can make it quick.

Do you have opinions on cluster API?

Darren: Holy crap.

It is an important thing.

I believe that is the future.

About every three months, I try to use cluster API and try to figure out what's the strategy for Rancher to move to cluster API.

And it's a slow, painful process,
but we are getting there.

It's a very tricky thing because the
value of the interface of cluster API

is very high level and it doesn't do
much besides just define a cluster.

And then all the options are opaque: what are the options for how to spin up an actual cluster? So all the options become provider dependent, and they're specific to EKS or AKS or whatever.

It all comes down to the maturity of
the actual drivers and the problem with

the cluster API historically, and this
has changed, but the problem with it

historically is they focused
completely on a VM based model

where they were only spinning up
VMs and then orchestrating the VMs.

So there was no integration with say
EKS or GKE, and those are the primary

ways that people, I think people
should be running Kubernetes in clouds.

So there was, there's just always
been this practical thing of the

actual value you're getting out of the
framework, but the idea makes sense.

And so we're committed to cluster API.

It is very slow going.

It's like, that project's been
around for probably four years now.

And so yeah, it will slowly evolve. You know, VMware is the one who's pushing it very heavily, and once VMware started investing it sort of picked up pace, because VMware is dedicated to making cluster API their solution. I mean, they want that to work for their products or whatever.

And yeah, so it's like Fleet, for example,
has nice integration with cluster API.

So if you spin up clusters
with cluster API, that can be

automatically registered with Fleet.

And yeah, so it's this ongoing
thing, but it's really messy,

Bret: right?

There's a lot of stuff going on now,
interactions and stuff going on there.

Yeah.

So, I mean, with a lot of these tools the goal is to make the complex look simple, but it doesn't mean that it's not complex anymore to build more integrations and things.

Darren: And we've been doing this, like, provisioning clusters and whatever, for so long. It's like cluster API is two years behind what we're doing, but someone's like, we want to move to a community standard thing, and yet the maturity of what we're doing is so far beyond what cluster API is.

And it's how to balance that. You can always say, oh, well, you should just contribute everything to that. And it's like, well, no, because there's an idealistic side to the way cluster API works, where it's purely declarative, and there's a lot of things where, once you get into the real world, people actually want it to behave slightly differently, and that's not compatible with it.

So there's, yeah it's been
a thing, but we're committed

and trying to figure that out.

It's just going to take time. I don't see it being feasible for at least another year.

Bret: Right.

Okay.

And another one, Leon's asking me, I think: how does someone work through your courses, to begin with, or maybe beyond your courses, to being ready to deploy their first production app? How do I know what steps to take? I'm afraid there are production things I don't know that I don't know.

Well, there are the knowns, the known unknowns, and the unknown unknowns, or however that works.

So I don't have an easy answer for you.

I mean, this is always a challenge with training: the closer you get to real world training, like 200 level and 300 level training, the harder and more niche that training gets to create and to take.

So most of us are living in the space of giving courses that are sort of introduction courses, maybe a few intermediate courses, just because I don't work on any one company's project for deploying containers that's the same as another. They're all using different CIs, they're all using different servers.

Different ways of creating VMs,
they might be cloud or data center.

Just to even get you close to that
and it depends on your role, right?

Like, are you an operator or are you just
a developer or are you responsible for

everything that relates to container?

Cause now that means monitoring, logging, security. There's a ton; that's like eight courses of things that you'd have to take.

It's probably 200 hours of video.

So my answer is... I don't know. I don't know if Darren...

Darren: A good, honest

Bret: answer. That was the consulting answer first. And then the honest answer: sorry, I don't know.

Rancher in Rancher.

I don't know if that
was a serious question.

Darren: No.

Well, ironically, this is something that's not quite supported yet, but Rancher in Rancher is actually possible. Rancher managing Rancher instances is actually going to be an officially supported thing. Not in 2.5, but in the next one.

Because there's actually people doing this. They have some use case where, yeah, this is getting very advanced, this is for very large organizations and things, where they have a Rancher instance that then manages more Ranchers. But yeah, it's crazy. It's actually weird.

Bret: What is RancherOS? I think we're coming to the end here, cause we gotta roll, but what is RancherOS, is it like another Linux?

Darren: We didn't talk about it.

Yeah.

So RancherOS, that was released a very long time ago in container years. And that was a very aggressive, experimental thing of running absolutely everything in the OS in containers. So every single component is in containers.

Over time, that technical aspect of it became not all that interesting. What people like RancherOS for is it's a very simple distribution to just spin up and run a container.

It has support for cloud-init, and it has a compose-ish format built into it, so you can just define, these are the containers I run.

So RancherOS is a container-optimized, or oriented, Linux distribution. We have k3OS, which is a more Kubernetes-oriented one.

There's a question that just came in right now: any updates on k3OS? So if you've noticed the news over the last couple of months, we were recently acquired by SUSE.

That's not quite done yet, so I can't speak to a lot of it, we're still in the process of that merger. But you can definitely expect, once that merger is done, some updates on what we're doing in the OS space.

So something I'm very excited
about, but cannot talk about.

Bret: Right.

Okay.

So stay tuned.

And so the answer to Kevin is yes, but no.

Yes,

Darren: exactly.

Bret: Yes.

But later.

And we got some new questions
coming in, but I'm going to cut

it off at this final question of
what do you do in your free time?

k3s development?

Darren: Yeah.

So work is all consuming. It is quite consuming for me. On the other side of the fence, I have five children, and yeah, so I have no free time.

I have no hobbies.

I don't do anything except for family things or work.

So yeah, it's pretty boring in that I'm basically pretty obsessed with work and then do stuff with family, yeah.

Bret: well, kids are not
boring even on boring days.

So I would say that your
hobby is family, which in

Darren: COVID days is

Bret: yeah.

COVID days.

That's a lot of us, right.

Is more time with family and
more time in the same house.

Yeah.

Darren: Maybe spend a little
bit too much time with family.

Bret: Yeah.

My wife was traveling this week.

And I had, I was in the house
alone with the puppy for four days.

And it was the strangest feeling.

I haven't had that in almost a
year of being alone for four days.

And so, I dunno, there's pros, there's cons. You know, I ate poorly, I stayed up too late, I'm not sure what other negative things happened, but yeah, that was the change in my patterns.

All right, well, I know we ran
way longer than normal, but I

thank you so much for being here.

We were killing it on the questions.

We had 2100 playbacks already on YouTube.

We were up to like 150 people concurrently on the channel.

So I think you broke the
internet on my YouTube channels.

So good job on that.

That's a new record.

I think 150 concurrent is
the peak I've ever been at.

People love Rancher and want to
hear more about Rancher products.

And that's great because a lot of,
yeah, a lot of us in the open source

and the container space had been
fans of you guys since the beginning.

So it's great that you're still doing
great work and putting out new tools

and always have things to talk about.

All right.

You can find out more from us at ibuildthecloud and BretFisher. Those are our little Twitter handles.

So go to Twitter.

We're both active there.

We both answer questions and talk
to people on there all the time.

Thanks again, everyone.

We'll see you next week back
here on YouTube live and thanks.

See you soon.

I build the cloud,

Darren: correct.

Thanks.

Fun.

Thanks so much for listening and
I'll see you in the next episode.

Creators and Guests

Bret Fisher
Host
Cloud native DevOps Dude. Course creator, YouTuber, Podcaster. Docker Captain and CNCF Ambassador. People person who spends too much time in front of a computer.

Beth Fisher
Producer
Producer of DevOps and Docker Talk podcast since 2019. Assistant producer on Bret Fisher Live show on YouTube. Business and proposal writer by trade.

Cristi Cotovan
Editor
Video editor and educational content producer. Descript and Camtasia coach.