Four Months Felt Like Four Years
E7

Nirmal Mehta: Non-technical users who
aren't devs are starting to use

these tools to build business applications.

Right?

They don't know anything about CI/CD.

They don't know anything
about infrastructure.

They don't know anything about code
and they're gonna start building apps.

If you thought Shadow IT was a big
problem back when cloud was new, oh boy.

Shadow IT with gen AI is like,
that's a whole brave new world

that we're entering into.

Bret AI July 2025: Welcome to the
Agentic DevOps podcast, and I am

one of your hosts, Bret Fisher.

My co-host Nirmal Mehta
is back in the studio.

His studio, not my studio, and we are
recording what we're calling season two.

It's actually been almost six
months since our last episode, and

a lot has happened and we quite
frankly, have just been overwhelmed.

I think a lot of us are
feeling this right now.

The last three months since
December have been a total shift.

I'm looking at this as more of the
epoch of "AI is good enough now"; that's

the summary of this entire podcast.

Where we last left off in August,
we were talking about AI as a helpful

assistant, but yes, you have to spend
a lot of time with prompting and

managing hallucinations and all that.

I'm not really doing much of that anymore,
and for the last three months, I haven't

experienced much of any, uh, what I
would call significant hallucinations.

I certainly have seen it make wrong
decisions and do things I didn't intend.

But it's, it's amazing
how much better these are.

Truly, Opus and some of these newer models,
like GPT 5.4 and the 4.5 and 4.6 versions of Opus and

Sonnet have fundamentally changed the
game for me and a whole lot of us online.

So we talk about that.

We go through some of the headlines of the
last six months, but we're now starting

season two, and we've already got multiple
other episodes lined up around the world

of AI and its intersection with engineers
doing automation and DevOps like things.

So, I'm excited.

Uh, we've got a whole bunch of shows.

I mean, we've never had seasons
before, but why not? Season two.

Let's, let's call it that.

And, I'm looking forward to
getting these episodes out to you.

So this is the first of many and
we go pretty random and quick

through a lot of the content we're
gonna talk about in this episode.

So here we are, 2026.

Everything AI.

Bret: Hi, Nirmal.

Welcome

Nirmal Mehta: Hi, thank you.

I'm Nirmal Mehta, and I'm a principal
specialist solution architect and

containers tech lead at AWS and,
longtime friend and co-host, with

Bret on all the different shows that
Bret has created over the years.

I'm super excited to be back here
for the next quote unquote, season

of the Agentic DevOps podcast.

Just a quick note.

The opinions that I will be sharing
on the show are my opinions and not

of my employer, Amazon Web Services.

And with that.

Boy, do we have a long list
of things to catch up on.

Bret: We can't even
catch up on six months.

Okay.

Nirmal Mehta: No.

Bret: so if this is your first Agentic
DevOps podcast, welcome to the show.

Nirmal Mehta: Yeah.

Welcome.

Bret: And Nirmal and I started this
podcast a year ago, next month.

We were in London, I believe, at
London KubeCon, CloudNativeCon.

Nirmal Mehta: Correct.

Bret: We started it there.

It felt like the right time to start it.

Nirmal Mehta: It really did.

Bret: We were talking about it, but it
felt like murmurs, like everyone in the

industry was focused on how to run AI.

Nirmal Mehta: Yeah.

Bret: Using AI, and agents were new.

The agents were a new idea.

And MCP was less than six
months old at that point.

Nirmal Mehta: Yeah.

Bret: I think we talked
about that first one.

Like we mentioned A2A, like a brand
new protocol for agent-to-agent.

Nirmal Mehta: Correct.

Yeah.

Feels like a century ago.

Bret: I know it feels like forever.

Well, none of that matters anymore.

We're gonna get into it.

But we had some shows over the summer.

We had some guests, we had Laura Tacho on.

We

Nirmal Mehta: Yeah.

Bret: Solomon Hykes on, talking about
sort of the world of CI and DevOps in AI.

Like how that relates to
AI and how they see it.

They have since pivoted.

We might talk about that.

But this is gonna be the kickoff episode.

So I wouldn't call it a reboot.

Nirmal Mehta: Yeah.

Bret: I wouldn't call it

Nirmal Mehta: a

Bret: comeback.

Nirmal Mehta: No,

Bret: I've been here for years.

That's the LL Cool J quote,

just for the kids that don't
know that '80s, '90s music.

And so let's back up a second
and remember where we were.

Our last episode was Laura
Tacho, really around August.

I think we aired it in September.

We recorded maybe around August

Nirmal Mehta: About,

Bret: of AI not being a productivity
booster for business when it comes to

business goals, business outcomes of
the software teams, and how AI wasn't

actually trailblazing this amazing
improvement in business productivity.

It helps with coding and speeds up
coding, but that's a small portion of

a developer's week and that is only
one part of the software lifecycle.

So we kind of had some sort of bad
news around, hey, maybe five or 10%,

you know, business productivity.

And then third quarter happened,
and third quarter we got more news.

We saw more studies saying, yeah, AI
has had... I think the

Yale study, which is debatable.

I don't think everybody agreed with it,
but I think the headline there was AI

has had zero effects on jobs so far.

So there are definitely different camps.

You've got one camp on the red pill side
that is describing it as 10x developer

and just amazing productivity benefits.

But everyone that I know that says
that is an independent developer.

Nirmal and I were at a great conference
together and this conference was a group

of friends that are all builders, they're
all SaaS makers, developer consultants,

making products, working for big and
small companies sometimes, but they're all

highly invested in using AI for their job.

Nirmal Mehta: Yeah.

Bret: One thing that Nirmal and I come
from is we come from the ops world

where these people tend to come more
from the pure dev, pure builder world.

Nirmal Mehta: Yeah.

Bret: From enterprise teams and big
consulting companies and complication and

legacy and monolith, like that's our world

Nirmal Mehta: Yeah.

The operations side.

Yes.

Bret: I do have my friends and
I do spend that time making,

I am an independent creator.

I am a solo developer and I
have solo developer friends.

And lemme tell you, these two camps are
completely different when it comes to AI.

Like it's two sides of the same coin
on AI, but they're completely different

experiences and opinions based on the size
of the team and the size of the company.

What do you think?

Nirmal Mehta: And, and there's
like, there's a lot there.

for me, it was not only attending,
that solo developer retreat with

you, Bret, which was amazing.

but I followed that up, by attending
the Pragmatic Summit in San

Francisco, just a few days later.

Bret: back to

Nirmal Mehta: As of this recording,
that was just two weeks ago.

And, from my perspective, you're
right Bret, both of us are kind of,

our experience and our expertise is
grounded in operations, systems

administration, SRE, DevOps engineering,
DevOps practices, more on the Ops side.

Right?

I would say, like, we're lowercase
dev, capital-O Ops people,

Bret: That's a

Nirmal Mehta: and we've.

Bret: Yeah.

Nirmal Mehta: We've spent our
careers bridging, you know, we're

DevOps practitioners, right?

First and foremost.

And we've been bridging this
tension that's been between the

developers and the operations folks.

And of course, those lines have
gotten very blurry with cloud

native tooling and, the modern
organizational structures these days.

in my day to day, I typically
talk to platform engineers,

operations folks, engineering
teams that are, like SRE teams.

I don't talk to developers,
especially not solo developers much.

And so for, for me, for me,

Bret: contracts and
enterprise Amazon contracts.

Nirmal Mehta: for me, these two
events back to back were kind of

like an expedition for me into the
world of developers, like the current

state of developers. It was like an
exploration of that world that I'm not usually in.

And it was fascinating and I kind
of agree, Bret, like we've been

working for over a decade on bridging
DevOps together culturally, right?

Containerization, cloud native automation,
and making developers lives easier and

also making operations folks' lives
easier and bridging that tension, right?

that classic throw it over the fence.

I think you're kind of right with
respect to Agentic systems, AI.

kind of reigniting some of that
tension because the adoption,

what I saw, and granted this was
definitely a group of leading edge

adopters of this technology for sure.

But what I saw there was, as a
developer, the adoption and the usage

of these tools, at least in some
part of the SDLC, right?

The inner loop development
was pretty phenomenal.

I just saw really amazing things,
and just productivity boosts.

I don't want to guess on the
percentage, but very clear, direct,

you know, outcomes there.

again, this is the Agentic DevOps
podcast, and our mission on this podcast,

one, is we wanted to focus more
on the operations side.

How are these AI and agentic
systems affecting the operations?

Like everything kind of from when the
code is built, to like the testing

CI/CD, and then actually running in
production, and then the operations

of those applications in production.

And it was interesting to see how the
acceleration of these tools, these tools

accelerating application development
was now highlighting in very stark

terms the need for organizations to
have automation from the CI/CD all the

way through to their infrastructure
in the same way that we've been

talking about for the last 20 years.

Because now there's a wave, a
tsunami, I think, of a hundred x

more applications that are gonna
be built, starting now.

The wave has already started.

And I don't think as DevOps
practitioners, as operations folks, I

don't think we're ready to handle this.

wave of new applications and this new way
of software development that's happening.

I think there's like even more tension,
but the fundamentals are still the same.

That's the takeaway I got: now
more than ever, your

organization needs automation.

It needs solid DevOps practices,
it needs solid documentation and

you know, efficient operations and
I'll say automation again and likely

we'll need to adopt agentic tools.

We'll talk about this a little bit
more, but I think we will have to

adopt agentic tooling to even just
operate these systems at the

scale to meet the needs of these, the
tsunami of applications that are coming.

Bret: Yeah, I mean, this leans
into, like, your CI/CD debt.

We talk about code debt,

Nirmal Mehta: Yeah.

Bret: but ops debt, SRE
debt, DevOps debt, these are

all similar problems.

And if your systems are not enabled
enough that a developer can spin up a

new idea, iterate on it, put it into
staging, put it into production without

your involvement, if they're not enabled.

We might label that as platform
engineering, but if they're not enabled

for that, you're suddenly gonna have
a lot more work on your plate if

you're having to deal with everything
that the developer's creating.

'Cause if the developer can code
10x faster, let's just say they

can, or 3x, whatever, 2x,
it's gonna be faster than today.

They're coding more.

We know that the negatives around like
the business productivity, that all

starts to fall down because the code
is only a small portion of the problem.

Like, they create code now.

We DevOps people have a new
problem, which is they're shipping more

updates to the same code and they're
probably gonna be shipping a lot more

new repos as they spin up things.

And this is what we saw at our, retreat
before you went to San Francisco.

Is that a solo creator can now be
working on three or four projects at

the same time, or a solo dev in a team
can now, you know, things that before,

well that would be a couple of weeks
of work and I'd have to get approvals.

Assuming that you don't have to
go through approvals and all these

different processes, you can iterate
on an idea very quickly, especially

in those first early days of an idea.

And what I'm finding as a
DevOps engineer is I'm creating

way more tooling for myself.

And so let's go back to the third
quarter because we're gonna step

through a couple of major shifts in

Nirmal Mehta: Yeah.

Bret: paying attention
that have enabled all this.

Nirmal Mehta: Yes.

Bret: enable it for the DevOps
engineers, not just the developers.

Nirmal Mehta: if you remember, some of
our earlier episodes, we were kind of

highlighting that, you know, if we, a
year ago when we were using these tools,

I think we all had like a healthy dose of
skepticism and we should still maintain

that healthy dose of skepticism, where we
saw well, these tools are hallucinating a

lot, they're making simple applications.

Bret: I spend all my

Nirmal Mehta: Like,

Bret: Yeah.

Nirmal Mehta: the output, had to be
shepherded looked at very closely.

and I think a healthy skepticism
look at that was, you know, these

models are really cool and there's
cool stuff that's enabled by that.

And some of these operations, like
with MCP servers, that's awesome

as that was starting to roll out.

But it's not gonna replace this
whole entire function or, you

know, the quality is not there yet.

and if the last time you used
any of these AI tools was, let's

say six months ago or older,

then you might not realize something that
happened in October and in the last six

months, or however long it's been since October.

and that is these new models that came
out, especially, and we heard this from, I

can't even count how many people told me.

this was like announced at the
Pragmatic Summit as if it was like

just an obvious paradigm shift.

But when Opus 4.6 came out from
Anthropic, Claude Opus

4.6 and Sonnet 4.6, and
some of these other models too:

Gemini 3.1 Pro, GPT 5.3 Codex,

Bret: right.

Nirmal Mehta: like basically the,

Bret: this year, but yeah.

Nirmal Mehta: let's just say,
like, the latest cohort of

models that have been released.

Bret: while you were,
let me recap for you.

While we were at KubeCon, we had
multiple state-of-the-art models.

We're gonna call them the SOTA
models, frontier models, whatever.

Released like within the same week.

One of them was Opus 4.5, right?

We had Sonnet; I mean, we were
on Sonnet 3.5 a year ago.

Nirmal Mehta: right.

Bret: Sonnet and Opus 4.6.

Many people, yeah, like you, so many
people have told me it's a game changer.

It is so much better.

The hallucinations are way less.

We now have the AI guiding us
through gaps in the context.

I just demoed a couple hours ago on
the livestream how, if I try to

build an ECS cluster, it's walking
me through, answering the questions

without me asking. It's way better,

Nirmal Mehta: Yes,

Bret: much better than six months ago.

Nirmal Mehta: correct.

Bret: I think everyone who's done
this or tried this is agreeing.

I think like you're saying, everyone
who's still using the cheaper models

or maybe not the Claude models. I
mean, it's just that Claude

is the cream of the crop right now.

It's everyone's favorite.

That's a combination of
the Claude code CLI and in

combination with the Opus models.

But you can still do this on Gemini Pro 3.1.

You can still do this on OpenAI,
you know, ChatGPT or GPT 5.3.

Now, Codex, if you specifically look
up the Codex, but if you're using

either one of those three, I just
know that I have the most experience

with Opus, so I can speak to that.

But what you're saying is
both of us at the same time in

different groups of people across

Nirmal Mehta: Correct.

Bret: the industry, within two
months or even just a month of that

November release, everyone I know
who was what I would call blue pill,

meaning we're not really convinced
AI is gonna do that much for us.

and maybe you can get the function right,
but that's the best you're gonna do.

I'm probably gonna have to change
something in that function.

You're probably gonna make unnecessary
lines of code that you don't need.

And these are all still true,
but they're way less common.

And now suddenly, that, combined with
the release of skills, which

also happened in the fall, in October,
when Claude announced the idea of skills,

has taken on a whole other level
of hype, because it's truly easy to use,

and I've been actively creating new
skills every week for the last two months.

So we are now in a world suddenly
where six months ago I wasn't

sure where AI fit in my world.

And now every conversation for
everything I do in DevOps starts with AI.

there's nothing I'm doing anymore where
I'm not aligned with an AI assistant who

is helping me walk through my design or
my GitHub workflow creation or, you know,

whether I'm making a landing page for
something or whether I'm deciding whether

the CVEs I wanna patch are even necessary.

there's just so many different things.

I'm doing AI first now, and I've spent

probably over 10 hours in the last 48
hours talking to and waiting on an AI.

And it has only gotten I think,
one, maybe two things wrong.

And they weren't even really wrong.

They just didn't work the way I wanted 'em to.

And that is across dozens and
dozens of prompts across multiple

projects: GitHub Actions, on web,

Nirmal Mehta: Yeah.

Bret: GoLang tooling for my DevOps.

Like I'm just doing so much
stuff all at the same time.

So you and I now have to bring that,
like to me the goal is to bring that

experience to the enterprise team, right?

Nirmal Mehta: Yeah.

Bret: the possibilities.

Nirmal Mehta: I just wanna highlight
this and put a big exclamation

point on it for our listeners.

First of all, thank you to all the
folks that have supported us and

shouted out that they enjoy the show.

They trust us to give them something
to ground what we've been seeing.

And so I think there's two
things here as takeaways.

One is if you've tried these tools and
models in the past, like six months ago or

older and you haven't touched them again
since then, I highly recommend you at

least try some of the latest SOTA models.

Another thing that we talked about
early on in the podcast episodes is

to also maintain a healthy skepticism.

Everyone's use cases and what
they're doing is very different.

They still are LLMs.

They still act the way LLMs act.

And yes, the quality of the output
that we're seeing, for whatever we're

working on, has increased dramatically.

as engineers, as practitioners,
as operators of production systems

maintain a healthy skepticism,
which means understanding how to

use these tools, understanding
their limits and their capabilities.

I would highly recommend against just
dismissing these tools completely.

Bret: The number of people that we respect
in the industry have, in the last three

months, gone what I call AI red pilled.

This is the Matrix; this is back to the
ultimate choice of Morpheus and Neo

in the Matrix where he offered him two
pills, the red pill and the blue pill.

The blue pill meant you'd wake up at
home, nothing would've changed, you

you know, the reality hasn't been
altered, and you can just live your life

pretending that nothing ever happened.

That's the skeptics, right?

The number of people that are
no longer taking the blue pill and

are now on the red pill, which is, let's
see how far the rabbit hole goes.

And let's try this thing everywhere.

We can try it.

You can still have skepticism
in trying a new tool, right?

Nirmal Mehta: Yeah.

Bret: We were skeptics when
we first started Docker.

We were like, is this really that good?

and then we saw people that would
try to use too much Docker, right?

Like you use Docker everywhere.

Like we thought it was gonna be the
replacement for brew on our local

machines and we were gonna run every
local CLI tool in a Docker container.

'cause it's convenient and it's isolated

Nirmal Mehta: I mean, we
we're kind of going there

Bret: We didn't, yeah, we

Nirmal Mehta: with agent tools.

Bret: far.

Nirmal Mehta: Yeah, that's true.

Bret: I thought containers were gonna
be, like, I thought Dockerfiles were

gonna be our build documentation.

Like it was gonna be everything
for CI inside a Dockerfile.

Nirmal Mehta: True.

Bret: There were stages in Dockerfiles.

Nirmal Mehta: Yeah.

Bret: I thought I was totally red pilled.

Obviously it's always
somewhere in the middle,

Nirmal Mehta: Yeah.

Bret: I like that you wanna be skeptical,
but I am not issuing the same skepticism.

Nirmal Mehta: I think,

Bret: mean,

Nirmal Mehta: think what I mean
by skepticism here is have a

systems engineering approach
to these new tools, right?

don't just read a blog post and
consume the hype and say, oh,

it's like a hundred x productivity

gain across the board, but
really start to use these tools.

I think the broader message
here is, don't ignore it.

Get good at using these tools

Bret: Right.

Nirmal Mehta: think about how these
tools can be used in your day to

day, in the automation of the systems
and, orchestration and automation

and documentation that you might be
responsible for in your job or your role.

And also think about it as, like, an artist
would a canvas, a brush, paint, where

you could use these tools to explore lots
of innovation and creativity as well.

but skepticism in the sense that
use it, see how it's being used.

Like LeVar Burton said, don't
take just our word for it.

Bret: I don't think anybody, we're not
even using the words vibe coding anymore.

Those words are a year old, and we
now understand the nuance. It's

sort of a meme at this point, but no
one's saying vibe code your DevOps.

Right.

Nirmal Mehta: Correct.

Yes.

Bret: That is why I'm making courses.

That's why I've been working for a year
now to build this DevOps world in my

community and in my content, because
this is gonna take years and years.

This is every one of these big
waves, like Docker, like Kubernetes.

Nirmal Mehta: Yeah.

Bret: It's at least 10 years before
the majority of companies have

understood it and adopted it, but I
think there's three main areas for people

to focus on in the world of DevOps.

The first area that everybody should
be doing right now, regardless of

your skillset, regardless of whether
you're allowed to at work, just

do it at home, is to try, even if
you can only get the free models.

A GitHub free account provides free
Copilot, which provides free LLMs for

you to use with all of the Copilot tools.

That's web.

VS Code.

That's command line TUIs, like
you can use all that stuff.

There's free Claude, there's
free GPT, there's free Gemini.

You can use these today.

But you, if you can just get a
$20 plan with any of those things,

whether it's GitHub Copilot, whether
it's Claude, whether it's Codex.

just spend $20, even if you just spend
it for a month to do some learning, if

you can't get your company to pay for it.

I think anyone that I would sit down
with and help them through the

basics of getting started would
see really fast benefits to their

personal workflow for DevOps.

'cause when we think of DevOps, we think
I'm largely writing YAML or shell scripts,

or maybe if I'm fancy I'm doing Golang or

Nirmal Mehta: Right.

Bret: I'm doing some
automation for my GitHub.

I'm probably spending
a lot of time in Git.

I need to do git commits and
GitHub things, or GitLab things.

And almost all of that can immediately
be benefited just by having Claude

Code installed on your machine; or the
ChatGPT desktop is a fantastic tool,

and you don't have to necessarily be
a developer to do all of this to see

the benefits, but it's a study buddy.

It's a work buddy.

It's there and it's reliable.

The better, higher-end, and more
expensive models you can find will

more likely give you better results,

Nirmal Mehta: Yes.

Bret: whether you seek open-weight
models or closed-source models.

But I can tell you that for the last
three months, everyone I know and

across dozens and dozens of people,
big or small, they're all talking

about Opus as the model to use.

Some of them still use
Gemini and still use GPT 5.3.

Those still work, but they're
all in love with Opus.

That's the first wave: like,
get used to local tooling.

Nirmal Mehta: Mm-hmm.

Bret: push yourself to try using
AI as a buddy to help you anytime

you need to create some content.

Content meaning YAML or whatever.

The next phase is getting AI to
help you automate your CI.

Nirmal Mehta: Yes.

Bret: We are still in very early days,
but in the fall of last year GitHub

announced a bunch of different
things, especially at GitHub Universe.

I'm sure Amazon also launched
months of different things on

Bedrock, but I unfortunately
don't pay attention to everyone.

But I can tell you that on the GitHub
side, and maybe you can, compare this to

what's happening on Bedrock and other
AWS functionality. But at GitHub,

they launched a lot, not only
keeping track of all the models.

'Cause if you didn't know, GitHub Copilot
models is kind of like OpenRouter,

where you can use everything from Opus
to Gemini to DeepSeek, to GPT, to Qwen,

to tiny little models like Phi.
There's dozens of models that you use

all through GitHub, and you only need
a GitHub account to get started there.

You don't even need to spend money.

But if you pay for the GitHub copilot,
I think it starts at like $17 a month,

you can get access to all the bigger
models, and I use that every month.

And I can easily go days or weeks in
the month on just a $20 plan without

hitting limits. And you can start to
experiment with something that they teased

us with in the fall but officially launched,
I think, actually last month, that

they're calling Agentic Workflows.

Luckily, Nirmal and I were on point
with the name of our podcast, Agentic

DevOps Podcast because they're calling
this functionality Agentic Workflows.

And it's a little bit different
than just throwing an LLM prompt

in your workflows or pipelines.

It's using AI to create workflows.

So you create a markdown
file with some front matter.

You give it the front matter as
sort of like the deterministic

stuff, and then you write a prompt
of what you would like it to do.

And then the AI through
GitHub Copilot will create the

workflow for you and run it.
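To make the shape of that concrete, here's a rough sketch of what one of these markdown workflow files could look like. The front matter fields, triggers, and prompt here are illustrative guesses, not the official schema, so check GitHub's Agentic Workflows documentation before copying anything:

```markdown
---
# Deterministic front matter: when to run and what the agent may touch.
# Field names are illustrative, not guaranteed to match GitHub's schema.
on:
  workflow_run:
    workflows: ["CI"]
    types: [completed]
permissions:
  contents: read
  issues: write
---

# Triage failed CI runs

When the CI workflow fails, read the logs of the failed job,
identify the first real error rather than downstream noise,
and open an issue that summarizes the failure, links to the
run, and suggests a likely fix. If an issue for the same
error already exists, comment on it instead of filing a duplicate.
```

The split is the point: the front matter stays deterministic (triggers, permissions), while the body is the natural-language prompt the agent interprets at run time.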

This is one of their ideas for how
we might run really complex workflows, or even

workflows that have an undetermined,
non-deterministic nature to them.

A good example might be triaging
why your GitHub Actions failed.

So it has to go read the logs,
find the failure, then go look

up what the failure might be.

Or maybe it just knows, built into the
model. And then it might suggest on the

existing PR, or it might open a new PR based
on what you want it to do, and it will

start to help automate fixing problems
that before, if you were trying to

do it in a very deterministic way
without a Code Rabbit or a GitHub Copilot,

those might be very hard to do, right?

There would have to be some greps involved,
some sed and awk; like, you would definitely

have to have a very long and thorough app.

That's what it probably
would turn into: an action

that's almost like its own app.

And we've had those, we've seen those
before, but now you can start to

create these custom ones yourself.

So that's stage two.

I think stage three, the last stage,
is where you have an always-running AI

that's able to respond to events
like an operator would.

Nirmal Mehta: Right.

Bret: That's the most mature area.

So in terms of the maturity model
that I'm building out for my

courses and for my community, it's
starting with the local tools.

You and I saw that like the individual
developers are very productive on

their local machine with these tools.

as you get really comfortable with that
stuff locally and you figure out what

YAML and TOML this thing writes
well, and where do I need to learn how

to hold its hand a little bit more?

You're gonna learn all that.

And then as you get better at using
what we call now the agent harness,

which is just really the UI that
you're using with your agents, but

I'm using the opencode harness.

Others, might be using it like Claude
Code or Codex or a different harness,

Nirmal Mehta: Yep.

Shout out to Kiro.

Bret: shout out to Kiro.

Yeah.

another harness, like there's a
lot of harnesses and everyone's got

their favorite, and they all sort of
are converging around the same ideas

of letting the agent continue to go.

Longer and longer. Like, we're
getting better at the AIs running

for longer periods of time, doing
multiple steps without hallucinating.

That's getting better.

Nirmal Mehta: Yes.

Bret: At the same time, we've invented this
idea of skills that really allow you to

create a SOP, a standard operating
procedure for each type of work you do.

And I just talked in the livestream
today about, as you get better with your

local tooling, you have your own work
that you typically do over and over.

Most of us don't create YAML and
TOML out of thin air, right?

If we're doing DevOps, we're
usually copy and pasting a lot.

We have a team repo that we set
up with standards in it, that we

use our official templates from, right?

We've all got different processes
for how we know that this particular

CloudFormation has to be, that
these adhere to these standards, pass

these lints, like, you've got all that.

It's probably in your Confluence or
in your GitHub repo itself; you can

very quickly turn that into a skill.

That skill is just markdown.

And it's like you treat it like
you're teaching a junior engineer to

automate or to do some part of work
that you don't wanna do yourself.
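For a sense of how small that markdown can be, here's a hedged sketch of a skill written as an SOP for reusable GitHub Actions workflows. The front matter follows the general pattern skills use, a name plus a description that tells the agent when to load it, but every rule in the body is invented for illustration; your own team standards would go there:

```markdown
---
name: reusable-workflow-sop
description: Use when creating or editing reusable GitHub Actions
  workflows in this repo. Encodes our team standards.
---

# SOP: Creating a reusable workflow

1. Put the workflow in `.github/workflows/` and trigger it with
   `workflow_call`, declaring every input and secret explicitly.
2. Pin third-party actions to a full commit SHA, never a tag.
3. Set `permissions:` to the minimum each job needs.
4. Run the repo's linter before proposing the change, and fix
   anything it flags.
5. Update the README's workflow table with the new entry.
```

Treat it exactly like onboarding notes for a junior engineer: the agent reads the description, decides the skill applies, and then follows the steps.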

And so I have had great success in
the last month with the standard operating

procedures that I have for how I
create a reusable workflow in GitHub
create a reusable workflow in GitHub

Actions or how I would go about
creating some sort of Terraform.

I know what I care about.

I'm particular about certain things.

I have certain standards, and
those maybe are in my head.

We call that tribal knowledge.

I'm a tribe of one, so I have
tribal knowledge to myself.

But you're gonna put that in a markdown
file, and I just call it an SOP now.

Like the skills are SOPs.

I had SOPs before in my Notion
with my team that we are now automating

through skills and AI today.

And I think that'll

be a bigger portion of my
courses than I anticipated.

'Cause once you can get good at that,
you're onto something, because what

you've done is you've documented
a non-deterministic workflow for an AI

that runs locally on your machine.
You're only one step away from

that running in a GitHub Action.

And that's what GitHub calls Continuous
AI, which is AI running in your

automation, doing various things in a
non-deterministic way that you couldn't

otherwise do with the previous generation
of what we're calling deterministic

workflows, which is programmatic
if-then statements, else statements.

Like

Nirmal Mehta: based.

Bret: Yeah.

So I need to have a diagram for this.

I don't.

But this is, I think,
where we're all going.

And I think that if you're not at
the beginning, at least of this

series of steps and maturity model,

I think now is the time, baby.

Like, we're a year into agentic
DevOps, and today is the day.

Nirmal Mehta: Yes, totally.

And I think it's somewhat optional
to do today, but I don't think it's

gonna be optional for our
jobs in the near future.

I think we're gonna need these tools to support the proliferation of CI/CD workflows that are gonna be needed to deploy and maintain all of these new applications being built with these agentic tools.

I want to kind of
highlight something here.

What we saw at these events in the last month was developers accelerating their application development lifecycle, the inner loop, and then they weren't making decisions on how to deploy these applications.

Claude Code was doing that.

They let their agents figure out how to deploy this.

And so those SOPs, the concept of SOPs and skills, are gonna be super important, because as operators, we're gonna be responsible for creating the platforms and the guardrails that are the landing zones for these applications, and also the system that helps operate all of these new pipelines and workflows that have to be built to deal with, you know, what I call the tsunami of applications that are coming to all of us as operators and

Bret: gonna be creating like this.

Nirmal Mehta: I think
we're gonna be create,

Bret: Yeah.

Nirmal Mehta: Yeah, I think that's true.

I think the word DevOps is gonna be even more DevOpsy.

Like, we're really gonna be developer-operators, building applications that do all of those operational tasks for us,

which then leads to the next thing,
and I know this is maybe a little bit

skipping in our outline today, but that
means that we're gonna have a lot of

agents running and doing things on our
behalf, running kind of in parallel.

And so, if you've been kind of just seeing
what's going on in the last couple months,

I think the term agent orchestration is a term to look out for, and I think as operators, as platform engineers, as SREs, we're gonna have to be able to answer how and where all these agents are gonna run in a secure environment, with all of the well-architected principles around it.

And that's another area of tooling that is just coming out right now. I think one can make the case that something like OpenClaw is an agent orchestration layer.

I think Kubernetes is very well placed
to be an agent orchestration layer.

And I think all of our cloud native Kubernetes knowledge and containerization knowledge is not obsolete.

It's gonna be very well placed to be
the harness and the orchestration for

all of these agents as they're running,
doing all of these tasks, both on the

developer side and on the operations
side, and also your business agents.

We haven't even talked about that.

But I think another thing that we saw is non-technical users, who aren't devs, starting to use these tools to build business applications.

Right?

They don't know anything about CI/CD.

They don't know anything
about infrastructure.

They don't know anything about code
and they're gonna start building apps.

And if you thought Shadow IT was a big problem back when cloud was new, oh boy. Shadow IT with gen AI is a whole brave new world that we're entering into.

Bret: which really just goes to the fact
that we're gonna need more automation.

There's a great graphic from our friends at Geocodio.

This is the dev loop, or I guess it's really the inner plus outer loop. You saw this one at the retreat. This is by Michele Hansen of Geocodio.

And it's a very simple diagram.

in the first one, it's
traditional projects before AI.

And it's basically a timeline.

And one fifth of this timeline is
scoping before development, and then

three fifths of the timeline is coding.

And then one fifth at the end is QA.

And what they're describing is, with AI, maybe you can get it done in 60% of the time.

So you're saving 40% of your time overall, but the coding is highly compressed to less than one fifth of the time.

This is just an approximation. These aren't exact

Nirmal Mehta: Right,

Bret: metrics or studies or anything like that, but the scoping is doubled and the QA is almost doubled. So the two sides grow: the planning at the beginning, which we now have plan modes in our agents for,

Nirmal Mehta: right.

Bret: We now have these concepts of plan agents and build agents.

Nirmal Mehta: And specs

Bret: helping you create the spec.

Yeah.

Nirmal Mehta: yeah.

Bret: The plan with all the tasking. And then at the end, you're gonna need a longer QA phase.

QA to me means I'm gonna
need more automation.

I'm gonna need CodeQL, I'm gonna need way more AI and automation running against every commit, because I'm gonna suddenly get a bunch of projects thrown at me that weren't as well vetted by humans as they were in the past.

Now if AI is writing lines of code, that means that humans may have at best read the code, but they certainly didn't write every line, so they didn't rewrite it.

They didn't rethink
about every single line.

They didn't toil over every
line, which maybe in the long

run is actually a good thing.

But what that really means for me as a CI person is that I'm probably gonna be more responsible.

In the last month, I've already had two different teams tell me they're having to push back as DevOps engineers on the developers, because the developers are shipping more vibe-coded product, and their testing isn't catching the problems that the AIs are creating.

And so these apps are getting
shipped to production.

They fail in production, and then the
developers go, it works on my machine.

and this is the age old problem,
but it's happening again.

Because we're shipping more
lines of code in a commit.

We're shipping more lines in a pull
request, and not as many eyeballs

have studied every single line.

The leading-edge teams in this field of AI right now, the ones that are pushing the limit, aren't looking at the code anymore. They're shipping production code that's written by AI and then reviewed by AI.

So, and you can have three
different AIs review it, right?

you can have

Nirmal Mehta: Yeah.

Bret: My friend Aaron Francis created a new tool called Counselors, which does that exact thing.

You give it a problem. It has three or more AIs it will go and ask, and then it will summarize what they agreed on and what they disagreed on for you.

It's literally called Counselors. And we don't have that specifically for DevOps yet, but I can see it coming.
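The council pattern itself is simple to sketch: ask several models the same question, then split their answers into agreed and disputed. The `ask_model_*` functions below are canned stand-ins for real provider API calls:

```python
# Toy sketch of a "council" review: ask several models the same question,
# then report where they agree and where they disagree.
# The ask_model_* functions are canned stand-ins for real provider APIs.

def ask_model_a(question: str) -> str:
    return "Pin your action versions."  # canned answer for the sketch

def ask_model_b(question: str) -> str:
    return "Pin your action versions."

def ask_model_c(question: str) -> str:
    return "Use a reusable workflow."

def council(question: str, advisors) -> dict:
    """Collect one answer per advisor and split into agreed vs. disputed."""
    answers = [ask(question) for ask in advisors]
    counts: dict[str, int] = {}
    for answer in answers:
        counts[answer] = counts.get(answer, 0) + 1
    return {
        "agreed": [a for a, n in counts.items() if n > 1],
        "disputed": [a for a, n in counts.items() if n == 1],
    }

verdict = council("How should I harden this workflow?",
                  [ask_model_a, ask_model_b, ask_model_c])
print(verdict["agreed"])    # answers more than one model gave
print(verdict["disputed"])  # answers only one model gave
```

A real version would call each provider's API and probably use yet another model to write the summary, but the splitting logic is the same idea.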

Where we're going to have a council review the PR, because maybe this month Opus is a little bit more loosey-goosey, like Opus likes the vibe coding this month in the latest version, and maybe Gemini is more picky by default, right?

This reminds me of the nineties and two thousands with antivirus software. Like, I ran an antivirus team for an enterprise of 7,000 mailboxes, and we never could trust just McAfee or just one scanner.

So we had the scanner of scanners: a single product that had like eight or nine engines. It had Kaspersky, it had all these different scanners, and an email had to get through all of them before it would get to the inbox.

Nirmal Mehta: I think so.

Yep.

Bret: We had very opinionated setups, and the same is true of the CVE scanners today. Some teams scan with two different CVE scanners. Some teams have their favorite.

But anyway, that's the image I'm describing to the audio audience. Do you have any thoughts about it?

Nirmal Mehta: So

Bret: tell you

Nirmal Mehta: Yeah, I think I saw this quote: we ship faster but break more.

So that's like the era that
we're in right now, right?

Like we're shipping faster,
but everything's breaking

more than it used to.

Bret: yeah.

And that leans on the DevOps engineer,

Nirmal Mehta: Correct?

Yes.

Bret: automation.

That's, and there's so many talks
over the last year that I could quote,

Nirmal Mehta: And.

Bret: Hans Dockter at the Gradle Summit, I think the CEO of Gradle, he talked about this.

He's basically saying a wave of problems is coming to DevOps and SRE and Ops from all of this, and we'd better prepare, because if it hasn't come

Nirmal Mehta: Yes,

Bret: to your team, it's coming soon.

Nirmal Mehta: And the other thing here, just highlighting that diagram: scope and QA. Scope is documentation, it's context.

And, we haven't had this conversation
yet, but I think in the new architecture

there's going to be a context layer.

There's gonna be agent orchestration, but there's a context layer there that we as operators are gonna be responsible for creating, running, and maintaining. Because when no human's looking at the code, the code is still important, because it is the application, but its importance goes down a little bit compared to the specs, the requirements, the context, your business documentation, and your Jira tickets.

And like the context and the intent of
your developers or your business users

is more important than the line of code.

Bret: Mm-hmm.

Nirmal Mehta: And I think another theme that came out of our conversations in the last month is that having systems that maintain and version control that context is going to be part of our architectures moving forward as well.

Bret: Yeah.

Nirmal Mehta: And, what that looks like.

Not sure.

I think it's gonna be an amalgamation of a lot of things, but this also means that with skills and these operational agents, these AIOps agents on the SRE side, that's still true there.

It's the context.

You know, it's your runbooks, it's your SOPs, it's your troubleshooting steps, your incident response, your product documentation.

If you're sitting there listening to this and you've been fighting your management about getting story points in the sprint to update your docs, and never getting enough time, this is your hour, this is your time.

Because the organizations that will be successful with these tools are the ones that have automation in place, or are building toward it, that have good documentation practices and good operational practices.

And the organizations that have been deferring that for another time, that have not paid down that technical debt, are gonna be behind. Just straight up.

You're gonna be behind; you're not gonna be able to leverage these tools, because that's the main interaction with these tools now.

It's not like programming a line of code. It's giving them context and intent.

Bret: you know, when we talk about these
things like the skills and the agent

files and the rules and the commands, I
was talking on the stream today about the

difference between skills and commands.

And I think commands are probably mostly dead. I think skills are gonna win. People are really liking that model, and it's working well for them.

But we're essentially just recreating the software lifecycle. We had documentation in wikis and doc systems, and we had standard operating procedures in another system, where they were very meticulous and had lots of screenshots. This is how the humans get things done.

These are our standards.

And we would have, you know, mermaid
graphics or various other tools, you

know, Visio or whatever you might have.

For diagramming workflows or diagramming
system designs or network designs.

Like these are all things
we would do in planning.

And then after we'd ship something, especially when you hire new people, you'd realize, oh man, there's a gap in our knowledge for them.

Nirmal Mehta: Yeah.

Bret: We'd realize we missed this process. Let's go create that SOP.

these are all things that the
individual or small team developers

are figuring out how to do for the AI.

'Cause it turns out, if the AI has all of that, he can perform, or she, or it, I guess it's an it. It will now look at all that stuff.

And the simplest way for us to do that
today is to just put that in the repo.

The problem for us DevOps people is that we don't. In very few teams that I've worked with is all of their DevOps knowledge in the repo. I mean

Nirmal Mehta: Right.

It's all over.

Bret: a lot of TOML, yeah

Nirmal Mehta: Yeah.

Bret: but they have documentation systems.

They have Slack, they
have tribal knowledge.

They have all these
different various places.

They have some stuff that's basically
in the dashboards, like you imply

knowledge through a dashboard
layout and what things are important

and which metrics we care about

Nirmal Mehta: Right.

Bret: These are all over the place. But the reason it's all in the AI-enabled code repos today is not because I think that's where it's all gonna be in five years. It's because that's the quickest way we've figured out to dump text into the AI context so that the AI is way smarter.

And we're eventually gonna figure
out whether it's MCP that connects

all these things together.

Like we might very quickly end up moving past markdown in directories to, hey, if you just point your MCP to Confluence, then you can just point it to the different, you know, URLs of all the DevOps documentation, and you don't need the agent file,

Nirmal Mehta: Runtime.

Bret: Yeah.

we're gonna figure all that out.

Those are the standards and the practices and the conventions I think we're all gonna figure out.

But right now, since we're on the bleeding edge, it just happens to all be markdown with YAML front matter in a repo.

So go check that stuff out.

The last thing I wanna talk about before we wrap up, 'cause I know we're hitting our limit: this is the story I tell people, this is my vision now. Like I didn't really have this vision nailed down a year ago.

I have to go back and listen to our first episode, 'cause I feel like everything we were predicting has come to pass. Like it's coming faster than I thought.

Nirmal Mehta: Yes,

Bret: it

Nirmal Mehta: that's for sure.

Bret: before we had reliable
models for DevOps work.

this is not about replacing our jobs.

Like everyone's concerned about it.

I'm concerned about it, especially as a content creator, because course sales are down. I wouldn't say no one's buying, but people are buying courses less, and they certainly don't need courses that teach them the manual steps anymore, which is why my whole course vibe has completely shifted to advisory and best practices and architecture-type stuff.

Not, hey, this is how you need to learn every single command, 'cause you're probably not gonna be typing the commands yourself much longer.

I started 30 years ago, and this slide I'm showing on screen, for the audio listeners, is about the major shifts in IT that I've seen as an operator and a DevOps engineer, about once a decade, although these are increasing now to about two a decade, that have been happening my entire career and are gonna continue to happen.

And it just so happens that managing agents, which are a bunch of LLMs, is just the next phase of it.

And what this graphic is supposed to represent, at the top: in the nineties, I got to be a part of the mainframe-to-PC wave, where I was literally shutting down mainframes or minicomputers or Unix servers and shifting stuff

Nirmal Mehta: X.

Bret: To PC. Yeah, shifting to PCs that didn't even have mice yet. Like they were literally DOS with WordPerfect.

Right.

And then we eventually got mice
and we got windows, and then

we got Windows 95 and then.

Like, we started to do distributed computing, and that enabled me to go from managing one large mainframe, which I went to two years of vocational school for, on how to manage Honeywell and HP and Solaris and these kinds of things, right?

but I was only able to manage
a couple of these things.

We had like four dudes to manage five mainframes and minis, and that was it, right?

So it was like a one-to-one.

And then in the early two thousands, we invented the technology of VMs, virtual machines.

And I can remember those days very distinctly, because at my enterprise, I was working at a large city, half a million people, managing IT for a city of half a million people.

And I remember the meeting where we were just trying to describe how VMs work

and none of the system engineers
thought it was a good idea.

Everyone thought it was gonna wreck shop and crash things, and it would be horrible.

And we were having to give analogies, like: the blade that goes inside the mainframe is actually a separate computer that you can shard things off to, and this is kind of like that, but in software.

And everyone's like, no,
it's a horrible idea.

We're running a kernel inside a kernel.

Well anyway, VMs are the standard.

They have been for decades, and they enabled us to easily manage 10 or even a hundred machines ourselves.

Then you fast-forward to the cloud, and now system engineers and DevOps people can manage hundreds of servers themselves. That's the dawn

Nirmal Mehta: say thousands at this point,

Bret: thousands.

Well, we had to create
new tools for that, right?

When we did

Nirmal Mehta: right?

Bret: that in 2010, we didn't have

Nirmal Mehta: Yes.

Bret: tools.

Nirmal Mehta: Right.

Bret: then Ansible shows up,
then Terraform shows up, and

now we're able to do thousands.

Then containers show up and
we stop talking about servers.

And now one engineer can manage
thousands, if not tens

Nirmal Mehta: of thousands,

Bret: of workloads.

Nirmal Mehta: correct.

Bret: That happens in the 2010s, and, you know, Kubernetes takes us all the way up to 2020.

And then we get offshoots from that, like Wasm, serverless, Lambda, stuff like that.

But that is not replacing engineers.

None of those things in
my job replaced anyone.

I literally took the same two
guys that were managing hardware

installations in the data center.

They were so worried about
their jobs in 2005, and I gave

them jobs managing the VMs.

We had to retrain them, retool
'em, but now they had VMs to manage

instead of physical hardware.

And the same thing happened.

We went to the cloud.

We had to take everybody that was managing SSH sessions to a bunch of Linux servers in a closet, and teach them cloud tooling.

And now we have better cloud
tooling so we can manage even more.

We had to teach people Docker so they
could manage thousands of workloads

instead of thousands of servers.

And I think that with AI today, if we keep reiterating this over and over, I need to change this slide, because now we're just gonna be managing a series of agents, directing agents all day long.

I'm already doing this. I'm like, I need you to create this GitHub Actions workflow. Please use my skill that outlines my favorite things that I always want in my GitHub Actions workflows.

Please help me meet these goals
and ask me any questions along

the way that you need answered.

Turns out the new Opus models do that really well. It comes back and asks you questions with a nice little menu system where you get to choose what it recommends versus what you think, or whether I wanna opt in to a hand-crafted answer to the question.

It interviews me just like a junior engineer would when they go, well, you didn't teach me how you expect the security to be in this YAML file.

And I would realize, oh yes,
let me teach you young Padawan.

That's the same thing I'm
doing to AI right now.

And while I'm doing that, it is going off and spinning up a new workload that is building a new workflow for me.

And then I hop over to a new terminal where I'm SSH'd into a server. Some people are going crazy and actually installing these harnesses on running servers to troubleshoot the server.

I wouldn't go that far, but maybe I could use my local one to SSH into a server, and it could send SSH commands one at a time.
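That one-command-at-a-time idea can be sketched as a simple allowlist in front of the agent. Everything here is illustrative, including the prefix list, and a real version would need far more care than naive prefix matching:

```python
# Sketch: gate an agent's proposed remote commands through a read-only
# allowlist before anything is sent over SSH. Prefix matching like this
# is naive, so treat it as illustration only.

READ_ONLY_PREFIXES = ("uptime", "df ", "free", "journalctl ", "cat /proc/")

def is_read_only(command: str) -> bool:
    """True only if the command starts with a known read-only prefix."""
    return command.strip().startswith(READ_ONLY_PREFIXES)

def run_remote(host: str, command: str) -> str:
    """Refuse anything that isn't read-only before it touches the server."""
    if not is_read_only(command):
        raise PermissionError(f"refusing non-read-only command: {command!r}")
    # A real version would shell out one command at a time, e.g.:
    #   subprocess.run(["ssh", host, command], capture_output=True, text=True)
    return f"[would run on {host}] {command}"
```

The agent only ever proposes the next command; the gate, not the model, decides whether it runs.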

I feel like there's probably gonna be a day where we have a tiny little agent harness that's like a single binary that doesn't touch anything. It has no write access.

And we can maybe throw that
on a server temporarily.

Maybe it runs in a container
and then we pull it off when

we're done and no one ever knew.

It's essentially there to help us troubleshoot.

I mean, you've probably heard of the netshoot Docker container that our friend Nicola has created

Nirmal Mehta: Yes,

Bret: now.

Well, that's gonna get replaced
with an AI that has all those tools

built in and just goes and does the

Nirmal Mehta: correct.

Bret: troubleshooting for us.

Right?

Nirmal Mehta: Correct.

Bret: All this really means is I'm managing more infrastructure, more automation, because there's more code being shipped.

More and more people are making code now, because now we're hearing about product managers. They're able to code again, because they're able to actually make commits that they wouldn't otherwise have time to make, because they can do it in a fraction of the time.

this is just gonna keep happening.

So I'm gonna just help
everybody feel comfortable

that your job isn't going away.

It's just changing, maybe faster than it would in a normal 10-year arc. Maybe this is gonna be like a five-year arc

Nirmal Mehta: Yeah.

I mean, we're already two years in.

Bret: using AI.

Yeah,

Nirmal Mehta: Yeah.

Bret: I

Nirmal Mehta: I think you're right.

This is like, this is a five year arc.

Bret: Yeah,

Nirmal Mehta: so that was well put.

I think on an earlier episode, I
reiterated, you know, this won't

replace you, but someone who
knows how to use these tools will

Bret: Yeah.

Nirmal Mehta: if you need some impetus
when you're sitting there trying to figure

out what question to answer for yourself,
with these tools, think about how would

I use these tools to do some small aspect
of my current job day to day, but also

think about how I would use these tools to operate an environment with, instead of a hundred containers or a thousand containers, maybe 10,000 or a hundred thousand application containers.

Bret: Yeah.

Nirmal Mehta: How would I
use these agents to do that?

And maybe that's a good question for exploring these tools: getting to the answer, I would use these tools in this way to operate systems at that scale. Because I think you're right.

We've seen this trend from mainframe to containers and serverless and cloud: one person's scope of responsibility has just expanded, but the surface area of applications has also expanded.

And so there is a need, and
it won't eliminate your job.

It's just the person that knows how to
use these tools will be doing your job.

Bret: Right, right.

Nirmal Mehta: So, we're almost at the end of our context window for this episode. Hopefully we've convinced you that you can still maintain that skepticism about the output of LLMs and these tools, but:

Please do yourself a favor and start
using 'em and start getting used

to them and start understanding
how to use these tools effectively.

'cause it's just like any other skill.

You know, you have to learn, you have to exercise that muscle, and you have to understand what it can and can't do.

With that, I mean, we got halfway through our list today, so tune in again for another episode. We've got so much more

Bret: So what you

Nirmal Mehta: context to explore.

Bret: coming, want me to

Nirmal Mehta: Yeah, let's,

Bret: that are coming up.

Nirmal Mehta: yeah, let's do it.

Bret: We're gonna have Solomon Hykes at Dagger back on the call.

We're gonna have the co-founders of Mineral, which is a GitHub Actions AI troubleshooter that I've been using for a month now. It just stares at my GitHub all day long on its own, running 24/7, looking for ways to help.

It doesn't do code commits
in the classic way.

that's a different thing.

It's not trying to like build my apps.

What it's doing is looking for failed GitHub workflows, or CVEs that Dependabot has suggested I need to fix through a PR, and it's reminding me that those aren't addressed, or it's recommending strategies for improving the speed of my GitHub workflows.

It's highlighting whether the CVEs actually need to get fixed in a particular Dependabot or Renovate PR.

There is so much stuff that it's doing. I look at it as the janitor, the custodian of my repos, and it solves a very specific problem set that I'm excited to have them on to talk about, because I'm

Nirmal Mehta: Sounds awesome.

Bret: I feel like I'm currently a free customer, but it's providing value to me as an individual.

I can only imagine how helpful
it's gonna be for a team.

It's kinda like an automated Jira. It makes the tickets and the fix for the ticket, and then if you just say approve, it fixes it with a PR or whatever it needs to do to fix that thing.

Nirmal Mehta: cool.

Bret: And so we've got SRE ones coming up. I'm gonna try to get someone from Anyshift on, which is an SRE type of tool that's similar to Mineral.

We're gonna try to get someone from the leadership of GitHub on soon, so we'll actually have someone talking about GitHub Actions who actually runs the teams that are working on it.

I'm working to get someone from GitHub Next on, because GitHub Next is where a lot of this innovation is happening, in the world of what they're calling continuous AI or agentic workflows, which is all this stuff they're doing with Copilot inside of GitHub.

and then that's probably only
half the list I have that

we're currently working on.

So we've got more episodes coming.

I'm super excited to finally get into some real data around how I implement a workflow that's not gonna hallucinate, or be at risk of prompt injection, or leak credentials through MCP.

I'm super ready to get into the nerdy
details, and that's what I think

this year is gonna be all about.

Nirmal Mehta: Awesome.

And with that, please let your
colleagues, your friends, your family,

your neighbor, you know, or your mom,

Bret: Agentic DevOps

Nirmal Mehta: to subscribe.

and like the Agentic DevOps podcast. You can check us out at agenticdevops.fm if that's not the way you found us.

Please share,

Bret: favorite

Nirmal Mehta: We'd love to hear from you. If there's a specific topic that you would like us to cover or dive deep into, please let us know.

We've got a long list and, can't
wait to dive into that in this

season of the Agentic DevOps podcast.

Bret: yeah.

Find us both on LinkedIn.

he's Nirmal Mehta.

I'm Bret Fisher.

We're on LinkedIn.

We're on Blue Sky.

I'm still on Twitter.

You can find us on our website, agenticdevops.fm.

All the links are in the show
notes for what we talked about

today, and we will see you on the
next episode of Agentic DevOps.

Ciao.

Bret AI July 2025: Thanks for joining
us, and I'll see you in the next episode.


Creators and Guests

Bret Fisher (Host)
Cloud native DevOps Dude. Course creator, YouTuber, Podcaster. Docker Captain and CNCF Ambassador. People person who spends too much time in front of a computer.

Nirmal Mehta (Host)
Principal Specialist Solutions Architect at Amazon Web Services (AWS)

Beth Fisher (Producer)
Producer of the DevOps and Docker Talk and Agentic DevOps podcasts. Assistant producer on Bret Fisher Live show on YouTube. Business and proposal writer by trade.

Cristi Cotovan (Editor)
Video editor and educational content producer. Descript, Camtasia and Riverside coach.