Episode 073: Kubernetes on GCP in the Real-World | CloudSkills.fm

Mike Pfeiffer on April 22, 2020

In this episode I catch up with Miles Matthias who is a Solutions Architect at Stripe and founder of CTOLunches.com. Miles and his team have designed and implemented Kubernetes for some of the largest customers running on Google Cloud Platform (GCP).

Don’t forget to subscribe to our mailing list at cloudskills.io/subscribe for weekly updates, exclusive training, and advice on how to amplify your career.

Full Transcript:

Mike Pfeiffer:
What’s up everybody? It’s Mike Pfeiffer. Welcome back to another episode of the CloudSkills.fm podcast. Super excited to have you here as usual. Today on the episode, we’re going to be talking about some awesome stuff. I’ve got Miles Matthias on the show. What’s up Miles?

Miles Matthias:
Hey Mike, how’s it going?

Mike Pfeiffer:
Good to have you on man. So for everybody that’s listening, maybe we can get into your backstory and what you’re working on today.

Miles Matthias:
Yeah, definitely. So, I started my career, I guess in college, doing a bunch of security stuff, and I have some fun background stories of hacking and doing some fun stuff there.

Miles Matthias:
But I spent several years after that building all sorts of custom web and mobile applications for companies of all sizes. And then I started my own company as a cofounder and CTO and built that for several years, raised VC money, did all that whole kind of thing.

Miles Matthias:
And then I’ve spent the last couple of years consulting some of Google’s largest new GCP customers. So that’s been what I’ve been up to lately. Really helping people understand cloud infrastructure and especially from developer points of view.

Miles Matthias:
And now I’ve collaborated on a fun project with a few other cloud consultants who all have backgrounds as CTOs and hands-on developers, to take the lessons we’ve learned helping these really, really big companies do some awesome cloud native things, and help small to medium sized companies take advantage of all those things we’ve learned.

Mike Pfeiffer:
Yeah, that’s awesome. Because the perspective I bet you have is really unique. Most people aren’t really that far into cloud native, right, if you look at it in the grand scale of things. And I would love to hear about that experience, especially the GCP stuff and working with their biggest customers. How did that play out? And maybe you could walk us through the basics of what you were working on with that.

Miles Matthias:
Yeah, my partners and I have helped a whole bunch of companies with containers. I can think of an example off the top of my head of a very large company that operated five data centers all over the world, physically on their own. And then they signed a deal, a five year, hundred million dollar deal with Google to say, “We’re going to just sell all the physical servers, close down the data centers and we’re going to switch entirely to GCP.” Right.

Miles Matthias:
And so that was obviously a big learning curve for their team moving into the cloud. The other thing was that they were also moving into containerization and Kubernetes in that process. That’s a pretty frequent thing we see: when an organization is making the investment to do a big migration from on prem to the cloud, they’re also going to think about a few other types of migrations that they might be behind on.

Miles Matthias:
And usually containerization and Kubernetes is one of them. And so with this customer in particular, we went in to help them. They had a whole bunch of application teams, a microservices architecture, and talented developers, smart people. It wasn’t that they couldn’t figure things out for themselves. It was that there are a lot of options, there are a lot of ways to do things.

Miles Matthias:
And really, when you’re making this big transition and big move, it’s really helpful to have some people in the room that have done it before to help guide you, and to shortcut a lot of that research that you would have to do on your own for weeks or months.

Miles Matthias:
And so our job partially was to give them a really good best practice starting kit, to say, “Let’s set you up with what is going to work well and teach you how it works and why it works, so you’ll get really comfortable with it.” And then, as they understood those things a few weeks and months after that, they could make their own decisions about configuration tweaks or changes as their business and applications change.

Mike Pfeiffer:
Sounds like an interesting gig, man. So I’m curious. Were most of these companies already doing containerized apps? Or were you really taking it from the ground up in getting all this going?

Miles Matthias:
Yeah, really from the ground up. Most often they haven’t done any real containerization, but by the time we get there, like I said, engineers have opened up the docs and started trying to mess with things. And so they have a pretty decent starting point of, “Oh, I built a Docker container.”

Miles Matthias:
A lot of times we come in and can help them make optimizations that are really going to save a lot on build times or image sizes and other things like that that they may not be familiar with right away. And so, yeah, there’s a little bit of newness there, but it’s definitely not a start from scratch as if I’m coming in explaining, “This is what a Dockerfile is and this is what [inaudible 00:05:25].” Right.

Mike Pfeiffer:
Sure. Yeah, it makes sense. And so I’m assuming then this is traditional software development teams, meaning they’re building their own in-house software, building a product to take to market, stuff like that?

Miles Matthias:
Yep. Yep. And this company, as an example off the top of my head, had a microservices architecture as well. So they had 30 different app teams. Right. So two to five developers per app, and each app does this very specific thing within their system. And so we’re onboarding 30 different applications and teams, sometimes all into the same Kubernetes clusters, other times into other Kubernetes clusters, in multiple regions, in staging environments, QA environments, production environments, local dev environments. Right. So, yeah.

Mike Pfeiffer:
That’s a lot of work.

Miles Matthias:
Yeah.

Mike Pfeiffer:
But a lot of fun work, and I’m sure you learned a lot of lessons along the way. Were there any consistent patterns, like customers picking a certain type of architecture, or was it a mixed bag?

Miles Matthias:
Yeah. Trying to think about that. Not really. I mean, we’ve primarily worked with GCP customers. And one of the big pulls for people moving to GCP is their big data tools, like BigQuery, Bigtable, things like that. But also Kubernetes. Right.

Miles Matthias:
Google is the birthplace of Kubernetes, and so GKE is a very compelling managed service offering. And so Kubernetes is one of the common tools a lot of the people we work with are using. Several of them have pretty common CI/CD tools as well. Jenkins, Spinnaker, things like that.

Mike Pfeiffer:
Got it. Yeah. This is all stuff that people are trying to figure out right now. Right. And so I think it’s a timely conversation. It seems like people are now starting, a lot more than they were before, to grasp Kubernetes, at least from a foundational perspective. That’s still pretty deep, right?

Mike Pfeiffer:
But people are starting to get the, “Okay, I get containers, Kubernetes clusters make sense.” Now we’re getting into more conversations about, to your point of 30 teams, 30 different services. How do we get all the services talking?

Mike Pfeiffer:
And service mesh comes into the conversation, which is a new concept usually for people. And so maybe we could get into that a little bit and your experience of working with that in the real world, which I think a lot of people probably don’t have experience with.

Miles Matthias:
Yeah, it’s funny. We worked with a couple of customers that use Istio. One in particular was probably the largest Istio deployment in existence, besides like [Ampos 00:08:12]. Before we came to help them, they were struggling. It’s a data ingestion pipeline, and the metric that they cared about was requests per second. And they were having trouble getting above 400,000 requests per second.

Miles Matthias:
So they brought us in. Their goal was 1.5 million and we got them to 2 million, so the engagement was great. But I guess the thing is, when people start talking about service meshes, a lot of the customers we work with don’t even really need a service mesh. Kubernetes has Service objects built in that give different services a way to communicate with each other. They are a little limited, though.

Miles Matthias:
Most notably, when you have a Service that is load balancing traffic for multiple pods, it is just going to do round robin. Right. It’s not going to think about anything in terms of different versions of the application or different regions that these pods might be in.
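To make that concrete, here’s a minimal sketch of a plain Kubernetes Service; the names and ports are hypothetical examples, not from the customer work described here. It spreads traffic across whatever pods match its selector, with no notion of versions or regions:

```yaml
# Minimal Kubernetes Service sketch (names and ports are hypothetical).
apiVersion: v1
kind: Service
metadata:
  name: checkout
spec:
  selector:
    app: checkout        # every pod carrying this label receives traffic,
                         # regardless of app version or pod location
  ports:
    - port: 80           # port the Service exposes inside the cluster
      targetPort: 8080   # port the pods actually listen on
```

Anything fancier than this simple fan-out, such as weighted routing or retries, is where a service mesh comes in.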

Miles Matthias:
So a service mesh can do some more advanced load balancing features that just aren’t built into Kubernetes quite yet. And so that’s pretty cool. The other thing that I really need to tell people about the service mesh, which is important to understand, is that when you are building your application in a microservices architecture and starting to think to yourself, “Oh gee, it would be really nice to have a proxy in place,” that is when you can start really thinking about a service mesh. Because all Istio is, is an Envoy proxy that sits next to every single one of your containers.

Miles Matthias:
It’s a proxy that sits next to every single one of your containers, your pods, and routes traffic to and from that pod. And Istio is simply an API to declare the behavior of those proxies, that’s it. That proxy is an Envoy proxy, written by Lyft, and you can go use the Envoy proxy yourself. You can go set it up, you can use it, you can think about all sorts of configuration options like authentication, retry logic, all sorts of stuff like that. And Istio really is just a way to mass configure all these Envoy proxies.

Miles Matthias:
So that’s another way to think about it: when you start wanting to separate all of that networking logic, like I said, authentication, retries, routing, load balancing, away from the application, and you just want your application to be able to say, “Make a request over there,” and that’s it, and have the proxy worry about all those other things, that’s a good time to start thinking about Istio and a service mesh.
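As a hedged sketch of what “mass configuring those proxies” looks like, an Istio VirtualService can declare retry logic and a weighted split between two versions of a service. The service and subset names below are hypothetical, and the v1/v2 subsets would be defined by an accompanying DestinationRule:

```yaml
# Hypothetical Istio VirtualService: retry logic plus a 90/10 version split.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: checkout
spec:
  hosts:
    - checkout                 # the in-mesh service this applies to
  http:
    - retries:
        attempts: 3            # each sidecar retries failed requests
        perTryTimeout: 2s
      route:
        - destination:
            host: checkout
            subset: v1
          weight: 90           # 90% of traffic to the current version
        - destination:
            host: checkout
            subset: v2
          weight: 10           # 10% to the candidate version
```

Istio translates this declaration into Envoy configuration for every sidecar, so the application code never has to know about the retry or routing behavior.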

Mike Pfeiffer:
Awesome. Yeah, I think that’s a hard thing for people to grasp sometimes, because networking is usually an Achilles heel for a lot of people. And so anytime you start thinking about this inter-service communication, it can be instantly intimidating. But that was awesome, and I think that probably helps people grasp the concept pretty well.

Mike Pfeiffer:
So I guess I would ask, in the real world then, in these big microservices projects, do you see any consistent patterns, like anti-patterns maybe, or stuff you keep running into that people could avoid, anything like that? Because people talk about it, right? Is every company a candidate for microservices? And I think we can all agree that some are better suited than others, right?

Miles Matthias:
Yeah. I guess in terms of microservices, it’s really up to your organization and what it looks like. A lot of these customers that we work with are very large, very dispersed, and it’s really hard to have a single code base where everyone is working in it at the same time. It just gets organizationally straining to have everything in one thing. And I think it has more to do with your culture and your organization whether or not you should move to microservices.

Miles Matthias:
For instance, you can use Kubernetes and you can use containerization with a monolithic application. There’s absolutely nothing wrong with that. You can go ahead and do that, like spin up, for instance, a Ruby on Rails application and run it on Kubernetes. That’s totally fine. You can completely do that. And in fact a lot of people like doing that, because they understand Kubernetes, and it abstracts a lot of things like VMs and hardware away from them.
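As a sketch of how simple that can be, a monolith on Kubernetes can be one plain Deployment; the image name and labels here are hypothetical:

```yaml
# Hypothetical Deployment running a monolithic Rails app on Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails-monolith
spec:
  replicas: 3                  # three identical copies of the monolith
  selector:
    matchLabels:
      app: rails-monolith
  template:
    metadata:
      labels:
        app: rails-monolith    # must match the selector above
    spec:
      containers:
        - name: web
          image: gcr.io/my-project/rails-app:latest  # example image name
          ports:
            - containerPort: 3000   # default Rails port
```

A Service in front of those replicas gives you load balancing without any microservices rearchitecture.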

Miles Matthias:
I would think more about how your organization works, how your teams are divided and organized, what the process is for collaboration, and really think about that. And think a little bit about your system architecture too: what components are there, how often are they interacting with other components, or are they really just talking to one another? I would think about some of those types of things before just jumping onto a buzzword because it seems to be what everyone’s using.

Mike Pfeiffer:
Right. Yeah, it’s kind of like going to the user group or meetup or conference and getting that starry-eyed thing where you’re like, “Oh, let’s go try this out! Shiny object!” But yeah, that’s cool stuff. How about in terms of platforms? Obviously Google has been big for you. Have you worked with some of the others? Any major opinions there?

Miles Matthias:
Yeah, I built my company personally on AWS before I had much GCP exposure, and I’ve helped a few other companies use Azure and AWS and others. Especially as I come at it from an application developer perspective, where I’m usually running on Kubernetes and using a SQL database [inaudible 00:14:16], I don’t really care that much which cloud platform. I personally think the user experience of GCP is really nice. I like the console. I like the tools. I like the command line tools. That’s me personally. But it’s really whatever your organization is doing, and nowadays whichever one offers the most credits, I guess, too.

Mike Pfeiffer:
That’s a great point, because that’s part of the process at this point. Some organizations are going to get tons of credits based on their relationship with the provider. That’s why I always try not to get religious about vendors, because, to your point, you can do what you’re doing anywhere, and it’s your process that matters.

Miles Matthias:
You definitely can, for the most part. I do think that if you have very specific needs, like for instance if you’re really into huge scale data analytics or something like that, that’s a good reason to use GCP. And the other thing I tell people is I don’t really get too worried about getting locked into a cloud, for one of two reasons. Either (A), you’re using something like Kubernetes and a SQL database where it’s like, “All right, I can go do that anywhere”; or (B), you’re using that cloud for a specific reason, so take advantage of those reasons, right? You are saving yourself so much time by taking advantage of those platform specific features that in the long run, if you did end up having to move to something else, you’d still come out ahead in terms of effort and time and investment.

Mike Pfeiffer:
Yeah, definitely. Totally agree with that. I know that Google is a major player in the big data space, and I was just having a conversation with somebody about that earlier today. They’re like, “Yeah, we’re all in on AWS, except this thing over here, that’s where we put all the huge datasets.”

Mike Pfeiffer:
But I haven’t personally gotten pulled into too many Google projects or conversations. It’s something that I want to do later down the road. But to your point, you don’t really have to worry so much, or you shouldn’t really worry so much, about lock in anymore. Kind of switching gears from there, I’m curious about your thoughts on serverless, because the reality is that’s compelling from an application developer’s perspective, but we also have this concept of serverless containers, which is kind of confusing, right? What do you think about the serverless paradigm, and building apps that way in a pure serverless kind of solution, like with serverless functions for example?

Miles Matthias:
Yeah, well I guess in my head I split it out into different things. Cloud Functions on GCP and AWS Lambda are one thing, right? Like, I have a snippet of code, and I want that to be called on my webhook, or pub/sub type event, or something like that. Those are really great pieces of glue in different infrastructure, to stitch different systems together, to track things like analytics, right? Where you just need some kind of endpoint or event messaging system, and just say, for instance, “Post something to Slack, and email it to the group, and do this,” and you just want to have a single notification endpoint or something, right? I think those types of services are really good for very specific use cases, where you just have a small snippet of code.

Miles Matthias:
Then there are the serverless container services, like Cloud Run, Fargate, some of those other ones where it’s like, “Here’s my container, I don’t care what is running it really. I just need it to run, and when I get more traffic, scale it up. And when I get less traffic, scale it down. And I don’t really care how you do that.” Those are, I think, good for a couple of use cases. One, I think they’re good if you have logic that is reasonably longer than just a snippet to put in those functions-type services, and you want to put it into an entire container. Those are nice for that.

Miles Matthias:
I also think it’s nice for people that are running a single application. Maybe it’s just a single web app, and they’re really not doing anything else, they’re not truly doing a microservices architecture, and they just want it for that reason. I don’t think there’s anything wrong with starting that way. But among the people I work with that have experience working in Kubernetes, whether it’s because they have the need for microservices or are just comfortable with Kubernetes, I haven’t seen people with Kubernetes experience build an application and then put it on one of those services.

Miles Matthias:
Usually they will want their own Kubernetes cluster, and put it on there. It is a little more maintenance. It is a little more thinking about, “Okay, what kind of machines make up this cluster? How much CPU and memory do I need?” Things like that, but that’s also just, I guess, the tool you’re comfortable with.

Mike Pfeiffer:
Yeah, that’s something too that developers can sometimes have a challenge with, picking out the right infrastructure, right?

Mike Pfeiffer:
So in those projects where you do have to think a little bit more about infrastructure as well as building the app, is that something you were set up for success on because you’ve been in the industry so long, knowing enough about the infrastructure stuff to gain that experience? Or was it a hill you had to climb? Because I know for a lot of people in the developer space in cloud, the infrastructure pieces can be intimidating.

Miles Matthias:
Yeah, they definitely can. It was something that I was taught, I guess, several years ago, before Kubernetes really came on the scene and we were doing everything on VMs. My original web app experience is in Ruby on Rails, and Rails still does a great job, probably the most complete guide I’ve seen among a lot of frameworks, of saying, “Hey, here are the common scenarios that you’re going to have.” Even back then, just on the VM side of things, following the guides and using a tool like Capistrano, where you could just deploy to a VM, was really nice, and the guides were easy to follow. There are a lot of other pieces that you can use in cloud infrastructure, and I’m never one to really sit down and study and read all the docs before I touch anything. I’m the type of developer that likes to go in there, build a bunch of stuff, see how I can break it, then try to fix it and get experience that way. I like getting my hands on things.

Miles Matthias:
But I think one of the more important parts that I try to give everyone I work with, and that I was taught really well, is having a procedure for how you organize your code repo: thinking about how your application is built, how it’s prepared to be deployed, and then how it’s actually deployed, and treating those as different components. Those nowadays translate into things like continuous integration and continuous delivery. So thinking through that process, having a procedure, and organizing your code repo with a few basic tools like a README and a Makefile are really, really important, I think, in every app code repo that I touch. Once people get their heads around this, the specific cloud infrastructure, whether you’re running on VMs or Kubernetes or whatever, matters a little less, because you understand it’s just one part of this larger system.

Mike Pfeiffer:
I love that you brought up that it’s useful to understand the structure of a code repo, how to build an app, and how to do continuous integration and continuous delivery, because everything’s going that way. If you look at infrastructure teams, they’re starting to go down that road where all their code changes, especially in cloud, flow through some kind of source control pipeline process. And so I think anybody listening is going to get pulled into it at some point anyway.

Miles Matthias:
Yeah, absolutely. It’s something that I do immediately on any new app that I build: always have a Makefile, always have a README, and the Makefile has at a minimum a couple of targets, and those are build my app and build my container. From there I can do a bunch of things. I can hook that up to a CI system where every time I commit and push, that kicks off the CI build, which calls those make targets, and then it’s building the application and building the container. And then you can step that into a deploy, which can again just be another make command. It can be something as simple, if you’re on Kubernetes, as kubectl apply. Or it can be something as complicated as that CI system calling something like Spinnaker, where you’re using more complicated delivery tools.
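A minimal sketch of that kind of Makefile might look like this; the image name, build script, and manifest path are hypothetical examples (and note that recipe lines must be indented with tabs):

```makefile
# Hypothetical Makefile: image name and paths are examples.
IMAGE := gcr.io/my-project/my-app:latest

.PHONY: build container deploy

build:              # compile/bundle the application
	./scripts/build.sh

container: build    # package the built app into a container image
	docker build -t $(IMAGE) .

deploy:             # simplest delivery step: apply the Kubernetes manifests
	kubectl apply -f k8s/
```

The point is that a CI system can call the exact same targets (make container, make deploy) that developers run on their own machines.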

Mike Pfeiffer:
I haven’t looked at Google’s DevOps tools and services. I know in AWS I spent a lot of time working on that stuff, and they have CodePipeline, CodeDeploy, all that kind of stuff. I’ve been in Azure DevOps the last year, working with people on that. That’s been the most time I’ve spent lately doing DevOps projects. I’m curious about Google. Do they have CI/CD tools and services just like these other guys?

Miles Matthias:
For CI they have what’s called Cloud Build, and that is just a build platform where literally all you have to do is have a YAML file that says these are my steps, and it’ll go through your steps. Each step requires a base container, to essentially say these are the tools I want to use. Whether that’s the gcloud SDK that you need or the kubectl tool, whatever tools you need, that’s one part of the step. And then the other part is what command to run. The reason I mentioned the Makefile is that a Makefile is a really nice way, locally, for other developers on your team to pull down the repo and run things on their own machines. Just a really easy make build or something like that, hit enter, and it does what needs doing on your local machine.

Miles Matthias:
And then in these steps in Cloud Build, you can do the same thing, where I just say I need the basic Unix tools and I need you to run make build, and Cloud Build will do that and give you a really nice UI of here are the steps that passed, here are the ones that failed. It’s really easy to hook up to something like a GitHub repo, to say every single time I push to the staging branch, do this and notify these other systems, or whatever you want. You’re just writing a YAML file, hooking it up to your repo, and you’re done, basically, which is really nice.
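Here’s a hedged sketch of what such a cloudbuild.yaml can look like; the project, cluster, and path values are hypothetical. Each step names the base container providing the tools, plus the command to run:

```yaml
# Hypothetical cloudbuild.yaml: each step runs inside the container it names.
steps:
  - name: gcr.io/cloud-builders/docker            # step 1: build the image
    args: ["build", "-t", "gcr.io/$PROJECT_ID/my-app:$SHORT_SHA", "."]
  - name: gcr.io/cloud-builders/kubectl           # step 2: deploy it
    args: ["apply", "-f", "k8s/"]
    env:
      - "CLOUDSDK_COMPUTE_ZONE=us-central1-a"     # which cluster kubectl targets
      - "CLOUDSDK_CONTAINER_CLUSTER=staging-cluster"
images:
  - "gcr.io/$PROJECT_ID/my-app:$SHORT_SHA"        # pushed to the registry on success
```

A step could equally invoke make build inside a generic builder image, so CI runs the same targets developers use locally; a repo trigger then runs these steps on every push to a branch.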

Mike Pfeiffer:
That is sweet, man, because then you can take that YAML pipeline file… You’ve got your pipeline as code, you’ve got everything else as code. It’s really sweet. So what about releases, like deployment type stuff? Is that also in that suite of tools, or is there a separate service in Google for that?

Miles Matthias:
There isn’t any managed delivery service on GCP yet. Like I was saying earlier, it can be a step in your Cloud Build, to say deploy or whatever. If you want a longer procedure or a more complex case and want to use something like Spinnaker, Google actually has a team of developers that actively contribute to Spinnaker and build it. I don’t know if you’re really familiar with Spinnaker, but I got really, really hands on with it. I actually led a CI/CD migration for a big customer, helped contribute to Spinnaker a bunch, and spoke at Spinnaker’s conference a few months ago, so I’ve gotten really familiar with it and can answer a whole bunch of questions there. Google doesn’t necessarily have a managed Spinnaker offering, but they have some really nice open sourced GitHub scripts to help you get up and running with Spinnaker on GCP. That’s really nice, and it’s all open source. You can just start using it today. And then you can get involved in the Spinnaker community, which is all open source as well.

Mike Pfeiffer:
Yeah, I love that. Spinnaker is something that… I don’t know, we’ve got a lot of Microsoft people in this audience and we’ve got some people on AWS as well. But I think Spinnaker’s probably not on the radar of most of the CloudSkills nation. So maybe you could break it down for us.

Miles Matthias:
So Spinnaker is a continuous delivery tool, and it is the tool that allows Netflix and Google and thousands of other companies to deploy thousands of times a day. You can deploy on every single commit. You can deploy on a Friday night at 3:00 AM, it doesn’t matter. It is a tool with enough flexibility for you to encode logic about how your release process should run, which environment it should deploy to first, like, “Hey, I’m going to deploy to staging. If that goes well, then deploy to this environment. If that goes well, then deploy to this next environment.” So you can feel comfortable saying, “Look, Spinnaker, evaluate every single commit I make, and if it works, it works. Go ahead and deploy it.”

Miles Matthias:
That’s what allows these teams to have really high release cadences. A longer release schedule introduces a lot of problems, and this approach also means less manual repetition of things. It’s a system that you can program to automatically do these certain steps. So the logic of dependencies between different environments is a really big feature that a lot of people use, and that alone is a really big deal for them. The other thing Spinnaker is really the best tool for is what’s called canarying. Let’s say you have a release candidate of a new version of your application, and you have a hundred VMs running this application. A common canarying scenario would be:

Miles Matthias:
“Hey, in our production environment where we have 100 VMs running, why don’t we run this candidate on one VM, leave the 99, and let it run for a while? Let’s let it run for a couple of hours and see how it does.” At this point, because we’re already deploying to production, it’s already passed written tests and all those types of things that developers think through. But really, ultimately, you don’t know; your test coverage can only do so much, right?

Miles Matthias:
And when you get it into production, that’s when you actually see how it’s going to impact things. And so the idea is let it run. Maybe we have one VM, let it run for a few hours. Okay, it worked well. How about we have 80 VMs of our old version and 20 VMs of our new version? All right, let’s let it run for another couple of hours and see how that performs. Okay, if that does well, then let’s have all 100 of them go and be the new version, right?

Miles Matthias:
And so it gives you this really cool, real world answer to “is this a good release?” You actually test it in production, with the production environment and production behavior and production data, not just your tests passing, right? That’s a totally different way of evaluating these things. And Spinnaker is one of the best tools, and probably one of the few I’ve seen out there, that can really help you do this canarying.

Miles Matthias:
It is more of an advanced feature, because as automated as that is, the one part that does take some effort is the definition of what good behavior is, right? That’s teaching Spinnaker to say maybe it’s the number of 500s, maybe it’s this metric that we’re tracking with Prometheus going up 10%. Whatever makes sense for your application, you have to express that to Spinnaker, configure it, and then make sure that that definition stays true over time.

Mike Pfeiffer:
Yeah. That was a really great explanation. I know that there’s light bulbs coming on in a lot of people’s heads right now, because they’re like, oh, I know that pattern and now I know that that happens in Spinnaker as well. So that was awesome. And I loved your explanation of canary releases, because it’s everywhere, right? You hear about it everywhere, but the way it actually is implemented depends on what you’re working with. So that was awesome. I really loved that.

Mike Pfeiffer:
And the other thing you mentioned was Prometheus, which I imagine a lot of people listening have heard of and then wondered, what is it? Because monitoring is really important, right?

Miles Matthias:
Yeah. So Prometheus and Grafana are both big open-source cloud native monitoring projects; Prometheus is a CNCF, Cloud Native Computing Foundation, project. It’s a time series way of tracking metrics and seeing what happened in your application. It’s completely open-source, so it’s completely free to run. It’s very common in a lot of these larger enterprises. We’ll have what we call a Kubernetes cluster bootstrapper, so we’ll say, all right, we made a new cluster. By default, we want to go ahead and put Prometheus on there, and Grafana and Alertmanager and a couple of other things, and then you can start deploying your applications, right?

Miles Matthias:
And so Prometheus is really common to put in there, because it’s really nice and completely open-source. Basically, the way you can think about it is: what are some of the things that I want to track in my application that are unique to me? They can also be things that aren’t unique to you, right? Things like 500 HTTP codes, or CPU or memory usage, things like that.

Miles Matthias:
But think about it: if I have a simple e-commerce web application, I want to track revenue, right? Or I want to track the number of failed credit card payments. My application can do something, and then it can just call a single line of code to say, hey, this event happened. Or, add 50 bucks to my revenue, right? Something like that.
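That “single line of code” pattern looks roughly like this with the prometheus_client Python library; the metric names here are made-up examples, not from any real application discussed in the episode:

```python
# Sketch of custom business metrics using the prometheus_client library.
# Metric names are hypothetical examples.
from prometheus_client import CollectorRegistry, Counter, generate_latest

registry = CollectorRegistry()
revenue = Counter("shop_revenue_dollars", "Revenue collected",
                  registry=registry)
failed_payments = Counter("shop_failed_card_payments", "Failed card payments",
                          registry=registry)

# Somewhere in the application's request handling:
revenue.inc(50)        # "add 50 bucks to my revenue"
failed_payments.inc()  # "this event happened"

# Prometheus periodically scrapes this text exposition format over HTTP.
print(generate_latest(registry).decode())
```

In a real service you would expose the registry on a /metrics endpoint (for example with prometheus_client’s start_http_server) instead of printing it.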

Miles Matthias:
And that is a metric, and Prometheus is a way of storing all those metrics, of different types, and keeping track of them over history, over a timeline. And then Grafana is just a way of visually seeing that, right? Saying, I want to pull up all this data and put it on graphs. I want to make my own custom graphs.

Miles Matthias:
Grafana you can actually use as the visual component for multiple metrics store backends. So it’s not just Prometheus, you can use it with other tools too. But those two are a very common pairing, because, like I said, they’re both open-source, very common cloud native tools.

Miles Matthias:
And the nice thing about Grafana is that you can define these graphs and then write them out to code. They come as JSON blobs, right? And so you just save it to a file or check it into a code repo or something like that, share it with other people. And it’s very, very common for these enterprises to have very complex Grafana dashboards where people log in and see real-time monitoring and metrics of their applications and of their company.

Miles Matthias:
How was our revenue? How was our payments processing, our error rates? All of these other things. So it’s really common in all the enterprises that we’ve worked with, and like I said, because it’s open-source and really just free to get up and running, it’s definitely something I encourage small to medium sized teams to leverage if they can.

Mike Pfeiffer:
Yeah. Anybody that hasn’t looked at Grafana should definitely check it out, because man, the visuals are slick, right? But you unpacked a lot there, and I think it paints a good picture for people, because the power you have at your disposal, putting logic around these releases to different environments through Spinnaker by querying these metrics, which, to your point, could basically be anything you want to invent and track, like your revenue, is insanely cool stuff.

Mike Pfeiffer:
Miles, this has been an awesome conversation. Is there other places where we should send people in the show notes to check out things you’re working on, projects, anything like that?

Miles Matthias:
Yeah, you can always follow me on Twitter at Miles_Matthias. You can check out hiremiles.com. I’m a partner at containerheroes.com, which is, again, the collective of people like myself that have been CTOs and cloud consultants, trying to help small and medium sized companies learn from the things we’ve learned at really big companies.

Mike Pfeiffer:
Awesome. All right, everybody, go follow Miles. Miles Matthias, thanks so much, man. Appreciate it.

Miles Matthias:
Thanks, Mike.

Subscribe to the CloudSkills Weekly Newsletter

Get exclusive access to special trainings, updates on industry trends, and tips on how to advance your career in the tech industry.