DevOps Performance Trends for 2024 with Scott Moore

By Test Guild

About this DevOps Toolchain Episode:

In this episode, we are joined by Scott Moore, a seasoned IT professional with over 30 years of experience, to delve into the latest trends and concepts in DevOps and performance engineering for 2024.

During their conversation, Scott Moore shares insightful perspectives on a wide array of topics, ranging from performance testing and security to networking, unbreakable Internet, mobile connectivity, and the role of AI in software engineering. As an influencer and expert in multiple domains, Scott emphasizes the need for a comprehensive discussion on software engineering and the evolving landscape of DevOps. The episode covers the growing focus on performance engineering within the DevOps framework, the integration of performance testing for reliability and efficiency, and the changing role of performance engineers in cloud-native environments. Scott also sheds light on the misconception surrounding testers' roles and underscores the importance of engineers possessing extensive knowledge and expertise in their respective fields.

Tune in as Joe and Scott explore the significant shifts in DevOps practices, the emergence of DevPerfOps, and the changing dynamics of performance engineering in today's rapidly evolving technological landscape. So, get ready for an engaging and enlightening discussion on the cutting-edge trends shaping the world of DevOps and performance engineering. Listen up!

To up-skill and future-proof your career, join us at the 8th annual online must-attend testing event – Automation Guild!

We cover DevOps topics like OpenTelemetry, CI/CD, load testing, observability, security, and more!

Use discount code tube30 at checkout to save 30% off:

https://links.testguild.com/agtube

About Scott Moore


With over 30 years of IT experience across various platforms and technologies, Scott is an active writer, speaker, influencer, and the host of multiple online video series, including “The Performance Tour”, “DevOps Driving”, “The Security Champions”, and the SMC Journal podcast. He helps clients address complex issues concerning software engineering, performance, digital experience, observability, DevSecOps, and AIOps.

Connect with Scott Moore

Rate and Review TestGuild DevOps Toolchain Podcast

Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page. Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.

[00:00:01] Get ready to discover some of the most actionable DevOps techniques and tooling, including performance and reliability, from some of the world's smartest engineers. Hey, I'm Joe Colantonio, host of the DevOps Toolchain Podcast, and my goal is to help you create DevOps toolchain awesomeness.

[00:00:19] Joe Colantonio Hey, it's Joe, and welcome to another episode of the Test Guild DevOps Toolchain. And today, you're in for a special treat because we have the one and only Scott Moore joining us to talk all about DevOps and performance trends for 2024, as well as what the heck DevPerfOps is. It's been everywhere, and I'm really excited to get his view on this. If you don't know, Scott has over 30 years of IT experience with various platforms and technologies. He is one of the main thought leaders I go to for anything in performance testing and DevOps. He's an active writer, speaker, influencer, and host of multiple online video series that we'll have links for in the show notes. These include The Performance Tour, which you definitely need to check out. He actually goes around doing education events and has a really cool setup where he interviews people on the road. There's also DevOps Driving, which I think is new, The Security Champions, which I've just heard of, and the SMC Journal podcast, which is a must-listen. He helps clients address complex issues and concerns around software engineering, performance, digital experience, observability, DevSecOps, AIOps, and now DevPerfOps. You don't want to miss this episode. Check it out.

[00:01:26] Joe Colantonio Hey Scott, welcome back to The Guild.

[00:01:31] Scott Moore Wow, that was a mouthful. It's good to be on the show with you, Joe. I'm just trying to keep up with you, my friend. I don't know if that's possible.

[00:01:38] Joe Colantonio Yeah, I do a lot, but you've been really adding stuff to your plate, so I don't even know. What's your main focus right now?

[00:01:44] Scott Moore Well, I'm kind of all over the place, and it's because of a couple of things. The Performance Tour will continue. It's going to keep on going, and we've got so much material out there. But other companies have come to me and said, we'd like for you to talk about more than just performance, because there's a wide world out there in software engineering, and we want to be part of the conversation, too. Why they want me to talk about that, I don't know, but I'm willing to do it. We're expanding into topics like DevOps and security, and I'm even doing stuff with networking, unbreakable internet, and mobile connectivity. And AI, of course. You've got to talk about that to be in any conversation. I'm just talking about everything. I'm interviewing people, but I'm also providing my, I guess, 30 years of subject matter expertise as well.

[00:02:27] Joe Colantonio Yeah. So I guess that's one of the trends. This show used to be called the Performance Podcast, and it just felt like it was very limited. I felt like it should be part of the bigger conversation around DevOps. And actually, I guess we'll go into trends first, and then I want to get into the DevPerfOps thing, because I ran a survey that asked, do you think performance testing belongs in DevOps? When I did it, the majority of people said no, and I thought that was crazy. I think it does belong as part of DevOps. So I guess the first trend we're seeing is that people are becoming more concerned with performance, but not just the really black-box type of performance testing we did back in our day. It's more of a conversation where the whole team is now involved, and it's not just the specialty of one person on one small team.

[00:03:11] Scott Moore Yeah, when you say performance, you're casting this huge net out there. Is it the performance of the hardware? That's a whole specialty in and of itself. Is it the software? Is it the end-user experience? All of those things can be a specialized skill set, and it takes a long time to really build up all of the knowledge you would need to get that breadth. I think we've lost a little bit of context around what performance engineering is, what performance testing is, and how that relates to microservices and new systems versus traditional or legacy systems. And now we're hearing more about reliability and efficiency. I think that's where we're going: reliability and efficiency, with performance as part of that. But maybe we've forgotten about that. So that's what I'm trying to bring to the table.

[00:04:01] Joe Colantonio Why focus now on reliability? Is that because of cloud-native? Back in the day, we could control what the software was, what the services were, what the hardware was. Now that you have less control in production, do we need to focus on production-type performance?

[00:04:18] Scott Moore I think that's part of it. I mean, we had so much lift and shift without optimizing during the shift that there was this "what is going on?" moment, for lack of a better term. Once we got it out there in production, we started seeing those cloud bills, and all of a sudden a whole movement comes out called FinOps. And all that is short for is managing your cloud costs, the financial aspect of it. But if there had been efficiency and optimization with the lift and shift, FinOps probably wouldn't have come out as strong as it did, because you would have had that savings from the get-go. And I think that's where performance engineering can play such a big role. It's partly cloud-native, and it's also the skill sets that come with knowing how cloud systems work versus on-prem, and now multi-cloud and hybrid cloud as well. There's a ton of stuff going on, and we've got to keep up with it.
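For a rough sense of the math Scott is pointing at, here is a back-of-the-envelope sketch. Every number in it is a hypothetical illustration, not real cloud pricing:

```python
# Hypothetical lift-and-shift vs. right-sized cloud cost comparison.
# All figures are made-up illustrations of the FinOps point, not real rates.
instances = 40
hours_per_month = 730

lifted_hourly = 0.68          # hypothetical: oversized instance type, always on
optimized_hourly = 0.17       # hypothetical: right-sized instance type
optimized_utilization = 0.6   # hypothetical: scaled down during off-hours

lifted = instances * lifted_hourly * hours_per_month
optimized = instances * optimized_hourly * hours_per_month * optimized_utilization

print(f"Lift-and-shift: ${lifted:,.0f}/mo")
print(f"Optimized:      ${optimized:,.0f}/mo")
print(f"Left on table:  ${lifted - optimized:,.0f}/mo")
```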

[00:05:18] Joe Colantonio Absolutely. So, Scott, when I think about performance engineers, you and James Pulley come to mind. And I love performance testing. I started off as a performance tester. Is that kind of role dead then, or where do you see it going? Because I wasn't really surprised you changed over. But is it morphing?

[00:05:35] Scott Moore That I changed over to what?

[00:05:37] Joe Colantonio Changed over more to, like, a DevOps context rather than.

[00:05:42] Scott Moore Well, yeah, I mean, you have to. You adapt or you die, right? And you have to look at what's happening. But whatever you call it, the same rules apply. Physics hasn't changed. There are resources that get depleted, and when they do, problems happen. When things aren't efficient, they cost more. When things aren't tested, there's more risk involved. Those things are going to happen whether you call that person a performance engineer, systems engineer, software engineer, or whatever label you give it. I think what has really bothered me, and I'll try to speak a little bit for James because I know him so well, is this cultural mentality that when you go from testing to being a developer, you've made a step up: that the developer is the goal, and that tester is an entry-level role you hold until you become a developer. We come from the school that a developer is merely an incubation point for defects. You're a defect creator who's creating all the work that we need to do to fix your stuff. And so we feel that an engineer in quality, testing, performance, or security needs to know as much or more than a developer knows about even their own code, so that we can say, here's where you missed it, here's what you need. But that has not been the case. There was a cultural shift several years ago that says, if I don't know how to code, I'll test, I'll poke around, and I'll aspire to be a developer once I learn real skill sets. And I think that's a tragedy.

[00:07:15] Joe Colantonio Absolutely. And a few thoughts came to mind here. The first one is AI. The other is that this has been a battle for 20, 30 years: why do you think companies undervalue testers, in the sense that they see them as a cost center rather than a value add? You just mentioned all these bugs that can occur across the whole tech stack, where testers are ideal to find them and actually add value, so that when things go to production, the company doesn't lose uptime or even brand reputation.

[00:07:41] Scott Moore Well, remember when the Agile Manifesto came out and we started moving to do more with less? When did that happen? Around 2007-ish. What was happening with the economy around 2007, 2008? It was a reaction to that. And what we found was there were testers who were not providing value. They cost a certain amount as a unit in a company, but they weren't providing the value, I'm talking about in the eyes of the executives. The sound of money to an executive is code in production not being held up by the tester. If the tester is not seen as providing value, they will always go for the lowest common denominator. They may still have to do it for regulatory reasons, we have to check this box, so it became a checkbox, but without actually doing the real checking that needed to be done, the real wringing of the defects out of that system. So if you're not providing value at $150 an hour, they're going to find someone for $50 an hour. And if they get the same thing, that's the point. And I think that is also a tragedy. I can't take the full blame, obviously, but I want to take partial blame for this, because I feel like it's people like me and James, the people who could be teaching classes or affecting the education system, who should bring this in at an earlier stage, so people graduate from college knowing the performance engineering and systems engineering principles that used to be part of IT. I just don't think we did a good job in the performance community of passing that torch.

[00:09:13] Joe Colantonio Absolutely. As part of that education, you speak to a lot of people, I assume a lot of executives. You're on the road more than anyone I know. Do you see this coming up in conversation? Are they aware now, or are you making them more aware? Are you surprised that they're maybe more aware than you thought?

[00:09:29] Scott Moore I think the awareness is there now because when they're on the cloud, we can see how much poor performance costs. And when money is involved, all of a sudden ears perk up and they're interested, right? So the conversation always has to be: do you know how much money you're leaving on the table because you're not addressing this? And from money, it goes to risk. I can guarantee you those are two things that will get the attention of the CFO, the CEO, and the CTO. That's what's keeping them up at night. If you can address a problem that's keeping them up at night, you become valuable to that company.

[00:10:03] Joe Colantonio That's interesting. A lot of times, I do think education is part of a tester's job. Do you think they should have this conversation with their sprint teams? Hey, I'll be looking at performance, and do you know this costs X amount of money if we don't do this and this? Do you recommend that as a way to get performance baked into the application from the beginning?

[00:10:21] Scott Moore Well, there should be a conversation happening about it. Any time a functional question is asked, there should be a performance question and a security question asked. Those things should be in every conversation, and it should be ingrained. So how do we get it ingrained? We've got to start the conversation. And sometimes that conversation needs to be a large poke in the eye to get people interested. That poke in the eye was: look at the bill. Look at the cloud bill. Why is this happening? What are we actually paying for? That's where it starts. And it also takes people like me who start up crazy ideas and get people ticked off, because I just look at what's going on in the market. Somebody created some idea that we're all supposed to do, maybe it's Google, maybe it's Facebook, maybe it's Netflix, and they're great engineers. For their business model, whatever they came up with works for them, and it's great. But it may not necessarily work for your company. Not everybody has to be Google for something to work for their model, and it may work better if they don't do what Google does. I think people forget that.

[00:11:23] Joe Colantonio Oh my gosh, it's like a copycat syndrome. You're an insurance company or a regulated company trying to do what Google does, and you can't. It doesn't even make sense. Yeah, I love that. And that's something you've been seeing as well, people trying to do something that may not necessarily fit the context of what they're doing.

[00:11:39] Scott Moore All the time. A perfect example: I was at a Perf Tour live event, and they're like, well, we do the Spotify model really well. Ask them to explain the Spotify model, and it turns out they just copied whatever Spotify did. And what did Spotify do? They looked at Scrum and traditional agile and said, this doesn't work for us, let's change it to what will work for us. And that became their model. Now people want to adopt a model that was a customization of something else, but instead of customizing it the way Spotify did, they just clone it. That's not the solution. People are lazy and they're looking for low-hanging fruit. It's not always that easy.

[00:12:12] Joe Colantonio How did that conversation go? Did you tell them they may want to rethink that?

[00:12:18] Scott Moore Yeah. It didn't go too well, Joe. I don't know if I'm allowed back in that town, actually.

[00:12:24] Joe Colantonio Awesome. What are some other things you've been seeing? Like I said, you speak to a lot of people in this space on the road. What are some of the trends people need to be aware of, especially in 2024?

[00:12:34] Scott Moore Well, in my area, the thing that keeps popping up is obviously AI. What is this? Is it going to hurt me? How can I learn this? How is it going to affect my job? Those types of things. Where does it fit in, and where do I see it helping? You can't talk about 2023 without saying AI. Obviously, it had a huge impact, but we really haven't seen what it can do. We're in that Gartner hype cycle model right now, and it's in full hype. I think it's going faster than we ever thought it would, and maybe this year we'll be able to see what it can really do, because we see new innovation almost every day. But I think we're going to see it assist in automation. I've already seen that in some of the products. Tricentis has the Testim products, and they have TTM for Jira. You can take notes from a meeting, put them in as bullet points, and you've got test cases built for you, and requirements built for you. It does all of this without very much input at all. I mean, you may have to tweak it, obviously. I think we're going to start seeing the same thing in performance testing tools and load testing tools, where AI provides guidance on test results and analysis. I think it will provide better modeling and forecasting and observability, helping you tell: here's how your business is actually doing, and here's what it might look like three months from now. I think we're going to get closer to that in 2024.
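As a rough illustration of the notes-to-test-cases idea Scott describes, here is a minimal sketch assuming the OpenAI Python client. The model name and prompt are placeholders, not how any specific product such as TTM for Jira actually works:

```python
# Minimal sketch: turning meeting notes into draft test cases with an LLM.
# Assumes the OpenAI Python client (pip install openai) and an OPENAI_API_KEY
# in the environment; the model name is illustrative only.
from openai import OpenAI

client = OpenAI()

notes = """
- Checkout must support guest users
- Payment retries up to 3 times on gateway timeout
- Order confirmation email sent within 60 seconds
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system",
         "content": "Convert meeting notes into numbered, testable test cases "
                    "with preconditions, steps, and expected results."},
        {"role": "user", "content": notes},
    ],
)
print(response.choices[0].message.content)  # draft test cases to review/tweak
```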

[00:14:00] Joe Colantonio Absolutely. And you mentioned a good thing. AI has been a big buzz in functional testing, but I always thought it would fit better with performance-type things, especially the machine learning part: reading logs, doing correlations for you automatically, being able to read a bunch of data and bubble up insights. Does it seem like performance tooling is lagging behind even front-end automation tooling, or is it just-

[00:14:21] Scott Moore Right now I think it is. And here's my theory, this is Scott Moore's theory: it's because many of the load testing tools never focused on analysis first. They were concerned with generating load. Anybody can generate load. I can generate HTTP traffic on a website all day long. But unless you have really good in-depth analysis where you can correlate and cross-reference the data, what's the value? And even if you can do that, the value of those results has a very small time window, because as soon as you've made changes to the system, you've got to do it again. That's where I think AI can really benefit: the analysis part, the forecasting, and noticing anomalies. That's its big strong point. And that's what we need, noticing anomalies and relationships we never thought went together. We thought the reason this transaction was slow was that we were CPU-bound. But AI found that the reason you're CPU-bound is some other thing upstream or downstream that wasn't necessarily showing up on any radar or traditional graphs. It found that relationship. That's where I think we're going to see the improvement.
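A minimal sketch of the analysis style Scott is describing, using pandas and scikit-learn. The metric names and CSV export are hypothetical stand-ins for whatever your monitoring stack produces:

```python
# Cross-correlating metrics to surface non-obvious relationships, plus simple
# anomaly detection. Column names like response_time_ms are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

df = pd.read_csv("metrics.csv")  # hypothetical export: one row per interval

# 1. Which signals move together with transaction response time? An upstream
#    queue depth may turn out to correlate more strongly than CPU does.
correlations = df.corr(numeric_only=True)["response_time_ms"].sort_values()
print(correlations)

# 2. Flag anomalous intervals across all metrics at once.
model = IsolationForest(contamination=0.02, random_state=42)
df["anomaly"] = model.fit_predict(df.select_dtypes("number"))
print(df[df["anomaly"] == -1])  # -1 marks outlier intervals worth a look
```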

[00:15:27] Joe Colantonio Absolutely. I don't want to bring up JMeter and things like that, but there are a lot of these open-source tools. I think we come from an enterprise background, where we had enterprise performance tools with analysis and all the pieces built in, so it is a struggle. With AI then, do you see that, because a lot of this tooling is not open source, the adoption is smaller? Or do you see people actually seeing it as a value add, where they say, let's invest some money into AI tooling to help with this?

[00:15:52] Scott Moore I think they will eventually have to invest money in something that will give them that good analysis back. JMeter alone would never be, in my opinion, considered a true performance engineering tool. It's a load thrower, a way to generate load. But as everybody knows, you have to combine it with other solutions, kind of as a bundled solution, to get value out of it. And the value comes when you start adding things like Grafana and other visualization products. That's where the key is. When you get that and you can apply AI to it, now you've got something. But it is lagging behind, back again to what I said: these tools have not focused on analysis first. And that's what should have happened.
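For example, a thin analysis layer over raw JMeter output might look like this sketch, assuming results were saved in JMeter's default CSV (.jtl) format:

```python
import pandas as pd

# Load a JMeter results file saved as CSV; timeStamp, elapsed, label, and
# success are among JMeter's default output columns.
results = pd.read_csv("results.jtl")

# Per-transaction latency summary: the analysis layer the load thrower lacks.
summary = results.groupby("label")["elapsed"].describe(
    percentiles=[0.90, 0.95, 0.99]
)
print(summary[["count", "mean", "90%", "95%", "99%"]])

# Error rate from the per-sample success flag (true/false in the CSV).
success = results["success"].astype(str).str.lower().eq("true")
print(f"Overall error rate: {1 - success.mean():.2%}")
```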

[00:16:37] Joe Colantonio Absolutely. You also mentioned something really interesting: there are many kinds of performance out there. One thing I see a lot, and I think it's a good trend, I'm just curious to get your thoughts on it, is functional automation tools being able to bubble up insights on the performance of a transaction over time. I've seen some companies implement telemetry and tracing in the background of a functional test, so even though you're doing a front-end test, it can tell you: hey, you did this activity, but in a service or database layer down below, the milliseconds are increasing, or something like that. I think that's an awesome win-win. Do you see leveraging functional tests to get performance-type information as a trend going forward?
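What Joe describes might look something like this minimal sketch with the OpenTelemetry Python SDK. The helper function and span names are hypothetical, and the console exporter stands in for a real tracing backend:

```python
# Wrap a functional test step in an OpenTelemetry span so timing shows up
# alongside the functional result. Requires opentelemetry-sdk.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("functional-tests")

def run_search_in_browser():
    """Hypothetical helper that drives the UI with your automation tool."""

def test_search_flow():
    # The span records wall-clock duration for this step; service and database
    # layers instrumented with OpenTelemetry would appear as child spans in
    # the same trace, exposing the slow layer below the front end.
    with tracer.start_as_current_span("ui.search_flow") as span:
        span.set_attribute("test.case", "search returns results")
        run_search_in_browser()

test_search_flow()
```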

[00:17:19] Scott Moore I do. I think it is awesome, I think we need much more of it, and I think we need to emphasize it. Every chance I get to talk to a functional test product maker, I mention it. I'm like, this is a great automation tool you've got. Do you have a way to put a little stopwatch timing around stuff in here, whether it's from an end-user perspective, or some way to monitor something? And a lot of times they look at you and say, well, we're not a performance testing tool. I understand that. But do you realize that just by adding that one little feature, you might save your performance testing group a week's worth of work? Wouldn't it be great to know that it's already not passing for that one single virtual user running through that automated script? I think that would be great, and I think every functional automation tool needs to have some kind of timer in it.
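A stopwatch around a functional step, as Scott suggests, can be as simple as this sketch using Playwright for Python. The URL, selector, and 3-second budget are all hypothetical:

```python
# One-virtual-user performance check inside a functional test.
# Requires: pip install playwright && playwright install chromium
import time
from playwright.sync_api import sync_playwright

BUDGET_SECONDS = 3.0  # hypothetical single-user performance budget

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    start = time.perf_counter()
    page.goto("https://example.com/checkout")  # hypothetical page under test
    page.click("text=Place order")             # hypothetical step
    elapsed = time.perf_counter() - start

    browser.close()

assert elapsed <= BUDGET_SECONDS, (
    f"Checkout took {elapsed:.2f}s for one user: already failing "
    f"before any load test ran."
)
```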

[00:18:11] Joe Colantonio Absolutely. I'm trying to come up with a concept. You know we have the testing pyramid; how about a performance pyramid? Like you said, there are many kinds of performance. Is there a way where, say, the base is minimum performance baked in, then a middle layer and a top layer? Or does that concept not really scale to a performance mindset?

[00:18:31] Scott Moore I don't know if it's a pyramid. I think it's more like a circle, like a pie. It kind of encompasses everything, and you just have to find out what all the slices of that pie are and where they overlap. That's something I hope, with some friends of mine, we can start helping to define, and maybe come up with some agreeable standards on what it all means very soon.

[00:18:52] Joe Colantonio Awesome. So what are some other trends then? Observability obviously has been big. OpenTelemetry? What are you seeing in that regard?

[00:19:01] Scott Moore Well, I watched this all last year, and I reported on it: the observability toolsets were saying, oh, we've got AI. Basically, what they did was implement a chatbot that talked to ChatGPT the same way you could yourself, just throwing an API call out there and giving you some type of answer with the same amount of accuracy and context that ChatGPT 3.5 had. Of course, it's getting better over time, but we're still without context. I think what's going to happen in observability, and Dynatrace was the first one I heard mention it, is moving beyond just having generative AI built in to look at stuff, or to create custom graphs for you from text. You don't have to learn some SQL query; you can just say, computer, create a graph of blah blah blah. And that's great, that's almost Star Trek. But I think the real step is what the model has been trained on, and that's called causal AI. I don't know if you've heard of that, but Dynatrace is saying, let's train this AI on the right documentation, the right information, without all the extraneous stuff, so that when we ask a question about a topic, it knows the topic and it knows the context. I think causal AI is the next step for observability.

[00:20:13] Joe Colantonio Right? Causal AI. Nice. Now, you mentioned developers earlier and how you need to know coding. With AI, I guess you need to understand coding, but I don't think it's going to be like before. You need to be aware of coding but not necessarily be an awesome coder, because you can kind of use AI to assist with that. Do you see that as a viable future, or do you think that's all bias?

[00:20:33] Scott Moore I really have mixed feelings about it, Joe, because I've asked it so many different questions and tried to get it to do certain things, and there are some things it has done a fairly good job of. But there are other things, where I actually know all the context behind it, and some of the stuff I'm getting is very, very scary. And I'm thinking, when you don't have that context and you blindly do stuff, especially with code, that's the problem. It's back to trust but verify, right? I think you have to be that way. If you don't have the ability to verify, you may be taking a risk, and I'm just not comfortable with that right now. But I am a born skeptic. I'm not a full Luddite, but I go begrudgingly into the void.

[00:21:17] Joe Colantonio Absolutely. Once again, back to performance testing: I had shift left on my list a few years ago as a trend. Do you see companies actually shifting performance testing left? And if so, what kind of performance testing are they shifting left, if at all?

[00:21:32] Scott Moore Yes, but not enough. What we have found, and I've actually had this conversation many times with companies that are doing the testing and companies that make the testing tools, is that roughly 4 to 5% of companies have successfully shifted left with the majority of their performance testing. I mean, they really get it, and they're seeing good results from it. Those who are doing it are seeing phenomenal results, very advanced stuff, and it's great. But it's 4%. That's the problem. It's easier said than done, and you have to step into it very slowly. There are certain things that just aren't ready for prime time to shift left yet. That's something I tried this year for the Performance Advisory Council. One of my presentations asked: can we actually use this new thing, well, it's not really new, but it's new to some of us, the extended Berkeley Packet Filter, eBPF, that we keep hearing about? You can get all this monitoring information, whether it's performance or security or whatever, at the Linux kernel level with a lot less overhead. It can grab anything, create flame graphs, create all kinds of good stuff, and it's great for Kubernetes. But from my experience actually implementing it line by line, it's not really ready for prime time. We will get there, but that's another part of the problem: trying to shift something like that left. You need to shift it left, because eBPF can get so detailed, it can trace every little tiny thing, and you want that as far left as you can get it. But to get it working, to get it all running, you kind of have to hold your mouth right, go through all the little tiny steps and hoops, and you might get something. We need to make it easier for people to do that, in my opinion.
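To make the eBPF idea concrete, here is a minimal sketch using the BCC Python bindings. It needs root privileges and kernel headers, which hints at the hoops Scott mentions; the probe simply logs process executions, where real use would feed latency or I/O data into flame graphs:

```python
# Minimal eBPF probe via BCC (pip install bcc; run as root on Linux).
# A tiny kernel-level hook with very little overhead.
from bcc import BPF

program = r"""
int trace_exec(void *ctx) {
    bpf_trace_printk("exec observed\n");
    return 0;
}
"""

b = BPF(text=program)
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_exec")
print("Tracing execve calls... Ctrl-C to stop.")
b.trace_print()  # streams kernel-side trace output to the terminal
```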

[00:23:11] Joe Colantonio Absolutely. So do you think performance testing, from a stress test and load test perspective, is getting easier? Back in the day, I used to test in a staging environment, and then when we released to production, we never saw the same results because the environment was so different. Do you see containers as a way to enable consistency in performance environments? So if you get a result in your staging environment, it's basically a duplicate of production and you get similar results.

[00:23:36] Scott Moore Yes, I do see that. And I love that part of containerization, because in 85% of the companies I ever consulted with, the environment itself was the biggest problem they had, with a lot of unknowns out there. So being able to just clone an environment, and being able to create a test environment only when you need to use it, so you're only paying for it when you need it, this capability of creating dynamic infrastructure is a game changer to me. That's amazing, and I would like to see more of it.
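The "environment that exists only while you need it" pattern might look like this sketch, using the testcontainers library for Python with a disposable Postgres as a stand-in for a cloned dependency:

```python
# Disposable, clone-able test environment: the container spins up for the
# test and is torn down (and stops costing anything) when the block exits.
# Requires: pip install testcontainers sqlalchemy psycopg2-binary, plus Docker.
import sqlalchemy
from testcontainers.postgres import PostgresContainer

with PostgresContainer("postgres:16") as postgres:
    engine = sqlalchemy.create_engine(postgres.get_connection_url())
    with engine.connect() as conn:
        version = conn.execute(sqlalchemy.text("SELECT version()")).scalar()
        print(version)  # same image, same config, every single run
# Container is removed here: nothing left running, nothing left billing.
```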

[00:24:11] Joe Colantonio If someone does go with containerization, are there tools now, or do you see tools coming out, that can help with self-healing, to make things more resilient so they automatically adjust? Hey, this was a bad code deployment, let's roll it back, or let's restart this live container, or take care of predictive autoscaling automatically. Or is that still a pipe dream?

[00:24:30] Scott Moore I think we're starting to see some of that. I only know of a couple of companies, and they're still sort of in startup mode. They haven't garnered a lot of attention yet, but I think that's the kind of stuff that's in the works and that we'll start seeing this year. I don't think it'll take until next year. We'll start seeing all of these rules that trigger things to happen, that will self-heal. We really haven't seen a lot of good self-healing stuff yet, but I think that's what's coming next.
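A naive version of the self-healing rules Scott expects to mature might look like this sketch, using the Docker SDK for Python. The container name and health URL are hypothetical, and real platforms (Kubernetes liveness probes, for instance) express this declaratively rather than as a polling loop:

```python
# Naive self-healing watchdog: poll a health endpoint, restart on failure.
# Requires: pip install docker requests, plus a running Docker daemon.
import time
import docker
import requests

client = docker.from_env()
CONTAINER = "shop-api"                        # hypothetical container name
HEALTH_URL = "http://localhost:8080/health"   # hypothetical health endpoint

while True:
    try:
        ok = requests.get(HEALTH_URL, timeout=2).status_code == 200
    except requests.RequestException:
        ok = False
    if not ok:
        print("Health check failed; restarting container")
        client.containers.get(CONTAINER).restart()
    time.sleep(10)  # rule fires at most every 10 seconds
```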

[00:24:56] Joe Colantonio Nice. All right, I guess it's time to dive into DevPerfOps. I've been seeing this concept everywhere, and I never know with you, because you're, not a trickster exactly, but you have a fun sense of humor, and I never know if you're being serious or sarcastic. So what is DevPerfOps, and is it a real thing?

[00:25:15] Scott Moore Okay, well, let's get one thing straight. I'm always being sarcastic. Even when I'm being serious, I am always being sarcastic. You're not supposed to ever know if I'm being serious. And when I started, I wasn't serious. It was actually an exercise in making an extreme statement and seeing if I could tick everybody off with it. That's what it was. But as I ran with the idea, I realized: if you take something that sounds like a decent idea and run to each extreme with it, you can usually start shooting holes in it. When you take it to the extreme and you still can't shoot a lot of holes in it, you might actually have something. So I was at the Performance Advisory Council about a month ago with Mr. James Pulley and four or five other noted performance engineers, and I just got ticked off, because I was looking some things up while working on The Security Champions, talking about DevSecOps. I had just finished an interview with somebody whose whole platform is DevSecOps. Their whole website is about DevSecOps. And I thought, hold on a second. I thought everything was just DevOps. I thought everything was either going to be in development or operations, we're all in one big kumbaya room working together, and testing gets encapsulated in that. It's assumed to be part of it. But in reality, we all know what happened. It's devs and it's ops, and it's however your company defines that, and if it doesn't really have room for test, well, that's just what it is. That's, again, Scott Moore's opinion. So I said, hold on a second. If DevSecOps is really a thing, how did they get their own piece of that? How did they get to insert themselves into this title? I went out there and looked: who really supports DevSecOps? Guess what? Look on Amazon, one of the largest companies in the world: what is DevSecOps? They explain what it is. Go to any observability vendor, go to Dynatrace, go to Datadog, look at their websites and look up what DevSecOps is. They've got a page out there capturing all the SEO to explain DevSecOps to you, which means they endorse it. So you know what I did? I went to generative AI and said, look at what DevSecOps is, and change everything to be performance engineering instead of security, with the same importance and the same emphasis. Copy, paste, register the domain devperfops.org because it was available, create the website. Three hours later it was up, and I said, I'm the person who's going to say so. Somebody somewhere came up with the term DevOps and said, it shall be. Why can't I be that guy? So I said, DevPerfOps is now a thing. I put it out there, and guess what? We already have like 20 companies who have endorsed it and said, you know what, that's actually a good idea, because it's at least as legitimate as DevSecOps. If DevSecOps can be a legitimate subcategory of DevOps with an emphasis on security, so can performance. It is now a real thing. I am in the process of working with legal, and we are creating a 501(c)(6), which is the kind of organization chambers of commerce use. We're basing it on the model of FinOps. Both the CNCF, the Cloud Native Computing Foundation, and the FinOps Foundation are run by the Linux Foundation, and the FinOps Foundation, before it was adopted by the Linux Foundation, was actually a 501(c)(6). I looked at what they had and I thought, this is brilliant. This is exactly what we need on the performance side. I'm modeling it right after those same organizations, doing the same thing, but instead of security, we're focusing on performance. That's it.

[00:28:54] Joe Colantonio I'm just surprised I'm not seeing a lot of pushback from people saying DevOps should include security and performance, that there shouldn't be these distinctions. There's TestOps, there's AIOps, there are all these ops, ops, ops. It's got to stop somewhere. So is it because people don't understand what DevOps is that we're getting all these things? Is that what it is?

[00:29:16] Scott Moore Exactly. Somebody came up with something, and then somebody else adopts it for their own purposes and says, well, this is what I think DevOps means. Words mean something, but they mean something different to certain people, and whoever's in control with the biggest company and a lot of money, I guess they can do that. So I wanted to find out: why did security come up with DevSecOps? You will find that they said, in all these pure DevOps shops, we're still getting hacked. We're still having security issues. So you guys aren't including it, at least not enough. The security community got together and said, we're going to make sure you do take care of it, because we can't afford to get hacked, because then we're out of business. So they were able to legitimately create that element of it. Now, I would agree that security, not being owned by hackers, is probably more important than having a fast website. But if your website is not hackable, I should say, but only one or two people can use it, you're still in a lot of trouble, right? Is it really functional if it can't do that? So we get back to the importance layer. But if you have a culture of performance like Google does, like Walmart does, like Netflix did or does, why can't we have something with that same emphasis? You weren't handling performance, so we're going to make sure you do handle performance. So really, should it be a separate thing? No, I completely agree it shouldn't. But neither should security have their own. When DevSecOps finally says, we give up, we're not going to have it, we'll call ourselves DevOps and we'll just have a Security Guild inside of DevOps, when they're ready to do that, I will give up DevPerfOps, shut everything down, and go back to the way it was. But until they do, if they get theirs, we get ours. End of story.

[00:31:06] Joe Colantonio I don't mean to keep hammering on this, but how is this different from SRE then? Because I thought SREs were like the new performance champions.

[00:31:14] Scott Moore I thought that as well, but as I investigated it more, they're like, we really don't care about that a whole lot. We are site reliability engineers. We are there to make sure the site is reliable. Yes, we want it to perform, and we'll try our best to take care of that when we get around to it. But it's a matter of being up, being reliable, and making sure we reduce the toil in the organization and the system. That's what they handle first. And I have never yet met a site reliability engineer who has the same depth of skill sets around performance as any performance engineer I've ever met. That's the problem. Neither will an SDET. An SDET is great at automation, mostly concerned with functionality. They don't have the performance engineering capabilities and skills. And that's what we're missing. Again, we're back to foundational skills that should be taught in school, in college, in an IT degree, but they're not. We have to change that. And if I have to piss everybody off by calling it DevPerfOps until they pay attention and do something about it, then shut me down. Shut down DevPerfOps by making this thing right. Make DevOps right, and we'll go away.

[00:32:21] Joe Colantonio Completely agree. That's why every year at Automation Guild, the last two days include performance testing topics and observability topics, topics I think all testers should know. I think testers should own it. I definitely agree with you there. Okay, Scott, before we go, what's the best way to find or contact you, and what are you most excited about or working on in 2024 that you're jacked up about? Besides DevPerfOps, obviously, which is very cool.

[00:32:44] Scott Moore As you can tell, I'm passionate about this. I'm as excited as a mongoose in a snake pit about this right now. But that's just one of the things I'm doing. I've got a lot going on. I'm easy to find on LinkedIn. Help@ScottMoore.Consulting is a good email address for me, but you can find me on my YouTube channel and all the other sites: The Performance Tour, the securitychampions.com, DevOpsdriving.com. There are all these shows. I'm excited because I'm working with Tricentis in two aspects now. We're continuing The Performance Tour, so you'll be seeing new episodes of that. You'll also be seeing some parodies. I'm working on new parodies right now, and they're going to outdo anything I've done before. Just be patient with me. I'm also working with Tricentis on their DevOps line of business, which covers DevOps, testing, and DevOps cultures, and is part of the Testim acquisition. So I'm working with them on the DevOps Driving show, where we'll be discussing everything around DevOps, not just the testing aspect of it, though that'll be a big part of it. I'm excited about that show. And of course, we continue to do the SMC Journal and The Security Champions. I was approached last year about doing something around security. I was sent to Black Hat and DEF CON, which was a huge learning experience, a totally different crowd from the testing crowd, and Black Hat is a totally different crowd from DEF CON. That could be a whole interview on its own. The company said, we want you to do a show around security, and so that's what we're doing: interviewing thought leaders in that area and talking about all aspects of security. And finally, I'm working with a company called West Networks. They work with a company called Peplink to create these mobile cellular routers and switches. They do all kinds of infrastructure, but they will take mobile units where there has been no internet connectivity and make it almost unbreakable, with multiple cellular providers plus Starlink, 24 Starlinks on a cruise ship, bonded together as one big internet connection. I think 36 things have to go wrong before you won't get any internet. I love working with these guys, and I use their stuff when I'm out on the road to keep my internet going as well. So we're going to be doing a show with them this year called Mission Installable, and it'll be sort of like something you might see on HGTV, but instead of a house renovation, it's going to be installing unbreakable internet in some weird place and doing it in record time.

[00:35:08] And for links to everything of value we covered in this DevOps Toolchain show, head on over to TestGuild.com/p135. And while you're there, make sure to click on the SmartBear link and learn all about SmartBear's awesome solutions to give you the visibility you need to deliver great software. That's SmartBear.com. That's it for this episode of the DevOps Toolchain show. I'm Joe. My mission is to help you succeed in creating end-to-end full-stack DevOps toolchain awesomeness. As always, test everything and keep the good. Cheers!

[00:36:01] Hey, thanks again for listening. If you're not already part of our awesome community of 27,000 of the smartest testers, DevOps, and automation professionals in the world, we'd love to have you join the fam at TestGuild.com. And if you're in the DevOps, automation, or software testing space, or you're a test tool provider and want to offer real-world value that can improve the skills or solve a problem for the Guild community, I'd love to hear from you. Head on over to testguild.info and let's make it happen.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
Bas Dijkstra Testguild Automation Feature

Expert Take on Playwright, and API Testing with Bas Dijkstra

Posted on 04/14/2024

About This Episode: In today's episode, we are excited to feature the incredible ...

Brittany Greenfield TestGuild DevOps Toolchain

AI-Powered Security Orchestration in DevOps with Brittany Greenfield

Posted on 04/10/2024

About this DevOps Toolchain Episode: In today's episode, AI-Powered Security Orchestration in DevOps, ...

A podcast banner featuring a host for the "testguild devops news show" discussing weekly topics on devops, automation, performance, security, and testing.

First AI software tester, Will You Be Replaced and more TGNS116

Posted on 04/08/2024

About This Episode: Will you be replaced by AI soon? How do you ...