Posts by:

Cordny

How the TestingSaaS Skill Maturity Framework helps you fill the skill gaps of your DevOps team

How the TestingSaaS Skill Maturity Framework Helps You Identify and Close Critical Skill Gaps


In modern SaaS environments, quality is no longer just about testing; it’s about how capable we are across the entire delivery lifecycle.

Yet many teams still struggle with a fundamental question:

“Where do we actually stand in terms of skill maturity?”

The TestingSaaS Skill Maturity Framework was designed to answer exactly that, and more importantly, to expose the gaps that are holding teams back.

The Challenge: You Can’t Fix What You Can’t See

Most organizations operate with limited visibility into their true capabilities.

You might hear things like:

  • “We’re doing automation”
  • “We’ve implemented CI/CD”
  • “Our testing is solid”

But when you look closer:

  • Automation is brittle and hard to scale
  • CI/CD lacks meaningful quality gates
  • Testing is reactive instead of strategic

The issue isn’t effort; it’s the lack of a structured maturity model.

What Is the TestingSaaS Skill Maturity Framework?

The TestingSaaS Skill Maturity Framework provides a practical, real-world model for evaluating skills across modern testing and quality engineering domains (and beyond).

It breaks down capability into:

  • Core skill areas (e.g., automation, exploratory testing, CI/CD, test strategy)
  • Clear maturity levels (from foundational to expert)
  • Observable behaviors that define each level

This allows teams to move from vague assumptions to objective, evidence-based evaluation.
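
To make this concrete, here is a minimal sketch (in Python) of how those three building blocks could be captured as data and turned into a gap report. The skill areas, levels, and behaviors below are illustrative placeholders, not the official framework content.

```python
# A minimal sketch of representing skill areas, maturity levels, and observed
# behaviors, then listing the biggest gaps first. All values are illustrative.
from dataclasses import dataclass

LEVELS = {1: "Tool User", 2: "Operator", 3: "Analyst",
          4: "Architect", 5: "Strategic Technologist"}

@dataclass
class SkillArea:
    name: str
    current_level: int           # evidence-based assessment
    target_level: int            # where the team wants to be
    observed_behaviors: list[str]

team = [
    SkillArea("Test automation", 3, 4, ["maintains stable suites", "reviews flaky tests weekly"]),
    SkillArea("CI/CD", 2, 4, ["manages pipelines", "no quality gates yet"]),
    SkillArea("Test strategy", 1, 3, ["testing is planned per ticket"]),
]

# Largest gaps first: this is the "objective, evidence-based evaluation" view.
for area in sorted(team, key=lambda a: a.target_level - a.current_level, reverse=True):
    gap = area.target_level - area.current_level
    print(f"{area.name}: {LEVELS[area.current_level]} -> target {LEVELS[area.target_level]} (gap {gap})")
```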

How TestingSaaS Reveals Skill Maturity Gaps

1. It Defines What “Mature” Actually Looks Like

Instead of generic titles like junior or senior, TestingSaaS describes what people actually do at each level.

For example:

  • Level 1 – Tool User: Uses tools according to documentation. Responds to incidents.
  • Level 2 – Operator: Manages pipelines and monitoring. Resolves known issues.
  • Level 3 – Analyst: Understands cause and effect. Can interpret metrics. Performs root cause analyses.
  • Level 4 – Architect: Designs systems with scale, cost, and reliability in mind.
  • Level 5 – Strategic Technologist: Thinks in terms of systems, risk, sustainability, and business impact.

This clarity creates a shared understanding of excellence.

2. It Enables Objective, Multi-Dimensional Assessment

The framework allows teams to assess maturity across multiple dimensions, not just roles.

A single team might be:

  • Strong in automation execution
  • Weak in test architecture
  • Missing strategic quality leadership

By breaking skills into components, TestingSaaS highlights specific, actionable gaps.

3. It Exposes Hidden Imbalances

One of the most valuable insights the framework provides is imbalance.

For example:

  • Heavy investment in tools, but low skill maturity
  • Strong individual contributors, but no system-level thinking
  • Advanced CI/CD pipelines, but poor test design

These imbalances are often the root cause of:

  • Slow releases
  • Production defects
  • Scaling challenges

4. It Connects Skills Directly to Outcomes

This framework doesn’t just assess skills. It links them to real business impact.

Skill Gap | Impact
Low automation maturity | High manual effort, slow feedback
Weak exploratory testing | Missed edge cases, production issues
Lack of strategy-level skills | Misaligned quality direction
Poor CI/CD integration | Delayed releases

This makes it easier to justify where to invest and why.
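
As a rough illustration, gap size and business impact can be combined into a simple investment ranking. The weights below are hypothetical placeholders; in practice they would come from your own incident, lead-time, and release data.

```python
# A hedged sketch of turning the gap-to-impact mapping into an investment ranking.
impact_weight = {              # 1 = minor annoyance, 5 = directly hurts revenue
    "automation maturity": 4,      # high manual effort, slow feedback
    "exploratory testing": 5,      # missed edge cases, production issues
    "strategy-level skills": 3,    # misaligned quality direction
    "CI/CD integration": 4,        # delayed releases
}
gap_size = {                   # target level minus assessed level
    "automation maturity": 1,
    "exploratory testing": 3,
    "strategy-level skills": 2,
    "CI/CD integration": 1,
}

# Rank skill gaps by (gap size x business impact) to decide where to invest first.
for skill in sorted(gap_size, key=lambda s: gap_size[s] * impact_weight[s], reverse=True):
    print(f"{skill}: priority score {gap_size[skill] * impact_weight[skill]}")
```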

From Insight to Action

The real strength of the TestingSaaS framework is not just diagnosis; it’s direction.

Targeted Upskilling

Teams can:

  • Focus on specific maturity gaps
  • Build structured learning paths
  • Track progress over time

Smarter Hiring

Instead of vague requirements:

“We need a senior tester”

You define:

“We need Level 3+ capability in automation architecture and CI/CD integration”

Continuous Improvement

The framework supports an ongoing cycle:

  1. Assess current maturity
  2. Identify gaps
  3. Prioritize high-impact areas
  4. Develop capabilities
  5. Reassess and evolve

This turns skill development into a repeatable system.

Final Thoughts

Skill gaps are inevitable. Hidden skill gaps are dangerous.

The TestingSaaS Skill Maturity Framework gives organizations the clarity to:

  • See where they truly stand
  • Understand what’s missing
  • Take targeted, effective action

Because in a world where speed and quality define success:

Maturity isn’t optional; it’s a competitive advantage.

worrying about IT skill growth

5 Mistakes That Block Your Learning When Using a Skill Maturity Framework


Skill maturity frameworks, like the TestingSaaS Skill Maturity Framework, are everywhere in IT, but most fail at their core purpose: helping people actually improve. Instead, they often become labeling systems that create the illusion of progress without real capability growth.

If you’ve built or are using a maturity framework like the TestingSaaS Skill Maturity Framework, here are five critical mistakes that can quietly block real learning.


The 5 critical mistakes when using a Skill Maturity Framework

1. Treating maturity like a checklist


One of the most common pitfalls is reducing maturity levels to a set of completed tasks: tools used, practices adopted, or boxes ticked. But real maturity isn’t about what you use; it’s about how you think. When people equate “I use automation” with “I’m advanced,” they skip the deeper layer: understanding trade-offs, risk, and impact. A strong framework defines levels through decision-making quality, not activity.

2. Overvaluing tools and automation


Automation is often mistaken as the ultimate sign of maturity. In reality, it’s just an amplifier. Without strong foundations in, for instance, test design, exploratory testing, and risk analysis, automation simply scales poor thinking. This is how teams end up with thousands of tests and still miss critical bugs. Maturity should prioritize thinking first; automation comes later to extend that capability.

3. Measuring activity instead of outcomes


Many frameworks track progress through metrics like number of test cases, coverage percentages, or automation counts. These are easy to measure but misleading. They say nothing about whether quality is improving. If maturity isn’t tied to outcomes like reduced escaped defects, faster feedback loops, or increased release confidence, both learning and skill development stall. What matters is impact, not output.

4. Ignoring context


A one-size-fits-all maturity model doesn’t work. The expectations for a fintech platform handling sensitive transactions are very different from those of a fast-moving startup. When frameworks ignore context, teams either over-engineer (slowing themselves down) or under-invest (increasing risk). True maturity is contextual: it adapts to risk, scale, and business needs.

5. Missing the upgrade path


Many frameworks describe levels clearly but fail to explain how to move between them. This leaves people stuck. Knowing your level is useless if you don’t know what to do next. Effective models define the transition: what to stop doing, what to start doing, and what signals indicate progress. Growth requires direction, not just classification.

The real problem: maturity as status


The biggest mistake is cultural. When maturity becomes a label, something to defend or compare, it stops being a learning tool. People optimize for looking advanced instead of becoming better.

An IT skill maturity framework should act as a thinking model, not a scoring system. Its purpose is to evolve how teams make decisions, prioritize risks, and deliver value with IT professionals at different levels of experience.

If your framework is working, you’ll see it in subtle but powerful ways: teams ask better questions, simplify their strategies, and catch meaningful issues earlier. That’s real maturity and that’s what drives lasting improvement.

Need help refining your IT Skill Maturity model? Let’s break it down together.

a picture of a strategic IT Technologist

What Do Strategic Technologists Do? Aligning Engineering & Business


Most engineering teams are very good at building systems.

They:

  • ship features.
  • improve performance.
  • maintain reliability.

But many still struggle with one critical question:

How does this create real business impact?

This is where the role of the Strategic Technologist begins.

Beyond Architecture: The Final Shift

In the TestingSaaS Skill Maturity Framework, becoming a Strategic Technologist is the final stage:

Level 5 — Strategic Impact

It’s the transition from:

  • Designing systems
  • Understanding trade-offs

To:

  • Aligning engineering decisions with business outcomes
  • Optimizing systems at an organizational level

Most engineers never fully make this shift.

Not because they lack technical skill.
But because they were never trained to think in business terms.

What Is a Strategic Technologist?

A Strategic Technologist connects two worlds:

  • Engineering systems
  • Business strategy

They don’t just ask:

“Can we build this?”

They ask:

“Should we build this, and what impact will it have?”

Core characteristics

A Strategic Technologist:

  • Thinks in business value, not just technical output
  • Understands cost, risk, and ROI
  • Uses technology to drive decisions, not just implement them
  • Aligns engineering with long-term strategy
  • Balances performance, sustainability, and scalability

The Hidden Gap in Most Teams

Most teams operate in:

  • Tool usage
  • Implementation
  • System design

But very few operate in:

  • Strategic alignment

This creates a gap:

Engineering Focus | Business Reality
Optimize latency | Improve customer retention
Reduce errors | Protect revenue streams
Scale systems | Control operational costs

Without alignment, even great engineering:

  • Doesn’t translate into business value
  • Becomes cost instead of investment
  • Loses influence at leadership level

Where This Fits in the TestingSaaS Framework

The TestingSaaS Skill Maturity Framework defines this progression:

Level 1 – Tool User
Uses tools according to documentation.
Responds to incidents.

Level 2 – Operator
Manages pipelines and monitoring.
Resolves known issues.

Level 3 – Analyst
Understands cause and effect.
Can interpret metrics.
Performs root cause analyses.

Level 4 – Architect
Designs systems with scale, cost, and reliability in mind.

Level 5 – Strategic Technologist
Thinks in terms of systems, risk, sustainability, and business impact.

This final level is where engineering becomes decision-making power.

What Alignment Actually Looks Like

Let’s make this practical.

Example 1 — Performance Engineering

Architect mindset:

  • Improve latency
  • Optimize queries

Strategic Technologist mindset:

  • Does performance impact conversion rates?
  • What is the revenue impact of a 1-second delay? (see the sketch below)
  • Where should we invest for maximum ROI?
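
A back-of-the-envelope sketch of that revenue question could look like this. All numbers are hypothetical placeholders; plug in your own traffic, conversion, and order-value figures.

```python
# Rough sketch: translate extra latency into estimated revenue at risk.
monthly_sessions = 500_000
baseline_conversion = 0.032          # 3.2% of sessions convert today (assumed)
avg_order_value = 80.0               # euros per converted session (assumed)
conversion_drop_per_second = 0.07    # assumed relative conversion drop per extra second

extra_latency_s = 1.0
lost_conversion = baseline_conversion * conversion_drop_per_second * extra_latency_s
lost_revenue = monthly_sessions * lost_conversion * avg_order_value

print(f"Estimated monthly revenue at risk from +{extra_latency_s:.0f}s latency: EUR {lost_revenue:,.0f}")
```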

Example 2 — Observability

Architect mindset:

  • Design dashboards
  • Monitor systems

Strategic Technologist mindset:

  • Which signals influence business decisions?
  • Are we measuring user experience or internal noise?
  • Can observability reduce business risk?

Example 3 — Green IT

Architect mindset:

  • Optimize infrastructure
  • Reduce compute usage

Strategic Technologist mindset:

  • How does sustainability affect brand and compliance?
  • Can Green IT reduce cost and improve positioning?
  • What KPIs matter at board level?

The Language Shift

To align engineering and business, you must change your language.

From:

  • CPU usage
  • latency
  • error rates

To:

  • cost per transaction
  • user experience impact
  • revenue risk
  • sustainability metrics

Same systems. Different conversation.

Why This Is So Hard

Because most engineers are trained to:

  • build
  • optimize
  • fix

Not to:

  • justify
  • prioritize
  • influence

And most organizations:

  • separate engineering and business
  • measure output, not impact

How to Develop Strategic Thinking

1. Understand the business model

Ask:

  • How does this company make money?
  • What are the biggest risks?
  • Where are margins under pressure?

2. Translate metrics into impact

Example:

  • Engineering metric: “Latency improved by 200ms”
  • Business impact: “Conversion increased by 3%”

3. Prioritize based on value

Not all improvements matter equally.

Focus on:

  • high-impact areas
  • measurable outcomes
  • strategic goals

4. Use observability as a business tool

Observability is not just technical insight.

It can answer questions like these (see the sketch after this list):

  • Where are users dropping off?
  • Which features create value?
  • Where is cost increasing?
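
As an illustration, here is a minimal sketch that answers the first question from a hypothetical export of funnel events. The data and step names are invented.

```python
# Minimal sketch: where do users drop off in a (hypothetical) funnel?
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 4],
    "step":    ["search", "cart", "checkout",
                "search", "cart",
                "search", "cart", "checkout",
                "search"],
})

funnel_order = ["search", "cart", "checkout"]
users_per_step = (events.drop_duplicates()
                        .groupby("step")["user_id"].nunique()
                        .reindex(funnel_order))

# Drop-off = share of users from the previous step that never reach this one.
drop_off = 1 - users_per_step / users_per_step.shift(1)
print(pd.DataFrame({"users": users_per_step, "drop_off": drop_off.round(2)}))
```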

5. Think in systems AND organizations

A Strategic Technologist understands:

  • systems architecture
  • team structure
  • business constraints

🌱 The Role of Observability & Green IT

Within TestingSaaS, two domains strongly support this shift:

Observability

  • Connects system behavior to user impact
  • Enables data-driven decisions

Green IT

  • Connects engineering to sustainability goals
  • Links cost, efficiency, and compliance

👉 Both are bridges between engineering and business.

Final Thought

The highest level of engineering is not technical mastery.

It’s strategic influence.

When you become a Strategic Technologist:

  • You don’t just build systems
  • You shape decisions
  • You drive impact

And that’s where engineering becomes a business asset, not just a cost center.

👉 Want to understand where you are on this journey?
Explore the TestingSaaS Skill Maturity Framework on testingsaas.nl.

💬 Question:
What engineering decision recently had the biggest business impact in your organization?

As part of the TestingSaaS Skill Maturity Framework: Thinking like an IT-architect

Architectural Thinking: Moving Beyond Operations


Most engineers don’t get stuck because they lack effort.
They get stuck because they stay in operations mode.

They manage pipelines.
They respond to alerts.
They fix issues.

And they get very good at it.

But at some point, operational excellence stops translating into growth.

This is where architectural thinking begins.

The Plateau Between Operator and Architect

Within the TestingSaaS Skill Maturity Framework, this is the transition from problem solver to designing architect:

Level 2/3 → Level 4

From:

  • Managing systems
  • Executing tasks
  • Solving known problems

To:

  • Designing systems
  • Anticipating trade-offs
  • Influencing long-term decisions

Most engineers plateau here.

Not because they can’t grow.
But because they are never taught how.

What Is Architectural Thinking?

Architectural thinking is the ability to move from:

“How do I fix this?”

to:

“Why does this system behave this way, and how should it be designed instead?”

It’s about seeing systems as interconnected, evolving structures, not just components.

Key characteristics

An architectural thinker:

  • Understands cause and effect across systems
  • Thinks in trade-offs (cost vs performance vs reliability)
  • Designs for failure, not just success
  • Considers long-term impact, not just quick fixes

The Operational Trap

Operations feels productive.

You:

  • Close tickets
  • Improve pipelines
  • Fix incidents

But over time:

❌ You optimize symptoms
❌ You repeat patterns
❌ You stay reactive

Without architectural thinking, you become:

A highly efficient operator in a poorly designed system

operating in chaos

The Shift: From Doing to Designing

To move forward, your mindset must shift:

Operational Thinking | Architectural Thinking
Fix the issue | Redesign the system
Follow best practices | Question assumptions
Focus on components | Focus on interactions
React to alerts | Prevent failure modes

a pro-active architect

Where This Fits in the TestingSaaS Skill Maturity Framework

In the TestingSaaS Skill Maturity Framework, this shift looks like:

Level 2/3 — Operator / Analyst

  • Manages monitoring and pipelines
  • Performs root cause analysis
  • Solves known issues

Level 4 — System Thinking (Architect)

  • Designs systems with intent
  • Understands trade-offs
  • Influences architecture decisions

Level 5 — Strategic Technologist

  • Aligns systems with business goals
  • Optimizes across teams
  • Thinks in sustainability and impact

Architectural thinking is the gateway skill.

Let’s illustrate it with some examples.

Example 1 — Performance Issue

Operator mindset:

  • Optimize query
  • Add caching
  • Scale server

Architect mindset:

  • Why is this request expensive?
  • Should this be synchronous?
  • Can we redesign the data flow? (see the sketch below)
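
To make the “should this be synchronous?” question tangible, here is a minimal sketch with simulated downstream calls. It is not a real system, just an illustration of redesigning the data flow instead of tuning individual calls.

```python
# Sketch: three independent downstream calls, sequential vs concurrent.
import asyncio
import time


async def fetch(service: str, delay: float) -> str:
    """Stand-in for a downstream call; the sleep simulates network latency."""
    await asyncio.sleep(delay)
    return f"{service}-data"


async def sequential_flow() -> list[str]:
    # Original design: the calls are awaited one after another.
    pricing = await fetch("pricing", 0.3)
    stock = await fetch("stock", 0.3)
    profile = await fetch("profile", 0.3)
    return [pricing, stock, profile]


async def redesigned_flow() -> list[str]:
    # Architectural change: the calls are independent, so run them concurrently.
    return list(await asyncio.gather(
        fetch("pricing", 0.3), fetch("stock", 0.3), fetch("profile", 0.3)))


for flow in (sequential_flow, redesigned_flow):
    start = time.perf_counter()
    asyncio.run(flow())
    print(f"{flow.__name__}: {time.perf_counter() - start:.2f}s")
```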

Example 2 — Observability

Operator mindset:

  • Add dashboards
  • Set alerts

Architect mindset:

  • What signals actually matter?
  • Are we measuring user experience or system noise?
  • How does observability support decision-making?

Example 3 — Green IT

Operator mindset:

  • Reduce CPU usage
  • Optimize images

Architect mindset:

  • Can we reduce unnecessary computation entirely?
  • What is the carbon impact of this architecture?
  • Can we redesign for efficiency at system level?

Why Most Learning Resources Fail

Most content focuses on:

  • Tools
  • Tutorials
  • Implementation

Very little focuses on:

  • System design thinking
  • Trade-offs
  • Long-term architecture

That’s why many engineers stay stuck between Levels 2 and 3.

How to Develop Architectural Thinking

1. Study systems, not tools

Instead of:

“How does this tool work?”

Ask:

“Why does this system exist?”

2. Practice trade-off thinking

Every decision has consequences:

  • Performance vs cost
  • Speed vs reliability
  • Simplicity vs flexibility

Train yourself to see them.

3. Reverse-engineer systems

Take an existing system and ask:

  • Why is it designed this way?
  • What are the bottlenecks?
  • What would I change?

4. Use observability as a thinking tool

Observability is not dashboards.

It’s a way to understand:

  • system behavior
  • user impact
  • hidden complexity

5. Think beyond code

Architecture includes:

  • infrastructure
  • data flow
  • team structure
  • business constraints

Final Thoughts

Skill growth is not about doing more.

It’s about thinking differently.

The move from Operator to Architect is not a step up in tools.

It’s a step up in perspective.

And once you make that shift:

You stop fixing systems.
You start shaping them.

👉 If you want to understand where you stand in this journey, explore the TestingSaaS Skill Maturity Framework on testingsaas.nl.

👉 And some free advice:

Follow this course to get the architect skills needed in this age of observability and AI.

Observability Strategy Pillars: Build Real Observability Capability

turning system data into quality insights

Becoming a Data-Savvy Analyst: The Next Step in Testing Maturity


Modern software teams produce enormous amounts of data.
Logs, metrics, traces, test results, performance dashboards, and customer usage signals are generated every second.

Yet in many teams, that data is barely used.

Tests are executed. Dashboards exist. Monitoring tools run. But few people translate that data into actionable insights about quality.

This is where the Data-Savvy Analyst emerges.

In the TestingSaaS Skill Maturity Framework, becoming data-savvy means moving beyond intuition and execution toward evidence-based quality decisions.

The Traditional QA Analyst

A traditional QA Analyst already thinks more strategically than an Operator.

They:

  • Perform risk-based testing
  • Analyze requirements
  • Identify coverage gaps
  • Communicate risks to stakeholders

They answer questions like:

  • What could break?
  • Where are our risky areas?
  • What should we test before release?

But their insights often rely on experience and reasoning, not always on measurable system behavior.

And that’s where the next evolution begins.

The Data-Savvy Analyst

Throughput analysis using Datadog

A Data-Savvy Analyst adds a new capability:

They use production and testing data to guide quality decisions.

Instead of asking only what might break, they ask:

  • What does the data tell us about system behavior?
  • Where do users actually experience problems?
  • Which parts of the system generate the most errors?
  • What patterns appear in logs, metrics, and traces?

This analyst connects multiple information sources:

  • Test results
  • Observability data
  • Performance metrics
  • Production incidents
  • User behavior analytics

Quality becomes measurable and observable.

Why Data Literacy Is Becoming Essential

In modern SaaS environments, systems are too complex to understand through testing alone.

Applications now include:

  • Microservices
  • APIs
  • Third-party integrations
  • Cloud infrastructure
  • Continuous deployment

Failures often appear in production conditions, not just in test environments.

This means quality engineers must learn to interpret operational signals such as:

  • Error rates
  • Latency spikes
  • Usage patterns
  • Resource consumption

Without this perspective, testing remains blind to real-world behavior.
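
For example, a minimal sketch of reading those signals from a hypothetical request-log export might look like this (the endpoints and numbers are invented):

```python
# Sketch: error rate and latency percentiles per endpoint from a request log.
import pandas as pd

requests = pd.DataFrame({
    "endpoint":   ["/login", "/login", "/search", "/search", "/search", "/checkout", "/checkout"],
    "status":     [200, 500, 200, 200, 502, 200, 500],
    "latency_ms": [120, 950, 80, 95, 1800, 300, 1200],
})

summary = requests.groupby("endpoint").agg(
    calls=("status", "size"),
    error_rate=("status", lambda s: (s >= 500).mean()),
    p95_latency_ms=("latency_ms", lambda l: l.quantile(0.95)),
)
print(summary.sort_values("error_rate", ascending=False))
```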

The Shift from Test Results to System Insights

Traditional testing focuses on pass/fail outcomes.

Data-savvy analysis focuses on behavioral patterns.

Instead of asking:

Did the test pass?

The Data-Savvy Analyst asks:

  • How often does this endpoint fail in production?
  • Which user flows generate the most latency?
  • Which features are barely used but heavily tested?
  • Where do incidents cluster in the architecture?

Testing becomes part of a broader discipline: observing system health.

Skills That Define a Data-Savvy Analyst

Developing this capability requires new skills.

Understanding Observability Data

Data-savvy analysts work with:

  • Logs
  • Metrics
  • Distributed traces
  • Performance telemetry

Tools might include observability platforms or monitoring dashboards.

But the important skill is interpreting patterns, not just reading charts.

Asking Quantitative Questions

Data literacy begins with curiosity.

Examples of useful questions:

  • Which component causes the most incidents?
  • What percentage of traffic hits this feature?
  • How does performance change after deployment?
  • What signals indicate quality degradation?

These questions turn raw data into insights.

Connecting Testing with Production Reality

The Data-Savvy Analyst connects three worlds:

  1. Development
  2. Testing
  3. Operations

Instead of seeing testing as a separate phase, they treat quality as a continuous feedback loop.

Test results influence monitoring.
Monitoring insights influence test design.

Why Many Teams Struggle with This Transition

Despite the importance of data literacy, many teams struggle to develop it.

Common reasons include:

Tool Silos

Testing tools, monitoring platforms, and analytics dashboards are often separate.

Few teams actively connect them.

Lack of Analytical Training

Testers are trained to:

  • Design tests
  • Automate checks
  • Execute scenarios

They are rarely trained to analyze operational data.

Cultural Barriers

In some organizations:

  • QA owns testing
  • DevOps owns monitoring
  • Product owns analytics

The Data-Savvy Analyst crosses all three domains.

That requires collaboration and curiosity.

Why Data-Savvy Analysts Are Increasingly Valuable

As SaaS systems scale, quality decisions must become data-driven.

Organizations need professionals who can:

  • Interpret observability signals
  • Connect incidents with architectural weaknesses
  • Prioritize testing based on real usage patterns
  • Identify hidden reliability risks

These capabilities transform QA from a verification function into a decision-support discipline.

Practical Steps to Become a Data-Savvy Analyst

If you want to develop this capability, start with small habits.

Explore Your Monitoring Tools

Open dashboards used by DevOps teams and ask:

  • What metrics are tracked?
  • What alerts exist?
  • Which services produce the most errors?

Study Production Incidents

Every incident contains valuable learning signals.

Ask:

  • What failed?
  • What signals existed before the failure?
  • Could testing have detected it earlier?

Connect Observability with Test Strategy

Use operational data to guide testing priorities.

For example:

  • Focus tests on high-traffic features
  • Investigate areas with high error rates
  • Design performance tests based on real workloads

Testing becomes evidence-based.
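
A small sketch of that idea: rank features by real traffic and production error rate to decide where test effort pays off first. The feature names and numbers are invented for illustration; in practice they come from your observability platform.

```python
# Sketch: evidence-based test prioritization from usage and error data.
usage = {"checkout": 120_000, "search": 450_000, "profile": 30_000, "export": 2_000}
prod_error_rate = {"checkout": 0.012, "search": 0.001, "profile": 0.004, "export": 0.020}

# Simple priority score: expected number of failing user interactions per period.
priority = {f: usage[f] * prod_error_rate[f] for f in usage}
for feature, score in sorted(priority.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{feature}: ~{score:.0f} failing interactions per period")
```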

The Future of Quality Engineering

The role of testing is evolving.

Operators execute tests.
Analysts reason about risk.
Data-Savvy Analysts interpret system behavior.

In modern SaaS environments, quality is no longer only about verification.

It is about understanding complex systems through data.

And the professionals who master that skill will shape the future of quality engineering.

How to become a Data-Savvy Analyst?

👉 TestingSaaS Learning Resource Hub



TestingSaaS and InnovaTeQ partner up in IT Education

TestingSaaS and InnovaTeQ combine forces to shake up Dutch IT education


🔥 HOT OFF THE PRESS 🔥

TestingSaaS and InnovaTeQ, now partners in IT Education

Over the last months I have been deeply involved in setting up an affiliate program for the Hungarian IT course provider InnovaTeQ.
Ádám Tóth, founder of InnovaTeQ, and I share a vision of providing IT courses that mix engineering, business, and the use of tooling.
In today’s market these subjects are mostly taught separately, which doesn’t give you the big picture you need as an IT professional, especially in the age of AI.

TestingSaaS and InnovaTeQ partner up in IT Education

So we started working together, as content creators and affiliate partners.

Why did I become an affiliate partner with InnovaTeQ?

Because we want to introduce the Dutch market to the unique courses InnovaTeQ provides in IT.
From observability and performance testing to agile working.

Providing good value at a good price.

A collection of InnovaTeQ courses

Here is a collection of InnovaTeQ courses:

Observability Concept Essentials

Observability Maturity Unlocked

Observability in Action – Roles & Use Cases

Time to get involved in IT education, the InnovaTeQ way!

illustrating the evolution from IT operator to IT analyst by jumping from 1 mountain to another

Why Moving in IT from Test Operator to Test Analyst Is the Hardest Step in the TestingSaaS Skill Maturity Framework


In the TestingSaaS Skill Maturity Framework, the jump from Level 2 – Operator to Level 3 – Analyst is where most testers plateau.

Not because they lack intelligence.
Not because they lack tooling skills.

But because this transition is not about learning more tools.

It’s about changing how you think.

source: https://subud.ca/overcome-obstacles/

Level 2 – Operator: Reliable Execution

At Level 2, professionals are strong executors.

They:

  • Write and maintain automated tests
  • Execute regression suites
  • Use tools like Selenium, Playwright, Postman
  • Deliver predictable output

Success is measured in:

  • Number of tests
  • Stability of regression
  • Coverage percentage
  • Passed vs failed results

The Operator works inside the system.

They make it run.

This level is valuable. Many SaaS companies depend on strong Level 2 professionals to keep releases stable.

But it is not yet strategic.

Level 3 – Analyst: Strategic Quality Thinking

At Level 3, something changes.

The Analyst asks different questions:

  • What risks are we actually mitigating?
  • What is the business impact if this fails?
  • Where are our coverage gaps?
  • Which parts of this system are fragile?
  • Should this even be automated?

Instead of executing tests, the Analyst designs quality strategy.

They connect:

  • Requirements → Architecture → Risk → Test Approach
  • Product decisions → Quality trade-offs
  • Business goals → Technical implementation

The Analyst works on the system, not just in it.

Why This Transition Is So Difficult

1. It Requires an Identity Shift

Level 2 value = “I can build and run tests.”

Level 3 value = “I can reason about risk and complexity.”

That shift feels uncomfortable.

Tool mastery gives certainty.
Risk analysis gives ambiguity.

Many professionals hesitate because they feel they are losing their strongest asset: execution speed.

2. You Must Be Comfortable Challenging Decisions

Analysts ask uncomfortable questions:

  • Why are we testing this feature?
  • What happens if we don’t?
  • Is this really high risk?
  • Are we over-automating?

That can feel confrontational, especially in delivery-driven SaaS environments.

It requires confidence and communication skills, not just technical expertise.

3. Tooling Stops Being the Center

At Level 2, tools are your identity.

At Level 3:

  • Architecture matters more than frameworks.
  • Risk matters more than coverage percentage.
  • Impact matters more than script count.

This is psychologically hard because many testers built their careers around automation expertise.

4. You Need System Thinking

Analytical maturity demands abstraction:

  • Understanding dependencies
  • Modeling data flows
  • Seeing edge cases before code exists
  • Translating business language into test strategy
  • Recognizing where failures cascade across SaaS integrations

This is cognitive growth, not procedural growth.

It takes deliberate practice.

5. Organizations Often Reward Level 2 Behavior

Many companies:

  • Say they want strategic QA
  • But measure success in test case output
  • Celebrate automation numbers
  • Prioritize speed over reflection

So professionals stay in the safe zone of execution.

And maturity stalls.

Why This Matters in SaaS Environments

In SaaS companies, especially scaling ones:

  • Releases become more frequent
  • Integrations multiply
  • Customer impact increases
  • Architectural complexity grows

Level 2 professionals keep things running.

Level 3 professionals prevent future chaos.

Without Analysts:

  • Automation becomes noise
  • Regression grows without strategy
  • Technical debt accelerates
  • Quality becomes reactive instead of proactive

This is exactly where many Salesforce partners and SaaS scale-ups struggle.

How to Move from Operator to Analyst

The shift is intentional. It does not happen automatically.

Practical steps:

  1. Start mapping risk before writing tests.
  2. Ask “What could hurt the business?” in every refinement.
  3. Study architecture diagrams.
  4. Model data flows.
  5. Participate in product discussions.
  6. Stop measuring your value in test count.

Replace:

“How do I automate this?”

With:

“Should this be automated and why?”
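
A minimal sketch of step 1, mapping risk before writing tests: score each feature on likelihood and business impact, and let the product of the two decide what gets test attention first. The features and scores below are hypothetical.

```python
# Sketch: a simple risk map (likelihood x impact) to drive test priorities.
risks = [
    # (feature, likelihood 1-5, business impact 1-5)
    ("payment provider integration", 4, 5),
    ("invoice PDF export", 3, 4),
    ("user profile page", 2, 2),
    ("marketing banner", 3, 1),
]

for feature, likelihood, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{feature}: risk score {likelihood * impact}")
```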

The Strategic Tipping Point

In the TestingSaaS Skill Maturity Framework, Level 3 is the tipping point where:

  • Quality becomes strategic
  • Testers influence decisions
  • Automation becomes intentional
  • QA starts shaping architecture discussions

It’s the difference between being a reliable executor and becoming a quality architect.

And that is why the jump feels difficult.

It requires you to grow beyond the comfort of tools into the responsibility of judgment.

If you are currently operating at Level 2, ask yourself:

Are you maintaining stability?

Or are you shaping the future risk profile of your product?

That answer defines your maturity.

a graph showing a DevOps team maturity over time, with an emphasis on the plateau phase

Why Most DevOps Teams Plateau at Intermediate Level (And How to Break Through)


Most engineering teams think they are improving.

They adopt tools.
They automate pipelines.
They attend conferences.

But skill growth quietly plateaus.

Not because of motivation.
Not because of budget.

But because there is no structured skill maturity path.

source image: https://deoshankar.medium.com/100-days-of-project-based-devops-learning-plan-a445fc9f2f9

The Skill Maturity Problem

Across DevOps, Cloud, Testing, and Performance Engineering,
most professionals operate in one of four hidden stages:

  • Tool-Focused
  • Implementation-Focused
  • System-Focused
  • Strategy-Focused

Without recognizing where you are,
it’s impossible to intentionally move forward.

Introducing the TestingSaaS Skill Maturity Framework

That’s why TestingSaaS created the TestingSaaS Skill Maturity Framework, which has 4 distinct stages an engineer has to go through to become a Strategic Technologist.

The 4 Stages of Skill Maturity

1️⃣ Tool Awareness -> The Tool User

You know the tools.
You can follow tutorials.
You execute instructions.

2️⃣ Implementation -> The Operator & Analyst

You can apply tools in real projects.
You troubleshoot issues.
You work independently.

3️⃣ System Thinking -> The Architect

You design solutions.
You understand trade-offs.
You influence architecture decisions.

4️⃣ Strategic Impact -> The Strategic Technologist

You optimize organizations.
You mentor others.
You shape long-term engineering direction.

The Hidden Constraint

The biggest bottleneck is not effort.

It’s access to structured, high-quality, practical education
that supports progression from Stage 2 to Stage 3.

Resources That Actually Support Stage 3 Growth

Over the years, I’ve seen a lot of DevOps and performance content.

A lot of it is surface-level.
A lot of it is tool marketing.

If you’re serious about deepening your expertise in:


  • Performance Engineering
  • Green IT
  • Observability

There are a few structured programs I personally consider strong.

You can explore them here:
👉 TestingSaaS Learning Resource Hub

This Resource Hub is not exhaustive, and will be expanded continuously during my learning journey.
At the moment it is focused on Observability and Green IT, which are my own development goals in 2026.

Skill growth is not about consuming more content.

It’s about moving intentionally from execution to systems thinking.

If you’re unsure where you currently stand,
start by identifying your stage.

That alone changes how you learn.

Is GreenXAI an illusion?

GreenXAI, an illusion?


Green IT, that’s one of my favorite subjects these days here and on LinkedIn.

But did you know TestingSaaS also works in the field of Explainable AI, aka XAI?
And that the two are combined into GreenXAI?
Let me explain.


Is GreenXAI an illusion?

source image: https://en.imna.ir/news/807163/AI-Emerges-as-a-Vital-Tool-for-Environmental-Protection-Sustainability

What is XAI?

For most people, AI is a black box. You put some data into an AI app (like ChatGPT) and you receive the output. But can you explain how that output is created?

This is where XAI enters the stage. XAI is a collection of methods and processes that enable AI users to understand and trust AI output.
The so-called post-hoc methods are applied after the model is trained.
SHAP and LIME, which I use daily in my work, are examples of these.

Because I am also interested in green IT, I was wondering how green these post-hoc methods are.


How green is XAI?

Well, they are not, because they run additional computations on top of the AI model. This means they increase runtime, resulting in extra CPU/GPU usage and more energy consumption than running the model alone.

But is that the whole story?

No. They expose bias and errors faster, reducing wasted compute on poorly performing models.
They also enable more efficient model design: by identifying which features truly matter, you can retrain a smaller, leaner, greener model.
And last but not least, they can be paired with tools like CodeCarbon to connect explanations with sustainability metrics. This increases transparency in energy measurement.
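
As a rough sketch of that pairing, assuming the shap and codecarbon packages are installed, you could wrap a SHAP explanation in a CodeCarbon tracker to see what the explanation itself costs. The model and data are toy examples.

```python
# Sketch: measure the emissions of a post-hoc SHAP explanation with CodeCarbon.
import shap
from codecarbon import EmissionsTracker
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=8, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

tracker = EmissionsTracker()             # measures energy/CO2 of the wrapped code
tracker.start()
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # the extra compute the post talks about
emissions_kg = tracker.stop()

print(f"Explaining 500 predictions emitted ~{emissions_kg:.6f} kg CO2e")
# Mean |SHAP| per feature shows which features matter: input for a leaner model.
print(abs(shap_values).mean(axis=0))
```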


Is GreenXAI an illusion?

So, is GreenXAI an illusion when using these post-hoc methods?

SHAP and LIME are not inherently “green IT” methods, but they can play a green role within the AI development lifecycle by preventing waste and helping optimize models for efficiency.

I will still use these methods because I want to find out more about how the AI output is created. In the meantime I will make my coding greener when possible.

Are you interested in my GreenIT and XAI work? Just contact me and let’s see how I can help you.

Why you can't compare the AI energy consumption of different cloud computing vendors without a proper Green IT standard

Comparing apples with oranges in Green IT

Comparing apples with oranges in Green IT 1200 768 Cordny

Comparing apples with oranges?
Yes, that’s what I think is going on when GreenIT professionals are comparing cloud computing vendors on their energy costs per LLM query.

Last week, Google published an article about how they measure the environmental impact of AI inference. And the whole of LinkedIn went wild. It was polarizing: supporters and critics falling over each other, trying to shout the loudest.

But what did Google measure? That’s what TestingSaaS will find out.


Measuring environmental impact on AI inference by the cloud/AI providers

Google

First of all, Google measured the energy costs of a single Gemini text prompt (text, not another medium, which costs a lot more energy). The study takes a broad look, including not only the power used by the AI chips that run the models, but also all the other infrastructure needed to support that hardware, such as cooling and water consumption.


The estimation results: the median Gemini Apps text prompt uses 0.24 watt-hours (Wh) of energy, emits 0.03 grams of carbon dioxide equivalent (gCO2e), and consumes 0.26 milliliters (or about five drops) of water.


How Google did this is explained in their technical paper; it goes too far to explain it all here.
To give a better understanding of these numbers, Google stated:


The Gemini Apps text prompt uses less energy than watching nine seconds of television (0.24 Wh) and consumes the equivalent of five drops of water (0.26 mL) and 0.03 grams of carbon dioxide (market estimate)

And although the Google scientists also mentioned some critical remarks in their paper and article (median, market estimate, etc.), LinkedIn went into critical mode. Just query LinkedIn for “google gemini AI energy” and you will find enough positive and negative posts on this subject.

Mistral

Last July, Mistral AI published a full life cycle assessment (LCA):

The environmental footprint of training Mistral Large 2: as of January 2025, and after 18 months of usage, Large 2 generated the following impacts:

  • 20.4 ktCO₂e
  • 281,000 m³ of water consumed
  • 660 kg Sb eq (the standard unit for resource depletion)

The marginal impacts of inference, more precisely the use of their AI assistant Le Chat for a 400-token response (excluding users’ terminals):

  • 1.14 gCO₂e
  • 45 mL of water
  • 0.16 mg Sb eq

Awesome, now we can compare these results with Google’s, or can we?

Comparing Google and Mistral AI energy costs

That’s like comparing apples with oranges.


Why?

Just look at what is measured:

  • the marginal cost per prompt (Google)
  • the total cost of the full life cycle (Mistral)

What Google measured is completely different from what Mistral measured.

Why is this wrong?
Well, if you compare two different things, it’s like comparing apples with oranges: the measurements cover different scopes, so no meaningful comparison can be made.
There is no common standard to compare against.
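
To see why, here is a deliberately naive sketch: to turn Google’s per-prompt figure into something comparable with Mistral’s life-cycle total, you would have to invent a total prompt volume (and ignore that training is excluded). The assumed volume below is pure fiction, which is exactly the point.

```python
# Naive sketch: why the two published figures cannot be compared directly.
google_per_prompt_gco2e = 0.03     # published: per median Gemini text prompt (operation only)
mistral_total_ktco2e = 20.4        # published: Large 2, training + 18 months of usage

assumed_gemini_prompts = 200e9     # pure guess: not published anywhere
implied_total_ktco2e = google_per_prompt_gco2e * assumed_gemini_prompts / 1e9  # 1 kt = 1e9 g

print(f"Implied Gemini total: {implied_total_ktco2e:.1f} ktCO2e (meaningless: the prompt volume is invented)")
print(f"Mistral published total: {mistral_total_ktco2e} ktCO2e (different scope: full life cycle)")
```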

What to do now?

So instead of criticizing how the cloud computing vendors report and measure AI energy consumption, why not figure out together what a suitable measurement standard for AI energy consumption in Green IT could be?
What do you say, Green Software Foundation?

That would make the world a little less polarized than it already is.
We’re engineers; let’s leave the politics out of it!