Posts by:

Cordny

turning system data into quality insights

Becoming a Data-Savvy Analyst: The Next Step in Testing Maturity


Modern software teams produce enormous amounts of data.
Logs, metrics, traces, test results, performance dashboards, and customer usage signals are generated every second.

Yet in many teams, that data is barely used.

Tests are executed. Dashboards exist. Monitoring tools run. But few people translate that data into actionable insights about quality.

This is where the Data-Savvy Analyst emerges.

In the TestingSaaS Skill Maturity Framework, becoming data-savvy means moving beyond intuition and execution toward evidence-based quality decisions.

The Traditional QA Analyst

A traditional QA Analyst already thinks more strategically than an Operator.

They:

  • Perform risk-based testing
  • Analyze requirements
  • Identify coverage gaps
  • Communicate risks to stakeholders

They answer questions like:

  • What could break?
  • Where are our risky areas?
  • What should we test before release?

But their insights often rely on experience and reasoning, not always on measurable system behavior.

And that’s where the next evolution begins.

The Data-Savvy Analyst


Throughput analysis using Datadog

A Data-Savvy Analyst adds a new capability:

They use production and testing data to guide quality decisions.

Instead of asking only what might break, they ask:

  • What does the data tell us about system behavior?
  • Where do users actually experience problems?
  • Which parts of the system generate the most errors?
  • What patterns appear in logs, metrics, and traces?

This analyst connects multiple information sources:

  • Test results
  • Observability data
  • Performance metrics
  • Production incidents
  • User behavior analytics

Quality becomes measurable and observable.

Why Data Literacy Is Becoming Essential

In modern SaaS environments, systems are too complex to understand through testing alone.

Applications now include:

  • Microservices
  • APIs
  • Third-party integrations
  • Cloud infrastructure
  • Continuous deployment

Failures often appear in production conditions, not just in test environments.

This means quality engineers must learn to interpret operational signals such as:

  • Error rates
  • Latency spikes
  • Usage patterns
  • Resource consumption

Without this perspective, testing remains blind to real-world behavior.

The Shift from Test Results to System Insights

Traditional testing focuses on pass/fail outcomes.

Data-savvy analysis focuses on behavioral patterns.

Instead of asking:

Did the test pass?

The Data-Savvy Analyst asks:

  • How often does this endpoint fail in production?
  • Which user flows generate the most latency?
  • Which features are barely used but heavily tested?
  • Where do incidents cluster in the architecture?

Testing becomes part of a broader discipline: observing system health.

Skills That Define a Data-Savvy Analyst

Developing this capability requires new skills.

Understanding Observability Data

Data-savvy analysts work with:

  • Logs
  • Metrics
  • Distributed traces
  • Performance telemetry

Tools might include observability platforms or monitoring dashboards.

But the important skill is interpreting patterns, not just reading charts.

Asking Quantitative Questions

Data literacy begins with curiosity.

Examples of useful questions:

  • Which component causes the most incidents?
  • What percentage of traffic hits this feature?
  • How does performance change after deployment?
  • What signals indicate quality degradation?

These questions turn raw data into insights.
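To make the first question concrete, here is a minimal Python sketch that turns raw log records into an answer to "which component causes the most incidents?" (the records, field names and status codes are invented for illustration; a real analyst would export these from a logging platform):

```python
from collections import Counter

# Hypothetical log records (illustrative schema, not a real export format).
records = [
    {"component": "checkout", "status": 500},
    {"component": "checkout", "status": 200},
    {"component": "search",   "status": 200},
    {"component": "checkout", "status": 503},
    {"component": "search",   "status": 200},
    {"component": "login",    "status": 200},
]

totals = Counter(r["component"] for r in records)
errors = Counter(r["component"] for r in records if r["status"] >= 500)

# Error rate per component, sorted worst-first: where do incidents cluster?
error_rates = sorted(
    ((c, errors[c] / totals[c]) for c in totals),
    key=lambda item: item[1],
    reverse=True,
)
for component, rate in error_rates:
    print(f"{component}: {rate:.0%} of requests failed")
```

A few lines of analysis like this already answer a quantitative question no pass/fail report can.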

Connecting Testing with Production Reality

The Data-Savvy Analyst connects three worlds:

  1. Development
  2. Testing
  3. Operations

Instead of seeing testing as a separate phase, they treat quality as a continuous feedback loop.

Test results influence monitoring.
Monitoring insights influence test design.

Why Many Teams Struggle with This Transition

Despite the importance of data literacy, many teams struggle to develop it.

Common reasons include:

Tool Silos

Testing tools, monitoring platforms, and analytics dashboards are often separate.

Few teams actively connect them.

Lack of Analytical Training

Testers are trained to:

  • Design tests
  • Automate checks
  • Execute scenarios

They are rarely trained to analyze operational data.

Cultural Barriers

In some organizations:

  • QA owns testing
  • DevOps owns monitoring
  • Product owns analytics

The Data-Savvy Analyst crosses all three domains.

That requires collaboration and curiosity.

Why Data-Savvy Analysts Are Increasingly Valuable

As SaaS systems scale, quality decisions must become data-driven.

Organizations need professionals who can:

  • Interpret observability signals
  • Connect incidents with architectural weaknesses
  • Prioritize testing based on real usage patterns
  • Identify hidden reliability risks

These capabilities transform QA from a verification function into a decision-support discipline.

Practical Steps to Become a Data-Savvy Analyst

If you want to develop this capability, start with small habits.

Explore Your Monitoring Tools

Open dashboards used by DevOps teams and ask:

  • What metrics are tracked?
  • What alerts exist?
  • Which services produce the most errors?

Study Production Incidents

Every incident contains valuable learning signals.

Ask:

  • What failed?
  • What signals existed before the failure?
  • Could testing have detected it earlier?

Connect Observability with Test Strategy

Use operational data to guide testing priorities.

For example:

  • Focus tests on high-traffic features
  • Investigate areas with high error rates
  • Design performance tests based on real workloads

Testing becomes evidence-based.
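As a sketch of what that prioritization could look like (Python, with invented production numbers; a real version would pull traffic and error figures from your monitoring platform):

```python
# Hypothetical production metrics per feature (invented numbers):
# traffic share (fraction of requests) and observed error rate.
features = {
    "checkout":  {"traffic": 0.40, "error_rate": 0.020},
    "search":    {"traffic": 0.35, "error_rate": 0.001},
    "reporting": {"traffic": 0.05, "error_rate": 0.050},
    "settings":  {"traffic": 0.20, "error_rate": 0.002},
}

# A naive risk score: the share of all requests that fail in this feature.
# Real prioritization would also weigh business impact, but the principle
# is the same: let production data order the test backlog.
priorities = sorted(
    features,
    key=lambda f: features[f]["traffic"] * features[f]["error_rate"],
    reverse=True,
)
print("Test these first:", priorities)
```

Note how the ordering differs from intuition: a low-traffic feature with a high error rate ends up above a busy but healthy one.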

The Future of Quality Engineering

The role of testing is evolving.

Operators execute tests.
Analysts reason about risk.
Data-Savvy Analysts interpret system behavior.

In modern SaaS environments, quality is no longer only about verification.

It is about understanding complex systems through data.

And the professionals who master that skill will shape the future of quality engineering.

How to become a Data-Savvy Analyst?

👉 TestingSaaS Learning Resource Hub



TestingSaaS and InnovaTeQ partner up in IT Education

TestingSaaS and InnovaTeQ combine forces to shake up Dutch IT education


🔥HOT FROM THE PRESS 🔥

TestingSaaS and InnovaTeQ, now partners in IT Education

Over the last few months I was deeply involved in setting up an affiliate program for the Hungarian IT course provider InnovaTeQ.
Ádám Tóth, founder of InnovaTeQ, and I share a vision: provide IT courses that mix engineering, business and the use of tooling.
In today’s market these subjects are mostly taught separately, which does not give you the big picture you need as an IT professional. Especially in the age of AI.

TestingSaaS and InnovaTeQ partner up in IT Education

So we started to work together, as content creators and affiliate partners.

Why did I become an affiliate partner with InnovaTeQ?

Because we want to introduce the Dutch market to the unique courses InnovaTeQ provides in IT.
From observability and performance testing to agile working.

Providing good value at a fair price.

A collection of InnovaTeQ courses

Here is a collection of InnovaTeQ courses:

Observability Concept Essentials

Observability Maturity Unlocked

Observability in Action – Roles & Use Cases

Time to get involved in IT education, the InnovaTeQ way!

illustrating the evolution from IT operator to IT analyst by jumping from 1 mountain to another

Why Moving in IT from Test Operator to Test Analyst Is the Hardest Step in the TestingSaaS Skill Maturity Framework


In the TestingSaaS Skill Maturity Framework, the jump from Level 2 – Operator to Level 3 – Analyst is where most testers plateau.

Not because they lack intelligence.
Not because they lack tooling skills.

But because this transition is not about learning more tools.

It’s about changing how you think.

source: https://subud.ca/overcome-obstacles/

Level 2 – Operator: Reliable Execution

At Level 2, professionals are strong executors.

They:

  • Write and maintain automated tests
  • Execute regression suites
  • Use tools like Selenium, Playwright, Postman
  • Deliver predictable output

Success is measured in:

  • Number of tests
  • Stability of regression
  • Coverage percentage
  • Passed vs failed results

The Operator works inside the system.

They make it run.

This level is valuable. Many SaaS companies depend on strong Level 2 professionals to keep releases stable.

But it is not yet strategic.

Level 3 – Analyst: Strategic Quality Thinking

At Level 3, something changes.

The Analyst asks different questions:

  • What risks are we actually mitigating?
  • What is the business impact if this fails?
  • Where are our coverage gaps?
  • Which parts of this system are fragile?
  • Should this even be automated?

Instead of executing tests, the Analyst designs quality strategy.

They connect:

  • Requirements → Architecture → Risk → Test Approach
  • Product decisions → Quality trade-offs
  • Business goals → Technical implementation

The Analyst works on the system, not just in it.

Why This Transition Is So Difficult

1. It Requires an Identity Shift

Level 2 value = “I can build and run tests.”

Level 3 value = “I can reason about risk and complexity.”

That shift feels uncomfortable.

Tool mastery gives certainty.
Risk analysis gives ambiguity.

Many professionals hesitate because they feel they are losing their strongest asset: execution speed.

2. You Must Be Comfortable Challenging Decisions

Analysts ask uncomfortable questions:

  • Why are we testing this feature?
  • What happens if we don’t?
  • Is this really high risk?
  • Are we over-automating?

That can feel confrontational, especially in delivery-driven SaaS environments.

It requires confidence and communication skills, not just technical expertise.

3. Tooling Stops Being the Center

At Level 2, tools are your identity.

At Level 3:

  • Architecture matters more than frameworks.
  • Risk matters more than coverage percentage.
  • Impact matters more than script count.

This is psychologically hard because many testers built their careers around automation expertise.

4. You Need System Thinking

Analytical maturity demands abstraction:

  • Understanding dependencies
  • Modeling data flows
  • Seeing edge cases before code exists
  • Translating business language into test strategy
  • Recognizing where failures cascade across SaaS integrations

This is cognitive growth, not procedural growth.

It takes deliberate practice.

5. Organizations Often Reward Level 2 Behavior

Many companies:

  • Say they want strategic QA
  • But measure success in test case output
  • Celebrate automation numbers
  • Prioritize speed over reflection

So professionals stay in the safe zone of execution.

And maturity stalls.

Why This Matters in SaaS Environments

In SaaS companies, especially scaling ones:

  • Releases become more frequent
  • Integrations multiply
  • Customer impact increases
  • Architectural complexity grows

Level 2 professionals keep things running.

Level 3 professionals prevent future chaos.

Without Analysts:

  • Automation becomes noise
  • Regression grows without strategy
  • Technical debt accelerates
  • Quality becomes reactive instead of proactive

This is exactly where many Salesforce partners and SaaS scale-ups struggle.

How to Move from Operator to Analyst

The shift is intentional. It does not happen automatically.

Practical steps:

  1. Start mapping risk before writing tests.
  2. Ask “What could hurt the business?” in every refinement.
  3. Study architecture diagrams.
  4. Model data flows.
  5. Participate in product discussions.
  6. Stop measuring your value in test count.

Replace:

“How do I automate this?”

With:

“Should this be automated and why?”

The Strategic Tipping Point

In the TestingSaaS Skill Maturity Framework, Level 3 is the tipping point where:

  • Quality becomes strategic
  • Testers influence decisions
  • Automation becomes intentional
  • QA starts shaping architecture discussions

It’s the difference between being a reliable executor and becoming a quality architect.

And that is why the jump feels difficult.

It requires you to grow beyond the comfort of tools into the responsibility of judgment.

If you are currently operating at Level 2, ask yourself:

Are you maintaining stability?

Or are you shaping the future risk profile of your product?

That answer defines your maturity.

a graph showing a DevOps team maturity over time, with an emphasis on the plateau phase

Why Most DevOps Teams Plateau at Intermediate Level (And How to Break Through)


Most engineering teams think they are improving.

They adopt tools.
They automate pipelines.
They attend conferences.

But skill growth quietly plateaus.

Not because of motivation.
Not because of budget.

But because there is no structured skill maturity path.

source image: https://deoshankar.medium.com/100-days-of-project-based-devops-learning-plan-a445fc9f2f9

The Skill Maturity Problem

Across DevOps, Cloud, Testing, and Performance Engineering,
most professionals operate in one of four hidden stages:

  • Tool-Focused
  • Implementation-Focused
  • System-Focused
  • Strategy-Focused

Without recognizing where you are,
it’s impossible to intentionally move forward.

Introducing the TestingSaaS Skill Maturity Framework

That’s why TestingSaaS created the TestingSaaS Skill Maturity Framework, which has 4 distinct stages an engineer has to go through to become a Strategic Technologist.

The 4 Stages of Skill Maturity

1️⃣ Tool Awareness -> The Tool User

You know the tools.
You can follow tutorials.
You execute instructions.

2️⃣ Implementation -> The Operator & Analyst

You can apply tools in real projects.
You troubleshoot issues.
You work independently.

3️⃣ System Thinking -> The Architect

You design solutions.
You understand trade-offs.
You influence architecture decisions.

4️⃣ Strategic Impact -> The Strategic Technologist

You optimize organizations.
You mentor others.
You shape long-term engineering direction.

The Hidden Constraint

The biggest bottleneck is not effort.

It’s access to structured, high-quality, practical education
that supports progression from Stage 2 to Stage 3.

Resources That Actually Support Stage 3 Growth

Over the years, I’ve seen a lot of DevOps and performance content.

A lot of it is surface-level.
A lot of it is tool marketing.

If you’re serious about deepening your expertise in:


  • Performance Engineering
  • Green IT
  • Observability

There are a few structured programs I personally consider strong.

You can explore them here:
👉 TestingSaaS Learning Resource Hub

This Resource Hub is not exhaustive, and will be expanded continuously during my learning journey.
At the moment it is focused on Observability and Green IT, which are my own development goals in 2026.

Skill growth is not about consuming more content.

It’s about moving intentionally from execution to systems thinking.

If you’re unsure where you currently stand,
start by identifying your stage.

That alone changes how you learn.

Is GreenXAI an illusion?

GreenXAI, an illusion?


Green IT, that’s one of my favorite subjects these days here and on LinkedIn.

But did you know TestingSaaS also works in the field of Explainable AI, aka XAI?
And that he combines it with Green IT in GreenXAI?
Let me explain.


Is GreenXAI an illusion?

source image: https://en.imna.ir/news/807163/AI-Emerges-as-a-Vital-Tool-for-Environmental-Protection-Sustainability

What is XAI?

For most people, AI is a black box. You put data into an AI app (like ChatGPT) and you receive output. But can you explain how that output was created?

This is where XAI enters the stage: a collection of methods and processes enabling AI users to understand and trust AI output.
The so-called post-hoc methods are applied after the model is trained.
SHAP and LIME are examples of these, which I use daily in my work.

Because I am also interested in green IT, I was wondering how green these post-hoc methods are.


How green is XAI?

Well, they are not, because they run additional computations on top of the AI model. This means they increase runtime, resulting in extra CPU/GPU usage and more energy consumption than running the model alone.

But is that the whole story?

No: they expose bias and errors faster, reducing the compute wasted on poorly performing models.
They also enable more efficient model design. By identifying which features truly matter, you can retrain a smaller, leaner, greener model.
And last but not least: they can be paired with tools like CodeCarbon to connect explanations with sustainability metrics. This increases transparency in energy measurement.
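To make the runtime cost concrete, here is a toy, pure-Python sketch of a post-hoc explanation: a bare-bones permutation importance, standing in for real SHAP or LIME (the model, data and numbers are all invented). Every shuffled re-prediction is extra compute the bare model would never spend:

```python
import random

random.seed(0)

# A stand-in "model": in reality this would be a trained ML model.
# It depends heavily on feature 0 and barely on feature 1.
def model(x):
    return 3.0 * x[0] + 0.1 * x[1]

data = [[random.random(), random.random()] for _ in range(100)]
calls = 0  # count every model evaluation: a proxy for energy use

def predict(x):
    global calls
    calls += 1
    return model(x)

baseline = [predict(x) for x in data]

# Permutation importance: shuffle one feature at a time and measure how
# much the predictions move. Each shuffle re-runs the model over the
# whole dataset: that is the extra compute a post-hoc explanation adds.
importance = {}
for f in range(2):
    shuffled = [x[f] for x in data]
    random.shuffle(shuffled)
    diffs = []
    for x, new_val, base in zip(data, shuffled, baseline):
        x_perturbed = list(x)
        x_perturbed[f] = new_val
        diffs.append(abs(predict(x_perturbed) - base))
    importance[f] = sum(diffs) / len(diffs)

print(f"model calls: {calls}")  # 100 baseline + 2 x 100 explanation = 300
print(f"importance per feature: {importance}")
```

Explaining the model tripled the number of model evaluations; on a large model that translates directly into extra CPU/GPU time and energy, which is exactly the trade-off discussed above.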


Is GreenXAI an illusion?

So, is GreenXAI an illusion because it relies on these post-hoc methods?

SHAP and LIME are not inherently “green IT” methods, but they can play a green role within the AI development lifecycle by preventing waste and helping optimize models for efficiency.

I will still use these methods because I want to find out more about how the AI output is created. In the meantime I will make my coding greener when possible.

Are you interested in my GreenIT and XAI work? Just contact me and let’s see how I can help you.

Why you can't measure the AI consumption of different cloud computing vendors without a proper standard for GreenIT?

Comparing apples with oranges in Green IT


Comparing apples with oranges?
Yes, that’s what I think is going on when GreenIT professionals are comparing cloud computing vendors on their energy costs per LLM query.

Last week Google published an article about how they measure the environmental impact of AI inference. And the whole of LinkedIn went wild. It was polarizing: supporters and critics falling over each other, trying to shout the loudest.

But what did Google measure? That’s what TestingSaaS will find out.


Measuring the environmental impact of AI inference by the cloud/AI providers

Google

First of all, Google measured the energy cost of a single Gemini text prompt (text, not another medium, which costs a lot more energy). The study takes a broad view, including not only the power used by the AI chips that run the models but also the other infrastructure needed to support that hardware, such as cooling and water consumption.


The estimation results: the median Gemini Apps text prompt uses 0.24 watt-hours (Wh) of energy, emits 0.03 grams of carbon dioxide equivalent (gCO2e), and consumes 0.26 milliliters (or about five drops) of water.


How Google did this is explained in their technical paper; it goes too far to explain it all here.
To give a better understanding of these numbers, Google stated:


The Gemini Apps text prompt uses less energy than watching nine seconds of television (0.24 Wh) and consumes the equivalent of five drops of water (0.26 mL) and 0.03 grams of carbon dioxide (market estimate)

And although the Google scientists also included some critical remarks in their paper and article (median, market estimate, etc.), LinkedIn went into critical mode. Just search LinkedIn for ‘google gemini AI energy’ and you will find plenty of positive and negative posts on this subject.

Mistral

Last July, Mistral AI published a full life cycle assessment (LCA):

The environmental footprint of training Mistral Large 2: as of January 2025, and after 18 months of usage, Large 2 generated the following impacts:

  • 20.4 ktCO₂e
  • 281,000 m³ of water consumed
  • 660 kg Sb eq (the standard unit for resource depletion)

The marginal impacts of inference, more precisely the use of their AI assistant Le Chat for a 400-token response (excluding users’ terminals):

  • 1.14 gCO₂e
  • 45 mL of water
  • 0.16 mg of Sb eq

Awesome, now we can compare the results with Google, or not?

Comparing Google and Mistral AI energy costs

That’s like comparing apples with oranges.


Why?

Just look at what is measured:

  • the “marginal by prompt” (Google)
  • the “total cost of the cycle” (Mistral)

What Google measured is completely different from what Mistral measured.

Why is this wrong?
Well, if you compare different things, it’s like comparing apples with oranges: the scopes are completely different, so no meaningful comparison can be made.
There is no shared standard to compare against.
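To make the scope mismatch concrete, here is a back-of-the-envelope calculation in Python, using the figures published above (the point is that the resulting ratio is meaningless, not that either vendor is worse):

```python
# Published figures, but with different scopes (see the articles discussed above):
google_per_prompt_g = 0.03     # gCO2e: median Gemini text prompt, marginal/operational estimate
mistral_per_response_g = 1.14  # gCO2e: Le Chat 400-token response, derived from a full LCA

naive_ratio = mistral_per_response_g / google_per_prompt_g
print(f"naive ratio: {naive_ratio:.0f}x")  # 38x, but the number means nothing

# Why the 38x is invalid:
# - Google reports a marginal per-prompt operational estimate;
# - Mistral's figure comes from a life cycle assessment that amortizes
#   training, hardware manufacturing, etc. over 18 months of usage.
# Different system boundaries, different methodologies: apples and oranges.
```

The arithmetic is trivial; the methodology behind each number is not, and that is exactly why a shared standard is needed before any such ratio can mean anything.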

What to do now?

So instead of criticizing how the cloud computing vendors report AI energy consumption, why not figure out together what a suitable measurement standard for AI energy consumption in Green IT could be?
A job for the Green Software Foundation, perhaps?

That would make the world less polarizing than it already is.
We’re engineers; let’s leave the politics out of it!


Green IT vs. Sustainable IT

Green or sustainable IT?


Last week I had a great meetup with like-minded people about IT and making it greener.
Members from the Green Software Foundation and Sustainable IT Netherlands communities came together.
Passion for technology converged with a shared commitment to sustainability.
That evening something was lingering in my mind, but I could not grasp it.
A few days later it struck me:

Green IT is not equal to Sustainable IT

Let me explain.

Why is Green IT not equal to Sustainable IT?

Green IT and Sustainable IT: both terms are used frequently on social media, especially on LinkedIn, when promoting IT measures against climate change. We use them interchangeably without even noticing. But they are not the same.
We first have to look at what these terms mean separately.

What is Green IT?

In my honest opinion, Green IT, as also described by the Green Software Foundation (GSF), is software that is responsible for emitting fewer greenhouse gases like CO2. So: less CO2, less energy, less waste.
It’s a tech thing: trying to solve a problem quickly, result-driven.
But what’s then the difference with Sustainable IT?

What is Sustainable IT?

Sustainable IT refers to the design, manufacture, use, and disposal of IT systems and infrastructure with minimal negative impact on the environment, while also being socially responsible and economically viable.
The last part of this definition shows the difference with Green IT: while also being socially responsible and economically viable. Let me explain this further.

The difference between Green IT and Sustainable IT?

Sustainable IT is not only about IT; it also involves social interaction. It is a process: getting IT sustainable.
How? By implementing processes involving not only IT professionals, but other people too.
And processes take time; it’s not a quick fix, it needs commitment. It is slower.
It also affects the economy: how can we build sustainable IT solutions that last a long time?

In other words, Green IT is a part of Sustainable IT.

What is the role of TestingSaaS here?

TestingSaaS has a green mission: help to reduce the emission of green house gases by IT.
With his knowledge of software testing, documentation, and green IT, Cordny Nederkoorn
helps Small and Medium-sized Enterprises make their software testing and document creation greener, so they can create sustainable products.
Not only green IT, but also sustainable IT, by creating awareness at these companies and clients.

Would you like to discover what TestingSaaS can do for your organization?

📅 Schedule a free exploratory call via
https://lnkd.in/eAXUVjBS

or send me a direct message.

Let’s build sustainable IT together, for a lasting world!!

TestingSaaS goes green IT

TestingSaaS on a new mission: a green mission!


Remember I said it was time for some experiments?

Well, they started.

TestingSaaS goes green IT, just like its logo.

It started with a spark, erupting into a flame, and last Wednesday a flow started to spread.

How did this happen?

The spark that ignited the flame: a book called Green IT

A few months ago I was walking through my favorite bookshop in Oss like Ernest Hemingway did a hundred years ago in the famous bookshop Shakespeare and Company in Paris. I always relax here, just browsing through books. Then I saw a book called Green IT by Jan Hoogstra and Eric Consten.

While reading the cover I got intrigued, and I bought it. Yes, current IT (especially AI) is having a problematic impact on the climate. And this book gives, next to some theory, practical examples of how to decrease this impact. Software, hardware, networks, data centers and utilities can all help. Companies and organisations like TNO, AFAS and the government are already doing it.

This got me thinking, how could I help with my company?
Not only by getting more sustainable with my company, but also more as a real player in this new field.

A spark became a flame.

Then I met Wilco Burggraaf on LinkedIn.

The flow: Meeting the people from Sustainable IT Netherlands and the Green Software Foundation

Following Wilco, the Dutch Green Software Champion, expanded my network in Green IT, with people like Robert Keus, a social entrepreneur revolutionizing the way technology intersects with society, who is developing a first green AI chat, reducing the impact AI has on our environment by running on sustainable infrastructure and by repurposing heat. Chatbots, eh, where did I do that before?


Man, I had to meet these people, but how? As an entrepreneur I’m also quite busy.
Let’s see if there are meetups where they are involved, and yes, there are.

One of them was last Wednesday, the 28th of May 2025: a meetup of the Green Software Foundation and Sustainable IT Netherlands communities, where passion for technology converges with a shared commitment to sustainability.
Hosted by Thorsten Picard at Capgemini HQ Utrecht.
This was the time to get that flow going!

The Green IT flow

The evening was wonderful. I finally met Wilco and Robert and a lot of other people, a real organic gathering.

I heard about the ‘Green Software Foundation’, and I was very happy to also meet people from ‘Sustainable IT Netherlands’. Corina Milosoiu and Chris Stapper, very delighted to have met you.

But a meetup is not a meetup when there are no talks.

Robert kicked it off, together with Cas Burggraaf: an energetic and eye-opening session, which proved that a talk about GenAI can be about so much more than just numbers and figures.

Then the stage was ready for a lighthearted talk by Mirko van der Maat from Capgemini about Sustainability in Architecture and Barbapapa. Oh man, Barbapapa, forgotten memories.

At the end it was time for some pitches, which were received very well.

But hey, what is a meetup without some drinks and snacks, very well facilitated by our host?

Time for some networking. Great talks with very passionate people with one thing in common: Green IT!

TestingSaaS going green: the future?

Ok, we had a spark that erupted in a flame, becoming a flow.

Well, I want a good Aussie bush fire, I want to create a flood.

Yes, I remember your books Rijn Vogelaar.

And I can’t do it alone.

With my new friends from the Green Software Foundation and Sustainable IT Netherlands communities I can.

How? By Creating Content through Testing!

To be continued!!

Time for some experiments


When you look at the logo of TestingSaaS you see a green magnifying glass.

One of the reasons I selected the color green was because of my biology-background at Wageningen University & Research.

For the future there will be another reason.

When I was in Ireland a few weeks ago the color green was everywhere.

And then I got an epiphany, something to work on when I was back home.
See the recent posts of me, Cordny Nederkoorn, and guess what it will be.

DM me if you know 🙂

Experiments are going on now, and it looks promising.

One step at a time.

To be continued!


picture of Tom Cruise in a fighterjet in movie Top Gun

Testing like Top Gun?



A few years ago I was asked to help a crack remote IT team from Nagarro (India) with their software testing and quality assurance.
Their assignment was to create plugins (interfaces) between the client’s marketing platform (PaaS) and third parties like Microsoft Azure, Google Cloud Platform and other platforms like Snowflake.
There was one catch: little documentation was available, we did not have a dedicated PO (later we got a great one!), and the situation could change by the day.
So, what do you do as a tester then?

source:  https://www.looper.com/831839/the-suprising-reason-top-gun-maverick-shot-a-jaw-dropping-amount-of-footage/

Introducing the OODA loop

Well, about 10 years ago a buddy of mine (and a great coach) told me about the OODA loop, a model for making decisions quickly.
It was developed by a US Air Force Colonel, John Boyd, for use in air combat, where situations change by the second. Remember Top Gun and its great sequel Top Gun: Maverick?

How I use the OODA loop with software testing

OODA is an acronym for Observe, Orient, Decide and Act.
My first step was to survey the situation (Observe) and filter out the things necessary for my tests. These I had to combine (Orient) to create the best-fitting tests for the product and the current situation (Decide).
And then the testing started (Act).
But what if things changed?
Well, that’s why it is called a loop, and you can start again from the beginning at Observe.
All in a fast and agile way.
Doing this, we created interfaces fast and were always aware of the constraints and possible risks. As a team, not a bunch of individuals!
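The loop itself can be sketched as a simple control structure (Python, with invented names and data; this is an illustration of the idea, not a formal model):

```python
# A minimal OODA-loop skeleton applied to testing (illustrative only).
def observe(situation):
    # Gather what changed: requirements, builds, incidents.
    return [fact for fact in situation if fact.get("relevant")]

def orient(observations):
    # Combine observations into test-worthy risks, highest first.
    return sorted(observations, key=lambda o: o["risk"], reverse=True)

def decide(risks, capacity=2):
    # Pick the best-fitting tests for the current situation.
    return [r["test"] for r in risks[:capacity]]

def act(tests):
    # Execute; the results feed the next Observe pass.
    return [f"ran {t}" for t in tests]

situation = [
    {"relevant": True,  "risk": 3, "test": "Azure plugin contract test"},
    {"relevant": False, "risk": 1, "test": "legacy report layout"},
    {"relevant": True,  "risk": 5, "test": "Snowflake auth flow"},
]

# One pass through the loop; in practice you keep looping until release.
results = act(decide(orient(observe(situation))))
print(results)  # ['ran Snowflake auth flow', 'ran Azure plugin contract test']
```

One pass through Observe → Orient → Decide → Act; when the situation changes, you feed the new facts into Observe and go around again.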

Alas, after a while the management team wanted to align us with the other teams and with the company’s processes.
Which is understandable, because the company was becoming more of a scale-up.

But, what a time.
It shouldn’t be a surprise I use the same OODA loop for my clients at TestingSaaS.

Always a maverick at TestingSaaS, always a step further, sometimes in the danger zone, but then the OODA-loop helps.
See you in the air, I mean cyberspace….