
Picture of Tom Cruise in a fighter jet in the movie Top Gun

Testing like Top Gun?


A few years ago I was asked to help a crack remote IT team from Nagarro (India) with their software testing and quality assurance.
Their assignment was to create plugins (interfaces) between the client’s marketing platform (PaaS) and third parties like Microsoft Azure, Google Cloud Platform and other platforms such as Snowflake.
There was one catch: little documentation was available, we did not have a dedicated PO (later we got a great one!), and the situation could change by the day.
So, what do you do as a tester then?

source:  https://www.looper.com/831839/the-suprising-reason-top-gun-maverick-shot-a-jaw-dropping-amount-of-footage/

Introducing the OODA loop

Well, about 10 years ago a buddy of mine (and a great coach) told me about the OODA loop, a model for making decisions quickly.
It was developed by US Air Force Colonel John Boyd for use in air combat, where situations change by the second. Remember Top Gun and its great sequel Top Gun: Maverick?

How I use the OODA loop with software testing

OODA is an acronym for Observe, Orient, Decide and Act.
My first step was to survey the situation (Observe) and filter out the things necessary for my tests. These I had to combine (Orient) to create the best-fitting tests for the product and the current situation (Decide).
And then the testing started (Act).
But what if things changed?
Well, that’s why it is called a loop, and you can start again from the beginning at Observe.
All in a fast and agile way.
Doing this, we created interfaces quickly and were always aware of the constraints and possible risks. As a team, not a bunch of individuals!
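The four steps can be sketched as a small loop in Python. This is purely illustrative: the facts, risk scores and area names are hypothetical, and a real Observe/Orient/Decide/Act cycle lives in a tester's head, not in code.

```python
def observe(situation):
    # Observe: filter the facts that are relevant for testing
    return [fact for fact in situation["facts"] if fact["relevant"]]

def orient(observations):
    # Orient: combine the observations into a risk-ordered picture
    return sorted(observations, key=lambda fact: fact["risk"], reverse=True)

def decide(oriented):
    # Decide: pick the highest-risk area to test first
    return oriented[0]["area"] if oriented else None

def act(area):
    # Act: run the tests for that area (stubbed here)
    return f"tested {area}"

def ooda(situation):
    # One pass through the loop; in practice you re-Observe and go again
    return act(decide(orient(observe(situation))))

situation = {"facts": [
    {"area": "Azure plugin", "risk": 3, "relevant": True},
    {"area": "UI colours", "risk": 1, "relevant": False},
    {"area": "Snowflake plugin", "risk": 2, "relevant": True},
]}
print(ooda(situation))  # → tested Azure plugin
```

When the situation changes, you feed the loop new facts and it may well decide on a different test first, which is exactly the point of OODA.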

Alas, after a while the management team wanted to align us with the other teams and with the company’s processes.
Which is understandable, because the company was becoming more of a scale-up.

But, what a time.
It shouldn’t be a surprise I use the same OODA loop for my clients at TestingSaaS and ICT Rebels.

Always a maverick at TestingSaaS, always a step further, sometimes in the danger zone, but then the OODA-loop helps.
See you in the air, I mean cyberspace….


Quantum computing, Cordny, are you mad?


This question I got last week from a peer, after I announced on LinkedIn that I wanted to dive into quantum computing.

Yeah buddy, quantum computing, QC for short.

In high school I wasn’t a physics fan, but I was always fascinated by atoms and their protons, electrons etc.
Regarding mathematics, I hated geometry, but algebra interested me more. Even during my biology studies theoretical biology attracted me, but zoology was always my keen interest, so I graduated in microbiology and bio-informatics. Recognize the interest in micro?
During my testing career my curiosity about how things work at the smallest level continued, and I also became more experienced in cloud computing (see my blogs and articles).
And then, a few months ago, a colleague of mine talked about QC and I thought: seeing the possibilities of cloud computing combined with QC (thank you Quantum Delta NL), why not give it a shot in 2024? Let’s test it out and create content, or better said: Create Content through Testing. Just like I did for identiverse (UMA!!!!!) and the metaverse startups (Fectar 🚀).

A Bloch sphere visualization of a qubit, created with Python
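For the curious: the math behind such a visualization is small enough to sketch in pure-stdlib Python. For a qubit state α|0⟩ + β|1⟩ the Bloch coordinates are x = 2·Re(α*β), y = 2·Im(α*β), z = |α|² − |β|². (The actual sphere in the picture was drawn with a plotting library; this sketch only computes the point on it.)

```python
import math

def bloch_vector(alpha, beta):
    """Bloch-sphere coordinates (x, y, z) of the qubit state alpha|0> + beta|1>."""
    alpha, beta = complex(alpha), complex(beta)
    # Normalise the amplitudes so the state sits on the sphere's surface
    norm = math.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
    alpha, beta = alpha / norm, beta / norm
    p = alpha.conjugate() * beta
    return 2 * p.real, 2 * p.imag, abs(alpha) ** 2 - abs(beta) ** 2

print(bloch_vector(1, 0))  # (0.0, 0.0, 1.0): |0> is the north pole
print(bloch_vector(0, 1))  # (0.0, 0.0, -1.0): |1> is the south pole
print(bloch_vector(1, 1))  # ~ (1, 0, 0): the |+> superposition lies on the equator
```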



It’s a kind of déjà vu I have now, the curiosity and then implementing it, just like more than 10 years ago with IAM (identity and access management) and UMA (yeah, Eve Maler!).
I do not expect to be a pro at QC, but with my company TestingSaaS I will explore the landscape, test the software in the cloud, talk to the quantum computing experts, and I will write about it. Yes, using all my skills.
One step at a time, or better said: one qubit at a time :-).

To be continued!!

PS: I must be crazy, but I like it!


One of the hardest things in Software Testing


When people ask me what the hardest thing in software testing is, my answer is:

Creating your testdata and then using it wisely.

Testdata, not test automation?
No, because getting the correct testdata for your software testing costs a lot of time, especially when you need lots of data that is also diverse.

Test automation is also hard, but that is about automating your tests, not about the testing itself. You first need your testdata; otherwise there is no test, and no test automation.

Luckily we have the knowledge and tools from data science, with which we can create testdata using, for instance, Python and its libraries.
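As a minimal sketch of what that can look like, here is a stdlib-only generator of diverse, reproducible test users. The field names and value lists are made up for illustration; in real projects you would often reach for libraries like Faker or pandas instead.

```python
import random

random.seed(42)  # seeding makes the testdata reproducible between runs

FIRST_NAMES = ["Anna", "Bram", "Chen", "Dana", "Emre"]
DOMAINS = ["example.com", "test.invalid"]
COUNTRIES = ["NL", "IN", "US", "DE"]

def make_test_user(user_id):
    """Generate one synthetic test-user record with varied values."""
    name = random.choice(FIRST_NAMES)
    return {
        "id": user_id,
        "name": name,
        "email": f"{name.lower()}.{user_id}@{random.choice(DOMAINS)}",
        "country": random.choice(COUNTRIES),
        "age": random.randint(18, 90),
    }

# A batch of 100 diverse users, ready to feed into your tests
users = [make_test_user(i) for i in range(1, 101)]
print(len(users), users[0]["email"])
```

Because of the fixed seed, every run produces the same dataset, which keeps your automated tests deterministic.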

So, what testdata did you create today for your tests?

PS: Next to this post on TestingSaaS this blogpost was also shared on LinkedIn:

https://www.linkedin.com/posts/cordnynederkoorn_testdata-softwaretesting-datascience-activity-6988401827411021824-ySXd?utm_source=share&utm_medium=member_desktop


Getting practical with Botium: Testing Google Assistant


How did I end up with testing Google Assistant with Botium?

During my daily software tests I verify a lot of SaaS apps and platforms.

Examples are Chatbots and virtual assistants.

And recently, clients asked me more about testing Google Assistant.

This can be done manually, but that takes a long time and errors can occur when testing the same cases over a longer period of time.

Is it also possible to use test automation?

Yes, with the help of Botium.

This article shows my first experiences when testing Google Assistant automatically with Botium.

But first what is Google Assistant?

What is Google Assistant?

Google Assistant is Google’s voice assistant. With voice commands, voice search, and voice-activated device control you can finish a number of tasks after you’ve said the “OK Google” or “Hey Google” wake words. Conversational interaction through text or speech is its main goal, making Google Assistant a chatbot.

After the wake words you can start talking to Google without using a ‘trigger word’. Google listens and gives a response.

Google can even recognize different voices, knowing who is talking to it and responding accordingly. 

You can also ask for multiple things in one sentence.

Great, now we know what Google Assistant is. 

But do we want to test Google Assistant manually – with our own voice, or multiple voices – or can this testing be done automatically? 

Yes, by using Botium, the tool for testing, training and quality assurance of chatbots.

What is Botium?

Selenium is the de-facto-standard for testing web applications. 

Appium is the de-facto-standard for testing smartphone applications. 

Botium is the de-facto-standard for testing conversational AI. 

And just as Selenium and Appium, Botium is free and Open Source, and available on Github.

I can tell you a lot about Botium’s architecture, but that’s out of scope for this article.

Next to testing the conversation flow of a chatbot you can do a lot more:

  • Testing NLP model of a chatbot
  • E2E testing of a chatbot based on Selenium and Appium
  • Load- and Stress testing
  • GDPR testing
  • Security testing
  • CI/CD integration (Jenkins, Bamboo, Azure DevOps Pipelines, IBM Toolchain, …)
  • and many more

For now we will focus on testing Google Assistant with Botium.

How can I test Google Assistant with Botium?

Instead of thinking how I could test Google Assistant with Botium I did a very modern thing.

I Googled the question.

And I got lucky: Botium already made a video on this, with the appropriate name ‘Setting Up a Google Assistant Project in Botium Box’.

Awesome, now I can copy this completely for my own testing.

The steps in Google Actions, Assistant and Botium are very straightforward.

And soon I was making my own Botium tests and creating scripts for testing Google Assistant.
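Those scripts are written in BotiumScript, Botium’s plain-text conversation format: a test-case name, then alternating #me (what you say) and #bot (what you expect back) sections. A minimal sketch, where the bot’s answer is hypothetical and your actual Google Assistant response will differ:

```
TC01 - Greeting Google Assistant

#me
Hello

#bot
Hi, how can I help?
```

The test fails when the actual response does not match the #bot section, which is exactly what we will run into below.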

But, as with all software, it is not always as easy as it seems.

My experiences with Testing Google Assistant with Botium

From what I saw when testing Google Assistant, Botium as a tool is very intuitive.

What I really like is that you can see and hear (!) the results as they occur when using Google Assistant. Just search your test result for the following links.

The sound and screen-links overlap, but that’s a cosmetic issue.

The important thing is, it works. 

You can hear the text as it is displayed by using the play button as seen above.

And by clicking the weblink you are forwarded to a page resembling the results you get when using Google Assistant.

One of the things I noticed when testing Google Assistant is that its responses can differ from what you expected.

For example, see this result after saying ‘Hallo’ to Google Assistant:

Always update your testcases regularly: Google Assistant is changing fast and, as you can see, it even gets more personalized.

Also, what happens when the answer to your question can be random? Better said, when the answer can be given in a different sequence?

Google Assistant will give appropriate prompts, depending on the question you ask.

But see what happens when I ask Google Assistant via Botium the following question:

‘Who is Jack Nicholson?’

Great, the testcase worked fine, but let’s see what happens when I repeat it.

In the second test different buttons appear in the result, failing the testcase.

Now you could delete the expected test results from the Botium code in your testcase, but remember you are testing a voice assistant: Google Assistant returns these buttons so you can click them for more information or an answer to a question.

Therefore, in Botium you really have to define your testcases and their expected results carefully, to avoid situations like this.

Wrap up

This article showed my first experience testing Google Assistant with Botium.

These results are not exhaustive; I only mentioned the findings that are important when testing Google Assistant with Botium for the first time.

Other future test cases could be multiple-voice testing, asking multiple questions at the same time, etc. But that’s for a future article.

Botium is intuitive, easy to use and can be used for different testing processes with chatbots.

As with every tool, you have to stay critical, as a real tester should.

Test automating a chatbot is never easy, but with Botium you have a great tool to work with.

It should be in every chatbot tester’s toolbox!