Preparing yourself for the CTFL Exam

A friend of mine recently passed the CTFL exam after a week of intense study and effort. CTFL (Certified Tester Foundation Level) is a popular ISTQB certification in software testing. It examines your professional knowledge of the software testing discipline. The exam has 40 questions and takes 60 minutes.

Should I take the exam?

This is a very debatable subject, and people can discuss its value for hours.

If you are wondering whether this certification is for you, here are some things you can keep in mind to help you decide:

  • Experience always has more value than any certification. Certifications don’t provide the exposure and training you get while working on real-life projects!
  • THIS CERTIFICATION CAN HELP YOU TO GET YOUR FIRST QA JOB. If you are changing careers and have never worked in software testing before, this certification might help you get selected for an interview. It becomes part of your portfolio, and since you might not have any experience in this field, it can be used as a criterion when filtering candidates.
  • THE CERTIFICATION DOESN’T MEAN YOU ARE AN EXPERT OR THAT YOU ARE GETTING THE JOB. Most professionals take this certification at the beginning of their career and never study the syllabus again, because you start to rely on your experience more than on the base knowledge you acquired while studying for the exam. Again, the certification doesn’t mean you will get the job; experience and exposure to different projects will.

Online Course

Mock Exams/Material and Syllabus links

When do I know I am ready?

One technique that I use for most of the exams I take is practicing mock exams; once I pass three times in a row, it means I am mostly ready to take the real one. You might have other ways to tell when you are actually ready, but whatever approach you follow, make sure to prepare and dedicate yourself!

Test Strategy Templates

Hello people,
 
Following up on these meetups, Developing a Test Strategy and Desenvolvendo uma Estratégia de Testes (the Portuguese edition), I realised it would be useful to share some templates and examples that I have seen in my previous projects.
 
Every company/project adapts this document and has its own template. There is no right or wrong: as long as it contains the needed information, to the best of your knowledge at the time, it is okay.
 
 

 

Template 1

Template 2

Template 3

Template 4

Template 5

You can mix them: pick one section from one and another section from another. Feel free to create your own Test Strategy according to what you need!

Developing a Test Strategy

Hello everybody,

In case you missed it, here is the link to the meetup about Developing a Test Strategy (30/07/2020).

If you speak Portuguese and don’t feel comfortable with English yet, you can also watch the Portuguese version here:

Open Banking Functional Conformance Suite Test Cases

What is Open Banking?

Open banking allows the use of open APIs, enabling third-party developers to build applications and services around financial institutions. It aims to give account holders more financial transparency options, ranging from open data to private data.

[Image: Open Banking Use Cases (for Users), by Ştefan Alexandru Băluţ]

Open Banking Functional Conformance Suite

To help you obtain the Functional Conformance Certificate, Open Banking provides a Functional Conformance Tool that lets implementers check whether an API has successfully implemented all required functional elements of the OBIE Read/Write API Specifications.

This Open Banking tool allows an ASPSP (Account Servicing Payment Service Provider) and a TPP (Third Party Provider) to test the response of any API endpoint and validate that the JSON and data formats meet the schema, permissions and interfaces against the Functional API standard.

How to identify Test Cases covered in the OB Functional Conformance Suite?

How do you know what else needs to be covered, and whether there is indeed something more to cover? After digging into the project on Bitbucket, I found some useful JSON files where you can check the assertions for each test case, the test cases themselves, and another file that translates the list of assertions.

You can find the assertions performed for each test case inside the manifests folder.

For example, this one contains the assertions for this test case: The x-fapi-interaction-id is replayed for an Account. You can find the file with the accounts transactions test cases here.

[Screenshot: assertions listed in the manifest file for this test case]

Then you need to check what this assertion actually means; you can find the dictionary of the assertions in this file.

[Screenshot: entry from the assertion dictionary file]
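To make the assertion concrete, here is a sketch of what “the x-fapi-interaction-id is replayed” means in practice, written as a plain TypeScript check (the sandbox URL and token are placeholders, not taken from the conformance suite): the value you send in the request header must come back unchanged in the response header.

```ts
import { randomUUID } from "node:crypto";
import assert from "node:assert";

async function checkInteractionIdIsReplayed() {
  const interactionId = randomUUID();

  // Hypothetical sandbox accounts endpoint and access token.
  const response = await fetch(
    "https://sandbox.bank.example/open-banking/v3.1/aisp/accounts",
    {
      headers: {
        Authorization: "Bearer <ACCESS_TOKEN>",
        "x-fapi-interaction-id": interactionId,
      },
    }
  );

  // The assertion: the ASPSP must replay the same interaction id back.
  assert.strictEqual(
    response.headers.get("x-fapi-interaction-id"),
    interactionId,
    "x-fapi-interaction-id was not replayed"
  );
}

checkInteractionIdIsReplayed();
```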

Remember that all the tests currently assume that consent is granted at the ASPSP portal for each requested PSU Consent (Payment Service User Consent).

Also, you will find that some test cases are missing: for instance, what should happen when you send an invalid token to the payments endpoint. There is, however, a test case for the accounts endpoints where a token without the required permissions should get a 401 response.

In this example, you can see that for payments the consent model is a bit different because each access token doesn’t have a range of permissions, but is associated with a single payment consent id. So, in order to get a 401 response, the request can present the wrong token along with a payment call or present no token at all. The conformance tool is not sending any token in this instance.

So make sure you are aware of these gaps and cover the missing test cases with another approach, for example with a check like the sketch below.
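As an example of covering one of those gaps yourself, here is a sketch (the endpoint is a placeholder; the expected behaviour follows the consent model described above) that checks a payment request with no token at all gets rejected with a 401:

```ts
import assert from "node:assert";

async function checkPaymentIsRejectedWithoutToken() {
  // Hypothetical sandbox payments endpoint; no Authorization header at all.
  const response = await fetch(
    "https://sandbox.bank.example/open-banking/v3.1/pisp/domestic-payments",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // Minimal placeholder body; a real payload follows the OBIE schema.
      body: JSON.stringify({ Data: {}, Risk: {} }),
    }
  );

  // A missing (or wrong) token must yield a 401 Unauthorized.
  assert.strictEqual(response.status, 401);
}

checkPaymentIsRejectedWithoutToken();
```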

I found it quite hard to get a straight answer about exactly which test cases are covered and in what detail, so I hope this helps bring a bit more clarity in case you are facing the same issues.

Resources:

https://openbankinguk.github.io/knowledge-base-pub/conformance-tools/

https://openbanking.atlassian.net/wiki/spaces/DZ/pages/1061716467/Functional+Conformance

https://medium.com/zoidcoin-network/open-banking-use-cases-for-users-8678d11d770b

https://en.wikipedia.org/wiki/Open_banking

Load Tests: JMeter vs Gatling

Hello guys,

Continuing the review of performance test tools, today it is the turn of JMeter and Gatling, which more and more people seem to be using nowadays. Remember to always check your other options and see what best fits your project.

 

JMeter is a great and powerful tool, but depending on what you really need (something lighter), JMeter might become an overcomplex, slow, hard-to-maintain tool.

In-built Protocols Support
  • JMeter: HTTP, FTP, JDBC, SOAP, LDAP, TCP, JMS, SMTP, POP3, IMAP
  • Gatling: HTTP, JMS, MQTT

Speed to write tests
  • JMeter: Slow
  • Gatling: Fast

Support of “Test as Code”
  • JMeter: GUI oriented; scripting is possible but too complex and poorly documented; weak (Java); hard to maintain
  • Gatling: Scripts oriented; strong (Scala DSL); easier to maintain

Ramp-up Flexibility
  • JMeter: Plugins available to configure flexible load
  • Gatling: Supports ramp-up phases and flexible load

Test Results Analyzing
  • JMeter: Yes
  • Gatling: Yes

Resources Consumption
  • JMeter: Heavy to run tests with multiple users on a single machine; more memory consumption
  • Gatling: Lightweight; does not take up much of your machine’s memory

Easy to use with Version Control Systems
  • JMeter: No
  • Gatling: Yes

Number of Concurrent Users
  • JMeter: Thousands, under restrictions
  • Gatling: Thousands

Recording Functionality
  • JMeter: Yes
  • Gatling: Yes

Distributed Execution
  • JMeter: Yes
  • Gatling: Yes

Load Tests Monitoring
  • JMeter: Possible by adding listeners, but they consume more memory
  • Gatling: Yes; logs through the console during the run, and reports are created at the end

[Screenshot: Gatling report, Global Information stats]
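To ground the “Speed to write tests” and “Test as Code” rows, here is a minimal simulation sketch using Gatling’s JavaScript/TypeScript SDK (@gatling.io/core and @gatling.io/http; the classic DSL is Scala). The target URL is a placeholder and the exact API shape is version-dependent, so treat this as an illustration rather than a reference:

```ts
import { simulation, scenario, rampUsers } from "@gatling.io/core";
import { http } from "@gatling.io/http";

export default simulation((setUp) => {
  // Shared protocol configuration: base URL and default headers.
  const httpProtocol = http
    .baseUrl("https://example.com") // placeholder target
    .acceptHeader("application/json");

  // A scenario is plain code: easy to review, diff, and keep in your VCS.
  const browse = scenario("Browse catalogue").exec(
    http("Home").get("/"),
    http("Search").get("/search?q=shoes")
  );

  // Ramp 100 virtual users over 60 seconds (flexible load shaping).
  setUp(browse.injectOpen(rampUsers(100).during(60))).protocols(httpProtocol);
});
```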

 

JMeter is most used when:

  • You need to perform a complex load test involving different protocols
  • You want to record scenarios instead of writing a full scenario for every test
  • You want a robust support and training ecosystem
  • You need to simulate a specific load with custom ramp-up patterns
  • You prefer a desktop GUI app for script creation, or you do not know Scala well enough

 

Gatling solves some specific problems:

  • You can write performance scripts quickly in a code-based DSL
  • Scripts are plain code, easy to push to your VCS and maintain
  • Ramp-up phases and flexible load shapes are supported out of the box
  • It is lightweight and can simulate thousands of virtual users on a single machine with less memory than JMeter
  • Detailed reports are generated at the end of every run

Resources:

gatling.io/

Load Tests: JMeter vs Artillery

Hello guys,

Continuing the review of performance test tools, today I am going to post a comparison of JMeter and Artillery. Most people still prefer JMeter as it has been on the market longer, but it is always good to check your other options and see what best fits your project. I have used Locust and Artillery recently, and they are also great tools: easy to maintain and easy to script.

Just to recap:

JMeter is a great and powerful tool, but depending on what you really need (something lighter), JMeter might become an overcomplex, slow, hard-to-maintain tool.

In-built Protocols Support
  • JMeter: HTTP, FTP, JDBC, SOAP, LDAP, TCP, JMS, SMTP, POP3, IMAP
  • Artillery: HTTP, Socket.io, WebSocket

Speed to write tests
  • JMeter: Slow
  • Artillery: Fast

Support of “Test as Code”
  • JMeter: GUI oriented; scripting is possible but too complex and poorly documented; weak (Java); hard to maintain
  • Artillery: Scripts oriented; strong (JSON/YAML – YAML is the recommended format since it allows comments); easier to maintain

Ramp-up Flexibility
  • JMeter: Plugins available to configure flexible load
  • Artillery: Supports ramp-up phases and flexible load

Test Results Analyzing
  • JMeter: Yes
  • Artillery: Yes

Resources Consumption
  • JMeter: Heavy to run tests with multiple users on a single machine; more memory consumption
  • Artillery: Light to run tests with multiple users on a single machine; less memory consumption, does not take up many of your machine’s resources, and has multicore support

Easy to use with Version Control Systems
  • JMeter: No
  • Artillery: Yes

Number of Concurrent Users
  • JMeter: Thousands, under restrictions
  • Artillery: Thousands

Recording Functionality
  • JMeter: Yes
  • Artillery: No

Distributed Execution
  • JMeter: Yes
  • Artillery: Yes

Load Tests Monitoring
  • JMeter: Possible by adding listeners, but they consume more memory
  • Artillery: No; reports are only created at the end, or you can check the terminal logs


 

JMeter is most used when:

  • You need to perform a complex load test involving different protocols
  • You need the script recording functionality, rather than writing a full scenario for every test
  • You need to simulate a specific load with custom ramp-up patterns
  • You prefer a desktop GUI app for script creation, or you do not know JavaScript/YAML/JSON well enough

 

Artillery solves some specific problems:

  • You can write performance scripts pretty fast; there is even a “quick” mode where you don’t need to create any script
  • Scripts are easy to push to your VCS and to maintain
  • Artillery has WebSocket support out of the box and native support for Socket.io
  • You spend minimal time on maintenance, with no additional GUI applications
  • You can simulate thousands of virtual users on a local machine without needing multiple worker nodes; since it runs on Node.js, it is easy to install and lightweight
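The scripts themselves are YAML or JSON, but any custom logic is written as plain JavaScript/TypeScript “processor” functions referenced from the script via config.processor. A sketch of such a hook (the header and variable names are made up; the four-argument signature follows Artillery’s processor convention):

```ts
// functions.ts: compile to functions.js and reference it from the script:
//   config:
//     processor: "./functions.js"
//   scenarios:
//     - flow:
//         - post:
//             url: "/login"
//             beforeRequest: "setAuthHeader"

// Artillery calls processor hooks with (requestParams, context, events, next).
export function setAuthHeader(
  requestParams: any,
  context: any,
  events: any,
  next: (err?: Error) => void
) {
  requestParams.headers = requestParams.headers ?? {};
  // "token" is a hypothetical variable captured earlier in the scenario.
  requestParams.headers["Authorization"] = `Bearer ${context.vars.token}`;
  next(); // always call next() so the request proceeds
}
```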

 

Resources:

https://artillery.io/faq.html

Hiring and Onboarding a QA

Hello everybody!!!

I am super excited to share my debut meetup here; as it was an online event, it was consequently also my first international talk!!

Thanks everybody for the support and the feedback (glad it was useful for so many people). It was a great experience and I will definitely do this more often 🙂

 

Check the slides here

Thank you!!

Reducing the Scope of Load Tests with Machine Learning

Hello hello,

Today I am going to share a really interesting webinar by my friend Julio de Lima, one of the top QA influencers in Brazil. In this video, he talks about how to reduce the scope of load tests using machine learning.

Load testing execution produces a huge amount of data. Investigation and analysis are time-consuming, and numbers tend to hide important information about issues and trends. Using machine learning is a good way to solve data issues by giving meaningful insights about what happened during test execution.

Julio de Lima shows how to use K-means clustering, a machine learning algorithm, to reduce almost 300,000 records to fewer than 1,000 and still get good insights into load testing results. He explains K-means clustering, details the use cases and applications where this method fits, and gives the steps to help you reproduce a K-means clustering experiment in your own projects. You’ll learn how to use this machine learning algorithm to reduce the scope of your load testing and get meaningful analysis from your data faster.
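To make the idea concrete, here is a small sketch (my own illustration, not taken from the webinar) of K-means in TypeScript: it clusters response-time samples into k groups and keeps only the centroids, collapsing hundreds of thousands of records into a handful of representative points.

```ts
// Minimal 1-D K-means over response times (in ms). Illustrative only:
// real load-test data would usually be clustered on several features.
function kMeans(samples: number[], k: number, iterations = 50): number[] {
  // Initialise centroids spread across the sorted samples.
  const sorted = [...samples].sort((a, b) => a - b);
  let centroids = Array.from(
    { length: k },
    (_, i) => sorted[Math.floor(((i + 0.5) * sorted.length) / k)]
  );

  for (let iter = 0; iter < iterations; iter++) {
    // Assignment step: attach each sample to its nearest centroid.
    const buckets: number[][] = Array.from({ length: k }, () => []);
    for (const s of samples) {
      let best = 0;
      for (let c = 1; c < k; c++) {
        if (Math.abs(s - centroids[c]) < Math.abs(s - centroids[best])) best = c;
      }
      buckets[best].push(s);
    }
    // Update step: move each centroid to the mean of its bucket.
    centroids = buckets.map((bucket, i) =>
      bucket.length
        ? bucket.reduce((sum, v) => sum + v, 0) / bucket.length
        : centroids[i]
    );
  }
  return centroids;
}

// 300,000 noisy response-time records collapse into k representative values.
const data = Array.from({ length: 300_000 }, () => 100 + Math.random() * 900);
console.log(kMeans(data, 5).map((c) => c.toFixed(1)));
```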

 

Thank you Julio 🙂

Your automation framework smells!

A code smell is anything that can slow down your process or increase the risk of introducing bugs during maintenance.

The vast majority of the places I have worked at think it is okay to have an automation project with poor quality. Unfortunately, this is an idea shared by many QAs as well. Who should test the test automation? It is probably alright to have duplicated code, layers and layers of abstraction… After all, it is not the product code, so why should we bother, right?

 

Automation is a development project and should follow the same best practices to avoid code smells. You need to ensure a minimum of quality in your project, so: add a code review process, use a code quality tool, and test your code before pushing the PR (for example, change the expectations and check that the test fails).

Of course you don’t need to go too deep and create unit/integration/performance tests for your automation project (who tests the tests, right?), but you definitely need a readable, maintainable, scalable automation project. It is going to be maintained by the team, so it needs to be simple, direct, and easy to understand. If you spend the same amount of time on your automation as on your development code, something is wrong.

You want an extremely simple and easy-to-read automation framework, so you can have a lot more confidence that your tests are correct.

I will post here some of the most common anti-patterns that I have found during my career. You might have come across some others as well.

 

Common code smells in an automation framework

– Long class (God object): you need to scroll for hours to find something; it has loads of methods and functions, and you no longer know what the class is about.

 

– Long BDD scenarios: keep scenarios as simple and straightforward as possible; a long scenario is hard to maintain, read, and understand.

 

– BDD scenarios with UI actions: your scenarios should not rely on the UI, with steps like “click” or “type”. Use more generic actions like “send” or “create”, so that even if the UI changes, the step does not need to change.

 

– Fragile locators / XPath from hell: any small change in the UI fails the tests and requires updating the locator (see the sketch below).
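A quick illustration (assuming Playwright; the page and selectors are made up): the first locator encodes the page layout and breaks with any restructuring, while the second targets a stable attribute.

```ts
import { chromium } from "playwright";

async function run() {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto("https://example.com/login"); // hypothetical page

  // Fragile: breaks as soon as any wrapper div is added or reordered.
  // await page.locator("//div[3]/div[2]/form/div[1]/button[1]").click();

  // Resilient: targets a stable id or test attribute instead of layout.
  await page.locator("#submit").click();
  // or: await page.getByTestId("submit").click();

  await browser.close();
}

run();
```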

 

– Duplicate code: identical or very similar code exists in more than one location; any change means changing the code in multiple spots. Even variables should be pain-free to maintain.

– Overcomplexity: forced usage of overcomplicated design patterns where a simpler design would be enough. Do you really need dependency injection?

– Indecent exposure: too many classes can see you; limit your scope.

– Shotgun surgery: a single change needs to be applied to multiple classes at the same time.

 

– Inefficient waits: hard-coded waits slow down the automation test pipeline and can make your tests flaky (see the sketch below).
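Here is the same point as code (assuming Playwright’s test runner; the page and locators are made up): a fixed sleep versus a condition-based wait.

```ts
import { test, expect } from "@playwright/test";

test("results appear after searching", async ({ page }) => {
  await page.goto("https://example.com/search"); // hypothetical page
  await page.getByRole("button", { name: "Search" }).click();

  // Smell: a fixed sleep is either too long (slow suite) or too short (flaky).
  // await page.waitForTimeout(5000);

  // Better: wait on the actual condition; it fails fast with a clear message.
  await expect(page.locator("#results")).toBeVisible({ timeout: 5000 });
});
```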

 

– Variable mutations: very hard to refactor, since the actual value is unpredictable and hard to reason about.

– Boolean blindness: it is easy to assert on the opposite value, and it still type-checks (see the sketch below).
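A tiny example of what this means in TypeScript (the functions are made up):

```ts
// With bare booleans, swapped or negated arguments still compile.
function submitForm(validate: boolean, draft: boolean): void {
  console.log({ validate, draft });
}
submitForm(false, true); // are these flipped? The compiler cannot tell.

// Naming the options removes the ambiguity at the call site.
function submitFormSafe(options: { validate: boolean; draft: boolean }): void {
  console.log(options);
}
submitFormSafe({ validate: false, draft: true });
```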

 

– Inappropriate intimacy: too many dependencies on the implementation details of another class.

– Lazy class / freeloader: a class that doesn’t do much.

– Cyclomatic complexity: too many branches or loops; this may indicate that a function needs to be broken into smaller functions, or that it has potential for simplification.

 

– Orphan variable or constant class: a class that holds a collection of constants which belong elsewhere, where those constants should be owned by one of the other classes.

– Data clump: a group of variables is passed around together in various parts of the program, producing a long, hard-to-read parameter list. In general, this suggests grouping the variables into a single object and passing around only that object (see the sketch below).
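A before/after sketch in TypeScript (names invented for illustration):

```ts
// Smell: the same five values travel together through every call.
function createBooking(
  firstName: string,
  lastName: string,
  email: string,
  checkIn: Date,
  checkOut: Date
): void {
  /* ... */
}

// Fix: group the clump into one object; adding a field later touches one type.
interface BookingRequest {
  guest: { firstName: string; lastName: string; email: string };
  checkIn: Date;
  checkOut: Date;
}

function createBookingFromRequest(request: BookingRequest): void {
  /* ... */
}
```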

 

– Excessively long identifiers: in particular, the use of naming conventions to provide disambiguation that should be implicit in the software architecture.

– Excessively short identifiers: the name of a variable should reflect its function unless that function is obvious.

– Excessive return of data: a function or method that returns more than what each of its callers needs.

– Excessively long lines of code (or God Lines): a line of code which is too long, making the code difficult to read, understand, debug, refactor, or reuse.

 

How can you fix these issues?

  • Follow SOLID principles! Classes and methods should have a single responsibility!
  • Add a code review process and ask the team to review (developers and other QAs).
  • Watch how many parameters you are passing; maybe you should just pass an object.
  • Add locators that are resistant to UI changes; focus on IDs first.
  • Return an object with the group of data you need instead of returning loads of variables.
  • Name methods and classes as directly as possible; remember the SOLID principles.
  • If you have a method that just types text into a text field, it may be grouped into a function that performs the whole login().
  • If you have long lines of code, you might want to split them into functions, or extract parts into well-named variables, for example.
  • Think twice about boolean assertions; add a comment if you think one is not straightforward.
  • Follow a POM (Page Object Model) structure with helpers and common shared steps/functions to avoid long classes.
  • Do you really need this wait? You might be able to use a retry (see the sketch below), or your current framework may have ways to deal with waits properly.
  • Add a code quality tool to review your automation code (e.g. ESLint, Code Inspector).
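For the retry suggestion above, a generic helper in plain TypeScript (not tied to any framework; prefer condition-based waits when your framework offers them):

```ts
// Retry an async action until it succeeds or the attempts run out.
async function retry<T>(
  action: () => Promise<T>,
  attempts = 3,
  delayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await action();
    } catch (err) {
      lastError = err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}

// Usage sketch: fetchDashboard is a hypothetical flaky call.
// const data = await retry(() => fetchDashboard(), 5, 1000);
```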

 

Resources:

https://en.wikipedia.org/wiki/Code_smell

https://pureadmin.qub.ac.uk/ws/portalfiles/portal/178889150/MLR_Smells_in_test_code_Dec_9.pdf

https://www.sealights.io/code-quality/the-problem-of-code-smell-and-secrets-to-effective-refactoring/

https://slides.com/angiejones/automation-code-smells-45

https://medium.com/ingeniouslysimple/should-you-refactor-test-code-b9508682816

TestProject Cloud Integrations

Test clouds are a great solution for running your tests on multiple devices and browsers in parallel. They are really cost-effective, since you don’t need to own real devices and machines to get them running; there are some cons as well, such as possible bandwidth issues.

Many frameworks are already able to run your tests in the cloud, and it is really easy to set up: you just need to know the command or endpoint to pass, like you would on Jenkins or any other CI tool. Currently, TestProject can run tests in the SauceLabs and BrowserStack clouds, and you can set up either of them quite easily by following the documents for SauceLabs here and BrowserStack here.
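TestProject wires the cloud in through its own platform configuration, but as a general illustration of how these grids work, here is a sketch pointing selenium-webdriver at BrowserStack’s hub (the credentials and capabilities are placeholders):

```ts
import { Builder } from "selenium-webdriver";

async function run() {
  // A cloud grid is just a remote WebDriver endpoint: the same test code
  // runs; only the server URL and desired capabilities change.
  const driver = await new Builder()
    .usingServer("https://YOUR_USER:YOUR_KEY@hub-cloud.browserstack.com/wd/hub")
    .withCapabilities({
      browserName: "Chrome",
      // Vendor-specific options select the cloud OS/browser combination.
      "bstack:options": { os: "Windows", osVersion: "10" },
    })
    .build();

  try {
    await driver.get("https://example.com");
    console.log(await driver.getTitle());
  } finally {
    await driver.quit();
  }
}

run();
```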

 

Pros and cons of having your tests running in the cloud

 

Pros:
  • Dynamic test environment that is easy to set up
  • Faster than having real devices
  • Scalable
  • Customizable environment
  • Cost-effective
  • Accessible any time, 24/7
  • Improves team collaboration

Cons:
  • Possible bandwidth issues
  • Loss of autonomy
  • Small security risk
  • No free tools

 

Resources:

https://link.testproject.io/wpq

https://docs.testproject.io/testproject-integrations/browserstack-integration

https://www.lambdatest.com/blog/benefits-of-website-testing-on-cloud/