Using AI to Accelerate Test Automation

Hello hello peeps πŸ‘‹

I have been a bit of a workaholic lately, but all for a good cause 😊

Not sure if you know already, but I have started working on a project, The Chaincademy, helping developers (SDETs, engineers, coders, programmers, test automation engineers…), especially the junior ones coming into tech to find their first job πŸ’»

We launched our MVP before Xmas, and we are testing it with our audience (junior developers). So, in case you want to accelerate your career (for now, web3 only) and get your first experience as a developer, sign up for our newsletter to get access πŸŽ‰

The first time I actually ventured into AI and machine learning was back in 2018, in a machine learning workshop where I had to create an iOS app that replaced my face with an emoji based on my expressions πŸ˜† Really simple, but back then AI was not as good as it is right now (as we say in Brazil: "Na minha Γ©poca isso aqui era tudo mato", meaning "back in my day, this place was all woods").

And since the launch of ChatGPT I have been using AI on a daily basis to speed up my work, Bard more than ChatGPT actually (I think it is much better nowadays). So here are some tips on how I have been using it in test automation:

Test Automation

1. Test Case Generation:

  • Scenarios: Pass in user stories and acceptance criteria to generate corresponding test cases with detailed steps and expected outcomes.

    Prompt Example: Given a user tries to register with an invalid email address, describe the steps they would take and verify that the system displays an appropriate error message.
  • Edge Cases: Ask it to suggest potential edge cases or corner scenarios to test, ensuring comprehensive coverage of your application’s functionality.

    Prompt Example: For the checkout process, what happens if the user's internet connection drops while entering their payment information? List potential scenarios and expected outcomes.
  • Data-Driven Testing: Generate test data sets based on specific criteria (see the sketch after this list).

    Prompt Example: Generate 10 test cases for the login feature, covering cases with valid and invalid username/password combinations and different user types (admin, regular user).
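
To give an idea of how such a generated data set could be wired into a test, here is a minimal, hypothetical data-driven sketch in Cypress (JavaScript). The users array stands in for the AI-generated data, and the selectors, routes and credentials are assumptions made up for the example, not a real application.

```javascript
// Hypothetical data-driven login test in Cypress. The users array stands in
// for an AI-generated data set; selectors and routes are invented.
const users = [
  { type: 'admin',   email: 'admin@example.com', password: 'Adm1n!',   valid: true  },
  { type: 'regular', email: 'user@example.com',  password: 'P4ssw0rd', valid: true  },
  { type: 'regular', email: 'user@example.com',  password: 'wrong',    valid: false },
];

describe('Login - data-driven', () => {
  users.forEach(({ type, email, password, valid }) => {
    it(`handles a ${type} user with ${valid ? 'valid' : 'invalid'} credentials`, () => {
      cy.visit('/login');                                      // assumed login page
      cy.get('[data-test="email"]').type(email);
      cy.get('[data-test="password"]').type(password, { log: false });
      cy.get('[data-test="submit"]').click();

      if (valid) {
        cy.url().should('include', '/dashboard');              // assumed success page
      } else {
        cy.get('[data-test="error"]').should('be.visible');    // assumed error banner
      }
    });
  });
});
```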

2. Coding:

  • Test Script Automation: Describe the test actions and ask for a script (see the sketch after this list):

    Prompt Example: I want to test clicking the 'Submit Order' button and verifying the order confirmation page appears. Write a Cypress script in JavaScript for this scenario.
  • Code Completion: Get help with test assertions, locator identification, and handling complex interactions.

    Prompt Example: In my Cypress test, I'm trying to assert that the element contains the text 'Welcome back'. Please suggest the next line of code with assertion syntax.
  • Refactoring: Analyze your existing test scripts and suggest improvements like removing redundancy, increasing reusability, or optimizing execution time.

    Prompt Example: Analyze my Pull request for the search functionality. Can you add comments and suggest ways to improve readability, reduce redundancy, and speed up execution?
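
To show the kind of output the 'Submit Order' prompt above might produce, here is a minimal Cypress sketch. The selectors, the /checkout route and the confirmation text are assumptions for illustration only.

```javascript
// Hypothetical Cypress test for the 'Submit Order' scenario described above.
// Selectors, routes and the confirmation heading are invented for the sketch.
describe('Checkout', () => {
  it('shows the order confirmation page after submitting an order', () => {
    cy.visit('/checkout');                          // assumed checkout page

    cy.get('[data-test="submit-order"]').click();   // click 'Submit Order'

    // Verify the order confirmation page appears.
    cy.url().should('include', '/order-confirmation');
    cy.contains('h1', 'Order confirmed').should('be.visible');
  });
});
```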

3. Test Planning and Management:

  • Prioritization and Risk Assessment: Provide the test case details and application knowledge, so it can help prioritize tests based on risk or impact.

    Prompt Example: Given these 20 test cases for the new feature, rank them based on potential impact, speed of delivery and risk of failure. Explain your reasoning for each.
  • Maintenance: Identify outdated or irrelevant test cases and suggest updates or new tests to maintain coverage.

    Prompt Example: The application updated its login page layout. Identify test cases needing modification and suggest relevant updates based on the new UI.

4. Environment Management:

  • Mocks: Describe the data needs for specific tests and generate mock data or API responses, reducing reliance on real environments and dependencies. Remember you can also use contract tests (with Pact, for example), and these can be generated automatically from the code (see the mocking sketch after this list).

    Prompt Example: Generate mock API responses for the payment gateway integration test, simulating successful and failed scenarios based on test case requirements.
  • Environment Configuration: Ask for configurations for different test environments based on your application and testing requirements.

    Prompt Example: Suggest configurations for a staging environment replicating the production database but with limited user access. Include details for network settings and resource allocation.
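
As a rough illustration of the mocking idea above, here is a Cypress sketch that stubs a payment gateway endpoint with cy.intercept. The endpoint, payloads and selectors are invented for the example.

```javascript
// Hypothetical sketch: stubbing a payment gateway API in Cypress so the test
// does not depend on the real environment. Endpoint and payloads are made up.
describe('Payment gateway integration', () => {
  it('handles a successful payment response', () => {
    cy.intercept('POST', '/api/payments', {
      statusCode: 200,
      body: { status: 'approved', transactionId: 'tx-123' },
    }).as('payment');

    cy.visit('/checkout');
    cy.get('[data-test="pay"]').click();
    cy.wait('@payment');
    cy.contains('Payment approved').should('be.visible');
  });

  it('handles a failed payment response', () => {
    cy.intercept('POST', '/api/payments', {
      statusCode: 402,
      body: { status: 'declined', reason: 'insufficient_funds' },
    }).as('payment');

    cy.visit('/checkout');
    cy.get('[data-test="pay"]').click();
    cy.wait('@payment');
    cy.contains('Payment declined').should('be.visible');
  });
});
```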

Thanks to Abel from Graph Protocol πŸ‘ for sending over the great resources that I have been using to learn how to write better prompts for software development.

Equal Experts Geek Conference 2023

Hey guys, 4 months ago I gave a 5-minute lightning talk at the Equal Experts Conference about how QA will look in the future.

We went through the evolution of the role and where it stands right now, then quickly talked about the trends that are coming so you can already prepare yourself and stay up to date πŸ™‚


In this 5-minute talk, we quickly go through the future of the Quality Assurance (QA) position and discuss the evolution of the QA role in response to emerging trends.

The QA role has come a long way from its traditional focus on manual testing and bug detection. As technology advances, QA professionals are adapting to new demands and becoming integral contributors to the software development process.

The future of the QA position will be marked by AI tests, tests in the cloud, web3 tests, and alerting and monitoring, along with strong soft skills. By embracing these trends and developing the necessary skills, QA professionals will be well equipped to drive quality and innovation in the ever-changing software development industry.

Automation challenges

Hello guys!!

I thought about sharing something that we have all gone through at some point in our careers. Some of these challenges are related to a lack of standards, knowledge and processes, but others are related to the company’s culture and people’s mindset (this is the biggest challenge and the most difficult to change).

I posted about the difference between a growth mindset and a fixed mindset some years ago, after joining a workshop where Joanna Chwastowska (Engineering Lead at Google) explained how she learned to have a growth mindset; the picture below summarizes it:

 

Basically, we are always learning and you shouldn’t feel ashamed to admit that πŸ™‚

 

Scope

What scenarios should I have for this layer of tests? What type of tests should I be responsible for? These should be the first questions you ask when deciding the scope of your tests. Each layer needs a different scope, otherwise you are not only going to duplicate scenarios but also end up with an extensive pipeline that is hard or impossible to maintain.

Decide which layers you are going to cover: unit, integration, E2E tests… then decide the scenarios for each layer. For the unit tests you need an extensive number of scenarios covering edge cases (special characters, etc.), then integration between components, and finally E2E tests on top with a reduced scope and no mocks. You should also consider UI tests comparing snapshots if you have a frontend.
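
As a small illustration of that edge-case-heavy unit layer, here is a hedged sketch in JavaScript using Jest-style test.each; sanitizeUsername is an invented function used only for the example.

```javascript
// Hypothetical unit-test sketch (Jest syntax) for the edge-case-heavy bottom
// layer. sanitizeUsername is an imaginary function used only for illustration.
const sanitizeUsername = (input) => input.trim().toLowerCase();

describe('sanitizeUsername', () => {
  test.each([
    ['  Alice  ', 'alice'],        // surrounding whitespace
    ['BOB', 'bob'],                // upper case
    ['joΓ£o', 'joΓ£o'],              // accented characters
    ['user!@#', 'user!@#'],        // special characters kept as-is
    ['', ''],                      // empty string
  ])('turns %j into %j', (input, expected) => {
    expect(sanitizeUsername(input)).toBe(expected);
  });
});
```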

 

Team collaboration

Make sure you have a team that is working towards the same goal, and improve communication as much as possible. Do workshops, demos, pair programming and code reviews, and get feedback so you can keep improving until you find the best way to work while keeping the business happy.

Understand the expectations and align on them; there is nothing better than having a nice bunch of people to spend ~8 hours a day with, 5 days a week, and achieving something together.

 

Time constraints

How many releases do you have per day? How many projects are you allocated to? How many developers do you have in your team? You might have noticed that you usually have more than one developer for each QA in a team, and this is okay as long as you manage what you are going to test. You can’t save the world, so don’t test everything; focus on the end-user flow above everything else, as this is the front door of the product.

Something else to take into account is the scope of the regression pack; you probably want automation for that, right? I am completely against having manual tests in the regression pack unless there is a strong reason for it.

 

Finding elements

Do you remember when you couldn’t click on that element because there was a div on top of it? That is one of the problems you might have faced already. Or maybe it was a badly structured XPath? Or too many elements with the same CSS selector?

If you are testing React apps, here is something that helped me when writing the tests: I asked the developers to add a test attribute (data-test, data-test-id, data-testid or data-cy) to the elements. Adding this kind of attribute is considered a best practice, since it makes the automation resilient to change and is dedicated to tests.
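
As a rough sketch of how that pays off in a test, here is a minimal Cypress example that locates a button purely through a dedicated data-cy attribute; the /login route and the submit-login attribute value are assumptions for illustration.

```javascript
// Minimal, hypothetical Cypress test locating an element only through its
// dedicated test attribute. Route and attribute value are invented.
describe('Login button', () => {
  it('is found through its dedicated data-cy attribute', () => {
    cy.visit('/login');                      // assumed route
    cy.get('[data-cy="submit-login"]')       // resilient to CSS/layout changes
      .should('be.visible')
      .click();
  });
});
```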

 

Flaky tests

Yeah… we all know the struggle is REAL!

I’ve faced this issue recently when writing tests with Espresso on Android apps; instead of using waits, remember to use idling resources, which synchronize the subsequent operations in a UI test.

For React apps you can use frameworks like Cypress, TestCafe and Detox, which run in the same run loop as the application being tested and can automatically wait for assertions and commands before moving ahead.
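
To make that concrete, here is a minimal, hypothetical Cypress sketch that relies on retry-able assertions instead of hard-coded sleeps; the /search page and the data-cy attribute are made up for the example.

```javascript
// Hypothetical sketch: let Cypress retry the assertion instead of sleeping.
// The /search page and data-cy attribute are invented for the example.
describe('Search results', () => {
  it('waits for results without hard-coded sleeps', () => {
    cy.visit('/search?q=cypress');

    // Avoid cy.wait(5000): a fixed sleep is either too long or too short.
    // A retry-able assertion keeps re-querying the DOM until it passes
    // or the default timeout is reached.
    cy.get('[data-cy="result-item"]').should('have.length.greaterThan', 0);
  });
});
```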

These are some of the reasons you can have flaky tests, but there are others, like:

  • Environment/server is not stable
  • 3rd party system integration is not stable
  • Concurrency tests
  • Caching
  • Setup/teardown tests
  • Dynamic content

Identify the reason first so you can take the correct action, but definitely tag the test as soon as possible, since you won’t be able to trust it until it has been fixed.

 

#BeSafe

TestComplete 10.1 Released with Real-Device Support for iOS

Hello guys, today I am posting about automating tests on mobile apps with TestComplete. I used to work with Cucumber and Calabash, but if you want to try another tool, this one is widely used too.

The new TestComplete 10.1 release now supports real-device test automation for iOS applications so you can rest assured that your mobile apps are of the highest quality. In the recent SmartBear study, State of Mobile Testing 2014, 21% of those surveyed told us that app quality was the greatest challenge for success in mobile.


Efficient mobile application testing results in higher quality applications. With a good test automation tool in your toolbox, you can automate tasks like data-driven testing, authentication testing, and functional checks, so your team can focus on the more time-intensive application testing that requires human interaction.

So what does the new release of TestComplete bring to iOS mobile testing?

  • Real-Device Testing for iOS versions 6 and 7
  • Object Recognition for iOS Applications
  • Multi-Device Testing
  • Device Pools

Native Application Testing

The new release supports testing of native iOS applications without the need to root your devices, including keyword and scripted tests. By using the new Mobile Screen, which replicates your application on your desktop computer as you test, with our Object Browser, you can easily make the connection between screen elements and their associated objects in the code.


Object Recognition

TestComplete 10.1 has full object recognition of tested iOS applications, including object parameters for low-level testing. Use the Name Mapping feature to label objects like grids, buttons, and layers within your mobile app so you can easily identify them in both tests and test results.

By relying on object recognition, your tests are immune to GUI-level changes during the development cycle that can occur when an automation tool relies solely on screen elements.


Multi-Device Testing

By leveraging our object recognition and common controls technology, TestComplete 10.1 ensures that your automated tests are compatible with any Apple device running iOS 6 or 7. This means you can create a test once and run that same test on all of your devices, regardless of screen resolution or aspect ratio.

TestComplete 10.1 also includes a device pool feature that allows you to manage all of the devices in your testing pool. You can start and stop tests on any device from this central pool, allowing you the ultimate flexibility in how, where and when your automated tests run.

This isn’t free software, but you can try it for 30 days.

Bye guys! This is another well-known tool in the automation market.

Source: http://blog.smartbear.com/test-automation/testcomplete-10-1-released-with-real-device-support-for-ios/