Talk about QA, Test Automation, Blockchain and Web3
Author: Rafaela Azevedo
Full Stack SDET with 16+ years of experience in QA, 14+ years of experience in Test Automation, and 8+ years in Leadership, delivering and releasing software on different platforms (Mobile, Desktop, Web)
Became a STEM Ambassador and a STEM Women member in 2020, making an impact and bringing more people into the STEM area. Contributor to TestProject and instructor at Test Automation University.
Today I am going to share something that I learned recently. I am aware that many people have probably been using this for a long time, but I moved away from Java and Selenium quite some time ago and only recently got back into it.
So here is a boilerplate for an automated test project using Spring Boot's dependency injection with Selenium, Gradle, and Cucumber.
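To give an idea of what the dependency injection part looks like, here is a minimal sketch (not the boilerplate itself; class names like DriverConfig and SearchSteps are only illustrative), assuming the cucumber-spring and spring-boot-starter-test dependencies are on the classpath:
import io.cucumber.java.en.Given;
import io.cucumber.spring.CucumberContextConfiguration;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class DriverConfig {
    // Spring owns the WebDriver lifecycle, so step classes never call new ChromeDriver() themselves.
    @Bean(destroyMethod = "quit")
    public WebDriver webDriver() {
        return new ChromeDriver();
    }
}

// One glue class carries the Spring test context for Cucumber.
@CucumberContextConfiguration
@SpringBootTest(classes = DriverConfig.class)
public class SearchSteps {

    // cucumber-spring builds the glue classes through the Spring context, so the shared driver is injected.
    @Autowired
    private WebDriver driver;

    @Given("I open the home page")
    public void iOpenTheHomePage() {
        driver.get("https://example.com");
    }
}
The benefit is that every step definition class shares the same Spring-managed WebDriver instead of creating and quitting its own driver.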
Having an alert system is crucial to having confidence in your product. There are two main points you need to consider, Monitoring and Alerts:
Monitoring consists of dashboards and reports to display the metrics.
Alerts involve taking some type of action such as notifying someone, writing to a log, and raising an alert on a dashboard.
Having an alert system is crucial to delivering the right information to the right people at the right time.
Important Metrics to Watch
The list is long and you can get lost in all the possible metrics you could watch and raise an alert upon. A good strategy is to start with the riskiest ones: the ones that can stop the system from working, and those related to stability and performance. Here is a list of some of the most-watched metrics:
CPU utilization
Memory utilization
Memory breakup
Load balancer (Number of instances running)
Services and processes running
Processor queue length
Disk usage
Network up
Plugin
Crontab executed
Event logs generated
Application details
Expiring Certificates
Docker/Kubernetes containers
These metrics can be swapped or removed, or you can even add others according to your needs. The goal here is to find the best metrics to watch and to raise different levels of alerts to the specific groups of people at the right time. These metrics help the team check the system’s health and also increase confidence and reliability, not only in the recovery steps when something goes sideways but also in the alert system itself.
Different Types of Metrics
I found a good framework for creating alerts. A good alert needs to have these properties:
Actionable: Indicates a problem for which the recipient is well placed to take immediate action.
Investigable: Indicates a problem whose solution is not yet known by the organization.
Do you need to take an action every time? No. You can specify a capacity limit, and once the metric reaches this limit the first-line support acts upon it; when it reaches a second limit, you trigger a different action, like calling the second-line support or running a script to restart a service.
Sometimes there is no action to take. For example, “CPU utilization” or “Packet loss.” These are some classic FYI alerts. Instead of alerting, these things should appear on a dashboard for use in troubleshooting when a problem is already known to exist.
Do you need to send the alert straight away? Also no, especially when you have self-healing mechanisms. There’s no need to bother a human to fix the issue; the response should be automated.
Depending on your recovery strategy, you can wait 5, 10, or 15 minutes to check if the metric is still showing a problem and then, if it is, raise the alert accordingly. In the alert, you can always include a link to the documentation with recovery steps (e.g. a “runbook” indicating the steps to perform).
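As a rough illustration of the two ideas above (waiting before alerting and escalating through support levels), here is a small sketch; the thresholds, sample counts, and notification targets are assumptions made up for the example:
// Minimal sketch of the "wait before you page someone" idea; not tied to any specific monitoring tool.
public class CpuAlertPolicy {

    private int consecutiveBreaches = 0;

    // Called once per polling interval (e.g. every 5 minutes) with the latest CPU reading.
    public void evaluate(double cpuUtilisation) {
        if (cpuUtilisation < 80.0) {
            consecutiveBreaches = 0;   // metric recovered, reset the counter
            return;
        }
        consecutiveBreaches++;

        if (consecutiveBreaches == 2) {
            notifyFirstLineSupport(cpuUtilisation);   // still high after two samples in a row
        } else if (consecutiveBreaches == 4) {
            notifySecondLineSupport(cpuUtilisation);  // escalate if nobody has fixed it yet
        }
    }

    private void notifyFirstLineSupport(double value) {
        System.out.println("WARN: CPU at " + value + "% - first-line support alerted");
    }

    private void notifySecondLineSupport(double value) {
        System.out.println("CRITICAL: CPU at " + value + "% - escalating to second-line support");
    }
}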
Remember to have an FYI for when the problem is fixed as well, so if something failed in the middle of the night and is working again in the morning, you get the latest update about the problem.
Always keep records and logs to investigate what happened and how the system reacted.
Dan Slimmon has a good spreadsheet example of an alert framework:
Monitoring and Alert Tests
Independently of the tool you are using for your metrics and alerts (Grafana, CloudWatch, Sensu, Prometheus, or New Relic), it is important to have a test strategy that covers this critical part of your system.
Some common tests you need to perform are much more DevOps related, like scaling down instances on AWS, simulating high memory load with scripts, or even creating specific scripts to remove log files from a Docker container.
Points to keep in mind:
Is it worth automating?
Think about whether the project has a lot of changes and maintenance. How much effort would it take to create the automation for these scenarios? How complex would these scenarios be? Does automating them bring any real value?
There are some tools that you can use to perform chaos engineering:
AWS FIS (Fault Injection Simulator) to automate some of the scenarios and add them to your continuous delivery pipeline. It’s a fully managed service for running fault injection experiments on AWS that makes it easier to improve an application’s performance, observability, and resiliency.
Chaos Mesh to orchestrate chaos experiments within Kubernetes environments. It can simulate various types of faults in a distributed system.
Many other tools can be used and you can check the list in this post.
At what stages do you want to test, and with what scope?
Check if you need to do manual smoke tests for each feature and then a bigger regression after merging the tickets, or manually test the feature and add the automation for that scenario at the same time, so that at the end of the pipeline you have the full regression.
Maybe you don’t even need automation running for this project, and doing manual feature tests for the tickets plus a report with the full regression scenarios is enough.
Whatever strategy you choose, make sure you have the maximum confidence you can within the current constraints, and a plan to improve in the near future.
Automating these tests is complex and always involves a good knowledge of infrastructure. I personally had and still have to do a lot of research when it comes to it.
There are several interesting web app automation scenarios that we can improve using AI:
Reduce the execution time: Nowadays you can already target only the affected features even without an AI test automation project, but with AI you can add this capability without having Cucumber in place or even needing to tag the scenarios or features. The AI should be able to identify the features related to a change automatically.
Convert manual test cases to automation: you can use Natural Language Processing (NLP) to automatically translate manual test cases into automated test cases. I have seen this done with Cucumber rather than AI so far, but it is totally possible, as AI models work on datasets.
Create different data combinations: by training the AI to identify the possible combinations based on a dataset, you increase the data coverage and bring more confidence to the automation project.
Visual validations: Many tools perform this functionality already. I personally tried one tool ages ago called Percy, but you can also try other popular tools like Applitools and Telerik.
Test execution stability or self-healing automation: AI can automatically locate web elements when the primary locators fail. You can see this feature in some cutting-edge automation tools like Mabl, Xray, and Functionize. Self-healing employs data analytics to identify objects in a script even after they have changed. When your script fails because it cannot find the object it expected, the self-healing mechanism provides a fuller understanding and analysis of the options. Rather than shutting down the process, it examines objects holistically, evaluates the attributes and properties of all available objects, and uses a weighted scoring system to select the one most similar to the one previously used.
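Here is a rough sketch of the weighted-scoring idea using plain Selenium; the attributes compared and the weights are illustrative assumptions, and tools like Mabl or Functionize use much richer models:
import java.util.List;
import java.util.Objects;
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class SelfHealingLocator {

    // Attributes remembered from the last time the element was found successfully.
    private final String lastKnownTag;
    private final String lastKnownText;
    private final String lastKnownClasses;

    public SelfHealingLocator(String tag, String text, String cssClasses) {
        this.lastKnownTag = tag;
        this.lastKnownText = text;
        this.lastKnownClasses = cssClasses;
    }

    public WebElement find(WebDriver driver, By primaryLocator) {
        try {
            return driver.findElement(primaryLocator);
        } catch (NoSuchElementException e) {
            // Primary locator failed: score every candidate with the remembered tag and pick the closest match.
            WebElement best = null;
            int bestScore = 0;
            List<WebElement> candidates = driver.findElements(By.cssSelector(lastKnownTag));
            for (WebElement candidate : candidates) {
                int score = 0;
                if (Objects.equals(lastKnownText, candidate.getText())) {
                    score += 3;   // visible text is the strongest signal in this toy model
                }
                if (Objects.equals(lastKnownClasses, candidate.getAttribute("class"))) {
                    score += 2;
                }
                if (score > bestScore) {
                    bestScore = score;
                    best = candidate;
                }
            }
            if (best == null) {
                throw e;   // nothing similar enough, surface the original failure
            }
            return best;
        }
    }
}
The point is simply that, when the primary locator fails, every candidate is scored against the attributes captured on the last successful run, and the best match is used instead of failing straight away.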
Becoming a Domain Model Expert
Creating a model for your test automation requires a domain expert, therefore it is critical to have a test automation specialist who also knows the business, so the AI can bring the desired innovation. With such extensive use cases, AI systems will need different parameters from domain experts.
Be careful not to run more automated tests than you actually need. A stage of supervision while the AI is learning the patterns is definitely needed.
As part of my STEM Ambassadors activity to bring more young women to the STEM area, I recorded a video telling my career history and how I ended up in tech.
As you know, women are still a minority in tech, and one of the reasons is that some people still believe there is a rule saying tech is a man’s job or a woman’s job. This is me when I hear these comments:
I wish the world was binary and simple like that.
Here is the presentation in case you are curious about how I ended up where I am now 😊
Today I am going to post about another problem that cost me a lot of research time and had me pulling my hair out.
This was the initial code:
// Problematic version: the element injected by PageFactory is a proxy, and the script is run on a brand new ChromeDriver unrelated to it.
public void javascriptExecutor(String script, WebElement element) {
WebDriver driver = new ChromeDriver();
JavascriptExecutor jse = (JavascriptExecutor) driver;
jse.executeScript(script, element);
}
If you ever encounter the error complaining that Selenium couldn’t find the WebDriver instance, then check the complete explanation of the problem here.
TL;DR: when using PageFactory, you need to extract the underlying WebElement with WrapsElement and then get the WebDriver from it. Basically, you need to get the real element, and then you will be able to use the JavaScript executor:
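Something like this (a sketch of the fix; the method signature mirrors the snippet above, and the WrapsElement/WrapsDriver casts, which live in org.openqa.selenium in Selenium 4, are the key part):
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.WrapsDriver;
import org.openqa.selenium.WrapsElement;

public void javascriptExecutor(String script, WebElement element) {
    // Unwrap the proxy created by PageFactory to get the real WebElement...
    WebElement realElement = ((WrapsElement) element).getWrappedElement();
    // ...and take the WebDriver that actually owns it instead of creating a new one.
    WebDriver driver = ((WrapsDriver) realElement).getWrappedDriver();
    JavascriptExecutor jse = (JavascriptExecutor) driver;
    jse.executeScript(script, realElement);
}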
The first post of the year will be a quick one. It took me a while to figure out, since it has been ages since I coded in Selenium/Java and used PageFactory (which I thought was deprecated, but it is only deprecated in C#).
If you ever come across an error like this when using PageFactory:
stale element reference: element is not attached to the page document
PageFactory initializes the elements the first time you run the automation, and when the page changes (which happens on Angular and React pages) Selenium loses the reference to that element and needs to find it again in the new DOM.
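One common workaround is to catch the stale reference and re-run PageFactory.initElements so the proxies are rebuilt against the new DOM. A minimal sketch, where CheckoutPage and the submit button are purely illustrative names:
import org.openqa.selenium.StaleElementReferenceException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

public class CheckoutPage {

    private final WebDriver driver;

    @FindBy(id = "submit")
    private WebElement submitButton;

    public CheckoutPage(WebDriver driver) {
        this.driver = driver;
        PageFactory.initElements(driver, this);
    }

    public void submit() {
        try {
            submitButton.click();
        } catch (StaleElementReferenceException e) {
            // The page re-rendered between lookup and click; rebuild the proxies against the new DOM and retry once.
            PageFactory.initElements(driver, this);
            submitButton.click();
        }
    }
}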
Hello guys, here are the slides from the workshop on Blockchain Tests that I gave earlier this week, as well as some answers to the questions that were raised during the session. I can say that, as my first in-person workshop since the pandemic, it was a great experience with a full room 😃
Unfortunately, there is no recording of it, but you may clone the repo and follow the coding instructions to build the test class and methods, then compare with version 2 of the project, which has the most recent code.
The Solidity require function guarantees the validity of conditions that cannot be detected before execution. It checks inputs, contract state variables, and return values from calls to external contracts.
The address type comes in two flavours, which are largely identical:
address: Holds a 20 byte value (size of an Ethereum address).
address payable: Same as address, but with the additional members transfer and send.
The idea behind this distinction is that address payable is an address you can send Ether to, while a plain address cannot be sent Ether.
Type conversions
Implicit conversions from address payable to address are allowed, whereas conversions from address to address payable must be explicit via payable(<address>).
Explicit conversions to and from address are allowed for uint160, integer literals, bytes20 and contract types.
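As a small illustration (not part of the workshop repo, and targeting Solidity 0.8), here is a contract that uses both require checks and the explicit payable conversion:
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Donations {
    // The deployer becomes the owner; the initializer runs as part of contract creation.
    address public owner = msg.sender;

    // Accept Ether so there is a balance to withdraw from.
    receive() external payable {}

    function withdraw(address to, uint256 amount) public {
        // require() validates conditions that can only be checked at execution time.
        require(msg.sender == owner, "Only the owner can withdraw");
        require(amount <= address(this).balance, "Not enough balance");

        // A plain address cannot receive Ether, so convert it explicitly with payable().
        payable(to).transfer(amount);
    }
}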
Other tools that you can use to test Blockchain applications
Today I am going to post a quick snippet that I used recently and that made me spend quite a lot of time, just because I didn’t read the JMeter documentation in the first place 😅
In the pom.xml you will need to set some JMeter configurations:
Now you can run your JMeter tests passing a JSON payload to the Maven command line; if you don’t pass this parameter, JMeter is going to use the default one.
Hello guys!! After my talk about Testing Blockchain Applications at #TechKnowDay, I thought it was worth posting a guide on the same topic, as I felt blockchain itself was complex to understand. So, this is a quick tutorial on how to run tests with the Truffle framework.
If you are still wondering where you can use blockchain, here is a table with the percentage of companies that are focusing on it and the use cases.
We are going to use a bit of JavaScript for the web part and Solidity for the blockchain project. For this project you will need npm and Node installed already. We are going to install Truffle as part of the setup. You will also need MetaMask and Ganache; you can find all of them below:
Open your MetaMask plugin and click on new Custom RPC Network. Type the following (Currency Symbol will be automatically populated after you type the Chain ID) and save.
Open Ganache and click on Quick Start (Ethereum). Double-check that the server has this configuration by clicking on the settings icon on the top right and then the Server tab.
Click on Show keys of the account you have created and copy the private key.
Back on MetaMask, click on Import Account and add the private key you have copied from the previous step.
Setting up the project
On your terminal run:
npm install -g truffle
If you want to follow only the test steps, then download the first release, which already contains the installation files of the project:
If you want to try the installation yourself from scratch, just go to this link and follow the Getting Started guide. This guide won’t focus on the installation of the framework or the setup of the contracts and migrations.
Just a heads up that for this project I am using the pet-shop box instead of the MetaCoin from the Getting Started guide.
Installation
After downloading the project, you will need to run some commands to set it up. Open your terminal at the root of the project and run the following.
Run the development console
truffle develop
Compile and migrate the smart contracts. Inside the development console you don’t need to prefix the commands with truffle.
compile
migrate
To get out of the truffle console type
.exit
Create a new file inside the test folder called TestAdoption.sol and import all the needed modules, such as the assertions, the deployed addresses, and the contract that will be tested, as shown below.
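The top of the file, with the imports and the variables block referenced in the next steps, looks like this (it matches the full listing further down):
pragma solidity >=0.5.0;

// Gives us various assertions to use in our tests.
import "truffle/Assert.sol";
// When running tests, Truffle will deploy a fresh instance of the contract being tested to the blockchain.
import "truffle/DeployedAddresses.sol";
// The smart contract we want to test.
import "../contracts/Adoption.sol";

contract TestAdoption {
    // The address of the adoption contract to be tested
    Adoption adoption = Adoption(DeployedAddresses.Adoption());

    // The id of the pet that will be used for testing
    uint256 expectedPetId = 8;

    // The expected owner of the adopted pet is this contract
    address expectedAdopter = address(this);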
Add the testUserCanAdoptPet() function below the variables block.
Note that here you are creating a function that will test that you can adopt a pet. For this, you call the adopt function with the expected pet id and compare the returnedId with the expectedPetId.
Try to explore and add other asserts like checking if it’s not returning null.
function testUserCanAdoptPet() public {
uint256 returnedId = adoption.adopt(expectedPetId);
Assert.equal(
returnedId,
expectedPetId,
"Adoption of the expected pet should match what is returned."
);
}
Asserting the adopter of the pet
Add the testGetAdopterAddressByPetId() function below the previous function.
This function will check if the adopter address for that pet is the same as the one in the adopters list.
Try to explore and add other asserts, like checking that the age of the pet is returned correctly; for that you would need to add the age to the Adoption.sol contract, then compile and migrate again.
function testGetAdopterAddressByPetId() public {
address adopter = adoption.adopters(expectedPetId);
Assert.equal(
adopter,
expectedAdopter,
"Owner of the expected pet should be this contract"
);
}
Asserting the list of adopters
Now add the testGetAdopterAddressByPetIdInArray() function below the previous function.
This function will check if the adopter stored in memory for this petId is the same as the expectedAdopter.
It’s almost the same test as before, but this time we are explicitly storing the adopters in memory rather than in the contract’s storage and then comparing them.
function testGetAdopterAddressByPetIdInArray() public {
address[16] memory adopters = adoption.getAdopters();
Assert.equal(
adopters[expectedPetId],
expectedAdopter,
"Owner of the expected pet should be this contract"
);
}
}
You should have something like this:
pragma solidity >=0.5.0;
// The first two imports are referring to global Truffle files, not a `truffle` directory.
// Gives us various assertions to use in our tests.
import "truffle/Assert.sol";
// When running tests, Truffle will deploy a fresh instance of the contract being tested to the blockchain.
import "truffle/DeployedAddresses.sol";
// The smart contract we want to test.
import "../contracts/Adoption.sol";
contract TestAdoption {
// The address of the adoption contract to be tested
Adoption adoption = Adoption(DeployedAddresses.Adoption());
// The id of the pet that will be used for testing
uint256 expectedPetId = 8;
//The expected owner of adopted pet is this contract
address expectedAdopter = address(this);
// Testing the adopt() function
function testUserCanAdoptPet() public {
uint256 returnedId = adoption.adopt(expectedPetId);
Assert.equal(
returnedId,
expectedPetId,
"Adoption of the expected pet should match what is returned."
);
}
// Testing retrieval of a single pet's owner
function testGetAdopterAddressByPetId() public {
address adopter = adoption.adopters(expectedPetId);
Assert.equal(
adopter,
expectedAdopter,
"Owner of the expected pet should be this contract"
);
}
// Testing retrieval of pet owner storing getAdopters in memory
function testGetAdopterAddressByPetIdInArray() public {
// Store adopters in memory rather than contract's storage
address[16] memory adopters = adoption.getAdopters();
Assert.equal(
adopters[expectedPetId],
expectedAdopter,
"Owner of the expected pet should be this contract"
);
}
}
Running the tests
Open your terminal on the root of the project and run:
truffle test
If everything went okay you will see green checks on your terminal like this:
You can check the final code with the latest release (Spoiler alert: You will see pictures of my dog, my dog’s best friend, my friend’s cat and my previous dog)