Automated tests in a CI/CD pipeline

Good pipelines are stable and support frequent, small releases. When building the pipeline you need to include not only the build and unit test stages, but also the e2e tests, the smoke tests and the deploys to all the environments, so you have as little human interaction as possible and prevent releases from being deployed by mistake.

CD pipeline

Below you have an image of a simple pipeline with the Build, Deploy, Test and Promote/Release stages. Imagine a developer merging their changes to the master branch, triggering a new build on this pipeline. The CI job builds the application, deploys it to the target environment and runs some tests; if everything is ok and you have CD in place, the changes are promoted to the production environment. If there is an issue in any of these stages, the CI job stops and you will have a red pipeline in the history of this job. The basic stages look like this (a minimal Jenkinsfile sketch comes after the list):

Build – The stage where you build your application; I usually already run the unit tests here.

Deploy – The stage where you deploy your application; this could be to your dev environment, or to dev and qa environments at the same time.

Test – Depending on which environment you have deployed your application to, you decide what types of tests to run, for instance: smoke tests for the staging environment or e2e tests for the QA environment.

Promote – The stage where you add the script that deploys your application to the production environment, in case the tests passed.
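
To make this concrete, below is a minimal declarative Jenkinsfile sketch of these four stages. The shell scripts (./build.sh and friends) are hypothetical placeholders for your own build, deploy and test commands.

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './build.sh'            // hypothetical: compile and run the unit tests
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh dev'       // hypothetical: deploy to dev (or dev and qa)
            }
        }
        stage('Test') {
            steps {
                sh './run-e2e-tests.sh qa' // hypothetical: e2e on qa, smoke on staging
            }
        }
        stage('Promote') {
            steps {
                sh './deploy.sh prod'      // hypothetical: release to production
            }
        }
    }
}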

 

Continuous Integration (CI), Continuous Deployment (CD) and DevOps have a common goal: small, frequent releases of high-quality software. Of course, sometimes things go wrong along the way and you discover a bug after the deployment to prod, but as you have this pipeline with CI and CD running, the fix will be deployed to prod as soon as the bug is fixed and no new bugs are introduced. For this reason, no matter what your release cycle looks like, integrated automated tests are one of the keys to having a CI/CD job running and doing its job.

Having a continuous integration pipeline running tests on all the environments after each deployment helps you get fast feedback and reduces the volume of merge conflicts; everyone has a clear view of the status of the build, and a current “good build” is always available for demos/releases.

 

Some test/general recommendations

Separate types of tests for each stage – If you have different types of tests running in your CI pipeline (performance, UI, security tests), it is better to separate each of them into a different stage so you have a clear view of which one failed (see the sketch below).
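
As a sketch, in a declarative Jenkinsfile this separation could look like the block below; the three shell scripts are hypothetical placeholders for however you trigger each suite.

pipeline {
    agent any
    stages {
        // One stage per test type: when the build goes red, the stage name
        // immediately tells you which kind of test failed.
        stage('UI Tests')          { steps { sh './run-ui-tests.sh' } }
        stage('Performance Tests') { steps { sh './run-performance-tests.sh' } }
        stage('Security Tests')    { steps { sh './run-security-tests.sh' } }
    }
}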

Develop your pipeline as code – This is general advice, as it has been a long time since I last saw teams building their pipelines through the Jenkins UI. Always create a .jobdsl script containing the pipelines that your project is going to have, and create/update them automatically.
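
For instance, with the Jenkins Job DSL plugin a pipeline job can be declared in a seed script like the sketch below (the job name and repository URL are made up):

// A seed job runs this Job DSL script and creates/updates the pipeline
// automatically, instead of someone clicking it together in the Jenkins UI.
pipelineJob('my-app-pipeline') {                 // hypothetical job name
    definition {
        cpsScm {
            scm {
                git {
                    remote {
                        url('https://github.com/your-org/my-app.git') // hypothetical repo
                    }
                    branch('master')
                }
            }
            scriptPath('Jenkinsfile') // the pipeline definition lives in the repo
        }
    }
}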

Wrap your inputs in a timeout – In case you have a pipeline that demands user interaction at some point, you should always wrap your input in a timeout, since you don’t want the job hanging there for weeks. I usually have these inputs on the last release stage (deploy to production):

// Abort the stage automatically if nobody approves within 5 days
timeout(time: 5, unit: 'DAYS') {
    input message: 'Approve deployment?', submitter: 'it-ops'
}

Refactor your automated tests – As with any development code, you need to improve your automation as you work on the project and gain more knowledge about the product/project. So, to keep your testing efficient, you need to regularly look for redundancies that can be eliminated, such as multiple tests that cover the same feature or data-driven tests that use repetitive data values. Techniques such as boundary value analysis can help you reduce the scope to just the essential cases (see the sketch below).
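
As a quick sketch of what boundary value analysis buys you, suppose a quantity field accepts values from 1 to 100 (a made-up example). Instead of a data-driven test over dozens of arbitrary values, the cases around the boundaries are usually enough:

// Hypothetical validation rule: a quantity field that accepts 1..100.
def isValidQuantity(int qty) { qty >= 1 && qty <= 100 }

// Boundary value analysis trims the data-driven test to the essential cases:
// just below, at and just above each boundary.
def cases = [0: false, 1: true, 2: true, 99: true, 100: true, 101: false]
cases.each { qty, expected ->
    assert isValidQuantity(qty) == expected
}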

Keep your build fast – Nobody wants a release pipeline that takes hours to finish; it is painful and pointless. Try to trigger the minimum automated tests required to validate your build and keep it simple. Due to their more complex nature, integration tests are usually slower than unit tests. If you have a CD pipeline, run a full regression on QA; but if you have CI and a separate pipeline for the releases, run only smoke tests on QA first and then the full regression in the deploy-to-prod pipeline.

Test in the right environment – You should have an isolated QA platform that is dedicated solely to testing. Your test environment should also be as identical as possible to the production environment, but this can be challenging. Honestly, you will probably need to mock certain dependencies, such as third-party applications. In complex environments, a virtualization platform or a solution such as Docker containers may be an efficient way to replicate the production environment.

Test in parallel – As speed is essential in a CI/CD environment, save time by distributing your automated tests across multiple parallel stages. As mentioned earlier in this series, keep your automated tests as modular and independent from each other as possible so that you can run them in parallel:

pipeline {
    agent none
    stages {
        stage('Tests') {
            // Both branches run at the same time, each on its own agent
            parallel {
                stage('Branch A') {
                    agent {
                        label "for-branch-a"
                    }
                    steps {
                        echo "On Branch A"
                    }
                }
                stage('Branch B') {
                    agent {
                        label "for-branch-b"
                    }
                    steps {
                        echo "On Branch B"
                    }
                }
            }
        }
    }
}

Include non-functional tests – Don’t think that regression testing is only about functional end-to-end tests; it takes a combination of automated testing approaches to confirm that your application is ready to release. Make sure you have some performance tests running a happy path with concurrent users, some security tests and, of course, the e2e functional tests. Exploratory testing can uncover defects that automated tests miss, but that should have been done earlier, during the feature tests, not in the release pipeline.

Don’t rely only on unit tests – Unit testing doesn’t tell you enough about how that code will work once it is introduced to the production application. Integration of new or revised code may cause a build to fail for several reasons. Therefore, it’s important to run integration tests, regression tests and high-priority functional UI tests as part of the build verification process.

 
Resources:

https://www.ranorex.com/blog/10-best-practices-7-integrate-with-a-ci-pipeline/?utm_source=ebook&utm_medium=email&utm_campaign=en_ebook_test-automation_follow-up-7

https://www.cloudbees.com/blog/top-10-best-practices-jenkins-pipeline-plugin

AWS Online Tech Talks 2019

Hello guys, just dropping by quickly to share with you this link to the AWS presentations for this year. Some of them have already happened, but you can register and watch the next ones for free; the link is below:

https://aws.amazon.com/about-aws/events/monthlywebinarseries/

How to test internal microservices in a Kubernetes cluster

Hello guys, I have been working a lot with Kubernetes, Docker, Postman and Newman lately, due to some testing of internal microservices. To test these internal microservices, I am running Postman scripts in the same Kubernetes cluster where the services are running.

Kubernetes is an open source container orchestrator, which means you can group together all the containers (services) that share information. You don’t need to expose their endpoints externally for them to be reachable among themselves. PS: This is just my brief explanation, but feel free to explore a bit more.

To be able to run tests on internal microservices that are inside a Kubernetes cluster, I created a Postman collection and, together with Newman, ran the tests pointing to these services without the need to expose their endpoints.

Creating the Postman script

– You will need to have Postman installed to create the script.

– I am not going through the details of this part, because it is not the aim of this post, but after you create the script in Postman you will need to export it to the api/collections folder.

– Also, you will need to export the environment variables that you created and save them in the api/envs folder.

– You can see an example of the collection here and an example of the environment variables here. Remember that the hostname needs to be the name of the deployment of your service in Kubernetes.

Creating the Dockerfile

– I am using this Docker image as a base, but you can use any other; you just need to install Newman.

– Then we need to build a Docker image containing the Postman collection, environment and global variables, data files, etc.

– After this, we need to create a Jenkinsfile that will build and push the image to Docker Hub. The best practice is to have a separate pipeline just to build the image (so the test pipeline contains just the tests and doesn’t take longer than necessary to run), but in this example I am going to have just a separate stage to build the image and another to run the tests.

Dockerfile:

# Base image with Newman (Postman's command-line runner) pre-installed
FROM postman/newman
WORKDIR /etc/newman/
# Copy the Postman collections and environment files into the image
COPY . /etc/newman/
RUN chmod -R 700 /etc/newman

Creating the Kubernetes deployment

– Now we need to create the Jenkinsfile to build and push the Docker image to Docker Hub.

– You can get the full code here.

– But this is the important part: the command below will create a Kubernetes pod in the namespace and run the tests from the image that we have just pushed.

kubectl run microservices-tests -i --rm --namespace=${ENVIRONMENT} --restart=Never --image=${IMAGE}:latest --image-pull-policy=Always -- run /etc/newman/api/collections/collection.postman_collection.json -e /etc/newman/api/envs/${ENVIRONMENT}.postman_environment.json --reporters cli --reporter-cli-no-failures

  • microservices-tests is the name of the pod that you are going to create.
  • -i keeps stdin open on the container.
  • --rm tells kubectl to delete the pod once the command finishes.
  • --namespace=${ENVIRONMENT} is the namespace (environment) in which the pod will run, so it needs to be the same one your services are running in.
  • --restart=Never tells Kubernetes not to restart the pod once it finishes. You don’t want your tests running over and over again forever.
  • --image=${IMAGE}:latest is the image that Kubernetes is going to pull.
  • --image-pull-policy=Always ensures that Kubernetes always pulls the image, even if it has pulled it before (so you always run the latest version).
  • -- marks the end of the kubectl arguments; everything after it is passed to Newman as the commands/arguments to run the Postman script.

So, by creating this pod in the same namespace as your internal services, you can hit them and test their endpoints even though they are not exposed externally.
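
As a minimal sketch (not the full code, which is in the link above), the two stages mentioned earlier could look like this in the Jenkinsfile. The image name and ENVIRONMENT value are hypothetical, and it assumes the agent is already logged in to Docker Hub and configured for kubectl:

pipeline {
    agent any
    environment {
        IMAGE       = 'your-dockerhub-user/microservices-tests' // hypothetical repository
        ENVIRONMENT = 'qa' // must match the namespace where the services run
    }
    stages {
        stage('Build and push image') {
            steps {
                sh "docker build -t ${IMAGE}:latest ."
                sh "docker push ${IMAGE}:latest"
            }
        }
        stage('Run tests') {
            steps {
                // Same kubectl command as above: a throwaway pod in the services'
                // namespace runs the Newman collection and is removed afterwards.
                sh "kubectl run microservices-tests -i --rm --namespace=${ENVIRONMENT} --restart=Never --image=${IMAGE}:latest --image-pull-policy=Always -- run /etc/newman/api/collections/collection.postman_collection.json -e /etc/newman/api/envs/${ENVIRONMENT}.postman_environment.json --reporters cli --reporter-cli-no-failures"
            }
        }
    }
}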

You should see the Newman CLI report on your console, which means that your script is running as expected in the cluster.

Thank you guys! See you next time!

Chaos Engineering: Why Breaking Things Should be Practiced

Hello guys,

Last week I went to the WebSummit 2018 conference in Lisbon and I managed to join some of the AWS talks. The talk that I am posting today is about chaos engineering, which specifically addresses the uncertainty of distributed systems at scale. The aim of this practice is to uncover the system’s weaknesses and build confidence in the system’s capability.

The harder it is to disrupt the steady state, the more confidence we have in the behavior of the system.  If a weakness is uncovered, we now have a target for improvement before that behavior manifests in the system at large.

Today I am posting the video at the exact moment that this talk starts:

https://player.twitch.tv/?autoplay=false&t=02h05m17s&video=v333130731

This talk is presented by AWS Technical Evangelist Adrian Hornsby.

You can find tools to help you with the tests in this repo:

https://github.com/dastergon/awesome-chaos-engineering#notable-tools

 

References:

https://principlesofchaos.org/

https://www.twitch.tv/videos/333130731

Amazing repo with content/links about the topic: https://github.com/dastergon/awesome-chaos-engineering

How do you help developers to test?

Hello guys,

I have joined some webinars from Test Masters Online; not sure if you have heard of them, but this one really caught my attention. The title looks a bit extreme, but you will see that it is more about how testers and developers can work together to improve the quality of the team. The ideal is to have specialised QAs who can teach developers about automation, performance, security tests and so on. It is more about raising awareness of testing while developing.

The challenge nowadays is changing the developers’ mindset to contribute along with QAs to the automated tests, and also getting support from the managers (which is something that I’ve always highlighted). If you want a team without a bottleneck, where everybody contributes fully to the quality and the speed of the deliveries, then yes, you need to think about a team sharing all the responsibilities, with a specialised person to guide and teach.

Basically, you will see how Joel Montvelisky discusses the issues that prevent software developers from testing, and what testers can do to change that.

Thank you!! See you in the next post 🙂

Cypress with Galen Study Project

Hello peeps,

Today I am going to post a study project that I created to try Cypress and Galen. As some of you know, Cypress doesn’t support cross-browser testing at the moment. This is because its creators believe that nowadays you don’t really need to run functional tests on all the browsers, so what they suggest is to create one automation project for the functionality and run it on only one browser, and then create another project to run the layout tests on the other browsers.

I kind of agree with this statement, since most of the bugs that I have found in my latest projects are layout bugs that don’t affect the functionality of the feature. I say most of the bugs because I can remember one or two situations where a layout issue affected the functionality. E.g.: imagine a menu that shows a list of options when the mouse is over it. You could have an issue with the CSS where this list is covered by a div, so the options are not displayed and clickable.

To give this idea a go, I have created a project doing functional tests with Cypress and layout tests with Galen.

The link to the project is here: https://github.com/rafaelaazevedo/bug-free-garbanzo

As always feel free to fork and improve the code, share ideas, fix bugs, etc…

I am still fixing the Docker image to integrate the Cypress and Galen tests in the same container. For now you can run the tests in the Cypress Docker container.

See you next time 🙂

How to measure exploratory tests?

Hello guys,

Many people who are not from the QA area don’t know how to measure exploratory tests or what the advantages of doing them are, but it is a really powerful technique when used correctly. Its effectiveness depends on several intangibles: the skill of the tester, their intuition, their experience, and their ability to follow hunches.

 

Value

  • detects subtle or complex bugs in a system (that are not detected in targeted testing)
  • provides user-oriented feedback to the team

Exploratory testing aims to find new and undiscovered problems. It contrasts with other, more prescribed methods of testing, such as test automation, which aims to show that scripted tests can complete successfully without defects. Exploratory testing will also help you write new automated tests to ensure that problems aren’t repeated.

If you have any doubts about exploratory tests, like examples and what the advantages of doing them are, have a look at the video below first:

99 Second Introduction to Exploratory Testing | MoT

 

When should you perform exploratory tests

Exploratory testing works best on a system that has enough functionality for you to interact with it in a meaningful way. This could be before you release your first minimum viable product in beta or before you release a major new feature in your service.

How to measure

Always test in sessions:
  1. Charter
  2. Time Box
  3. Debriefing
  4. Mind Maps

 

Charter

  • Mission for the session
  • What should be tested, how it should be tested, and what problems to look for
  • It is not meant to be a detailed plan
  • Specific charters provide better focus, but take more effort to design: “Test clip art insertion. Focus on stress and flow techniques, and make sure to insert into a variety of documents. We’re concerned about resource leaks or anything else that might degrade performance over time.”

99 Second Introduction to Charters | MoT

 

Time Box

  • Focused test effort of fixed duration
  • Brief enough for accurate reporting
  • Brief enough to allow flexible scheduling
  • Brief enough to allow course correction
  • Long enough to get solid testing done
  • Long enough for efficient debriefings
  • Beware of overly precise timing
Session times:
  • Short: 60 minutes (+-15)
  • Normal: 90 minutes (+-15)
  • Long: 120 minutes (+-15)

Debriefing

  • Measurement begins with observation
  • Session metrics are checked
  • Charter may be adjusted
  • Session may be extended
  • New sessions may be chartered
  • Coaching happens

 

Mind maps

Mind maps can be useful to document exploratory testing in a diagram instead of writing out the scenarios. They are a visual thinking tool and are quick and easy to record, as they don’t follow a linear approach.

 

Session metrics

The session metrics are the primary means to express the status of the exploratory test process. They contain the following elements:

  • Number of sessions completed
  • Number of problems found
  • Function areas covered
  • Percentage of session time spent setting up for testing
  • Percentage of session time spent testing
  • Percentage of session time spent investigating problems

 

Coverage

  • Coverage areas can include anything
  • Areas of the product
  • Test configuration
  • Test strategies
  • System configuration parameters
  • Use the debriefings to check the validity of the specified coverage areas

 

Reporting

  • The charter for the session
  • Features you’ve tested
  • Notes on how you conducted the testing
  • Notes on any bugs you found
  • A list of issues (questions and concerns about the product or project that arose during testing)
  • Extra materials you used to support testing
  • How much time you spent creating and executing tests
  • How much time you spent investigating and reporting bugs
  • How much time you spent setting up the session

 

Tools

I like to use Katalon or Jing, but to be honest this is just to record and take screenshots of the test sessions. To do this kind of testing you just need paper and a pen to write down your notes, concerns and questions.

 

Resources:

http://www.satisfice.com/sbtm/

http://www.satisfice.com/presentations/htmaht.pdf

https://www.gov.uk/service-manual/technology/exploratory-testing
