Swift and XCUI tests for beginners

Last year there was a #TechKnowDay workshop about Swift for beginners, and I saved this post in my drafts. I didn't have a chance to participate, but I followed the instructions on the project's slides: https://github.com/ananogal/Workshop-Swift-for-beginners

I took the chance to do some automation on this project and created the scenarios (without BDD) here:

https://github.com/rafaelaazevedo/Templet

It is really basic and simple, but it is a starting point for anyone who wants to learn how to create these tests.

You can also record the actions. Open your project in Xcode, then:

  • Create a new target

  • Type a name for your UI test target

  • Select the UI Testing template

  • Click inside the test function; the record button (a red circle) will show up at the bottom of the screen

 

Thank you, Ana, for this great workshop!

Injecting cookies in your testcafe automation

Hello guys,

Today I am going to post an alternative way to authenticate without going through the login page. I have done this before by generating a token for Keycloak, but in my last project I generated the cookies and added them as headers, using TestCafe to intercept the HTTP requests to the website.

This is useful when you don't need to test the login process, or when the login page is covered by a separate feature. You save time when running the automation, since you avoid having to sign in every time you launch a scenario.

I had to take this approach for another reason as well: this bug here, which happens because TestCafe internally uses a URL-rewriting proxy, and the proxy is forced to handle cookies manually since the URL of the tested website is changed during test execution.

You will need to add this to a Before hook and generate the cookies before running the scenarios.

So first, install keygrip and base-64 in your package:

npm install keygrip base-64

Second, you will need to create the cookie based on your own authentication process. For example, suppose your cookie carries a JSON payload like the one in the snippet below; you build the cookie value, sign it, and add the hook to your utils/helper/support class:

import { RequestHook } from 'testcafe';
import Keygrip from 'keygrip';
import Base64 from 'base-64';

class addCookie extends RequestHook {

    constructor (requestFilterRules) {
        super(requestFilterRules);
    }

    async onRequest (event) {
       const cookieName = "name-of-your-cookie-here"; //Change the value to the name of your authentication cookie
       const cookieSigName = "name-of-your-cookie-here.sig"; //Same as above, but this is the signature cookie
       const domain = "your-url.com"; //The domain your cookie is valid for

       let cookieValue = { "name":"username", "email":"username@email.com" }; //Here you have the value that goes inside the cookie
       cookieValue = Base64.encode(JSON.stringify(cookieValue)); //Encode the string to Base64 if you need it

       const keys = new Keygrip(["SECRET_KEY"]); //Here you will add your secret
       const hash = keys.sign(`${cookieName}=${cookieValue}`); //This is where you sign the cookie with your secret key

       const myDate = new Date();
       myDate.setFullYear(myDate.getFullYear() + 1); //Expire the cookie one year from now
       event.requestOptions.headers[cookieName] = `${cookieValue};expires=${myDate};domain=${domain};path=/`;
       event.requestOptions.headers[cookieSigName] = `${hash};expires=${myDate};domain=${domain};path=/`;
    }

    async onResponse (event) {
       // Anything you need to do when you have the response
    }
}

export default addCookie;
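As a side note, the Base64.encode call above can be reproduced with Node's built-in Buffer, in case you would rather not add the base-64 dependency. A minimal sketch:

```javascript
// Serialize the cookie payload with Node's built-in Buffer, equivalent
// to Base64.encode(JSON.stringify(cookieValue)) in the hook above.
const cookieValue = { name: 'username', email: 'username@email.com' };
const encoded = Buffer.from(JSON.stringify(cookieValue)).toString('base64');

// Decoding gives back the original payload:
const decoded = JSON.parse(Buffer.from(encoded, 'base64').toString('utf8'));
console.log(decoded.email); // username@email.com
```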

 

You will then need to import the hook class and attach it at the feature level:

import addCookie from './add-cookie.js';

const setCookies = new addCookie(/https?:\/\/your-url\.com/);

fixture`Your Feature`
    .page(baseUrl)
    .requestHooks(setCookies);

 

This is just one idea for skipping the login page in your automation and saving some time; there are other options as well, like generating a token and adding it to the headers.

 

Resources:

https://github.com/crypto-utils/keygrip

https://github.com/pillarjs/cookies

https://devexpress.github.io/testcafe/documentation/test-api/intercepting-http-requests/creating-a-custom-http-request-hook.html

Performance Tests with Artillery

Hello guys, after a long break I am posting about a previous project where I created some performance tests to check the reliability of the server. Artillery is an npm library that helps you with load tests; it is very simple to use, and the scripts are written in .yml, so make sure the indentation is right.

So in the load-tests.yml file you will find this script:

  config:
    target: 'https://YOUR-HOST-HERE' # Here you need to add your host url
    processor: "helpers/pre-request.js" # The pre-request function we are using to create the data
    timeout: 3 # Timeout for each request; on timeout it stops the flow and tags the scenario as a failure
    ensure:
      p95: 1000 # Force artillery to exit with a non-zero code when a condition is not met, useful for CI/CD
    plugins:
      expect: {}
    environments:
      qa:
        target: "https://YOUR-HOST-HERE-QA-ENV" # Here you need to add your QA env url
        phases:
          - duration: 600 # Duration of the test, in this case 10 minutes
            arrivalRate: 2 # Create 2 virtual users every second for 10 minutes
            name: "Sustained max load 2/second"
      dev:
        target: "https://YOUR-HOST-HERE-DEV-ENV" # Here you need to add your Dev env url
        phases:
          - duration: 120
            arrivalRate: 0
            rampTo: 10 # Ramp up from 0 to 10 users with a constant arrival rate over 2 minutes
            name: "Warm up the application"
          - duration: 3600
            arrivalCount: 10 # Fixed count of 10 arrivals over the hour (approximately 1 every 6 minutes)
            name: "Sustained load of 10 arrivals over 1 hour"
    defaults:
      headers:
        content-type: "application/json" # Default headers needed to send the requests
  scenarios:
    - name: "Send User Data"
      flow:
        - function: "generateRandomData" # Function that we are using to create the random data
        - post:
            headers:
              uuid: "{{ uuid }}" # Variable with value set from the generateRandomData function
            url: "/PATH-HERE" # Path of your request
            json:
              name: "{{ name }}"
            expect:
              - statusCode: 200 # Assertions, in this case we are asserting only the status code
        - log: "Sent name: {{ name }} request to /PATH-HERE"
        - think: 30 # Wait 30 seconds before running the next request
        - post:
            headers:
              uuid: "{{ uuid }}"
            url: "/PATH-HERE"
            json:
              name: "{{ mobile }}"
            expect:
              - statusCode: 200
        - log: "Sent mobile: {{ mobile }} request to /PATH-HERE"

 

Now, for the function that creates the data, we use the Faker library, which you need to install in your package with npm; then you need to export the function. You make the variables available through userContext.vars, and remember to always accept the parameters userContext, events and done, so the function can be used in the Artillery scripts.

const Faker = require('faker')

module.exports = {
  generateRandomData
}

function generateRandomData (userContext, events, done) {
  userContext.vars.name = `${Faker.name.firstName()} ${Faker.name.lastName()} PerformanceTests`
  userContext.vars.mobile = `+44 0 ${Faker.random.number({min: 1000000, max: 9999999})}`
  userContext.vars.uuid = Faker.random.uuid()
  userContext.vars.email = Faker.internet.email()
  return done()
}
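You can also smoke-test a processor function outside Artillery by calling it with a stub context. A dependency-free sketch (the hard-coded values below stand in for the Faker calls):

```javascript
// Stand-in for the Faker-based processor above, showing the
// (userContext, events, done) contract Artillery expects.
function generateRandomData (userContext, events, done) {
  const n = Math.floor(Math.random() * 9000000) + 1000000;
  userContext.vars.name = `Test User ${n} PerformanceTests`;
  userContext.vars.mobile = `+44 0 ${n}`;
  userContext.vars.uuid = `${Date.now()}-${n}`;
  return done();
}

// Call it the way Artillery would, with a stub context:
const ctx = { vars: {} };
generateRandomData(ctx, [], () => {});
console.log(ctx.vars.name.endsWith('PerformanceTests')); // true
```

Anything placed on userContext.vars becomes available to the script as a {{ variable }}.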

 

This is just an example, but you can see how powerful and simple Artillery is on their website. To run the script above against the QA environment, use artillery run --environment qa load-tests.yml.

You can see the entire project with the endurance and load scripts here: https://github.com/rafaelaazevedo/artilleryExamples

See you guys!

Web App Penetration Testing – Full course for beginners

 

This course was created by HackerSploit. You guys can check out the HackerSploit YouTube channel to see more videos: https://www.youtube.com/hackersploit

How to get your test reports from a pod – Kubernetes

Hello guys, today I am going to post a workaround that I am using to get test reports out of a Kubernetes pod that has terminated. If you run your tests in a Kubernetes pod, you probably know by now that you can't copy files containing your test reports from a terminated pod. Many people have complained about this, and you can see there is an open issue here.

I am going to show an example of API tests with Postman and Newman, where I run the tests in a Kubernetes pod inside the same microservices cluster, since there is no public API to access the services.

Create a Job

  • First you need to create a job and save it in a .yaml file, like the example below:
apiVersion: batch/v1
kind: Job
metadata:
  name: api-tests
  namespace: default
spec:
  parallelism: 1
  template:
    metadata:
      name: api-tests
    spec:
      containers:
      - name: api-tests
        image: postman/newman:alpine
        command: ["newman"]
        args: ["run", "/etc/newman/test.postman_collection.json", "--reporters", "cli", "--reporter-cli-no-failures"]
      restartPolicy: Never
  • Then run it on your terminal, or add it to your Jenkinsfile:

kubectl apply -f job.yaml

Or you can also create a pod

  • You just need to add this command to your Jenkinsfile:
 sh "kubectl run api-tests -i --rm --namespace=${ENVIRONMENT_NAMESPACE} --restart=Never --image=${YOUR_IMAGE}:latest --image-pull-policy=Always -- run /etc/newman/${YOUR_COLLECTION_PATH}.postman_collection.json -e /etc/newman/${YOUR_ENVIRONMENT_CONFIG_PATH}.postman_environment.json --reporters cli --reporter-cli-no-failures"

Why not a Deployment?

At the beginning of the implementation I tried to create a Deployment, but Deployments don't support the Never restart policy, which means the automation would never stop running and you wouldn't be able to copy the reports from the container.

 

Copy the reports

So, now that your tests have finished, you can see the logs on Jenkins showing they passed (or failed), but you want to extract the report from the logs and have it as an html/json/any file, so you can archive or publish it. This way you can see the issues more clearly, and the reports for each pipeline stay easy to access.

Well, kubectl cp doesn't work like docker cp, unfortunately. Once your pod is terminated, you are not able to access the reports or anything else inside the pod. For this reason there is an open issue on the kubectl GitHub repository about exactly this; you can check the progress of the issue here.

Now, how can you copy the reports from the container if you can't access it after the tests are finished? Well, there is no perfect way: some people send the reports to S3, some people send the reports to their emails, but I found it better to save the report by copying the HTML code from the logs into a file.

In your Jenkinsfile you will have the command that runs the pod with the tests; afterwards you need to cat the generated HTML report, so you can grab everything inside the html tag and save it in a file:

      sh "kubectl run api-tests -i --rm --namespace=${ENVIRONMENT_NAMESPACE} --restart=Never --image=${YOUR_IMAGE}:latest --image-pull-policy=Always -- run /etc/newman/${YOUR_COLLECTION_PATH}.postman_collection.json -e /etc/newman/${YOUR_ENVIRONMENT_CONFIG_PATH}.postman_environment.json --reporters cli,html --reporter-html-export api-tests.html --reporter-cli-no-failures ; cat api-tests.html | tee report"
      def report = readFile "report"
      def update = report.substring(report.indexOf('<html>'), report.indexOf('</html>') + '</html>'.length()) // include the closing </html> tag
      writeFile file: "${workspace}/api-tests.html", text: update
      sh "rm report"

First, you cat the HTML report that your tests generated (remember to have the script that runs the tests in your Docker image, or in the package.json if you use NodeJS; this is just an example). Then you grab everything between the html tags. You can do this using substring or awk, whichever you prefer. I am using substring in this example, but if you want to filter with awk, the command would be something like awk '/<html>/,/<\/html>/'.

After grabbing the HTML report, I save it in a file and delete the previous report file that contained the whole logs from the Kubernetes pod.
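Stripped of the Jenkins plumbing, the extraction step boils down to the following sketch (extractHtmlReport is a hypothetical helper name, shown here in plain JavaScript for clarity):

```javascript
// Grab everything from the opening <html> tag to the closing </html>
// tag out of the raw pod logs, mirroring the substring step above.
function extractHtmlReport (logs) {
  const start = logs.indexOf('<html>');
  const end = logs.indexOf('</html>');
  if (start === -1 || end === -1) {
    throw new Error('No HTML report found in the logs');
  }
  return logs.substring(start, end + '</html>'.length);
}

const podLogs = 'newman run output...\n<html><body>report</body></html>\nlogs after';
console.log(extractHtmlReport(podLogs)); // <html><body>report</body></html>
```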

Not perfect, and I am not happy about doing these kinds of workarounds, but this is a way to avoid sending the files to S3 or anywhere else. Hope it helps!

Automated tests in a CI/CD pipeline

Good pipelines are stable and can support frequent, small releases. When building the pipeline you need to include not only the build and unit test stages, but also the e2e tests, and even the smoke tests and the deployments to all the environments, so you have as little human interaction as possible and avoid releases being deployed by mistake.

CD pipeline

Below you have an image of a simple pipeline with Build, Deploy, Test and Promote/Release stages. Imagine a developer merging their changes to the master branch, triggering a new build of this pipeline. The CI job builds the application, deploys it to the specific environment and performs some tests; if everything is ok, the changes are promoted to the prod environment if you have CD in place. If there is an issue in any of these stages, the CI job stops and you will have a red pipeline in the history of this job. These basic stages could look like this:

Build – The stage where you build your application; I usually run the unit tests here as well.

Deploy – The stage where you deploy your application; this could be to your dev environment, or to the dev and qa environments at the same time.

Test – Depending on which environment you have deployed to, you decide which types of tests to run, for instance smoke tests for Staging or the e2e tests for the QA environment.

Promote – The stage where you add the script that deploys your application to the production environment, in case the tests passed.

 

Continuous Integration (CI), Continuous Deployment (CD) and DevOps have a common goal: small, frequent releases of high-quality software. Of course, sometimes things go wrong along the way and you discover a bug after the deployment to prod, but since you have this pipeline with CI and CD running, the fix will be deployed to prod as soon as it is ready, provided no other new bugs appear. For this reason, no matter what your release cycle looks like, integrated automated tests are one of the keys to having a CI/CD job running and doing its job.

Having a continuous integration pipeline that tests all the environments after each deployment gives you fast feedback, reduces the volume of merge conflicts, gives everyone a clear view of the status of the build, and keeps a current "good build" always available for demos and releases.

 

Some test/general recommendations

Separate types of tests into separate stages – If you have different types of tests running in your CI pipeline (performance, UI, security tests), it is better to separate them into different stages, so you have a clear view when one of the stages fails.

Develop your pipeline as code – This is general advice, as it has been a long time since I last saw teams building their pipelines through the Jenkins UI. Always create a .jobdsl file containing the pipelines your project is going to have, and create/update them automatically.

Wrap your inputs in a timeout – If your pipeline demands user interaction at some point, always wrap the input in a timeout, since you don't want the job hanging there for weeks. I usually have these inputs on the last release stage (deploy to production):

timeout(time:5, unit:'DAYS') {
    input message:'Approve deployment?', submitter: 'it-ops'
}

Refactor your automated tests – As with any development code, you need to improve your automation as you work on the project and gain more knowledge about the product. To keep your testing efficient, regularly look for redundancies that can be eliminated, such as multiple tests that cover the same feature, or data-driven tests that use repetitive data values. Techniques such as boundary value analysis can help you reduce the scope to just the essential cases.

Keep your build fast – Nobody wants a release pipeline that takes hours to finish; it is painful and pointless. Trigger the minimum set of automated tests required to validate your build and keep it simple. Due to their more complex nature, integration tests are usually slower than unit tests. If you have a CD pipeline, run a full regression on QA; if you have CI plus a separate pipeline for releases, run only smoke tests on QA first, and then the full regression in the deploy-to-prod pipeline.

Test in the right environment – You should have an isolated QA platform dedicated solely to testing. Your test environment should also be as identical as possible to the production environment, although this can be challenging. Honestly, you will probably need to mock certain dependencies, such as third-party applications. In complex environments, a virtualization platform or a solution such as Docker containers may be an efficient way to replicate the production environment.

Test in parallel – As speed is essential in a CI/CD environment, save time by distributing your automated tests across multiple stages. As mentioned earlier in this series, keep your automated tests as modular and independent from each other as possible, so that you can run them in parallel:

parallel {
    stage('Branch A') {
        agent {
            label "for-branch-a"
        }
        steps {
            echo "On Branch A"
        }
    }
    stage('Branch B') {
        agent {
            label "for-branch-b"
        }
        steps {
            echo "On Branch B"
        }
    }
}

Include non-functional tests – Don't think regression testing is only about functional end-to-end tests; it takes a combination of automated testing approaches to confirm that your application is ready to release. Make sure you have some performance tests running a happy path with concurrent users, plus some security tests and, of course, the e2e functional tests. Exploratory testing can uncover defects that automated tests miss, but it should have happened earlier, during feature testing, not in the release pipeline.

Don't rely only on unit tests – Unit testing doesn't tell you enough about how the code will behave once it is introduced into the production application. Integration of new or revised code may cause a build to fail for several reasons, so it's important to run integration tests, regression tests and high-priority functional UI tests as part of the build verification process.

 
Resources:

https://www.ranorex.com/blog/10-best-practices-7-integrate-with-a-ci-pipeline/?utm_source=ebook&utm_medium=email&utm_campaign=en_ebook_test-automation_follow-up-7

https://www.cloudbees.com/blog/top-10-best-practices-jenkins-pipeline-plugin

AWS Online Tech Talks 2019

Hello guys, just coming here quickly to share this link with AWS presentations for this year. Some of them have already happened, but you can register and watch the upcoming ones for free:

https://aws.amazon.com/about-aws/events/monthlywebinarseries/
