Contract Testing with Pact.js + GraphQL

Contract Tests vs Integration Tests

  • Trustworthy, like API tests: even though the contract test mocks the provider/consumer, the mock is built from the contract that was generated.
  • Reliable, because you don’t depend on your internet connection to get consistent results (as long as your API has no third-party integrations or you are testing locally).
  • Fast, because you don’t need an internet connection; everything is mocked using the contract that was generated.
  • Cheap, because you don’t spend a huge amount of time creating or running a pact test, and even less maintaining it.
Contract Tests | API Tests
Trustworthy    | Trustworthy
Reliable       | Not reliable
Fast           | Slow
Cheap          | Expensive

Remember, contract tests are NOT about testing the performance of your microservice. So, if you have API tests that take ages to execute, or that fail because the server does not reply fast enough or times out, you have a performance problem (or it is just your internet connection). In either case, separate the concern and create targeted tests that verify the performance of your server rather than the expected response body/code.

How it works

You can use a framework like Pact, which generates the contract details and fields from the consumer. You specify the data you are going to send, and in the verification part you use the same function the app would use to make requests to the API.

Contract tests fit into an integration test stage where you don’t actually need to hit the API, so they are faster and more reliable, and completely independent of your internet connection. They are trustworthy because the contract is generated from the same function, in the same way the consumer would hit the provider. Pact is responsible for generating this contract for you, so you just need to worry about passing the data and adding the assertions (response code, headers, etc.). It seems pretty straightforward to identify who is the consumer, who is the provider and which contract you are going to generate, but imagine a more complex real-life scenario where you have a structure like this:

In this case you have multiple microservices communicating with each other, and sometimes a service is the provider while at other times the same service is the consumer. So, to keep the house organised when maintaining these services, you need to create a pact between each pair of them.

 

The fun part

So let’s get hands-on now and see how we can actually create these contracts.

Create a helper for the consumer to set up and finalise the provider (this will be the Pact mock server the consumer points to when creating the pact):

import { Pact } from '@pact-foundation/pact'
import path from 'path'

jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000

export const provider = new Pact({
  port: 20002,
  log: path.resolve(process.cwd(), 'logs', 'mockserver-integration.log'),
  dir: path.resolve(process.cwd(), 'pacts'),
  pactfileWriteMode: 'overwrite',
  consumer: 'GraphQLConsumer',
  provider: 'GraphQLProvider'
})

beforeAll(() => provider.setup())
afterAll(() => provider.finalize())

// verify with Pact, and reset expectations
afterEach(() => provider.verify())

 

Then create a consumer spec where you define the data you want to check and the response you expect from the GraphQL API:

import { Matchers, GraphQLInteraction } from '@pact-foundation/pact'
import { addTypenameToDocument } from 'apollo-utilities'
import { print } from 'graphql'
import gql from 'graphql-tag'
import graphql from 'graphQLAPI'
import { provider } from './helper' // the Pact mock provider created in the helper above (adjust the path to your project)

const { like } = Matchers

// Example query sent by the consumer (adjust the fields to match your schema)
const productsList = `
  query ProductsList {
    productsList {
      items {
        id
        disabled
        type
      }
      nextToken
    }
  }
`

const product = {
  id: like('456789').contents,
  disabled: false,
  type: like('shampoo').contents,
}

describe('GraphQL', () => {
  describe('query product list', () => {
    beforeEach(() => {
      const graphqlQuery = new GraphQLInteraction()
        .uponReceiving('a list of products')
        .withRequest({
          path: '/graphql',
          method: 'POST'
        })
        .withOperation('ProductsList')
        .withQuery(print(addTypenameToDocument(gql(productsList))))
        .withVariables({})
        .willRespondWith({
          status: 200,
          headers: {
            'Content-Type': 'application/json; charset=utf-8'
          },
          body: {
            data: {
              productsList: {
                items: [
                  product
                ],
                nextToken: null
              }
            }
          }
        })
      return provider.addInteraction(graphqlQuery)
    })

    it('returns the correct response', async () => {
      expect(await graphql.productsList()).toEqual([product])
    })
  })
})

When you run the spec above, Pact creates a .json file in your pacts folder, and this file will be used to test the provider side. So, this is going to be the source of truth for your tests.
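For reference, the generated pact file is plain JSON describing each interaction. Trimmed down, it looks roughly like this (the exact matching rules and bodies depend on what the consumer test declared):

{
  "consumer": { "name": "GraphQLConsumer" },
  "provider": { "name": "GraphQLProvider" },
  "interactions": [
    {
      "description": "a list of products",
      "request": { "method": "POST", "path": "/graphql" },
      "response": { "status": 200 }
    }
  ],
  "metadata": { "pactSpecification": { "version": "2.0.0" } }
}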

This is the basic template if you are using Jest: set the timeout, and then use the same functions the consumer uses to communicate with the provider. You just need to decide how you are going to inject the data into your local database; you can pre-generate all the data in the beforeAll (or a pre-test step) and then add a post-test step or a function in your afterAll to clean the database once the tests are done.

The provider.js file should be something similar to this one:

import { Verifier } from '@pact-foundation/pact'
import path from 'path'
import server from 'server'

jest.setTimeout(30000)

beforeAll(async () => {
  server.start('local')
})

afterAll(async () => {
  server.tearDown()
})

describe('Contract Tests', () => {
  it('validates the pact is correct', () => {
    const config = {
      pactUrls: [path.resolve(process.cwd(), 'pacts/graphqlconsumer-graphqlprovider.json')],
      pactBrokerPassword: 'Password',
      pactBrokerUrl: 'https://test.pact.com/',
      pactBrokerUsername: 'Username',
      provider: 'GraphQLProvider',
      providerBaseUrl: server.getGraphQLUrl(),
      publishVerificationResult: true
    }
    return new Verifier(config).verifyProvider()
  })
})

In the end you just need to verify that the contract is still valid after your changes on the provider or consumer side. For this reason you don’t need to add edge-case scenarios, just exactly the data the provider is expected to return.
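If you are using a broker (as the provider config above suggests), the consumer side also needs to publish its pacts there. A minimal sketch with @pact-foundation/pact-node; the consumer version is a placeholder:

import pact from '@pact-foundation/pact-node'
import path from 'path'

pact.publishPacts({
  pactFilesOrDirs: [path.resolve(process.cwd(), 'pacts')],
  pactBroker: 'https://test.pact.com/',
  pactBrokerUsername: 'Username',
  pactBrokerPassword: 'Password',
  consumerVersion: '1.0.0' // placeholder: usually the consumer's git sha or package version
})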

 

Resources:

https://docs.pact.io/

https://docs.pact.io/pact_broker/advanced_topics/how_pact_works

https://medium.com/humanitec-developers/testing-in-microservice-architectures-b302f584f98c

Quick review of TestProject.io

Hello guys,

I had a look at a test automation platform called TestProject.io this weekend. The idea is very interesting: it is a free E2E test automation platform for web, mobile and API, supported by the community.

Here are some insightful points:

  • It is completely free, so more people contributing
  • Cross platform, you can test mobile (iOS and Android), web and API
  • Easy deployment as it is integrated with Slack and Jenkins
  • All the tests are stored in the cloud and you can share across your team
  • No need to worry about keeping SDKs, libraries and tools up to date, as TestProject is responsible for that
  • Forget about spending hours configuring the framework
  • Your team doesn’t need to know how to code
  • Reports are great and insightful
  • UX is really well planned, but there are still lots of clicks to create/update a scenario
  • Ability to create addons using open source libraries
  • It is built using Selenium and Appium

For a free tool it exceeds expectations: it has CI/CD integration, deep reports and great collaboration, so it is not surprising that big companies like Wix and IBM are using it. It is fully integrated and hassle free, so you save a lot of time and can focus on the important part: the scenarios your regression pack is going to cover.

AWS Lex Chatbot + Kubernetes Test Strategies

Hello guys, this post is really overdue as I left this project some months ago, but it is still useful to share 🙂

If you are using React, AWS Lex and Kubernetes to develop your chatbot, then you might find this article useful, as this was the kind of tech stack I used on that project.

I am going to go through the test/release approach, which I believe worked quite well: it caught different types of bugs and supported continuous development with fully automated release pipelines and manual tests only for new features. We could have improved on the API integration part, though (that one required a lot of maintenance).

You need to understand a bit about how NLP (Neuro-Linguistic Programming) works, so you can plan your regression pack and exploratory tests around the possible scenarios/questions/utterances.

If you think about how your brain learns to communicate, you will notice it needs something like a manual to associate words with actions/objects/feelings, etc. NLP is a set of tools and communication techniques for becoming fluent in the language of the mind; it is an attitude and a methodology for knowing how to achieve your goals and get results.

 

Test approach

You can have a set of different types of tests, but these are the ones I focused on most and how we used them:

  • Exploratory tests for the new features (manual tests)
  • E2E (UI) tests with an example of each functionality for the regression pack (automated tests)
  • API integration tests with happy-path scenarios (automated tests)
  • Utterances vs expected intents (a data test to check whether the phrases were triggering the correct intent/AWS response)
  • Performance tests to review the performance of the internal microservices (automated tests)

 

Exploratory tests

Performing exploratory tests on the new features will give you a better understanding of how the algorithm replies to and understands typos and user sentences. It is also about testing the usability of the bot independently of the platform (mobile, web). This type of test is essential to be certain the user experience is good from beginning to end. Imagine if the bot popped up every time you navigated through the pages on the website? That would be really annoying, right?

The scope for this kind of test was really reduced: just the acceptance criteria and checking the usability of the bot. As always there are some exceptions where you might have to test some negative scenarios as well, especially if it is a brand new functionality.

 

End-To-End Tests

The regression test pack was built with TestCafe and contained only the happy path, but it randomly chose the utterance for each intent. Some of the intents were customised in my previous project, therefore we had to run some tests to ensure that the cards and response messages were rendering correctly on the bot.

We were always using real data, integrating with the real QA server. As the aim was not to test AWS but the bot, we had a script that ran before the tests to download the latest bot version (intents/utterances/slots) as a JSON file, and we used this data to run the tests on the bot against the server.

If you are not mocking the server and you want to see how the chatbot behaves in a real end-user flow, just be aware that this type of test has pros and cons: sometimes TestCafe was failing because there was a problem with the client, and sometimes with the server. That’s OK if you want to make sure the chatbot works smoothly from start to finish and you have end-to-end control (backend/frontend).

You can also mock the AWS Lex responses using the chatbot JSON file, so you return the response expected for that utterance. In this case you are not testing whether the entire flow is correct, but you are making sure the bot is rendering as expected. Just be aware that you would then be doing UI tests and not E2E tests, but this is a good approach if you don’t have full control of the product (backend/frontend). A sketch of this mock is shown below.
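A minimal sketch of that idea using TestCafe’s RequestMock; the endpoint pattern, bot export file name and helper below are assumptions you would adapt to your own bot:

import { RequestMock } from 'testcafe'

// botExport is the JSON file downloaded from AWS Lex (intents/utterances/slots); file name is an assumption
import botExport from './bot-export.json'

// Reply to any call to the Lex runtime with the response recorded for that utterance
const lexMock = RequestMock()
  .onRequestTo(/runtime\.lex\..*\.amazonaws\.com/) // assumed endpoint pattern
  .respond((req, res) => {
    const { inputText } = JSON.parse(req.body)
    const intent = findIntentForUtterance(botExport, inputText) // hypothetical helper that looks up the exported bot JSON
    res.statusCode = 200
    res.headers['content-type'] = 'application/json'
    res.setBody(JSON.stringify({ intentName: intent.name, message: intent.message }))
  })

fixture`Chatbot UI (mocked Lex)`
  .page('https://your-chatbot-url.example') // placeholder URL
  .requestHooks(lexMock)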

 

Integration Tests

For the integration tests between the internal microservices, you can use Postman and Newman to run the collections on the command line.
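If you prefer to drive the collections from Node (for example inside a CI step) rather than the plain CLI, Newman also has a library API. A minimal sketch, with the collection and environment file names as placeholders:

const newman = require('newman')

newman.run({
  collection: require('./collections/internal-services.postman_collection.json'),   // placeholder path
  environment: require('./environments/qa.postman_environment.json'),               // placeholder path
  reporters: 'cli'
}, (err, summary) => {
  if (err) { throw err }
  console.log('Collection run complete, failures:', summary.run.failures.length)
})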

Cons:

– It is horrible for maintenance (think how often you are going to update these scripts);

– Need to add timeout conditions to wait for the real server to reply/the response data to change (even though these waits helped us to find bugs on the database);

– It takes a long time to run, as it depends on the server’s conditions;

– Debugging is not great; you need to add print-outs most of the time;

 

Pros:

– It is easy to write the scripts;

– It is fast to get it running; you can add hundreds of flows in minutes;

– You can separate the types of tests into collections and use them for feature targeting;

 

In the end, the best approach would be adding contract tests for the majority of the scenarios; you would save time on maintenance and wouldn’t need to add retries with timeouts, etc.

 

Performance Tests

You can use a framework like Artillery to create the scripts and check how the services behave under a certain condition, for example X requests per second. Usually you will have a KPI as a requirement and you will build your script from it. It is pretty simple and you can simulate the flows of the user.

AWS Lex used to have quite a low limit of requests per second (maybe they have increased it by now). You may observe that, depending on the load you send, requests fail before even reaching your service. Also, as AWS suggests, it seems you can only use the $LATEST version for manual tests, not automated ones, so keep this in mind.

You can also check how to create a performance script in a previous post here, and if you need to run these tests in the same Kubernetes cluster (so you are able to reach the internal microservices) you can check that here.

 

Data Tests

If you want to test whether the intent responses are the expected ones, you can create scripts using the AWS CLI Lex models commands to export the bot as a JSON file, and then extract the utterances and the expected intent response from this JSON.

With the JSON file in hand you will be able to go through all the intents and all the utterances and check whether they return the expected responses (cards, messages, links). A common error is when you have similar/duplicated utterances and the bot responds with a different intent than the expected one, maybe because the bot is misunderstanding and getting confused with an utterance from another intent.

For example, if you have an utterance like:

“get my card”, which calls an intent X, and then you have another utterance, “get my credit card”, in intent Y, the bot can get confused and use the same intent X to reply to both, instead of recognising they belong to different intents. This happens because the bot is constantly learning what the user means and tries to pick the most probable intent. A sketch of this kind of check is shown below.
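A minimal sketch of that data test with the AWS SDK for JavaScript (v2), assuming you have already turned the exported bot JSON into a map of utterance → expected intent; the bot name, alias, region and intent names are placeholders:

const AWS = require('aws-sdk')

const lexRuntime = new AWS.LexRuntime({ region: 'eu-west-1' }) // placeholder region

// Built from the exported bot JSON: utterance -> intent we expect it to trigger (placeholder names)
const expectedIntents = {
  'get my card': 'GetCardIntent',
  'get my credit card': 'GetCreditCardIntent'
}

describe('Utterances vs expected intents', () => {
  Object.entries(expectedIntents).forEach(([utterance, intentName]) => {
    it(`"${utterance}" triggers ${intentName}`, async () => {
      // Send the utterance to the bot and check which intent it resolves to
      const response = await lexRuntime.postText({
        botName: 'MyBot',   // placeholder bot name
        botAlias: 'qa',     // placeholder alias
        userId: 'data-tests',
        inputText: utterance
      }).promise()

      expect(response.intentName).toEqual(intentName)
    })
  })
})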

 

Challenges

Some challenges that you might face during the chatbot development:

– Duplicated utterances across different intents (utterances triggering the wrong intent); be sure you have a robust map of intents/utterances;

– The quality of the AWS Lex console, and support that is really slow to fix bugs (2–3 months to release a fix);

– Getting all the possible ways to ask for a question/service/etc.; try to get some help from call center people;

 

There is a really good talk about how to test a chatbot by Lina Zubyte, in which she explains more about exploratory testing, comprehension and the character of the bot.

 

 

 

Also, if you are interested, this is an AI & Machine Learning AWS tech talk from the 2018 WebSummit in Lisbon, Portugal:

https://www.twitch.tv/videos/342323615?t=00h55m50s

 

Resources:

https://aws.amazon.com/lex/

https://aws.amazon.com/what-is-a-chatbot/

http://www.nlp.com/what-is-nlp/

 

Swift and XCUI tests for beginners

I went to a workshop last year at #TechKnowDay and saved this one about Swift for beginners in my drafts. I didn’t have a chance to participate, but I followed the instructions on the slides of the project: https://github.com/ananogal/Workshop-Swift-for-beginners

I took the chance to do some automation on this project and created the scenarios (without BDD) here:

https://github.com/rafaelaazevedo/Templet

It is really basic and simple, but it is a starting point for everybody who wants to learn how to create the tests.

You can also record the actions; just open your project in Xcode:

  • Create a New Target

 

  • Type a name for your UITest target

 

  • Select the UI Testing template

 

  • Then you just need to click inside the test function and the record button (red circle) will show up at the bottom of the screen

 

Thank you Ana for this great workshop !

Deep learning with python

Hello guys, last year I joined a day of workshops from #TechKnowDay here in London. This workshop was about deep learning with Python, and I am able to share the PowerPoint explaining the exercise with you.

You can download the PowerPoint presentation here and the list of files for this exercise on this link.

Nice coding everyone !

Thank you so much for the workshop and all the explanation Bianca Furtuna !

Injecting cookies in your testcafe automation

Hello guys,

Today I am going to post an alternative way to authenticate without going through the login page. I have done this before by generating a token for Keycloak to authenticate, but in my last project I generated the cookies and added them as a header, with TestCafe intercepting the HTTP requests to the website.

This is useful when you don’t need to test the login process, or when you have a separate feature to test the login page. You will then save time when running the automation by avoiding having to sign in every time you launch a scenario.

I had to take this approach for another reason as well, which was this bug here: it happened because TestCafe uses a URL-rewritten proxy internally, and this proxy is forced to handle cookies manually because the URL of the tested website changes during test execution.

You will need to add this in a Before hook and generate the cookies before running the scenarios.

So first, install keygrip and base-64 in your package:

npm install keygrip base-64

Second, you will need to create the cookie based on your authentication process; for example, your cookie might be generated from a JSON payload and signed, as in the example below. You can then add this hook to your utils/helper/support class:

import { RequestHook } from 'testcafe';
import Keygrip from 'keygrip';
import Base64 from 'base-64';

const domain = 'your-url.com'; // Change to the domain of the website under test

class addCookie extends RequestHook {

    constructor (requestFilterRules) {
        super(requestFilterRules);
    }

    async onRequest (event) {
       const cookieName = "name-of-your-cookie-here"; // Change the value to the name of your authentication cookie
       const cookieSigName = "name-of-your-cookie-here.sig"; // Same as above, but this is the signature cookie

       let cookieValue = { "name":"username", "email":"username@email.com" }; // The value that goes inside the cookie
       cookieValue = Base64.encode(JSON.stringify(cookieValue)); // Encode the string to Base64 if you need it

       const keys = new Keygrip(["SECRET_KEY"]); // Here you add your secret
       const hash = keys.sign(`${cookieName}=${cookieValue}`); // Sign the cookie with your secret key

       const myDate = new Date();
       myDate.setFullYear(2020);
       event.requestOptions.headers[cookieName] = `${cookieValue};expires=${myDate};domain=${domain};path=/`;
       event.requestOptions.headers[cookieSigName] = `${hash};expires=${myDate};domain=${domain};path=/`;
    }

    async onResponse (event) {
       // Anything you need to do when you have the response
    }
}

export default addCookie;

 

You will then need to import the hook class and attach it at the fixture/feature level:

import addCookie from './add-cookie.js';

const setCookies = new addCookie(/https?:\/\/your-url.com/);

fixture`Your Feature`
  .page(baseUrl) // baseUrl is the URL of the website under test
  .requestHooks(setCookies);

 

This is just an idea of how to skip the login screen when doing automation and save yourself some time, but there are other options as well, like generating a token and adding it to the headers, as sketched below.
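For completeness, here is a minimal sketch of that token-based variation, reusing the same RequestHook pattern (how you obtain the token, e.g. from Keycloak, depends on your setup):

import { RequestHook } from 'testcafe';

class addAuthToken extends RequestHook {
    constructor (requestFilterRules, token) {
        super(requestFilterRules);
        this.token = token; // token obtained beforehand, e.g. from your auth provider's API
    }

    async onRequest (event) {
        // Attach the token to every intercepted request
        event.requestOptions.headers['Authorization'] = `Bearer ${this.token}`;
    }

    async onResponse (event) {
        // Nothing to do on the response for this example
    }
}

export default addAuthToken;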

 

Resources:

https://github.com/crypto-utils/keygrip

https://github.com/pillarjs/cookies

https://devexpress.github.io/testcafe/documentation/test-api/intercepting-http-requests/creating-a-custom-http-request-hook.html

Performance Tests with Artillery

Hello guys, after a long break I am posting about a previous project where I created some performance tests to check the reliability of the server. Artillery is an npm library that helps you with load tests; it is very simple to use and the scripts are written in .yml, so make sure the indentation is right.

So in the load-tests.yml file you will find this script:

  config:
    target: 'https://YOUR-HOST-HERE' # Add your host URL here
    processor: "helpers/pre-request.js" # Pre-request function used to create the data
    timeout: 3 # Timeout for each request; exceeding it stops the flow and tags the scenario as a failure
    ensure:
      p95: 1000 # Force artillery to exit with a non-zero code when the condition is not met, useful for CI/CD
    plugins:
      expect: {}
    environments:
      qa:
        target: "https://YOUR-HOST-HERE-QA-ENV" # Add your QA env URL here
        phases:
          - duration: 600 # Duration of the test, in this case 10 minutes
            arrivalRate: 2 # Create 2 virtual users every second for 10 minutes
            name: "Sustained max load 2/second"
      dev:
        target: "https://YOUR-HOST-HERE-DEV-ENV" # Add your Dev env URL here
        phases:
          - duration: 120
            arrivalRate: 0
            rampTo: 10 # Ramp up from 0 to 10 users with a constant arrival rate over 2 minutes
            name: "Warm up the application"
          - duration: 3600
            arrivalCount: 10 # Fixed count of 10 arrivals (approximately 1 every 6 seconds)
            name: "Sustained max load 10 every 6 seconds for 1 hour"
    defaults:
      headers:
        content-type: "application/json" # Default headers needed to send the requests
  scenarios:
    - name: "Send User Data"
      flow:
      - function: "generateRandomData" # Function used to create the random data
      - post:
          headers:
            uuid: "{{ uuid }}" # Variable set by the generateRandomData function
          url: "/PATH-HERE" # Path of your request
          json:
            name: "{{ name }}"
          expect:
            - statusCode: 200 # Assertions, in this case only on the status code
      - log: "Sent name: {{ name }} request to /PATH-HERE"
      - think: 30 # Wait 30 seconds before running the next request
      - post:
          headers:
            uuid: "{{ uuid }}"
          url: "/PATH-HERE"
          json:
            name: "{{ mobile }}"
          expect:
            - statusCode: 200
      - log: "Sent mobile: {{ mobile }} request to /PATH-HERE"

 

Now, for the function that creates the data, you use the Faker library, which you need to install in your package with npm, and then export the function. You make the variables available via userContext.vars, and remember to always accept the parameters userContext, events and done, so the function can be used in the artillery scripts.

const Faker = require('faker')

module.exports = {
  generateRandomData
}

function generateRandomData (userContext, events, done) {
  userContext.vars.name = `${Faker.name.firstName()} ${Faker.name.lastName()} PerformanceTests`
  userContext.vars.mobile = `+44 0 ${Faker.random.number({min: 1000000, max: 9999999})}`
  userContext.vars.uuid = Faker.random.uuid()
  userContext.vars.email = Faker.internet.email()
  return done()
}

 

This is just an example, but you can see how powerful and simple Artillery is on their website.
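To run it against one of the environments defined in the config, you call Artillery's CLI and pass the environment name, along these lines (assuming Artillery is installed globally or as a dev dependency):

artillery run --environment qa load-tests.yml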

You can see the entire project with the endurance and load scripts here: https://github.com/rafaelaazevedo/artilleryExamples

See you guys !