Contract Testing with Pact.js + GraphQL

Contract Tests vs API Tests

  • Trustworthy like API tests: even though the contract test mocks the provider/consumer, the mock is based on the contract that was generated.
  • Reliable because you don’t depend on your internet connection to get consistent results (as long as your API doesn’t integrate with third parties, or you are testing locally).
  • Fast because you don’t need an internet connection; everything is mocked using the generated contract.
  • Cheap because you don’t spend a huge amount of time creating or running a pact test, and even less maintaining one.
Contract Tests   API Tests
Trustworthy      Trustworthy
Reliable         Not reliable
Fast             Slow
Cheap            Expensive

Remember, contract tests are NOT about testing the performance of your microservice. So, if you have API tests that take ages to execute, or that fail because the server doesn’t reply fast enough or times out, you have a performance problem, or it is just your internet connection. In either case you need to isolate the problem and create targeted tests that verify the performance of your server, not the expected response body/code.

How it works

You can use a framework like Pact, which generates the contract details and fields from the consumer side. You specify the data you are going to send, and in the verification part you use the same function the app would use to make requests to the API.

A contract test is part of the integration test stage, where you don’t really need to hit the API, so it is faster and more reliable, and completely independent of your internet connection. It is trustworthy because you generate the contract with the same function, in the same way, that the consumer uses to hit the provider. Pact is responsible for generating this contract for you, so you just need to worry about passing the data and adding the assertions (response code, headers, etc.). It seems pretty straightforward to know who is the consumer, who is the provider and which contract you are going to generate, but imagine a more complex, real-life scenario where you have a structure like this:

In this case you have multiple microservices communicating with each other, and sometimes a service is the provider while sometimes the same service is the consumer. So, to keep the house organised when maintaining these services, you need to create a pact between each pair of them, as sketched below.
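To make that concrete, here is a minimal sketch (with hypothetical service names) of how the same service ends up in two different pacts, once as a provider and once as a consumer:

import { Pact } from '@pact-foundation/pact'

// OrderService consumes ProductService, so this pair gets its own pact...
const productsPact = new Pact({
  consumer: 'OrderService',
  provider: 'ProductService'
})

// ...while CheckoutService consumes OrderService, which generates a second,
// completely independent pact where OrderService is now the provider.
const checkoutPact = new Pact({
  consumer: 'CheckoutService',
  provider: 'OrderService'
})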

 

The fun part

So let’s get hands-on now and see how we can actually create these contracts.

Create a helper for the consumer to set up and finalise the provider (this will be the Pact mock server that the consumer points at when creating the pact):

import { Pact } from '@pact-foundation/pact'
import path from 'path'

jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000

export const provider = new Pact({
   port: 20002, // port where the Pact mock provider will run
   log: path.resolve(process.cwd(), 'logs', 'mockserver-integration.log'), // mock server log file
   dir: path.resolve(process.cwd(), 'pacts'), // folder where the pact files will be written
   pactfileWriteMode: 'overwrite',
   consumer: 'GraphQLConsumer',
   provider: 'GraphQLProvider'
})

beforeAll(() => provider.setup())
afterAll(() => provider.finalize())

// verify with Pact, and reset expectations
afterEach(() => provider.verify())

 

Then create a consumer file where you define the data you want to check and the response you expect from the GraphQL API.

import { Matchers, GraphQLInteraction } from '@pact-foundation/pact'
import { addTypenameToDocument } from 'apollo-utilities'
import { print } from 'graphql'
import gql from 'graphql-tag'
import graphql from 'graphQLAPI' // the consumer's GraphQL API client
import { provider } from './helper' // the Pact helper created above (adjust the path to your project)

const { like } = Matchers

// The same query the consumer app sends to the provider
const productsList = `
  query ProductsList {
    productsList {
      items {
        id
        disabled
        type
      }
      nextToken
    }
  }
`

const product = {
  id: like('456789').contents,
  disabled: false,
  type: like('shampoo').contents,
}

describe('GraphQL', () => {
  describe('query product list', () => {
    beforeEach(() => {
      const graphqlQuery = new GraphQLInteraction()
        .uponReceiving('a list of products')
        .withRequest({
          path: '/graphql',
          method: 'POST'
        })
        .withOperation('ProductsList')
        .withQuery(print(addTypenameToDocument(gql(productsList))))
        .withVariables({})
        .willRespondWith({
          status: 200,
          headers: {
            'Content-Type': 'application/json; charset=utf-8'
          },
          body: {
            data: {
              productsList: {
                items: [
                  product
                ],
                nextToken: null
              }
            }
          }
        })
      return provider.addInteraction(graphqlQuery)
    })

    it('returns the correct response', async () => {
      expect(await graphql.productsList()).toEqual([product])
    })
  })
})

When you run the script above, Pact creates a .json file in your pacts folder, which will be used to test the provider side. So, this file is going to be the source of truth for your tests.
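The generated pact follows the Pact specification JSON format; heavily trimmed, it looks something like this:

{
  "consumer": { "name": "GraphQLConsumer" },
  "provider": { "name": "GraphQLProvider" },
  "interactions": [
    {
      "description": "a list of products",
      "request": { "method": "POST", "path": "/graphql" },
      "response": { "status": 200 }
    }
  ],
  "metadata": { "pactSpecification": { "version": "2.0.0" } }
}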

This is the basic template if you are using jest: just set the timeout, and use the same functions the consumer normally uses to communicate with the provider. You also need to decide how you are going to inject the data into your local database: you can pre-generate all the data in the beforeAll (or in a pre-test step) and then add a post-test step or a function in your afterAll to clean the database once the tests are done.
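A minimal sketch of that seeding step, assuming a hypothetical db helper module:

import db from 'db' // hypothetical database helper

beforeAll(async () => {
  // pre-generate exactly the data the contract expects
  await db.insert('products', [{ id: '456789', disabled: false, type: 'shampoo' }])
})

afterAll(async () => {
  // clean up once the tests are done
  await db.clear('products')
})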

The provider.js file should look something like this:

import { Verifier } from '@pact-foundation/pact'
import path from 'path'
import server from 'server'

jest.setTimeout(30000)

beforeAll(async () => {
  // start the provider (the GraphQL server) locally before verification
  await server.start('local')
})

afterAll(async () => {
  await server.tearDown()
})

describe('Contract Tests', () => {
  it('validates the pact is correct', () => {
    const config = {
      pactUrls: [path.resolve(process.cwd(), 'pacts/graphqlconsumer-graphqlprovider.json')],
      pactBrokerPassword: 'Password',
      pactBrokerUrl: 'https://test.pact.com/',
      pactBrokerUsername: 'Username',
      provider: 'GraphQLProvider',
      providerBaseUrl: server.getGraphQLUrl(),
      publishVerificationResult: true
    }
    return new Verifier(config).verifyProvider()
  })
})

In the end you just need to verify that the contract is still valid after your changes on the provider or the consumer side. For this reason you don’t need to add edge-case scenarios here, just exactly the data the provider is expecting.

 

Resources:

https://docs.pact.io/

https://docs.pact.io/pact_broker/advanced_topics/how_pact_works

https://medium.com/humanitec-developers/testing-in-microservice-architectures-b302f584f98c

How to test internal microservices in a Kubernetes cluster

Hello guys, I have been working with Kubernetes, Docker, Postman and Newman a lot lately, due to some tests on internal microservices. To test these internal microservices, I am running Postman scripts in the same Kubernetes cluster where the services are running.

Kubernetes is an open-source container orchestrator, which means you can group together all the containers (services) that share information, and you don’t need to expose their endpoints publicly for them to be reachable among themselves. PS: this is just my brief explanation, but feel free to explore a bit more.
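Since the tests will run inside the cluster, any pod in the same namespace can reach a service by its in-cluster DNS name. For example, assuming a hypothetical products-service listening on port 8080:

curl http://products-service:8080/health

No ingress, load balancer or public endpoint is involved; the name resolves through the cluster’s internal DNS.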

To be able to run tests on internal microservices that live inside a Kubernetes cluster, I created a Postman collection and, together with Newman, ran the tests pointing at these services without needing to expose their endpoints.

Creating the Postman script

– You will need to have Postman installed to create the script.

– I am not going through the details of this part, because it is not the aim of this post, but after you create the script in Postman you will need to export it to the api/collections folder.

– Also, you will need to export the environment variables that you created and save them in the api/envs folder.

– You can see an example of the collection here and an example of the environment variables here. Remember that the hostname needs to be the in-cluster name of your service deployment in Kubernetes, not an external URL.
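For reference, the relevant part of such an environment file would look roughly like this (a sketch assuming a hypothetical products-service listening on port 8080):

{
  "name": "staging",
  "values": [
    {
      "key": "host",
      "value": "http://products-service:8080",
      "enabled": true
    }
  ]
}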

Creating the docker file

– I am using this docker image as a base, but you can use any other; you just need to install newman.

– Then we need to build a Docker image containing the postman collection, environment and global variables, data files, etc.

– After this we need to create a jenkins file that will build and push the image to docker hub. The best practice is to have a separate pipeline just to build the image (so the test image contains only the tests and doesn’t take longer than necessary to run), but in this example I am going to have just a separate stage to build the image and another to run the tests; the build-and-push part is sketched after the Dockerfile below.

Dockerfile:

FROM postman/newman
WORKDIR /etc/newman/
COPY . /etc/newman/
RUN chmod -R 700 /etc/newman
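The build-and-push stage of the pipeline then boils down to something like this (a sketch, assuming ${IMAGE} points at your Docker Hub repository):

docker build -t ${IMAGE}:latest .
docker push ${IMAGE}:latest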

Creating the Kubernetes deployment

– Now we need to create the jenkins file to build and push the docker image to docker hub.

– You can get the full code here.

– But this is the important part: the command below will create a pod in the ${ENVIRONMENT} namespace and run the tests from the image that we have just pushed.

kubectl run microservices-tests -i --rm --namespace=${ENVIRONMENT} --restart=Never --image=${IMAGE}:latest --image-pull-policy=Always -- run /etc/newman/api/collections/collection.postman_collection.json -e /etc/newman/api/envs/${ENVIRONMENT}.postman_environment.json --reporters cli --reporter-cli-no-failures

  • microservices-tests is the name of the pod that you are going to create.
  • -i keeps stdin open on the container.
  • --rm tells kubectl to delete the pod once the command is finished.
  • --namespace=${ENVIRONMENT} is the namespace (environment) where this pod will run, so it needs to be the same one your services are running in.
  • --restart=Never says not to restart the container once it finishes. You don’t want your tests running over and over again forever.
  • --image=${IMAGE}:latest is the image that kubernetes is going to pull.
  • --image-pull-policy=Always ensures that kubernetes always pulls the image, even though you have pulled it before (so you always run the latest one).
  • -- marks the end of the kubectl arguments; everything after it is passed to newman as the command/arguments to run the postman script.

So, by creating this pod in the same namespace as your internal services, you can hit them and test their endpoints even though they are not exposed externally.

You should see the newman run summary on your console, which means that your script is running as expected in the cluster.

Thank you, guys! See you next time!