TestProject New Python SDK

I ventured to test the new TestProject Python SDK this week, and I can say it has been quite straightforward. The documentation is also extensive and covers a lot of different scenarios and setups.

For those who don’t know, TestProject is a free automation platform that wraps open-source test frameworks (Selenium and Appium) and integrates your automation scripts. It bundles all the needed drivers to run your test automation without additional setup.

1- To start, you need to get an SDK token from the TestProject Portal (you can register for free here)

2- Download and install TestProject Agent

3- Run the agent locally and verify the status

4- Install the latest version of Python (the minimum supported Python version is 3.4) and install the SDK:

pip install testproject-python-sdk

5- Generate and copy your developer token

6- Create your first test, for example:
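A minimal sketch, loosely based on the SDK’s README (the token is a placeholder, and the demo site and locators are just for illustration):

from src.testproject.sdk.drivers import webdriver

def simple_test():
    # The developer token can also come from the TP_DEV_TOKEN environment variable
    driver = webdriver.Chrome(token='YOUR_DEV_TOKEN',
                              project_name='My First Project',
                              job_name='My First Job')
    driver.get('https://example.testproject.io/web/')
    driver.find_element_by_css_selector('#name').send_keys('John Smith')
    driver.find_element_by_css_selector('#password').send_keys('12345')
    driver.find_element_by_css_selector('#login').click()
    assert driver.find_element_by_css_selector('#logout').is_displayed()
    driver.quit()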

7- Then you can see the reports published on your TestProject account.

 

You can find a lot more examples on their README file.

 

Resources:

https://testproject.io/

https://github.com/testproject-io/python-sdk

Open Banking Functional Conformance Suite Test Cases

What is Open Banking?

Open banking allows the use of open APIs, enabling third-party developers to build applications and services around financial institutions. It brings more financial transparency options for account holders, ranging from open data to private data.


Open Banking Functional Conformance Suite

To be able to get the Functional Conformance Certificate, Open Banking provides a Functional Conformance Tool that allows implementers to check whether their API has successfully implemented all required functional elements of the OBIE Read/Write API Specifications.

This Open Banking tool allows an ASPSP (Account Servicing Payment Service Provider) and a TPP (Third Party Provider) to test the response of any API endpoint and validate that the JSON and data formats meet the schema, permissions and interfaces defined by the Functional API standard.

How to identify Test Cases covered in the OB Functional Conformance Suite?

How do you know what else needs to be covered, and whether there is indeed something more to cover? After digging into the project on Bitbucket, I found some useful JSON files where you can check the assertions for each test case, the test cases themselves, and another file that translates the list of assertions.

So, you can find the asserts that are run for each test case inside the manifests folder.

For example, this one contains the assertions for this test case: “The x-fapi-interaction-id is replayed for an Account”. You can find the file with the account transactions test cases here.


Then you need to check what each assertion actually means; you can find the dictionary of the assertions in this file.

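To give an idea of the shape of these files (a simplified, hypothetical example — the exact field names vary between versions of the suite, so check the repository for the real format), a manifest test case references its assertions by code:

{
  "id": "OB-301-TRA-100100",
  "description": "The x-fapi-interaction-id is replayed for an Account",
  "uri": "/accounts/$accountId/transactions",
  "asserts": ["OB3GLOAssertOn200", "OB3GLOFAPIHeader"]
}

and the assertions dictionary then translates each code into a concrete expectation:

{
  "OB3GLOAssertOn200": {
    "expect": {
      "status-code": 200
    }
  }
}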

Remember that all the tests currently assume that consent is granted at the ASPSP portal for each requested PSU consent (Payment Service User consent).

Also, you will find that some test cases are missing, for instance what should happen when you send an invalid token to the payments endpoints. There is, however, a test case for the accounts endpoints where sending a token without the required permissions should get a 401 response.

In this example, you can see that for payments the consent model is a bit different: each access token doesn’t have a range of permissions, but is associated with a single payment consent id. So, in order to get a 401 response, the request can present the wrong token along with a payment call, or present no token at all. The conformance tool does not send any token in this instance.

So make sure you are aware of these gaps and cover the missing test cases with another approach.

I found it quite hard to get a straight answer about exactly which test cases are covered and in what detail, so I hope this brings a bit more clarity in case you are having the same issues.

Resources:

https://openbankinguk.github.io/knowledge-base-pub/conformance-tools/

https://openbanking.atlassian.net/wiki/spaces/DZ/pages/1061716467/Functional+Conformance

https://medium.com/zoidcoin-network/open-banking-use-cases-for-users-8678d11d770b

https://en.wikipedia.org/wiki/Open_banking

Load tests: JMeter vs k6

Hello all,

Today it’s the turn of JMeter and k6! As always, remember to check your other options and see what fits your project best.

JMeter is a great and powerful tool, but depending on what you really need (something lighter), JMeter might become an overcomplex, slow, hard-to-maintain tool.

In-built Protocols Support
  • JMeter: HTTP, FTP, JDBC, SOAP, LDAP, TCP, JMS, SMTP, POP3, IMAP
  • k6: HTTP/1.1, HTTP/2, WebSockets

Speed to write tests
  • JMeter: Slow
  • k6: Fast

Support of “Test as Code”
  • JMeter: GUI oriented; scripting is possible, but overly complex and poorly documented; weak (Java); hard to maintain
  • k6: Script oriented (JavaScript); easier to maintain

Ramp-up Flexibility
  • JMeter: Plugins available to configure flexible load
  • k6: Supports ramp-up phases and flexible load

Test Results Analyzing
  • JMeter: Yes
  • k6: Yes

Resources Consumption
  • JMeter: Heavy to run tests with multiple users on a single machine; higher memory consumption
  • k6: Lightweight; doesn’t take up much of your machine’s memory

Easy to use with Version Control Systems
  • JMeter: No
  • k6: Yes

Number of Concurrent Users
  • JMeter: Thousands, under restrictions
  • k6: Thousands

Recording Functionality
  • JMeter: Yes
  • k6: No, but it can auto-generate a k6 script from a HAR file

Distributed Execution
  • JMeter: Yes
  • k6: Yes

Load Tests Monitoring
  • JMeter: Listeners can be added, but they consume more memory
  • k6: Yes, results can be streamed in real time to external outputs (e.g. InfluxDB + Grafana)

JMeter is most used when:

  • You need to perform a complex load including different protocols
  • You need to record scenarios
  • You want a robust support and training ecosystem
  • You require a full scenario to be written for every test
  • You need to simulate a specific load with custom ramp-up patterns
  • You prefer a desktop UI app for script creation, or you do not know JavaScript/YAML/JSON well enough

 

k6 solves some specific problems:

  • CLI tool with developer-friendly APIs (see the script sketch below)
  • You can use HAR files to generate scripts from recorded sessions
  • Checks and thresholds – for goal-oriented, automation-friendly load testing
  • Open source, with great support and documentation
  • Lightweight; tests are written in JavaScript
  • Does not run on Node.js or in a browser (it is a self-contained binary)
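To make the “test as code” point concrete, here is a minimal k6 script sketch (the target URL, stages and threshold are placeholders to adapt to your own service):

import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  stages: [
    { duration: '30s', target: 20 }, // ramp up to 20 virtual users
    { duration: '1m', target: 20 },  // hold the load
    { duration: '10s', target: 0 },  // ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // goal-oriented: fail the run if p95 latency exceeds 500ms
  },
};

export default function () {
  const res = http.get('https://test.k6.io'); // placeholder target
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}

You then run it from the CLI with k6 run script.js.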

 

Resources:

https://k6.io/

Load Tests: JMeter vs Gatling

Hello guys,

Continuing the review of performance test tools, today it is the turn of JMeter and Gatling, which more and more people seem to be using nowadays. Remember to always check your other options and see what fits your project best.

 

JMeter is a great and powerful tool, but depending on what you really need (something lighter), JMeter might become an overcomplex, slow, hard-to-maintain tool.

In-built Protocols Support
  • JMeter: HTTP, FTP, JDBC, SOAP, LDAP, TCP, JMS, SMTP, POP3, IMAP
  • Gatling: HTTP, JMS, MQTT

Speed to write tests
  • JMeter: Slow
  • Gatling: Fast

Support of “Test as Code”
  • JMeter: GUI oriented; scripting is possible, but overly complex and poorly documented; weak (Java); hard to maintain
  • Gatling: Script oriented (Scala DSL); easier to maintain

Ramp-up Flexibility
  • JMeter: Plugins available to configure flexible load
  • Gatling: Supports ramp-up phases and flexible load

Test Results Analyzing
  • JMeter: Yes
  • Gatling: Yes

Resources Consumption
  • JMeter: Heavy to run tests with multiple users on a single machine; higher memory consumption
  • Gatling: Lightweight; doesn’t take up much of your machine’s memory

Easy to use with Version Control Systems
  • JMeter: No
  • Gatling: Yes

Number of Concurrent Users
  • JMeter: Thousands, under restrictions
  • Gatling: Thousands

Recording Functionality
  • JMeter: Yes
  • Gatling: Yes

Distributed Execution
  • JMeter: Yes
  • Gatling: Yes

Load Tests Monitoring
  • JMeter: Listeners can be added, but they consume more memory
  • Gatling: Yes, logs through the console, and reports are created at the end of the run

 

JMeter is most used when:

  • You need to perform a complex load including different protocols
  • You need to record scenarios
  • You want a robust support and training ecosystem
  • You require a full scenario to be written for every test
  • You need to simulate a specific load with custom ramp-up patterns
  • You prefer a desktop UI app for script creation, or you do not know JavaScript/YAML/JSON well enough

 

Gatling solves some specific problems:

  • Scenarios are written as code (a Scala DSL), so they are easy to keep in version control and to maintain (see the sketch below)
  • Fast to write tests and light on your machine’s resources
  • Supports ramp-up phases and flexible load injection
  • Detailed reports are created automatically at the end of each run
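To give a flavour of Gatling’s “test as code” style, here is a minimal simulation sketch (Gatling 3.x Scala DSL; the URL and load profile are placeholders):

import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class BasicSimulation extends Simulation {

  val httpProtocol = http.baseUrl("https://example.com") // placeholder target

  val scn = scenario("Basic load")
    .exec(http("home page").get("/").check(status.is(200)))
    .pause(1)

  // Ramp 20 virtual users over 30 seconds
  setUp(scn.inject(rampUsers(20) during (30.seconds))).protocols(httpProtocol)
}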

Resources:

https://gatling.io/

Load Tests: JMeter vs Artillery

Hello guys,

Continuing the review of performance test tools, today I am going to post a comparison of JMeter and Artillery. Most people still prefer JMeter, as it has been on the market longer, but it is always good to check your other options and see what fits your project best. I have used Locust and Artillery recently, and they are also great tools, easy to maintain and to script.

Just to remind again:

JMeter is a great and powerful tool, but depending on what you really need (something lighter), JMeter might become an overcomplex, slow, hard-to-maintain tool.

In-built Protocols Support
  • JMeter: HTTP, FTP, JDBC, SOAP, LDAP, TCP, JMS, SMTP, POP3, IMAP
  • Artillery: HTTP, Socket.io, WebSocket

Speed to write tests
  • JMeter: Slow
  • Artillery: Fast

Support of “Test as Code”
  • JMeter: GUI oriented; scripting is possible, but overly complex and poorly documented; weak (Java); hard to maintain
  • Artillery: Script oriented; strong (JSON/YAML – YAML is the recommended format since it allows comments); easier to maintain

Ramp-up Flexibility
  • JMeter: Plugins available to configure flexible load
  • Artillery: Supports ramp-up phases and flexible load

Test Results Analyzing
  • JMeter: Yes
  • Artillery: Yes

Resources Consumption
  • JMeter: Heavy to run tests with multiple users on a single machine; higher memory consumption
  • Artillery: Light to run tests with multiple users on a single machine; less memory consumption; doesn’t take up many of your machine’s resources; multicore support

Easy to use with Version Control Systems
  • JMeter: No
  • Artillery: Yes

Number of Concurrent Users
  • JMeter: Thousands, under restrictions
  • Artillery: Thousands

Recording Functionality
  • JMeter: Yes
  • Artillery: No

Distributed Execution
  • JMeter: Yes
  • Artillery: Yes

Load Tests Monitoring
  • JMeter: Listeners can be added, but they consume more memory
  • Artillery: No; reports are only created at the end, or you can check the terminal logs


 

JMeter is most used when:

  • You need to perform a complex load including different protocols
  • You need the script recording functionality
  • You require a full scenario to be written for every test
  • You need to simulate a specific load with custom ramp-up patterns
  • You prefer a desktop UI app for script creation, or you do not know JavaScript/YAML/JSON well enough

 

Artillery solves some specific problems:

  • You can write performance scripts pretty fast; there is even a “quick” mode where you don’t need to create any script
  • Push to your VCS and easily maintain the scripts
  • Artillery has WebSocket support out of the box and native support for Socket.io
  • Spend minimal time on maintenance, without additional GUI applications
  • Simulate thousands of test users on a local machine without needing multiple slaves; since it runs on Node.js, it is easy to install and lightweight (see the sketch below)
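For illustration, a minimal Artillery script sketch in the recommended YAML format (the target and load phases are placeholders):

config:
  target: "https://example.com"   # placeholder target
  phases:
    - duration: 60
      arrivalRate: 5    # start 5 new virtual users per second
      rampTo: 20        # and ramp up to 20 per second over the phase
scenarios:
  - name: "Browse the home page"
    flow:
      - get:
          url: "/"
      - think: 1        # pause for a second, like a real user

Run it with artillery run script.yml; the “quick” mode mentioned above needs no script at all, along the lines of artillery quick --count 10 -n 20 https://example.com.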

 

Resources:

https://artillery.io/faq.html

Load Tests: Locust vs JMeter

Hello guys,

Today I am going to post a comparison of these two load test frameworks. I know most people use JMeter, which has been on the market longer, but I have recently used Locust and also Artillery (I will post that comparison later) and the results are great: the team was able to improve both the creation and the maintainability of the tests.

At the end of the day you need to use the right tool for your needs. JMeter is a great and powerful tool, but depending on what you really need (something lighter), JMeter might become an overcomplex, slow, hard-to-maintain tool.

In-built Protocols Support
  • JMeter: HTTP, FTP, JDBC, SOAP, LDAP, TCP, JMS, SMTP, POP3, IMAP
  • Locust: HTTP, with the ability to add custom protocols

Speed to write tests
  • JMeter: Slow
  • Locust: Fast

Support of “Test as Code”
  • JMeter: GUI oriented; scripting is possible, but overly complex and poorly documented; weak (Java); hard to maintain
  • Locust: Script oriented; strong (Python); easier to maintain

Ramp-up Flexibility
  • JMeter: Plugins available to configure flexible load
  • Locust: Ability to specify a linear load only (so no complex ramp-up scenarios)

Test Results Analyzing
  • JMeter: Yes
  • Locust: Yes

Resources Consumption
  • JMeter: Heavy to run tests with multiple users on a single machine; higher memory consumption
  • Locust: Light to run tests with multiple users on a single machine; less memory consumption; doesn’t take up many of your machine’s resources

Easy to use with Version Control Systems
  • JMeter: No
  • Locust: Yes

Number of Concurrent Users
  • JMeter: Thousands, under restrictions
  • Locust: Thousands

Recording Functionality
  • JMeter: Yes
  • Locust: No

Distributed Execution
  • JMeter: Yes
  • Locust: Yes

Load Tests Monitoring
  • JMeter: Listeners can be added, but they consume more memory
  • Locust: Ability to monitor a basic load

 

JMeter is most used when:

  • You need to perform a complex load including different protocols
  • You need the script recording functionality
  • You need to simulate a specific load with custom ramp-up patterns
  • You prefer a desktop UI app for script creation, or you do not know Python well enough

 

Locust solves some specific problems:

  • You can write performance scripts pretty fast (see the sketch below)
  • Push to your VCS and easily maintain the scripts
  • Spend minimal time on maintenance, without additional GUI applications
  • Simulate thousands of test users on a local machine without needing multiple slaves
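As an example of how fast a Locust script is to write, here is a minimal sketch (Locust 1.x; the host and endpoint are placeholders):

from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Each simulated user waits 1-3 seconds between tasks
    wait_time = between(1, 3)

    @task
    def index(self):
        self.client.get("/")  # placeholder endpoint

Run it with locust -f locustfile.py --host=https://example.com and drive the number of users and the spawn rate from the web UI.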

 

Reducing the Scope of Load Tests with Machine Learning

Hello hello,

Today I am going to share a really interesting webinar by my friend Julio de Lima, one of the top QA influencers in Brazil. In this video he talks about how to reduce the scope of load tests using machine learning.

Load testing execution produces a huge amount of data. Investigation and analysis are time-consuming, and the numbers tend to hide important information about issues and trends. Using machine learning is a good way to tackle this, extracting meaningful insights about what happened during test execution.

Julio de Lima will show you how to use K-means clustering, a machine learning algorithm, to reduce almost 300,000 records to fewer than 1,000 and still get good insights into load testing results. He will explain K-means clustering, detail the use cases and applications where this method can be applied, and give the steps to help you reproduce a K-means clustering experiment in your own projects. You’ll learn how to use this machine learning algorithm to reduce the scope of your load testing and to get meaningful analysis from your data faster.
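This is not Julio’s actual code, but to give an idea of the technique, here is a tiny K-means sketch with scikit-learn (the random data below stands in for real load-test records):

import numpy as np
from sklearn.cluster import KMeans

# Stand-in for ~300,000 real load-test records: (response_time_ms, latency_ms)
records = np.random.rand(300_000, 2) * 1000

kmeans = KMeans(n_clusters=8, random_state=42).fit(records)

# Each cluster centre summarises thousands of similar requests, so you can
# analyse 8 representative points instead of 300,000 raw rows.
for centre, count in zip(kmeans.cluster_centers_, np.bincount(kmeans.labels_)):
    print(f"{count} requests around response_time={centre[0]:.0f}ms latency={centre[1]:.0f}ms")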

 

Thank you Julio 🙂

Your automation framework smells!

Code smell is anything that can slow down your process or increase the risk of introducing bugs when doing maintenance.

The vast majority of the places I have worked at think it is okay to have an automation project with poor quality. Unfortunately, this is an idea shared by many QAs as well. Who should test the test automation? It is probably alright to have duplicated code, layers and layers of abstraction… After all, it is not the product code, so why should we bother, right?

 

Automation is a development project that should follow the same best practices to avoid code smells. You need to ensure a minimum of quality in your project, so: add a code review process and a code quality tool, and test your code before pushing the PR (for example, change the expectations and see whether the test fails).

Of course you don’t need to go too deep and create unit/integration/performance tests for your automation project (who tests the tests, right?), but you definitely need to ensure you have a readable, maintainable, scalable automation project. It is going to be maintained by the team, so it needs to be simple, direct and easy to understand. If you spend the same amount of time on your automation as on your development code, something is wrong.

You want an extremely simple and easy-to-read automation framework, so you can have a lot more confidence that your tests are correct.

I will post here some of the most common anti-patterns that I have found during my career. You might have come across some others as well.

 

Common code smells in Automation framework

– Long class (God object): you need to scroll for hours to find something; it has loads of methods and functions, and you don’t even know what the class is about anymore.

 

– Long BDD scenarios: try to be as simple and straightforward as possible; if you create a long scenario, it is going to be hard to maintain, read and understand.

 

– BDD scenarios with UI actions: your tests should not rely on the UI, so avoid actions like click, type, etc. Use more generic actions like send or create, so that even if the UI changes, the action doesn’t need to change.

 

– Fragile locators / XPath from hell: any small change in the UI fails the tests and requires updating the locator.

 

– Duplicated code: identical or very similar code exists in more than one location. Even variables should be pain-free to maintain; any change means changing the code in multiple spots.

 

– Overcomplexity: forced usage of overcomplicated design patterns where a simpler design would be enough. Do you really need to use dependency injection?

 

– Indecent exposure: too many classes can see you; limit your scope.

 

– Shotgun surgery: a single change needs to be applied to multiple classes at the same time.

 

– Inefficient waits: they slow down the automation pipeline and can make your tests flaky.

 

– Variable mutations: very hard to refactor code, since the actual value is unpredictable and hard to reason about.

 

– Boolean blindness: it is easy to assert on the opposite value and still have the code type-check.

 

– Inappropriate intimacy: too many dependencies on implementation details of another class.

– Lazy class / freeloader: a class that doesn’t do much.

– Cyclomatic complexity: too many branches or loops; this may indicate that a function needs to be broken into smaller functions, or that it has potential for simplification.

 

– Orphan variable or constant class: a class that typically holds a collection of constants which belong elsewhere, owned by one of the other classes.

 

– Data clump: occurs when a group of variables is passed around together in various parts of the program; a long list of parameters is hard to read. In general, this suggests it would be more appropriate to formally group the different variables into a single object, and pass around only that object instead.
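A small illustration (with hypothetical names) of a data clump and its fix:

// Smell: the same group of values travels together through every call.
function createAccount(firstName, lastName, street, city) { /* ... */ }
function printAddressLabel(firstName, lastName, street, city) { /* ... */ }

// Fix: group the variables into a single object and pass only that.
function createAccountFor(customer) { /* ... */ }

const customer = { firstName: 'Ada', lastName: 'Lovelace', street: '12 Main St', city: 'London' }
createAccountFor(customer)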

 

– Excessively long identifiers: in particular, the use of naming conventions to provide disambiguation that should be implicit in the software architecture.

 

– Excessively short identifiers: the name of a variable should reflect its function unless the function is obvious.

 

– Excessive return of data: a function or method that returns more than what each of its callers needs.

 

– Excessively long line of code (or God line): a line of code which is too long, making the code difficult to read, understand, debug or refactor, or even to identify possibilities of software reuse.

 

How can you fix these issues?

  • Follow SOLID principles! Classes and methods should have a single responsibility.
  • Add a code review process and ask the team to review (developers and other QAs).
  • Watch how many parameters you are sending; maybe you should just send an object.
  • Add locators that are resistant to UI changes, focusing on ids first (see the sketch below this list).
  • Return an object grouping the data you need instead of returning loads of variables.
  • Name methods and classes as directly as possible; remember the SOLID principles.
  • If you have a method that just types text into a text field, it can be grouped into a function that performs the login().
  • If you have long lines of code, you might want to split them up into functions, or move part of an expression into a variable and format that variable, for example.
  • Think twice about boolean assertions; add a comment if you think one is not straightforward.
  • Follow a POM structure with helpers and common shared steps/functions to avoid long classes.
  • Do you really need this wait? You might be able to use a retry, or your current framework may have proper ways to deal with waits.
  • Add a code quality tool to review your automation code (e.g. ESLint, Code Inspector).
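For the locator advice, a quick before/after sketch (hypothetical selectors, using the selenium-webdriver JavaScript bindings):

const { Builder, By } = require('selenium-webdriver')

;(async () => {
  const driver = await new Builder().forBrowser('chrome').build()

  // Fragile: breaks as soon as the layout or styling changes
  await driver.findElement(By.xpath("//div[2]/form/span[@class='btn-primary']/button"))

  // Resilient: a stable id survives UI restructuring
  await driver.findElement(By.id('submit-order'))

  await driver.quit()
})()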

 

Resources:

https://en.wikipedia.org/wiki/Code_smell

https://pureadmin.qub.ac.uk/ws/portalfiles/portal/178889150/MLR_Smells_in_test_code_Dec_9.pdf

https://www.sealights.io/code-quality/the-problem-of-code-smell-and-secrets-to-effective-refactoring/

https://slides.com/angiejones/automation-code-smells-45

https://medium.com/ingeniouslysimple/should-you-refactor-test-code-b9508682816

Automation challenges

Hello guys!!

I thought about sharing something that we have all gone through at some point in our careers. Some of these challenges are related to a lack of standards, knowledge and processes, but others are related to company culture and people’s mindsets (this is the biggest challenge and the most difficult to change).

I posted about the difference between a growth mindset and a fixed mindset some years ago, after joining a workshop where Joanna Chwastowska (Engineering Lead at Google) explained how she learned to have a growth mindset.

 

Basically, we are always learning and you shouldn’t feel ashamed to admit that 🙂

 

Scope

What scenarios should I have for this layer of tests? What type of tests should I be responsible for? These should be the first questions you ask when deciding the scope of your tests. For each layer you need a different scope, otherwise you will not only duplicate scenarios, but also end up with an extensive pipeline that is hard or impossible to maintain.

Decide which layers you are going to cover (unit, integration, E2E tests…), then decide the scenarios for each layer. For the unit tests you need an extensive number of scenarios covering edge cases (special characters, etc.), then integration between components, and finally E2E tests on top with a reduced scope and no mocks. If you have a frontend, you should also consider UI tests comparing snapshots.

 

Team collaboration

Make sure you have a team that is working towards the same goal, and improve the communication as much as possible. Do workshops, demos, pair programming and code reviews, and get feedback so you can keep improving until you find the best way to work while keeping the business happy.

Understand the expectations and align them; there is nothing better than having a nice bunch of people to spend ~8 hours a day, 5 days a week with, and achieving something together.

 

Time constraints

How many releases do you have per day? How many projects are you allocated to? How many developers do you have in your team? You might have noticed that you usually have more than one developer for each QA in a team, and this is okay as long as you manage what you are going to test. You can’t save the world; don’t test everything, and focus on the end-user flow above everything else, as this is the front door of the product.

Something you need to take into account is the scope of the regression pack; you probably want automation for that, right? I am completely against having manual tests in the regression pack, unless there is a strong reason why.

 

Finding elements

Do you remember when you couldn’t click on that element because there was a div on top of it? This is one of the problems you might have faced already. Or maybe it was a badly structured XPath, or too many elements with the same CSS selector?

If you are testing React apps, here is something that helped when writing the tests: I asked the developers to add a dedicated test attribute (data-test, data-test-id, data-testid or data-cy) to the elements. Adding this kind of attribute is considered a best practice, since it makes the automation resilient to change and is dedicated to testing.
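For example (hypothetical component and selector):

// The React component exposes a dedicated, test-only attribute:
// <button data-testid="submit-order">Submit</button>

// A Cypress test can then select it without relying on classes or structure:
cy.get('[data-testid="submit-order"]').click()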

 

Flaky tests

Yeah… we all know the struggle is REAL!

I’ve faced this issue recently when writing tests with Espresso on Android apps: instead of using waits, remember to use idling resources, which synchronize the subsequent operations in a UI test.

For React apps you can use frameworks like Cypress, TestCafe and Detox, which run in the same run-loop as the application being tested and can automatically wait for assertions and commands before moving ahead.

These are some of the reasons you can have flaky tests, but there are some other reasons like:

  • Environment/server is not stable
  • 3rd party system integration is not stable
  • Concurrency tests
  • Caching
  • Setup/teardown tests
  • Dynamic content

Identify the reason first so you can take the correct action, but definitely tag the test as soon as possible, since you won’t be able to trust it until it has been fixed.

 

#BeSafe

Contract Testing with Pact.js + GraphQL

Contract Tests vs Integration Tests

  • Trustworthy, like API tests: even though the contract test mocks the provider/consumer, you know the mock is based on the contract that was generated.
  • Reliable, because you don’t depend on your internet connection to get consistent results (when your API doesn’t have third-party integrations, or you are testing locally).
  • Fast, because you don’t need an internet connection; everything is mocked using the contract that was generated.
  • Cheap, because you don’t spend a huge amount of time creating a pact test or running it, and even less maintaining it.

Contract Tests: Trustworthy, Reliable, Fast, Cheap
API Tests: Trustworthy, Not reliable, Slow, Expensive

Remember that contract tests are NOT about testing the performance of your microservice. So, if you have API tests that take ages to execute, fail because the server doesn’t reply fast enough, or time out, this means you have a performance problem, or it is just your internet connection. In either case you need to isolate the problem and create targeted tests that verify the performance of your server, not the expected response body/code.

How it works

You can use a framework like Pact, which will generate the contract details and fields from the consumer. You need to specify the data you are going to send, and in the verification part you will use the same function the app would use to make requests to the API.

Contract tests are part of an integration test stage where you don’t really need to hit the API, so they are faster and more reliable, completely independent of your internet connection. They are trustworthy, since you generate the contract with the same function and in the same way you would use the consumer to hit the provider. Pact is responsible for generating this contract for you, so you just need to worry about passing the data and adding the assertions, like response code, headers, etc. It seems pretty straightforward to know who is the consumer, who is the provider and the contract that you are going to generate, but imagine a more complex real-life scenario with several microservices.

In this case you have multiple microservices communicating with each other, and sometimes a service is the provider while at other times the same service is the consumer. So, to keep the house organised when maintaining these services, you need to create a pact between each pair of them.

 

The fun part

So let’s get hands-on now and see how we can actually create these contracts.

Create a helper for the consumer to set up and finalise the provider (this will be the Pact mock the consumer points to when creating the pact):

import { Pact } from '@pact-foundation/pact'
import path from 'path'

jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000

export const provider = new Pact({
   port: 20002,
   log: path.resolve(process.cwd(), 'logs', 'mockserver-integration.log'),
   dir: path.resolve(process.cwd(), 'pacts'),
   pactfileWriteMode: 'overwrite',
   consumer: 'GraphQLConsumer',
   provider: 'GraphQLProvider'
})

beforeAll(() => provider.setup())
afterAll(() => provider.finalize())

// verify with Pact, and reset expectations
afterEach(() => provider.verify())

 

Then create a consumer file where you describe the data you want to check and the response you expect from the GraphQL API.

import { Matchers, GraphQLInteraction } from '@pact-foundation/pact'
import { addTypenameToDocument } from 'apollo-utilities'
import gql from 'graphql-tag'
import { print } from 'graphql' // used to serialise the query below
import graphql from 'graphQLAPI'
import { provider } from './provider-helper' // the helper defined above (file name is an assumption)

const { like } = Matchers

// Assumed here for completeness: the query must match the one the consumer actually sends
const productsList = `
  query ProductsList {
    productsList {
      items {
        id
        disabled
        type
      }
      nextToken
    }
  }
`

const product = {
  id: like('456789').contents,
  disabled: false,
  type: like('shampoo').contents,
}

describe('GraphQL', () => {
  describe('query product list', () => {
    beforeEach(() => {
      const graphqlQuery = new GraphQLInteraction()
        .uponReceiving('a list of products')
        .withRequest({
          path: '/graphql',
          method: 'POST'
        })
        .withOperation('ProductsList')
        .withQuery(print(addTypenameToDocument(gql(productsList))))
        .withVariables({})
        .willRespondWith({
          status: 200,
          headers: {
            'Content-Type': 'application/json; charset=utf-8'
          },
          body: {
            data: {
              productsList: {
                items: [
                  product
                ],
                nextToken: null
              }
            }
          }
        })
      return provider.addInteraction(graphqlQuery)
    })

    it('returns the correct response', async () => {
      expect(await graphql.productsList()).toEqual([product])
    })
  })
})

When you run the script above, Pact creates a .json file in your pacts folder, and this will be used to test the provider side. So this is going to be the source of truth for your tests.

This is the basic template if you are using Jest: just set the timeout, and then use the same functions that the consumer uses to communicate with the provider. You just need to decide how you are going to inject the data into your local database; you can pre-generate all the data in beforeAll or a pre-test step, and then add a post-test step or a function in afterAll to clean the database once the tests are done.

The provider.js file should be something similar to this one:

import { Verifier } from '@pact-foundation/pact'
import path from 'path'
import server from 'server'

jest.setTimeout(30000)

beforeAll(async () => {
  server.start('local') // boot the provider (the real API implementation) locally
})

afterAll(async () => {
  server.tearDown()
})

describe('Contract Tests', () => {
  it('validates the pact is correct', () => {
    const config = {
      pactUrls: [path.resolve(process.cwd(), 'pacts/graphqlconsumer-graphqlprovider.json')],
      pactBrokerPassword: "Password",
      pactBrokerUrl: "https://test.pact.com/",
      pactBrokerUsername: "Username",
      provider: 'GraphQLProvider',
      providerBaseUrl: server.getGraphQLUrl(),
      publishVerificationResult: true
    }
    return new Verifier(config).verifyProvider()
  })
})

In the end you just need to verify that the contract is still valid after your changes to the provider or consumer; for this reason you don’t need to add edge-case scenarios, just exactly what the provider is expecting as data.

 

Resources:

https://docs.pact.io/

https://docs.pact.io/pact_broker/advanced_topics/how_pact_works

https://medium.com/humanitec-developers/testing-in-microservice-architectures-b302f584f98c