Reducing the Scope of Load Tests with Machine Learning

Hello hello,

Today I am going to share a really interesting webinar by my friend Julio de Lima, one of the top QA influencers in Brazil. In this video he talks about how to reduce the scope of load tests using machine learning.

Load testing execution produces a huge amount of data. Investigation and analysis are time-consuming, and numbers tend to hide important information about issues and trends. Using machine learning is a good way to solve data issues by giving meaningful insights about what happened during test execution.

Julio de Lima will show you how to use K-means clustering, a machine learning algorithm, to reduce almost 300,000 records to fewer than 1,000 and still get good insights into load testing results. He will explain K-means clustering, detail the use cases and applications this method fits, and give the steps to help you reproduce a K-means clustering experiment in your own projects. You’ll learn how to use this machine learning algorithm to reduce the scope of your load testing and get meaningful analysis from your data faster.

 

Thank you Julio 🙂

Your automation framework smells!

A code smell is anything that can slow down your process or increase the risk of introducing bugs during maintenance.

The vast majority of the places I have worked think it is okay to have an automation project with poor quality. Unfortunately, this is an idea shared by many QAs as well. Who should test the test automation? It is probably alright to have duplicated code, layers and layers of abstraction… After all, it is not the product code, so why should we bother, right?

 

Automation is a development project that should follow the same best practices to avoid code smells. You need to ensure a minimum level of quality in your project, so: add a code review process, add a code quality tool, and test your code before pushing the PR (for example, change the expectations and check that the test fails).

Of course you don’t need to go too deep and create unit/integration/performance tests for your automation project (who tests the tests, right?), but you definitely need to ensure you have a readable, maintainable, scalable automation project. It is going to be maintained by the team, so it needs to be simple, direct and easy to understand. If you spend the same amount of time on your automation as on your development code, something is wrong.

You want to have an extremely simple and easy to read automation framework, so you can have a lot more confidence that your tests are correct. 

I will post here some of the most common anti-patterns that I have found during my career. You might have come across some others as well.

 

Common code smells in automation frameworks

– Long class (God object), you need to scroll for hours to find something; it has loads of methods and functions, and you don’t even know what the class is about anymore.

 

– Long BDD scenarios, try to be as simple and straightforward as possible; if you create a long scenario it is going to be hard to maintain, read and understand.

 

– BDD scenarios with UI actions, your tests should not rely on the UI, so avoid actions like click, type, etc. Use more generic actions like send or create, so that even if the UI changes the action doesn’t need to change.

 

– Fragile locators / XPath from hell, any small change in the UI will fail the tests and require updating the locator.

 

– Duplicate code, identical or very similar code exists in more than one location. Even variables should be painless to maintain. Any change means changing the code in multiple spots.

 

– Overcomplexity, forced usage of overcomplicated design patterns where a simpler design would be enough. Do you really need to use dependency injection?

 

– Indecent Exposure, too many classes can see you, limit your scope.

 

– Shotgun surgery, a single change needs to be applied to multiple classes at the same time.

 

– Inefficient waits, they slow down the automation test pipeline and can make your tests flaky.

 

– Variable mutations, very hard to refactor code since the actual value is unpredictable and hard to reason about.

 

– Boolean blindness, it is easy to assert on the opposite value and the code still type-checks.

 

– Inappropriate intimacy, too many dependencies on implementation details of another class.

– Lazy class / freeloader, a class that doesn’t do much.

– Cyclomatic complexity, too many branches or loops, this may indicate a function needs to be broken into smaller functions, or that it has potential for simplification.

 

– Orphan variable or constant class, a class that typically holds a collection of constants which belong elsewhere, where each constant should be owned by one of the other classes.

 

– Data clump, occurs when a group of variables is passed around together in various parts of the program as a long list of parameters, which is hard to read. In general, this suggests that it would be more appropriate to formally group the different variables into a single object and pass around only that object instead.

 

– Excessively long identifiers, in particular, the use of naming conventions to provide disambiguation that should be implicit in the software architecture.

 

– Excessively short identifiers, the name of a variable should reflect its function unless the function is obvious.

 

– Excessive return of data, a function or method that returns more than what each of its callers needs.

 

– Excessively long line of code (or God Line), a line of code which is too long, making the code difficult to read, understand, debug, refactor, or even identify possibilities of software reuse.

 

How can you fix these issues?

  • Follow SOLID principles! Classes and methods should have a single responsibility.
  • Add a code review process and ask the team to review (developers and other QAs).
  • Watch how many parameters you are sending; maybe you should just send an object (see the sketch after this list).
  • Add locators that are resistant to UI changes, focusing on IDs first.
  • Return an object with the group of data you need instead of returning loads of variables.
  • Name methods and classes as directly as possible, remembering the SOLID principles.
  • If you have a method that just types text into a text field, it can be grouped into a function that performs the login().
  • If you have long lines of code, you might want to split them into functions, or move part of the expression into a variable and then format that variable, for example.
  • Think twice about boolean assertions, and add a comment if you think one is not straightforward.
  • Follow a POM (Page Object Model) structure with helpers and common shared steps/functions to avoid long classes.
  • Do you really need this wait? You might be able to use a retry, or maybe your current framework has ways to deal with waits properly.
  • Add a code quality tool to review your automation code (e.g. ESLint, Code Inspector).
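
As an illustration of a couple of these fixes, here is a minimal sketch (assuming a TestCafe-style project; the page, selectors and credentials object are hypothetical) that prefers ID locators over brittle XPath and groups a data clump of parameters into a single object:

import { Selector, t } from 'testcafe'

// Page object with ID-first locators instead of fragile XPath like //div[3]/form/div[2]/input[1]
class LoginPage {
  constructor () {
    this.emailInput = Selector('#email')
    this.passwordInput = Selector('#password')
    this.submitButton = Selector('#login-submit')
  }

  // One high-level action instead of exposing low-level UI steps to the tests
  async login (credentials) {
    await t
      .typeText(this.emailInput, credentials.email)
      .typeText(this.passwordInput, credentials.password)
      .click(this.submitButton)
  }
}

export const loginPage = new LoginPage()

// Usage in a test: pass a single object instead of a long parameter list
// await loginPage.login({ email: 'qa@example.com', password: 'secret' })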

 

Resources:

https://en.wikipedia.org/wiki/Code_smell

https://pureadmin.qub.ac.uk/ws/portalfiles/portal/178889150/MLR_Smells_in_test_code_Dec_9.pdf

https://www.sealights.io/code-quality/the-problem-of-code-smell-and-secrets-to-effective-refactoring/

https://slides.com/angiejones/automation-code-smells-45

https://medium.com/ingeniouslysimple/should-you-refactor-test-code-b9508682816

TestProject Cloud Integrations

Test clouds are a great solution for having multiple devices and browsers running your tests in parallel. They are really cost-effective, since you don’t need to own real devices and machines to get your tests running, but there are some cons as well, such as possible bandwidth issues.

Many frameworks are already able to run your tests in the cloud, and it is really easy to set up: you just need to know the command to pass, like you would do on Jenkins or any other CI tool. Currently TestProject is able to run tests in the SauceLabs and BrowserStack clouds, and you can set up either of them quite easily following the documentation for SauceLabs here and BrowserStack here.

 

Pros vs cons of having your tests running in the cloud

 

Pros:

  • Dynamic test environment that is easy to set up
  • Faster than having real devices
  • Scalable
  • Customisable environment
  • Cost-effective
  • You can access it any time, 24/7
  • Improves team collaboration

Cons:

  • Possible bandwidth issues
  • Loss of autonomy
  • Small security risk
  • No free tools

 

Resources:

https://link.testproject.io/wpq

https://docs.testproject.io/testproject-integrations/browserstack-integration

https://www.lambdatest.com/blog/benefits-of-website-testing-on-cloud/

Contract Testing with Pact.js + GraphQL

Contract Tests vs Integration Tests

  • Trustworthy like API tests: even though the contract test mocks the provider/consumer, you know the mock is based on the contract that was generated.
  • Reliable because you don’t depend on your internet connection to get consistent results (when your API doesn’t have third-party integrations or you are testing locally).
  • Fast because you don’t need an internet connection; everything is mocked using the contract that was generated.
  • Cheap because you don’t spend a huge amount of time creating a pact test or running it, and even less maintaining it.
Contract Tests     API Tests
Trustworthy        Trustworthy
Reliable           Not reliable
Fast               Slow
Cheap              Expensive

Remember contract tests are NOT about testing the performance of your microservice. So, if you have API tests that are taking ages to execute, failing because the server does not reply fast enough, or timing out, this means you either have a performance problem or it is just your internet connection. In either case you need to separate the problem and create targeted tests that verify the performance of your server rather than the expected response body/code.

How it works

You can use a framework like Pact, which will generate the contract details and fields from the consumer. You need to specify the data you are going to send, and in the verification part you will use the same function the app would use to make the requests to the API.

Contract tests are part of an integration test stage where you don’t really need to hit the API, so they are faster and more reliable, completely independent of your internet connection. They are trustworthy since you are generating the contract with the same function and in the same way as the consumer hits the provider. Pact is responsible for generating this contract for you, so you just need to worry about passing the data and adding the assertions, like response code, headers, etc. It seems pretty straightforward to know who is the consumer, who is the provider and which contract you are going to generate, but imagine a more complex real-life scenario where you have a structure like this:

In this case you have multiple microservices communicating with each other, and sometimes a service is the provider and sometimes the same service is the consumer. So, to keep the house organised when maintaining these services, you need to create a pact between each pair of them.

 

The fun part

So let’s get hands-on now and see how we can actually create these contracts.

Create a helper for the consumer to set up and finalise the provider (this will be the Pact mock that the consumer is going to point to when creating the pact).

import { Pact } from '@pact-foundation/pact'
import path from 'path'

jasmine.DEFAULT_TIMEOUT_INTERVAL = 10000

export const provider = new Pact({
   port: 20002,
   log: path.resolve(process.cwd(), 'logs', 'mockserver-integration.log'),
   dir: path.resolve(process.cwd(), 'pacts'),
   pactfileWriteMode: 'overwrite',
   consumer: 'GraphQLConsumer',
   provider: 'GraphQLProvider'
})

beforeAll(() => provider.setup())
afterAll(() => provider.finalize())

// verify with Pact, and reset expectations
afterEach(() => provider.verify())

 

Then create a consumer file where you define the data you want to check and the response you are expecting from the GraphQL API.

import { Matchers, GraphQLInteraction } from '@pact-foundation/pact'
import { addTypenameToDocument } from 'apollo-utilities'
import { print } from 'graphql'
import gql from 'graphql-tag'
import graphql from 'graphQLAPI'

const { like } = Matchers

// `like` creates a matcher, so the contract checks the type/format of these
// fields rather than their exact values
const product = {
  id: like('456789').contents,
  disabled: false,
  type: like('shampoo').contents,
}

// The query string sent by the consumer (shape assumed from the mocked
// response below; adjust it to your real schema)
const productsList = `query ProductsList { productsList { items { id disabled type } nextToken } }`

describe('GraphQL', () => {
  describe('query product list', () => {
    beforeEach(() => {
      const graphqlQuery = new GraphQLInteraction()
        .uponReceiving('a list of products')
        .withRequest({
          path: '/graphql',
          method: 'POST'
        })
        .withOperation('ProductsList')
        .withQuery(print(addTypenameToDocument(gql(productsList))))
        .withVariables({})
        .willRespondWith({
          status: 200,
          headers: {
            'Content-Type': 'application/json; charset=utf-8'
          },
          body: {
            data: {
              productsList: {
                items: [
                  product
                ],
                nextToken: null
              }
            }
          }
        })
      return provider.addInteraction(graphqlQuery)
    })

    it('returns the correct response', async () => {
      expect(await graphql.productsList()).toEqual([product])
    })
  })
})

When you run the script above, Pact is going to create a .json file in your pacts folder, and this will be used to test the provider side. So, this is going to be the source of truth for your tests.
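
For reference, the generated pact file looks roughly like this (trimmed to the main fields; the interaction matches the one defined above):

{
  "consumer": { "name": "GraphQLConsumer" },
  "provider": { "name": "GraphQLProvider" },
  "interactions": [
    {
      "description": "a list of products",
      "request": { "method": "POST", "path": "/graphql" },
      "response": { "status": 200 }
    }
  ],
  "metadata": { "pactSpecification": { "version": "2.0.0" } }
}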

This is the basic template if you are using Jest: just set the timeout, and then start the provider the same way the consumer would communicate with it. You just need to decide how you are going to inject the data into your local database: you can pre-generate all the data in the beforeAll or in a pre-test step, and then add a post-test step or a function in your afterAll to clean the database once the tests are done.

The provider.js file should be something similar to this one:

import { Verifier } from '@pact-foundation/pact'
import path from 'path'
import server from 'server'

jest.setTimeout(30000)

// Start the provider locally before verifying the pact against it
beforeAll(async () => {
  server.start('local')
})

afterAll(async () => {
  server.tearDown()
})

describe('Contract Tests', () => {
  it('validates the pact is correct', () => {
    const config = {
      pactUrls: [path.resolve(process.cwd(), 'pacts/graphqlconsumer-graphqlprovider.json')],
      pactBrokerPassword: 'Password',
      pactBrokerUrl: 'https://test.pact.com/',
      pactBrokerUsername: 'Username',
      provider: 'GraphQLProvider',
      providerBaseUrl: server.getGraphQLUrl(),
      publishVerificationResult: true
    }
    return new Verifier(config).verifyProvider()
  })
})

In the end you just need to verify that the contract is still valid after your changes to the provider or the consumer. For this reason you don’t need to add edge-case scenarios, just exactly what the provider is expecting as data.

 

Resources:

https://docs.pact.io/

https://docs.pact.io/pact_broker/advanced_topics/how_pact_works

https://medium.com/humanitec-developers/testing-in-microservice-architectures-b302f584f98c

AWS Lex Chatbot + Kubernetes Test Strategies

Hello guys, this post is really overdue as I left this project some months ago, but still useful to share 🙂

If you are using React, AWS Lex and Kubernetes to develop your chatbot, then you might find this article useful, as this was the tech stack I used on that previous project.

I am going to go through the test/release approach, which I believe worked quite well: it caught different types of bugs, and we had continuous development with fully automated release pipelines and manual tests only for new features, but we could have improved the API integration part (that one required a lot of maintenance).

You need to understand a bit about how NLP (Natural Language Processing) works, so you can plan your regression pack and exploratory tests around the possible scenarios/questions/utterances.

If you think about how your brain learns to communicate, you will notice it needs something like a manual to associate words with actions/objects/feelings, etc. NLP gives the bot a similar set of tools and techniques for becoming fluent in the user’s language: it is how the bot works out what the user is trying to achieve so it can respond with a useful result.

 

Test approach

You can have a set of different types of tests, but these are the ones I focused on most and how we used them:

  • Exploratory tests for the new features (manual tests)
  • E2E (UI) tests with an example of each functionality for the regression pack (automated tests)
  • API integration tests with happy-path scenarios (automated tests)
  • Utterances vs expected intents (a data test to check whether the phrases were triggering the correct intent/AWS response)
  • Performance tests to review the performance of the internal microservices (automated tests)

 

Exploratory tests

Performing exploratory tests on the new features will give you a better understanding of how the algorithm replies to and understands typos and user sentences. It is also about testing the usability of the bot independently of the platform (mobile, web). This type of test is essential to be certain the user experience is good from beginning to end. Imagine if the bot popped up every time you navigated through the pages on the website? That would be really annoying, right?

The scope for this kind of test was really reduced: just the acceptance criteria and checking the usability of the bot. As always there are some exceptions where you might have to test some negative scenarios as well, especially if it is a brand new functionality.

 

End-To-End Tests

The regression test pack was built with TestCafe and contained only the happy path, but randomly chose the utterance for each intent. Some of the intents were customised in my previous project, therefore we had to run some tests to ensure that the cards and response messages were rendering correctly on the bot.

We were always using real data, integrating with the real QA server. As the aim was not to test AWS but the bot, we had a script that ran before the tests to download the latest bot version (intents/utterances/slots) into a JSON file, and we used this data to run the tests on the bot against the server.

If you are not mocking the server and you want to see how the chatbot is going to behave with a real end-user flow, just be aware that this type of test has pros and cons: sometimes TestCafe was failing because there was a problem with the client and sometimes with the server. That’s okay if you want to make sure the chatbot is working smoothly from start to finish and you have end-to-end control (backend/frontend).

You can mock the AWS Lex responses using the chatbot JSON file, so you can return the response expected for that utterance. In this case you are not testing whether the entire flow is correct, but you are making sure the bot renders as expected. Just be aware that you would then be doing UI tests rather than E2E tests, but this is a good approach if you don’t have full control of the product (backend/frontend).
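
To make this more concrete, here is a minimal sketch of what one of these regression tests could look like in TestCafe (the page URL, selectors and utterance below are hypothetical and would come from your own bot export):

import { Selector } from 'testcafe'

fixture('Chatbot regression - happy path')
  .page('https://qa.example.com/chat') // hypothetical QA environment URL

test('replies with the expected message for a card question', async t => {
  const chatInput = Selector('#chat-input')            // hypothetical selectors
  const lastBotMessage = Selector('.bot-message').nth(-1)

  await t
    .typeText(chatInput, 'get my card')                // utterance picked from the exported bot JSON
    .pressKey('enter')
    .expect(lastBotMessage.exists).ok({ timeout: 10000 })
    .expect(lastBotMessage.textContent).contains('card')
})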

 

Integration Tests

For the integration tests between the internal microservices, you can use Postman and Newman to run them on the command line.
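
As a rough sketch, the collection can also be run from a small Node script using Newman's library API, which makes it easy to plug into the pipeline (the collection and environment file names here are hypothetical):

const newman = require('newman')

// Run a Postman collection and fail the process if any assertion fails
newman.run({
  collection: require('./chatbot-integration.postman_collection.json'), // hypothetical file
  environment: require('./qa.postman_environment.json'),                // hypothetical file
  reporters: ['cli'],
  timeoutRequest: 30000 // give the real server some time to reply
}, (err, summary) => {
  if (err || summary.run.failures.length > 0) {
    process.exit(1)
  }
})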

Cons:

– It is horrible for maintenance (think how often you are going to update these scripts);

– Need to add timeout conditions to wait for the real server to reply/the response data to change (even though these waits helped us to find bugs on the database);

– It takes a long time to run as it depends on the server’s conditions;

– Debugging is not great; you need to add print-outs most of the time;

 

Pros:

– It is easy to write the scripts;

– It is fast to get running; you can add hundreds of flows in minutes;

– You can separate the types of tests into collections and use them for feature targeting;

 

In the end, the best approach would be to add contract tests for the majority of the scenarios: you would save time on maintenance, wouldn’t need to add retries with timeouts, and so on.

 

Performance Tests

You can use a framework like Artillery to create the scripts and check how the services are going to behave under a certain condition, for example X requests per second. Usually you will have a KPI as a requirement, and from this you will create your script. It is pretty simple and you can simulate the flows of the user.

AWS Lex used to have quite a low limit of requests per second (maybe they have increased it by now). You may observe that, depending on the load you send, requests fail before even reaching your service. It seems you can only use the $LATEST version for manual tests, not automated ones, so keep this in mind, as they suggest.

You can also check how to create a performance script in a previous post here and if you need to run these tests in the same kubernetes cluster (so you are able to reach the internal microservices) you can check it here.

 

Data Tests

If you want to test the intent responses and check they are the expected ones, you can create scripts using the AWS CLI Lex models commands to export the bot to a JSON file, and then extract from this JSON the utterances and the expected intent response.

With the JSON file in hand you will be able to go through all the intent types and all the utterances and check if they return the expected responses (cards, messages, links). A common error is when you have similar/duplicated utterances and the bot responds with a different intent than the expected one, maybe because the bot is misunderstanding and getting confused with an utterance from another intent.

For example, if you have an utterance like “get my card” that calls an intent X, and then another utterance “get my credit card” in intent Y, the bot can get confused and use the same intent X to reply to both, instead of recognising they belong to different intents. This happens because the bot is constantly learning what the user means and tries to pick the most probable intent.
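
A minimal sketch of this data check, assuming the bot has already been exported to a local JSON file (the file name and exact field names below follow the usual Lex export shape and should be adjusted to your own export):

const fs = require('fs')

// Load the exported bot definition
const bot = JSON.parse(fs.readFileSync('./bot-export.json', 'utf8'))
const intents = bot.resource.intents || []

// Build a map of utterance -> intents that declare it
const utteranceMap = {}
for (const intent of intents) {
  for (const utterance of intent.sampleUtterances || []) {
    const key = utterance.toLowerCase().trim()
    utteranceMap[key] = (utteranceMap[key] || []).concat(intent.name)
  }
}

// Flag utterances that appear in more than one intent, since these are the
// ones most likely to trigger the wrong response
for (const [utterance, intentNames] of Object.entries(utteranceMap)) {
  if (intentNames.length > 1) {
    console.log(`Duplicated utterance "${utterance}" found in: ${intentNames.join(', ')}`)
  }
}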

 

Challenges

Some challenges that you might face during chatbot development:

– Duplicated utterances across different intents (utterances triggering the wrong intent); be sure you have a robust map of intents/utterances;

– AWS Lex console quality issues, and support that was really slow to fix bugs (2-3 months to release a fix);

– Getting all the possible ways to ask for a question/service/etc.; try to get some help from call centre people;

 

There is a really good talk about how to test a chatbot by Lina Zubyte, where she explains more about exploratory tests and the comprehension and character of the bot.

 

 

 

Also, if you are interested, this is an AI & Machine Learning AWS tech talk from the 2018 WebSummit in Lisbon, Portugal:

https://www.twitch.tv/videos/342323615?t=00h55m50s

 

Resources:

https://aws.amazon.com/lex/

https://aws.amazon.com/what-is-a-chatbot/

http://www.nlp.com/what-is-nlp/

 

Chaos Engineering: Why Breaking Things Should be Practiced

Hello guys,

Last week I went to the WebSummit 2018 conference in Lisbon and I managed to join some of the AWS talks. The talk that I am posting today is about chaos engineering, which specifically addresses the uncertainty of distributed systems at scale. The aim of this practice is to uncover system weaknesses and build confidence in the system’s capabilities.

The harder it is to disrupt the steady state, the more confidence we have in the behavior of the system.  If a weakness is uncovered, we now have a target for improvement before that behavior manifests in the system at large.

Today I am going to post the video on the exact moment that this talk starts.

https://player.twitch.tv/?autoplay=false&t=02h05m17s&video=v333130731

This talk is presented by AWS Technical Evangelist Adrian Hornsby.

You can find tools to help you with the tests in this repo:

https://github.com/dastergon/awesome-chaos-engineering#notable-tools

 

References:

https://principlesofchaos.org/

https://www.twitch.tv/videos/333130731

Amazing repo with content/links about the topic: https://github.com/dastergon/awesome-chaos-engineering

How to measure exploratory tests?

Hello guys,

Many people who are not from the QA area don’t know how to measure exploratory tests or what the advantages of doing them are, but it is a really powerful technique when used correctly. Its effectiveness depends on several intangibles: the skill of the tester, their intuition, their experience, and their ability to follow hunches.

 

Value

  • detects subtle or complex bugs in a system (that are not detected in targeted testing)
  • provides user-oriented feedback to the team

Exploratory testing aims to find new and undiscovered problems. It contrasts with other more prescribed methods of testing, such as test automation, which aims to show scripted tests can complete successfully without defects. It will help you write new automated tests to ensure that problems aren’t repeated.

If you have any doubts about exploratory tests, such as examples and the advantages of doing them, have a look at the video below first:

99 Second Introduction to Exploratory Testing | MoT

 

When should you perform exploratory tests

Exploratory testing works best on a system that has enough functionality for you to interact with it in a meaningful way. This could be before you release your first minimum viable product in beta or before you release a major new feature in your service.

How to measure

Always test in sessions:
  1. Charter
  2. Time Box
  3. Debriefing
  4. Mind Maps

 

Charter

  • Mission for the session
  • What should be tested, how it should be tested, and what problems to look for
  • It is not meant to be a detailed plan
  • Specific charters provide better focus, but take more effort to design: “Test clip art insertion. Focus on stress and flow techniques, and make sure to insert into a variety of documents. We’re concerned about resource leaks or anything else that might degrade performance over time.”

99 Second Introduction to Charters | MoT

 

Time Box

  • Focused test effort of fixed duration
  • Brief enough for accurate reporting
  • Brief enough to allow flexible scheduling
  • Brief enough to allow course correction
  • Long enough to get solid testing done
  • Long enough for efficient debriefings
  • Beware of overly precise timing

Session times:
  • Short: 60 minutes (±15)
  • Normal: 90 minutes (±15)
  • Long: 120 minutes (±15)

Debriefing

  • Measurement begins with observation
  • Session metrics are checked
  • Charter may be adjusted
  • Session may be extended
  • New sessions may be chartered
  • Coaching happens

 

Mind maps

Mind maps can be useful to document exploratory testing in a diagram instead of writing out the scenarios. They are a visual thinking tool, and they are quick and easy to record as they don’t follow a linear approach.

 

Session metrics

The session metrics are the primary means to express the status of the exploratory test process. They contain the following elements:

  • Number of sessions completed
  • Number of problems found
  • Function areas covered
  • Percentage of session time spent setting up for testing
  • Percentage of session time spent testing
  • Percentage of session time spent investigating problems

 

Coverage

  • Coverage areas can include anything
  • Areas of the product
  • Test configuration
  • Test strategies
  • System configuration parameters
  • Use the debriefings to check the validity of the specified coverage areas

 

Reporting

  • Create a charter
  • Features you’ve tested
  • Notes on how you conducted the testing
  • Notes on any bugs you found
  • A list of issues (questions and concerns about the product or project that arose during testing)
  • Extra materials you used to support testing
  • How much time you spent creating and executing tests
  • How much time you were investigating and reporting bugs
  • How much time you were setting up the session

 

Tools

I like to use Katalon or Jing, but to be honest this is just to record and take screenshots of the test sessions. To do this kind of test you just need pen and paper to write down your notes, concerns and questions.

 

Resources:

http://www.satisfice.com/sbtm/

http://www.satisfice.com/presentations/htmaht.pdf

https://www.gov.uk/service-manual/technology/exploratory-testing

What is the cost of a bug?

If you have ever wondered how much a bug costs and how to measure it, today I am going to show some research about this. You must be asking: why invest in testing if you can just fix your mistakes afterwards? What’s the true cost of a software bug? The cost depends on when the bug, or defect, is found during the SDLC (Software Development Life Cycle).

In 2016, software failures cost the worldwide economy $1.1 trillion. These failures were found at 363 companies, affected 4.4 billion customers, and caused more than 315 years of lost time. Most of these incidents were avoidable, but the software was simply pushed to production without proper testing.

 

 

It’s time to pay attention to how much software errors cost your company, and to start taking steps to recoup those costs.

To illustrate: if a bug is found in the requirements-gathering phase, the cost could be $100. If the product owner doesn’t find that bug until the QA testing phase, then the cost could be $1500. If it’s not found until production, the cost could be $10,000. And if the bug is never found, it could be secretly costing the company money. A 2003 study commissioned by the Department of Commerce’s National Institute of Standards and Technology found that software bugs cost the US economy $59.5 billion annually.

The cost of a bug goes up based on how far down the SDLC (Software Development Life Cycle) the bug is found

Then there’s the domino effect to think about. The software development approach often needs to change to accommodate the code fix, which can in turn bump back other code changes. So not only is the bug going to cost more to fix as it moves through a second round of the SDLC, but a different code change could be delayed, which adds cost to that code change as well.

 

Test early, test often. Prevention is better than the cure

To ensure that bugs are fixed at an earlier stage, take advantage of the following testing practices:

  • Get the team together to help identify issues during the design phase of software development.
  • Implement a code review stage.
  • Create an automated regression or smoke test pack and run it often.

 

In development, you often have less data, use one browser and use the software exactly as intended. Plus, you probably already have a debugger on the machine. The major problem with bugs in production is the absence of these supporting tools.

 

Better testing

The truth is that while automated testing is still under development you still need to do manual testing. A poor testing methodology costs a lot of money in wasted time and can result in a reduced return on investment.

For instance, if your testing process is too thorough, your developers will be producing new features faster than you can test them, and you end up with a backlog of features waiting to be deployed to production. The same can happen if you don’t have the correct ratio between how many people you have in your QA team and in your dev team.

Always add unit tests when software errors are found. It is a bit more painful when handling legacy code without good unit test coverage.

 

Prioritise your bug fixes

Prioritise software errors by measuring the impact they have on the end users. This will allow you to allocate your resources accordingly, saving valuable costs throughout the SDLC and reducing the cost of software errors.

Software errors expose your end users to slow, buggy software and compromise the security and safety of your products. Many businesses don’t have visibility of their software errors, so measuring them and their impact can be hard.

Software errors were responsible for the majority of software failures in government, entertainment, finance, retail, services and transportation in 2016, according to research conducted by tricentis.com:

 

What’s worse is that software errors have multiple consequences of varying impact, so you can’t always pinpoint the cause and effect. The effects ultimately trickle down to:

  • Developer costs spent on finding and fixing errors
  • Ongoing lost revenue from unsatisfied customers

Using a few industry averages, we can help you calculate the cost in lost development time for your company.

 

How to calculate the cost of developer labour caused by software errors

Taking the industry averages from 2018, we can estimate the financial costs to your company and investigate where the money is going.

  • The median developer wage in the UK in April 2018: £30,651

The errors in your applications need to be fixed or they will affect end users and cause lost revenue. This is where support costs start to mount. Fixing software errors is low-value, reactive work.

You should be aiming for around 20% reactive work (finding and fixing errors, support costs), 80% proactive work (building features and improving product) rather than vice versa. This is where you’ll add the true value to the business and your users according to Raygun.

Based on a 40-hour work week at the average wage of £30,651, the average software developer could spend 8 hours each week (32 hours each month, i.e. around 20% of their time) fixing errors and replicating issues, costing around £6,130.20 per year (20% of the annual wage).

This is time spent not building new and important features for your customers. So, think twice before building your QA team without bringing key people into the earlier stages of a new implementation. Share as early as possible, and gather opinions and different perspectives on the platform.

 

Resources:

https://www.payscale.com/

http://blog.celerity.com/the-true-cost-of-a-software-bug

https://crossbrowsertesting.com/blog/development/software-bug-cost/

https://raygun.com/blog/cost-of-software-errors/

https://www.synopsys.com/blogs/software-security/cost-to-fix-bugs-during-each-sdlc-phase/

Leading by example

Hello guys,

Today I am going to post something interesting that I have learned over time through my work experience. This time I’m not talking about technical skills; I’m talking about soft skills and how you can be a better person every day by learning from constructive criticism and ditching the destructive kind that doesn’t give you any benefit.

I am also sharing what some successful companies are already doing in Silicon Valley, which Jacqueline Yumi Asano explained in her article (it is in Portuguese, but you can check it below in the Resources section).

 

Inspire people

 

Say sorry, you are not a machine and you are going to fail at some point

If you want to be a trustworthy person at your workplace, you need to show that you are fair and that you recognise when you fail. Saying sorry will show that you are humble enough to earn anyone’s trust.

 

Strengthen the individual

Nothing is more empowering than supporting people to learn something and improve themselves. Everybody wins when this happens. Ask yourself whether 20% of your time is spent learning, or whether you just do the same task over and over again. Do you have challenges in your company? Are you growing in this company?

You might be asking how you can identify whether a company will improve you before even starting there. Check how they write their specs and roles. Are they asking for specific technologies or are they looking for generalist professionals? A good company will look for, say, a machine learning person rather than someone with one fixed technology in their skill set. That means the company will follow the latest technologies and you will be learning most of the time.

We don’t assign work to our developers. They sign up for work.
LiveRamp

 

Don’t underestimate the power of vision and direction

Do you have a clear vision of your goal? Is your company taking you towards this goal? This is extremely important for your future and you don’t have time to lose. So, double-check that your company helps you achieve this goal and that you have a clear vision and direction to follow.

If not, there is not much you can do apart from start looking for a place that you can clearly see leading you to your goals in, let’s say, a year’s time.

 

Get away from tyrant bosses, look for a strong bond of trust on both sides

If you have ever worked in a place where your manager uses their lead position to decide things rather than arguments, what are you even doing in a place like that? It is definitely a toxic environment, and with this lack of trust it is better to look for something new, because discussions and arguments would be pointless. Your opinion will mean nothing, even if you have arguments, analysis and your own career experience. Your manager will always ignore what you are saying.

Free choices matter and you should work in a place where your opinion is not invalidated.


Our product is great because we show why we are recommending a specific insight. This way we build trust.
François Lopitaux, Director of Product, Salesforce

 

Surround yourself with passionate, but not blind people

Passion is different from blindness. I just want to highlight that I have found many people who come across as passionate but in reality were completely blind and never accepted failures in their ideas/projects/implementations. This is quite common among developers; for that reason there is an assumption that a QA will always be the developer’s enemy, since it is part of our job to show the bugs in their implementations.

A smart person knows how beneficial it is to receive constructive criticism and is glad to have it. So, surround yourself with passionate and smart people. Be smart, take responsibility, accept failures and improve on them.

 

Get others’ opinions to empower people

I can only see advantages in doing this. Share knowledge as soon as possible with everybody in your team. Is this a new feature? Then get everybody together, present your idea and get others’ opinions. Don’t ignore them; every idea has value. You can find bugs in the early stages, and this also builds a feeling of ownership.

Does the company take your opinion into account? Do you feel that you have autonomy? There is nothing more empowering than showing that you care about everybody’s opinions. Everybody feels valued, and this will increase the trust in your work since they know they can be honest with you.

As PMs, we don’t need to tell UX designers what they need to look at.
Pinterest

 

Resources:

https://medium.com/mulheres-de-produto/o-que-eu-aprendi-no-vale-do-sil%C3%ADcio-sobre-product-innovation-7f3128f33e3

http://qablog.practitest.com/leading-by-example/

http://www.soulcraft.co/essays/lead_by_example.html

How to use mind maps to clarify your tests

To improve communication about a project you don’t need infinite docs and articles. For someone who is just starting, or to quickly understand the product, you need something smaller, prettier, and more focused on the audience. Mind maps are a lightweight form of testing documentation, and communicating effectively with the team is the key to a good quality implementation.

Revealing the Complexity of a Problem

Imagine that you have to test an app. You know that you need to test the functionality and check that the behaviour of the app is not clunky or unstable. You can have articles on Confluence explaining the behaviour of the app, or you can have a mind map, which is more focused and simple.

For example:


 

This mind map will help you remember all of the types of tests that you can perform on a mobile app.

The mind map communicates the logic of how our code would be written without the need to look at the code. It can cover all of your use cases and draw out connections in a way that would be difficult to do in a list.

When creating the mind map you can follow Heuristic Test Design, which is a model of tests with different patterns of quality criteria, techniques, elements and environment. It helps testers to remember and design different combinations while creating the test plan.

 

Using mind maps for regression tests

You can use mind maps for so many things, for example as a guide for your regression tests. I think it is far easier than reading a list of checklists. Also, it helps people who have just arrived at the company to understand the flow and the connections across the features. This guide helps you decide whether what’s happening is something you should expect. Not everybody agrees with having mind maps for regression tests, which I can understand, but you can decide this with your team.

Imagine that you have a checklist like this:

You need to follow this checklist to be sure your release is good to go, but imagine that you have a map to follow instead: wouldn’t it be clearer and easier to understand? You can find the mind map corresponding to this checklist here:


When a button changes, for example, the mind map should be the first thing to look at. You can check that nothing has changed below or above that node (feature). You look at the parent node to see what pressing the button did and make sure it still does that. You update the mind map with the new button shape so that future testers know how it works now.

Mind maps help us test not just the change at hand but the consistency of that change relative to the rest of the product, the product’s history, and the feature’s purpose.

You should share this process and ask the development team to input their thoughts; this will build trust in the regression pack. Also, as I always recommend, always share and review. You are not working alone, and it’s important to remember that we are not machines: we have blind spots, which can be addressed by the involvement of a properly engaged team.

Tools

You can use some of these free tools to create your mind map. I usually prefer the online ones, but feel free to choose the best one for you:

 

Resources:

https://dojo.ministryoftesting.com/lessons/mind-maps-made-easy

http://www.satisfice.com/tools/htsm.pdf