AI Hackathon – ClassView Immersive

Last month, I had the pleasure of hosting a workshop, “How to Get Your First Job in Tech: A Step-by-Step Guide,” remotely via ClassView Desktop. We connected with three ClassView Immersive Rooms, reaching 60 students from:

  • North Kent College, Dartford Campus
  • North Kent College, Tonbridge Campus
  • USP College, Seevic Campus

For 75 minutes, we explored the roadmap to breaking into the tech industry and the different ways to find your path in tech, especially now that AI is changing the space.

What We Covered

Here’s a quick recap of what we dove into during the workshop:

  1. Assessing Your Skills and Finding Your Path
    We talked about understanding your current skills, identifying gaps, and choosing the right tech career pathway tailored to your strengths and interests.
  2. Crafting Standout Portfolios
    A big focus was on creating LinkedIn profiles, CVs, and GitHub portfolios that truly stand out to employers.
  3. Supercharging Your Job Hunt
    I shared practical tips on accelerating your job search—everything from networking hacks to nailing interviews.
  4. Validating Career Goals in Real-Time
    Students received actionable insights to align their career aspirations with their current skills and experiences.

AI Hackathon

One of my favorite parts of the session was seeing students create some impressive portfolios in just four minutes with Lovable! Here are a few:

Huge congratulations to the student who took home the top prize in our final quiz with a perfect score! It’s always the moment when everyone is FULLY engaged, especially because there was a prize at the end! 👏

My Takeaway

Being able to guide these students on their tech career journeys is always a rewarding experience. I received loads of great feedback from the students, which is always comforting, as I know dealing with teenagers is a BIG challenge 😂

And some pictures!


I can’t help but grapple with deeper questions about the future of our industry:

How can we, as a society, help bridge the skills gap that is being widened by the speed of AI development?

The advancement of AI has the potential to exacerbate disparities in skills and opportunities. Are we equipping the next generation with the tools they need in this shifting landscape? Education, mentorship, and accessibility must be at the forefront of our collective effort to ensure no one is left behind.

If juniors and interns are being replaced by AI, are we heading towards a society without seniors?

This is a sobering thought. Without entry-level opportunities, how will we cultivate the experts of tomorrow? Experience is built incrementally, and AI can’t replace the nuanced understanding that comes from hands-on learning. As an industry, we must find ways to preserve pathways for growth while leveraging the capabilities of AI.

Human-AI Collaboration: Ministry of Testing London Meetup Recap

Last week I attended a face-to-face Ministry of Testing meetup focused on, guess what? AI vs Testers: Friend or Foe? 🤖🧪

One of the key takeaways was the recognition that AI isn’t about replacing testers, but about augmenting their abilities. While one or two people were concerned about job security, the consensus was that upskilling is crucial.

That’s why I always recommend that people follow emerging technologies. My first interaction with AI was seven years ago: I posted about machine learning in 2018 and also about an AI chatbot project that I joined just after.

Focus, learn, practice, and stay calm. You are not going to be replaced by AI, but maybe by people who use AI 🤷‍♀️


The future of testing lies in leveraging AI tools effectively, and those who adapt will thrive. The discussion highlighted core skills that will remain essential for long-term careers:

  • Clear Thinking: AI can analyse code, but human critical thinking and problem-solving are still key.
  • Passion for Quality: A genuine commitment to quality remains a uniquely human trait.
  • Adaptability: The tech landscape is constantly shifting. Embracing change and learning new technologies, like AI, is essential.

The meetup also talked about the limitations of current AI models. Bias in data sets, as highlighted by the Global Data Quality Report, remains a significant concern. We discussed how even sophisticated simulations, like a “simulated CEO,” struggle to replicate human personality and decision-making.

Testing AI: Challenges and Approaches

Testing AI itself has unique challenges, primarily due to the sheer volume of data involved. Some organisations are using automation with massive datasets, but careful scoping is essential. The human element remains crucial, especially at key decision points. It’s also important to remember that AI can still be “delusional” – producing unexpected or incorrect results.

Practical Advice and Considerations:

Some practical advice:

  • Don’t follow blindly: AI is powerful, but it’s not a silver bullet. Understand the value proposition before implementing it.
  • Be aware of the limitations: AI can slow you down and requires careful planning. Define clear objectives before you start.
  • Embrace thought leadership: Explore AI’s potential for strategic growth and innovation.
  • Research and be cautious: Don’t rely on a single model. Test with different datasets and diverse groups to ensure robustness.

Data and Privacy:

A crucial point raised was data privacy. Concerns were expressed about data being stored in the cloud without proper security measures. The importance of encryption and secure data handling was emphasised, with some companies exploring blockchain technology for data storage ❤️

The meetup reinforced what I have been saying: the future of testing lies in the synergy between human intelligence and AI tools. By effectively integrating human expertise with the capabilities of AI, we can achieve higher levels of quality and efficiency in software development. It’s about “mixing brain and tool” – leveraging the best of both worlds.

EuroSTAR Conference 2024 – Stockholm

Hello, hello! A bit late as usual, but I’m here to share my experience at the EuroSTAR Conference this year. My talk was scheduled for 15:15 on Thursday, June 13th. Despite my initial anxiety, I managed not only to deliver my talk but also to attend other sessions and join two tutorials. Apparently, joining two tutorials was against the rules (shh 🤫)

The key highlights

Kick Ass Testing Tutorial

  • Finding basis path: Ensure effective control flow testing by identifying the basis path.
  • Draw diagram flow: Create a detailed flowchart diagram to visualize the process.
  • Flipping decisions on baseline: Adjust decisions based on the established baseline to improve accuracy.
  • Flow chart: Use flowcharts to map out the process and identify key decision points.
  • Control flow testing: Test the control flow of the application to ensure all paths are exercised.
  • Code exercise: Focus on exercising the code you wrote, not the code that wasn’t written.
  • Business path analysis with JPath: Tools like JPath may not suffice for business path analysis; use domain analysis and equivalence class partitioning instead.
  • Pairwise workflow: Employ pairwise testing to handle millions of possible tests, as it’s impossible to test everything.
  • User behavior focus: Ask what the user does to the application, not what the application does to the user.
  • Vilfredo Pareto principle: Apply the Pareto principle, noting that 20% of transaction types happen 80% of the time, and start with transaction history analysis.
  • Pairwise tools: Use tools like Allpairs and PICT for pairwise testing; they are quite old school, though. There was no mention of AI tools to help create the data, which I found a bit odd. A rough sketch of the pairwise idea follows this list.
  • Data variation: Ensure multiple variations of data and a reasonable amount of data for thorough testing.
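
To make the pairwise idea a bit more concrete, here is a rough sketch of a greedy pairwise generator in TypeScript. This is not how Allpairs or PICT work internally, just an illustration of the concept: instead of running every combination, keep only enough rows to cover every pair of parameter values at least once. The parameters and values below are made up.

```typescript
// Minimal greedy pairwise sketch: pick rows until every pair of values
// across two different parameters has been covered at least once.
type Params = Record<string, string[]>;

function pairwise(params: Params): Record<string, string>[] {
  const names = Object.keys(params);

  // Every (paramA=valueA, paramB=valueB) pair that still needs covering.
  const uncovered = new Set<string>();
  for (let i = 0; i < names.length; i++)
    for (let j = i + 1; j < names.length; j++)
      for (const a of params[names[i]])
        for (const b of params[names[j]])
          uncovered.add(`${names[i]}=${a}|${names[j]}=${b}`);

  // Enumerate the full cartesian product once (fine for small models).
  const allRows: Record<string, string>[] = [];
  const build = (idx: number, row: Record<string, string>) => {
    if (idx === names.length) { allRows.push({ ...row }); return; }
    for (const v of params[names[idx]]) { row[names[idx]] = v; build(idx + 1, row); }
  };
  build(0, {});

  const pairsOf = (row: Record<string, string>) => {
    const pairs: string[] = [];
    for (let i = 0; i < names.length; i++)
      for (let j = i + 1; j < names.length; j++)
        pairs.push(`${names[i]}=${row[names[i]]}|${names[j]}=${row[names[j]]}`);
    return pairs;
  };

  // Greedily keep the row that covers the most still-uncovered pairs.
  const chosen: Record<string, string>[] = [];
  while (uncovered.size > 0) {
    let best = allRows[0];
    let bestGain = -1;
    for (const row of allRows) {
      const gain = pairsOf(row).filter(p => uncovered.has(p)).length;
      if (gain > bestGain) { best = row; bestGain = gain; }
    }
    chosen.push(best);
    for (const p of pairsOf(best)) uncovered.delete(p);
  }
  return chosen;
}

// 3 browsers x 3 OSes x 2 user types = 18 exhaustive combinations; pairwise
// coverage needs only about half of that here, and the savings grow fast.
console.log(pairwise({
  browser: ["Chrome", "Firefox", "Safari"],
  os: ["Windows", "macOS", "Linux"],
  userType: ["Admin", "Regular"],
}));
```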


See the PDF below:

What Are You Doing Here? Become an Awesome Leader in Testing

My favorite part was discussing the things we’ve heard throughout the years in the QA and testing industry. Some of them include:

  • Automate everything: Avoid unrealistic expectations like “automate everything” and ensure thorough testing to prevent missing bugs.
  • More test cases mean better testing: Quantity over quality in test cases can result in redundant tests that don’t effectively cover critical scenarios.
  • Just test it at the end: Believing that testing can be left until the final stages of development leads to overlooked defects and rushed fixes.
  • Quality is the tester’s job: Assuming that only testers are responsible for quality undermines the collective responsibility of the entire team.
  • We can catch all bugs with testing: Expecting testing to catch every possible defect overlooks the importance of good design and development practices.

Why AI is Killing – Not Improving – the Quality of Testing

This was the big one of the entire conference, largely due to the drama that unfolded at the end of the talk 🎭

I missed how the title connected to the actual talk, and that was my fault for not reading the description and going just because of the title.

They compared the time it takes to build a car then and now (Ford vs Tesla) and showed that it only saved three minutes. I’m not sure if they did this on purpose just to prove their point, but the comparison missed the complexity and features that have been added to new cars, like the entire software and electric systems behind a Tesla that didn’t exist before. These aspects weren’t considered in their comparison.

They also presented an interesting analysis of when AI will catch up with human intelligence, as well as the gap AI is creating between junior and senior developers. Not many people talk about this, but AI is indeed a tool that can help us while also potentially making us lazy, similar to how calculators did; we still need to learn the basics.

Basic Coaching Techniques for Emerging Quality Coaches

  • Active listening: It involves fully concentrating, understanding, responding, and remembering what’s being said.
  • Train yourself and learn: Continuously improving active listening skills through practice and feedback helps in understanding others better.
  • Circle of control: Focus on what you can control in conversations—your responses, understanding, and actions.
  • Feedback: Provide constructive feedback that helps the person improve without making them feel punished. Talk about the behaviour, not the identity; don’t use BUT, use AND.
  • Keep questions simple: Use straightforward, clearly framed questions that facilitate understanding and encourage deeper thought and exploration of solutions.
  • Be present: Engage fully in the conversation, maintaining focus and showing genuine interest.
  • 11k impressions: Recognize that perspectives can vary based on personal factors like fatigue and biases.
  • Acceptance: Acknowledge the reality gap! Put the facts on the table. Easy? No. Necessary? Yes.
  • You have the questions, but you don’t necessarily know the answers. Help them figure out how to find a solution.
  • What are your top three values? Rank them from 1 to 10. This will help you and your mentee connect.

QA Outsourcing: Triumphs, Trials, & Takeaways

Unfortunately, I couldn’t make this one as I was already back in London, but I watched the video afterwards, and the main takeaways are:

  • Strategic move: Outsourcing QA can strategically optimize resources and expertise.
  • Drive success: Effective management of outsourced QA enhances product quality and market competitiveness.
  • Growth: Outsourcing allows scalability and focus on core business functions.
  • Competitive landscape: Leveraging external QA services brings agility and innovation to stay ahead in the market.

A Tester’s Guide to Navigating the Wild West of Web3 Testing

Here I am again, checking the feedback. As expected, the audience was quite different from the one I usually engage with. Since this conference is a bit more corporate, I didn’t anticipate too much variation in the audience. I was also extra nervous for this one, so instead of 45 minutes, I sped up and went into the fast lane, finishing the talk in just 30 minutes. I just gave you all some extra time for coffee! 😆

As always, I needed to gauge the Web3 knowledge level of the majority, and unsurprisingly, there is still a massive gap in education about what Web3 and Blockchain are. Thus, I spent a significant portion of my talk explaining these concepts.

The feedback was quite contradictory. Some people said it was hard to follow because no background was provided, while others mentioned they didn’t know the talk would focus solely on Blockchain (which it did not). 🤷‍♀️

So, if I give more background, people complain. If I reduce the background, people will still complain. My take on that is it’s really hard to please everyone; sometimes I can’t even make my own dog happy! 😄

I still try, though. So, thanks to those who gave constructive feedback ❤️!

I’ll work on improving for the next one 🚀

More random pictures with these great speakers whom I had the pleasure to meet, the cubic challenge, and some random exotic-food chats at the boat party.

Using AI to Accelerate Test Automation

Hello hello peeps 👋

I have been a bit of a workaholic lately, but all for a good cause 😊

Not sure if you know already, but I started working on a project, The Chaincademy, helping developers (SDETs, engineers, coders, programmers, test automation engineers…), especially the juniors who are coming into tech to find their first job 💻

We launched our MVP before Xmas, and we are testing it with our audience (junior developers). So, in case you want to accelerate your career (for now, only Web3) and get your first experience as a developer, sign up for our newsletter to get access 🎉

The first time I actually ventured into AI and machine learning was back in 2018, at a machine learning workshop. I had to create an iOS app where AI replaced my face with an emoji based on my expressions 😆 Really simple, but back then AI was not as good as it is right now (as we say in Brazil: “Na minha época isso aqui era tudo mato” – “Back in my day, this place was all woods”).

Since the launch of ChatGPT, I have been using AI on a daily basis to speed up my work, more Bard actually (I think it is much better than ChatGPT nowadays). So here are some tips on how I have been using it in test automation:

Test Automation

1. Test Case Generation:

  • Scenarios: Pass in user stories and acceptance criteria to generate corresponding test cases with detailed steps and expected outcomes.

    Prompt Example: Given a user tries to register with an invalid email address, describe the steps they would take and verify that the system displays an appropriate error message.
  • Edge Cases: Ask it to suggest potential edge cases or corner scenarios to test, ensuring comprehensive coverage of your application’s functionality.

    Prompt Example: For the checkout process, what happens if the user's internet connection drops while entering their payment information? List potential scenarios and expected outcomes.
  • Data-Driven Testing: Generate test data sets based on specific criteria (a sketch of how to feed these into a spec follows below).

    Prompt Example: Generate 10 test cases for the login feature, covering cases with valid and invalid username/password combinations and different user types (admin, regular user).
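
To show how the data-driven idea plugs into automation, here is a minimal sketch of a Cypress spec in TypeScript that loops over AI-generated login cases. The route, selectors, and test data are all hypothetical.

```typescript
// Hypothetical data-driven login spec: the "cases" array is exactly the kind
// of table you can ask the AI to generate, then feed straight into Cypress.
const cases = [
  { name: "valid admin", email: "admin@example.com", password: "Secr3t!", shouldLogin: true },
  { name: "wrong password", email: "user@example.com", password: "nope", shouldLogin: false },
  { name: "empty email", email: "", password: "Secr3t!", shouldLogin: false },
];

describe("login (data-driven)", () => {
  cases.forEach(({ name, email, password, shouldLogin }) => {
    it(`handles ${name}`, () => {
      cy.visit("/login"); // assumed route
      if (email) cy.get("[data-test=email]").type(email); // assumed selectors
      if (password) cy.get("[data-test=password]").type(password);
      cy.get("[data-test=login-button]").click();
      if (shouldLogin) {
        cy.contains("Welcome back").should("be.visible");
      } else {
        cy.get("[data-test=error]").should("be.visible");
      }
    });
  });
});
```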

2. Coding:

  • Test Script Automation: Describe the test actions and ask for the script (a sketch of the kind of output you might get follows this subsection):

    Prompt Example: I want to test clicking the 'Submit Order' button and verifying the order confirmation page appears. Write a Cypress with javascript script for this scenario.
  • Code Completion: Get help with test assertions, locator identification, and handling complex interactions.

    Prompt Example: In my Cypress test, I'm trying to assert that the element contains the text 'Welcome back'. Please suggest the next line of code with assertion syntax.
  • Refactoring: Analyze your existing test scripts and suggest improvements like removing redundancy, increasing reusability, or optimizing execution time.

    Prompt Example: Analyze my Pull request for the search functionality. Can you add comments and suggest ways to improve readability, reduce redundancy, and speed up execution?
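
For reference, the kind of script those prompts tend to give back looks roughly like this. It is only a sketch: the routes, selectors, and page copy are assumptions, not the real application.

```typescript
// Hypothetical Cypress spec for the 'Submit Order' and 'Welcome back' prompts above.
describe("checkout", () => {
  it("shows the order confirmation page after submitting an order", () => {
    cy.visit("/checkout"); // assumed route
    cy.get("[data-test=submit-order]").click(); // assumed selector
    cy.url().should("include", "/order-confirmation");
    cy.contains("Thank you for your order").should("be.visible"); // assumed copy
  });

  it("greets a returning user", () => {
    cy.visit("/dashboard"); // assumed route
    // The assertion suggested by the 'Welcome back' prompt example:
    cy.get("[data-test=greeting]").should("contain.text", "Welcome back");
  });
});
```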

3. Test Planning and Management:

  • Prioritization and Risk Assessment: Provide the test case details and application knowledge so the AI can help prioritize tests based on risk or impact.

    Prompt Example: Given these 20 test cases for the new feature, rank them based on potential impact, speed of delivery and risk of failure. Explain your reasoning for each.
  • Maintenance: Identify outdated or irrelevant test cases and suggest updates or new tests to maintain coverage.

    Prompt Example: The application updated its login page layout. Identify test cases needing modification and suggest relevant updates based on the new UI.

4. Environment Management:

  • Mocks: Describe the data needs for specific tests and generate mock data or API responses, reducing reliance on real environments and dependencies. Remember that you can also use contract tests (with Pact, for example), and these can be generated automatically from the code. A mocked example is sketched after this list.

    Prompt Example: Generate mock API responses for the payment gateway integration test, simulating successful and failed scenarios based on test case requirements.
  • Environment Configuration: Ask for suggested configurations for different test environments based on your application and testing requirements.

    Prompt Example: Suggest configurations for a staging environment replicating the production database but with limited user access. Include details for network settings and resource allocation.
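
And here is what the generated payment-gateway mocks could look like once wired into a test with cy.intercept. The endpoint, payloads, and page copy are made up for illustration.

```typescript
// Hypothetical mocked payment-gateway scenarios: one success, one failure.
describe("payment gateway integration (mocked)", () => {
  it("shows a success message when the payment is accepted", () => {
    cy.intercept("POST", "/api/payments", {
      statusCode: 200,
      body: { status: "succeeded", transactionId: "txn_123" }, // fake data
    }).as("payment");

    cy.visit("/checkout"); // assumed route
    cy.get("[data-test=pay-now]").click(); // assumed selector
    cy.wait("@payment");
    cy.contains("Payment successful").should("be.visible");
  });

  it("shows an error message when the payment is declined", () => {
    cy.intercept("POST", "/api/payments", {
      statusCode: 402,
      body: { status: "declined", reason: "insufficient_funds" }, // fake data
    }).as("payment");

    cy.visit("/checkout");
    cy.get("[data-test=pay-now]").click();
    cy.wait("@payment");
    cy.contains("Payment declined").should("be.visible");
  });
});
```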

Thanks to Abel from Graph Protocol 👏 for sending over these great resources, which I have been using to learn how to prompt better for software development:

What the QA Position Will Look Like in the Future – TAU Conference

Hey guys, on 16th March I talked a bit about what the QA position will look like in the future in one of the Lightning Talks at the Test Automation University Conference.

To watch the on-demand recording of the session, just check out the link: applitools.info/6ob

A recurring question is whether AI is going to steal our jobs. I just want to remind you all that we used to work on farms without any equipment; then machines came, and while some people kept doing manual work, others had to adapt to keep working on the farm. They had to learn to operate those machines.

What I mean is: it is a cycle. There is no need to panic! 😱 You will need to ADAPT and, more importantly, LEARN how you can improve your work using AI. There are many people sharing how you can do it, and I will be sharing soon as well; I just need to finish some research using ChatGPT haha

And yes, there are new jobs already being created, for example: https://www.linkedin.com/jobs/view/3503646880/

Not everybody knows, but on that day I had my myopia laser surgery booked at the last minute and was rushing to the hospital, so apologies for being a bit all over the place! I was panicking, but not because AI is going to steal our jobs 😂

ChatGPT-approved seal:

Using OpenAI to create content for social media

Hello Hello 👋👋👋

Today my post is a bit different from the others, as I was having some fun with AI last weekend. I realised I was spending too much time thinking about content, adding fun emojis, and adding hashtags for my social media posts.

So I spent last Saturday evening creating this tool with OpenAI 🤖 It is quite basic for now, but I am planning to ship small increments as we go.

It's been a fun couple of hours doing research, coding, and testing, but it's here!  This AI tool can increase efficiency and optimize the content creation. 💪    

This paragraph was generated with the tool

If you are curious about the code and you want to run locally, this is the repo:

https://github.com/rafaelaazevedo/rediskets

You will need to create a .env file and add your OpenAI API Key, which is generated on their website after you create your account there (It is FREE).
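
Under the hood, the core of the tool boils down to something like the sketch below. This is a simplified illustration using the current OpenAI Node SDK and a placeholder model name, not the exact code from the repo.

```typescript
import "dotenv/config"; // loads OPENAI_API_KEY from the .env file
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Turn a short description (plus an optional personality) into a social post.
async function generatePost(topic: string, personality = "friendly") {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder; any chat model works
    messages: [
      {
        role: "system",
        content: `You write short ${personality} social media posts with emojis and hashtags.`,
      },
      { role: "user", content: topic },
    ],
  });
  const post = completion.choices[0].message.content;

  // Ask for a related picture too (the part that gave me faceless dogs 😂).
  const image = await client.images.generate({ prompt: topic, n: 1, size: "512x512" });

  return { post, imageUrl: image.data?.[0]?.url };
}

generatePost("Lessons learned from testing a Web3 app").then(console.log);
```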

If you just want to try it out, open the link and add some text explaining what your content should be about; maybe even add some personality (friendly, assertive, etc.) to help the AI get closer to what you want. It will generate a post with emojis, hashtags, and a picture related to what you wrote. The picture is definitely not the brightest feature, as you saw above, but maybe you will have some fun like me and generate a picture of dogs without faces 😂

PS: you might get some 502 errors (they are random; unfortunately, it is because I am using a free trial and the OpenAI API sometimes times out) 😔

https://rediskets.netlify.app/

This really represents me after I deployed my code!

Benefits of AI in Test Automation

Photo by Kindel Media on Pexels.com

There are several interesting web app automation scenarios that we can improve using AI:

  • Reduce execution time: Nowadays you can already target only the affected features even without AI in your test automation project, but with AI you can add this capability without having Cucumber in place, or even needing to tag scenarios or features; the AI should be able to identify the features related to a change automatically.
  • Convert manual test cases to automation: you can use Natural Language Processing (NLP) to automatically translate manual test cases into automated ones. I have only seen this done with Cucumber, not AI, so far, but it is totally possible, as AI models work on datasets.
  • Create different data combinations: training the AI to identify the possible combinations based on a dataset would increase data coverage and bring more confidence to the automation project.
  • Visual validations: Many tools offer this functionality already. I personally tried one tool ages ago called Percy, but you can also try other popular tools like Applitools and Telerik.
  • Test execution stability, or self-healing automation: AI can automatically locate web elements when the primary locators fail. You can see this feature in cutting-edge automation tools like Mabl, Xray, and Functionize. Self-healing employs data analytics to identify objects in a script even after they have changed. When your script fails because it cannot find the object it expected, the self-healing mechanism provides a fuller understanding and analysis of the options: rather than shutting down the process, it examines objects holistically, evaluates the attributes and properties of all available objects, and uses a weighted scoring system to select the one most similar to the one previously used. A rough sketch of that scoring idea follows this list.
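
To give a feel for that weighted-scoring idea, here is my own simplified sketch, not how Mabl, Xray, or Functionize actually implement it: when the original locator fails, score every candidate element on the page by how many of the previously recorded properties still match, and pick the best one.

```typescript
// Simplified self-healing sketch: score candidates against the properties
// recorded for the element the script located successfully last time.
interface ElementSnapshot {
  tag: string;
  id?: string;
  text?: string;
  classes: string[];
  attributes: Record<string, string>; // e.g. { type: "submit", name: "order" }
}

// Weights reflect how stable each property usually is (an assumption).
const WEIGHTS = { id: 5, text: 3, tag: 2, class: 1, attribute: 1 };

function score(expected: ElementSnapshot, candidate: ElementSnapshot): number {
  let s = 0;
  if (expected.id && expected.id === candidate.id) s += WEIGHTS.id;
  if (expected.text && expected.text === candidate.text) s += WEIGHTS.text;
  if (expected.tag === candidate.tag) s += WEIGHTS.tag;
  s += expected.classes.filter(c => candidate.classes.includes(c)).length * WEIGHTS.class;
  for (const [key, value] of Object.entries(expected.attributes)) {
    if (candidate.attributes[key] === value) s += WEIGHTS.attribute;
  }
  return s;
}

// Return the most similar element, or undefined if nothing is close enough.
function heal(expected: ElementSnapshot, candidates: ElementSnapshot[], minScore = 4) {
  const ranked = candidates
    .map(c => ({ c, s: score(expected, c) }))
    .sort((a, b) => b.s - a.s);
  return ranked.length > 0 && ranked[0].s >= minScore ? ranked[0].c : undefined;
}

// Example: the button lost its id but kept its text, classes, and type.
const recorded: ElementSnapshot = {
  tag: "button", id: "submit-btn", text: "Submit Order",
  classes: ["btn", "primary"], attributes: { type: "submit" },
};
const onPageNow: ElementSnapshot[] = [
  { tag: "button", text: "Submit Order", classes: ["btn", "primary"], attributes: { type: "submit" } },
  { tag: "a", text: "Cancel", classes: ["btn"], attributes: {} },
];
console.log(heal(recorded, onPageNow)); // picks the first candidate
```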

Becoming a Domain Model Expert

Creating a model for your test automation requires a domain expert; therefore, it is critical to have a test automation specialist who also knows the business, so the AI can bring the desired innovation. With such extensive use cases, AI systems will need different parameters from domain experts.

Image: Machine Learning Algorithms in Layman’s Terms, Part 1, by Audrey Lorberfeld (Towards Data Science)

Be careful not to run more automated tests than you actually need. A stage of supervision while the AI is learning the patterns is definitely needed.

Resources: