Building Your Product with AI: Workshops for Tech Women Week

This past week, I had the incredible opportunity to lead two workshops as part of Tech Women Week: one virtual session streamed live on YouTube, and one in-person session hosted with Howard Kennedy LLP. Both were focused on one of my favorite topics, building your product with AI, and honestly, I couldn't have asked for a better experience even though I had only practiced twice 😂

Thank goodness I have been working with AI and products for a long time, so there was no need to practice at all. If you missed both, this is the recording:

At the beginning I ran a quick poll to check what tools people had been using and what stage they were at in their journeys. It's really interesting how people still go to ChatGPT first, even to create the product… 🧐

  • 42% were in the POC (Proof of Concept) stage,
  • 38% were still shaping the idea,
  • 17% were working on their MVP,
  • and just a small number were in validation or product-market fit.

AI tools are evolving fast, with new ones every week, each promising to save time, boost productivity, or unlock creativity. My message during the workshop was simple:
👉 Don't chase the trend. Start with your goal, then find the AI tool that fits that specific purpose.

Another important takeaway: AI is not flawless, and you still need to review security and audit the code before you scale and before you publish sensitive data!! 🚨 Don't follow AI-generated code blindly.

Then the feedback was just amazing: I even had participants come up to me afterward saying, "This was the best workshop I've ever been to."

No surprise, I got really good feedback, but also things to improve. It's ALWAYS about the time! I need to stop talking so much 😂

Some of my favourite pieces of feedback:

  • "I learned how straightforward it is to build. I was always stuck in ideation, but now I can move forward."
  • โ€œIt was very practical and actionable.โ€
  • โ€œEverything!โ€

Some people wanted a longer session or a few more technical examples, which, honestly, I love hearing because it means they were ready to go deeper. That tells me we're building the right kind of momentum.

Seeing women confidently using AI to build, ideate, and ship their first products was everything I hoped for ♥️

For those who took pictures with me afterwards, please send them over! I'm not much of a photo taker; the only picture I took was of the empty reception when I was leaving the building 😂

Dusting off the Blog

Hey hey 👋

Startup life is leaving me less and less time for the blog and for sharing my learnings, so pardon me! I'm going to put some updates from the past months here and share the side quests I've done lately.

I had ACL reconstruction surgery last week; my knee ligament was completely torn at one of the open mats before my first BJJ competition last year 😭 Unfortunately, that means no gym for at least 3 months and being a bit dependent on my friend for a couple of days (which is a near-death experience for me), but it also means I have a really good reason not to leave my cave 😂

In more exciting news, I've published 2 e-books to help with the upcoming events! They are about using AI tools for building your product/MVP: VibeEbooks.co

The events will be during Women in Tech Week, but you can come along regardless of your gender:

👉 Women in Tech Week 2025: How to Build Your First Product with AI (London 🇬🇧)

👉 Build your MVP with AI webinar for Women (Virtual 🌍)

Then, I can't remember exactly when it was, but I was also invited to join Blockdojo during the Angel Investment Awards 2025, where I again met Emmie Faust from Female Founders Rise ♥️

I promised myself that this year I would not speak at more than 3 events, and I have been keeping to that quite well: so far I've had only one in May and will have 2 more in October!

I have been trying to follow the same pattern as Steve Jobs (do at least 3 things per day that are important for your next goal; the rest is noise). I have been delegating everything I can that doesn't need my full attention (using ChatGPT Agent A LOT 🤖), which gives me time to finally play a bit of Hogwarts Legacy and CS 😂

Robin woke up as soon as I started playing CS tho

Our first Hackathon went far better than I was expecting. We had way more people signing up (100+), some people dropped off (as expected), and from those who remained the founder was able to find not one, but TWO teams he wanted to work with! Because of this, we are now opening a new service for founders who want to have their MVP built during a hackathon, or to find their development team as a consequence of it!

Last but not least, despite the ups and downs, and after asking people to reply to our survey…

We got an NPS score of 10/10 from the founders and 9/10 from the team in our last survey! That's all for now!!

SeedLegals Startup Awards Event

Again, I'm late posting about this, by maybe a month and a half? 😂

The results of the SeedLegals Startup Awards really surprised me. I didn't go expecting much, but while The Chaincademy didn't take home a trophy this time, being named a Community Leader finalist and receiving an honourable mention is an achievement worth celebrating.

For a first awards appearance, it's a strong signal of what's to come ❤️ and we're just getting started!! Although I can't disguise my disappointed face 😂

Eva Dobrzanska and the winners of the Community Leader award, Baltic Ventures!

And the winners of 2025 were:

🎥 Social Sensation: Wild
🤝 Community Leader: Baltic Ventures
🌱 Eco Innovator: UNDO
🏆 Customer Champion: Wild
💸 Angel of the Year: Sutin Yang
💼 Top Workplace: Zinc
🦄 Soonicorn (Soon-to-be Unicorn): Fin Sustainable Logistics
💡 Inspiring Entrepreneur: Dr Tom Pey
⚡ Game Changer: Zetta Genomics
📈 Fund of the Year: Haatch
🌍 Captivating Mission: WeWALK
🥇 Outstanding Achievement: Definely
🏅 Startup of the Year: WineFi

Read more about the winners here.

See you in my next delayed event blog post!

Testing AI Coded Apps – Challenges and Tips

AI tools like Lovable.dev are changing app development, enabling rapid prototyping and giving everybody the power to create functional applications through natural language prompts.

These tools can generate code far faster than a developer typing it by hand, but they also introduce unique challenges in testing, debugging, and maintaining the generated applications. When you add AI to the team, you need to be vigilant.

Let's explore some common challenges and scenarios below, and how you can test for and identify them.

If you want to be able to use the code as a boilerplate and scale the product afterwards, don't add 300 features before checking and testing it! AI creates hundreds of lines of code, making it harder and harder to review and maintain, so test and check the code as early as possible.

Also be aware that they will use whatever library they think is best, or one they have a partnership with (for example, Lovable.dev will push you to use Supabase), and some of these libraries/tools might not be the best or cheapest option for your product (check subscription prices). These AI tools might also use deprecated libraries, creating conflicts with other dependencies as you scale and introducing new bugs.

If you just want to test the market with a prototype and you are completely okay with this MVP possibly being rewritten from scratch later, then there's no need to worry too much.


Common Challenges in Testing AI Coded Apps

1. Code Quality and Optimisation

Scenario: An e-commerce startup uses Lovable.dev to build a shopping platform. The generated code includes a product listing feature but contains redundant database queries that degrade performance.

Generated Code Example:

// Generated by AI
let products = [];
for (let productId of productIds) {
    let product = db.query(`SELECT * FROM products WHERE id = ${productId}`);
    products.push(product);
}

Issue: The code queries the database inside a loop, resulting in multiple queries for a single operation.

If you only had a happy-path test scenario you wouldn't be able to catch this one, so in this case you will need to actively check the database and its performance.
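
For reference, here is a minimal sketch of one possible fix, assuming an asynchronous SQL client with parameterised queries (node-postgres style; the helper name is mine): fetch all products in a single round trip instead of querying inside the loop.

// Possible fix (illustrative sketch, not the tool's output)
async function getProducts(db, productIds) {
  if (productIds.length === 0) return [];
  // Build one parameterised IN (...) clause: $1, $2, $3, ...
  const placeholders = productIds.map((_, i) => `$${i + 1}`).join(', ');
  const result = await db.query(
    `SELECT * FROM products WHERE id IN (${placeholders})`,
    productIds
  );
  return result.rows;
}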

2. Limited Customization and Flexibility

Scenario: A nonprofit organization creates an event management app. The app's AI-generated code fails to include the functionality to calculate the carbon footprint of events.

Generated Code Example:

// Generated by AI
events.forEach(event => {
    console.log(`Event: ${event.name}`);
});

Issue: The AI didn't include a custom calculation for carbon emissions.

This is typical: sometimes AI only codes the front-end and some of the interactions between components, using hardcoded data, but it does not create the backend or the logic behind it unless you explicitly ask for it and send the formula. This can be caught in a simple happy-path test scenario with different inputs.
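
To make the gap concrete, here is a tiny illustrative sketch of the kind of logic you would have to request explicitly; the field names and emission factors below are hypothetical placeholders, not real data.

// Hypothetical example of backend logic the AI will not invent on its own
const EMISSION_FACTORS = { carKmPerAttendee: 0.12, venueKwh: 0.233 }; // placeholder kg CO2e values

function estimateEventFootprint(event) {
  const travel = event.attendees * event.avgTravelKm * EMISSION_FACTORS.carKmPerAttendee;
  const venue = event.durationHours * event.venueKwhPerHour * EMISSION_FACTORS.venueKwh;
  return travel + venue; // kg CO2e
}

events.forEach(event => {
  console.log(`Event: ${event.name} has an estimated footprint of ${estimateEventFootprint(event).toFixed(1)} kg CO2e`);
});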

3. Debugging Complexity

Scenario: A small business generates a CRM app with an AI tool. The notification system malfunctions, sending duplicate notifications.

Generated Code Example:

// Generated by AI
reminders.forEach(reminder => {
    if (reminder.date === today) {
        sendNotification(reminder.userId, reminder.message);
        sendNotification(reminder.userId, reminder.message);
    }
});

Issue: Duplicate notification logic due to repeated function calls.

Sometimes even the AI is able to pick this one up; you know when it suggests refactoring the code? This one would be easy to catch in your happy-path scenario, by checking that you received the notification only once.
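
A minimal sketch of one possible fix: send the notification once per reminder and keep a simple guard against accidental duplicates (the guard is my addition, not something the tool generated).

// Possible fix (illustrative sketch)
const alreadyNotified = new Set();

reminders.forEach(reminder => {
  const key = `${reminder.userId}:${reminder.date}:${reminder.message}`;
  if (reminder.date === today && !alreadyNotified.has(key)) {
    sendNotification(reminder.userId, reminder.message); // called exactly once per reminder
    alreadyNotified.add(key);
  }
});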

4. Scalability Concerns

Scenario: A social media startup builds its platform. The AI-generated code fetches user data inefficiently during logins, causing delays as the user base grows.

Generated Code Example:

// Generated by AI
let userData = {};
userIds.forEach(userId => {
    userData[userId] = db.query(`SELECT * FROM users WHERE id = ${userId}`);
});

Issue: The loop-based query structure slows down login times for large user bases.

This one could be identified later in the development cycle, unless you are doing performance tests early on. You will probably catch it only when you have a large database of users; it's easy to fix, but it can be fixed before it becomes a headache.
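
If you do want to test this early, a small load test is enough to surface it. Below is a minimal sketch using k6; the URL, payload and threshold are placeholders you would adapt to your own app.

// login-load-test.js - illustrative k6 sketch (placeholder URL and values)
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 50,                 // 50 virtual users logging in concurrently
  duration: '1m',
  thresholds: {
    http_req_duration: ['p(95)<500'], // fail the run if the 95th percentile is over 500 ms
  },
};

export default function () {
  const res = http.post(
    'https://example.com/api/login',
    JSON.stringify({ email: 'user@example.com', password: 'secret' }),
    { headers: { 'Content-Type': 'application/json' } }
  );
  check(res, { 'login succeeded': (r) => r.status === 200 });
  sleep(1);
}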

5. Security Vulnerabilities

AI coding is great when the stakes aren't too high

Scenario: A healthcare startup generates a patient portal app. The AI-generated code stores sensitive data without encryption.

Generated Code Example:

// Generated by AI
db.insert(`INSERT INTO patients (name, dob, medicalRecord) VALUES ('${name}', '${dob}', '${medicalRecord}')`);

Issue: Plain text storage of sensitive information.

Another typical one for AI-generated apps: they usually lack data security. Be extra cautious when checking data transactions and how the data is being managed and stored.
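
As a rough illustration of the direction to push the code in, here is a sketch that uses a parameterised query and encrypts the medical record before storing it. It assumes a node-postgres style client and Node's built-in crypto module; the key handling is simplified and would need proper secrets/KMS management in a real app.

// Illustrative hardening sketch - not production-ready key management
const crypto = require('crypto');

function encrypt(plainText, key) {               // key must be 32 bytes for AES-256-GCM
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const encrypted = Buffer.concat([cipher.update(plainText, 'utf8'), cipher.final()]);
  const tag = cipher.getAuthTag();
  return Buffer.concat([iv, tag, encrypted]).toString('base64');
}

async function savePatient(db, { name, dob, medicalRecord }, encryptionKey) {
  await db.query(
    'INSERT INTO patients (name, dob, medicalRecord) VALUES ($1, $2, $3)', // parameterised, no string interpolation
    [name, dob, encrypt(medicalRecord, encryptionKey)]
  );
}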

6. Over-Reliance on AI

Scenario: A freelance entrepreneur creates a budgeting app. When a bug arises in the expense tracker, the entrepreneur struggles to debug it due to limited coding knowledge.

Generated Code Example:

// Generated by AI
let expenses = [];
expenseItems.forEach(item => {
    expenses.push(item.amount);
});
let total = expenses.reduce((sum, amount) => sum + amount, 0) * discount;

Issue: Misapplied logic causes an incorrect total calculation.

This is another one that AI can catch while developing the app. Because AI mixes back-end and front-end code, it is sometimes hard to debug even when you are an experienced developer; for someone without coding skills, the challenge can be more complex. AI can also help you find the error, and you can probably catch this one not only when deploying, but also when running your happy-path scenario.
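
For completeness, one plausible correction, assuming the intent was to apply a fractional discount (for example 0.1 for 10%) to the subtotal rather than multiply the whole total by it:

// Possible correction (illustrative; assumes `discount` is a fraction like 0.1)
const subtotal = expenseItems.reduce((sum, item) => sum + item.amount, 0);
const total = subtotal * (1 - discount);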

Not all AI coding platforms create tests for their own code unless explicitly asked. Lovable, for example, doesn't create any tests for its code. This is another thing you need to keep in mind when using these tools.
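
As a small illustration of what you can explicitly ask for, here is a minimal Jest-style test for the expense total above (the helper function and values are mine, just to show the shape of such a test):

// expense-total.test.js - illustrative Jest sketch
function calculateTotal(expenseItems, discount) {
  const subtotal = expenseItems.reduce((sum, item) => sum + item.amount, 0);
  return subtotal * (1 - discount);
}

test('applies the discount to the subtotal', () => {
  const items = [{ amount: 100 }, { amount: 50 }];
  expect(calculateTotal(items, 0.1)).toBeCloseTo(135);
});

test('returns 0 for an empty expense list', () => {
  expect(calculateTotal([], 0.1)).toBe(0);
});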

Another point: AI is not really good at keeping up to date with the latest technologies. For example, with most blockchains it's still not possible to do much, but maybe it's just a matter of time? These technologies keep changing and evolving every second you breathe; AI can't keep up yet, and neither can humans 😂

Some tips to maintain AI Coded Apps

  • Conduct Comprehensive Frequent Code Reviews
  • Implement Testing Protocols
  • Train AI to use Code Best Practices
  • Plan for Scalability
  • Prioritise Security
  • Foster Developer Expertise

Comparison of AI Coding Tools

As you probably know, I am a big advocate of emerging tech and I love to try new things. I am also lazy, and I like to be as efficient and productive with my time as possible 😂

Maybe it's also because I saw my father's business collapse as he didn't keep up to date with new technologies. Both my mom and my dad were COBOL developers.

My mom later had to become a tech teacher, and my dad, after opening his software development and education business, had to close it. My parents were among the first people to own and use PCs in my hometown, Santos.

In 2018, I participated in a Machine Learning workshop where I created an iOS app that used AI to replace facial expressions with emojis. From that day until today I have worked on AI projects, and now more than ever this is part of my daily routine.

I have seen this kind of transition in tech happen twice before:

  • Once with my parents, ages ago, when their skills became obsolete and uncompetitive in the market
  • Again when test automation arrived and loads of testers stayed as manual QA

"AI is today's competitive advantage"

Lovable.dev
  • Code export: Yes | Ease of maintenance: High
  • Pricing: Free, $20 / month, $50 / month, $100 / month
  • Key features: Text-to-web app generation, Supabase integration, easy publishing
  • Best for: Quick prototyping, MVP validation
  • Disadvantages: Limited customisation after code generation; may struggle with complex, domain-specific applications

Replit
  • Code export: Yes | Ease of maintenance: Medium
  • Pricing: Free, $20 / month
  • Key features: AI-powered code completion, interactive AI chat, complete app generation
  • Best for: Beginners, educational purposes
  • Disadvantages: Produces generic solutions that require manual refinement; limited scalability for enterprise-grade projects

Bolt.new
  • Code export: Yes | Ease of maintenance: High
  • Pricing: Free, $20 / month, $50 / month, $100 / month, $200 / month
  • Key features: AI code generation, manual code editing, package support, deployment integration
  • Best for: MVP prototyping, experimentation
  • Disadvantages: Framework compatibility issues might arise for advanced projects; deployment integrations may not suit all use cases

AWS PartyRock
  • Code export: Limited | Ease of maintenance: Medium
  • Pricing: Free daily usage
  • Key features: No-code AI app building, integration with AWS services
  • Best for: Small businesses, non-technical users
  • Disadvantages: Limited code export capabilities restrict flexibility; heavy reliance on the AWS ecosystem makes it less suitable for multi-cloud strategies

GitHub Copilot
  • Code export: Yes | Ease of maintenance: High
  • Pricing: Free, $4 / user / month, $21 / user / month
  • Key features: Advanced code generation, multi-language support, IDE integration
  • Best for: All developer levels
  • Disadvantages: Known to produce incorrect or insecure code in some scenarios; requires human oversight to validate outputs; subscription costs may be prohibitive for some users

Qodo
  • Code export: Yes | Ease of maintenance: High
  • Pricing: Free, $15 / user / month, $45 / user / month
  • Key features: Full-stack development support, real-time collaboration
  • Best for: Advanced developers, complex projects
  • Disadvantages: Integration complexity with existing workflows; requires skilled personnel for effective use

A0.dev
  • Code export: Yes | Ease of maintenance: Medium
  • Pricing: Free, $20 / month
  • Key features: Mobile AI code generation
  • Best for: MVP prototyping, experimentation
  • Disadvantages: Can introduce code vulnerabilities and security issues; only for mobile apps

Cursor
  • Code export: Yes | Ease of maintenance: High
  • Pricing: Free, $20 / month, $40 / month
  • Key features: Code optimisation and refactoring suggestions
  • Best for: Developers looking for AI-assisted coding within a familiar VS Code-like environment
  • Disadvantages: Difficulty navigating intricate dependencies and architectures; potential security concerns due to uploading local code to cloud services

Cline
  • Code export: Yes | Ease of maintenance: High
  • Pricing: Free
  • Key features: Real-time development support within IDEs
  • Best for: Complex debugging scenarios and large-scale refactoring
  • Disadvantages: Requires expertise to effectively guide and validate AI suggestions, especially in backend development; continuous review is necessary to maintain code standards

Codeium
  • Code export: Yes | Ease of maintenance: High
  • Pricing: Free, $15 / month, $60 / month
  • Key features: Multi-language support, AI chat function, code explanation
  • Best for: Individual developers
  • Disadvantages: Struggles with domain-specific requirements and complex workflows; the free tier lacks advanced features required by teams or enterprises

Features and Considerations

Lovable.dev

  • Offers quick prototyping and MVP validation capabilities
  • Integrates with Supabase for backend and database features
  • Provides easy publishing and sharing options ❤️

Replit

  • Includes AI-powered tools like Agent and Assistant
  • Offers a complete development environment with real-time collaboration
  • Suitable for educational purposes and quick experimentation

Bolt.new

  • Supports popular frameworks like Astro, Vite, Next.js, and more
  • Allows manual code editing after AI generation
  • Simplifies deployment with Netlify integration

AWS PartyRock

  • Designed for no-code AI app development
  • Leverages Amazon Bedrock for access to various foundation models
  • Cost-effective solution for small businesses to experiment with AI ❤️

GitHub Copilot

  • Deep integration with the GitHub ecosystem
  • Powered by advanced language models (GPT-4o and Claude 3.5 Sonnet)
  • Offers built-in security scanning and best practices recommendations ❤️

Qodo

  • Specializes in full-stack development support
  • Provides advanced context understanding across multiple files
  • Offers integrated testing and documentation generation

Codeium

  • Supports over 70 programming languages
  • Provides context-aware code suggestions
  • Offers a free tier with many excellent features for individual developers

A0.dev

  • Ideal for quickly generating React Native apps or UI components from basic descriptions
  • The Component Generator allows for the fast creation of individual UI components or screens.
  • The generated React Native projects can be extended and integrated with other tools, APIs, or libraries.

Cursor

  • Smaller coding tasks and projects
  • Teams seeking strong collaboration features
  • Rapid prototyping and initial code generation

Cline

  • Integration testing and system-level operations
  • Developers who need flexible context management and model switching
  • Front-end tasks and design challenges
  • Projects requiring runtime debugging and end-to-end testing capabilities

What do developers think about it?

  1. Code Quality Concerns: The generated code does not always adhere to best practices, and it may not be maintainable or scale well.
    • Example: Seasoned developers might spend more time refactoring than coding from scratch.
  2. Lack of Customization: AI tools may not fully capture complex or unique requirements, requiring additional effort to adjust.
  3. Over-Reliance Risks: Relying heavily on AI can create dependency issues, making developers less adept at solving problems manually.
  4. Privacy and Intellectual Property: Concerns about the security of the code and data uploaded to these tools.

Overall, devs generally see AI coding tools as valuable for speeding up development and prototyping, which is also my opinion.

I have been exploring and using AI coding tools heavily and I can only recommend them: they stop you from spending money and time building something you can easily test first, before you scale it into a product.

The devs also emphasise that these tools are best used as assistants rather than replacements, requiring careful oversight and customisation to ensure high-quality and maintainable code.

How Terraform Tests Saved a Prod Deployment

Picture this: It's 1 AM. I am not even joking:

You've just refactored your Terraform module to add the auto-scaling magic. You merge. You deploy. You go to bed. The next morning? Production is literally on fire 🔥 because your "tiny" change accidentally nuked the database.

How to stop "Oops" from becoming "OH NO"…


Test-Driven Chaos Prevention 🧪

Terraform tests (available in v1.6+) let you validate config changes before they touch your infrastructure. Think of them as your code's personal bouncer, checking IDs at the door.

# valid_string_concat.tftest.hcl
run "did_i_break_everything" {
  command = plan
  assert {
    condition     = aws_s3_bucket.bucket.bucket == "my-glittery-unicorn-bucket"
    error_message = "Name mismatch! Abort mission! 🚨"
  }
}

Translation: "If the bucket name isn't 'my-glittery-unicorn-bucket', error and abort."


How Terraform Tests Save You ๐Ÿค—

1️⃣ command = plan: Simulate changes without touching real infra. "What if…?" but for adults.
2️⃣ Assertions: Like a clingy ex, they'll text you 100x if something's wrong. Example:

assert {
  condition     = output.bucket_name == "test-bucket"
  error_message = "This is NOT the bucket you're looking for. 👋"
}

3️⃣ Variables & Overrides: Test edge cases without redeploying. Example: "What if someone sets bucket_prefix to 🔥?"


Some Tips !

  • Mock Providers (v1.7+): Fake it 'til you make it. Test AWS without paying AWS 👍
  • Expect Failure: Want to validate that a config should break? Use expect_failures. Example:
run "expect_chaos" {
  variables { input = 1 } # Odd number → should fail validation
  expect_failures = [var.input]
}

Translation: "If this doesn't fail, I've lost faith in humanity." (I have already, tbh)

  • Modules in Tests: Reuse setup/teardown logic like a lazy genius. Example: a "test" module that pre-creates a VPC so you can focus on actual work.
module "consul" {
  source  = "hashicorp/consul/aws"
  version = "0.0.5"

  servers = 3
}

The Takeaway 🚀

Testing is like adding seat belts to your code: boring until you crash !

Use run blocks, assertions, and provider mocking to:

  • Avoid "Works on My Machine" syndrome
  • Sleep better (no 3 AM "WHY IS S3 DOWN")
  • Brag in PR reviews ("My tests caught 10 bugs. Your move, Karen.")

TL;DR: Write tests. Save your sanity.

Resources:
[1] https://www.paloaltonetworks.com/blog/prisma-cloud/hashicorp-terraform-cloud-run-tasks-integration
[2] https://developer.hashicorp.com/terraform/language/tests

AutomationStar Conference 2024

My best talk so far !

This talk was especially important to me, even though it was the one I practiced the least. My mom was there for the first time to support me! After spending a month with me, she has already gone back to Brazil 🇧🇷 🙏

She also gave me feedback that I should look around the room more instead of focusing on just one side! 😄

She probably didn't understand anything, but she was there 😊

But this time I missed all the other talks 😔 I have been working on my startup, The Chaincademy; most days I go to sleep around 3am, and the day before my talk I went to sleep at 1am 🙏 Thankfully, nobody noticed that I was a corpse mopping the floor that day.

Back to what matters! Unfortunately, I don't have much to share about the other talks this time.
However, at the speakers' dinner, I had the pleasure of chatting with some amazing speakers:
René Rohner (Robot Framework and Playwright), Mazin Inaad (food and rock bands), Jonathon Wright (the AI guy), Ana Duarte (why Porto is the best city in Portugal) and Gerard van Engelen (a variety of topics).

This time, I also decided to start a bit differently by being honest about my habit of talking fast at the beginning of sessions. I asked everyone to help pace me if I started speaking too quickly, so sorry in advance! 😬

Everyone stayed engaged, even during a 1-hour-and-30-minute session. I felt the hands-on part was a bit rushed and could have been extended, so Iโ€™ll keep that in mind for next time.

Just sharing some additional content after my talk: I've updated the resources to include some Web3 hackathons and a Web3 Test Mindmap.

Here's the feedback from this session: apparently, I did well, but not quite well enough to win the award! Maybe that's why I left right after my talk 😂

It's okay, though: my mom got emotional and teared up when I started reading the positive feedback, so I'll count that as a win, even if it's a bit biased!

Apart from my talk, I also joined Señor Performo in his AutomationStar interview sessions!

Finally met Leandro Melendez.

I've known his work for ages, and I also use Grafana a lot at work these days. It was great to exchange tips on public speaking and chat about mutual friends. During the interview, I shared what we're doing at The Chaincademy, my journey in tech, and how I ended up where I am today.


As usual, the best part of my talk is testing whether people were really paying attention or if ADHD is getting the best of the crowd. It's also my favorite part; I love a good competition! 🥋

And that's a wrap! See you at the next conference or meetup! I'm actually planning to host a webinar on my own soon, so hopefully you'll be able to join from anywhere in the world!

EE Global Conference 2024

And this was me again spreading the word about Blockchain and Web3, but this time at the Equal Experts Global Conference 2024. EE is a network of tech professionals that I couldn't be more proud to be part of ❤️. I am super picky when it comes to work, but this one is a keeper!

While I'm not one to praise companies excessively, I wouldn't hesitate to recommend Equal Experts as a great place to work and to do business with. Their integrity and values are rare to find nowadays 😆

In the talk, I covered the basics of Web3, including its key differences from blockchain. As you know, I've been discussing these topics for quite some time 😬

Would you like to review the slides? This is a shorter, abridged version of the in-depth presentation I’ll be giving in October at the AutomationStar Conference. Think of it as a preview:

One of the questions I enjoyed receiving was about how blockchain technology, despite being around for a while, is often perceived as new. Blockchain is actually a combination of technologies that have existed for a long time, such as P2P networks and hashing. However, it wasn’t until these components were brought together that blockchain was truly created and its potential realized. Here are a few resources that explore the evolution and history of blockchain.



Additionally, I attended another talk before mine that focused on UX/UI and user personas. This is another crucial aspect of QA. Understanding the user is essential when designing test scenarios and improving overall quality, not just from a technical standpoint but also from the perspectives of usability and business.

In conclusion, I solicited feedback from the audience and received valuable insights that I’ll incorporate into my upcoming talk at the AutomationStar Conference in Vienna this October.
See you there 👋

EuroSTAR Conference 2024 – Stockholm

Hello, hello! A bit late as usual, but I'm here to share my experience at the EuroSTAR Conference this year. My talk was scheduled for 15:15 on Thursday, June 13th. Despite my initial anxiety, I managed not only to deliver my talk but also to attend other sessions and join two tutorials. Apparently, joining two tutorials was against the rules (shh 🤫)

The key highlights

Kick Ass Testing Tutorial

  • Finding basis path: Ensure effective control flow testing by identifying the basis path.
  • Draw diagram flow: Create a detailed flowchart diagram to visualize the process.
  • Flipping decisions on baseline: Adjust decisions based on the established baseline to improve accuracy.
  • Flow chart: Use flowcharts to map out the process and identify key decision points.
  • Control flow testing: Test the control flow of the application to ensure all paths are exercised.
  • Code exercise: Focus on exercising the code you wrote, not the code that wasn’t written.
  • Business path analysis with JPath: Tools like JPath may not suffice for business path analysis; use domain analysis and equivalence class partitioning instead.
  • Pairwise workflow: Employ pairwise testing to handle millions of possible tests, as it’s impossible to test everything.
  • User behavior focus: Ask what the user does to the application, not what the application does to the user.
  • Vilfredo Pareto principle: Apply the Pareto principle, noting that 20% of transaction types happen 80% of the time, and start with transaction history analysis.
  • Pairwise tools: Use tools like Allpairs and PICT for pairwise testing; they are quite old school, though. There was no mention of AI tools to help create the test data, which I found a bit weird ?!?
  • Data variation: Ensure multiple variations of data and a reasonable amount of data for thorough testing.


See the PDF below:

What Are You Doing Here? Become an Awesome Leader in Testing

My favorite part was discussing the things we’ve heard throughout the years in the QA and testing industry. Some of them include:

  • Automate everything: Avoid unrealistic expectations like “automate everything” and ensure thorough testing to prevent missing bugs.
  • More test cases mean better testing: Quantity over quality in test cases can result in redundant tests that don’t effectively cover critical scenarios.
  • Just test it at the end: Believing that testing can be left until the final stages of development leads to overlooked defects and rushed fixes.
  • Quality is the tester’s job: Assuming that only testers are responsible for quality undermines the collective responsibility of the entire team.
  • We can catch all bugs with testing: Expecting testing to catch every possible defect overlooks the importance of good design and development practices.

Why AI is Killing – Not Improving – the Quality of Testing

This was the big one of the entire conference, largely due to the drama that unfolded at the end of the talk 🎭

I missed the point where the title resonated with the entire talk, and it was my fault for not reading the description and going just because of the title.

They compared the time it takes to build cars from ages ago to now (Ford and Tesla) and showed that it only saved 3 minutes. I’m not sure if they did this on purpose just to prove their point, but the comparison missed the complexity and features that have been added in the new cars, like the entire software and electric systems behind Tesla that didn’t exist before. These aspects weren’t considered in their comparison.

They also presented an interesting analysis of when AI will catch up with human intelligence, as well as the gap that AI is creating between junior and senior developers. Not many people talk about this, but indeed, AI is a tool that can help us while also potentially making us lazy, similar to how calculators did; we still need to learn the basics.

Basic Coaching Techniques for Emerging Quality Coaches

  • Active listening: It involves fully concentrating, understanding, responding, and remembering what’s being said.
  • Train yourself and learn: Continuously improving active listening skills through practice and feedback helps in understanding others better.
  • Circle of control: Focus on what you can control in conversationsโ€”your responses, understanding, and actions.
  • Feedback: Provide constructive feedback that helps the person improve without making them feel punished. Talk about the behaviour not the identity, don’t use BUT, use AND.
  • Keep questions simple: Use straightforward questions that facilitate understanding and encourage deeper thought.
  • Be present: Engage fully in the conversation, maintaining focus and showing genuine interest.
  • 11k impressions: Recognize that perspectives can vary based on personal factors like fatigue and biases
  • Keep questions simple: Frame questions clearly to facilitate understanding and encourage exploration of solutions.
  • Acceptance: reality gap! Put the facts on the table. Easy? No. Necessary? Yes.
  • You have the questions, but you don't necessarily know the answers. Help them figure out how to find a solution.
  • What are your three top values? Rank them 1 to 10. This will help you and your mentee connect.

QA Outsourcing: Triumphs, Trials, & Takeaways

Unfortunately, I couldn’t make this one as I was back to London, but I watched the video after and the main takeaways are:

  • Strategic move: Outsourcing QA can strategically optimize resources and expertise.
  • Drive success: Effective management of outsourced QA enhances product quality and market competitiveness.
  • Growth: Outsourcing allows scalability and focus on core business functions.
  • Competitive landscape: Leveraging external QA services brings agility and innovation to stay ahead in the market.

A Testerโ€™s Guide to Navigating the Wild West of Web3 Testing

Here I am again, checking the feedback. As expected, the audience was quite different from the one I usually engage with. Since this conference is a bit more corporate, I didn't anticipate too much variation in the audience. I was also extra nervous for this one, so instead of 45 minutes, I sped up and went into the fast lane, finishing the talk in just 30 minutes. I just gave you all some extra time for coffee! 😆

As always, I needed to gauge the Web3 knowledge level of the majority, and unsurprisingly, there is still a massive gap in education about what Web3 and Blockchain are. Thus, I spent a significant portion of my talk explaining these concepts.

The feedback is quite contradictory. Some people said it was hard to follow because no background was provided, while others mentioned they didn't know the talk would focus solely on Blockchain (which it did not). 🤷‍♀️

So, if I give more background, people complain. If I reduce the background, people will still complain. My take on that is it's really hard to please everyone; sometimes I can't even make my own dog happy! 😄

I still try, though. So, thanks to those who gave constructive feedback ❤️!

I'll work on improving for the next one 🚀

More random pictures with these great speakers whom I had the pleasure of meeting, the cubic challenge, and some random exotic food talks at the boat party.

Load Tests: JMeter vs PFLB

When it comes to load-testing tools, there is a recent one called PFLB, and I received a comparison between it and the most popular tool, JMeter. Each has its own strengths and weaknesses, making them suitable for different scenarios. Let's delve into a comparison of the two.

Support
  • PFLB: HTTP(S), SOAP, REST, FTP, LDAP, WebSockets, SMTP/POP3/IMAP, Citrix ICA
  • JMeter: HTTP, FTP, JDBC, SOAP, LDAP, TCP, JMS, SMTP, POP3, IMAP

Speed to write tests
  • PFLB: Fast
  • JMeter: Slow

Support for "test as code"
  • PFLB: Limited support (scripting, version control, CI/CD integration, reusability)
  • JMeter: GUI-oriented; scripts are possible but too complex and poorly documented, weak (Java) and hard to maintain

Ramp-up flexibility
  • PFLB: User-friendly configuration through the GUI
  • JMeter: Plugins available to configure a flexible load

Test result analysis
  • PFLB: Yes
  • JMeter: Yes

Resource consumption
  • PFLB: Optimising resource usage involves properly configuring test scenarios and monitoring performance to adjust as needed
  • JMeter: Heavy to run tests with multiple users on a single machine; higher memory consumption

Easy to use with version control systems
  • PFLB: Yes
  • JMeter: No

Recording functionality
  • PFLB: Yes
  • JMeter: Yes

Distributed execution
  • PFLB: Yes
  • JMeter: Yes

Load test monitoring
  • PFLB: Reduces memory consumption through asynchronous logging, cloud-based infrastructure, and integration with specialised monitoring tools
  • JMeter: Ability to monitor a basic load

PFLB is most used when you need: 

  • Scalability: PFLB offers cloud-based load testing, allowing users to scale tests to simulate millions of users without worrying about local resource limitations.
  • Integration: It integrates seamlessly with other monitoring and APM tools (e.g., New Relic, Dynatrace, Datadog), providing comprehensive performance insights and real-time analytics.
  • Ease of Use: PFLB is easy to use, with an intuitive interface and detailed reports, making it simple for teams to set up, run, and analyze load tests.
  • Enterprise-Level Support: PFLB provides robust support and customization options for enterprise clients, ensuring that specific performance testing needs and requirements are met effectively.

JMeter solves some specific problems:

  • Identifying Performance Bottlenecks: JMeter helps detect slow or underperforming parts of an application by simulating various load conditions and monitoring response times.
  • Scalability Testing: It evaluates how an application scales with increased load, ensuring that the system can handle expected traffic and identifying any points of failure.
  • Concurrent User Simulation: JMeter can simulate multiple users accessing the application simultaneously, allowing testers to observe how the application behaves under concurrent usage.
  • Regression Testing: It can automate performance tests as part of a continuous integration process, ensuring that new code changes do not degrade application performance.

Thanks to Victoria from PFLB for sending me this comparison!