Talks about QA, Test Automation, Blockchain and Web3
Author: Rafaela Azevedo
Full Stack SDET with 16+ years of experience in QA, 14+ years in Test Automation and 8+ years in leadership, delivering and releasing software on different platforms (mobile, desktop, web).
Became a STEM Ambassador and STEM Women member in 2020, making an impact and bringing more people into STEM. Contributor to TestProject and instructor at Test Automation University.
This past week, I had the incredible opportunity to lead two workshops as part of Tech Women Week — one virtual session streamed live on YouTube, and one in-person session hosted with Howard Kennedy LLP. Both were focused on one of my favorite topics: building your product with AI — and honestly, I couldn’t have asked for a better experience, even though I had only practiced twice 😂
Thank goodness I have been working with AI and products for a long time, so no need to practice at all. If you missed both, here is the recording:
I ran a quick poll at the beginning to check what people have been using and what stage they were at in their journeys. Really interesting how people still go to ChatGPT first, even to create the product… 🧐
42% were in the POC (Proof of Concept) stage,
38% were still shaping the idea,
17% were working on their MVP,
and just a small number were in validation or product-market fit.
AI tools are evolving fast — new ones every week, each promising to save time, boost productivity, or unlock creativity. My message during the workshop was simple: 👉 Don’t chase the trend. Start with your goal, then find the AI tool that fits that specific purpose.
Another important takeaway: AI is not flawless, and you still need to review security and audit the code before you scale or publish sensitive data!! 🚨 Don’t blindly follow AI-generated code.
Then the feedback was just amazing: I even had participants come up to me afterward saying, “This was the best workshop I’ve ever been to.”
No surprise I had really good feedback, but also things to improve. Omg, it is ALWAYS about the time! I need to stop talking so much 😂
Some of my favourite pieces of feedback:
“I learned how straightforward it is to build — I was always stuck in ideation, but now I can move forward.”
“It was very practical and actionable.”
“Everything!”
Some people wanted a longer session or a few more technical examples — which, honestly, I love hearing because it means they were ready to go deeper. That tells me we’re building the right kind of momentum.
Seeing women confidently using AI to build, ideate, and ship their first products was everything I hoped for ♥️
For those who took pictures with me afterwards, please send them over. I am not a photo taker, and the only picture I took was of the empty reception when I was leaving the building 😂
Startup life is leaving me less and less time for the blog and for sharing my learnings, so pardon me! I'm going to put some updates from the past months here and share the side quests I've done lately.
I had ACL reconstruction surgery last week. My knee ligament was completely gone after one of the open mats before my first BJJ competition last year 😭 Unfortunately, it means no gym for at least 3 months and being a bit dependent on my friend for a couple of days (which is a near-death experience for me), but it also means I have a really good reason why I can’t get out of my cave 😂
In more exciting news, I’ve published 2 e-books to help with the upcoming events! They are about using AI tools for building your product/MVP: VibeEbooks.co
The events will be during the Women in Tech Week, but you can come along regardless of your gender:
Then, I can’t remember exactly when it was, but I was also invited to join Blockdojo during the Angel Investment Awards 2025, where I again met Emmie Faust from Female Founders Rise ♥️
I promised myself that this year I would not speak at more than 3 events, and I have been keeping to it quite well: so far I have had only one in May and will have 2 more in October!
I have been trying to follow the same pattern as Steve Jobs (do at least 3 things per day that are important for your next goal; the rest is noise). I have been delegating everything I can that doesn’t need my full attention (using ChatGPT Agent A LOT 🤖), which gives me time to finally play a bit of Hogwarts Legacy and CS 😂
Robin woke up as soon as I started playing CS, though.
Our first hackathon went far better than I was expecting. We had way more people signing up (100+), some dropped off (as expected), and from there the founder was able to find not one but TWO teams he wanted to work with! Because of this, we are now opening a new service for founders who want to have their MVP built during the hackathon or find their development team as a consequence of it!
Last but not least, despite the ups and downs, and chasing people to reply to our survey….
We got an NPS score of 10/10 from the founders and 9/10 from the team in our last survey! That’s all for now!!
Again, I'm late posting about this, maybe by a month and a half? 😂
The results of the SeedLegals Startup Awards were quite a surprise for me. I didn’t go expecting much, but while The Chaincademy didn’t take home a trophy this time, being named a Community Leader finalist and receiving an honourable mention is an achievement worth celebrating.
For a first awards appearance, it’s a strong signal of what’s to come ❤️ — and we’re just getting started !! Although I can’t disguise my disappointed face 😂
Eva Dobrzanska and the winners of the Community Leader award, Baltic Ventures!
And the winners of 2025 were:
🎥 Social Sensation: Wild
🤝 Community Leader: Baltic Ventures
🌱 Eco Innovator: UNDO
🏆 Customer Champion: Wild
💸 Angel of the Year: Sutin Yang
💼 Top Workplace: Zinc
🦄 Soonicorn (Soon-to-be Unicorn): Fin Sustainable Logistics
💡 Inspiring Entrepreneur: Dr Tom Pey
⚡ Game Changer: Zetta Genomics
📈 Fund of the Year: Haatch
🌍 Captivating Mission: WeWALK
🥇 Outstanding Achievement: Definely
🏅 Startup of the Year: WineFi
Last month, I had the pleasure of hosting a workshop, “How to Get Your First Job in Tech: A Step-by-Step Guide,” remotely via ClassView Desktop. We connected with three ClassView Immersive Rooms, reaching 60 students from:
North Kent College, Dartford Campus
North Kent College, Tonbridge Campus
USP College, Seevic Campus
For 75 minutes, we explored the roadmap to breaking into the tech industry and the different ways to find your path in tech, especially nowadays with AI changing the space.
What We Covered
Here’s a quick recap of what we dove into during the workshop:
Assessing Your Skills and Finding Your Path: We talked about understanding your current skills, identifying gaps, and choosing the right tech career pathway tailored to your strengths and interests.
Crafting Standout Portfolios: A big focus was on creating LinkedIn profiles, CVs, and GitHub portfolios that truly stand out to employers.
Supercharging Your Job Hunt: I shared practical tips on accelerating your job search—everything from networking hacks to nailing interviews.
Validating Career Goals in Real-Time: Students received actionable insights to align their career aspirations with their current skills and experiences.
AI Hackathon
One of my favorite parts of the session was seeing students quickly create some impressive portfolios in just four minutes with Lovable! Here are a few:
Huge congratulations to one of the students, who took home the top prize in our final quiz, earning a perfect score! Happy to see this is the moment when people are FULLY engaged, especially because there was a prize at the end! 👏
My Takeaway
Being able to guide these students on their tech career journeys is always a rewarding experience. I received loads of great feedback from the students, which is always comforting, as I know dealing with teenagers is a BIG challenge 😂
And some pictures !
I can’t help but grapple with deeper questions about the future of our industry:
How can we help as a society to bridge the skills gap that is being widened by the speed of AI development?
The advancement of AI has the potential to exacerbate disparities in skills and opportunities. Are we equipping the next generation with the tools they need in this shifting landscape? Education, mentorship, and accessibility must be at the forefront of our collective effort to ensure no one is left behind.
If juniors and interns are being replaced by AI, are we heading towards a society without seniors?
This is a sobering thought. Without entry-level opportunities, how will we cultivate the experts of tomorrow? Experience is built incrementally, and AI can’t replace the nuanced understanding that comes from hands-on learning. As an industry, we must find ways to preserve pathways for growth while leveraging the capabilities of AI.
AI tools like Lovable.dev are changing app development, enabling rapid prototyping and giving everybody the power to create functional applications through natural language prompts.
These tools can churn out code many times faster than a developer, but they also introduce unique challenges in testing, debugging, and maintaining the generated applications. When you add AI to the team, you need to be vigilant.
Let’s explore some common challenges and scenarios below, and how you can test for and identify them.
If you want to be able to use the code as a boilerplate and scale the product later, don’t add 300 features before checking and testing it! AI creates hundreds of lines of code, making it harder and harder to review and maintain, so test and check the code as early as possible.
Also be aware that they will use whatever library they think is best, or whichever one they have a partnership with (example: Lovable.dev will push you to use Supabase), and some of these libraries/tools might not be the best or cheapest option for your product (check subscription prices). These AI tools might also use deprecated libraries, creating conflicts with other dependencies as you scale and introducing more bugs.
If you just want to test the market and prototype, and you are completely okay with possibly having this MVP rewritten from scratch, then there is no need to worry too much.
Common Challenges in Testing AI Coded Apps
1. Code Quality and Optimisation
Scenario: An e-commerce startup uses Lovable.dev to build a shopping platform. The generated code includes a product listing feature but contains redundant database queries that degrade performance.
Generated Code Example:
// Generated by AI
let products = [];
for (let productId of productIds) {
  let product = db.query(`SELECT * FROM products WHERE id = ${productId}`);
  products.push(product);
}
Issue: The code queries the database inside a loop, resulting in multiple queries for a single operation.
If you only had a happy-path test scenario, you wouldn’t be able to catch this one, so in this case you will need to actively check the database and its performance.
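For reference, here is a minimal sketch of the batched version, assuming the same hypothetical db.query helper also accepts query parameters:
// Sketch only: one parameterised query instead of one query per product (avoids the N+1 pattern)
const placeholders = productIds.map(() => '?').join(', ');
const products = db.query(
  `SELECT * FROM products WHERE id IN (${placeholders})`,
  productIds
);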
2. Limited Customization and Flexibility
Scenario: A nonprofit organization creates an event management app. The app’s AI-generated code fails to include the functionality to calculate the carbon footprint of events.
Generated Code Example:
// Generated by AI
events.forEach(event => {
  console.log(`Event: ${event.name}`);
});
Issue: The AI didn’t include a custom calculation for carbon emissions.
This is typical: sometimes AI only codes the front-end and some of the interactions between components, and hardcodes the data, but it won’t create the backend or the underlying logic unless you explicitly ask for it and provide the formula. This can be caught in a simple happy-path test scenario with different inputs.
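As an illustration, here is a hedged sketch of the kind of logic you would have to ask for explicitly; the event fields and emission factors are made up for the example, not real data:
// Illustrative emission factors only (kg CO2e) - replace with the formula your domain experts provide
const KG_CO2E_PER_KM_TRAVELLED = 0.12;
const KG_CO2E_PER_ATTENDEE_CATERING = 3.5;

function estimateCarbonFootprint(event) {
  // The explicit calculation the AI never generated, because it was never asked for it
  const travel = event.attendees * event.avgTravelKm * KG_CO2E_PER_KM_TRAVELLED;
  const catering = event.attendees * KG_CO2E_PER_ATTENDEE_CATERING;
  return travel + catering;
}

events.forEach(event => {
  console.log(`Event: ${event.name}, estimated footprint: ${estimateCarbonFootprint(event).toFixed(1)} kg CO2e`);
});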
3. Debugging Complexity
Scenario: A small business generates a CRM app with an AI tool. The notification system malfunctions, sending duplicate notifications.
Generated Code Example:
// Generated by AI
reminders.forEach(reminder => {
  if (reminder.date === today) {
    sendNotification(reminder.userId, reminder.message);
    sendNotification(reminder.userId, reminder.message);
  }
});
Issue: Duplicate notification logic due to repeated function calls.
Sometimes even the AI is able to pick this one up (you know when it suggests refactoring the code?). This one would be easy to catch when running your happy-path scenario, by checking that you received the notification only once.
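Once spotted, the fix is straightforward; a small sketch with a guard so each reminder goes out only once:
const alreadySent = new Set();

reminders.forEach(reminder => {
  const key = `${reminder.userId}-${reminder.message}`;
  if (reminder.date === today && !alreadySent.has(key)) {
    sendNotification(reminder.userId, reminder.message); // single call, deduplicated by user + message
    alreadySent.add(key);
  }
});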
4. Scalability Concerns
Scenario: A social media startup builds its platform. The AI-generated code fetches user data inefficiently during logins, causing delays as the user base grows.
Generated Code Example:
// Generated by AI
let userData = {};
userIds.forEach(userId => {
  userData[userId] = db.query(`SELECT * FROM users WHERE id = ${userId}`);
});
Issue: The loop-based query structure slows down login times for large user bases.
This one tends to be identified late in the development cycle unless you are doing performance tests early on. You will probably only catch it once you have a large database of users. It’s easy to fix, and it can be fixed before it becomes a headache.
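If you want to catch it early, even a rough timing check around the login fetch helps; a sketch, again assuming the hypothetical db.query helper accepts parameters:
// Rough early-warning check (sketch): batch the fetch and flag it if it starts getting slow
const start = Date.now();

const placeholders = userIds.map(() => '?').join(', ');
const userRows = db.query(`SELECT * FROM users WHERE id IN (${placeholders})`, userIds);

const elapsedMs = Date.now() - start;
if (elapsedMs > 200) { // threshold is illustrative, tune it for your app
  console.warn(`Login user fetch took ${elapsedMs}ms for ${userIds.length} users - investigate before the user base grows`);
}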
5. Security Vulnerabilities
AI coding is great when the stakes aren’t too high
Scenario: A healthcare startup generates a patient portal app. The AI-generated code stores sensitive data without encryption.
Generated Code Example:
// Generated by AI
db.insert(`INSERT INTO patients (name, dob, medicalRecord) VALUES ('${name}', '${dob}', '${medicalRecord}')`);
Issue: Plain text storage of sensitive information.
Another typical one for AI-generated apps: they usually lack data security. Be extra cautious when checking data transactions and how the data is being managed and stored.
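As a sketch of what “extra cautious” can look like, assuming a Node.js backend and a database driver that supports parameterised inserts (both are assumptions, not what the tool generated):
const crypto = require('crypto');

// The key must come from a secrets manager or environment variable, never from the code
const key = Buffer.from(process.env.RECORD_ENCRYPTION_KEY, 'hex'); // 32 bytes for AES-256-GCM

function encryptRecord(plainText) {
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const encrypted = Buffer.concat([cipher.update(plainText, 'utf8'), cipher.final()]);
  // Store the IV and auth tag alongside the ciphertext so the record can be decrypted later
  return `${iv.toString('hex')}:${cipher.getAuthTag().toString('hex')}:${encrypted.toString('hex')}`;
}

// Parameterised insert: no string interpolation, and the medical record is encrypted at rest
db.insert(
  'INSERT INTO patients (name, dob, medicalRecord) VALUES (?, ?, ?)',
  [name, dob, encryptRecord(medicalRecord)]
);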
6. Over-Reliance on AI
Scenario: A freelance entrepreneur creates a budgeting app. When a bug arises in the expense tracker, the entrepreneur struggles to debug it due to limited coding knowledge.
Generated Code Example:
// Generated by AI
let expenses = [];
expenseItems.forEach(item => {
  expenses.push(item.amount);
});
let total = expenses.reduce((sum, amount) => sum + amount, 0) * discount;
Issue: Misapplied logic causes an incorrect total calculation.
This is another one that AI can catch while developing the app. Because AI mixes back-end and front-end code, it is sometimes hard to debug even for an experienced developer; for someone without coding skills, the challenge can be even more complex. AI can also help you find the error, and you will probably catch this one not only when deploying but also when running your happy-path scenario.
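For completeness, a sketch of the corrected logic, assuming the discount was only ever meant to apply to items flagged as discounted (that intent is an assumption for the example):
const total = expenseItems.reduce((sum, item) => {
  // Apply the discount per item, only where it belongs, instead of scaling the whole total
  const amount = item.discounted ? item.amount * discount : item.amount;
  return sum + amount;
}, 0);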
Not all AI coding platforms create tests for their own code unless explicitly asked. Lovable, for example, doesn’t create any tests for its code. This is another thing you need to keep in mind when using these tools.
Another point: AI is not really good at keeping up to date with the latest technologies. With blockchains, for example, it still isn’t possible to do much, but maybe it’s just a matter of time? These technologies keep changing and evolving every second you breathe; AI can’t keep up yet, and neither can humans 😂
As you probably know, I am a big advocate of emerging tech and I love to try new things. I am also lazy, and I like to be as efficient and productive with my time as possible 😂
Also, maybe it's because I saw my father's business collapse when he didn't keep up to date with new technologies. Both my mom and my dad were COBOL developers.
My mom later had to become a tech teacher, and my dad, after opening his software development and education business, had to close it. My parents were among the first people in my hometown, Santos, to own and use PCs.
Key features: Advanced code generation, multi-language support, IDE integration
Best suited for: All developer levels
Considerations: Known to produce incorrect or insecure code in some scenarios; requires human oversight to validate outputs; subscription costs may be prohibitive for some users.

Best suited for: Complex debugging scenarios and large-scale refactoring
Considerations: Requires expertise to effectively guide and validate AI suggestions, especially in backend development; continuous review is necessary to maintain code standards.

Key features: Multi-language support, AI chat function, code explanation
Best suited for: Individual developers
Considerations: Struggles with domain-specific requirements and complex workflows; the free tier lacks advanced features required by teams or enterprises.
Features and Considerations
Lovable.dev
Offers quick prototyping and MVP validation capabilities
Integrates with Supabase for backend and database features
Provides easy publishing and sharing options ❤️
Replit
Includes AI-powered tools like Agent and Assistant
Offers a complete development environment with real-time collaboration
Suitable for educational purposes and quick experimentation
Bolt.new
Supports popular frameworks like Astro, Vite, Next.js, and more
Allows manual code editing after AI generation
Simplifies deployment with Netlify integration
AWS PartyRock
Designed for no-code AI app development
Leverages Amazon Bedrock for access to various foundation models
Cost-effective solution for small businesses to experiment with AI ❤️
GitHub Copilot
Deep integration with the GitHub ecosystem
Powered by advanced language models (GPT-4o and Claude 3.5 Sonnet)
Offers built-in security scanning and best practices recommendations ❤️
Qodo
Specializes in full-stack development support
Provides advanced context understanding across multiple files
Offers integrated testing and documentation generation
Codeium
Supports over 70 programming languages
Provides context-aware code suggestions
Offers a free tier with many excellent features for individual developers
A0.dev
Ideal for quickly generating React Native apps or UI components from basic descriptions
The Component Generator allows for the fast creation of individual UI components or screens.
The generated React Native projects can be extended and integrated with other tools, APIs, or libraries.
Cursor
Smaller coding tasks and projects
Teams seeking strong collaboration features
Rapid prototyping and initial code generation
Cline
Integration testing and system-level operations
Developers who need flexible context management and model switching
Front-end tasks and design challenges
Projects requiring runtime debugging and end-to-end testing capabilities
What do developers think about it?
Code Quality Concerns: The generated code does not always adhere to best practices, and may not be maintainable or scale well.
Example: Seasoned developers might spend more time refactoring than coding from scratch.
Lack of Customization: AI tools may not fully capture complex or unique requirements, requiring additional effort to adjust.
Over-Reliance Risks: Relying heavily on AI can create dependency issues, making developers less adept at solving problems manually.
Privacy and Intellectual Property: Concerns about the security of the data uploaded to these tools.
Overall, devs generally see AI coding tools as valuable for speeding up development and prototyping, which is also my opinion.
I have been exploring and using AI coding tools heavily and I can only recommend them; they save you from spending money and time building something, because you can easily test it before you scale it into a product.
The devs also emphasise that these tools are best used as assistants rather than replacements, requiring careful oversight and customisation to ensure high-quality and maintainable code.
You’ve just refactored your Terraform module to add the auto-scaling magic. You merge. You deploy. You go to bed. The next morning? Production is literally on fire 🔥 because your “tiny” change accidentally nuked the database.
How to stop “Oops” from becoming “OH NO” …
Test-Driven Chaos Prevention 🧪
Terraform tests (available in v1.6+) let you validate config changes before they touch your infrastructure. Think of them as your code’s personal bouncer, checking IDs at the door.
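A minimal sketch of what such a test file can look like, assuming a module that exposes a bucket_name output (file name and values are illustrative):
# tests/bucket.tftest.hcl (sketch)
run "check_bucket_name" {
  command = plan

  assert {
    condition     = output.bucket_name == "my-glittery-unicorn-bucket"
    error_message = "Wrong bucket name - refusing to go anywhere near real infra."
  }
}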
Translation: “If the bucket name isn’t ‘my-glittery-unicorn-bucket,’ error and abort.”
How Terraform Tests Save You 🤗
1️⃣ command = plan: Simulate changes without touching real infra. “What if…?” but for adults.
2️⃣ Assertions: Like a clingy ex, they’ll text you 100x if something’s wrong. Example:
assert {
  condition     = output.bucket_name == "test-bucket"
  error_message = "This is NOT the bucket you’re looking for. 👋"
}
3️⃣ Variables & Overrides: Test edge cases without redeploying. Example: “What if someone sets bucket_prefix to 🔥?”
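A quick sketch of that edge case, assuming the module takes a bucket_prefix variable (names are illustrative again):
run "weird_prefix" {
  command = plan

  variables {
    bucket_prefix = "🔥"
  }

  assert {
    condition     = length(output.bucket_name) > 0
    error_message = "A weird prefix should not break bucket name generation."
  }
}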
Some Tips !
Mock Providers (v1.7+): Fake it ’til you make it. Test AWS without paying AWS 👍 (see the combined sketch after these tips)
Expect Failure: Want to validate that a config should break? Use expect_failures. Example:
run "expect_chaos" {
  variables { input = 1 } # Odd number → should fail validation
  expect_failures = [var.input]
}
Translation: “If this doesn’t fail, I’ve lost faith in humanity.” (I have already tbh)
Modules in Tests: Reuse setup/teardown logic like a lazy genius. Example: A “test” module that pre-creates a VPC so you can focus on actual work.
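Pulling the mock-provider and modules tips together, here is a hedged sketch; the provider, module path and vpc_id output are assumptions for illustration:
# Fake AWS so the test never creates (or bills for) real resources
mock_provider "aws" {}

# Reusable setup module that pre-creates the VPC the module under test needs
run "setup_network" {
  module {
    source = "./tests/setup-vpc"
  }
}

run "uses_prebuilt_vpc" {
  command = plan

  variables {
    vpc_id = run.setup_network.vpc_id
  }
}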
One of the key takeaways was the recognition that AI isn’t about replacing testers, but rather about amplifying their abilities. While one or two people were concerned about job security, the consensus was that upskilling is crucial.
That’s why I always recommend that people follow emerging technologies. My first interaction with AI was 7 years ago, when I posted about machine learning in 2018 and also about the AI chatbot project that I joined just after.
Focus, learn, practice and stay calm; you are not going to be replaced by AI, though maybe by people who use AI 🤷♀️
The future of testing lies in leveraging AI tools effectively, and those who adapt will thrive. The discussion highlighted core skills that will remain essential for long-term careers:
Clear Thinking: AI can analyse code, but human critical thinking and problem-solving are still key.
Passion for Quality: A genuine commitment to quality remains a uniquely human trait.
Adaptability: The tech landscape is constantly shifting. Embracing change and learning new technologies, like AI, is essential.
The meetup also talked about the limitations of current AI models. Bias in data sets, as highlighted by the Global Data Quality Report, remains a significant concern. We discussed how even sophisticated simulations, like a “simulated CEO,” struggle to replicate human personality and decision-making.
Testing AI: Challenges and Approaches
Testing AI itself has unique challenges, primarily due to the sheer volume of data involved. Some organisations are using automation with massive datasets, but careful scoping is essential. The human element remains crucial, especially at key decision points. It’s also important to remember that AI can still be “delusional” – producing unexpected or incorrect results.
Practical Advice and Considerations:
Some practical advice:
Don’t follow blindly: AI is powerful, but it’s not a silver bullet. Understand the value proposition before implementing it.
Be aware of the limitations: AI can slow you down and requires careful planning. Define clear objectives before you start.
Embrace thought leadership: Explore AI’s potential for strategic growth and innovation.
Research and be cautious: Don’t rely on a single model. Test with different datasets and diverse groups to ensure robustness.
Data and Privacy:
A crucial point raised was data privacy. Concerns were expressed about data being stored in the cloud without proper security measures. The importance of encryption and secure data handling was emphasised, with some companies exploring blockchain technology for data storage ❤️
The meetup reinforced what I have been saying: the future of testing lies in the synergy between human intelligence and AI tools. By effectively integrating human expertise with the capabilities of AI, we can achieve higher levels of quality and efficiency in software development. It’s about “mix brain and tool” – leveraging the best of both worlds.
This talk was especially important to me, even though it was the one I practiced the least. My mom was there for the first time to support me! After spending a month with me, she has already gone back to Brazil 🇧🇷 🙏
She also gave me feedback that I should look around the room more instead of focusing on just one side! 😄
She probably didn’t understand anything, but she was there 😊
But this time I missed all the other talks 😔 I have been working on my startup, The Chaincademy; most days I go to sleep around 3am, and the day before my talk I went to sleep at 1am 🙏 Thankfully, nobody noticed that I was a corpse mopping the floor that day.
Back to what matters! Unfortunately, I don’t have much to share about the other talks this time. However, at the speakers’ dinner, I had the pleasure of chatting with some amazing speakers: René Rohner (Robot Framework and Playwright), Mazin Inaad (food and rock bands), Jonathon Wright (the AI guy), Ana Duarte (why Porto is the best city in Portugal) and Gerard van Engelen (a variety of topics).
This time, I also decided to start a bit differently by being honest about my habit of talking fast at the beginning of sessions. I asked everyone to help pace me if I started speaking too quickly—sorry in advance! 😬
Everyone stayed engaged, even during a 1-hour-and-30-minute session. I felt the hands-on part was a bit rushed and could have been extended, so I’ll keep that in mind for next time.
Just sharing some additional content after my talk: I’ve updated the resources to include some Web3 hackathons and a Web3 Test Mindmap.
Here’s the feedback from this session—apparently, I did well, but not quite well enough to win the award! Maybe that’s why I left right after my talk 😂
It’s okay, though—my mom got emotional and teared up when I started reading the positive feedback, so I’ll count that as a win, even if it’s a bit biased!
Apart from my talk, I also joined Señor Performo in his AutomationStar interview sessions!
Finally met Leandro Melendez.
I’ve known his work for ages, and I also use Grafana a lot at work these days. It was great to exchange tips on public speaking and chat about mutual friends. During the interview, I shared what we’re doing at The Chaincademy, my journey in tech, and how I ended up where I am today.
As usual, the best part of my talk is testing whether people were really paying attention or if ADHD is getting the best of the crowd. It’s also my favorite part—I love a good competition! 🥋
And that’s a wrap! See you at the next conference or meetup! I’m actually planning to host a webinar on my own soon, so hopefully, you’ll be able to join from anywhere in the world!
And this was me again spreading the word about Blockchain and Web3, but this time at the Equal Experts Global Conference 2024. EE is a network of tech professionals that I couldn’t be more proud to be part of ❤️. I am super picky when it comes to work, but this one is a keeper!
While I’m not one to praise companies excessively, I wouldn’t hesitate to recommend Equal Experts as a great place to work and to do business with. Their integrity and values are rare to find nowadays 😆
In the talk, I covered the basics of Web3, including its key differences from blockchain. As you know, I’ve been discussing these topics for quite some time 😬
Would you like to review the slides? This is a shorter, abridged version of the in-depth presentation I’ll be giving in October at the AutomationStar Conference. Think of it as a preview:
One of the questions I enjoyed receiving was about how blockchain technology, despite being around for a while, is often perceived as new. Blockchain is actually a combination of technologies that have existed for a long time, such as P2P networks and hashing. However, it wasn’t until these components were brought together that blockchain was truly created and its potential realized. Here are a few resources that explore the evolution and history of blockchain.
Additionally, I attended another talk before mine that focused on UX/UI and user personas. This is another crucial aspect of QA. Understanding the user is essential when designing test scenarios and improving overall quality, not just from a technical standpoint but also from the perspectives of usability and business.
In conclusion, I solicited feedback from the audience and received valuable insights that I’ll incorporate into my upcoming talk at the AutomationStar Conference in Vienna this October. See you there 👋