This past week, I had the incredible opportunity to lead two workshops as part of Tech Women Week: one virtual session streamed live on YouTube, and one in-person session hosted with Howard Kennedy LLP. Both focused on one of my favorite topics, building your product with AI, and honestly, I couldn't have asked for a better experience, even though I had only practiced twice.
Thank gosh I have been working with AI and products for a long time, so no need to practice at all. If you missed both, this is the recording:
I ran a quick poll at the beginning to check which tools people had been using and what stage of their journey they were at. It's really interesting how people still go to ChatGPT first, even to create the product…
42% were in the POC (Proof of Concept) stage,
38% were still shaping the idea,
17% were working on their MVP,
and just a small number were in validation or product-market fit.
AI tools are evolving fast: new ones appear every week, each promising to save time, boost productivity, or unlock creativity. My message during the workshop was simple: don't chase the trend. Start with your goal, then find the AI tool that fits that specific purpose.
Another important takeaway: AI is not flawless. You still need to review security and audit the code before you scale or publish anything involving sensitive data. Don't blindly follow AI-generated code.
Then the feedback was just amazing: I even had participants come up to me afterward saying, "This was the best workshop I've ever been to."
No surprise that I had really good feedback, but also things to improve, and it is ALWAYS about the time! I need to stop talking so much.
Some of my favourite pieces of feedback:
"I learned how straightforward it is to build. I was always stuck in ideation, but now I can move forward."
"It was very practical and actionable."
"Everything!"
Some people wanted a longer session or a few more technical examples, which, honestly, I love hearing because it means they were ready to go deeper. That tells me we're building the right kind of momentum.
Seeing women confidently using AI to build, ideate, and ship their first products was everything I hoped for ❤️
For those who took pictures with me afterwards, please send them over. I am not much of a photo taker, and the only picture I took was of the empty reception when I was leaving the building.
Startup life is leaving me less and less time for the blog and for sharing my learnings, so pardon me! I am going to put some updates from the past months here and share the side quests I have done lately.
I had ACL reconstruction surgery last week; my knee ligament went completely in one of the open mats before my first BJJ competition last year. Unfortunately, that means no gym for at least 3 months and being a bit dependent on my friend for a couple of days (which is a near-death experience for me), but it also means I have a really good reason not to leave my cave.
In more exciting news, I have published 2 e-books to help with the upcoming events! They are about using AI tools to build your product/MVP: VibeEbooks.co
The events will be during Women in Tech Week, but you can come along regardless of your gender:
Then, can’t remember exactly when was it, but I was also invited to join Blockdojo during the Angel Investment Awards 2025 and again met Emmie Faust from Female Founders Rise โฅ๏ธ
I promised myself this year I was not going to be a speaker in more than max 3 events, and I have been keeping up quiet good on this, so far I had only one in May and will have more 2 in October !
I have been trying to follow the same pattern as Steve Jobs (do at least 3 things per day that matter for your next goal; the rest is noise). I have been delegating everything that I can and that doesn't need my full attention (using ChatGPT Agent A LOT), which gives me time to finally play a bit of Hogwarts Legacy and CS.
Robin woke up as soon as I started playing CS, though.
Our first hackathon went far better than I was expecting. We had way more people signing up (over 100); some dropped off (as expected), and from the rest the founder was able to find not one but TWO teams he wanted to work with! Because of this, we are now opening a new service for founders who want to have their MVP built during the hackathon or find their development team as a result of it!
Last, but not least, despite the ups and downs, and having to ask people to reply to our survey…
We got an NPS score of 10/10 from the founders and 9/10 from the team in our last survey! That's all for now!
Again, I'm late posting about this, maybe by a month and a half?
The results of the SeedLegals Startup Awards were quite a surprise for me. I didn't go expecting much, but while The Chaincademy didn't take home a trophy this time, being named a Community Leader finalist and receiving an honourable mention is an achievement worth celebrating.
For a first awards appearance, it's a strong signal of what's to come ❤️ and we're just getting started! Although I can't disguise my disappointed face.
Eva Dobrzanska and the winners of the Community Leader award, Baltic Ventures!
And the winners of 2025 were:
Social Sensation: Wild
Community Leader: Baltic Ventures
Eco Innovator: UNDO
Customer Champion: Wild
Angel of the Year: Sutin Yang
Top Workplace: Zinc
Soonicorn (Soon-to-be Unicorn): Fin Sustainable Logistics
Inspiring Entrepreneur: Dr Tom Pey
Game Changer: Zetta Genomics
Fund of the Year: Haatch
Captivating Mission: WeWALK
Outstanding Achievement: Definely
Startup of the Year: WineFi
AI tools like Lovable.dev are changing app development, enabling rapid prototyping and giving everybody the power to create functional applications through natural language prompts.
These tools can write code 20x faster than a developer, but they also introduce unique challenges in testing, debugging, and maintaining the generated applications. When you add AI to the team, you need to stay vigilant.
Let's explore some common challenges and scenarios below, and how you can test for and identify them.
If you want to be able to use the code as a boilerplate and scale the product later, don't add 300 features before checking and testing it! AI creates hundreds of lines of code, making it harder and harder to review and maintain, so test and check the code as early as possible.
Also be aware that these tools will use whatever library they think is best, or one they have a partnership with (for example, Lovable.dev will push you to use Supabase), and some of these libraries/tools might not be the best or cheapest option for your product (check the subscription prices). They might also pull in deprecated libraries, creating conflicts with other dependencies as you scale and introducing new bugs.
If you just want to test the market with a prototype, and you are completely okay with having this MVP rewritten from scratch later, then there is no need to worry too much.
Common Challenges in Testing AI-Coded Apps
1. Code Quality and Optimisation
Scenario: An e-commerce startup uses Lovable.dev to build a shopping platform. The generated code includes a product listing feature but contains redundant database queries that degrade performance.
Generated Code Example:
// Generated by AI
let products = [];
for (let productId of productIds) {
let product = db.query(`SELECT * FROM products WHERE id = ${productId}`);
products.push(product);
}
Issue: The code queries the database inside a loop, resulting in multiple queries for a single operation.
If you only had a happy-path test scenario you wouldn't catch this one, so in this case you will need to actively check the database and its performance.
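A minimal sketch of one way to fix it, assuming a node-postgres style client (the pool setup and the table name are placeholders, not part of the generated app):
// Sketch of a fix: one parameterised query instead of one query per id
const { Pool } = require('pg');
const pool = new Pool(); // connection settings come from environment variables

async function getProducts(productIds) {
  if (productIds.length === 0) return [];
  // One round trip: ANY($1) takes the whole array as a single parameter
  const { rows } = await pool.query('SELECT * FROM products WHERE id = ANY($1)', [productIds]);
  return rows;
}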
2. Limited Customization and Flexibility
Scenario: A nonprofit organization creates an event management app. The app's AI-generated code fails to include the functionality to calculate the carbon footprint of events.
Generated Code Example:
// Generated by AI
events.forEach(event => {
console.log(`Event: ${event.name}`);
});
Issue: The AI didn't include a custom calculation for carbon emissions.
This is typical: sometimes the AI only codes the front end and some of the interactions between components, and hardcodes the data, but it is unable to create the backend or the logic behind it unless you explicitly ask for it and provide the formula. This can be caught in a simple happy-path test scenario with different inputs.
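As a sketch of what the missing piece might look like once you supply the formula yourself (the formula, emission factors, and event fields below are invented purely for illustration):
// Hypothetical carbon calculation the AI never generated (illustrative values only)
const KG_CO2_PER_ATTENDEE_KM = 0.12; // assumed average travel emission factor
const KG_CO2_PER_KWH = 0.233;        // assumed grid emission factor

function eventCarbonFootprint(event) {
  const travel = event.attendees * event.avgTravelKm * KG_CO2_PER_ATTENDEE_KM;
  const energy = event.venueKwh * KG_CO2_PER_KWH;
  return travel + energy; // kg CO2e
}

events.forEach(event => {
  console.log(`Event: ${event.name}, footprint: ${eventCarbonFootprint(event).toFixed(1)} kg CO2e`);
});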
3. Debugging Complexity
Scenario: A small business generates a CRM app with an AI tool. The notification system malfunctions, sending duplicate notifications.
Generated Code Example:
// Generated by AI
reminders.forEach(reminder => {
if (reminder.date === today) {
sendNotification(reminder.userId, reminder.message);
sendNotification(reminder.userId, reminder.message);
}
});
Issue: Duplicate notification logic due to repeated function calls.
Sometimes even the AI is able to pick this one up; you know when it suggests refactoring the code? This one would also be easy to catch in your happy-path scenario by checking that you received the notification only once.
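A quick sketch of a safer version, with a guard so the same reminder cannot be sent twice even if this code path runs again (it assumes each reminder has an id field, which is not shown in the generated snippet):
// Sketch: send each reminder at most once
const sentReminderIds = new Set();

reminders.forEach(reminder => {
  if (reminder.date === today && !sentReminderIds.has(reminder.id)) {
    sendNotification(reminder.userId, reminder.message);
    sentReminderIds.add(reminder.id); // remember what was already sent
  }
});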
4. Scalability Concerns
Scenario: A social media startup builds its platform. The AI-generated code fetches user data inefficiently during logins, causing delays as the user base grows.
Generated Code Example:
// Generated by AI
let userData = {};
userIds.forEach(userId => {
userData[userId] = db.query(`SELECT * FROM users WHERE id = ${userId}`);
});
Issue: The loop-based query structure slows down login times for large user bases.
This one tends to be identified late in the development cycle unless you are doing performance tests early on. You will probably only catch it once you have a large database of users; it is easy to fix, but it can be fixed before it becomes a headache.
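A rough sketch of an early performance probe you could run against the login path, so the slowdown shows up before real users do (the login function and seededTestUserIds are placeholders for whatever the generated app exposes):
// Crude early performance check for the login path (illustrative only)
async function timeLogins(userIds) {
  const start = Date.now();
  for (const id of userIds) {
    await login(id); // whatever the generated login flow calls
  }
  const elapsed = Date.now() - start;
  console.log(`${userIds.length} logins took ${elapsed} ms (${(elapsed / userIds.length).toFixed(1)} ms each)`);
}

// Run it against a few thousand seeded test users and watch how the average grows
timeLogins(seededTestUserIds);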
5. Security Vulnerabilities
AI coding is great when the stakes aren't too high
Scenario: A healthcare startup generates a patient portal app. The AI-generated code stores sensitive data without encryption.
Generated Code Example:
// Generated by AI
db.insert(`INSERT INTO patients (name, dob, medicalRecord) VALUES ('${name}', '${dob}', '${medicalRecord}')`);
Issue: Plain text storage of sensitive information.
Another typical one for AI-generated apps: they usually fall short on data security. Be extra cautious when checking data transactions and how the data is being managed and stored.
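A minimal sketch of the two fixes you would usually want here, parameterised queries plus encryption at rest, using Node's built-in crypto module and assuming a pg-style db.query(text, params) client; the key handling is deliberately simplified:
// Sketch: parameterised insert plus field-level encryption (simplified key handling)
const crypto = require('crypto');

const key = Buffer.from(process.env.RECORD_KEY_HEX, 'hex'); // 32-byte key, ideally from a secret manager

function encrypt(plaintext) {
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  const data = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), data]).toString('base64');
}

async function savePatient(db, name, dob, medicalRecord) {
  // Placeholders stop SQL injection; the medical record itself is stored encrypted
  await db.query(
    'INSERT INTO patients (name, dob, medicalRecord) VALUES ($1, $2, $3)',
    [name, dob, encrypt(medicalRecord)]
  );
}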
6. Over-Reliance on AI
Scenario: A freelance entrepreneur creates a budgeting app. When a bug arises in the expense tracker, the entrepreneur struggles to debug it due to limited coding knowledge.
Generated Code Example:
// Generated by AI
let expenses = [];
expenseItems.forEach(item => {
expenses.push(item.amount);
});
let total = expenses.reduce((sum, amount) => sum + amount, 0) * discount;
Issue: Misapplied logic causes an incorrect total calculation.
This is another one the AI can often catch while developing the app. Because AI mixes back-end and front-end code, it can be hard to debug even for an experienced developer, and for someone without coding skills the challenge is even bigger. AI can also help you find the error, and you will probably catch this one not only when deploying, but also when running your happy-path scenario.
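For reference, a corrected version, assuming the intent was to apply the discount to the sum rather than multiply the total by it (the variable names come from the snippet above):
// Sketch of the likely intended calculation: discount as a percentage off the sum
const subtotal = expenseItems.reduce((sum, item) => sum + item.amount, 0);
const total = subtotal * (1 - discount); // e.g. discount = 0.1 for 10% off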
Not all AI coding platforms create tests for their own code unless explicitly asked to. Lovable, for example, does not create any tests for its code. This is another thing to keep in mind when using these tools.
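Since these tools will not generate tests unless you ask, even a tiny hand-written check goes a long way. A sketch using Node's built-in test runner, wrapping the corrected calculation from above:
// budget.test.js - run with `node --test` (Node 18+), no extra dependencies
const test = require('node:test');
const assert = require('node:assert');

function totalWithDiscount(items, discount) {
  const subtotal = items.reduce((sum, item) => sum + item.amount, 0);
  return subtotal * (1 - discount);
}

test('applies a 50% discount to the summed expenses', () => {
  const items = [{ amount: 50 }, { amount: 150 }];
  assert.strictEqual(totalWithDiscount(items, 0.5), 100);
});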
Another point: AI is not great at keeping up to date with the latest technologies. Blockchains, for example: it still cannot do much there, but maybe it is just a matter of time? These technologies keep changing and evolving every second you breathe; AI cannot keep up yet, and neither can humans.
As you probably know, I am a big advocate of emerging tech and I love trying new things. I am also lazy, and I like to be as efficient and productive with my time as possible.
Maybe it is also because I saw my father's business collapse as he failed to keep up with new technologies. Both my mom and my dad were COBOL developers.
My mom later had to become a tech teacher, and my dad, after opening his software development and education business, had to close it. My parents were among the first people in my hometown, Santos, to own and use PCs.
Features: Advanced code generation, multi-language support, IDE integration
Best suited for: All developer levels
Limitations: Known to produce incorrect or insecure code in some scenarios; requires human oversight to validate outputs; subscription costs may be prohibitive for some users

Best suited for: Complex debugging scenarios and large-scale refactoring
Limitations: Requires expertise to effectively guide and validate AI suggestions, especially in backend development; continuous review is necessary to maintain code standards

Features: Multi-language support, AI chat function, code explanation
Best suited for: Individual developers
Limitations: Struggles with domain-specific requirements and complex workflows; the free tier lacks advanced features required by teams or enterprises
Features and Considerations
Lovable.dev
Offers quick prototyping and MVP validation capabilities
Integrates with Supabase for backend and database features
Provides easy publishing and sharing options ❤️
Replit
Includes AI-powered tools like Agent and Assistant
Offers a complete development environment with real-time collaboration
Suitable for educational purposes and quick experimentation
Bolt.new
Supports popular frameworks like Astro, Vite, Next.js, and more
Allows manual code editing after AI generation
Simplifies deployment with Netlify integration
AWS PartyRock
Designed for no-code AI app development
Leverages Amazon Bedrock for access to various foundation models
Cost-effective solution for small businesses to experiment with AI ❤️
GitHub Copilot
Deep integration with the GitHub ecosystem
Powered by advanced language models (GPT-4o and Claude 3.5 Sonnet)
Offers built-in security scanning and best practices recommendations ❤️
Qodo
Specializes in full-stack development support
Provides advanced context understanding across multiple files
Offers integrated testing and documentation generation
Codeium
Supports over 70 programming languages
Provides context-aware code suggestions
Offers a free tier with many excellent features for individual developers
A0.dev
Ideal for quickly generating React Native apps or UI components from basic descriptions
The Component Generator allows for the fast creation of individual UI components or screens.
The generated React Native projects can be extended and integrated with other tools, APIs, or libraries.
Cursor
Smaller coding tasks and projects
Teams seeking strong collaboration features
Rapid prototyping and initial code generation
Cline
Integration testing and system-level operations
Developers who need flexible context management and model switching
Front-end tasks and design challenges
Projects requiring runtime debugging and end-to-end testing capabilities
What do developers think about it?
Code Quality Concerns: The generated code does not always adhere to best practices, and may not be maintainable or scale well.
Example: Seasoned developers might spend more time refactoring than coding from scratch.
Lack of Customization: AI tools may not fully capture complex or unique requirements, requiring additional effort to adjust.
Over-Reliance Risks: Relying heavily on AI can create dependency issues, making developers less adept at solving problems manually.
Privacy and Intellectual Property: Concerns about the security of the code and data uploaded to these tools.
Overall, devs generally see AI coding tools as valuable for speeding up development and prototyping, which is also my opinion.
I have been exploring and using AI coding tools heavily and I can only recommend them: they save you from spending money and time building something you could easily test first, before scaling it into a product.
The devs also emphasise that these tools are best used as assistants rather than replacements, requiring careful oversight and customisation to ensure high-quality and maintainable code.
You've just refactored your Terraform module to add the auto-scaling magic. You merge. You deploy. You go to bed. The next morning? Production is literally on fire because your "tiny" change accidentally nuked the database.
How to stop "Oops" from becoming "OH NO"…
Test-Driven Chaos Prevention
Terraform tests (available in v1.6+) let you validate config changes before they touch your infrastructure. Think of them as your code's personal bouncer, checking IDs at the door.
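For example, a test file could look roughly like this. A minimal sketch only: the resource address and bucket name are assumptions chosen to match the joke below, using the .tftest.hcl syntax from v1.6+:
# tests/bucket.tftest.hcl - sketch of a Terraform test file
run "bucket_name_is_what_we_expect" {
  command = plan # simulate only, never touch real infrastructure

  assert {
    condition     = aws_s3_bucket.this.bucket == "my-glittery-unicorn-bucket"
    error_message = "If the bucket name isn't 'my-glittery-unicorn-bucket', error and abort."
  }
}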
Translation: "If the bucket name isn't 'my-glittery-unicorn-bucket', error and abort."
How Terraform Tests Save You
1. command = plan: Simulate changes without touching real infra. "What if…?" but for adults.
2. Assertions: Like a clingy ex, they'll text you 100x if something's wrong. Example:
assert {
  condition     = output.bucket_name == "test-bucket"
  error_message = "This is NOT the bucket you're looking for."
}
3. Variables & Overrides: Test edge cases without redeploying. Example: "What if someone sets bucket_prefix to something absurd?"
Some Tips!
Mock Providers (v1.7+): Fake it 'til you make it. Test AWS without paying AWS.
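Roughly what that can look like in a test file, as a sketch (the mocked resource and canned values here are made up):
# Sketch: mock the AWS provider so `terraform test` never creates real resources (v1.7+)
mock_provider "aws" {
  mock_resource "aws_s3_bucket" {
    defaults = {
      arn = "arn:aws:s3:::fake-bucket" # canned attribute values returned instead of real ones
    }
  }
}

run "uses_mocked_aws" {
  command = apply # safe here: the apply only ever hits the mocked provider
}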
Expect Failure: Want to validate that a config should break? Use expect_failures. Example:
run "expect_chaos" {
variables { input = 1 } # Odd number โ should fail validation
expect_failures = [var.input]
}
Translation: "If this doesn't fail, I've lost faith in humanity." (I have already, tbh)
Modules in Tests: Reuse setup/teardown logic like a lazy genius. Example: A "test" module that pre-creates a VPC so you can focus on actual work.
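A sketch of how that can be wired up (the ./tests/setup path and the vpc_id output are hypothetical):
# Sketch: a helper module shared by test runs
run "create_network" {
  module {
    source = "./tests/setup" # pre-creates a VPC for the following runs to use
  }
}

run "real_test" {
  command = plan

  variables {
    vpc_id = run.create_network.vpc_id # consume the setup run's output
  }
}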
This talk was especially important to me, even though it was the one I practiced the least. My mom was there to support me for the first time! After spending a month with me, she has already gone back to Brazil.
She also gave me feedback that I should look around the room more instead of focusing on just one side!
She probably didn't understand anything, but she was there.
But this time I missed all the other talks. I have been working on my startup, The Chaincademy, and most days I go to sleep around 3am; the day before my talk I went to sleep at 1am. Gladly, nobody noticed that I was a corpse mopping the floor that day.
Back to what matters! Unfortunately, I don't have much to share about the other talks this time. However, at the speakers' dinner, I had the pleasure of chatting with some amazing speakers: René Rohner (Robot Framework and Playwright), Mazin Inaad (food and rock bands), Jonathon Wright (the AI guy), Ana Duarte (why Porto is the best city in Portugal) and Gerard van Engelen (a variety of topics).
This time, I also decided to start a bit differently by being honest about my habit of talking fast at the beginning of sessions. I asked everyone to help pace me if I started speaking too quickly. Sorry in advance!
Everyone stayed engaged, even during a 1-hour-and-30-minute session. I felt the hands-on part was a bit rushed and could have been extended, so Iโll keep that in mind for next time.
Just sharing some additional content after my talk: I've updated the resources to include some Web3 hackathons and a Web3 Test Mindmap.
Here's the feedback from this session: apparently, I did well, but not quite well enough to win the award! Maybe that's why I left right after my talk.
It's okay, though: my mom got emotional and teared up when I started reading the positive feedback, so I'll count that as a win, even if it's a bit biased!
Apart from my talk, I also joined Señor Performo in his AutomationStar interview sessions!
Finally met Leandro Melendez.
I've known his work for ages, and I also use Grafana a lot at work these days. It was great to exchange tips on public speaking and chat about mutual friends. During the interview, I shared what we're doing at The Chaincademy, my journey in tech, and how I ended up where I am today.
As usual, the best part of my talk is testing whether people were really paying attention or whether ADHD got the best of the crowd. It's also my favorite part; I love a good competition!
And that's a wrap! See you at the next conference or meetup! I'm actually planning to host a webinar on my own soon, so hopefully you'll be able to join from anywhere in the world!
And this was me again spreading the word about Blockchain and Web3, this time at the Equal Experts Global Conference 2024. EE is a network of tech professionals that I couldn't be more proud to be part of ❤️. I am super picky when it comes to work, but this one is a keeper!
While I'm not one to praise companies excessively, I wouldn't hesitate to recommend Equal Experts as a great place to work and to do business with. Their integrity and values are rare to find nowadays.
In the talk, I covered the basics of Web3, including its key differences from blockchain. As you know, I've been discussing these topics for quite some time.
Would you like to review the slides? This is a shorter, abridged version of the in-depth presentation I'll be giving in October at the AutomationStar Conference. Think of it as a preview:
One of the questions I enjoyed receiving was about how blockchain technology, despite being around for a while, is often perceived as new. Blockchain is actually a combination of technologies that have existed for a long time, such as P2P networks and hashing. However, it wasn’t until these components were brought together that blockchain was truly created and its potential realized. Here are a few resources that explore the evolution and history of blockchain.
Additionally, I attended another talk before mine that focused on UX/UI and user personas. This is another crucial aspect of QA. Understanding the user is essential when designing test scenarios and improving overall quality, not just from a technical standpoint but also from the perspectives of usability and business.
In conclusion, I solicited feedback from the audience and received valuable insights that I'll incorporate into my upcoming talk at the AutomationStar Conference in Vienna this October. See you there!
Hello, hello! A bit late as usual, but I'm here to share my experience at the Eurostar Conference this year. My talk was scheduled for 15:15 on Thursday, June 13th. Despite my initial anxiety, I managed to not only deliver my talk but also had time to attend other sessions and join two tutorials. Apparently, joining two tutorials was against the rules (shh!)
Finding basis path: Ensure effective control flow testing by identifying the basis path.
Draw diagram flow: Create a detailed flowchart diagram to visualize the process.
Flipping decisions on baseline: Adjust decisions based on the established baseline to improve accuracy.
Flow chart: Use flowcharts to map out the process and identify key decision points.
Control flow testing: Test the control flow of the application to ensure all paths are exercised.
Code exercise: Focus on exercising the code you wrote, not the code that wasn’t written.
Business path analysis with JPath: Tools like JPath may not suffice for business path analysis; use domain analysis and equivalence class partitioning instead.
Pairwise workflow: Employ pairwise testing to handle millions of possible tests, as it's impossible to test everything (see the quick sketch after this list).
User behavior focus: Ask what the user does to the application, not what the application does to the user.
Vilfredo Pareto principle: Apply the Pareto principle, noting that 20% of transaction types happen 80% of the time, and start with transaction history analysis.
Pairwise tools: Use tools like Allpairs and PICT for pairwise testing; they are quite old school, though. There was no mention of AI tools to help create the data, which I found a bit odd.
Data variation: Ensure multiple variations of data and a reasonable amount of data for thorough testing.
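To make the "millions of possible tests" point concrete, here is a tiny sketch that counts exhaustive combinations for a handful of invented parameters; a pairwise tool like PICT only needs enough cases to cover every pair of values:
// How quickly exhaustive testing explodes (parameter values invented for illustration)
const parameters = {
  browser:  ['Chrome', 'Firefox', 'Safari', 'Edge'],
  os:       ['Windows', 'macOS', 'Linux', 'Android', 'iOS'],
  currency: ['GBP', 'EUR', 'USD', 'BRL'],
  payment:  ['card', 'PayPal', 'Apple Pay'],
  locale:   ['en', 'pt', 'de', 'fr', 'es', 'it'],
};

const exhaustive = Object.values(parameters).reduce((n, values) => n * values.length, 1);
console.log(`Exhaustive combinations: ${exhaustive}`); // 4 * 5 * 4 * 3 * 6 = 1,440 and counting
// A pairwise set covering every pair of values typically needs only a few dozen cases here.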
My favorite part was discussing the things we’ve heard throughout the years in the QA and testing industry. Some of them include:
Automate everything: Avoid unrealistic expectations like “automate everything” and ensure thorough testing to prevent missing bugs.
More test cases mean better testing: Quantity over quality in test cases can result in redundant tests that don’t effectively cover critical scenarios.
Just test it at the end: Believing that testing can be left until the final stages of development leads to overlooked defects and rushed fixes.
Quality is the tester’s job: Assuming that only testers are responsible for quality undermines the collective responsibility of the entire team.
We can catch all bugs with testing: Expecting testing to catch every possible defect overlooks the importance of good design and development practices.
This was the big one of the entire conference, largely due to the drama that unfolded at the end of the talk.
I missed how the title connected to the talk itself, and it was my fault for not reading the description and going just because of the title.
They compared the time it takes to build a car from ages ago to now (Ford versus Tesla) and showed that it only saved 3 minutes. I'm not sure if they did this on purpose just to prove their point, but the comparison missed the complexity and features that have been added to new cars, like the entire software and electrical systems behind a Tesla that didn't exist before. These aspects weren't considered in their comparison.
They also presented an interesting analysis of when AI will catch up with human intelligence, as well as the gap that AI is creating between junior and senior developers. Not many people talk about this, but indeed, AI is a tool that can help us while also potentially making us lazy, similar to how calculators did; we still need to learn the basics.
Active listening: It involves fully concentrating, understanding, responding, and remembering what’s being said.
Train yourself and learn: Continuously improving active listening skills through practice and feedback helps in understanding others better.
Circle of control: Focus on what you can control in conversationsโyour responses, understanding, and actions.
Feedback: Provide constructive feedback that helps the person improve without making them feel punished. Talk about the behaviour, not the identity; don't use BUT, use AND.
Keep questions simple: Use straightforward questions that facilitate understanding and encourage deeper thought.
Be present: Engage fully in the conversation, maintaining focus and showing genuine interest.
11k impressions: Recognize that perspectives can vary based on personal factors like fatigue and biases
Keep questions simple: Frame questions clearly to facilitate understanding and encourage exploration of solutions.
Acceptance: Mind the reality gap! Put the facts on the table. Easy? No. Necessary? Yes.
You have the questions, not necessarily the answers. Help them figure out how to find a solution.
What are your top three values? Rank them 1 to 10. This will help you and your mentee connect.
Here I am again, checking the feedback. As expected, the audience was quite different from the one I usually engage with. Since this conference is a bit more corporate, I didn't anticipate too much variation within the audience. I was also extra nervous for this one, so instead of 45 minutes, I sped up, went into the fast lane, and finished the talk in just 30 minutes. I just gave you all some extra time for coffee!
As always, I needed to gauge the Web3 knowledge level of the majority, and unsurprisingly, there is still a massive gap in education about what Web3 and Blockchain are. Thus, I spent a significant portion of my talk explaining these concepts.
The feedback is quite contradictory. Some people said it was hard to follow because no background was provided, while others mentioned they didn't know the talk would focus solely on Blockchain (which it did not).
So, if I give more background, people complain. If I reduce the background, people will still complain. My take is that it's really hard to please everyone; sometimes I can't even make my own dog happy!
I still try, though. So, thanks to those who gave constructive feedback ❤️!
I'll work on improving for the next one.
More random pictures with these great speakers whom I had the pleasure to meet, the cubic challenge, and also random exotic food talks on the boat party.
When it comes to load testing tools, there is a recent one called PFLB, and I received a comparison between it and the most popular one, JMeter. Each has its own strengths and weaknesses, making them suitable for different scenarios. Let's delve into a comparison of the two.
– Limited support – Scripting – Version control – CI/CD integration – Reusability
– GUI oriented – Scripts are possible, but too complex and poorly documented – Weak (Java) – Hard to maintain
Ramp-up flexibility
PFLB: User-friendly through the GUI
JMeter: Plugins available to configure a flexible load

Test result analysis
PFLB: Yes
JMeter: Yes

Resource consumption
PFLB: Optimizing resource usage involves properly configuring test scenarios and monitoring performance to adjust as needed
JMeter: Heavy when running tests with multiple users on a single machine; higher memory consumption

Easy to use with version control systems
PFLB: Yes
JMeter: No

Recording functionality
PFLB: Yes
JMeter: Yes

Distributed execution
PFLB: Yes
JMeter: Yes

Load test monitoring
PFLB: Reduces memory consumption through asynchronous logging, cloud-based infrastructure, and integration with specialized monitoring tools
JMeter: Ability to monitor a basic load
PFLB is most used when you need:
Scalability: PFLB tool offers cloud-based load testing, allowing users to scale tests to simulate millions of users without worrying about local resource limitations.
Integration: It integrates seamlessly with other monitoring and APM tools (e.g., New Relic, Dynatrace, Datadog), providing comprehensive performance insights and real-time analytics.
Ease of Use: PFLB is easy to use, with an intuitive interface and detailed reports, making it easy for teams to set up, run, and analyze load tests.
Enterprise-Level Support: PFLB provides robust support and customization options for enterprise clients, ensuring that specific performance testing needs and requirements are met effectively.
JMeter solves some specific problems:
Identifying Performance Bottlenecks: JMeter helps detect slow or underperforming parts of an application by simulating various load conditions and monitoring response times.
Scalability Testing: It evaluates how an application scales with increased load, ensuring that the system can handle expected traffic and identifying any points of failure.
Concurrent User Simulation: JMeter can simulate multiple users accessing the application simultaneously, allowing testers to observe how the application behaves under concurrent usage.
Regression Testing: It can automate performance tests as part of a continuous integration process, ensuring that new code changes do not degrade application performance.
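For the regression testing point above, the typical way to run JMeter headless in a pipeline looks something like this (the file names are placeholders):
# Non-GUI run: -n no GUI, -t test plan, -l results log, -e/-o generate the HTML report
jmeter -n -t loadtest.jmx -l results.jtl -e -o report/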
Thanks to Victoria from PFLB for sending me this comparison!