Manual QA for startups that need real product judgment.
Mantis uses exploratory, human-led QA to uncover bugs, UX friction, edge cases, and broken flows that automated tests alone often miss — run by senior testers who actually use your product like a customer would.
Why thoughtful human QA still matters.
Automation is necessary. It is not sufficient. Real products break in places that scripts were never written to check — and that's exactly where startups lose users.
Manual QA at Mantis is not a checklist run by junior testers. It's senior, product-aware testing — engineers who read the product, follow the user's intent, and notice when something feels wrong even before it technically fails.
We use exploratory testing alongside automation. Each catches different bugs, and the most important ones almost always belong to the human.
Automation tests scripts, not judgment
Automated suites verify what you already know to check. They don't notice the moment a flow becomes confusing, untrustworthy, or quietly broken.
New features need a human first
Before a feature is stable enough to script, someone has to try it the way a real user would — including the paths the product manager didn't write down.
UX friction is a human discovery
Misleading copy, broken empty states, and trust-damaging recovery flows rarely throw errors. They're found by people, not assertions.
Startups need testers, not checkboxes
Cheap manual QA executes a list. Product-aware manual QA reads the product, asks why, and surfaces the issues that change a release decision.
What's actually in scope.
Five disciplines that compound. Each one is run by a senior engineer who has shipped real software — not a tester following someone else's checklist.
Exploratory testing
Senior testers learn the product, form hypotheses about where it's likely to break, and probe those areas with the intent of a real user. Issues are found through judgment, not coverage.
Edge-case discovery
Bad networks, slow devices, weird inputs, interrupted flows, mid-state navigation, repeat actions. The conditions real users hit that scripted tests almost never reproduce.
UX and trust-friction observations
Confusing copy, misleading empty states, silent failures, recovery flows that lose user data. Issues that don't throw errors but quietly cost you users — and the credibility you can't easily rebuild.
Regression validation
Manual sweeps of release-critical flows on real devices and browsers. Catches the regressions automated suites miss because the assertion drifted or the test was never written.
Release-readiness validation
Before each release, a focused pass on the riskiest changes with a clear ship / hold recommendation. Founders and engineering leads get a real signal, not a green checkmark.
Where manual QA actually moves the needle.
Manual QA isn't a service every team needs every week. These are the moments where having a senior human in the loop pays off most.
New feature releases
Features that haven't stabilized yet are too volatile to script. Exploratory manual QA validates real behavior before you commit to automation — or to shipping.
Fast-moving products
When you ship multiple times a week, automation can't keep up with the surface area. A human in the loop catches the regressions your suite hasn't grown to cover yet.
Onboarding, payment & trust flows
Sign-up, checkout, password reset, and account recovery don't tolerate silent failures. These are the flows where a bad UX moment becomes a churn moment.
Teams without QA discipline
If engineers are testing their own work between commits, real issues are slipping through. Manual QA brings a fresh perspective and the rigor an in-house function would bring.
A simple, sharp process.
No long onboarding decks. No process theatre. Senior engineers ramp into the product, focus on what matters, and tell you what they see.
Learn the product
We read the product, the docs, recent releases, and the support backlog. Before testing anything, we understand what's important and where the risk actually lives.
Test the flows that matter
Coverage focuses on release-critical paths and the areas with the highest user-impact risk. We don't burn hours on what doesn't ship.
Document meaningful issues
Bugs get severity, reproduction, expected behavior, and a clear note on the user impact. No noise, no "works on my machine" tickets, no padded counts.
Support release decisions
Before each release we surface the risks worth knowing about and give engineering leads a real ship-or-hold view — not a green dashboard.
The kinds of issues Mantis catches.
A representative sample of bugs found by Mantis manual QA across recent startup engagements. Anonymized, but every one is real — and every one was missed by the team's existing test coverage.
"Save changes" appears active after the form silently fails to submit.
Network call returned 500. UI showed no error. User believed their changes were saved. Discovered in an exploratory session, not by the form's unit tests.
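A minimal sketch of this bug class, assuming a web UI; the function and message names are illustrative, not the customer's actual code. The broken version updated the button state without ever inspecting the response status:

```typescript
type SaveResult = { ok: boolean; message: string };

// Map the HTTP status of the save call to what the user should see.
// The buggy version assumed success and never checked the status at all.
function saveOutcome(status: number): SaveResult {
  if (status >= 200 && status < 300) {
    return { ok: true, message: "Changes saved" };
  }
  // A failed call must surface as a failure, not a quietly active button.
  return { ok: false, message: "Save failed. Please try again." };
}
```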
Payment retry sends user back to an empty cart with no error.
After a declined card, the retry path lost cart state. No notification, no explanation. A clean re-add was the only way forward — and several users never made it.
Empty inbox renders the literal string "Error 0".
A valid empty state was being treated as an error by the UI layer. Not a server bug. Not a test failure. Just a user staring at a message that made them lose trust.
2FA SMS arrives after the in-app input has already timed out.
On slower carriers, the input expired before the code arrived. The retry button reset the timer but didn't resend the code. Repeated silent failures, no thrown errors.
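A hedged sketch of what the fixed retry has to do, assuming a resend API exists; every name here is a hypothetical stand-in:

```typescript
// The bug: tapping retry reset the visible countdown but never asked the
// backend for a new code, so every retry silently failed the same way.
// `sendCode`, `now`, and `ttlMs` are illustrative assumptions.
function retryOtp(
  sendCode: () => void, // hypothetical call that actually resends the SMS
  now: number,          // current time in ms
  ttlMs: number         // how long the new code stays valid
): { expiresAt: number } {
  sendCode();                        // resend first...
  return { expiresAt: now + ttlMs }; // ...then reset the timer to match
}
```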
Pasting a 10-digit phone number with spaces blocks the submit button.
Validation accepted the value visually but the model still saw the raw string. Button stayed disabled with no message. Real users paste from contacts apps all the time.
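One plausible shape of the fix, sketched with hypothetical helper names and an assumed 10-digit rule. The point is that the visible check and the model must validate the same normalized value:

```typescript
// Strip the separators contacts apps commonly paste in: spaces, dashes,
// dots, and parentheses.
function normalizePhone(raw: string): string {
  return raw.replace(/[\s\-().]/g, "");
}

// Validate the normalized value, so "415 555 0199" and "4155550199"
// are treated identically by both the UI and the model.
function isValidTenDigitPhone(raw: string): boolean {
  return /^\d{10}$/.test(normalizePhone(raw));
}
```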
Back gesture after deep link returns to a different account's data.
Auth context wasn't being refreshed on navigation. The bug only surfaced when arriving via push notification. No automated test had a reason to walk that exact path.
Customer names withheld under NDA. Bug IDs and copy lightly edited to protect product details. Severity and reproduction steps are tracked in full in our reporting.
Need QA that catches more than obvious bugs?
A 30-minute fit call is enough to know whether senior manual QA is the right next step for your product — and what coverage would actually look like.