"Did you test it?"
That's the question everyone asks after something breaks. But testing isn't a yes/no checkbox — it's a spectrum. Understanding it helps you ask better questions and make better trade-offs.
Why Testing Matters
The later a bug is found, the more it costs to fix. A common rule of thumb:
- Bug caught during development: $1
- Bug caught during testing: $10
- Bug caught after release: $100+
- Bug caught by customers: $1,000+ (includes support, reputation, fixes under pressure)
Testing is about catching problems when they're cheap to fix.
Types of Testing
Unit Testing
What: Testing individual pieces of code in isolation.
Example: Does the "calculate discount" function correctly compute 20% off?
Who does it: Developers, as they write code.
Why it matters: Catches basic errors early. Enables confident changes later.
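The discount example above can be sketched as a tiny unit test. This is a minimal Python sketch; `calculate_discount` is a hypothetical function, not code from any particular project:

```python
def calculate_discount(price, rate):
    """Return the price after applying a fractional discount, rounded to cents."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

# Unit tests exercise the function in isolation, with no database or UI involved.
assert calculate_discount(100.00, 0.20) == 80.00   # 20% off
assert calculate_discount(50.00, 0.0) == 50.00     # no discount is a valid case
```

A developer runs checks like these every time the code changes, which is what makes later refactoring safe.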
Integration Testing
What: Testing how pieces work together.
Example: Does the shopping cart correctly talk to the inventory system and payment processor?
Who does it: Developers and QA.
Why it matters: Systems are complicated. Parts that work alone can fail together.
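The shopping-cart example can be sketched as an integration-style test that wires a cart to a fake payment processor. All names here are hypothetical, and a real test would also cover the inventory system:

```python
class FakePaymentProcessor:
    """Test double standing in for a real payment gateway."""
    def __init__(self):
        self.charges = []

    def charge(self, amount_cents):
        self.charges.append(amount_cents)
        return True

class Cart:
    """Minimal cart; prices are integer cents to avoid float rounding issues."""
    def __init__(self, processor):
        self.processor = processor
        self.items = []

    def add(self, price_cents):
        self.items.append(price_cents)

    def checkout(self):
        return self.processor.charge(sum(self.items))

# Integration-style check: the pieces work together, not just alone.
processor = FakePaymentProcessor()
cart = Cart(processor)
cart.add(1999)   # $19.99
cart.add(500)    # $5.00
assert cart.checkout() is True
assert processor.charges == [2499]   # exactly one charge, for the right amount
```

The unit test earlier proved the math; this one proves the handoff between components, which is where a different class of bugs lives.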
Functional Testing
What: Testing from a user's perspective.
Example: Can a user successfully register, browse products, and complete a purchase?
Who does it: QA team or automated test suites.
Why it matters: Ensures the system does what users need.
User Acceptance Testing (UAT)
What: Real users (often you or your team) testing before launch.
Example: Does this actually work for our business processes?
Who does it: Business stakeholders, end users.
Why it matters: Final check that what was built matches what was needed.
Performance Testing
What: Testing speed and capacity.
Example: How does the system handle 1,000 simultaneous users?
Who does it: Specialists or automated tools.
Why it matters: Works great in demo ≠ works great under load.
Security Testing
What: Testing for vulnerabilities.
Example: Can someone access data they shouldn't? Can they break in?
Who does it: Security specialists.
Why it matters: Breaches are expensive and damaging.
Manual vs. Automated Testing
Manual Testing
A human clicks through the application, checking that things work.
Pros:
- Catches unexpected issues
- Good for subjective things (does this feel right?)
- Necessary for exploratory testing
Cons:
- Slow and expensive
- Humans make mistakes
- Doesn't scale
Automated Testing
Code that tests code. Run hundreds of tests in minutes.
Pros:
- Fast and repeatable
- Scales infinitely
- Catches regressions (things that used to work but broke)
Cons:
- Takes time to create
- Only catches what you test for
- Can give false confidence
The reality: You need both. Automate repetitive stuff; use humans for judgment.
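The "automate repetitive stuff" half of that trade can be sketched with Python's built-in unittest module. The shipping rules below are invented for illustration; the point is that the suite re-checks them automatically on every change:

```python
import math
import unittest

def shipping_cost(weight_kg):
    """Illustrative pricing: $5 flat up to 1 kg, then $2 per extra kg (rounded up)."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5
    return 5 + 2 * math.ceil(weight_kg - 1)

class ShippingRegressionTests(unittest.TestCase):
    # Each test pins down behavior that must keep working after future changes.
    def test_flat_rate_under_one_kg(self):
        self.assertEqual(shipping_cost(0.5), 5)

    def test_surcharge_rounds_up_partial_kg(self):
        self.assertEqual(shipping_cost(2.5), 9)  # 5 + 2 * ceil(1.5)

# Running the whole suite takes seconds, so it can run on every code change.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ShippingRegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

If a future change silently breaks the surcharge rule, the suite fails immediately; that is the regression safety net manual testing cannot provide at scale.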
What to Ask Your Development Team
- "What's your testing approach?"
  - Good answer: Describes multiple layers of testing
  - Bad answer: "We test everything" (too vague)
- "What's automated vs. manual?"
  - Good answer: Core functionality automated, exploratory manual
  - Bad answer: "We don't have time for automated tests"
- "How do you test before release?"
  - Good answer: Defined process with checkpoints
  - Bad answer: "We'll test it and let you know"
- "Who's responsible for testing?"
  - Good answer: Everyone (developers test their code, QA tests the system)
  - Bad answer: "QA catches everything"
- "What happens when bugs are found?"
  - Good answer: Prioritized, tracked, fixed, verified
  - Bad answer: Shrug
Testing Trade-offs
Testing isn't free. More testing means:
- Higher upfront cost
- Longer development time
- More confidence in quality
Less testing means:
- Lower upfront cost
- Faster delivery
- More risk of problems
Where to invest more:
- Core functionality
- Financial calculations
- Security-sensitive features
- High-traffic features
Where you might accept less:
- Rarely-used admin features
- Non-critical cosmetic elements
- Features being validated before investment
Red Flags
🚩 "We don't need tests, we're experienced developers"
🚩 No testing environment separate from production
🚩 Testing is always the thing that gets cut when time is short
🚩 "It worked on my machine"
🚩 No regression testing (testing that old things still work)
Your Role in Testing
You're not writing tests, but you have responsibilities:
- Define acceptance criteria — How will we know if this works?
- Participate in UAT — Actually use the system before launch
- Report issues clearly — Steps to reproduce, expected vs. actual
- Prioritize what matters — Help the team focus testing effort
The Bottom Line
Testing is an investment in confidence. The question isn't whether to test, but how much and where.
A good development partner helps you make informed trade-offs, not just tells you "we tested it."
Want a development process that includes serious testing? Let's talk about quality