Feb 8, 2025 at 5:15 PM · #6
Jumping in with some statistical perspective on sample size and testing frequency.
With 47 tests over ~18 months, we've established a baseline failure rate of ~19%. But that sample is self-selected — people are more likely to pay to test a product they're already suspicious about — so the true failure rate for randomly selected products is probably lower, maybe 10-15%.
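Worth noting how wide the uncertainty on that 19% is even before accounting for selection bias. A quick sketch using a Wilson score interval, assuming ~19% of 47 corresponds to roughly 9 failed tests (the post only gives the rate, so that count is my inference):

```python
from math import sqrt

def wilson_interval(failures, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = failures / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - margin) / denom, (centre + margin) / denom

# ~19% of 47 tests is roughly 9 failures (assumed, not stated in the post)
lo, hi = wilson_interval(9, 47)
print(f"95% CI for the failure rate: {lo:.1%} to {hi:.1%}")
```

The interval comes out to roughly 10-33%, so a true rate of 10-15% is entirely consistent with the data even without invoking the suspicious-product bias.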
How often should we test?
For high-volume vendors: every 2-3 months or whenever a batch number changes. This catches:
- Manufacturing drift (gradual quality decline)
- Supplier changes (different API source)
- Intentional reformulation
For low-volume or new vendors: immediately, before anyone commits to multiple purchases.
Statistical power: at ~20 tests per year we could maintain coverage of 5-6 active vendors with testing every 3-4 months. That covers the most popular sources and leaves room for a few wildcard tests. (Note: the budget math below supports fewer tests than that, so treat 20/year as the target, not the plan.)
Budget math at $25/person/round, 4 rounds/year:
- 20 contributors × $25 × 4 rounds = $2,000/year
- Basic testing: $2,000 ÷ $250/test = 8 tests/year
- With dual-lab: $2,000 ÷ $500/test = 4 tests/year
- Realistic mix: 6 basic + 1 dual-lab = $2,000
That's 7 tests per year. Enough to cover the top 4-5 vendors with annual retesting plus 2-3 new vendor evaluations. Tight but workable.
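The budget arithmetic above, written out as a quick sanity check (the constant names are mine):

```python
CONTRIBUTORS = 20
PLEDGE = 25        # $ per person per round
ROUNDS = 4         # rounds per year
BASIC_COST = 250   # $ per single-lab test
DUAL_COST = 500    # $ per dual-lab test

budget = CONTRIBUTORS * PLEDGE * ROUNDS       # annual pool
mix_cost = 6 * BASIC_COST + 1 * DUAL_COST     # the "realistic mix"

print(f"Annual budget: ${budget}")            # $2,000
print(f"Mix cost (6 basic + 1 dual): ${mix_cost}")
assert mix_cost == budget                     # the mix exactly spends the pool
```

If pledges or costs shift, just edit the constants — e.g. dropping to $200/test basic would fund the full 8-test schedule with a dual-lab check left over every other year.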
Last edited: Feb 8, 2025 at 6:15 PM