Testing and Validation Guide
This comprehensive guide covers all aspects of testing and validation for the InnoQualis Electronic Quality Management System (EQMS), including unit testing, integration testing, end-to-end testing, contract testing, and validation procedures.
Last Updated: October 24, 2025
Version: Phase 8 In Progress (Documentation Consolidation Complete)
Status: Production Ready
Testing Strategy
Testing Pyramid
The InnoQualis testing strategy follows the testing pyramid approach:
- Unit Tests (70%): Fast, isolated tests for individual components
- Integration Tests (20%): Tests for component interactions
- End-to-End Tests (10%): Full system workflow tests
Test Coverage Requirements
- Backend: 80%+ code coverage
- Frontend: 80%+ code coverage
- API Endpoints: 100% endpoint coverage
- Critical User Journeys: 100% E2E coverage
Backend coverage reports:
- HTML: backend/htmlcov/index.html
- Terminal: summary shown during the test run.
Optional: run directly inside the backend/ directory:
EQMS_TEST_MODE=1 PYTHONPATH=. conda run -n eqms pytest -q --maxfail=1
Verify coverage >= 80% lines:
- After running tests, check the terminal coverage summary.
- Or parse the XML (example one-liner):
python - <<'PY'
import xml.etree.ElementTree as ET
root = ET.parse('backend/coverage.xml').getroot()
cov = float(root.attrib['line-rate'])*100
print(int(cov) >= 80)
PY
Frontend (Jest + RTL + MSW)
Configuration:
- frontend/jest.config.ts enforces global thresholds: lines: 80, branches: 75, functions: 75, statements: 80
- Coverage:
  - collectCoverage: true
  - coverageReporters: ['text', 'lcov']
  - lcov output: frontend/coverage/lcov.info
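For orientation, a jest.config.ts that enforces the thresholds and reporters above could look roughly like the following sketch (the testMatch glob and setup-file path are assumptions based on the troubleshooting notes below, not necessarily the repo's actual file):

```ts
// jest.config.ts - illustrative sketch only; the repository's actual config may differ.
import type { Config } from 'jest';
import nextJest from 'next/jest';

// next/jest loads the Next.js config and env files for the test environment.
const createJestConfig = nextJest({ dir: './' });

const config: Config = {
  testEnvironment: 'jsdom',
  setupFilesAfterEnv: ['<rootDir>/__tests__/setupTests.ts'],
  testMatch: ['<rootDir>/__tests__/**/*.{test,spec}.{ts,tsx}'],
  collectCoverage: true,
  coverageReporters: ['text', 'lcov'],
  coverageThreshold: {
    global: { lines: 80, branches: 75, functions: 75, statements: 80 },
  },
};

// next/jest returns an async config factory that Jest resolves at startup.
export default createJestConfig(config);
```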
Run locally:
- From frontend/:
  - pnpm install
  - pnpm test:coverage -- --runInBand
Troubleshooting:
- MSW/server setup:
  - Ensure __tests__/setupTests.ts initializes MSW (setupServer/setupWorker) and the JSDOM environment.
- Router mocks:
  - Tests use next-router-mock. Ensure imports align with the test implementation.
- "No tests found":
  - Make sure tests are under frontend/__tests__/ and named *.test.ts[x] or *.spec.ts[x] (as per testMatch).
- TypeScript issues:
  - Ensure ts-node is available and the tsconfig is compatible with the Jest transforms provided by next/jest.
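A minimal __tests__/setupTests.ts along these lines would satisfy the MSW and router-mock points above (sketch only; MSW v2 syntax is assumed, and the health-check handler is a placeholder rather than a real project handler):

```ts
// __tests__/setupTests.ts - minimal sketch; real handlers for the EQMS API live in the test suite.
import '@testing-library/jest-dom';
import { setupServer } from 'msw/node';
import { http, HttpResponse } from 'msw';

// Route next/router usage through next-router-mock so components can navigate in JSDOM.
jest.mock('next/router', () => require('next-router-mock'));

// Hypothetical default handler; individual tests register their own with server.use(...).
export const server = setupServer(
  http.get('/api/health', () => HttpResponse.json({ status: 'ok' })),
);

beforeAll(() => server.listen({ onUnhandledRequest: 'error' }));
afterEach(() => server.resetHandlers());
afterAll(() => server.close());
```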
Continuous Integration (GitHub Actions)
Workflow: .github/workflows/tests.yml
Jobs:
- Backend:
  - Ubuntu, micromamba (conda env eqms, Python 3.11)
  - Install backend/requirements.txt (+ pytest-cov)
  - Run: EQMS_TEST_MODE=1 PYTHONPATH=. pytest -q --maxfail=1 (in backend/)
  - Artifacts: backend/coverage.xml, optional backend/htmlcov/
- Frontend:
  - Node 18, pnpm v9
  - pnpm install
  - pnpm test:coverage -- --runInBand
  - Artifact: frontend/coverage/lcov.info
Caching:
- Conda/micromamba environment caching enabled.
- Node/pnpm caching via actions/setup-node with cache: "pnpm".
Quick Commands
Backend (from repo root):
- Install deps: cd backend && uv pip install -r requirements.txt
- Run tests with coverage: cd backend && EQMS_TEST_MODE=1 PYTHONPATH=. conda run -n eqms pytest -q --maxfail=1
Frontend:
- Install deps: cd frontend && pnpm install
- Run tests with coverage: pnpm test:coverage -- --runInBand
Validation
Backend coverage >= 80% (lines):
- After running backend tests, verify via XML (macOS/Linux):
python - <<'PY'
import xml.etree.ElementTree as ET
root = ET.parse('backend/coverage.xml').getroot()
percent = float(root.attrib['line-rate'])*100
print(f"Backend line coverage: {percent:.2f}%")
exit(0 if percent >= 80 else 1)
PY
- The terminal should print the percentage and exit 0 if >= 80, non-zero otherwise.
Frontend coverage thresholds:
- Jest enforces coverageThreshold; the test command fails locally and in CI if thresholds are not met.
- To confirm the lcov output was generated:
test -f frontend/coverage/lcov.info && echo "lcov generated" || (echo "No lcov generated" && exit 1)
End-to-End Testing
E2E Testing Strategy
End-to-end testing validates complete user workflows across the entire system, following the comprehensive user journey scenarios defined in docs/user-journey-validation.md.
Testing Methodology
E2E tests must:
- Click buttons and fill forms - Not just verify elements exist
- Wait for forms to appear - Fill them with real data
- Submit and verify success - Check for success messages and API responses
- Verify data persistence - Reload pages and verify data persists
- Test multi-user workflows - Verify user isolation and permissions
- Check database state - Verify API calls and database changes
User Journey Validation
The current validation scenario follows a 3-week quality management cycle:
- Week 1: Document revision & initial deviations (with dynamic form builder)
- Week 2: SOP revision & implementation (with workflow system)
- Week 3: CAPA completion & external audit
See docs/user-journey-validation.md for complete validation scenario.
E2E Test Structure
Example test following the user journey:
test('Mike Chen reports deviation with dynamic form builder', async ({ page }) => {
  // 1. Navigate to the deviations page
  await page.goto('/deviations/new');
  await expect(page.locator('[data-testid="deviation-form"]')).toBeVisible();

  // 2. Fill the form with REAL data (using the dynamic form builder)
  await page.fill('[data-testid="deviation-title-input"]', 'Temperature excursion');
  await page.fill('[data-testid="deviation-description-input"]', 'Temperature exceeded limits...');

  // 3. Use the document picker field
  await page.click('[data-testid="document-picker-field"]');
  await page.fill('[data-testid="document-search-input"]', 'Environmental Monitoring');
  await page.click('[data-testid="document-select-item-1"]');

  // 4. Verify the AI severity suggestion appears
  await expect(page.locator('[data-testid="ai-severity-suggestion"]')).toContainText('High');
  await expect(page.locator('[data-testid="ai-confidence-badge"]')).toContainText('85%');

  // 5. Upload image attachments
  await page.setInputFiles('[data-testid="image-upload-field"]', 'test-image.jpg');

  // 6. Submit and verify SUCCESS (start listening for the API response before clicking to avoid a race)
  const responsePromise = page.waitForResponse('**/api/deviations');
  await page.click('[data-testid="submit-deviation-btn"]');
  await expect(page.locator('[data-testid="success-notification"]')).toContainText('Deviation reported successfully');

  // 7. Verify the API response
  const response = await responsePromise;
  expect(response.status()).toBe(201);

  // 8. Verify the deviation appears in the list
  await page.goto('/deviations');
  await expect(page.locator('text=Temperature excursion')).toBeVisible();
});
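Beyond the UI flow above, the "verify data persistence" and "check database state" points can be covered by querying the API directly with Playwright's request fixture; the endpoint shape, auth header, and response format below are assumptions:

```ts
import { test, expect } from '@playwright/test';

test('reported deviation is persisted and retrievable via the API', async ({ request }) => {
  // Relative URL assumes baseURL is set in playwright.config; the auth header is a placeholder.
  const response = await request.get('/api/deviations', {
    headers: { Authorization: `Bearer ${process.env.E2E_API_TOKEN ?? ''}` },
  });
  expect(response.ok()).toBeTruthy();

  // Assumes the endpoint returns an array of deviations; adjust if the API wraps results.
  const deviations = await response.json();
  expect(
    deviations.some((d: { title: string }) => d.title === 'Temperature excursion'),
  ).toBe(true);
});
```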
Critical Workflows to Test
Week 1 Tests:
- Deviation reporting with dynamic form builder
- AI severity classification with confidence scores
- Document and CAPA picker fields
- Image/file upload in deviation form
- CAPA auto-creation after 2nd deviation (draft status)
- CAPA draft approval workflow
Week 2 Tests:
- SOP v2.0 approval workflow with workflow system
- Workflow state transitions and auto-advancement
- Training assignment and completion
- Adaptive quiz functionality
- Dashboard widgets for assigned items and pending approvals
Week 3 Tests:
- CAPA completion and effectiveness review
- External auditor access with 6-digit verification
- Audit timeline management
- Report generation and access revocation
- Workflow history visibility in audit interface
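As one illustration of the Week 3 items, the external-auditor flow might be exercised roughly like this (every selector and route here is an assumption to be aligned with the real UI):

```ts
import { test, expect } from '@playwright/test';

test('external auditor gains read-only access with a 6-digit code', async ({ page }) => {
  // Hypothetical audit-access page and selectors; align with the real UI before use.
  await page.goto('/audit/access');
  await page.fill('[data-testid="auditor-email-input"]', 'lisa.rodriguez@innoqualis.com');
  await page.fill('[data-testid="access-code-input"]', '123456');
  await page.click('[data-testid="verify-access-btn"]');

  // The auditor should land on the audit timeline with read-only access.
  await expect(page.locator('[data-testid="audit-timeline"]')).toBeVisible();
  await expect(page.locator('[data-testid="edit-document-btn"]')).toHaveCount(0);
});
```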
Known Issues & Resolutions
Backend Issues (RESOLVED)
The following backend issues were identified and fixed during E2E testing preparation:
Missing Database Tables:
- Issue: document_comments table did not exist, causing 500 errors
- Resolution: Created table and updated migration 20251021_01_add_document_comments.py to be idempotent
- Status: ✅ Fixed
DocumentVersion Attribute Mismatch:
- Issue: Code was using document_id instead of parent_document_id
- Resolution: Fixed in backend/app/routers/documents.py
- Status: ✅ Fixed
Missing API Endpoints:
- Issue: Training endpoint /api/training/{id} returned 404
- Resolution: Added GET /{training_id} endpoint in backend/app/routers/training.py
- Status: ✅ Fixed
Missing Signatures Endpoint:
- Issue: GET /api/signatures/ returned 405 Method Not Allowed
- Resolution: Added GET / endpoint in backend/app/routers/signatures.py
- Status: ✅ Fixed
Testing Methodology Issues (RESOLVED)
Problem: Tests were only checking if pages loaded, not validating real functionality
Solution: Implemented proper E2E tests that:
- Fill out forms with real data
- Submit and verify success messages
- Check API responses and database state
- Test actual user workflows end-to-end
- Verify AI features (suggestions, pre-filling, etc.)
Test Reports
Current System Status
Last Updated: January 2025
System Health: ✅ All services running
Feature Status: ✅ Core features complete and fully functional
Test Data
Test Users:
- sarah.johnson@innoqualis.com - QA Manager (19 permissions)
- mike.chen@innoqualis.com - User (5 permissions)
- david.kim@innoqualis.com - User (5 permissions)
- lisa.rodriguez@innoqualis.com - Auditor (4 permissions)
- Password for all: TestPassword123!
Test Documents:
- Environmental Monitoring Procedure v1.0
- Quality Management System SOP
Test CAPAs:
- CAPA-2024-001 (auto-created from deviations)
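For E2E specs, a small login helper built around the test users above keeps individual tests short; the selectors and post-login URL are assumptions:

```ts
// e2e/helpers/login.ts - hypothetical helper; selectors and routes must match the real login page.
import type { Page } from '@playwright/test';

export async function loginAs(page: Page, email: string, password = 'TestPassword123!') {
  await page.goto('/login');
  await page.fill('[data-testid="email-input"]', email);
  await page.fill('[data-testid="password-input"]', password);
  await page.click('[data-testid="login-submit-btn"]');
  // Assumes the app redirects to the dashboard after a successful login.
  await page.waitForURL('**/dashboard');
}

// Example usage: await loginAs(page, 'sarah.johnson@innoqualis.com');
```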
Validation Procedures
See the comprehensive validation procedures in docs/testing.md for:
- Pilot Go-Live Validation
- Critical Workflow Validation
- Performance Validation
- Security Validation
- Data Integrity Validation
- Rollback Procedures
User Journey Test Steps
The complete step-by-step test procedures for validating the 3-week user journey can be found in:
- Primary Reference: docs/user-journey-validation.md - complete validation scenario with workflow system and form builder updates
- Detailed Steps: see the phase-by-phase testing procedures below
Phase 1: Document Upload (QA Manager)
- Login as Sarah Johnson (QA Manager)
- Navigate to Documents → Upload Document
- Select document type and workflow template
- Upload PDF file with metadata
- Verify document created with workflow instance
- Approve document through workflow system
Phase 2: Deviation Reporting (User)
- Login as Mike Chen (User)
- Navigate to Deviations → New Deviation
- Fill dynamic form builder with sections:
- Basic Information (title, description, severity with AI suggestion)
- Additional Details (product name, lot number, impact assessment)
- Related Items (document picker, CAPA picker)
- Attachments (image upload, file upload)
- Verify AI severity classification with confidence score
- Submit and verify deviation created with form data
Phase 3: CAPA Draft Approval (QA Manager)
- Login as Sarah Johnson (QA Manager)
- Navigate to CAPA list
- Verify CAPA with "draft" status appears
- Review draft CAPA details
- Approve draft CAPA
- Verify CAPA transitions from 'draft' to 'open'
- Verify default action is auto-created
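A possible automation of this phase, asserting the draft-to-open transition through both the API response and the UI (selectors, endpoint, and response handling are assumptions):

```ts
import { test, expect } from '@playwright/test';

test('QA Manager approves a draft CAPA and it transitions to open', async ({ page }) => {
  // Assumes the QA Manager session is already established (for example via a login helper or storageState).
  await page.goto('/capas');
  await page.click('[data-testid="capa-row-CAPA-2024-001"]');

  // Start waiting for the approval call before clicking to avoid a race.
  const approvalResponse = page.waitForResponse(
    (res) => res.url().includes('/api/capas') && res.request().method() !== 'GET',
  );
  await page.click('[data-testid="approve-draft-capa-btn"]');
  const response = await approvalResponse;
  expect(response.ok()).toBeTruthy();

  // The CAPA should now show the 'open' status and an auto-created default action.
  await expect(page.locator('[data-testid="capa-status-badge"]')).toHaveText('open');
  await expect(page.locator('[data-testid="capa-action-item"]').first()).toBeVisible();
});
```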
Phase 4: Document Workflow Approval
- Navigate to document with workflow system
- Verify workflow status and current state
- Approve document in workflow state
- Verify workflow auto-transition to next state
- Verify training assignments created (if workflow defines training gate)
Phase 5: Dashboard Validation
- Verify assigned items section shows training, deviations, CAPAs
- Verify pending approvals section shows documents, deviations, CAPAs (including drafts)
- Verify KPI dashboard shows comprehensive metrics
- Verify overdue items and approaching deadlines
- Test filter controls (date range, document type, department)
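A compact sketch covering these dashboard checks (widget test IDs, the filter values, and the rendered label are assumptions):

```ts
import { test, expect } from '@playwright/test';

test('dashboard surfaces assigned items, pending approvals, and KPI filters', async ({ page }) => {
  // Assumes a logged-in QA Manager session.
  await page.goto('/dashboard');

  // Widgets listed in Phase 5 should render.
  await expect(page.locator('[data-testid="assigned-items-widget"]')).toBeVisible();
  await expect(page.locator('[data-testid="pending-approvals-widget"]')).toBeVisible();
  await expect(page.locator('[data-testid="kpi-dashboard"]')).toBeVisible();

  // Exercise a filter control and confirm the KPI widget reflects the selected range.
  await page.selectOption('[data-testid="date-range-filter"]', 'last-30-days');
  await expect(page.locator('[data-testid="kpi-dashboard"]')).toContainText('Last 30 days');
});
```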
E2E Testing Best Practices
- Test Real Functionality: Don't just check if pages load - verify actual workflows
- Verify API Calls: Check that API requests succeed and return expected data
- Check Database State: Verify that actions actually persist in the database
- Test Multi-User Scenarios: Verify user isolation and permission enforcement
- Test AI Features: Verify AI suggestions, classifications, and pre-filling work correctly
- Follow User Journeys: Test complete workflows from start to finish
- Verify Error Handling: Test error states and edge cases
- Use Real Test Data: Use realistic test data that matches production scenarios
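For the multi-user point above, separate browser contexts give each user an isolated session; the storageState files and widget selector below are assumptions:

```ts
import { test, expect } from '@playwright/test';

test('regular user cannot see the QA Manager approval queue', async ({ browser }) => {
  // Separate contexts keep cookies and storage isolated, so the two sessions cannot leak into each other.
  // The storageState files are hypothetical; they would be produced by a login setup step
  // (for example, a helper like the loginAs sketch in the Test Data section).
  const qaContext = await browser.newContext({ storageState: 'e2e/.auth/qa-manager.json' });
  const userContext = await browser.newContext({ storageState: 'e2e/.auth/regular-user.json' });
  const qaPage = await qaContext.newPage();
  const userPage = await userContext.newPage();

  // The QA Manager (19 permissions) should see pending approvals; a 5-permission user should not.
  await qaPage.goto('/dashboard');
  await expect(qaPage.locator('[data-testid="pending-approvals-widget"]')).toBeVisible();
  await userPage.goto('/dashboard');
  await expect(userPage.locator('[data-testid="pending-approvals-widget"]')).toHaveCount(0);

  await qaContext.close();
  await userContext.close();
});
```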
Notes
- Required versions:
  - Python 3.11; Node 18; pnpm 9
- The backend uses EQMS_TEST_MODE=1 to ensure in-memory SQLite, preventing accidental writes and speeding up tests.
- Coverage thresholds:
  - Backend: validated via report inspection (target ≥80% lines).
  - Frontend: enforced by Jest coverageThreshold (build fails if below).
- E2E testing follows the user journey validation scenario in docs/user-journey-validation.md
- All E2E tests should verify real functionality, not just page loads