Testing and Validation Guide

This comprehensive guide covers all aspects of testing and validation for the InnoQualis Electronic Quality Management System (EQMS), including unit testing, integration testing, end-to-end testing, contract testing, and validation procedures.

Last Updated: October 24, 2025
Version: Phase 8 In Progress (Documentation Consolidation Complete)
Status: Production Ready

Testing Strategy

Testing Pyramid

The InnoQualis testing strategy follows the testing pyramid approach:

  1. Unit Tests (70%): Fast, isolated tests for individual components
  2. Integration Tests (20%): Tests for component interactions
  3. End-to-End Tests (10%): Full system workflow tests

Test Coverage Requirements

  • Backend: 80%+ code coverage
  • Frontend: 80%+ code coverage
  • API Endpoints: 100% endpoint coverage
  • Critical User Journeys: 100% E2E coverage

Backend (Pytest + Coverage)

Coverage reports:

  • HTML: backend/htmlcov/index.html
  • Terminal summary printed during the test run.

Run locally (inside the backend/ directory):

  • EQMS_TEST_MODE=1 PYTHONPATH=. conda run -n eqms pytest -q --maxfail=1

Verify coverage >= 80% lines:

  • After running tests, check terminal coverage summary.
  • Or parse XML (example):

python - <<'PY'
import xml.etree.ElementTree as ET
root = ET.parse('backend/coverage.xml').getroot()
cov = float(root.attrib['line-rate']) * 100
print(int(cov) >= 80)
PY

Frontend (Jest + RTL + MSW)

Configuration:

  • frontend/jest.config.ts enforces global coverage thresholds (see the sketch after this list):
    • lines: 80, branches: 75, functions: 75, statements: 80
  • Coverage:
    • collectCoverage: true
    • coverageReporters: ['text', 'lcov']
    • lcov output: frontend/coverage/lcov.info
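
A minimal sketch of how the thresholds and coverage settings above might look in frontend/jest.config.ts, assuming the standard next/jest preset; the setup-file path and testMatch pattern are assumptions based on the troubleshooting notes below, so treat the project file as the source of truth.

// frontend/jest.config.ts — illustrative sketch, not the verbatim project config.
import type { Config } from 'jest';
import nextJest from 'next/jest.js';

const createJestConfig = nextJest({ dir: './' });

const config: Config = {
  testEnvironment: 'jsdom',
  // Assumed location, per the MSW troubleshooting note below.
  setupFilesAfterEnv: ['<rootDir>/__tests__/setupTests.ts'],
  // Assumed pattern: *.test.ts[x] / *.spec.ts[x] under __tests__/.
  testMatch: ['<rootDir>/__tests__/**/?(*.)+(spec|test).[jt]s?(x)'],
  collectCoverage: true,
  coverageReporters: ['text', 'lcov'],
  coverageThreshold: {
    global: { lines: 80, branches: 75, functions: 75, statements: 80 },
  },
};

export default createJestConfig(config);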

Run locally:

  • From frontend/:
    • pnpm install
    • pnpm test:coverage -- --runInBand

Troubleshooting:

  • MSW/server setup:
    • Ensure __tests__/setupTests.ts initializes MSW (setupServer / setupWorker) and the JSDOM environment (see the sketch after this list).
  • Router mocks:
    • Tests use next-router-mock. Ensure imports align with test implementation.
  • "No tests found":
    • Make sure tests are under frontend/__tests__/ and named *.test.ts[x] or *.spec.ts[x] (as per testMatch).
  • TypeScript issues:
    • Ensure ts-node is available and the tsconfig is compatible with Jest transforms provided by next/jest.
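
A minimal sketch of __tests__/setupTests.ts covering the MSW and router-mock points above, assuming MSW v1's rest API and next-router-mock; the example handler is purely illustrative and should be replaced by the handlers your tests rely on.

// __tests__/setupTests.ts — illustrative sketch, assuming MSW v1 and next-router-mock.
import '@testing-library/jest-dom';
import { setupServer } from 'msw/node';
import { rest } from 'msw';

// Route next/router calls to the in-memory mock used by the tests.
jest.mock('next/router', () => require('next-router-mock'));

// Default handlers; individual tests can override them with server.use(...).
export const server = setupServer(
  rest.get('*/api/deviations', (_req, res, ctx) => res(ctx.status(200), ctx.json([])))
);

beforeAll(() => server.listen({ onUnhandledRequest: 'warn' }));
afterEach(() => server.resetHandlers());
afterAll(() => server.close());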

Continuous Integration (GitHub Actions)

Workflow:

  • .github/workflows/tests.yml
  • Jobs:
    • Backend:
      • Ubuntu, micromamba (conda env eqms, Python 3.11)
      • Install backend/requirements.txt (+ pytest-cov)
      • Run: EQMS_TEST_MODE=1 PYTHONPATH=. pytest -q --maxfail=1 (in backend/)
      • Artifacts: backend/coverage.xml, optional backend/htmlcov/
    • Frontend:
      • Node 18, pnpm v9
      • pnpm install
      • pnpm test:coverage -- --runInBand
      • Artifact: frontend/coverage/lcov.info
  • Caching:
    • Conda/micromamba environment caching enabled.
    • Node/pnpm caching via actions/setup-node with cache: "pnpm".

Quick Commands

Backend (from repo root):

  • Install deps:
    • cd backend && uv pip install -r requirements.txt
  • Run tests with coverage:
    • cd backend && EQMS_TEST_MODE=1 PYTHONPATH=. conda run -n eqms pytest -q --maxfail=1

Frontend:

  • Install deps:
    • cd frontend && pnpm install
  • Run tests with coverage:
    • pnpm test:coverage -- --runInBand

Validation

Backend coverage >= 80% (lines):

  • After running backend tests, verify via XML (macOS/Linux):

python - <<'PY'
import xml.etree.ElementTree as ET
root = ET.parse('backend/coverage.xml').getroot()
percent = float(root.attrib['line-rate']) * 100
print(f"Backend line coverage: {percent:.2f}%")
exit(0 if percent >= 80 else 1)
PY

  • Terminal should print the percentage and exit 0 if >= 80, non-zero otherwise.

Frontend coverage thresholds:

  • Jest enforces coverageThreshold. The test command will fail CI/local run if thresholds are not met.
  • To print a concise summary and verify lcov output exists:
    • test -f frontend/coverage/lcov.info && echo "lcov generated" || (echo "No lcov generated" && exit 1)
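
Beyond checking that the file exists, line coverage can be totalled directly from the lcov records; a minimal Node/TypeScript sketch (the file name check-lcov.ts is hypothetical; run it with ts-node or tsx):

// check-lcov.ts — hypothetical helper: sums LF (lines found) and LH (lines hit)
// records from lcov.info and checks the 80% line target.
import { readFileSync } from 'node:fs';

const lcov = readFileSync('frontend/coverage/lcov.info', 'utf8');
let found = 0;
let hit = 0;
for (const line of lcov.split('\n')) {
  if (line.startsWith('LF:')) found += Number(line.slice(3));
  if (line.startsWith('LH:')) hit += Number(line.slice(3));
}
const percent = found > 0 ? (hit / found) * 100 : 0;
console.log(`Frontend line coverage: ${percent.toFixed(2)}%`);
process.exit(percent >= 80 ? 0 : 1);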

End-to-End Testing

E2E Testing Strategy

End-to-end testing validates complete user workflows across the entire system, following the comprehensive user journey scenarios defined in docs/user-journey-validation.md.

Testing Methodology

E2E tests must:

  1. Click buttons and fill forms - Not just verify elements exist
  2. Wait for forms to appear - Fill them with real data
  3. Submit and verify success - Check for success messages and API responses
  4. Verify data persistence - Reload pages and verify data persists
  5. Test multi-user workflows - Verify user isolation and permissions
  6. Check database state - Verify API calls and database changes

User Journey Validation

The current validation scenario follows a 3-week quality management cycle:

  • Week 1: Document revision & initial deviations (with dynamic form builder)
  • Week 2: SOP revision & implementation (with workflow system)
  • Week 3: CAPA completion & external audit

See docs/user-journey-validation.md for complete validation scenario.

E2E Test Structure

Example test following user journey:

test('Mike Chen reports deviation with dynamic form builder', async ({ page }) => {
// 1. Navigate to deviations page
await page.goto('/deviations/new');
await expect(page.locator('[data-testid="deviation-form"]')).toBeVisible();

// 2. Fill form with REAL data (using dynamic form builder)
await page.fill('[data-testid="deviation-title-input"]', 'Temperature excursion');
await page.fill('[data-testid="deviation-description-input"]', 'Temperature exceeded limits...');

// 3. Use document picker field
await page.click('[data-testid="document-picker-field"]');
await page.fill('[data-testid="document-search-input"]', 'Environmental Monitoring');
await page.click('[data-testid="document-select-item-1"]');

// 4. Verify AI severity suggestion appears
await expect(page.locator('[data-testid="ai-severity-suggestion"]')).toContainText('High');
await expect(page.locator('[data-testid="ai-confidence-badge"]')).toContainText('85%');

// 5. Upload image attachments
await page.setInputFiles('[data-testid="image-upload-field"]', 'test-image.jpg');

// 6. Submit and verify SUCCESS (start waiting for the API response before clicking so it is not missed)
const responsePromise = page.waitForResponse('**/api/deviations');
await page.click('[data-testid="submit-deviation-btn"]');
await expect(page.locator('[data-testid="success-notification"]')).toContainText('Deviation reported successfully');

// 7. Verify API response
const response = await responsePromise;
expect(response.status()).toBe(201);

// 8. Verify deviation appears in list
await page.goto('/deviations');
await expect(page.locator('text=Temperature excursion')).toBeVisible();
});

Critical Workflows to Test

Week 1 Tests:

  • Deviation reporting with dynamic form builder
  • AI severity classification with confidence scores
  • Document and CAPA picker fields
  • Image/file upload in deviation form
  • CAPA auto-creation after 2nd deviation (draft status)
  • CAPA draft approval workflow

Week 2 Tests:

  • SOP v2.0 approval workflow with workflow system
  • Workflow state transitions and auto-advancement
  • Training assignment and completion
  • Adaptive quiz functionality
  • Dashboard widgets for assigned items and pending approvals

Week 3 Tests:

  • CAPA completion and effectiveness review
  • External auditor access with 6-digit verification
  • Audit timeline management
  • Report generation and access revocation
  • Workflow history visibility in audit interface

Known Issues & Resolutions

Backend Issues (RESOLVED)

The following backend issues were identified and fixed during E2E testing preparation:

Missing Database Tables:

  • Issue: document_comments table did not exist, causing 500 errors
  • Resolution: Created table and updated migration 20251021_01_add_document_comments.py to be idempotent
  • Status: ✅ Fixed

DocumentVersion Attribute Mismatch:

  • Issue: Code was using document_id instead of parent_document_id
  • Resolution: Fixed in backend/app/routers/documents.py
  • Status: ✅ Fixed

Missing API Endpoints:

  • Issue: Training endpoint /api/training/{id} returned 404
  • Resolution: Added GET /{training_id} endpoint in backend/app/routers/training.py
  • Status: ✅ Fixed

Missing Signatures Endpoint:

  • Issue: GET /api/signatures/ returned 405 Method Not Allowed
  • Resolution: Added GET / endpoint in backend/app/routers/signatures.py
  • Status: ✅ Fixed

Testing Methodology Issues (RESOLVED)

Problem: Tests were only checking if pages loaded, not validating real functionality

Solution: Implemented proper E2E tests that:

  • Fill out forms with real data
  • Submit and verify success messages
  • Check API responses and database state
  • Test actual user workflows end-to-end
  • Verify AI features (suggestions, pre-filling, etc.)

Test Reports

Current System Status

Last Updated: January 2025
System Health: ✅ All services running
Feature Status: ✅ Core features complete and fully functional

Test Data

Test Users:

  • sarah.johnson@innoqualis.com - QA Manager (19 permissions)
  • mike.chen@innoqualis.com - User (5 permissions)
  • david.kim@innoqualis.com - User (5 permissions)
  • lisa.rodriguez@innoqualis.com - Auditor (4 permissions)
  • Password for all: TestPassword123!

Test Documents:

  • Environmental Monitoring Procedure v1.0
  • Quality Management System SOP

Test CAPAs:

  • CAPA-2024-001 (auto-created from deviations)
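
For automated E2E runs, the test users listed above are typically signed in through the login form. A minimal Playwright helper sketch; the file path, data-testids, and routes are hypothetical and should be aligned with the real login page:

// e2e/helpers/auth.ts — hypothetical helper: logs a seeded test user in via the UI.
import { Page } from '@playwright/test';

export async function loginAs(page: Page, email: string): Promise<void> {
  await page.goto('/login');
  await page.fill('[data-testid="email-input"]', email);
  await page.fill('[data-testid="password-input"]', 'TestPassword123!');
  await page.click('[data-testid="login-submit-btn"]');
  // Wait for the authenticated shell before the test continues.
  await page.waitForURL('**/dashboard');
}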

Validation Procedures

See the comprehensive validation procedures in docs/testing.md for:

  • Pilot Go-Live Validation
  • Critical Workflow Validation
  • Performance Validation
  • Security Validation
  • Data Integrity Validation
  • Rollback Procedures

User Journey Test Steps

The complete step-by-step test procedures for validating the 3-week user journey can be found in:

  • Primary Reference: docs/user-journey-validation.md - Complete validation scenario with workflow system and form builder updates
  • Detailed Steps: See "User Journey Test Steps" section below for phase-by-phase testing procedures

Phase 1: Document Upload (QA Manager)

  1. Login as Sarah Johnson (QA Manager)
  2. Navigate to Documents → Upload Document
  3. Select document type and workflow template
  4. Upload PDF file with metadata
  5. Verify document created with workflow instance
  6. Approve document through workflow system
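
A Playwright sketch covering steps 1-5 above (step 6, workflow approval, is exercised in Phase 4); every selector and route is hypothetical, and loginAs is the helper sketched under Test Data:

// Phase 1 sketch (hypothetical selectors): QA manager uploads a document with a workflow template.
import { test, expect } from '@playwright/test';
import { loginAs } from './helpers/auth'; // hypothetical helper sketched under Test Data

test('Phase 1: QA manager uploads a document with a workflow template', async ({ page }) => {
  await loginAs(page, 'sarah.johnson@innoqualis.com');
  await page.goto('/documents/upload');
  await page.selectOption('[data-testid="document-type-select"]', 'SOP');
  await page.selectOption('[data-testid="workflow-template-select"]', 'Document Approval');
  await page.fill('[data-testid="document-title-input"]', 'Environmental Monitoring Procedure');
  await page.setInputFiles('[data-testid="document-file-input"]', 'fixtures/procedure-v1.pdf');
  await page.click('[data-testid="upload-document-btn"]');
  // Step 5: the new document should carry a workflow instance.
  await expect(page.locator('[data-testid="workflow-status-badge"]')).toBeVisible();
});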

Phase 2: Deviation Reporting (User)

  1. Login as Mike Chen (User)
  2. Navigate to Deviations → New Deviation
  3. Fill dynamic form builder with sections:
    • Basic Information (title, description, severity with AI suggestion)
    • Additional Details (product name, lot number, impact assessment)
    • Related Items (document picker, CAPA picker)
    • Attachments (image upload, file upload)
  4. Verify AI severity classification with confidence score
  5. Submit and verify deviation created with form data

Phase 3: CAPA Draft Approval (QA Manager)

  1. Login as Sarah Johnson (QA Manager)
  2. Navigate to CAPA list
  3. Verify CAPA with "draft" status appears
  4. Review draft CAPA details
  5. Approve draft CAPA
  6. Verify CAPA transitions from 'draft' to 'open'
  7. Verify default action is auto-created
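
A Playwright sketch of the steps above; the selectors are hypothetical, while CAPA-2024-001 and the draft-to-open transition come from the scenario itself:

// Phase 3 sketch (hypothetical selectors): approve the auto-created draft CAPA.
import { test, expect } from '@playwright/test';
import { loginAs } from './helpers/auth'; // hypothetical helper sketched under Test Data

test('Phase 3: QA manager approves a draft CAPA', async ({ page }) => {
  await loginAs(page, 'sarah.johnson@innoqualis.com');
  await page.goto('/capas');
  await page.click('text=CAPA-2024-001');
  await expect(page.locator('[data-testid="capa-status-badge"]')).toContainText('draft');
  await page.click('[data-testid="approve-draft-capa-btn"]');
  // Steps 6-7: status transitions to 'open' and a default action is auto-created.
  await expect(page.locator('[data-testid="capa-status-badge"]')).toContainText('open');
  await expect(page.locator('[data-testid="capa-action-item"]').first()).toBeVisible();
});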

Phase 4: Document Workflow Approval

  1. Navigate to document with workflow system
  2. Verify workflow status and current state
  3. Approve document in workflow state
  4. Verify workflow auto-transition to next state
  5. Verify training assignments created (if workflow defines training gate)

Phase 5: Dashboard Validation

  1. Verify assigned items section shows training, deviations, CAPAs
  2. Verify pending approvals section shows documents, deviations, CAPAs (including drafts)
  3. Verify KPI dashboard shows comprehensive metrics
  4. Verify overdue items and approaching deadlines
  5. Test filter controls (date range, document type, department)

E2E Testing Best Practices

  1. Test Real Functionality: Don't just check if pages load - verify actual workflows
  2. Verify API Calls: Check that API requests succeed and return expected data
  3. Check Database State: Verify that actions actually persist in the database
  4. Test Multi-User Scenarios: Verify user isolation and permission enforcement (see the sketch after this list)
  5. Test AI Features: Verify AI suggestions, classifications, and pre-filling work correctly
  6. Follow User Journeys: Test complete workflows from start to finish
  7. Verify Error Handling: Test error states and edge cases
  8. Use Real Test Data: Use realistic test data that matches production scenarios
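
For practice 4, two isolated browser contexts make permission checks explicit. A minimal sketch, assuming (hypothetically) that the pending-approvals widget is only rendered for users holding approval permissions, and reusing the loginAs helper sketched under Test Data:

// Multi-user sketch (hypothetical selectors and permission behavior).
import { test, expect } from '@playwright/test';
import { loginAs } from './helpers/auth'; // hypothetical helper sketched under Test Data

test('dashboard approvals are permission-gated', async ({ browser }) => {
  const qaContext = await browser.newContext();
  const userContext = await browser.newContext();
  const qaPage = await qaContext.newPage();
  const userPage = await userContext.newPage();

  await loginAs(qaPage, 'sarah.johnson@innoqualis.com'); // QA Manager (19 permissions)
  await loginAs(userPage, 'mike.chen@innoqualis.com');   // User (5 permissions)

  await qaPage.goto('/dashboard');
  await expect(qaPage.locator('[data-testid="pending-approvals"]')).toBeVisible();

  await userPage.goto('/dashboard');
  await expect(userPage.locator('[data-testid="pending-approvals"]')).toHaveCount(0);

  await qaContext.close();
  await userContext.close();
});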

Notes

  • Required versions:
    • Python 3.11; Node 18; pnpm 9
  • The backend uses EQMS_TEST_MODE=1 to ensure in-memory SQLite, preventing accidental writes and speeding up tests.
  • Coverage thresholds:
    • Backend: validated via report inspection (target ≥80% lines).
    • Frontend: enforced by Jest coverageThreshold (build fails if below).
  • E2E testing follows the user journey validation scenario in docs/user-journey-validation.md
  • All E2E tests should verify real functionality, not just page loads