
SELENIUM AND VITEST DOCUMENTATION

yusuf anil yazici edited this page Nov 11, 2025 · 2 revisions

Frontend Test Automation Guide (Selenium & Vitest)

This document explains how the NutriHub frontend project approaches automation with Selenium (end-to-end) and Vitest (component/unit). It doubles as onboarding material for contributors who want to extend or maintain the suites.


1. Strategy Overview

  • Layered coverage: Selenium validates critical user journeys across the full React → Django → MySQL stack, while Vitest guards component logic, hooks, and utilities inside the React/TypeScript codebase.
  • Shift-left mindset: Vitest runs in watch mode during development and on every pull request, catching regressions quickly. Selenium smoke suites run before merges or on a nightly cadence to guard against integration drift.
  • Traceability: Each automated scenario maps to a real user goal (sign-up, meal logging, advanced search) so failing tests produce actionable information rather than noise.

2. Canonical Folder Layout

frontend/
├── src/
│   ├── components/
│   ├── pages/
│   └── test/
│       ├── components/          # Vitest component tests
│       ├── pages/               # Vitest page tests
│       ├── selenium/            # Selenium E2E tests
│       │   ├── selenium.config.ts
│       │   ├── Login.selenium.test.ts
│       │   ├── Signup.selenium.test.ts
│       │   ├── Forum.selenium.test.ts
│       │   ├── Foods.selenium.test.ts
│       │   └── ...
│       └── setup.ts
└── vitest.config.ts

Keep this structure consistent; utilities (auth helpers, API stubs) should live under src/test/selenium so they can be imported across specs.
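As an illustration of such a shared utility, the sketch below shows a hypothetical login helper (the `loginAs` name, element ids, and simplified locator shape are assumptions, not facts from the repo). Typing the driver structurally, rather than against the full `WebDriver` class, lets a fake stand in during unit tests:

```typescript
// Hypothetical shared helper for src/test/selenium/ — adapt ids/locators to
// the real markup. The locator object is a simplified stand-in for By.*.
export interface MinimalDriver {
  get(url: string): Promise<void>;
  findElement(locator: { id?: string; css?: string }): Promise<{
    sendKeys(text: string): Promise<void>;
    click(): Promise<void>;
  }>;
}

// Drives the login form and submits; callers assert on the resulting page.
export async function loginAs(
  driver: MinimalDriver,
  baseUrl: string,
  username: string,
  password: string
): Promise<void> {
  await driver.get(`${baseUrl}/login`);
  await (await driver.findElement({ id: 'username' })).sendKeys(username);
  await (await driver.findElement({ id: 'password' })).sendKeys(password);
  await (await driver.findElement({ css: 'button[type=submit]' })).click();
}
```

Because the helper only depends on the two methods it calls, specs can import it against a real `WebDriver` while unit tests exercise it with an in-memory fake.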


3. Environment Setup

Requirement     | Version / Notes
--------------- | ---------------
Node.js         | v20.x (matches project)
Package manager | npm 10+ or pnpm 9+
Browsers        | Chrome/Chromium 121+ (headless supported)
Browser drivers | ChromeDriver matching the installed browser (managed automatically by selenium-webdriver when possible)
OS deps         | zip, unzip, jq (for logs/artifacts)
Env vars        | BASE_URL (defaults to http://localhost:5173); SELENIUM_GRID_URL if using a remote grid

Setup steps:

  1. cd frontend
  2. npm install
  3. Install Chrome/Chromium (or Firefox + GeckoDriver if required).
  4. For Selenium tests, install drivers if needed: brew install --cask chromedriver (macOS) or sudo apt-get install chromium-chromedriver (Linux).
  5. Optional: Verify driver versions with chromedriver --version.

4. Selenium End-to-End Suite

4.1 Working Principles

  • WebDriver protocol: The JS client issues W3C WebDriver commands to ChromeDriver/GeckoDriver; the driver carries out DOM interactions, navigation, and screenshot capture.
  • Imperative flow: Specs open a session via Builder, exercise a scenario with findElement, click, sendKeys, and assert DOM/URL/API side effects before closing the session in afterAll.
  • Synchronization: Prefer explicit waits (driver.wait(until.elementLocated(...))) or helper wrappers. Avoid implicit waits—they mask latency issues and can produce flakes when React re-renders.
  • Data/control reset: Hooks like beforeEach should clear cookies/localStorage or call backend reset endpoints to keep runs deterministic.
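A reset hook along those lines might look like the sketch below (the `resetBrowserState` name and its shape are assumptions; the backend reset call is omitted). The structural interface again keeps it unit-testable without a browser:

```typescript
// Hypothetical beforeEach reset: clears cookies and web storage so each
// spec starts from a clean session. Only the capabilities actually used
// are declared, so a fake can verify the helper in isolation.
export interface ResettableDriver {
  manage(): { deleteAllCookies(): Promise<void> };
  executeScript(script: string): Promise<unknown>;
}

export async function resetBrowserState(driver: ResettableDriver): Promise<void> {
  await driver.manage().deleteAllCookies();
  await driver.executeScript(
    'window.localStorage.clear(); window.sessionStorage.clear();'
  );
}
```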

4.2 Running Tests Locally

  1. Ensure the dev server is running: npm run dev (defaults to http://localhost:5173).
  2. In another terminal: cd frontend.
  3. Run the suite: npm test -- src/test/selenium.
  4. For a single test: npm test -- src/test/selenium/Login.selenium.test.ts.
  5. To watch tests run (non-headless), edit src/test/selenium/selenium.config.ts and set headless: false.
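For reference, the config being edited in step 5 might be shaped roughly like this (field names and defaults are assumptions about selenium.config.ts, not verified repo contents):

```typescript
// Hypothetical shape of src/test/selenium/selenium.config.ts.
// BASE_URL mirrors the env var from the setup table above.
export interface SeleniumConfig {
  baseUrl: string;
  headless: boolean;
  defaultTimeoutMs: number;
}

export const defaultConfig: SeleniumConfig = {
  baseUrl: process.env.BASE_URL ?? 'http://localhost:5173',
  headless: true, // flip to false to watch the browser during local runs
  defaultTimeoutMs: 5000,
};
```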

Outputs:

  • Console logs per spec.
  • Artifacts/screenshots can be captured on failure by enhancing the config.
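One way to add that screenshot capture is a small helper like the sketch below (the `saveFailureScreenshot` name and artifact directory are assumptions). In selenium-webdriver, `takeScreenshot()` resolves to a base64-encoded PNG string, which we decode and persist:

```typescript
import { mkdirSync, writeFileSync } from 'node:fs';
import { join } from 'node:path';

// Minimal capability the helper needs; WebDriver satisfies it structurally.
export interface ScreenshotCapable {
  takeScreenshot(): Promise<string>; // base64 PNG
}

// Writes a timestamped PNG under `dir` and returns the file path, so a
// catch/afterEach hook can attach it to CI artifacts on failure.
export async function saveFailureScreenshot(
  driver: ScreenshotCapable,
  name: string,
  dir = 'artifacts'
): Promise<string> {
  mkdirSync(dir, { recursive: true });
  const file = join(dir, `${name}-${Date.now()}.png`);
  writeFileSync(file, Buffer.from(await driver.takeScreenshot(), 'base64'));
  return file;
}
```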

4.3 Example Scenario & Expected Outcome

// frontend/src/test/selenium/Login.selenium.test.ts
import { describe, it, expect, beforeAll, afterAll } from 'vitest';
import { WebDriver, By, until } from 'selenium-webdriver';
import { createDriver, quitDriver, defaultConfig } from './selenium.config';

describe('Login flow', () => {
  let driver: WebDriver;

  beforeAll(async () => {
    driver = await createDriver(defaultConfig);
  }, 30000);

  afterAll(async () => {
    await quitDriver(driver);
  });

  it('allows a user to access the dashboard', async () => {
    await driver.get(`${defaultConfig.baseUrl}/login`);
    await driver.findElement(By.id('username')).sendKeys('testuser');
    await driver.findElement(By.id('password')).sendKeys('TestPass123!');
    await driver.findElement(By.css('button[type=submit]')).click();
    await driver.wait(until.elementLocated(By.xpath("//h1[contains(.,'Dashboard') or contains(.,'Forum')]")), 5000);
    expect(await driver.getCurrentUrl()).not.toContain('/login');
  }, 30000);
});

Expected outcome

  • Browser navigates successfully after login.
  • Target page loads within 5 seconds.
  • Cookies contain a valid session token; screenshot saved only on failure.

4.4 Debugging & Practicality

  • Strengths: Captures true user behavior (routing, API calls, layouts) and validates integrations.
  • Costs/Risks: Slower feedback (5–20 s per flow), brittle selectors if UI churns, and requires driver maintenance plus seeded data.
  • Operational tips:
    • Use driver.manage().logs().get('browser') to print console errors when assertions fail.
    • Reproduce a failing CI run locally by exporting the same BASE_URL and dataset snapshot.
  • Adoption effort: Medium. Initial investment covers driver setup, config, and npm scripts, after which adding scenarios becomes incremental.
  • Best-fit coverage: Login/signup flows, profile updates, forum interactions, food search, meal planning.
  • When to defer: Pure styling checks or hook-level logic—use Vitest/RTL there.
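The browser-log tip above can be wrapped in a small debugging helper, sketched here (the `dumpBrowserLogs` name is an assumption; the `manage().logs().get('browser')` call mirrors selenium-webdriver's API):

```typescript
// Hypothetical helper that prints browser console entries when an assertion
// fails, typed structurally so it can be exercised with a fake driver.
export interface LogCapableDriver {
  manage(): {
    logs(): {
      get(type: string): Promise<Array<{ level: string; message: string }>>;
    };
  };
}

export async function dumpBrowserLogs(driver: LogCapableDriver): Promise<string[]> {
  const entries = await driver.manage().logs().get('browser');
  const lines = entries.map((e) => `[${e.level}] ${e.message}`);
  lines.forEach((line) => console.log(line));
  return lines;
}
```

Calling this from a `catch` block or a failure hook surfaces frontend console errors directly in the test output, which shortens the reproduce-locally loop considerably.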

5. Vitest Component & Utility Suite

5.1 Working Principles

  • All-in-one runner: Vitest bundles the runner, assertion library, mocking, and snapshot tooling; jsdom simulates a browser DOM.
  • Isolation model: Each spec runs against an isolated module registry, and fake timers/mocks can be scoped per test, keeping results deterministic and fast (often <100 ms per spec).
  • Mocking & dependency control: vi.mock, vi.spyOn, and manual mocks let us replace API adapters, Context providers, or localStorage helpers.
  • Feedback loops: npm test -- --watch reruns only affected specs; npm test -- --coverage highlights untested reducers or calculation utilities before reviews.
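As a concrete illustration of vi.mock replacing an API adapter, a spec might look like the sketch below (the module path, `fetchFoods` function, and payload shape are assumptions for illustration; it runs only under Vitest). vi.mock calls are hoisted, so the factory replaces the module before the later import resolves:

```typescript
// Hypothetical spec demonstrating vi.mock dependency control.
import { describe, it, expect, vi } from 'vitest';

// Replace the real API adapter with a controllable fake.
vi.mock('../../services/api', () => ({
  fetchFoods: vi.fn().mockResolvedValue([{ id: 1, name: 'Oatmeal' }]),
}));

import { fetchFoods } from '../../services/api';

describe('API adapter mocking', () => {
  it('returns the stubbed payload instead of hitting the network', async () => {
    const foods = await fetchFoods();
    expect(foods).toEqual([{ id: 1, name: 'Oatmeal' }]);
    expect(fetchFoods).toHaveBeenCalledTimes(1);
  });
});
```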

5.2 Running & CI Commands

  • Local development:
    • npm test → single run with reporters.
    • npm test -- --watch → reruns changed specs.
    • npm test -- --coverage → generates coverage summary at coverage/lcov-report/index.html.
  • CI:
    • npm ci && npm test -- --run for deterministic output.
    • Fail the workflow if coverage on critical directories (src/hooks, src/utils) drops below the configured threshold (set in vitest.config.ts).
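A per-directory coverage gate of that kind can be expressed in vitest.config.ts roughly as follows (the threshold numbers here are placeholders, not the project's real values):

```typescript
// Hypothetical coverage gate in vitest.config.ts. Vitest fails the run
// when any glob's coverage drops below its configured threshold.
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    environment: 'jsdom',
    setupFiles: ['./src/test/setup.ts'],
    coverage: {
      reporter: ['text', 'lcov'],
      thresholds: {
        'src/hooks/**': { lines: 80, functions: 80 },
        'src/utils/**': { lines: 80, functions: 80 },
      },
    },
  },
});
```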

5.3 Practicality

  • Strengths: Zero browser dependencies, blazing-fast feedback, ideal for validation-heavy forms, hook logic, Context providers, and shared UI primitives (cards, charts).
  • Costs/Risks: Cannot surface browser-specific issues (service workers, cookie policies) or backend integration bugs.
  • Best-fit coverage: Form validation, nutrition calculations, UI state machines, custom hooks, and data formatting utilities.
  • When to escalate to Selenium: Anything requiring a true browser engine, cross-tab interactions, storage quirks, or validation of backend-rendered data.

Example component test:

// frontend/src/test/components/Logo.test.tsx
import { render, screen } from '@testing-library/react';
import { describe, it, expect } from 'vitest';
import Logo from '../../components/Logo';

describe('Logo Component', () => {
  it('renders logo image and text correctly', () => {
    render(<Logo />);
    expect(screen.getByAltText('NutriHub Logo')).toBeInTheDocument();
    expect(screen.getByText('Nutri')).toBeInTheDocument();
    expect(screen.getByText('Hub')).toBeInTheDocument();
  });
});

6. Adding & Maintaining Selenium Tests

6.1 Standard Workflow

  1. Identify the journey: open an issue referencing the user story plus acceptance criteria.
  2. Update fixtures: extend backend seed scripts so deterministic data exists.
  3. Create the spec: scaffold src/test/selenium/<feature>.selenium.test.ts, import the relevant utilities, and write assertions that mirror the acceptance criteria.
  4. Run locally: npm test -- src/test/selenium/<feature>.selenium.test.ts.
  5. Capture artifacts on failure and attach them to the pull request for easier review.

6.2 Best Practices

  • Prefer id attributes over brittle CSS chains.
  • Keep each spec focused on one user journey; cross-feature dependencies inflate debugging time.
  • Use explicit waits (driver.wait(until.elementLocated(...))) instead of raw sleeps.
  • Always clean up sessions (afterAll) and reset backend state if the scenario mutates data.
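For conditions that until.* does not cover, a generic polling wrapper keeps the "no raw sleeps" rule workable. The sketch below (the `waitUntil` name and defaults are assumptions) retries an async predicate until it passes or the deadline elapses:

```typescript
// Hypothetical explicit-wait helper: polls an async condition instead of
// sleeping for a fixed duration, and fails loudly on timeout.
export async function waitUntil(
  condition: () => Promise<boolean>,
  timeoutMs = 5000,
  intervalMs = 200
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Condition not met within ${timeoutMs}ms`);
}
```

A spec might use it as `await waitUntil(async () => (await driver.getCurrentUrl()).includes('/dashboard'))`, which tolerates React re-renders without masking real latency problems the way fixed sleeps do.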

6.3 Contribution Checklist

  • Scenario linked to a tracked issue/user story.
  • Spec passes locally in headless mode.
  • Artifacts + logs included for reviewers when failures occur.

7. Quick Reference

  • Run Vitest fast, often; it guards ~70% of frontend logic.
  • Reserve Selenium for 5–10 high-value smoke paths that must work across the stack.
  • Keep CI lean by running Vitest on every push and Selenium on merge/nightly or behind a label.

This approach delivers practical automation coverage: Selenium proves that browser journeys succeed end-to-end, while Vitest keeps day-to-day development velocity high by catching logic regressions immediately.
