Comprehensive automated testing framework for DIGIT Studio Services - validates end-to-end service creation, configuration, application workflows, and checklist management.
This test suite automates the complete DIGIT Studio workflow from MDMS (Master Data Management Service) configuration to application processing through various workflow states. It includes both positive E2E tests and negative/security test scenarios.
Key Features:
- 21 E2E Tests: Complete workflow automation from service creation to application resolution
- Data-Driven Negative Tests: Security, boundary, and validation testing using configurable scenarios
- Automated Wait Handling: Smart 15-minute wait after service initialization
- HTML Reporting: Detailed test reports with execution summaries
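The automated wait handling noted above could be implemented in conftest.py roughly as follows. This is an illustrative sketch only; the function name and logging are assumptions, not the suite's actual code.

```python
# Hypothetical sketch of the post-initialization wait logic (names are illustrative).
import time

SERVICE_INIT_WAIT_SECONDS = 15 * 60  # backend needs ~15 minutes to process the new service


def wait_for_service_init(wait_seconds=SERVICE_INIT_WAIT_SECONDS, poll_interval=60):
    """Block until the wait window elapses, logging progress once per interval."""
    elapsed = 0
    while elapsed < wait_seconds:
        step = min(poll_interval, wait_seconds - elapsed)
        time.sleep(step)
        elapsed += step
        print(f"Waited {elapsed // 60} of {wait_seconds // 60} minutes...")
    return elapsed
```

A fixed sleep (rather than polling an endpoint) keeps the flow simple and matches the suite's "don't skip or reduce the wait" guidance.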
# 1. Install dependencies
pip install -r requirements.txt
# 2. Configure environment (create/edit .env file)
BASE_URL=https://unified-uat.digit.org
USERNAME=your_username
PASSWORD=your_password
TENANTID=st
USERTYPE=EMPLOYEE
CLIENT_AUTH_HEADER=Basic ZWdvdi11c2VyLWNsaWVudDo=
# 3. Run E2E tests
pytest test_e2e_flow.py -v -s --html=reports/e2e_report.html --self-contained-html
# 4. View results
# - Terminal: Console output
# - HTML Report: reports/e2e_report.html
# - Output Data: output/*.json

Duration: ~20-25 minutes (includes 15-minute wait for service initialization)
- Python: 3.10 or higher
- DIGIT Access: Valid credentials with roles STUDIO_ADMIN, MDMS_ADMIN, LME
- Network: Access to DIGIT API endpoints
- Environment: UAT/QA only (NOT production)
- This suite creates real data in DIGIT (services, applications, configurations)
- Tests take 20-25 minutes to complete
- Run only in UAT/QA environments
studio_automation_script/
├── test_e2e_flow.py # Main E2E orchestrator (21 tests)
├── conftest.py # Pytest configuration & wait logic
├── pytest.ini # Pytest settings
├── .env # Environment variables (credentials)
├── requirements.txt # Python dependencies
│
├── tests/ # Test modules
│ ├── test_studio_services.py # Service setup (draft, create, init)
│ ├── test_application.py # Application lifecycle
│ ├── test_checklist_create.py # Checklist submission
│ ├── test_data_driven.py # Negative/security tests
│ ├── test_*_search.py # Verification tests
│ └── ... # Other test modules
│
├── utils/ # Utility modules
│ ├── api_client.py # API client with auth
│ ├── config.py # Configuration loader
│ ├── data_loader.py # JSON payload loader
│ └── ...
│
├── payloads/ # JSON request templates
│ ├── mdms/ # MDMS payloads
│ ├── Application/ # Application payloads
│ ├── public_service/ # Public service payloads
│ └── checklist/ # Checklist payloads
│
├── output/ # Runtime generated files
│ ├── mdms_response.json
│ ├── application_response.json
│ └── ...
│
└── reports/ # HTML test reports
└── e2e_report.html
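Because later tests chain on files written by earlier ones, a payload/output helper like utils/data_loader.py is central to the layout above. The sketch below shows one plausible shape; the function names are assumptions, not the module's actual API.

```python
# Hypothetical sketch of utils/data_loader.py -- actual helper names may differ.
import json
from pathlib import Path

PAYLOAD_DIR = Path("payloads")
OUTPUT_DIR = Path("output")


def load_payload(category: str, name: str) -> dict:
    """Load a JSON request template, e.g. load_payload('mdms', 'mdms_draft_create.json')."""
    path = PAYLOAD_DIR / category / name
    with path.open(encoding="utf-8") as fh:
        return json.load(fh)


def save_output(name: str, data: dict) -> None:
    """Persist a response under output/ so later tests can chain on it."""
    out = OUTPUT_DIR / name
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(data, indent=2), encoding="utf-8")
```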
Runs all 21 tests sequentially with automatic 15-minute wait:
pytest test_e2e_flow.py -v -s --html=reports/e2e_report.html --self-contained-html

What happens:
- Phase 1: Service Setup (3 tests)
- [15-minute automatic wait]
- Phase 2: Verification (7 tests)
- Phase 3: Application Flow (8 tests)
- Phase 4: Final Verification (3 tests)
Run entire test modules by category:
# Service Setup
pytest tests/test_studio_services.py -v -s # MDMS draft, service create, public service init
# Application Lifecycle
pytest tests/test_application.py -v -s # Create, assign, resolve application
# Checklist Tests
pytest tests/test_checklist_create.py -v -s # Checklist submission
pytest tests/test_checklist_search.py -v -s # Checklist verification
# Verification/Search Tests (run after 15-min wait)
pytest tests/test_actions_roleactions_search.py -v -s # Actions & role-actions
pytest tests/test_roles_search.py -v -s # Roles verification
pytest tests/test_workflow_search.py -v -s # Workflow validation
pytest tests/test_idgen_search.py -v -s # ID generation formats
pytest tests/test_localization_search.py -v -s # Localization keys
# Data Search Tests
pytest tests/test_individual_search.py -v -s # Individual/applicant search
pytest tests/test_process_instance_search.py -v -s # Process instance tracking
pytest tests/test_application_search.py -v -s # Application search
pytest tests/test_inbox_search.py -v -s # Inbox verification
# All verification tests at once
pytest tests/test_*_search.py -v -s

Run negative scenarios using test_data_driven.py:
# All negative tests
pytest tests/test_data_driven.py -v -s
# Run specific negative test suites individually
pytest tests/test_data_driven.py::TestMDMSDraftNegative -v -s # MDMS draft negative tests
pytest tests/test_data_driven.py::TestMDMSServiceCreateNegative -v -s # MDMS service create negative tests
pytest tests/test_data_driven.py::TestPublicServiceInitNegative -v -s # Public service init negative tests
pytest tests/test_data_driven.py::TestAuthenticationNegative -v -s # Authentication negative tests
pytest tests/test_data_driven.py::TestApplicationNegative -v -s # Application negative tests
pytest tests/test_data_driven.py::TestWorkflowNegative -v -s # Workflow negative tests
pytest tests/test_data_driven.py::TestChecklistNegative -v -s # Checklist negative tests
pytest tests/test_data_driven.py::TestSearchNegative -v -s # Search negative tests
pytest tests/test_data_driven.py::TestSecurityScenarios -v -s # Security scenarios (XSS, SQL injection)
pytest tests/test_data_driven.py::TestAllNegativeScenarios -v -s # Summary of all scenarios

Note: Negative tests require a test_scenarios_config.json file with test scenarios.
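The negative suites read their scenarios from test_scenarios_config.json. One plausible shape is shown below; the key and field names are illustrative assumptions, not the file's actual schema.

```json
{
  "application_negative": [
    {
      "name": "invalid_mobile_number",
      "description": "Mobile number shorter than 10 digits should be rejected",
      "overrides": { "mobileNumber": "12345" },
      "expected_status": 400
    },
    {
      "name": "missing_applicant_name",
      "description": "Application without applicant name should fail validation",
      "overrides": { "applicantName": null },
      "expected_status": 400
    }
  ]
}
```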
Run specific E2E tests from the main flow:
# Phase 1: Service Setup
pytest test_e2e_flow.py::test_01_mdms_draft_create -v -s
pytest test_e2e_flow.py::test_02_mdms_service_create -v -s
pytest test_e2e_flow.py::test_03_public_service_init -v -s
# Phase 2: Verification Tests
pytest test_e2e_flow.py::test_04_actions_search -v -s
pytest test_e2e_flow.py::test_05_roleactions_search -v -s
pytest test_e2e_flow.py::test_06_checklist_search -v -s
pytest test_e2e_flow.py::test_07_idgen_search -v -s
pytest test_e2e_flow.py::test_08_localization_search -v -s
pytest test_e2e_flow.py::test_09_roles_search -v -s
pytest test_e2e_flow.py::test_10_workflow_validate -v -s
# Phase 3: Application Flow
pytest test_e2e_flow.py::test_11_application_create -v -s
pytest test_e2e_flow.py::test_12_checklist_create_all_and_submit -v -s
pytest test_e2e_flow.py::test_13_individual_search -v -s
pytest test_e2e_flow.py::test_14_process_instance_after_create -v -s
pytest test_e2e_flow.py::test_15_application_assign -v -s
pytest test_e2e_flow.py::test_16_process_instance_after_assign -v -s
pytest test_e2e_flow.py::test_17_application_resolve -v -s
pytest test_e2e_flow.py::test_18_process_instance_after_resolve -v -s
# Phase 4: Final Verification
pytest test_e2e_flow.py::test_19_application_search -v -s
pytest test_e2e_flow.py::test_20_application_search_by_service_code -v -s
pytest test_e2e_flow.py::test_21_inbox_search -v -s

Note: Individual E2E tests may fail if prerequisites haven't run. Always run the full E2E flow first.
Run tests from test_studio_services.py:
# All service tests
pytest tests/test_studio_services.py -v -s
# Individual service tests
pytest tests/test_studio_services.py::test_mdms_draft_create -v -s
pytest tests/test_studio_services.py::test_mdms_service_create -v -s
pytest tests/test_studio_services.py::test_public_service_init -v -s
pytest tests/test_studio_services.py::test_complete_studio_setup -v -s # Runs all 3 in sequence

Run tests from test_application.py:
# All application tests
pytest tests/test_application.py -v -s
# Individual application tests
pytest tests/test_application.py::test_application_create -v -s
pytest tests/test_application.py::test_application_assign -v -s
pytest tests/test_application.py::test_application_resolve -v -s
pytest tests/test_application.py::test_complete_application_flow -v -s # Runs all 3 in sequence

Run individual search/verification tests:
# Actions & Role-Actions
pytest tests/test_actions_roleactions_search.py::test_actions_search -v -s
pytest tests/test_actions_roleactions_search.py::test_roleactions_search -v -s
# Checklist
pytest tests/test_checklist_search.py::test_checklist_search -v -s
pytest tests/test_checklist_create.py::test_checklist_for_current_state -v -s
pytest tests/test_checklist_create.py::test_checklist_create_all_and_submit -v -s
# Roles & Workflow
pytest tests/test_roles_search.py::test_roles_search -v -s
pytest tests/test_workflow_search.py::test_workflow_validate -v -s
# ID Generation & Localization
pytest tests/test_idgen_search.py::test_idgen_search -v -s
pytest tests/test_localization_search.py::test_localization_search -v -s
# Individual & Process Instance
pytest tests/test_individual_search.py::test_individual_search -v -s
pytest tests/test_process_instance_search.py::test_process_instance_after_create -v -s
pytest tests/test_process_instance_search.py::test_process_instance_after_assign -v -s
pytest tests/test_process_instance_search.py::test_process_instance_after_resolve -v -s
# Application Search & Inbox
pytest tests/test_application_search.py::test_application_search -v -s
pytest tests/test_application_search.py::test_application_search_by_service_code -v -s
pytest tests/test_inbox_search.py::test_inbox_search -v -s

Use pytest patterns to run multiple related tests:
# Run all search tests
pytest tests/test_*_search.py -v -s
# Run tests matching keyword
pytest -k "search" -v -s # All tests with "search" in name
pytest -k "application" -v -s # All tests with "application" in name
pytest -k "checklist" -v -s # All tests with "checklist" in name
pytest -k "negative" -v -s # All negative tests
# Run tests from specific directory
pytest tests/ -v -s # All tests in tests directory

PHASE 1: SERVICE SETUP
├── test_01: Create MDMS draft
├── test_02: Publish service configuration
└── test_03: Initialize public service
│
▼
[15-MINUTE WAIT]
│
▼
PHASE 2: VERIFY SERVICE SETUP
├── test_04: Verify actions
├── test_05: Verify role-actions
├── test_06: Verify checklists
├── test_07: Verify ID generation
├── test_08: Verify localization
├── test_09: Verify roles
└── test_10: Validate workflow
│
▼
PHASE 3: APPLICATION FLOW
├── test_11: Create application [APPLIED → PENDING_FOR_ASSIGNMENT]
├── test_12: Submit checklists
├── test_13: Verify individual
├── test_14: Verify process instance (create)
├── test_15: Assign application [PENDING_FOR_ASSIGNMENT → PENDING_AT_LME]
├── test_16: Verify process instance (assign)
├── test_17: Resolve application [PENDING_AT_LME → RESOLVED]
└── test_18: Verify process instance (resolve)
│
▼
PHASE 4: FINAL VERIFICATION
├── test_19: Search by application number
├── test_20: Search by service code
└── test_21: Verify inbox
Using test_data_driven.py:
- MDMS Draft: Invalid payloads, missing fields, XSS attempts
- Service Create: Duplicate services, malformed configurations
- Public Service Init: Missing data, invalid tenant
- Application: Invalid mobile, missing name, wrong service code
- Workflow: Invalid state transitions, unauthorized actions
- Checklist: Missing required fields, invalid data types
- Security: XSS, SQL injection, script injection attempts
- Authentication: Invalid tokens, missing auth headers
# DIGIT Platform
BASE_URL=https://unified-uat.digit.org
TENANTID=st
# Authentication
USERNAME=SUPERUSER
PASSWORD=eGov@123
USERTYPE=EMPLOYEE
CLIENT_AUTH_HEADER=Basic ZWdvdi11c2VyLWNsaWVudDo=
# Search Parameters
SEARCH_LIMIT=200
SEARCH_OFFSET=0

Edit payloads/mdms/mdms_service_create.json to customize:
- Workflow states and transitions
- Checklist definitions
- Role-action mappings
- ID generation formats
- Localization keys
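For orientation, the workflow portion of that payload might look something like the fragment below. The field names are illustrative assumptions, not the actual MDMS schema; the state names and roles match those used elsewhere in this suite.

```json
{
  "workflow": {
    "states": ["APPLIED", "PENDING_FOR_ASSIGNMENT", "PENDING_AT_LME", "RESOLVED"],
    "transitions": [
      { "action": "ASSIGN", "from": "PENDING_FOR_ASSIGNMENT", "to": "PENDING_AT_LME", "roles": ["STUDIO_ADMIN"] },
      { "action": "RESOLVE", "from": "PENDING_AT_LME", "to": "RESOLVED", "roles": ["LME"] }
    ]
  }
}
```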
Test execution generates the following files:
output/mdms_response.json
{
"module": "ModuleXYZ",
"service": "ServiceABC",
"service_code": "ModuleXYZ-ServiceABC-svc-2026-01-22-001",
"unique_id": "abc-123-def-456",
"status": "created"
}

output/application_response.json
{
"application_id": "app-uuid-123",
"application_number": "ModuleXYZ-ServiceABC-app-2026-01-22-001",
"workflow_status": "RESOLVED",
"mobile_number": "9876543210"
}

reports/e2e_report.html
- Comprehensive HTML report with test results
- Pass/fail status with color coding
- Execution time and detailed outputs
- Summary of output data
Authentication Errors (401)
# Verify credentials
cat .env | grep USERNAME
cat .env | grep PASSWORD
# Ensure user has required roles: STUDIO_ADMIN, MDMS_ADMIN, LME

Service Initialization Failed
- Wait full 15 minutes after service initialization
- Check if service already exists (unique constraint)
- Verify tenant ID matches
Tests Fail When Run Individually
- Always use the full E2E flow: pytest test_e2e_flow.py
- Tests depend on output from previous tests
- Check that the output/*.json files exist
MDMS Duplicate Service Error
# Use unique service names in payloads/mdms/mdms_draft_create.json
# Change "service": "ServiceABC" to "service": "ServiceABC_v2"

Application State Transition Failed
- Verify workflow configuration
- Ensure user has required role for action
- Check current application state matches expected
# Maximum verbosity
pytest test_e2e_flow.py -vv -s --tb=long
# Specific test with debugging
pytest tests/test_application.py::test_application_create -vv -s --tb=long

| Test Count | Category | Description |
|---|---|---|
| 3 | Service Setup | MDMS draft, service create, public service init |
| 7 | Verification | Actions, roles, workflow, checklists, localization, ID gen |
| 8 | Application | Create, assign, resolve + process instance tracking |
| 3 | Search | Application search, inbox verification |
| 21 | Total | Complete E2E Flow |
| Test Suite | Purpose |
|---|---|
| `TestMDMSDraftNegative` | MDMS draft validation failures |
| `TestMDMSServiceCreateNegative` | Service create validation failures |
| `TestPublicServiceInitNegative` | Public service init failures |
| `TestAuthenticationNegative` | Authentication failures |
| `TestApplicationNegative` | Application creation failures |
| `TestWorkflowNegative` | Invalid workflow transitions |
| `TestChecklistNegative` | Checklist submission failures |
| `TestSearchNegative` | Invalid search parameters |
| `TestSecurityScenarios` | XSS, SQL injection, script injection |
| `TestAllNegativeScenarios` | Summary of all scenarios |
Run commands: See Section 3: Negative/Security Tests above for detailed commands.
| Module | Test Count | Description |
|---|---|---|
| `test_studio_services.py` | 4 | MDMS draft, service create, public service init, complete setup |
| `test_application.py` | 4 | Application create, assign, resolve, complete flow |
| `test_checklist_create.py` | 2 | Checklist submission for current/all states |
| `test_checklist_search.py` | 2 | Checklist search and validation |
| `test_actions_roleactions_search.py` | 2 | Actions and role-actions verification |
| `test_roles_search.py` | 1 | Roles verification |
| `test_workflow_search.py` | 1 | Workflow validation |
| `test_idgen_search.py` | 1 | ID generation format verification |
| `test_localization_search.py` | 1 | Localization verification |
| `test_individual_search.py` | 1 | Individual search |
| `test_process_instance_search.py` | 3 | Process instance tracking (create/assign/resolve) |
| `test_application_search.py` | 2 | Application search (by number, by service code) |
| `test_inbox_search.py` | 1 | Inbox verification |
Run commands: See Section 2: Test Modules, Section 5: Service Tests, Section 6: Application Tests, and Section 7: Verification Tests for detailed commands.
- Always use the E2E flow for complete testing:
  pytest test_e2e_flow.py -v -s --html=reports/e2e_report.html --self-contained-html
- Clean the output directory before new runs:
  rm -f output/*.json output/*.txt
- Use unique service names to avoid conflicts: modify payloads/mdms/mdms_draft_create.json and change the service name for each test run
- Respect the 15-minute wait: don't skip or reduce it; the backend needs time to process
- Review output files after each phase:
  cat output/mdms_response.json | jq .
  cat output/application_response.json | jq .
# ========== RECOMMENDED: Full E2E Test ==========
pytest test_e2e_flow.py -v -s --html=reports/e2e_report.html --self-contained-html
# ========== Run by Phase ==========
# Phase 1: Service Setup
pytest tests/test_studio_services.py -v -s
# Phase 2: Verification (after 15-min wait)
pytest tests/test_*_search.py -v -s
# Phase 3: Application Flow
pytest tests/test_application.py -v -s
pytest tests/test_checklist_create.py -v -s
# ========== Negative Tests ==========
# All negative tests
pytest tests/test_data_driven.py -v -s
# Specific negative test suite
pytest tests/test_data_driven.py::TestApplicationNegative -v -s
pytest tests/test_data_driven.py::TestSecurityScenarios -v -s
# ========== Individual E2E Tests ==========
pytest test_e2e_flow.py::test_01_mdms_draft_create -v -s
pytest test_e2e_flow.py::test_11_application_create -v -s
pytest test_e2e_flow.py::test_21_inbox_search -v -s
# ========== Clean & Rerun ==========
rm -f output/*.json output/*.txt
pytest test_e2e_flow.py -v -s --html=reports/e2e_report.html --self-contained-html
# ========== Debug Mode ==========
pytest test_e2e_flow.py -vv -s --tb=long
pytest tests/test_application.py::test_application_create -vv -s --tb=long
# ========== View Results ==========
cat output/mdms_response.json | jq .
cat output/application_response.json | jq .
open reports/e2e_report.html # macOS
xdg-open reports/e2e_report.html # Linux