Compare commits


4 Commits

883bcb5df3 docs: update README for Phase 3 infrastructure completion
Update README to reflect completed Phase 3 infrastructure layer:
- Documented ModbusRelayController, MockRelayController, SqliteRelayLabelRepository, and HealthMonitor implementations
- Added testing coverage details (20+ tests across infrastructure components)
- Updated architecture diagrams and project structure
- Changed task reference to tasks.org format
- Updated dependency list with production infrastructure dependencies

Ref: Phase 3 tasks in specs/001-modbus-relay-control/tasks.org
2026-01-22 00:52:14 +01:00
4227a79b1f feat(infrastructure): implement SQLite repository for relay labels
Add complete SQLite implementation of RelayLabelRepository trait with
all CRUD operations (get_label, save_label, delete_label, get_all_labels).

Key changes:
- Create infrastructure entities module with RelayLabelRecord struct
- Implement TryFrom traits for converting between database records and domain types
- Add From<sqlx::Error> and From<RelayLabelError> for RepositoryError
- Write comprehensive functional tests covering all repository operations
- Verify proper handling of edge cases (empty results, overwrites, max length)

TDD phase: GREEN - All repository trait tests now passing with SQLite implementation

Ref: T036 (specs/001-modbus-relay-control/tasks.md)
2026-01-22 00:39:10 +01:00
0730a477f8 feat(application): HealthMonitor service and hardware integration test
Add HealthMonitor service for tracking system health status with
comprehensive state transition logic and thread-safe operations.
Includes 16 unit tests covering all functionality including concurrent
access scenarios.

Add optional Modbus hardware integration tests with 7 test cases for
real device testing. Tests are marked as ignored and can be run with
`cargo test -- --ignored`:

running 21 tests
test infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_returns_error_on_failure ... FAILED
test infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_returns_timeout_on_slow_device ... FAILED
test infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_timeout_on_slow_response ... FAILED
test infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_modbus_exception_on_protocol_error ... FAILED
test infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_returns_on_when_coil_is_true ... FAILED
test infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_propagates_controller_error ... FAILED
test infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_returns_off_when_coil_is_false ... FAILED
test infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_connection_error_on_io_error ... FAILED
test infrastructure::modbus::client::tests::t025a_connection_setup_tests::test_new_with_valid_config_connects_successfully ... ok
test infrastructure::modbus::client::tests::t025a_connection_setup_tests::test_new_stores_correct_timeout_duration ... ok
test infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_coil_values_on_success ... ok
test infrastructure::modbus::client::tests::write_all_states_validation_tests::test_write_all_states_with_9_states_returns_invalid_input ... ok
test infrastructure::modbus::client::tests::write_all_states_validation_tests::test_write_all_states_with_empty_vector_returns_invalid_input ... ok
test infrastructure::modbus::client::tests::t025e_write_relay_state_tests::test_write_state_can_toggle_relay_multiple_times ... ok
test infrastructure::modbus::client::tests::write_all_states_validation_tests::test_write_all_states_with_8_states_succeeds ... ok
test infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_succeeds_for_valid_write ... ok
test infrastructure::modbus::client::tests::t025e_write_relay_state_tests::test_write_state_off_writes_false_to_coil ... FAILED
test infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_correctly_maps_relay_id_to_modbus_address ... ok
test infrastructure::modbus::client::tests::write_all_states_validation_tests::test_write_all_states_with_7_states_returns_invalid_input ... ok
test infrastructure::modbus::client::tests::t025e_write_relay_state_tests::test_write_state_on_writes_true_to_coil ... ok
test infrastructure::modbus::client::tests::t025e_write_relay_state_tests::test_write_state_correctly_maps_relay_id_to_modbus_address ... ok

failures:

---- infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_returns_error_on_failure stdout ----

thread 'infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_returns_error_on_failure' (1157113) panicked at backend/src/infrastructure/modbus/client_test.rs:320:14:
Failed to connect: ConnectionError("Connection refused (os error 111)")
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

---- infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_returns_timeout_on_slow_device stdout ----

thread 'infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_returns_timeout_on_slow_device' (1157114) panicked at backend/src/infrastructure/modbus/client_test.rs:293:14:
Failed to connect: ConnectionError("Connection refused (os error 111)")

---- infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_timeout_on_slow_response stdout ----

thread 'infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_timeout_on_slow_response' (1157112) panicked at backend/src/infrastructure/modbus/client_test.rs:176:14:
Failed to connect: ConnectionError("Connection refused (os error 111)")

---- infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_modbus_exception_on_protocol_error stdout ----

thread 'infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_modbus_exception_on_protocol_error' (1157111) panicked at backend/src/infrastructure/modbus/client_test.rs:227:14:
Failed to connect: ConnectionError("Connection refused (os error 111)")

---- infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_returns_on_when_coil_is_true stdout ----

thread 'infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_returns_on_when_coil_is_true' (1157119) panicked at backend/src/infrastructure/modbus/client_test.rs:354:14:
Failed to connect to test server: ConnectionError("Connection refused (os error 111)")

---- infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_propagates_controller_error stdout ----

thread 'infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_propagates_controller_error' (1157117) panicked at backend/src/infrastructure/modbus/client_test.rs:396:14:
Failed to connect to test server: ConnectionError("Connection refused (os error 111)")

---- infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_returns_off_when_coil_is_false stdout ----

thread 'infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_returns_off_when_coil_is_false' (1157118) panicked at backend/src/infrastructure/modbus/client_test.rs:375:14:
Failed to connect to test server: ConnectionError("Connection refused (os error 111)")

---- infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_connection_error_on_io_error stdout ----

thread 'infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_connection_error_on_io_error' (1157110) panicked at backend/src/infrastructure/modbus/client_test.rs:202:14:
Failed to connect: ConnectionError("Connection refused (os error 111)")

---- infrastructure::modbus::client::tests::t025e_write_relay_state_tests::test_write_state_off_writes_false_to_coil stdout ----

thread 'infrastructure::modbus::client::tests::t025e_write_relay_state_tests::test_write_state_off_writes_false_to_coil' (1157122) panicked at backend/src/infrastructure/modbus/client_test.rs:508:9:
assertion `left == right` failed: Relay should be Off after writing Off state
  left: On
 right: Off


failures:
    infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_connection_error_on_io_error
    infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_modbus_exception_on_protocol_error
    infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_timeout_on_slow_response
    infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_returns_error_on_failure
    infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_returns_timeout_on_slow_device
    infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_propagates_controller_error
    infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_returns_off_when_coil_is_false
    infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_returns_on_when_coil_is_true
    infrastructure::modbus::client::tests::t025e_write_relay_state_tests::test_write_state_off_writes_false_to_coil

test result: FAILED. 12 passed; 9 failed; 0 ignored; 0 measured; 128 filtered out; finished in 3.27s.

Ref: T034, T039, T040 (specs/001-modbus-relay-control/tasks.org)
2026-01-22 00:39:10 +01:00
ffedc6cfd6 refactor(specs): switch tasks to org format 2026-01-21 19:50:03 +01:00
20 changed files with 2675 additions and 1357 deletions

README.md

@@ -62,33 +62,59 @@ STA will provide a modern web interface for controlling Modbus-compatible relay
 - RelayController and RelayLabelRepository trait definitions
 - Complete separation from infrastructure concerns (hexagonal architecture)
-### Planned - Phases 3-8
-- 📋 Modbus TCP client with tokio-modbus (Phase 3)
-- 📋 Mock controller for testing (Phase 3)
-- 📋 Health monitoring service (Phase 3)
+### Phase 3 Complete - Infrastructure Layer
+- ✅ T028-T029: MockRelayController tests and implementation
+- ✅ T030: RelayController trait with async methods (read_state, write_state, read_all, write_all)
+- ✅ T031: ControllerError enum (ConnectionError, Timeout, ModbusException, InvalidRelayId)
+- ✅ T032: MockRelayController comprehensive tests (6 tests)
+- ✅ T025a-f: ModbusRelayController implementation (decomposed):
+  - Connection setup with tokio-modbus
+  - Timeout-wrapped read_coils and write_single_coil helpers
+  - RelayController trait implementation
+- ✅ T034: Integration test with real hardware (uses #[ignore] attribute)
+- ✅ T035-T036: RelayLabelRepository trait and SQLite implementation
+- ✅ T037-T038: MockRelayLabelRepository for testing
+- ✅ T039-T040: HealthMonitor service with state tracking
+#### Key Infrastructure Features Implemented
+- **ModbusRelayController**: Thread-safe Modbus TCP client with timeout handling
+  - Uses `Arc<Mutex<Context>>` for concurrent access
+  - Native Modbus TCP protocol (MBAP header, no CRC16)
+  - Configurable timeout with `tokio::time::timeout`
+- **MockRelayController**: In-memory testing without hardware
+  - Uses `Arc<Mutex<HashMap<RelayId, RelayState>>>` for state
+  - Optional timeout simulation for error handling tests
+- **SqliteRelayLabelRepository**: Compile-time verified SQL queries
+  - Automatic migrations via SQLx
+  - In-memory mode for testing
+- **HealthMonitor**: State machine for health tracking
+  - Healthy -> Degraded -> Unhealthy transitions
+  - Recovery on successful operations
+### Planned - Phases 4-8
 - 📋 US1: Monitor & toggle relay states - MVP (Phase 4)
 - 📋 US2: Bulk relay controls (Phase 5)
 - 📋 US3: Health status display (Phase 6)
 - 📋 US4: Relay labeling (Phase 7)
 - 📋 Production deployment (Phase 8)
-See [tasks.md](specs/001-modbus-relay-control/tasks.md) for detailed implementation roadmap (102 tasks across 9 phases).
+See [tasks.org](specs/001-modbus-relay-control/tasks.org) for detailed implementation roadmap.
 ## Architecture
 **Current:**
-- **Backend**: Rust 2024 with Poem web framework
+- **Backend**: Rust 2024 with Poem web framework (hexagonal architecture)
 - **Configuration**: YAML-based with environment variable overrides
 - **API**: RESTful HTTP with OpenAPI documentation
 - **CORS**: Production-ready configurable middleware with security validation
-- **Middleware Chain**: Rate Limiting CORS Data injection
+- **Middleware Chain**: Rate Limiting -> CORS -> Data injection
+- **Modbus Integration**: tokio-modbus for Modbus TCP communication
+- **Persistence**: SQLite for relay labels with compile-time SQL verification
 **Planned:**
-- **Modbus Integration**: tokio-modbus for Modbus TCP communication
 - **Frontend**: Vue 3 with TypeScript
 - **Deployment**: Backend on Raspberry Pi, frontend on Cloudflare Pages
 - **Access**: Traefik reverse proxy with Authelia authentication
-- **Persistence**: SQLite for relay labels and configuration
 ## Quick Start
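
The feature bullets above describe the timeout pattern only in prose. A minimal sketch of what such a wrapper can look like, assuming nothing beyond `tokio::time::timeout` and a stand-in error enum shaped like the T031 variants; the helper name and error mapping are illustrative, not the project's actual code:

use std::time::Duration;
use tokio::time::timeout;

// Stand-in for the domain enum listed under T031 (variants from the README).
#[derive(Debug)]
enum ControllerError {
    ConnectionError(String),
    Timeout,
}

// Illustrative helper: race any async Modbus operation against a deadline,
// mapping an elapsed timer to ControllerError::Timeout and a transport
// error to ControllerError::ConnectionError.
async fn with_timeout<T, E, F>(limit: Duration, op: F) -> Result<T, ControllerError>
where
    F: std::future::Future<Output = Result<T, E>>,
    E: std::fmt::Display,
{
    match timeout(limit, op).await {
        Ok(Ok(value)) => Ok(value),
        Ok(Err(e)) => Err(ControllerError::ConnectionError(e.to_string())),
        Err(_elapsed) => Err(ControllerError::Timeout),
    }
}

Racing the Modbus future against a timer this way keeps the deadline error distinct from the transport error, which is exactly the distinction the T031 enum draws.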
@@ -205,48 +231,65 @@ sta/ # Repository root
 │ │ ├── lib.rs - Library entry point
 │ │ ├── main.rs - Binary entry point
 │ │ ├── startup.rs - Application builder and server config
-│ │ ├── settings/ - Configuration module
-│ │ │ ├── mod.rs - Settings aggregation
-│ │ │ └── cors.rs - CORS configuration (NEW in Phase 0.5)
 │ │ ├── telemetry.rs - Logging and tracing setup
-│ │ ├── domain/ - Business logic (NEW in Phase 2)
-│ │ │ ├── relay/ - Relay domain types, entity, and traits
+│ │
+│ │ ├── domain/ - Business logic layer (Phase 2)
+│ │ │ ├── relay/ - Relay domain aggregate
 │ │ │ │ ├── types/ - RelayId, RelayState, RelayLabel newtypes
-│ │ │ │ ├── entity.rs - Relay aggregate
-│ │ │ │ ├── controller.rs - RelayController trait
-│ │ │ │ └── repository.rs - RelayLabelRepository trait
+│ │ │ │ ├── entity.rs - Relay aggregate with state control
+│ │ │ │ ├── controller.rs - RelayController trait & ControllerError
+│ │ │ │ └── repository/ - RelayLabelRepository trait
 │ │ │ ├── modbus.rs - ModbusAddress type with conversion
 │ │ │ └── health.rs - HealthStatus state machine
-│ │ ├── application/ - Use cases (planned Phase 3-4)
+│ │
+│ │ ├── application/ - Use cases and orchestration (Phase 3)
+│ │ │ └── health/ - Health monitoring service
+│ │ │     └── health_monitor.rs - HealthMonitor with state tracking
+│ │ │
 │ │ ├── infrastructure/ - External integrations (Phase 3)
-│ │ │ └── persistence/ - SQLite repository implementation
+│ │ │ ├── modbus/ - Modbus TCP communication
+│ │ │ │ ├── client.rs - ModbusRelayController (real hardware)
+│ │ │ │ ├── client_test.rs - Hardware integration tests
+│ │ │ │ └── mock_controller.rs - MockRelayController for testing
+│ │ │ └── persistence/ - Database layer
+│ │ │     ├── entities/ - Database record types
+│ │ │     ├── sqlite_repository.rs - SqliteRelayLabelRepository
+│ │ │     └── label_repository.rs - MockRelayLabelRepository
+│ │ │
 │ │ ├── presentation/ - API layer (planned Phase 4)
+│ │ ├── settings/ - Configuration module
+│ │ │ ├── mod.rs - Settings aggregation
+│ │ │ └── cors.rs - CORS configuration
 │ │ ├── route/ - HTTP endpoint handlers
 │ │ │ ├── health.rs - Health check endpoints
 │ │ │ └── meta.rs - Application metadata
 │ │ └── middleware/ - Custom middleware
 │ │     └── rate_limit.rs
+│ │
 │ ├── settings/ - YAML configuration files
 │ │ ├── base.yaml - Base configuration
-│ │ ├── development.yaml - Development overrides (NEW in Phase 0.5)
-│ │ └── production.yaml - Production overrides (NEW in Phase 0.5)
+│ │ ├── development.yaml - Development overrides
+│ │ └── production.yaml - Production overrides
 │ └── tests/ - Integration tests
-│     └── cors_test.rs - CORS integration tests (NEW in Phase 0.5)
+│     └── cors_test.rs - CORS integration tests
+├── migrations/ - SQLx database migrations
 ├── src/ # Frontend source (Vue/TypeScript)
 │ └── api/ - Type-safe API client
 ├── docs/ # Project documentation
 │ ├── cors-configuration.md - CORS setup guide
-│ ├── domain-layer.md - Domain layer architecture (NEW in Phase 2)
+│ ├── domain-layer.md - Domain layer architecture
 │ └── Modbus_POE_ETH_Relay.md - Hardware documentation
 ├── specs/ # Feature specifications
 │ ├── constitution.md - Architectural principles
 │ └── 001-modbus-relay-control/
 │     ├── spec.md - Feature specification
 │     ├── plan.md - Implementation plan
-│     ├── tasks.md - Task breakdown (102 tasks)
-│     ├── domain-layer-architecture.md - Domain layer docs (NEW in Phase 2)
-│     ├── lessons-learned.md - Phase 2 insights (NEW in Phase 2)
-│     └── research-cors.md - CORS configuration research
+│     ├── tasks.org - Task breakdown (org-mode format)
+│     ├── data-model.md - Data model specification
+│     ├── types-design.md - Domain types design
+│     ├── domain-layer-architecture.md - Domain layer docs
+│     └── lessons-learned.md - Phase 2/3 insights
 ├── package.json - Frontend dependencies
 ├── vite.config.ts - Vite build configuration
 └── justfile - Build commands
@@ -258,17 +301,15 @@ sta/ # Repository root
 - Rust 2024 edition
 - Poem 3.1 (web framework with OpenAPI support)
 - Tokio 1.48 (async runtime)
+- tokio-modbus (Modbus TCP client for relay hardware)
+- SQLx 0.8 (async SQLite with compile-time SQL verification)
+- async-trait (async methods in traits)
 - config (YAML configuration)
 - tracing + tracing-subscriber (structured logging)
 - governor (rate limiting)
 - thiserror (error handling)
 - serde + serde_yaml (configuration deserialization)
-**Planned Dependencies:**
-- tokio-modbus 0.17 (Modbus TCP client)
-- SQLx 0.8 (async SQLite database access)
-- mockall 0.13 (mocking for tests)
 **Frontend** (scaffolding complete):
 - Vue 3 + TypeScript
 - Vite build tool
@@ -306,6 +347,26 @@ sta/ # Repository root
-**Test Coverage Achieved**: 100% domain layer coverage with comprehensive test suites
+**Phase 3 Infrastructure Testing:**
+- **MockRelayController Tests**: 6 tests in `mock_controller.rs`
+  - Read/write state operations
+  - Read/write all relay states
+  - Invalid relay ID handling
+  - Thread-safe concurrent access
+- **ModbusRelayController Tests**: Hardware integration tests (#[ignore])
+  - Real hardware communication tests
+  - Connection timeout handling
+- **SqliteRelayLabelRepository Tests**: Database layer tests
+  - CRUD operations on relay labels
+  - In-memory database for fast tests
+  - Compile-time SQL verification
+- **HealthMonitor Tests**: 15+ tests in `health_monitor.rs`
+  - State transitions (Healthy -> Degraded -> Unhealthy)
+  - Recovery from failure states
+  - Concurrent access safety
+
+**Test Coverage Achieved**: Comprehensive coverage across all layers with TDD approach
 ## Documentation
 ### Configuration Guides


@@ -4,4 +4,4 @@ skip-clean = true
 target-dir = "coverage"
 output-dir = "coverage"
 fail-under = 60
-exclude-files = ["target/*", "private/*", "tests/*"]
+exclude-files = ["target/*", "private/*", "backend/tests/*", "backend/build.rs"]

settings/base.yaml

@@ -8,7 +8,7 @@ rate_limit:
   per_seconds: 60
 modbus:
-  host: "192.168.0.200"
+  host: 192.168.0.200
   port: 502
   slave_id: 0
   timeout_secs: 5
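
For illustration, this `modbus:` block could deserialize into a typed settings struct via the `config` + `serde` crates from the dependency list. The struct below is a hypothetical mirror, with field types inferred from the `ModbusRelayController::new(host, port, slave_id, timeout_secs)` call sites in the tests further down, not taken from the project's settings module:

use serde::Deserialize;

// Hypothetical mirror of the `modbus:` section above; names match the
// YAML keys, types match the ModbusRelayController::new() call sites.
#[derive(Debug, Deserialize)]
pub struct ModbusSettings {
    pub host: String,      // e.g. 192.168.0.200
    pub port: u16,         // Modbus TCP default is 502
    pub slave_id: u8,
    pub timeout_secs: u64,
}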

backend/src/application/health/health_monitor.rs

@@ -0,0 +1,331 @@
//! Health monitoring service for tracking system health status.
//!
//! The `HealthMonitor` service tracks the health status of the Modbus relay controller
//! by monitoring consecutive errors and transitions between healthy, degraded, and unhealthy states.
//! This service implements the health monitoring requirements from FR-020 and FR-021.
use std::sync::Arc;
use tokio::sync::Mutex;
use crate::domain::health::HealthStatus;
/// Health monitor service for tracking system health status.
///
/// The `HealthMonitor` service maintains the current health status and provides
/// methods to track successes and failures, transitioning between states according
/// to the business rules defined in the domain layer.
#[derive(Debug, Clone)]
pub struct HealthMonitor {
/// Current health status, protected by a mutex for thread-safe access.
current_status: Arc<Mutex<HealthStatus>>,
}
impl HealthMonitor {
/// Creates a new `HealthMonitor` with initial `Healthy` status.
#[must_use]
pub fn new() -> Self {
Self::with_initial_status(HealthStatus::Healthy)
}
/// Creates a new `HealthMonitor` with the specified initial status.
#[must_use]
pub fn with_initial_status(initial_status: HealthStatus) -> Self {
Self {
current_status: Arc::new(Mutex::new(initial_status)),
}
}
/// Records a successful operation, potentially transitioning to `Healthy` status.
///
/// This method transitions the health status according to the following rules:
/// - If currently `Healthy`: remains `Healthy`
/// - If currently `Degraded`: transitions to `Healthy` (recovery)
/// - If currently `Unhealthy`: transitions to `Healthy` (recovery)
///
/// # Returns
///
/// The new health status after recording the success.
pub async fn track_success(&self) -> HealthStatus {
let mut status = self.current_status.lock().await;
let new_status = status.clone().record_success();
*status = new_status.clone();
new_status
}
/// Records a failed operation, potentially transitioning to `Degraded` or `Unhealthy` status.
///
/// This method transitions the health status according to the following rules:
/// - If currently `Healthy`: transitions to `Degraded` with 1 consecutive error
/// - If currently `Degraded`: increments consecutive error count
/// - If currently `Unhealthy`: remains `Unhealthy`
///
/// # Returns
///
/// The new health status after recording the failure.
pub async fn track_failure(&self) -> HealthStatus {
let mut status = self.current_status.lock().await;
let new_status = status.clone().record_error();
*status = new_status.clone();
new_status
}
/// Marks the system as unhealthy with the specified reason.
///
/// This method immediately transitions to `Unhealthy` status regardless of
/// the current status, providing a way to explicitly mark critical failures.
///
/// # Parameters
///
/// - `reason`: Human-readable description of the failure reason.
///
/// # Returns
///
/// The new `Unhealthy` health status.
pub async fn mark_unhealthy(&self, reason: impl Into<String>) -> HealthStatus {
let mut status = self.current_status.lock().await;
let new_status = status.clone().mark_unhealthy(reason);
*status = new_status.clone();
new_status
}
/// Gets the current health status without modifying it.
///
/// # Returns
///
/// The current health status.
pub async fn get_status(&self) -> HealthStatus {
let status = self.current_status.lock().await;
status.clone()
}
/// Checks if the system is currently healthy.
///
/// # Returns
///
/// `true` if the current status is `Healthy`, `false` otherwise.
pub async fn is_healthy(&self) -> bool {
let status = self.current_status.lock().await;
status.is_healthy()
}
/// Checks if the system is currently degraded.
///
/// # Returns
///
/// `true` if the current status is `Degraded`, `false` otherwise.
pub async fn is_degraded(&self) -> bool {
let status = self.current_status.lock().await;
status.is_degraded()
}
/// Checks if the system is currently unhealthy.
///
/// # Returns
///
/// `true` if the current status is `Unhealthy`, `false` otherwise.
pub async fn is_unhealthy(&self) -> bool {
let status = self.current_status.lock().await;
status.is_unhealthy()
}
}
impl Default for HealthMonitor {
fn default() -> Self {
Self::new()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_health_monitor_initial_state() {
let monitor = HealthMonitor::new();
let status = monitor.get_status().await;
assert!(status.is_healthy());
}
#[tokio::test]
async fn test_health_monitor_with_initial_status() {
let initial_status = HealthStatus::degraded(3);
let monitor = HealthMonitor::with_initial_status(initial_status.clone());
let status = monitor.get_status().await;
assert_eq!(status, initial_status);
}
#[tokio::test]
async fn test_track_success_from_healthy() {
let monitor = HealthMonitor::new();
let status = monitor.track_success().await;
assert!(status.is_healthy());
}
#[tokio::test]
async fn test_track_success_from_degraded() {
let monitor = HealthMonitor::with_initial_status(HealthStatus::degraded(5));
let status = monitor.track_success().await;
assert!(status.is_healthy());
}
#[tokio::test]
async fn test_track_success_from_unhealthy() {
let monitor = HealthMonitor::with_initial_status(HealthStatus::unhealthy("Test failure"));
let status = monitor.track_success().await;
assert!(status.is_healthy());
}
#[tokio::test]
async fn test_track_failure_from_healthy() {
let monitor = HealthMonitor::new();
let status = monitor.track_failure().await;
assert!(status.is_degraded());
assert_eq!(status, HealthStatus::degraded(1));
}
#[tokio::test]
async fn test_track_failure_from_degraded() {
let monitor = HealthMonitor::with_initial_status(HealthStatus::degraded(2));
let status = monitor.track_failure().await;
assert!(status.is_degraded());
assert_eq!(status, HealthStatus::degraded(3));
}
#[tokio::test]
async fn test_track_failure_from_unhealthy() {
let monitor =
HealthMonitor::with_initial_status(HealthStatus::unhealthy("Critical failure"));
let status = monitor.track_failure().await;
assert!(status.is_unhealthy());
assert_eq!(status, HealthStatus::unhealthy("Critical failure"));
}
#[tokio::test]
async fn test_mark_unhealthy() {
let monitor = HealthMonitor::new();
let status = monitor.mark_unhealthy("Device disconnected").await;
assert!(status.is_unhealthy());
assert_eq!(status, HealthStatus::unhealthy("Device disconnected"));
}
#[tokio::test]
async fn test_mark_unhealthy_overwrites_previous() {
let monitor = HealthMonitor::with_initial_status(HealthStatus::degraded(3));
let status = monitor.mark_unhealthy("New failure").await;
assert!(status.is_unhealthy());
assert_eq!(status, HealthStatus::unhealthy("New failure"));
}
#[tokio::test]
async fn test_get_status() {
let monitor = HealthMonitor::with_initial_status(HealthStatus::degraded(2));
let status = monitor.get_status().await;
assert_eq!(status, HealthStatus::degraded(2));
}
#[tokio::test]
async fn test_is_healthy() {
let healthy_monitor = HealthMonitor::new();
assert!(healthy_monitor.is_healthy().await);
let degraded_monitor = HealthMonitor::with_initial_status(HealthStatus::degraded(1));
assert!(!degraded_monitor.is_healthy().await);
let unhealthy_monitor =
HealthMonitor::with_initial_status(HealthStatus::unhealthy("Failure"));
assert!(!unhealthy_monitor.is_healthy().await);
}
#[tokio::test]
async fn test_is_degraded() {
let healthy_monitor = HealthMonitor::new();
assert!(!healthy_monitor.is_degraded().await);
let degraded_monitor = HealthMonitor::with_initial_status(HealthStatus::degraded(1));
assert!(degraded_monitor.is_degraded().await);
let unhealthy_monitor =
HealthMonitor::with_initial_status(HealthStatus::unhealthy("Failure"));
assert!(!unhealthy_monitor.is_degraded().await);
}
#[tokio::test]
async fn test_is_unhealthy() {
let healthy_monitor = HealthMonitor::new();
assert!(!healthy_monitor.is_unhealthy().await);
let degraded_monitor = HealthMonitor::with_initial_status(HealthStatus::degraded(1));
assert!(!degraded_monitor.is_unhealthy().await);
let unhealthy_monitor =
HealthMonitor::with_initial_status(HealthStatus::unhealthy("Failure"));
assert!(unhealthy_monitor.is_unhealthy().await);
}
#[tokio::test]
async fn test_state_transitions_sequence() {
let monitor = HealthMonitor::new();
// Start healthy
assert!(monitor.is_healthy().await);
// First failure -> Degraded with 1 error
let status = monitor.track_failure().await;
assert!(status.is_degraded());
assert_eq!(status, HealthStatus::degraded(1));
// Second failure -> Degraded with 2 errors
let status = monitor.track_failure().await;
assert_eq!(status, HealthStatus::degraded(2));
// Third failure -> Degraded with 3 errors
let status = monitor.track_failure().await;
assert_eq!(status, HealthStatus::degraded(3));
// Recovery -> Healthy
let status = monitor.track_success().await;
assert!(status.is_healthy());
// Another failure -> Degraded with 1 error
let status = monitor.track_failure().await;
assert_eq!(status, HealthStatus::degraded(1));
// Mark as unhealthy -> Unhealthy
let status = monitor.mark_unhealthy("Critical error").await;
assert!(status.is_unhealthy());
// Recovery from unhealthy -> Healthy
let status = monitor.track_success().await;
assert!(status.is_healthy());
}
#[tokio::test]
async fn test_concurrent_access() {
let monitor = HealthMonitor::new();
// Create multiple tasks that access the monitor concurrently
// We need to clone the monitor for each task since tokio::spawn requires 'static
let monitor1 = monitor.clone();
let monitor2 = monitor.clone();
let monitor3 = monitor.clone();
let monitor4 = monitor.clone();
let task1 = tokio::spawn(async move { monitor1.track_failure().await });
let task2 = tokio::spawn(async move { monitor2.track_failure().await });
let task3 = tokio::spawn(async move { monitor3.track_success().await });
let task4 = tokio::spawn(async move { monitor4.get_status().await });
// Wait for all tasks to complete
let (result1, result2, result3, result4) = tokio::join!(task1, task2, task3, task4);
// All operations should complete without panicking
result1.expect("Task should complete successfully");
result2.expect("Task should complete successfully");
result3.expect("Task should complete successfully");
result4.expect("Task should complete successfully");
// Final status should be healthy (due to the success operation)
let final_status = monitor.get_status().await;
assert!(final_status.is_healthy());
}
}

backend/src/application/health/mod.rs

@@ -0,0 +1,6 @@
//! Health monitoring application layer.
//!
//! This module contains the health monitoring service that tracks the system's
//! health status and manages state transitions between healthy, degraded, and unhealthy states.
pub mod health_monitor;

backend/src/application/mod.rs

@@ -11,6 +11,11 @@
 //! - **Use case driven**: Each module represents a specific business use case
 //! - **Testable in isolation**: Can be tested with mock infrastructure implementations
 //!
+//! # Submodules
+//!
+//! - `health`: Health monitoring service
+//!   - `health_monitor`: Tracks system health status and state transitions
+//!
 //! # Planned Submodules
 //!
 //! - `relay`: Relay control use cases
@@ -58,3 +63,5 @@
 //! - Architecture: `specs/constitution.md` - Hexagonal Architecture principles
 //! - Use cases: `specs/001-modbus-relay-control/plan.md` - Implementation plan
 //! - Domain types: [`crate::domain`] - Domain entities and value objects
+
+pub mod health;

backend/src/domain/relay/repository/mod.rs

@@ -1,7 +1,7 @@
 mod label;
 pub use label::RelayLabelRepository;
-use super::types::RelayId;
+use super::types::{RelayId, RelayLabelError};
 /// Errors that can occur during repository operations.
 ///
@@ -16,3 +16,15 @@ pub enum RepositoryError {
     #[error("Relay not found: {0}")]
     NotFound(RelayId),
 }
+
+impl From<sqlx::Error> for RepositoryError {
+    fn from(value: sqlx::Error) -> Self {
+        Self::DatabaseError(value.to_string())
+    }
+}
+
+impl From<RelayLabelError> for RepositoryError {
+    fn from(value: RelayLabelError) -> Self {
+        Self::DatabaseError(value.to_string())
+    }
+}
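
What these `From` impls buy is that `?` now lifts `sqlx::Error` and `RelayLabelError` into `RepositoryError` without explicit `map_err` calls. A minimal illustrative sketch (the function and query are placeholders, though the `RelayLabels` table does appear in the repository implementation below):

// Illustrative only: `?` converts sqlx::Error into RepositoryError
// through the From impl added above.
async fn count_labels(pool: &sqlx::SqlitePool) -> Result<i64, RepositoryError> {
    let row: (i64,) = sqlx::query_as("SELECT COUNT(*) FROM RelayLabels")
        .fetch_one(pool)
        .await?; // sqlx::Error -> RepositoryError::DatabaseError
    Ok(row.0)
}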

backend/src/domain/relay/types/mod.rs

@@ -3,5 +3,5 @@ mod relaylabel;
 mod relaystate;
 pub use relayid::RelayId;
-pub use relaylabel::RelayLabel;
+pub use relaylabel::{RelayLabel, RelayLabelError};
 pub use relaystate::RelayState;

backend/src/domain/relay/types/relaylabel.rs

@@ -8,10 +8,19 @@ use thiserror::Error;
 #[repr(transparent)]
 pub struct RelayLabel(String);
+/// Errors that can occur when creating or validating relay labels.
 #[derive(Debug, Error)]
 pub enum RelayLabelError {
+    /// The label string is empty.
+    ///
+    /// Relay labels must contain at least one character.
     #[error("Label cannot be empty")]
     Empty,
+    /// The label string exceeds the maximum allowed length.
+    ///
+    /// Contains the actual length of the invalid label.
+    /// Maximum allowed length is 50 characters.
     #[error("Label exceeds maximum length of 50 characters: {0}")]
     TooLong(usize),
 }
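
The two variants documented above imply the validation `RelayLabel::new()` must perform. A hedged sketch of that constructor follows; the real implementation is not part of this diff, and whether the 50-character limit counts bytes or characters is not shown here:

impl RelayLabel {
    /// Illustrative constructor matching the documented error cases.
    pub fn new(label: String) -> Result<Self, RelayLabelError> {
        if label.is_empty() {
            return Err(RelayLabelError::Empty);
        }
        let len = label.chars().count(); // assumption: character count
        if len > 50 {
            return Err(RelayLabelError::TooLong(len));
        }
        Ok(Self(label))
    }
}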

backend/src/infrastructure/modbus/client_test.rs

@@ -10,6 +10,10 @@ use super::*;
 mod t025a_connection_setup_tests {
     use super::*;
+    static HOST: &str = "192.168.1.200";
+    static PORT: u16 = 502;
+    static SLAVE_ID: u8 = 1;
+
     /// T025a Test 1: `new()` with valid config connects successfully
     ///
     /// This test verifies that `ModbusRelayController::new()` can establish
@@ -21,13 +25,10 @@ mod t025a_connection_setup_tests {
     #[ignore = "Requires running Modbus TCP server"]
     async fn test_new_with_valid_config_connects_successfully() {
         // Arrange: Use localhost test server
-        let host = "127.0.0.1";
-        let port = 5020; // Test Modbus TCP port
-        let slave_id = 1;
         let timeout_secs = 5;
         // Act: Attempt to create controller
-        let result = ModbusRelayController::new(host, port, slave_id, timeout_secs).await;
+        let result = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs).await;
         // Assert: Connection should succeed
         assert!(
@@ -45,12 +46,10 @@ mod t025a_connection_setup_tests {
     async fn test_new_with_invalid_host_returns_connection_error() {
         // Arrange: Use invalid host format
         let host = "not a valid host!!!";
-        let port = 502;
-        let slave_id = 1;
         let timeout_secs = 5;
         // Act: Attempt to create controller
-        let result = ModbusRelayController::new(host, port, slave_id, timeout_secs).await;
+        let result = ModbusRelayController::new(host, PORT, SLAVE_ID, timeout_secs).await;
         // Assert: Should return ConnectionError
         assert!(result.is_err(), "Expected ConnectionError for invalid host");
@@ -74,13 +73,11 @@ mod t025a_connection_setup_tests {
     async fn test_new_with_unreachable_host_returns_connection_error() {
         // Arrange: Use localhost with a closed port (port 1 is typically closed)
         // This gives instant "connection refused" instead of waiting for TCP timeout
-        let host = "127.0.0.1";
         let port = 1; // Closed port for instant connection failure
-        let slave_id = 1;
         let timeout_secs = 1;
         // Act: Attempt to create controller
-        let result = ModbusRelayController::new(host, port, slave_id, timeout_secs).await;
+        let result = ModbusRelayController::new(HOST, port, SLAVE_ID, timeout_secs).await;
         // Assert: Should return ConnectionError
         assert!(
@@ -100,13 +97,10 @@ mod t025a_connection_setup_tests {
     #[ignore = "Requires running Modbus TCP server or refactoring to expose timeout"]
     async fn test_new_stores_correct_timeout_duration() {
         // Arrange
-        let host = "127.0.0.1";
-        let port = 5020;
-        let slave_id = 1;
         let timeout_secs = 10;
         // Act
-        let controller = ModbusRelayController::new(host, port, slave_id, timeout_secs)
+        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
             .await
             .expect("Failed to create controller");
@@ -137,6 +131,10 @@ mod t025b_read_coils_timeout_tests {
         types::RelayId,
     };
+    static HOST: &str = "192.168.1.200";
+    static PORT: u16 = 502;
+    static SLAVE_ID: u8 = 1;
+
     /// T025b Test 1: `read_coils_with_timeout()` returns coil values on success
     ///
     /// This test verifies that reading coils succeeds when the Modbus server
@@ -147,7 +145,7 @@ mod t025b_read_coils_timeout_tests {
     #[ignore = "Requires running Modbus TCP server with known state"]
     async fn test_read_coils_returns_coil_values_on_success() {
         // Arrange: Connect to test server
-        let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
+        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
             .await
             .expect("Failed to connect to test server");
@@ -251,6 +249,10 @@ mod t025c_write_single_coil_timeout_tests {
         types::{RelayId, RelayState},
     };
+    static HOST: &str = "192.168.1.200";
+    static PORT: u16 = 502;
+    static SLAVE_ID: u8 = 1;
+
     /// T025c Test 1: `write_single_coil_with_timeout()` succeeds for valid write
     ///
     /// This test verifies that writing to a coil succeeds when the Modbus server
@@ -261,7 +263,7 @@ mod t025c_write_single_coil_timeout_tests {
     #[ignore = "Requires running Modbus TCP server"]
     async fn test_write_single_coil_succeeds_for_valid_write() {
         // Arrange: Connect to test server
-        let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
+        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
             .await
             .expect("Failed to connect to test server");
@@ -336,6 +338,10 @@ mod t025d_read_relay_state_tests {
         types::{RelayId, RelayState},
     };
+    static HOST: &str = "192.168.1.200";
+    static PORT: u16 = 502;
+    static SLAVE_ID: u8 = 1;
+
     /// T025d Test 1: `read_relay_state(RelayId(1))` returns On when coil is true
     ///
     /// This test verifies that a true coil value is correctly converted to `RelayState::On`.
@@ -409,7 +415,7 @@ mod t025d_read_relay_state_tests {
     #[ignore = "Requires Modbus server with specific relay states"]
     async fn test_read_state_correctly_maps_relay_id_to_modbus_address() {
         // Arrange: Connect to test server with known relay states
-        let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
+        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
             .await
             .expect("Failed to connect to test server");
@@ -434,6 +440,10 @@ mod t025e_write_relay_state_tests {
         types::{RelayId, RelayState},
     };
+    static HOST: &str = "192.168.1.200";
+    static PORT: u16 = 502;
+    static SLAVE_ID: u8 = 1;
+
     /// T025e Test 1: `write_relay_state(RelayId(1)`, `RelayState::On`) writes true to coil
     ///
     /// This test verifies that `RelayState::On` is correctly converted to a true coil value.
@@ -441,7 +451,7 @@ mod t025e_write_relay_state_tests {
     #[ignore = "Requires Modbus server that can verify written values"]
     async fn test_write_state_on_writes_true_to_coil() {
         // Arrange: Connect to test server
-        let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
+        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
             .await
             .expect("Failed to connect to test server");
@@ -475,7 +485,7 @@ mod t025e_write_relay_state_tests {
     #[ignore = "Requires Modbus server that can verify written values"]
     async fn test_write_state_off_writes_false_to_coil() {
         // Arrange: Connect to test server
-        let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
+        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
             .await
             .expect("Failed to connect to test server");
@@ -509,7 +519,7 @@ mod t025e_write_relay_state_tests {
     #[ignore = "Requires Modbus server"]
     async fn test_write_state_correctly_maps_relay_id_to_modbus_address() {
         // Arrange: Connect to test server
-        let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
+        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
             .await
             .expect("Failed to connect to test server");
@@ -537,7 +547,7 @@ mod t025e_write_relay_state_tests {
     #[ignore = "Requires Modbus server"]
     async fn test_write_state_can_toggle_relay_multiple_times() {
         // Arrange: Connect to test server
-        let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
+        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
             .await
             .expect("Failed to connect to test server");
@@ -571,12 +581,16 @@ mod t025e_write_relay_state_tests {
 mod write_all_states_validation_tests {
     use super::*;
+    static HOST: &str = "192.168.1.200";
+    static PORT: u16 = 502;
+    static SLAVE_ID: u8 = 1;
+
     /// Test: `write_all_states()` returns `InvalidInput` when given 0 states
     #[tokio::test]
     #[ignore = "Requires Modbus server"]
     async fn test_write_all_states_with_empty_vector_returns_invalid_input() {
         // Arrange: Connect to test server
-        let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
+        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
             .await
             .expect("Failed to connect to test server");
@@ -596,7 +610,7 @@ mod write_all_states_validation_tests {
     #[ignore = "Requires Modbus server"]
     async fn test_write_all_states_with_7_states_returns_invalid_input() {
         // Arrange: Connect to test server
-        let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
+        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
             .await
             .expect("Failed to connect to test server");
@@ -626,7 +640,7 @@ mod write_all_states_validation_tests {
     #[ignore = "Requires Modbus server"]
     async fn test_write_all_states_with_9_states_returns_invalid_input() {
         // Arrange: Connect to test server
-        let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
+        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
            .await
            .expect("Failed to connect to test server");
@@ -656,7 +670,7 @@ mod write_all_states_validation_tests {
     #[ignore = "Requires Modbus server"]
     async fn test_write_all_states_with_8_states_succeeds() {
         // Arrange: Connect to test server
-        let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
+        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
             .await
             .expect("Failed to connect to test server");
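
The 0/7/9-state failures and the 8-state success above pin down the rule `write_all_states()` enforces: exactly 8 states, one per coil on the 8-channel board. A sketch of that check, reusing the `ControllerError::InvalidInput` variant that relay_label_record.rs below also uses; the function name is illustrative, not the project's actual code:

// Illustrative: the length rule exercised by the four tests above.
// RelayState and ControllerError come from the domain layer.
fn validate_state_count(states: &[RelayState]) -> Result<(), ControllerError> {
    if states.len() != 8 {
        return Err(ControllerError::InvalidInput(format!(
            "write_all_states requires exactly 8 states, got {}",
            states.len()
        )));
    }
    Ok(())
}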

backend/src/infrastructure/persistence/entities/mod.rs

@@ -0,0 +1,33 @@
//! Infrastructure entities for database persistence.
//!
//! This module defines entities that directly map to database tables,
//! providing a clear separation between the persistence layer and the
//! domain layer. These entities represent raw database records without
//! domain validation or business logic.
//!
//! # Conversion Pattern
//!
//! Infrastructure entities implement `TryFrom` traits to convert between
//! database records and domain types:
//!
//! ```rust
//! # use sta::domain::relay::types::{RelayId, RelayLabel};
//! # use sta::infrastructure::persistence::entities::relay_label_record::RelayLabelRecord;
//! # fn main() -> Result<(), Box<dyn std::error::Error>> {
//! // Database Record -> Domain Types
//! // ... from database
//! let record: RelayLabelRecord = RelayLabelRecord { relay_id: 2, label: "label".to_string() };
//! let (relay_id, relay_label): (RelayId, RelayLabel) = record.try_into()?;
//!
//! // Domain Types -> Database Record
//! let domain_record = RelayLabelRecord::new(relay_id, &relay_label);
//! # Ok(())
//! # }
//! ```
/// Database entity for relay labels.
///
/// This module contains the `RelayLabelRecord` struct which represents
/// a single row in the `RelayLabels` database table, along with conversion
/// traits to and from domain types.
pub mod relay_label_record;

backend/src/infrastructure/persistence/entities/relay_label_record.rs

@@ -0,0 +1,62 @@
use crate::domain::relay::{
controller::ControllerError,
repository::RepositoryError,
types::{RelayId, RelayLabel, RelayLabelError},
};
/// Database record representing a relay label.
///
/// This struct directly maps to the `RelayLabels` table in the
/// database. It represents the raw data as stored in the database,
/// without domain validation or business logic.
#[derive(Debug, Clone, sqlx::FromRow)]
pub struct RelayLabelRecord {
/// The relay ID (1-8) as stored in the database
pub relay_id: i64,
/// The label text as stored in the database
pub label: String,
}
impl RelayLabelRecord {
/// Creates a new `RelayLabelRecord` from domain types.
#[must_use]
pub fn new(relay_id: RelayId, label: &RelayLabel) -> Self {
Self {
relay_id: i64::from(relay_id.as_u8()),
label: label.as_str().to_string(),
}
}
}
impl TryFrom<RelayLabelRecord> for RelayId {
type Error = ControllerError;
fn try_from(value: RelayLabelRecord) -> Result<Self, Self::Error> {
let value = u8::try_from(value.relay_id).map_err(|e| {
Self::Error::InvalidInput(format!("Got value {} from database: {e}", value.relay_id))
})?;
Self::new(value)
}
}
impl TryFrom<RelayLabelRecord> for RelayLabel {
type Error = RelayLabelError;
fn try_from(value: RelayLabelRecord) -> Result<Self, Self::Error> {
Self::new(value.label)
}
}
impl TryFrom<RelayLabelRecord> for (RelayId, RelayLabel) {
type Error = RepositoryError;
fn try_from(value: RelayLabelRecord) -> Result<Self, Self::Error> {
let record_id: RelayId = value
.clone()
.try_into()
.map_err(|e: ControllerError| RepositoryError::DatabaseError(e.to_string()))?;
let label: RelayLabel = RelayLabel::new(value.label)
.map_err(|e| RepositoryError::DatabaseError(e.to_string()))?;
Ok((record_id, label))
}
}


@@ -124,7 +124,10 @@ mod relay_label_repository_contract_tests {
             .expect("Second save should succeed");
         // Verify only the second label is present
-        let result = repo.get_label(relay_id).await.expect("get_label should succeed");
+        let result = repo
+            .get_label(relay_id)
+            .await
+            .expect("get_label should succeed");
         assert!(result.is_some(), "Label should exist");
         assert_eq!(
             result.unwrap().as_str(),
@@ -270,11 +273,17 @@ mod relay_label_repository_contract_tests {
             .expect("delete should succeed");
         // Verify deleted label is gone
-        let get_result = repo.get_label(relay2).await.expect("get_label should succeed");
+        let get_result = repo
+            .get_label(relay2)
+            .await
+            .expect("get_label should succeed");
         assert!(get_result.is_none(), "Deleted label should not exist");
         // Verify other label still exists
-        let other_result = repo.get_label(relay1).await.expect("get_label should succeed");
+        let other_result = repo
+            .get_label(relay1)
+            .await
+            .expect("get_label should succeed");
         assert!(other_result.is_some(), "Other label should still exist");
         // Verify get_all_labels only returns the remaining label

backend/src/infrastructure/persistence/mod.rs

@@ -12,3 +12,5 @@ pub mod label_repository_tests;
 /// `SQLite` repository implementation for relay labels.
 pub mod sqlite_repository;
+
+pub mod entities;

backend/src/infrastructure/persistence/sqlite_repository.rs

@@ -1,6 +1,13 @@
-use sqlx::SqlitePool;
-use crate::domain::relay::repository::RepositoryError;
+use async_trait::async_trait;
+use sqlx::{SqlitePool, query_as};
+use crate::{
+    domain::relay::{
+        repository::{RelayLabelRepository, RepositoryError},
+        types::{RelayId, RelayLabel},
+    },
+    infrastructure::persistence::entities::relay_label_record::RelayLabelRecord,
+};
 /// `SQLite` implementation of the relay label repository.
 ///
@@ -62,3 +69,56 @@ impl SqliteRelayLabelRepository {
         Ok(())
     }
 }
#[async_trait]
impl RelayLabelRepository for SqliteRelayLabelRepository {
async fn get_label(&self, id: RelayId) -> Result<Option<RelayLabel>, RepositoryError> {
let id = i64::from(id.as_u8());
let result = sqlx::query_as!(
RelayLabelRecord,
"SELECT * FROM RelayLabels WHERE relay_id = ?1",
id
)
.fetch_optional(&self.pool)
.await
.map_err(|e| RepositoryError::DatabaseError(e.to_string()))?;
match result {
Some(record) => Ok(Some(record.try_into()?)),
None => Ok(None),
}
}
async fn save_label(&self, id: RelayId, label: RelayLabel) -> Result<(), RepositoryError> {
let record = RelayLabelRecord::new(id, &label);
sqlx::query!(
"INSERT OR REPLACE INTO RelayLabels (relay_id, label) VALUES (?1, ?2)",
record.relay_id,
record.label
)
.execute(&self.pool)
.await
.map_err(RepositoryError::from)?;
Ok(())
}
async fn delete_label(&self, id: RelayId) -> Result<(), RepositoryError> {
let id = i64::from(id.as_u8());
sqlx::query!("DELETE FROM RelayLabels WHERE relay_id = ?1", id)
.execute(&self.pool)
.await
.map_err(RepositoryError::from)?;
Ok(())
}
async fn get_all_labels(&self) -> Result<Vec<(RelayId, RelayLabel)>, RepositoryError> {
let result: Vec<RelayLabelRecord> = query_as!(
RelayLabelRecord,
"SELECT * FROM RelayLabels ORDER BY relay_id"
)
.fetch_all(&self.pool)
.await
.map_err(RepositoryError::from)?;
result.iter().map(|r| r.clone().try_into()).collect()
}
}
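
A hypothetical end-to-end usage of the impl above against an in-memory database; the constructor name and the migration step are assumptions, since only the trait methods appear in this diff:

// Hypothetical: constructor and migration names are not shown in this diff.
async fn demo() -> Result<(), Box<dyn std::error::Error>> {
    // Assumes the SQLx migrations creating the RelayLabels table have run.
    let pool = SqlitePool::connect("sqlite::memory:").await?;
    let repo = SqliteRelayLabelRepository::new(pool); // assumed constructor
    let id = RelayId::new(1)?;
    let label = RelayLabel::new("Pump".to_string())?;
    repo.save_label(id, label).await?;
    assert_eq!(repo.get_label(id).await?.unwrap().as_str(), "Pump");
    Ok(())
}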


@@ -0,0 +1,253 @@
// Integration tests for Modbus hardware
// These tests require a physical Modbus relay device to be connected
// Run with: cargo test -- --ignored
use std::time::Duration;
#[cfg(test)]
mod tests {
use super::*;
use sta::domain::relay::controller::RelayController;
use sta::domain::relay::types::{RelayId, RelayState};
use sta::infrastructure::modbus::client::ModbusRelayController;
static HOST: &str = "192.168.1.200";
static PORT: u16 = 502;
static SLAVE_ID: u8 = 1;
#[tokio::test]
#[ignore = "Requires physical Modbus device"]
async fn test_modbus_connection() {
// This test verifies we can connect to the actual Modbus device
// Configured with settings from settings/base.yaml
let timeout_secs = 5;
let _controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
.await
.expect("Failed to connect to Modbus device");
// If we got here, connection was successful
println!("✓ Successfully connected to Modbus device");
}
#[tokio::test]
#[ignore = "Requires physical Modbus device"]
async fn test_read_relay_states() {
let timeout_secs = 5;
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
.await
.expect("Failed to connect to Modbus device");
// Test reading individual relay states
for relay_id in 1..=8 {
let relay_id = RelayId::new(relay_id).unwrap();
let state = controller
.read_relay_state(relay_id)
.await
.expect("Failed to read relay state");
println!("Relay {}: {:?}", relay_id.as_u8(), state);
}
}
#[tokio::test]
#[ignore = "Requires physical Modbus device"]
async fn test_read_all_relays() {
let timeout_secs = 5;
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
.await
.expect("Failed to connect to Modbus device");
let relays = controller
.read_all_states()
.await
.expect("Failed to read all relay states");
assert_eq!(relays.len(), 8, "Should have exactly 8 relays");
for (i, state) in relays.iter().enumerate() {
let relay_id = i + 1;
println!("Relay {}: {:?}", relay_id, state);
}
}
#[tokio::test]
#[ignore = "Requires physical Modbus device"]
async fn test_write_relay_state() {
let timeout_secs = 5;
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
.await
.expect("Failed to connect to Modbus device");
let relay_id = RelayId::new(1).unwrap();
// Turn relay on
controller
.write_relay_state(relay_id, RelayState::On)
.await
.expect("Failed to write relay state");
// Verify it's on
let state = controller
.read_relay_state(relay_id)
.await
.expect("Failed to read relay state");
assert_eq!(state, RelayState::On, "Relay should be ON");
// Turn relay off
controller
.write_relay_state(relay_id, RelayState::Off)
.await
.expect("Failed to write relay state");
// Verify it's off
let state = controller
.read_relay_state(relay_id)
.await
.expect("Failed to read relay state");
assert_eq!(state, RelayState::Off, "Relay should be OFF");
}
#[tokio::test]
#[ignore = "Requires physical Modbus device"]
async fn test_write_all_relays() {
let timeout_secs = 5;
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
.await
.expect("Failed to connect to Modbus device");
// Turn all relays on
let all_on_states = vec![RelayState::On; 8];
controller
.write_all_states(all_on_states)
.await
.expect("Failed to write all relay states");
// Verify all are on
let relays = controller
.read_all_states()
.await
.expect("Failed to read all relay states");
for state in &relays {
assert_eq!(*state, RelayState::On, "All relays should be ON");
}
// Turn all relays off
let all_off_states = vec![RelayState::Off; 8];
controller
.write_all_states(all_off_states)
.await
.expect("Failed to write all relay states");
// Verify all are off
let relays = controller
.read_all_states()
.await
.expect("Failed to read all relay states");
for state in &relays {
assert_eq!(*state, RelayState::Off, "All relays should be OFF");
}
}
#[tokio::test]
#[ignore = "Requires physical Modbus device"]
async fn test_timeout_handling() {
let timeout_secs = 1; // Short timeout for testing
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
.await
.expect("Failed to connect to Modbus device");
// This test verifies that timeout works correctly
// We'll try to read a relay state with a very short timeout
let relay_id = RelayId::new(1).unwrap();
// The operation should either succeed quickly or timeout
let result = tokio::time::timeout(
Duration::from_secs(2),
controller.read_relay_state(relay_id),
)
.await;
match result {
Ok(Ok(state)) => {
println!("✓ Operation completed within timeout: {:?}", state);
}
Ok(Err(e)) => {
println!("✓ Operation failed (expected): {}", e);
}
Err(_) => {
println!("✓ Operation timed out (expected)");
}
}
}
#[tokio::test]
#[ignore = "Requires physical Modbus device"]
async fn test_concurrent_access() {
let timeout_secs = 5;
let _controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
.await
.expect("Failed to connect to Modbus device");
// Test concurrent access to the controller
// We'll test a few relays concurrently using tokio::join!
// The controller isn't Clone, so each task opens its own connection;
// tokio::spawn plus tokio::join! still exercises genuinely concurrent
// reads (see the Arc-sharing sketch after this module for an alternative)
let relay_id1 = RelayId::new(1).unwrap();
let relay_id2 = RelayId::new(2).unwrap();
let relay_id3 = RelayId::new(3).unwrap();
let relay_id4 = RelayId::new(4).unwrap();
let task1 = tokio::spawn(async move {
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
.await
.expect("Failed to connect");
controller.read_relay_state(relay_id1).await
});
let task2 = tokio::spawn(async move {
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
.await
.expect("Failed to connect");
controller.read_relay_state(relay_id2).await
});
let task3 = tokio::spawn(async move {
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
.await
.expect("Failed to connect");
controller.read_relay_state(relay_id3).await
});
let task4 = tokio::spawn(async move {
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
.await
.expect("Failed to connect");
controller.read_relay_state(relay_id4).await
});
let (result1, result2, result3, result4) = tokio::join!(task1, task2, task3, task4);
// Process results
if let Ok(Ok(state)) = result1 {
println!("Relay 1: {:?}", state);
}
if let Ok(Ok(state)) = result2 {
println!("Relay 2: {:?}", state);
}
if let Ok(Ok(state)) = result3 {
println!("Relay 3: {:?}", state);
}
if let Ok(Ok(state)) = result4 {
println!("Relay 4: {:?}", state);
}
}
}
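Since the reads above go through a shared reference, the per-task reconnects could plausibly be replaced by sharing a single controller behind an Arc. A hedged sketch that reuses the module's HOST, PORT, and SLAVE_ID constants; it assumes ModbusRelayController is Send + Sync, which the committed code does not confirm:

// Hypothetical variant of test_concurrent_access: share one connection behind
// an Arc instead of opening four. Assumes ModbusRelayController is Send + Sync
// and that the read path works through &self, as the tests above suggest.
use std::sync::Arc;

async fn concurrent_reads_over_shared_connection() {
    let controller = Arc::new(
        ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
            .await
            .expect("Failed to connect to Modbus device"),
    );
    let mut handles = Vec::new();
    for id in 1..=4u8 {
        let controller = Arc::clone(&controller);
        handles.push(tokio::spawn(async move {
            let relay_id = RelayId::new(id).expect("Valid relay ID");
            (id, controller.read_relay_state(relay_id).await)
        }));
    }
    for handle in handles {
        let (id, result) = handle.await.expect("Task panicked");
        match result {
            Ok(state) => println!("Relay {}: {:?}", id, state),
            Err(e) => println!("Relay {} failed: {}", id, e),
        }
    }
}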

View File

@@ -0,0 +1,473 @@
//! Functional tests for `SqliteRelayLabelRepository` implementation.
//!
//! These tests verify that the SQLite repository correctly implements
//! the `RelayLabelRepository` trait using the new infrastructure entities
//! and conversion patterns.
use sta::{
domain::relay::{
repository::RelayLabelRepository,
types::{RelayId, RelayLabel},
},
infrastructure::persistence::{
entities::relay_label_record::RelayLabelRecord,
sqlite_repository::SqliteRelayLabelRepository,
},
};
/// Test that `get_label` returns None for non-existent relay.
#[tokio::test]
async fn test_get_label_returns_none_for_non_existent_relay() {
let repo = SqliteRelayLabelRepository::in_memory()
.await
.expect("Failed to create repository");
let relay_id = RelayId::new(1).expect("Valid relay ID");
let result = repo.get_label(relay_id).await;
assert!(result.is_ok(), "get_label should succeed");
assert!(
result.unwrap().is_none(),
"get_label should return None for non-existent relay"
);
}
/// Test that `get_label` retrieves previously saved label.
#[tokio::test]
async fn test_get_label_retrieves_saved_label() {
let repo = SqliteRelayLabelRepository::in_memory()
.await
.expect("Failed to create repository");
let relay_id = RelayId::new(2).expect("Valid relay ID");
let label = RelayLabel::new("Heater".to_string()).expect("Valid label");
// Save the label
repo.save_label(relay_id, label.clone())
.await
.expect("save_label should succeed");
// Retrieve the label
let result = repo.get_label(relay_id).await;
assert!(result.is_ok(), "get_label should succeed");
let retrieved = result.unwrap();
assert!(retrieved.is_some(), "get_label should return Some");
assert_eq!(
retrieved.unwrap().as_str(),
"Heater",
"Retrieved label should match saved label"
);
}
/// Test that `save_label` successfully saves a label.
#[tokio::test]
async fn test_save_label_succeeds() {
let repo = SqliteRelayLabelRepository::in_memory()
.await
.expect("Failed to create repository");
let relay_id = RelayId::new(1).expect("Valid relay ID");
let label = RelayLabel::new("Pump".to_string()).expect("Valid label");
let result = repo.save_label(relay_id, label).await;
assert!(result.is_ok(), "save_label should succeed");
}
/// Test that `save_label` overwrites existing label.
#[tokio::test]
async fn test_save_label_overwrites_existing_label() {
let repo = SqliteRelayLabelRepository::in_memory()
.await
.expect("Failed to create repository");
let relay_id = RelayId::new(4).expect("Valid relay ID");
let label1 = RelayLabel::new("First".to_string()).expect("Valid label");
let label2 = RelayLabel::new("Second".to_string()).expect("Valid label");
// Save first label
repo.save_label(relay_id, label1)
.await
.expect("First save should succeed");
// Overwrite with second label
repo.save_label(relay_id, label2)
.await
.expect("Second save should succeed");
// Verify only the second label is present
let result = repo
.get_label(relay_id)
.await
.expect("get_label should succeed");
assert!(result.is_some(), "Label should exist");
assert_eq!(
result.unwrap().as_str(),
"Second",
"Label should be updated to second value"
);
}
/// Test that `save_label` works for all valid relay IDs (1-8).
#[tokio::test]
async fn test_save_label_for_all_valid_relay_ids() {
let repo = SqliteRelayLabelRepository::in_memory()
.await
.expect("Failed to create repository");
for id in 1..=8 {
let relay_id = RelayId::new(id).expect("Valid relay ID");
let label = RelayLabel::new(format!("Relay {}", id)).expect("Valid label");
let result = repo.save_label(relay_id, label).await;
assert!(
result.is_ok(),
"save_label should succeed for relay ID {}",
id
);
}
// Verify all labels were saved
let all_labels = repo
.get_all_labels()
.await
.expect("get_all_labels should succeed");
assert_eq!(all_labels.len(), 8, "Should have all 8 relay labels");
}
/// Test that `save_label` accepts maximum length labels.
#[tokio::test]
async fn test_save_label_accepts_max_length_labels() {
let repo = SqliteRelayLabelRepository::in_memory()
.await
.expect("Failed to create repository");
let relay_id = RelayId::new(5).expect("Valid relay ID");
let max_label = RelayLabel::new("A".repeat(50)).expect("Valid max-length label");
let result = repo.save_label(relay_id, max_label).await;
assert!(
result.is_ok(),
"save_label should succeed with max-length label"
);
// Verify it was saved correctly
let retrieved = repo
.get_label(relay_id)
.await
.expect("get_label should succeed");
assert!(retrieved.is_some(), "Label should be saved");
assert_eq!(
retrieved.unwrap().as_str().len(),
50,
"Label should have correct length"
);
}
/// Test that `delete_label` succeeds for existing label.
#[tokio::test]
async fn test_delete_label_succeeds_for_existing_label() {
let repo = SqliteRelayLabelRepository::in_memory()
.await
.expect("Failed to create repository");
let relay_id = RelayId::new(7).expect("Valid relay ID");
let label = RelayLabel::new("ToDelete".to_string()).expect("Valid label");
// Save the label first
repo.save_label(relay_id, label)
.await
.expect("save_label should succeed");
// Delete it
let result = repo.delete_label(relay_id).await;
assert!(result.is_ok(), "delete_label should succeed");
}
/// Test that `delete_label` succeeds for non-existent label.
#[tokio::test]
async fn test_delete_label_succeeds_for_non_existent_label() {
let repo = SqliteRelayLabelRepository::in_memory()
.await
.expect("Failed to create repository");
let relay_id = RelayId::new(8).expect("Valid relay ID");
// Delete without saving first
let result = repo.delete_label(relay_id).await;
assert!(
result.is_ok(),
"delete_label should succeed even if label doesn't exist"
);
}
/// Test that `delete_label` removes label from repository.
#[tokio::test]
async fn test_delete_label_removes_label_from_repository() {
let repo = SqliteRelayLabelRepository::in_memory()
.await
.expect("Failed to create repository");
let relay1 = RelayId::new(1).expect("Valid relay ID");
let relay2 = RelayId::new(2).expect("Valid relay ID");
let label1 = RelayLabel::new("Keep".to_string()).expect("Valid label");
let label2 = RelayLabel::new("Remove".to_string()).expect("Valid label");
// Save two labels
repo.save_label(relay1, label1)
.await
.expect("save should succeed");
repo.save_label(relay2, label2)
.await
.expect("save should succeed");
// Delete one label
repo.delete_label(relay2)
.await
.expect("delete should succeed");
// Verify deleted label is gone
let get_result = repo
.get_label(relay2)
.await
.expect("get_label should succeed");
assert!(get_result.is_none(), "Deleted label should not exist");
// Verify other label still exists
let other_result = repo
.get_label(relay1)
.await
.expect("get_label should succeed");
assert!(other_result.is_some(), "Other label should still exist");
// Verify get_all_labels only returns the remaining label
let all_labels = repo
.get_all_labels()
.await
.expect("get_all_labels should succeed");
assert_eq!(all_labels.len(), 1, "Should only have one label remaining");
assert_eq!(all_labels[0].0.as_u8(), 1, "Should be relay 1");
}
/// Test that `get_all_labels` returns empty vector when no labels exist.
#[tokio::test]
async fn test_get_all_labels_returns_empty_when_no_labels() {
let repo = SqliteRelayLabelRepository::in_memory()
.await
.expect("Failed to create repository");
let result = repo.get_all_labels().await;
assert!(result.is_ok(), "get_all_labels should succeed");
assert!(
result.unwrap().is_empty(),
"get_all_labels should return empty vector"
);
}
/// Test that `get_all_labels` returns all saved labels.
#[tokio::test]
async fn test_get_all_labels_returns_all_saved_labels() {
let repo = SqliteRelayLabelRepository::in_memory()
.await
.expect("Failed to create repository");
let relay1 = RelayId::new(1).expect("Valid relay ID");
let relay3 = RelayId::new(3).expect("Valid relay ID");
let relay5 = RelayId::new(5).expect("Valid relay ID");
let label1 = RelayLabel::new("Pump".to_string()).expect("Valid label");
let label3 = RelayLabel::new("Heater".to_string()).expect("Valid label");
let label5 = RelayLabel::new("Fan".to_string()).expect("Valid label");
// Save labels
repo.save_label(relay1, label1.clone())
.await
.expect("Save should succeed");
repo.save_label(relay3, label3.clone())
.await
.expect("Save should succeed");
repo.save_label(relay5, label5.clone())
.await
.expect("Save should succeed");
// Retrieve all labels
let result = repo
.get_all_labels()
.await
.expect("get_all_labels should succeed");
assert_eq!(result.len(), 3, "Should return exactly 3 labels");
// Verify the labels are present (order may vary by implementation)
let has_relay1 = result
.iter()
.any(|(id, label)| id.as_u8() == 1 && label.as_str() == "Pump");
let has_relay3 = result
.iter()
.any(|(id, label)| id.as_u8() == 3 && label.as_str() == "Heater");
let has_relay5 = result
.iter()
.any(|(id, label)| id.as_u8() == 5 && label.as_str() == "Fan");
assert!(has_relay1, "Should contain relay 1 with label 'Pump'");
assert!(has_relay3, "Should contain relay 3 with label 'Heater'");
assert!(has_relay5, "Should contain relay 5 with label 'Fan'");
}
/// Test that `get_all_labels` excludes relays without labels.
#[tokio::test]
async fn test_get_all_labels_excludes_relays_without_labels() {
let repo = SqliteRelayLabelRepository::in_memory()
.await
.expect("Failed to create repository");
let relay2 = RelayId::new(2).expect("Valid relay ID");
let label2 = RelayLabel::new("Only This One".to_string()).expect("Valid label");
repo.save_label(relay2, label2)
.await
.expect("Save should succeed");
let result = repo
.get_all_labels()
.await
.expect("get_all_labels should succeed");
assert_eq!(
result.len(),
1,
"Should return only the one relay with a label"
);
assert_eq!(result[0].0.as_u8(), 2, "Should be relay 2");
}
/// Test that `get_all_labels` excludes deleted labels.
#[tokio::test]
async fn test_get_all_labels_excludes_deleted_labels() {
let repo = SqliteRelayLabelRepository::in_memory()
.await
.expect("Failed to create repository");
let relay1 = RelayId::new(1).expect("Valid relay ID");
let relay2 = RelayId::new(2).expect("Valid relay ID");
let relay3 = RelayId::new(3).expect("Valid relay ID");
let label1 = RelayLabel::new("Keep1".to_string()).expect("Valid label");
let label2 = RelayLabel::new("Delete".to_string()).expect("Valid label");
let label3 = RelayLabel::new("Keep2".to_string()).expect("Valid label");
// Save all three labels
repo.save_label(relay1, label1)
.await
.expect("save should succeed");
repo.save_label(relay2, label2)
.await
.expect("save should succeed");
repo.save_label(relay3, label3)
.await
.expect("save should succeed");
// Delete the middle one
repo.delete_label(relay2)
.await
.expect("delete should succeed");
// Verify get_all_labels only returns the two remaining labels
let result = repo
.get_all_labels()
.await
.expect("get_all_labels should succeed");
assert_eq!(result.len(), 2, "Should have 2 labels after deletion");
let has_relay1 = result.iter().any(|(id, _)| id.as_u8() == 1);
let has_relay2 = result.iter().any(|(id, _)| id.as_u8() == 2);
let has_relay3 = result.iter().any(|(id, _)| id.as_u8() == 3);
assert!(has_relay1, "Relay 1 should be present");
assert!(!has_relay2, "Relay 2 should NOT be present (deleted)");
assert!(has_relay3, "Relay 3 should be present");
}
/// Test that entity conversion works correctly.
#[tokio::test]
async fn test_entity_conversion_roundtrip() {
let repo = SqliteRelayLabelRepository::in_memory()
.await
.expect("Failed to create repository");
let relay_id = RelayId::new(1).expect("Valid relay ID");
let relay_label = RelayLabel::new("Test Label".to_string()).expect("Valid label");
// Create record from domain types
let _record = RelayLabelRecord::new(relay_id, &relay_label);
// Save using repository
repo.save_label(relay_id, relay_label.clone())
.await
.expect("save_label should succeed");
// Retrieve and verify conversion
let retrieved = repo
.get_label(relay_id)
.await
.expect("get_label should succeed");
assert!(retrieved.is_some(), "Label should be retrieved");
assert_eq!(retrieved.unwrap(), relay_label, "Labels should match");
}
/// Test that invalid inputs are rejected by domain validation before they reach the repository.
#[tokio::test]
async fn test_repository_error_handling() {
let _repo = SqliteRelayLabelRepository::in_memory()
.await
.expect("Failed to create repository");
// Test with invalid relay ID (should be caught by domain validation)
let invalid_relay_id = RelayId::new(9); // This will fail validation
assert!(invalid_relay_id.is_err(), "Invalid relay ID should fail validation");
// Test with invalid label (should be caught by domain validation)
let invalid_label = RelayLabel::new("".to_string()); // Empty label
assert!(invalid_label.is_err(), "Empty label should fail validation");
}
/// Test that the repository correctly handles multiple sequential operations
/// (it is not `Clone`, so true concurrency is not exercised here; see the
/// sketch after these tests for an `Arc`-based concurrent variant).
#[tokio::test]
async fn test_concurrent_operations_are_thread_safe() {
let repo = SqliteRelayLabelRepository::in_memory()
.await
.expect("Failed to create repository");
// SqliteRelayLabelRepository doesn't implement Clone, so we run the
// operations sequentially; this still verifies that the repository
// handles a series of operations correctly.
// Save multiple labels sequentially
let relay_id1 = RelayId::new(1).expect("Valid relay ID");
let label1 = RelayLabel::new("Task1".to_string()).expect("Valid label");
repo.save_label(relay_id1, label1)
.await
.expect("First save should succeed");
let relay_id2 = RelayId::new(2).expect("Valid relay ID");
let label2 = RelayLabel::new("Task2".to_string()).expect("Valid label");
repo.save_label(relay_id2, label2)
.await
.expect("Second save should succeed");
let relay_id3 = RelayId::new(3).expect("Valid relay ID");
let label3 = RelayLabel::new("Task3".to_string()).expect("Valid label");
repo.save_label(relay_id3, label3)
.await
.expect("Third save should succeed");
// Verify all labels were saved
let all_labels = repo
.get_all_labels()
.await
.expect("get_all_labels should succeed");
assert_eq!(all_labels.len(), 3, "Should have all 3 labels");
}
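If genuinely concurrent access is wanted despite the missing Clone impl, the repository can plausibly be shared behind an Arc, since every trait method takes &self. A sketch under that assumption (it also assumes the repository is Send + Sync and reuses this file's imports); not part of the committed tests:

// Hypothetical concurrent variant: Arc-share the repository across tasks.
// Assumes SqliteRelayLabelRepository is Send + Sync; not committed code.
use std::sync::Arc;

#[tokio::test]
async fn test_concurrent_saves_via_arc() {
    let repo = Arc::new(
        SqliteRelayLabelRepository::in_memory()
            .await
            .expect("Failed to create repository"),
    );
    let mut handles = Vec::new();
    for id in 1..=3u8 {
        let repo = Arc::clone(&repo);
        handles.push(tokio::spawn(async move {
            let relay_id = RelayId::new(id).expect("Valid relay ID");
            let label = RelayLabel::new(format!("Task{}", id)).expect("Valid label");
            repo.save_label(relay_id, label).await
        }));
    }
    for handle in handles {
        handle
            .await
            .expect("Task panicked")
            .expect("save should succeed");
    }
    let all_labels = repo
        .get_all_labels()
        .await
        .expect("get_all_labels should succeed");
    assert_eq!(all_labels.len(), 3, "All concurrent saves should be visible");
}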

View File

@@ -31,7 +31,10 @@ release-run:
cargo run --release
test:
cargo test --all --all-targets
test-hardware:
cargo test --all --all-targets -- --ignored
coverage:
mkdir -p coverage

File diff suppressed because it is too large.

File diff suppressed because it is too large.