feat(application): HealthMonitor service and hardware integration test
Add HealthMonitor service for tracking system health status with
comprehensive state transition logic and thread-safe operations.
Includes 16 unit tests covering all functionality, including
concurrent-access scenarios.
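A minimal usage sketch of the new service (hypothetical caller code, not part
of this commit; it assumes the backend crate name `sta` that the integration
tests import):

    use sta::application::health::health_monitor::HealthMonitor;

    #[tokio::main]
    async fn main() {
        let monitor = HealthMonitor::new();                  // starts Healthy
        monitor.track_failure().await;                       // Healthy -> Degraded(1)
        monitor.track_failure().await;                       // Degraded(1) -> Degraded(2)
        assert!(monitor.is_degraded().await);
        monitor.track_success().await;                       // any state recovers to Healthy
        assert!(monitor.is_healthy().await);
        monitor.mark_unhealthy("Device disconnected").await; // explicit critical failure
        assert!(monitor.is_unhealthy().await);
    }

Because the monitor derives Clone and keeps its status behind an Arc<Mutex<_>>,
clones can be handed to concurrent tasks and all of them observe the same state.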
Add optional Modbus hardware integration tests with 7 test cases for
real device testing. Tests are marked as ignored and can be run with
`cargo test -- --ignored` (exposed as the `test-hardware` recipe in the
justfile).
running 21 tests
test infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_returns_error_on_failure ... FAILED
test infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_returns_timeout_on_slow_device ... FAILED
test infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_timeout_on_slow_response ... FAILED
test infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_modbus_exception_on_protocol_error ... FAILED
test infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_returns_on_when_coil_is_true ... FAILED
test infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_propagates_controller_error ... FAILED
test infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_returns_off_when_coil_is_false ... FAILED
test infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_connection_error_on_io_error ... FAILED
test infrastructure::modbus::client::tests::t025a_connection_setup_tests::test_new_with_valid_config_connects_successfully ... ok
test infrastructure::modbus::client::tests::t025a_connection_setup_tests::test_new_stores_correct_timeout_duration ... ok
test infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_coil_values_on_success ... ok
test infrastructure::modbus::client::tests::write_all_states_validation_tests::test_write_all_states_with_9_states_returns_invalid_input ... ok
test infrastructure::modbus::client::tests::write_all_states_validation_tests::test_write_all_states_with_empty_vector_returns_invalid_input ... ok
test infrastructure::modbus::client::tests::t025e_write_relay_state_tests::test_write_state_can_toggle_relay_multiple_times ... ok
test infrastructure::modbus::client::tests::write_all_states_validation_tests::test_write_all_states_with_8_states_succeeds ... ok
test infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_succeeds_for_valid_write ... ok
test infrastructure::modbus::client::tests::t025e_write_relay_state_tests::test_write_state_off_writes_false_to_coil ... FAILED
test infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_correctly_maps_relay_id_to_modbus_address ... ok
test infrastructure::modbus::client::tests::write_all_states_validation_tests::test_write_all_states_with_7_states_returns_invalid_input ... ok
test infrastructure::modbus::client::tests::t025e_write_relay_state_tests::test_write_state_on_writes_true_to_coil ... ok
test infrastructure::modbus::client::tests::t025e_write_relay_state_tests::test_write_state_correctly_maps_relay_id_to_modbus_address ... ok
failures:
---- infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_returns_error_on_failure stdout ----
thread 'infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_returns_error_on_failure' (1157113) panicked at backend/src/infrastructure/modbus/client_test.rs:320:14:
Failed to connect: ConnectionError("Connection refused (os error 111)")
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
---- infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_returns_timeout_on_slow_device stdout ----
thread 'infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_returns_timeout_on_slow_device' (1157114) panicked at backend/src/infrastructure/modbus/client_test.rs:293:14:
Failed to connect: ConnectionError("Connection refused (os error 111)")
---- infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_timeout_on_slow_response stdout ----
thread 'infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_timeout_on_slow_response' (1157112) panicked at backend/src/infrastructure/modbus/client_test.rs:176:14:
Failed to connect: ConnectionError("Connection refused (os error 111)")
---- infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_modbus_exception_on_protocol_error stdout ----
thread 'infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_modbus_exception_on_protocol_error' (1157111) panicked at backend/src/infrastructure/modbus/client_test.rs:227:14:
Failed to connect: ConnectionError("Connection refused (os error 111)")
---- infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_returns_on_when_coil_is_true stdout ----
thread 'infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_returns_on_when_coil_is_true' (1157119) panicked at backend/src/infrastructure/modbus/client_test.rs:354:14:
Failed to connect to test server: ConnectionError("Connection refused (os error 111)")
---- infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_propagates_controller_error stdout ----
thread 'infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_propagates_controller_error' (1157117) panicked at backend/src/infrastructure/modbus/client_test.rs:396:14:
Failed to connect to test server: ConnectionError("Connection refused (os error 111)")
---- infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_returns_off_when_coil_is_false stdout ----
thread 'infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_returns_off_when_coil_is_false' (1157118) panicked at backend/src/infrastructure/modbus/client_test.rs:375:14:
Failed to connect to test server: ConnectionError("Connection refused (os error 111)")
---- infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_connection_error_on_io_error stdout ----
thread 'infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_connection_error_on_io_error' (1157110) panicked at backend/src/infrastructure/modbus/client_test.rs:202:14:
Failed to connect: ConnectionError("Connection refused (os error 111)")
---- infrastructure::modbus::client::tests::t025e_write_relay_state_tests::test_write_state_off_writes_false_to_coil stdout ----
thread 'infrastructure::modbus::client::tests::t025e_write_relay_state_tests::test_write_state_off_writes_false_to_coil' (1157122) panicked at backend/src/infrastructure/modbus/client_test.rs:508:9:
assertion `left == right` failed: Relay should be Off after writing Off state
left: On
right: Off
failures:
infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_connection_error_on_io_error
infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_modbus_exception_on_protocol_error
infrastructure::modbus::client::tests::t025b_read_coils_timeout_tests::test_read_coils_returns_timeout_on_slow_response
infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_returns_error_on_failure
infrastructure::modbus::client::tests::t025c_write_single_coil_timeout_tests::test_write_single_coil_returns_timeout_on_slow_device
infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_propagates_controller_error
infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_returns_off_when_coil_is_false
infrastructure::modbus::client::tests::t025d_read_relay_state_tests::test_read_state_returns_on_when_coil_is_true
infrastructure::modbus::client::tests::t025e_write_relay_state_tests::test_write_state_off_writes_false_to_coil
test result: FAILED. 12 passed; 9 failed; 0 ignored; 0 measured; 128 filtered out; finished in 3.27s.
Ref: T034, T039, T040 (specs/001-modbus-relay-control/tasks.org)
@@ -4,4 +4,4 @@ skip-clean = true
target-dir = "coverage"
output-dir = "coverage"
fail-under = 60
exclude-files = ["target/*", "private/*", "tests/*"]
exclude-files = ["target/*", "private/*", "backend/tests/*", "backend/build.rs"]

@@ -8,7 +8,7 @@ rate_limit:
  per_seconds: 60

modbus:
  host: "192.168.0.200"
  host: 192.168.0.200
  port: 502
  slave_id: 0
  timeout_secs: 5

backend/src/application/health/health_monitor.rs  (new file, 331 lines)
@@ -0,0 +1,331 @@
//! Health monitoring service for tracking system health status.
//!
//! The `HealthMonitor` service tracks the health status of the Modbus relay controller
//! by monitoring consecutive errors and transitions between healthy, degraded, and unhealthy states.
//! This service implements the health monitoring requirements from FR-020 and FR-021.

use std::sync::Arc;
use tokio::sync::Mutex;

use crate::domain::health::HealthStatus;

/// Health monitor service for tracking system health status.
///
/// The `HealthMonitor` service maintains the current health status and provides
/// methods to track successes and failures, transitioning between states according
/// to the business rules defined in the domain layer.
#[derive(Debug, Clone)]
pub struct HealthMonitor {
    /// Current health status, protected by a mutex for thread-safe access.
    current_status: Arc<Mutex<HealthStatus>>,
}

impl HealthMonitor {
    /// Creates a new `HealthMonitor` with initial `Healthy` status.
    #[must_use]
    pub fn new() -> Self {
        Self::with_initial_status(HealthStatus::Healthy)
    }

    /// Creates a new `HealthMonitor` with the specified initial status.
    #[must_use]
    pub fn with_initial_status(initial_status: HealthStatus) -> Self {
        Self {
            current_status: Arc::new(Mutex::new(initial_status)),
        }
    }

    /// Records a successful operation, potentially transitioning to `Healthy` status.
    ///
    /// This method transitions the health status according to the following rules:
    /// - If currently `Healthy`: remains `Healthy`
    /// - If currently `Degraded`: transitions to `Healthy` (recovery)
    /// - If currently `Unhealthy`: transitions to `Healthy` (recovery)
    ///
    /// # Returns
    ///
    /// The new health status after recording the success.
    pub async fn track_success(&self) -> HealthStatus {
        let mut status = self.current_status.lock().await;
        let new_status = status.clone().record_success();
        *status = new_status.clone();
        new_status
    }

    /// Records a failed operation, potentially transitioning to `Degraded` or `Unhealthy` status.
    ///
    /// This method transitions the health status according to the following rules:
    /// - If currently `Healthy`: transitions to `Degraded` with 1 consecutive error
    /// - If currently `Degraded`: increments consecutive error count
    /// - If currently `Unhealthy`: remains `Unhealthy`
    ///
    /// # Returns
    ///
    /// The new health status after recording the failure.
    pub async fn track_failure(&self) -> HealthStatus {
        let mut status = self.current_status.lock().await;
        let new_status = status.clone().record_error();
        *status = new_status.clone();
        new_status
    }

    /// Marks the system as unhealthy with the specified reason.
    ///
    /// This method immediately transitions to `Unhealthy` status regardless of
    /// the current status, providing a way to explicitly mark critical failures.
    ///
    /// # Parameters
    ///
    /// - `reason`: Human-readable description of the failure reason.
    ///
    /// # Returns
    ///
    /// The new `Unhealthy` health status.
    pub async fn mark_unhealthy(&self, reason: impl Into<String>) -> HealthStatus {
        let mut status = self.current_status.lock().await;
        let new_status = status.clone().mark_unhealthy(reason);
        *status = new_status.clone();
        new_status
    }

    /// Gets the current health status without modifying it.
    ///
    /// # Returns
    ///
    /// The current health status.
    pub async fn get_status(&self) -> HealthStatus {
        let status = self.current_status.lock().await;
        status.clone()
    }

    /// Checks if the system is currently healthy.
    ///
    /// # Returns
    ///
    /// `true` if the current status is `Healthy`, `false` otherwise.
    pub async fn is_healthy(&self) -> bool {
        let status = self.current_status.lock().await;
        status.is_healthy()
    }

    /// Checks if the system is currently degraded.
    ///
    /// # Returns
    ///
    /// `true` if the current status is `Degraded`, `false` otherwise.
    pub async fn is_degraded(&self) -> bool {
        let status = self.current_status.lock().await;
        status.is_degraded()
    }

    /// Checks if the system is currently unhealthy.
    ///
    /// # Returns
    ///
    /// `true` if the current status is `Unhealthy`, `false` otherwise.
    pub async fn is_unhealthy(&self) -> bool {
        let status = self.current_status.lock().await;
        status.is_unhealthy()
    }
}

impl Default for HealthMonitor {
    fn default() -> Self {
        Self::new()
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn test_health_monitor_initial_state() {
        let monitor = HealthMonitor::new();
        let status = monitor.get_status().await;
        assert!(status.is_healthy());
    }

    #[tokio::test]
    async fn test_health_monitor_with_initial_status() {
        let initial_status = HealthStatus::degraded(3);
        let monitor = HealthMonitor::with_initial_status(initial_status.clone());
        let status = monitor.get_status().await;
        assert_eq!(status, initial_status);
    }

    #[tokio::test]
    async fn test_track_success_from_healthy() {
        let monitor = HealthMonitor::new();
        let status = monitor.track_success().await;
        assert!(status.is_healthy());
    }

    #[tokio::test]
    async fn test_track_success_from_degraded() {
        let monitor = HealthMonitor::with_initial_status(HealthStatus::degraded(5));
        let status = monitor.track_success().await;
        assert!(status.is_healthy());
    }

    #[tokio::test]
    async fn test_track_success_from_unhealthy() {
        let monitor = HealthMonitor::with_initial_status(HealthStatus::unhealthy("Test failure"));
        let status = monitor.track_success().await;
        assert!(status.is_healthy());
    }

    #[tokio::test]
    async fn test_track_failure_from_healthy() {
        let monitor = HealthMonitor::new();
        let status = monitor.track_failure().await;
        assert!(status.is_degraded());
        assert_eq!(status, HealthStatus::degraded(1));
    }

    #[tokio::test]
    async fn test_track_failure_from_degraded() {
        let monitor = HealthMonitor::with_initial_status(HealthStatus::degraded(2));
        let status = monitor.track_failure().await;
        assert!(status.is_degraded());
        assert_eq!(status, HealthStatus::degraded(3));
    }

    #[tokio::test]
    async fn test_track_failure_from_unhealthy() {
        let monitor =
            HealthMonitor::with_initial_status(HealthStatus::unhealthy("Critical failure"));
        let status = monitor.track_failure().await;
        assert!(status.is_unhealthy());
        assert_eq!(status, HealthStatus::unhealthy("Critical failure"));
    }

    #[tokio::test]
    async fn test_mark_unhealthy() {
        let monitor = HealthMonitor::new();
        let status = monitor.mark_unhealthy("Device disconnected").await;
        assert!(status.is_unhealthy());
        assert_eq!(status, HealthStatus::unhealthy("Device disconnected"));
    }

    #[tokio::test]
    async fn test_mark_unhealthy_overwrites_previous() {
        let monitor = HealthMonitor::with_initial_status(HealthStatus::degraded(3));
        let status = monitor.mark_unhealthy("New failure").await;
        assert!(status.is_unhealthy());
        assert_eq!(status, HealthStatus::unhealthy("New failure"));
    }

    #[tokio::test]
    async fn test_get_status() {
        let monitor = HealthMonitor::with_initial_status(HealthStatus::degraded(2));
        let status = monitor.get_status().await;
        assert_eq!(status, HealthStatus::degraded(2));
    }

    #[tokio::test]
    async fn test_is_healthy() {
        let healthy_monitor = HealthMonitor::new();
        assert!(healthy_monitor.is_healthy().await);

        let degraded_monitor = HealthMonitor::with_initial_status(HealthStatus::degraded(1));
        assert!(!degraded_monitor.is_healthy().await);

        let unhealthy_monitor =
            HealthMonitor::with_initial_status(HealthStatus::unhealthy("Failure"));
        assert!(!unhealthy_monitor.is_healthy().await);
    }

    #[tokio::test]
    async fn test_is_degraded() {
        let healthy_monitor = HealthMonitor::new();
        assert!(!healthy_monitor.is_degraded().await);

        let degraded_monitor = HealthMonitor::with_initial_status(HealthStatus::degraded(1));
        assert!(degraded_monitor.is_degraded().await);

        let unhealthy_monitor =
            HealthMonitor::with_initial_status(HealthStatus::unhealthy("Failure"));
        assert!(!unhealthy_monitor.is_degraded().await);
    }

    #[tokio::test]
    async fn test_is_unhealthy() {
        let healthy_monitor = HealthMonitor::new();
        assert!(!healthy_monitor.is_unhealthy().await);

        let degraded_monitor = HealthMonitor::with_initial_status(HealthStatus::degraded(1));
        assert!(!degraded_monitor.is_unhealthy().await);

        let unhealthy_monitor =
            HealthMonitor::with_initial_status(HealthStatus::unhealthy("Failure"));
        assert!(unhealthy_monitor.is_unhealthy().await);
    }

    #[tokio::test]
    async fn test_state_transitions_sequence() {
        let monitor = HealthMonitor::new();

        // Start healthy
        assert!(monitor.is_healthy().await);

        // First failure -> Degraded with 1 error
        let status = monitor.track_failure().await;
        assert!(status.is_degraded());
        assert_eq!(status, HealthStatus::degraded(1));

        // Second failure -> Degraded with 2 errors
        let status = monitor.track_failure().await;
        assert_eq!(status, HealthStatus::degraded(2));

        // Third failure -> Degraded with 3 errors
        let status = monitor.track_failure().await;
        assert_eq!(status, HealthStatus::degraded(3));

        // Recovery -> Healthy
        let status = monitor.track_success().await;
        assert!(status.is_healthy());

        // Another failure -> Degraded with 1 error
        let status = monitor.track_failure().await;
        assert_eq!(status, HealthStatus::degraded(1));

        // Mark as unhealthy -> Unhealthy
        let status = monitor.mark_unhealthy("Critical error").await;
        assert!(status.is_unhealthy());

        // Recovery from unhealthy -> Healthy
        let status = monitor.track_success().await;
        assert!(status.is_healthy());
    }

    #[tokio::test]
    async fn test_concurrent_access() {
        let monitor = HealthMonitor::new();

        // Create multiple tasks that access the monitor concurrently
        // We need to clone the monitor for each task since tokio::spawn requires 'static
        let monitor1 = monitor.clone();
        let monitor2 = monitor.clone();
        let monitor3 = monitor.clone();
        let monitor4 = monitor.clone();

        let task1 = tokio::spawn(async move { monitor1.track_failure().await });
        let task2 = tokio::spawn(async move { monitor2.track_failure().await });
        let task3 = tokio::spawn(async move { monitor3.track_success().await });
        let task4 = tokio::spawn(async move { monitor4.get_status().await });

        // Wait for all tasks to complete
        let (result1, result2, result3, result4) = tokio::join!(task1, task2, task3, task4);

        // All operations should complete without panicking
        result1.expect("Task should complete successfully");
        result2.expect("Task should complete successfully");
        result3.expect("Task should complete successfully");
        result4.expect("Task should complete successfully");

        // Final status should be healthy (due to the success operation)
        let final_status = monitor.get_status().await;
        assert!(final_status.is_healthy());
    }
}

backend/src/application/health/mod.rs  (new file, 6 lines)
@@ -0,0 +1,6 @@
//! Health monitoring application layer.
//!
//! This module contains the health monitoring service that tracks the system's
//! health status and manages state transitions between healthy, degraded, and unhealthy states.

pub mod health_monitor;

@@ -11,6 +11,11 @@
//! - **Use case driven**: Each module represents a specific business use case
//! - **Testable in isolation**: Can be tested with mock infrastructure implementations
//!
//! # Submodules
//!
//! - `health`: Health monitoring service
//! - `health_monitor`: Tracks system health status and state transitions
//!
//! # Planned Submodules
//!
//! - `relay`: Relay control use cases
@@ -58,3 +63,5 @@
//! - Architecture: `specs/constitution.md` - Hexagonal Architecture principles
//! - Use cases: `specs/001-modbus-relay-control/plan.md` - Implementation plan
//! - Domain types: [`crate::domain`] - Domain entities and value objects

pub mod health;

@@ -10,6 +10,10 @@ use super::*;
mod t025a_connection_setup_tests {
use super::*;

static HOST: &str = "192.168.1.200";
static PORT: u16 = 502;
static SLAVE_ID: u8 = 1;

/// T025a Test 1: `new()` with valid config connects successfully
///
/// This test verifies that `ModbusRelayController::new()` can establish
@@ -21,13 +25,10 @@ mod t025a_connection_setup_tests {
#[ignore = "Requires running Modbus TCP server"]
async fn test_new_with_valid_config_connects_successfully() {
// Arrange: Use localhost test server
let host = "127.0.0.1";
let port = 5020; // Test Modbus TCP port
let slave_id = 1;
let timeout_secs = 5;

// Act: Attempt to create controller
let result = ModbusRelayController::new(host, port, slave_id, timeout_secs).await;
let result = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs).await;

// Assert: Connection should succeed
assert!(
@@ -45,12 +46,10 @@ mod t025a_connection_setup_tests {
async fn test_new_with_invalid_host_returns_connection_error() {
// Arrange: Use invalid host format
let host = "not a valid host!!!";
let port = 502;
let slave_id = 1;
let timeout_secs = 5;

// Act: Attempt to create controller
let result = ModbusRelayController::new(host, port, slave_id, timeout_secs).await;
let result = ModbusRelayController::new(host, PORT, SLAVE_ID, timeout_secs).await;

// Assert: Should return ConnectionError
assert!(result.is_err(), "Expected ConnectionError for invalid host");
@@ -74,13 +73,11 @@ mod t025a_connection_setup_tests {
async fn test_new_with_unreachable_host_returns_connection_error() {
// Arrange: Use localhost with a closed port (port 1 is typically closed)
// This gives instant "connection refused" instead of waiting for TCP timeout
let host = "127.0.0.1";
let port = 1; // Closed port for instant connection failure
let slave_id = 1;
let timeout_secs = 1;

// Act: Attempt to create controller
let result = ModbusRelayController::new(host, port, slave_id, timeout_secs).await;
let result = ModbusRelayController::new(HOST, port, SLAVE_ID, timeout_secs).await;

// Assert: Should return ConnectionError
assert!(
@@ -100,13 +97,10 @@ mod t025a_connection_setup_tests {
#[ignore = "Requires running Modbus TCP server or refactoring to expose timeout"]
async fn test_new_stores_correct_timeout_duration() {
// Arrange
let host = "127.0.0.1";
let port = 5020;
let slave_id = 1;
let timeout_secs = 10;

// Act
let controller = ModbusRelayController::new(host, port, slave_id, timeout_secs)
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
.await
.expect("Failed to create controller");

@@ -137,6 +131,10 @@ mod t025b_read_coils_timeout_tests {
types::RelayId,
};

static HOST: &str = "192.168.1.200";
static PORT: u16 = 502;
static SLAVE_ID: u8 = 1;

/// T025b Test 1: `read_coils_with_timeout()` returns coil values on success
///
/// This test verifies that reading coils succeeds when the Modbus server
@@ -147,7 +145,7 @@ mod t025b_read_coils_timeout_tests {
#[ignore = "Requires running Modbus TCP server with known state"]
async fn test_read_coils_returns_coil_values_on_success() {
// Arrange: Connect to test server
let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
.await
.expect("Failed to connect to test server");

@@ -251,6 +249,10 @@ mod t025c_write_single_coil_timeout_tests {
types::{RelayId, RelayState},
};

static HOST: &str = "192.168.1.200";
static PORT: u16 = 502;
static SLAVE_ID: u8 = 1;

/// T025c Test 1: `write_single_coil_with_timeout()` succeeds for valid write
///
/// This test verifies that writing to a coil succeeds when the Modbus server
@@ -261,7 +263,7 @@ mod t025c_write_single_coil_timeout_tests {
#[ignore = "Requires running Modbus TCP server"]
async fn test_write_single_coil_succeeds_for_valid_write() {
// Arrange: Connect to test server
let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
.await
.expect("Failed to connect to test server");

@@ -336,6 +338,10 @@ mod t025d_read_relay_state_tests {
types::{RelayId, RelayState},
};

static HOST: &str = "192.168.1.200";
static PORT: u16 = 502;
static SLAVE_ID: u8 = 1;

/// T025d Test 1: `read_relay_state(RelayId(1))` returns On when coil is true
///
/// This test verifies that a true coil value is correctly converted to `RelayState::On`.
@@ -409,7 +415,7 @@ mod t025d_read_relay_state_tests {
#[ignore = "Requires Modbus server with specific relay states"]
async fn test_read_state_correctly_maps_relay_id_to_modbus_address() {
// Arrange: Connect to test server with known relay states
let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
.await
.expect("Failed to connect to test server");

@@ -434,6 +440,10 @@ mod t025e_write_relay_state_tests {
types::{RelayId, RelayState},
};

static HOST: &str = "192.168.1.200";
static PORT: u16 = 502;
static SLAVE_ID: u8 = 1;

/// T025e Test 1: `write_relay_state(RelayId(1), RelayState::On)` writes true to coil
///
/// This test verifies that `RelayState::On` is correctly converted to a true coil value.
@@ -441,7 +451,7 @@ mod t025e_write_relay_state_tests {
#[ignore = "Requires Modbus server that can verify written values"]
async fn test_write_state_on_writes_true_to_coil() {
// Arrange: Connect to test server
let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
.await
.expect("Failed to connect to test server");

@@ -475,7 +485,7 @@ mod t025e_write_relay_state_tests {
#[ignore = "Requires Modbus server that can verify written values"]
async fn test_write_state_off_writes_false_to_coil() {
// Arrange: Connect to test server
let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
.await
.expect("Failed to connect to test server");

@@ -509,7 +519,7 @@ mod t025e_write_relay_state_tests {
#[ignore = "Requires Modbus server"]
async fn test_write_state_correctly_maps_relay_id_to_modbus_address() {
// Arrange: Connect to test server
let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
.await
.expect("Failed to connect to test server");

@@ -537,7 +547,7 @@ mod t025e_write_relay_state_tests {
#[ignore = "Requires Modbus server"]
async fn test_write_state_can_toggle_relay_multiple_times() {
// Arrange: Connect to test server
let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
.await
.expect("Failed to connect to test server");

@@ -571,12 +581,16 @@ mod t025e_write_relay_state_tests {
mod write_all_states_validation_tests {
use super::*;

static HOST: &str = "192.168.1.200";
static PORT: u16 = 502;
static SLAVE_ID: u8 = 1;

/// Test: `write_all_states()` returns `InvalidInput` when given 0 states
#[tokio::test]
#[ignore = "Requires Modbus server"]
async fn test_write_all_states_with_empty_vector_returns_invalid_input() {
// Arrange: Connect to test server
let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
.await
.expect("Failed to connect to test server");

@@ -596,7 +610,7 @@ mod write_all_states_validation_tests {
#[ignore = "Requires Modbus server"]
async fn test_write_all_states_with_7_states_returns_invalid_input() {
// Arrange: Connect to test server
let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
.await
.expect("Failed to connect to test server");

@@ -626,7 +640,7 @@ mod write_all_states_validation_tests {
#[ignore = "Requires Modbus server"]
async fn test_write_all_states_with_9_states_returns_invalid_input() {
// Arrange: Connect to test server
let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
.await
.expect("Failed to connect to test server");

@@ -656,7 +670,7 @@ mod write_all_states_validation_tests {
#[ignore = "Requires Modbus server"]
async fn test_write_all_states_with_8_states_succeeds() {
// Arrange: Connect to test server
let controller = ModbusRelayController::new("127.0.0.1", 5020, 1, 5)
let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, 5)
.await
.expect("Failed to connect to test server");

@@ -124,7 +124,10 @@ mod relay_label_repository_contract_tests {
.expect("Second save should succeed");

// Verify only the second label is present
let result = repo.get_label(relay_id).await.expect("get_label should succeed");
let result = repo
.get_label(relay_id)
.await
.expect("get_label should succeed");
assert!(result.is_some(), "Label should exist");
assert_eq!(
result.unwrap().as_str(),
@@ -270,11 +273,17 @@ mod relay_label_repository_contract_tests {
.expect("delete should succeed");

// Verify deleted label is gone
let get_result = repo.get_label(relay2).await.expect("get_label should succeed");
let get_result = repo
.get_label(relay2)
.await
.expect("get_label should succeed");
assert!(get_result.is_none(), "Deleted label should not exist");

// Verify other label still exists
let other_result = repo.get_label(relay1).await.expect("get_label should succeed");
let other_result = repo
.get_label(relay1)
.await
.expect("get_label should succeed");
assert!(other_result.is_some(), "Other label should still exist");

// Verify get_all_labels only returns the remaining label

backend/tests/modbus_hardware_test.rs  (new file, 253 lines)
@@ -0,0 +1,253 @@
// Integration tests for Modbus hardware
// These tests require physical Modbus relay device to be connected
// Run with: cargo test -- --ignored

use std::time::Duration;

#[cfg(test)]
mod tests {
    use super::*;
    use sta::domain::relay::controller::RelayController;
    use sta::domain::relay::types::{RelayId, RelayState};
    use sta::infrastructure::modbus::client::ModbusRelayController;

    static HOST: &str = "192.168.1.200";
    static PORT: u16 = 502;
    static SLAVE_ID: u8 = 1;

    #[tokio::test]
    #[ignore = "Requires physical Modbus device"]
    async fn test_modbus_connection() {
        // This test verifies we can connect to the actual Modbus device
        // Configured with settings from settings/base.yaml
        let timeout_secs = 5;

        let _controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
            .await
            .expect("Failed to connect to Modbus device");

        // If we got here, connection was successful
        println!("✓ Successfully connected to Modbus device");
    }

    #[tokio::test]
    #[ignore = "Requires physical Modbus device"]
    async fn test_read_relay_states() {
        let timeout_secs = 5;

        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
            .await
            .expect("Failed to connect to Modbus device");

        // Test reading individual relay states
        for relay_id in 1..=8 {
            let relay_id = RelayId::new(relay_id).unwrap();
            let state = controller
                .read_relay_state(relay_id)
                .await
                .expect("Failed to read relay state");

            println!("Relay {}: {:?}", relay_id.as_u8(), state);
        }
    }

    #[tokio::test]
    #[ignore = "Requires physical Modbus device"]
    async fn test_read_all_relays() {
        let timeout_secs = 5;

        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
            .await
            .expect("Failed to connect to Modbus device");

        let relays = controller
            .read_all_states()
            .await
            .expect("Failed to read all relay states");

        assert_eq!(relays.len(), 8, "Should have exactly 8 relays");

        for (i, state) in relays.iter().enumerate() {
            let relay_id = i + 1;
            println!("Relay {}: {:?}", relay_id, state);
        }
    }

    #[tokio::test]
    #[ignore = "Requires physical Modbus device"]
    async fn test_write_relay_state() {
        let timeout_secs = 5;

        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
            .await
            .expect("Failed to connect to Modbus device");

        let relay_id = RelayId::new(1).unwrap();

        // Turn relay on
        controller
            .write_relay_state(relay_id, RelayState::On)
            .await
            .expect("Failed to write relay state");

        // Verify it's on
        let state = controller
            .read_relay_state(relay_id)
            .await
            .expect("Failed to read relay state");

        assert_eq!(state, RelayState::On, "Relay should be ON");

        // Turn relay off
        controller
            .write_relay_state(relay_id, RelayState::Off)
            .await
            .expect("Failed to write relay state");

        // Verify it's off
        let state = controller
            .read_relay_state(relay_id)
            .await
            .expect("Failed to read relay state");

        assert_eq!(state, RelayState::Off, "Relay should be OFF");
    }

    #[tokio::test]
    #[ignore = "Requires physical Modbus device"]
    async fn test_write_all_relays() {
        let timeout_secs = 5;

        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
            .await
            .expect("Failed to connect to Modbus device");

        // Turn all relays on
        let all_on_states = vec![RelayState::On; 8];
        controller
            .write_all_states(all_on_states)
            .await
            .expect("Failed to write all relay states");

        // Verify all are on
        let relays = controller
            .read_all_states()
            .await
            .expect("Failed to read all relay states");

        for state in &relays {
            assert_eq!(*state, RelayState::On, "All relays should be ON");
        }

        // Turn all relays off
        let all_off_states = vec![RelayState::Off; 8];
        controller
            .write_all_states(all_off_states)
            .await
            .expect("Failed to write all relay states");

        // Verify all are off
        let relays = controller
            .read_all_states()
            .await
            .expect("Failed to read all relay states");

        for state in &relays {
            assert_eq!(*state, RelayState::Off, "All relays should be OFF");
        }
    }

    #[tokio::test]
    #[ignore = "Requires physical Modbus device"]
    async fn test_timeout_handling() {
        let timeout_secs = 1; // Short timeout for testing

        let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
            .await
            .expect("Failed to connect to Modbus device");

        // This test verifies that timeout works correctly
        // We'll try to read a relay state with a very short timeout
        let relay_id = RelayId::new(1).unwrap();

        // The operation should either succeed quickly or timeout
        let result = tokio::time::timeout(
            Duration::from_secs(2),
            controller.read_relay_state(relay_id),
        )
        .await;

        match result {
            Ok(Ok(state)) => {
                println!("✓ Operation completed within timeout: {:?}", state);
            }
            Ok(Err(e)) => {
                println!("✓ Operation failed (expected): {}", e);
            }
            Err(_) => {
                println!("✓ Operation timed out (expected)");
            }
        }
    }

    #[tokio::test]
    #[ignore = "Requires physical Modbus device"]
    async fn test_concurrent_access() {
        let timeout_secs = 5;

        let _controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
            .await
            .expect("Failed to connect to Modbus device");

        // Test concurrent access to the device
        // We'll read a few relays concurrently using tokio::spawn and tokio::join!
        // Note: We can't clone the controller, so each spawned task opens its own connection
        // This is still valuable for testing that multiple relays can be accessed at the same time

        let relay_id1 = RelayId::new(1).unwrap();
        let relay_id2 = RelayId::new(2).unwrap();
        let relay_id3 = RelayId::new(3).unwrap();
        let relay_id4 = RelayId::new(4).unwrap();

        let task1 = tokio::spawn(async move {
            let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
                .await
                .expect("Failed to connect");
            controller.read_relay_state(relay_id1).await
        });
        let task2 = tokio::spawn(async move {
            let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
                .await
                .expect("Failed to connect");
            controller.read_relay_state(relay_id2).await
        });
        let task3 = tokio::spawn(async move {
            let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
                .await
                .expect("Failed to connect");
            controller.read_relay_state(relay_id3).await
        });
        let task4 = tokio::spawn(async move {
            let controller = ModbusRelayController::new(HOST, PORT, SLAVE_ID, timeout_secs)
                .await
                .expect("Failed to connect");
            controller.read_relay_state(relay_id4).await
        });

        let (result1, result2, result3, result4) = tokio::join!(task1, task2, task3, task4);

        // Process results
        if let Ok(Ok(state)) = result1 {
            println!("Relay 1: {:?}", state);
        }
        if let Ok(Ok(state)) = result2 {
            println!("Relay 2: {:?}", state);
        }
        if let Ok(Ok(state)) = result3 {
            println!("Relay 3: {:?}", state);
        }
        if let Ok(Ok(state)) = result4 {
            println!("Relay 4: {:?}", state);
        }
    }
}

justfile  (5 changed lines)
@@ -31,7 +31,10 @@ release-run:
    cargo run --release

test:
    cargo test
    cargo test --all --all-targets

test-hardware:
    cargo test --all --all-targets -- --ignored

coverage:
    mkdir -p coverage

@@ -331,13 +331,13 @@

--------------

*** STARTED T025: ModbusRelayController Implementation (DECOMPOSED) [9/13]
*** STARTED T025: ModbusRelayController Implementation (DECOMPOSED) [12/13]
- Complexity :: High → Broken into 6 sub-tasks
- Uncertainty :: High
- Rationale :: Nested Result handling, =Arc<Mutex>= synchronization, timeout wrapping
- Protocol :: Native Modbus TCP (MBAP header, no CRC16 validation)

- [X] *T025a* [US1] [TDD] Implement =ModbusRelayController= connection setup
- [X] *T025a* [US1] [TDD] Implement =ModbusRelayController= connection setup [3/3]

- Struct: =ModbusRelayController { ctx: Arc<Mutex<Context>>, timeout_duration: Duration }=
- Constructor: =new(host, port, slave_id, timeout_secs) → Result<Self, ControllerError>=
@@ -382,7 +382,7 @@
- [X] Test: =new()= with invalid host returns =ConnectionError=
- [X] Test: =new()= stores correct timeout_duration

- [X] *T025b* [US1] [TDD] Implement timeout-wrapped =read_coils= helper
- [X] *T025b* [US1] [TDD] Implement timeout-wrapped =read_coils= helper [4/4]

- Private method: =read_coils_with_timeout(addr: u16, count: u16) → Result<Vec<bool>, ControllerError>=
- Wrap =ctx.read_coils()= with =tokio::time::timeout()=
@@ -421,7 +421,7 @@
- [X] Test: =read_coils_with_timeout()= returns =ConnectionError= on =io::Error=
- [X] Test: =read_coils_with_timeout()= returns =ModbusException= on protocol error

- [X] *T025c* [US1] [TDD] Implement timeout-wrapped =write_single_coil= helper
- [X] *T025c* [US1] [TDD] Implement timeout-wrapped =write_single_coil= helper [3/3]

- Private method: =write_single_coil_with_timeout(addr: u16, value: bool) → Result<(), ControllerError>=
- Similar nested Result handling as T025b
@@ -454,7 +454,7 @@
- [X] Test: =write_single_coil_with_timeout()= returns Timeout on slow device
- [X] Test: =write_single_coil_with_timeout()= returns appropriate error on failure

- [X] *T025d* [US1] [TDD] Implement =RelayController::read_state()= using helpers
- [X] *T025d* [US1] [TDD] Implement =RelayController::read_state()= using helpers [3/3]

- Convert =RelayId= → =ModbusAddress= (0-based)
- Call =read_coils_with_timeout(addr, 1)=
@@ -482,7 +482,7 @@
- [X] Test: =read_state(RelayId(1))= returns =Off= when coil is false
- [X] Test: =read_state()= propagates =ControllerError= from helper

- [X] *T025e* [US1] [TDD] Implement =RelayController::write_state()= using helpers
- [X] *T025e* [US1] [TDD] Implement =RelayController::write_state()= using helpers [2/2]

- Convert =RelayId= → =ModbusAddress=
- Convert =RelayState= → bool (On=true, Off=false)
@@ -505,7 +505,7 @@
- [X] Test: =write_state(RelayId(1), RelayState::On)= writes true to coil
- [X] Test: =write_state(RelayId(1), RelayState::Off)= writes false to coil

- [X] *T025f* [US1] [TDD] Implement =RelayController::read_all()= and =write_all()=
- [X] *T025f* [US1] [TDD] Implement =RelayController::read_all()= and =write_all()= [3/3]

- =read_all()=: Call =read_coils_with_timeout(0, 8)=, map to =Vec<(RelayId, RelayState)>=
- =write_all()=: Loop over RelayId 1-8, call =write_state()= for each
@@ -545,7 +545,7 @@

--------------

- [ ] *T034* [US1] [TDD] Integration test with real hardware (optional)
- [X] *T034* [US1] [TDD] Integration test with real hardware (optional)
- *REQUIRES PHYSICAL DEVICE*: Test against actual Modbus relay at configured IP
- Skip if device unavailable, rely on =MockRelayController= for CI
- *File*: =tests/integration/modbus_hardware_test.rs=
@@ -570,12 +570,12 @@
- HashMap-based implementation
- *File*: =src/infrastructure/persistence/mock_label_repository.rs=
- *Complexity*: Low | *Uncertainty*: Low
- [ ] *T039* [US3] [TDD] Write tests for =HealthMonitor= service
- [X] *T039* [US3] [TDD] Write tests for =HealthMonitor= service
- Test: =track_success()= transitions =Degraded= → =Healthy=
- Test: =track_failure()= transitions =Healthy= → =Degraded= → =Unhealthy=
- *File*: =src/application/health_monitor.rs=
- *Complexity*: Medium | *Uncertainty*: Low
- [ ] *T040* [US3] [TDD] Implement =HealthMonitor= service
- [X] *T040* [US3] [TDD] Implement =HealthMonitor= service
- Track consecutive errors, transition states per FR-020, FR-021
- *File*: =src/application/health_monitor.rs=
- *Complexity*: Medium | *Uncertainty*: Low