Stress Analytics
Welcome to PruTAN's Stress Analytics, your dedicated platform for gaining insights into the performance and resilience of your application under various stress scenarios. In the ever-evolving landscape of digital experiences, understanding how your application responds to heightened loads is paramount for ensuring a seamless user experience.
Analytics provides comprehensive data on how your system behaves under different testing modes, offering valuable perspectives on both Volume and Duration scenarios.
Below, you'll find an explanation of the terms used on PruTAN's Stress Analytics page to help you understand your test results better.
Image 1: Test Result
The Test Result section displays the overall performance metrics and detailed reports from your stress test execution. Users can view comprehensive transaction data broken down by time buckets, including success rates, error counts, timeouts, and performance indicators like Tpms (Transactions Per Millisecond), latency, and response times. This section provides a holistic view of how your system performed throughout the test duration.
Result Terms:
Bucket: The overall duration of the performance test can be divided into multiple time ranges, or buckets, as needed. Each bucket represents a specific period of time measured in milliseconds.
Total Transactions: Total Transactions refers to the aggregate number of interactions or operations performed in a defined period.
Success: Success refers to the aggregate number of successful interactions or operations performed in a defined period.
Error: Error refers to the aggregate number of failed interactions or operations performed in a defined period.
TimeOut: TimeOut refers to the aggregate number of interactions or operations that took longer than the specified Timeout value to complete.
Tpms: Transactions Per Millisecond (Tpms) refers to the rate at which transactions, operations, or requests are processed within a single millisecond. It provides a fine-grained assessment of performance, particularly in real-time or high-speed processing environments where response times are critical.
Latency: Latency refers to the time it takes for a system to respond to a request or perform an operation under load. It's the delay between initiating an action and seeing its first response. High latency indicates potential performance issues or bottlenecks within the system.
Response Time: Response time refers to the duration between sending a request to a system under load and receiving a complete response from that system. It encompasses the time taken for the system to process the request and generate a response, including any network latency, server processing time, and data transfer time.
Not Applicable: Requests that take longer to execute than the Timeout fall into this category.
Warm-up: Requests that execute during the Warm-up period fall into this category.
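As a rough illustration of how the per-bucket figures above relate to one another (a minimal sketch, not PruTAN's actual implementation; the record fields, bucket size, and timeout value are assumptions), raw transactions can be grouped into time buckets and counted like this:

```python
from dataclasses import dataclass

@dataclass
class Txn:
    start_ms: int      # offset from test start, in milliseconds
    duration_ms: int   # time the transaction took to complete
    ok: bool           # whether the transaction succeeded

def bucket_metrics(txns, bucket_ms=1000, timeout_ms=500):
    """Aggregate transactions into fixed-width time buckets."""
    buckets = {}
    for t in txns:
        b = t.start_ms // bucket_ms  # which bucket this transaction starts in
        m = buckets.setdefault(b, {"total": 0, "success": 0, "error": 0, "timeout": 0})
        m["total"] += 1
        if t.duration_ms > timeout_ms:
            m["timeout"] += 1        # exceeded the configured Timeout
        elif t.ok:
            m["success"] += 1
        else:
            m["error"] += 1
    for m in buckets.values():
        # Transactions Per Millisecond: transactions in the bucket / bucket width
        m["tpms"] = m["total"] / bucket_ms
    return buckets
```

With one-second buckets, a bucket containing 2 transactions yields a Tpms of 0.002, which matches the fine-grained, per-millisecond rate described above.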
Image 2: Region
The Region tab provides individual performance data for each host selected when multiple hosts have been configured for the test. This allows users to analyze performance metrics across different geographic regions or server locations, helping identify regional performance variations, network latency issues, or load distribution problems. Users can compare metrics like response times, success rates, and throughput for each region separately.
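To make the per-region comparison concrete, here is a small sketch, assuming per-transaction records tagged with a region name (the tuple layout is a hypothetical format, not PruTAN's export schema), of computing average response time and success rate for each region:

```python
from collections import defaultdict

def region_summary(records):
    """records: iterable of (region, response_ms, ok) tuples.
    Returns average response time and success rate per region."""
    acc = defaultdict(lambda: {"n": 0, "ok": 0, "resp_sum": 0})
    for region, response_ms, ok in records:
        a = acc[region]
        a["n"] += 1
        a["ok"] += 1 if ok else 0
        a["resp_sum"] += response_ms
    return {
        region: {
            "avg_response_ms": a["resp_sum"] / a["n"],
            "success_rate": a["ok"] / a["n"],
        }
        for region, a in acc.items()
    }
```

Comparing these summaries side by side is how regional latency differences or uneven load distribution would show up in the Region tab.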
Image 3: Request Configuration
The Request Configuration section displays the specific settings and parameters that were used to execute the performance test. It shows details such as the number of virtual users, test duration, ramp-up period, timeout values, and other configuration parameters that were set during test setup. This information helps users understand the exact conditions under which the test was performed and correlate results with the test configuration.