Performance Engineering & Certification
Typical multi-layered applications are complex: they span multiple components and technologies, from the user interface down to the database, with multiple layers in between. These layers may be built on frameworks developed in-house or on third-party offerings, whether commercial or open source. Loosely coupled architectures based on SOA add further complexity. Finding the right balance between performance and the trade-offs across all these layers can be a challenging experience.
Performance engineering is an integral part of the creative process of architecting software solutions. It can be described as the verification and validation of architectural design options: candidate designs are prototyped and tested so that a decision can be made based on the priorities and constraints defined for the application, arriving at the ideal balance and best performance.
Early and rigorous performance analysis of applications is a critical step in software architecture to guarantee that the developed components and their assemblies will satisfy their operational requirements.
In general, performance engineering is an iterative process in which the software architect works closely with the developers and the perfmetrix performance engineer to better understand the options and improve the overall performance and security of the application.
The investigative process the team goes through during the software architecture stage gives it intimate knowledge of the application's behavior. As a result, one of the deliverables typically included is the most appropriate configuration of the multiple components that make up the application, fine-tuned for the expected workload in the production environment.
Performance testing is usually thought of as just running a series of tests. However, it is much more than this: it requires a clear methodology that guarantees the results are consistently accurate and reliable. Our methodology, published in various books on the subject, is simple and flexible, yet extremely powerful. It is based on the following steps:
- Define the performance criteria
- Define the test metrics
- Define the test conditions, including the monitoring tools
- Realistically simulate the application usage
- Execute the tests
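As an illustration, the steps above can be sketched as a minimal test harness. The criteria, metric names, and simulated latencies below are hypothetical stand-ins, not part of our methodology itself:

```python
import random
import statistics

random.seed(42)  # deterministic run, for illustration only

# Steps 1-2: define the performance criteria and the test metrics (values illustrative).
CRITERIA = {"p95_latency_ms": 200.0}

def simulated_request() -> float:
    """Step 4: stand-in for a real request; returns an observed latency in ms."""
    return max(0.0, random.gauss(mu=80.0, sigma=15.0))

def run_test(num_requests: int) -> dict:
    """Step 5: execute the test and compute the defined metrics."""
    latencies = sorted(simulated_request() for _ in range(num_requests))
    p95 = latencies[int(0.95 * (len(latencies) - 1))]
    return {"p95_latency_ms": p95, "mean_latency_ms": statistics.mean(latencies)}

def meets_criteria(metrics: dict) -> bool:
    """Compare the measured metrics against the defined criteria."""
    return all(metrics[name] <= limit for name, limit in CRITERIA.items())

metrics = run_test(num_requests=1000)
print(meets_criteria(metrics))  # with these simulated latencies, the criteria are met
```

In a real engagement the simulated request would be replaced by realistic application usage (step 4), and the monitoring tools from step 3 would feed the metric collection.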
The actual tests can be grouped into the following categories:
- Load tests, which are used to understand the behavior of the application under the normal load conditions expected in the production environment. The defined metrics and monitoring tools present a clear picture of the application's execution, including any issues that might occur. These tests also verify that the application's service level agreement (SLA) will be met once it is placed in the production environment, and that the fault and recovery processes are working correctly.
- Stress tests, which are used to understand what happens when the application exceeds the normal load conditions. The main objective is to determine how much the defined performance metrics degrade as the load increases. The information gained during these tests provides excellent input for capacity planning.
- Endurance tests, which explore the behavior of the application over longer time periods. These tests usually focus on the consumption of resources and tend to uncover memory leaks and database issues.
- Architectural tests, which are specific to the performance engineering function. The biggest difference from the previous tests is that they usually do not simulate the typical usage of the application, and the test conditions are extremely stressful.
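One way to make the distinction between these categories concrete is to think of each as a different load profile over time. The sketch below (with illustrative numbers) generates a target user count per second for the first three categories:

```python
def load_profile(duration_s: int, users: int) -> list:
    """Load test: steady, normal production-level load for the whole run."""
    return [users] * duration_s

def stress_profile(duration_s: int, start_users: int, step: int, step_every_s: int) -> list:
    """Stress test: ramp the load up in steps until it exceeds normal conditions."""
    return [start_users + step * (t // step_every_s) for t in range(duration_s)]

def endurance_profile(hours: int, users: int) -> list:
    """Endurance test: normal load sustained over a much longer period."""
    return [users] * (hours * 3600)

print(stress_profile(duration_s=6, start_users=10, step=10, step_every_s=2))
# -> [10, 10, 20, 20, 30, 30]
```

Architectural tests do not fit a simple profile like these, since by design they depart from typical usage patterns.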
Performance Certification
Most release management processes include a performance certification step. Depending on the process and the complexity of the application, this step might span the QA testing and integration testing stages. Performance certification typically verifies and validates that the baseline of the performance metrics taken for the previously released version is met or exceeded by the candidate release of the application.
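A minimal sketch of that baseline comparison, assuming lower-is-better metrics such as latency (the metric names and tolerance are hypothetical):

```python
def certify(baseline: dict, candidate: dict, tolerance: float = 0.0) -> list:
    """Return the metrics on which the candidate release regresses past the baseline.

    Assumes lower values are better (e.g. latencies); `tolerance` is the allowed
    fractional slack, e.g. 0.05 for 5%. A missing metric also counts as a regression.
    """
    regressions = []
    for name, base in baseline.items():
        cand = candidate.get(name)
        if cand is None or cand > base * (1.0 + tolerance):
            regressions.append(name)
    return regressions

print(certify({"p95_latency_ms": 200.0}, {"p95_latency_ms": 190.0}))  # -> []
```

An empty result means the candidate release meets or exceeds the previous release's baseline and passes this check.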
Our services include the definition of the performance criteria and metrics, the test environments and conditions (including checklists), the simulated test loads, and the types of performance tests that will be used for this certification.
Performance Monitoring
Once the application has been successfully placed in the production environment, we strongly encourage corporations to proactively monitor its performance as used by their customers. Any divergence from the service level agreements should raise alerts to the customer service group, which can then act promptly to resolve the issue. Our services include the definition and creation of the monitoring agents and the subsequent processes to achieve prompt resolution.
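As a sketch of such a monitoring agent, each metrics sample can be compared against the SLA thresholds, emitting an alert on any divergence. The thresholds and metric names below are illustrative; real values come from the agreed SLA:

```python
# Illustrative SLA thresholds (not from any real agreement).
SLA = {"p99_latency_ms": 500.0, "error_rate": 0.01}

def check_sla(sample: dict) -> list:
    """Return an alert message for every metric in the sample that breaches the SLA."""
    return [
        f"SLA breach: {name}={sample[name]} exceeds limit {limit}"
        for name, limit in SLA.items()
        if sample.get(name, 0.0) > limit
    ]

print(check_sla({"p99_latency_ms": 620.0, "error_rate": 0.002}))
# one alert, for p99_latency_ms only
```

In production, the returned alerts would be routed to the customer service group rather than printed.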
The Opallios Advantage
We believe that successful Web 2.0 products need UX and engineering to work together cohesively. In our ecosystem, UX teams work hand-in-hand with our engineers in an agile manner to create awesome products for you.