Performax - The art of performance engineering

Central to our methodology for characterizing and improving application performance is the ability to emulate a realistic workload in a controlled environment. Trends in response time, throughput, and hardware utilization are observed as load is varied to determine application stability and scalability. Defining and constructing an automated workload is the first and most critical phase of any performance project. If the wrong business processes are selected, or the emulation is inaccurate, the project will either fail to deliver the expected return on investment or, worse, set the wrong expectations for the application's performance characteristics. Once a realistic workload can be emulated, different types of analysis can be performed depending on project goals.
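
To illustrate the approach, here is a minimal sketch of a load-emulation loop. It assumes a hypothetical HTTP endpoint (URL, path, and request counts are placeholders, not part of any real environment) standing in for one scripted business process, steps the concurrency up, and records the response-time and throughput trends described above.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical endpoint standing in for one scripted business process.
URL = "http://localhost:8080/order/submit"
REQUESTS_PER_LEVEL = 200

def timed_request(url: str) -> float:
    """Issue one request and return its response time in seconds."""
    start = time.perf_counter()
    with urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

def run_at_load(concurrency: int) -> None:
    """Drive the endpoint at a fixed concurrency and print the trend metrics."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(timed_request, [URL] * REQUESTS_PER_LEVEL))
    elapsed = time.perf_counter() - start
    print(f"concurrency={concurrency:3d}  "
          f"throughput={REQUESTS_PER_LEVEL / elapsed:6.1f} req/s  "
          f"median={statistics.median(latencies) * 1000:6.1f} ms  "
          f"p95={statistics.quantiles(latencies, n=20)[-1] * 1000:6.1f} ms")

if __name__ == "__main__":
    # Step the load up and observe how response time and throughput trend.
    for concurrency in (1, 5, 10, 25, 50):
        run_at_load(concurrency)
```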

Each type of analysis is summarized below in terms of its methodology and its deliverables.

Regression

Methodology:
  • Measure response time and throughput vs. load on arbitrary hardware. Ideally, the application database is representative of the majority of customers in volume and data demographics.
  • Tests compare relative performance while making one change at a time, as sketched at the end of this section.
  • Typical changes evaluated include new software versions, tuning, and code optimization.

Deliverables:
Regression analysis is ideal for testing as part of the QA cycle, system tuning, database tuning, and application optimization. It is not suitable for setting expectations for production environments because it lacks sensitivity analysis and production-strength hardware.

Reports typically include response time, throughput, and hardware resource utilization vs. load, as well as application stability characteristics under stress.
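
For illustration only, a minimal sketch of the one-change-at-a-time comparison follows. The numbers are hypothetical response times from two otherwise identical runs, one baseline and one with a single change applied.

```python
import statistics

def compare_runs(baseline_ms, candidate_ms, tolerance=0.05):
    """Compare median response times between a baseline run and a run with
    exactly one change applied; flag anything slower than the tolerance."""
    base = statistics.median(baseline_ms)
    cand = statistics.median(candidate_ms)
    delta = (cand - base) / base
    verdict = "REGRESSION" if delta > tolerance else "OK"
    print(f"baseline={base:.1f} ms  candidate={cand:.1f} ms  "
          f"delta={delta:+.1%}  {verdict}")

# Illustrative numbers only: response times (ms) from two otherwise identical runs.
compare_runs([120, 118, 125, 130, 122], [131, 129, 140, 138, 133])
```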

Benchmark

Methodology:
  • Hardware and data are representative of a specific production environment.
  • An iterative approach using univariate analysis is applied to optimize code and tune the environment until project goals are met or a point of diminishing returns is reached.

Deliverables:
Benchmark analysis is useful for setting expectations for a specific production environment. A pass or fail result is reported against predefined requirements.

Documentation typically includes the environment, pass/fail criteria, response time, throughput, hardware utilization statistics, and the optimizations performed to achieve the results.
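
A minimal sketch of how measured results might be checked against predefined pass/fail criteria is shown below; the criteria names, limits, and measurements are hypothetical.

```python
# Hypothetical pass/fail criteria agreed up front for a specific production environment.
CRITERIA = {
    "p95_response_ms": 500.0,   # 95th percentile response time must stay under 500 ms
    "throughput_rps": 200.0,    # sustained throughput must reach at least 200 req/s
}

def evaluate(measured):
    """Print a pass/fail line for each predefined requirement and an overall verdict."""
    overall = True
    for name, limit in CRITERIA.items():
        value = measured[name]
        passed = value <= limit if name.endswith("_ms") else value >= limit
        overall = overall and passed
        print(f"{name:18s} measured={value:8.1f}  limit={limit:8.1f}  "
              f"{'PASS' if passed else 'FAIL'}")
    print("overall:", "PASS" if overall else "FAIL")
    return overall

# Illustrative measurements from one benchmark iteration.
evaluate({"p95_response_ms": 430.0, "throughput_rps": 215.0})
```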

Performance Characterization

Methodology:
  • Measure capacity on a variety of hardware configurations, varying the number and speed of processors and/or the I/O subsystems.
  • Data volumes and/or content are varied as part of a sensitivity analysis (see the sweep sketch below).
  • Software is optimized and evaluated on "production strength" hardware.

Deliverables:
A characterization report and/or server sizing tools that set expectations for capacity, throughput, and responsiveness across operating systems, databases, server configurations, data volumes, and hardware topologies. These are used for infrastructure budgeting and capacity planning.
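
As a rough illustration, a sensitivity sweep can be organized as a grid over the hardware and data dimensions; the dimensions and values below are hypothetical.

```python
from itertools import product

# Hypothetical characterization matrix: each dimension is varied independently
# so capacity can later be expressed as a function of hardware and data volume.
cpu_counts = (4, 8, 16)
database_rows = (1_000_000, 10_000_000, 100_000_000)

for cpus, rows in product(cpu_counts, database_rows):
    # In a real project, the workload defined above would be driven here and
    # the capacity, throughput, and response-time results recorded per cell.
    print(f"plan run: {cpus} processors against a {rows:,}-row database")
```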