Action Items Needed to Drive Performance Improvement

Below is a list of the actions companies typically need to take in order to succeed with load testing and actually improve the performance of their applications. Many of these action items have to do with developing what I have called a "performance consciousness": not just slapping a feature together, but getting developers, managers, testers, and PMs thinking, "when we code this feature, how will it perform, and how will it impact the performance of the site as a whole?"

A "performance culture" is an organization that is committed to measurement, and that has processes in place to steer the team on the basis of those measurements.

Poorly performing applications are in that state because the development organizations are not measuring. As Rico Mariani, a performance engineer on the .NET Framework, says:

    "If you're not measuring, you're not engineering." 

Many organizations are not active enough about collecting and measuring the performance of their application, and therefore lack the data needed to prioritize and schedule performance improvements. So the improvements never get made.

There is a tendency  for organizations to rely on LoadRunner as a "magic bullet" which is supposed to solve all the load problems (after all, it costs enough money, right? It ought to  be able to write code itself for that price!) But LoadRunner is really only a measurement tool. It is a set of calipers. Nothing else. It doesn’t make code changes for you. The performance tester and the developer have to sit down and read the data together, decide on what changes might make a performance improvement, make those changes and then test again.

Every book on performance testing describes it as an iterative process.

The following are types of measurements organizations frequently miss:

Measurement not taken: SQL Server traces are not collected and analyzed.
Information obtained: The time taken by SQL Server to process transactions.
Actions implied: A well-tuned SQL Server should process the requests many users make during peak hours in a second or less; the DB is architected for that. Whittle down the transactions that take the longest.

Measurement not taken: IIS logs and page hit data are not collected and analyzed.
Information obtained: The time taken by IIS and SQL Server in combination to process requests. Correlating this with the SQL data lets you decide whether the ASP.NET code is adding excessive page load time.
Actions implied: Structure accurate load tests.

Measurement not taken: WAN data is not collected and analyzed.
Information obtained: How the application performs as the end user sees it.
Actions implied: Whittle down page weight, and investigate possible network infrastructure issues.
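As a minimal sketch of the kind of IIS log analysis described above, the snippet below ranks pages by average response time. It assumes logs in W3C extended format whose `#Fields:` directive includes `cs-uri-stem` and `time-taken` (milliseconds); it is an illustration, not production tooling.

```python
# Minimal sketch: rank URLs by average response time from an IIS log
# in W3C extended format. Assumes the #Fields: directive includes
# cs-uri-stem and time-taken (in milliseconds).
from collections import defaultdict

def slowest_pages(log_path, top_n=10):
    totals = defaultdict(lambda: [0, 0])   # url -> [sum_ms, hit_count]
    fields = []
    with open(log_path) as log:
        for line in log:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]   # column names for data rows
                continue
            if line.startswith("#") or not line.strip():
                continue                    # skip other directives and blanks
            row = dict(zip(fields, line.split()))
            try:
                ms = int(row["time-taken"])
            except (KeyError, ValueError):
                continue                    # row missing or malformed timing
            bucket = totals[row.get("cs-uri-stem", "?")]
            bucket[0] += ms
            bucket[1] += 1
    ranked = sorted(totals.items(),
                    key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    return [(url, s / n, n) for url, (s, n) in ranked[:top_n]]
```

Real logs bring complications this sketch ignores (query strings, image and script requests, log rollover), but even this level of analysis is enough to put the slowest pages at the top of the list.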

Application performance improves when a team starts to measure accurately and has clear reporting in front of it. Once those two things are in place, people don't need to be beaten over the head to make the improvements.

The Importance of Quantification
It takes measurement, and clear reports about those measurements, to catalyze a team to make changes. Developers often have clear ideas about which changes would improve performance. However, their excellent ideas frequently don't make it into the build, because project managers need numbers in order to prioritize performance improvements. Without measurements, the priority of a particular idea keeps slipping, and it never makes it onto the dev schedule. Until page load times are quantified, an organization tends to be paralyzed in terms of making performance improvements.

Administrators and managers also need accurate, reliable, believable numbers in order to justify their budget for faster hardware or more dev time. That is, they need to know that if they make a performance improvement, they can save X dollars on hardware, for instance.

The whole organization revolves around measurement; without it, an organization is paralyzed.

Getting Active and Agile
Many organizations need to get more active and agile with the "calipers," that is, with their load and performance measuring tool, whether that is LoadRunner or some other tool. The Controller needs to be moved around to different locations within the network architecture (inside vs. outside the firewall, for instance), the measurements need to be more frequent, and people in the organization need to read and understand them. This needs to occur for the same reasons mechanical engineers create accurate blueprints made to scale: in making the blueprint, the ME does not measure with the calipers just once; he measures from all different angles. Only then can a team start to take action and work together on the basis of the measurements.

Isolating Peak Hours
Organizations often know how many page hits they get per day or per week (they need this data for sales reasons). However, hits per day or per week tell little about why users cannot get onto your site during certain hours, what your CPU loads are throughout the day, or whether your firewall is overloaded. In order to give developers measurements that point to something they can actually fix, organizations need to measure what end users are seeing, and need to think in terms of a peak hour, not a peak day. When asked about average CPU utilization, everyone, from the operations team to developers to project managers, should know that we mean the average for the peak hour, not the daily average.
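The peak-hour vs. daily-average distinction can be made concrete with a short sketch. The sample numbers below are invented; the point is how much a daily average can hide.

```python
# Minimal sketch: given (hour, cpu_percent) samples for one day, compare
# the daily average against the average for the peak hour. Sample data
# in the test is invented for illustration.
from collections import defaultdict

def peak_hour_cpu(samples):
    """samples: iterable of (hour, cpu_percent) pairs.
    Returns (peak_hour, peak_hour_avg, daily_avg)."""
    samples = list(samples)
    by_hour = defaultdict(list)
    for hour, cpu in samples:
        by_hour[hour].append(cpu)
    hourly_avg = {h: sum(v) / len(v) for h, v in by_hour.items()}
    peak_hour = max(hourly_avg, key=hourly_avg.get)
    daily_avg = sum(cpu for _, cpu in samples) / len(samples)
    return peak_hour, hourly_avg[peak_hour], daily_avg
```

A server that averages 44% CPU over the day may be running at 85% during its peak hour; only the peak-hour number tells the developers anything they can fix.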

Tasks Typically Needed to Get the Most Out Of Performance Testing






Develop a rigid dedication to accuracy in the execution and reporting of load tests. Money and time spent on LoadRunner is wasted if the data is not accurate and the tests are not run *exactly* the same way every time. It is very easy to lose pieces of load tests, or to run a test with the wrong settings. And since we want to measure our progress, BE SURE TO LABEL and KEEP all FILES! The following items would help in this area:

·        A load test pre-launch checklist, developed and used (are the run time settings accurate?)

·        A load test run time checklist, developed and used: things to check to make sure the test run is running correctly

·        A load test post-launch checklist, developed and used (was the HTML report published in the correct place? Were the .lrr files saved and labeled? They are necessary to go back and check data.)




Cross-training. Other team members need to learn to run load tests. What if the main load tester gets sick? Cross-training also builds confidence in load reports, and an appreciation of the work that goes into load testing.

Team members need to learn to read the summary page of a LoadRunner report, and to know from it whether a run is good or bad. They also need to understand how this summary page correlates with production page hits.


Automated WAN testing with some tool, run on a continuing basis. There are tools which do this (Keynote and Omniture, for example), or these tests can be created by gathering data from remote machines.



Weekly load and performance project management meetings. These would cover which performance fixes are going to take place and which upcoming tests are needed. Such meetings should be separate from L&P training. L&P PM meetings seem to work best with a smaller team of 5-6.


Implement automated processes to gather and analyze various metrics on a weekly basis, i.e., what the site is doing behind the scenes: page weights, DB calls, number of sproc calls, which pages call which sprocs, etc. This is something that should be discussed in a load and performance meeting.


Weekly analysis of stored procedure counts and metrics from production SQL Server traces.

Weekly analysis of WildMetrix or other page load data, for the number of hits and the load time of each page.

Weekly analysis of page load times, both locally and across the WAN.

A process to feed the information generated by the above analyses into prioritizing fixes. E.g., the longest-running stored procs might be tackled first, or the longest-running pages.

A process (a hardware environment as well as a human resource process) that allows developers to run unit-level load tests BEFORE they check something into the build.

A solid, shared understanding of the different kinds of tests: capacity tests vs. unit tests.

A thorough analysis of all the calls our pages make to the DB on the major paths through the web application. Teams should know the reason for every call our pages make to the database, and why it is done that way.

Load scenarios for different times of the month and year.

A requirement that features be load tested in some way before they are checked in.

A list of the page weights of all our pages. (This is actually a feature of LoadRunner.)
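The prioritization process described above (longest-running stored procs tackled first) can be sketched in a few lines. The rows here stand in for a production SQL Server trace export, and the procedure names are made up for illustration.

```python
# Minimal sketch of feeding trace data into fix prioritization: rank
# stored procedures by total duration so the worst offenders go to the
# top of the dev schedule. Procedure names below are hypothetical.
from collections import defaultdict

def rank_sprocs(trace_rows):
    """trace_rows: iterable of (sproc_name, duration_ms) pairs.
    Returns [(name, total_ms, call_count, avg_ms)], worst first."""
    stats = defaultdict(lambda: [0, 0])   # name -> [total_ms, call_count]
    for name, ms in trace_rows:
        stats[name][0] += ms
        stats[name][1] += 1
    ranked = sorted(stats.items(), key=lambda kv: kv[1][0], reverse=True)
    return [(name, total, calls, total / calls)
            for name, (total, calls) in ranked]
```

Ranking by total time rather than average time is a deliberate choice: a fast sproc called ten thousand times an hour can cost more than a slow one called twice, and the weekly L&P meeting can decide which end of the list to attack.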

Another way of saying all of the above about measurement is the more commonly used phrase "Test Driven Development."

The measurements of the tests drive development.

An organization is headed in the right direction if it is gathering tests and measurements of the web site as its end users see it, and is making coding decisions based on those tests and measurements.