Classically, 4 types of people constitute a smoothly functioning Load and Performance team:
- A Performance PM
- A Load and Performance Developer. Generally this is the strongest developer in the company.
- Testers familiar with load tools who have a sense of system engineering, a sense of precision and don’t mind not doing a lot of real coding.
- System Support personnel to support the Load environment.
The charter of an L n P test team is to catch L n P bugs as early in the development cycle as possible, including and especially at the conception and design levels, before a line of code is ever written, and before an architectural flaw has led to a costly re-engineering process.
While it is possible for a company to modify this classical form (for instance, developers might do some of their own testing on their own machines, and black-box testers might monitor page load times locally and on WAN simulator machines set up for London, China, New York, etc.), things tend to get messy when organizations wander too far from this form.
L n P teams typically have independence from, and a degree of executive power over, other developers. The L n P team’s goal is not to slow up development, but to continually review code and system architecture, make suggestions, and produce test results which back up those suggestions.
A classic L n P team has the form it does for a second reason: to maximize return on investment in the load equipment. Load hardware and load testing software cost money. A lot of money. A software company needs to minimize the down time of all that load equipment. A load environment should be kept humming 24 hours a day. Tests need to be queued up and fed into the system. The human resources are organized around the facts of the expensiveness of the equipment and the pressure of the release cycle:
- the PM’s job is to scan the upcoming feature list for features which will need L n P testing, and to make sure the testers have the documents from the feature testers (the test spec) and the developers (the dev spec) in front of them so they can write a load test for the feature quickly.
- The performance dev lead’s job is to make code changes which deviate from the normal build and to install those custom builds into the load environment.
- The tester's job is to remain glued to the equipment, crank out test results, and send them out automatically to an email alias which is monitored by the dev lead and other parties interested in L n P. Testers make daily load tests. It’s not really a tester’s job to evaluate the results, just to report the results accurately. He interfaces with the devs enough to produce an accurate test of the feature. He does not have to be an expert on the feature.
- Support personnel are needed to do various things with the hardware and network, which can cause a lot of down time: set up more web servers, reset WAN configurations, move load generators from the SQL subnet to the web subnet, sometimes reconfigure a switch, move a load generator from one location to another (inside or outside the firewall), create WAN emulators from Linux boxes, etc.
After the environment is set up and agreed upon, the tester is the final authority on the quality of the test environment and is responsible for maintaining it. (He is the final authority because he is the person cranking out the tests every day and knows when something is wrong with it.) In addition to testing features, the load tester should run tests which validate the environment’s stability and validate that it matches production. It is important that the same person runs the tests; when testers are changed, you can expect to see slightly different results.
Longer running tests, such as capacity tests, should be run overnight. Shorter running unit tests can be run during the day. This keeps the environment humming and results coming out.
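The overnight/daytime split can be handled with a simple submission rule: anything over some duration threshold waits for the overnight window, everything else runs during the day. A minimal sketch of that queueing policy follows; the class names and the two-hour threshold are illustrative assumptions, not a real scheduler.

```python
from dataclasses import dataclass, field

# Sketch of the overnight/daytime test-queue policy described above.
# Names and the 2-hour threshold are illustrative, not from any real tool.
@dataclass
class LoadTest:
    name: str
    duration_hours: float  # estimated run time

@dataclass
class TestQueue:
    day_queue: list = field(default_factory=list)    # short tests, run now
    night_queue: list = field(default_factory=list)  # long tests, run overnight

    def submit(self, test: LoadTest, long_threshold_hours: float = 2.0):
        # Long-running capacity tests wait for the overnight window so the
        # environment stays busy around the clock without blocking daytime work.
        if test.duration_hours > long_threshold_hours:
            self.night_queue.append(test)
        else:
            self.day_queue.append(test)

q = TestQueue()
q.submit(LoadTest("login_page_unit", 0.5))
q.submit(LoadTest("capacity_8hr_soak", 8.0))
print([t.name for t in q.day_queue])    # short tests for today
print([t.name for t in q.night_queue])  # capacity tests for tonight
```

The point of even this trivial policy is that the decision is made at submission time, not by the tester on the fly, so the equipment never sits idle waiting for someone to pick the next test.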
Detailed Job Descriptions and Qualifications for the Positions on an L and P Team
1. Performance Project Manager (PM)
- acts as a liaison between the performance team and management. Communicates the performance team’s needs to management. Keeps pushing the quality of the LOAD environment infrastructure forward
- communicates to directors how performance testing is done
- works to eliminate obstacles which stand in the way of testers testing new code every day. E.g., why aren’t builds being deployed correctly? Why wasn’t the load or STAGE environment in a good condition to test?
- Helps build a “Performance Culture” by communicating results to other testers, developers and PMs in the company.
- along with the Dev lead, the performance PM reviews features which need L & P testing. Specifically, I see the PM’s role as reviewing test plans and insisting on several things:
- that the Performance team is supplied with test plans created by functional testers about the feature to test
- that the Dev lead and testers fill out test case sections of the test plan before the feature is tested
- that the test cases are executed
- that the results of test case execution are recorded in the test plan and/or in TFS
A good PM shepherds the project through the testing process and insists that there be:
- a) test plans for all features
- b) a load and performance section in every test plan.
The PM is responsible for gathering and queuing up the features that need to be tested and feeding them to the testers and the performance dev.
The PM may be a temporary position, until the directors begin to understand how L & P is being done.
- Project Management Experience on several other projects.
- Project Management Certifications
- Project Managing Load and performance will necessarily involve the PM with almost all other people in the company: the hardware support people, production support, functional testers etc.
2. Load and Performance Developer Lead
Responsibilities and Required Skills for a Performance Developer
- Oversees development of all code from a performance perspective
- Is heavily involved in the code development from an architectural standpoint.
- Reviews designs before coding begins
- Attends directorial level meetings and advises about how new features might best be integrated into the existing architecture.
- Does not disagree with the business needs, but provides developer-level input on how to achieve the business goals.
- Does not let directors just dream up anything they want and throw it into the build – advises on how this could be best integrated (an example here is our web services).
- In an agile environment it is NOT enough to catch coding mistakes after the coding cycle has taken place. This is a bad idea which Mercury Interactive has brainwashed people into doing. Mercury promotes a very “waterfall” idea of load testing, not an agile one. Mercury promotes the idea of an L & P tester waiting around for the finished product. Agile environments need something better than that. The dev lead’s job is to insist on design specs from feature teams, think through the design, and use past experience to point out when coding one way is a bad path to go down.
- The performance dev receives test plans from other lead developers and does a performance code review before the coding on that feature begins.
- Reviews design specs and places a seal of approval on all design spec / ideas being worked on by feature developers. The feature developer doesn’t start coding without approval from the performance DEV.
- The performance developer plays a key role in keeping hardware costs down and in designing the overall architecture of the system.
- When feature developers are stuck, the Performance Dev steps in and codes a well-performing solution to the business problem.
- Develops and implements solutions to correct complex performance issues.
- Leads, manages and mentors a team of 2-3 performance testing engineers and support personnel.
- Designs the performance lab, gives direction to testers and people maintaining the equipment
- Works closely with software development groups during the development process
- Designs performance benchmarks
- Designs under-load stability, reliability and failover tests
- Performs profiling experiments with AVI code during product development
- Responsible for asking provocative questions challenging dev leads from feature teams
- Partners with production support to ensure quality standards, practices and methodologies, as well as work on critical issues and escalations
- Develops custom builds and MSIs to deploy to the LOAD environment.
- Personally responsible for designing several large-scale ASP.NET projects (it is NOT enough to have experience with desktop applications only; a web application brings in networking and server-side programming, which are an entirely new set of problems).
- Hands-on performance engineering experience including database tuning, SQL optimization, memory/resource leak testing
- Hands-on, working experience with Gomez and other web site and web application performance testing tools
- Must know ASP.Net inside and out.
- Quick to grasp complex system architecture
- Significant Transact SQL experience (Will be reviewing, but not necessarily writing Transact-SQL Code)
- Makes recommendations on database optimization
- 3+ years of solid application programming with at least 2+ years of Object-Oriented programming in C#, ASP.NET, ADO.NET, MS SQL, XML, XSL and XSLT
- BS in Computer Science or related disciplines
- IIS on Windows (Windows 2003, XP), XML, MSSQL
- Strong verbal and written communication skills
- Must be able to work in an Extreme Programming / Agile rapid development environment with deliverables every 4 weeks
- Must enjoy being a heads-down coding programmer.
3. Performance Analyst, or Performance Test Engineer (i.e., people to execute tests and produce data)
- Uses Load Runner (or Visual Studio Team System or Badboy) to design and execute load and performance tests against Web applications, web services and SQL Server.
- Works under the direction of a Performance Developer
- analyzes the data gathered from performance tests with an eye towards application performance, availability and capacity with the goals of ensuring the optimal user experience and reducing hardware costs.
- Executes test scripts through automated or manual methods and reports findings according to a defined process
- Develops data driven test automation scripts and executes performance and load testing of client products
- Accurately documents performance/capacity components
- Writes and participates in execution of technical test plans
- Has a sense of system engineering
- Able to think beyond just what he is seeing on the screen to the servers and architecture of the entire system.
- Responsible for scripting tests
- Responsible for designing and executing daily BVTs (build verification tests)
- Once tests are scripted, works on automating testing
- Able to monitor, measure, and optimize application performance on Windows Machines and occasionally a Linux machine
- is aware of performance standards and works to achieve optimal measurements of an application
- Is able to gather and interpret performance metrics for applications
- Writes basic test scripts
- Has understanding of test environment
- Looks beneath the surface to identify root cause
- Recognizes the need to escalate problems to a higher level
- Willing to work a flexible schedule in order to gather the data from long running reports and have them ready the next day for review by Dev and test leads.
- Writes clear, audience aware and business-like e-mails, status reports and summaries
- Understanding of web protocols including HTTP, HTTPS, TCP/IP, and DNS
- Knowledge of web page composition (static vs. dynamic elements, browser behavior, etc.)
- Demonstrable knowledge of performance bottlenecks and end-to-end web performance measures (server response time, throughput, network latency, etc.)
- Past experience in performance testing using load tools
- Programming experience in any language; C# and ASP.NET preferred
- Strong commitment to provide top-rate service
- Keen attention to details and ability to troubleshoot issues to resolution
- Highly organized with ability to handle multiple tasks
- Ability to work independently as well as collaboratively with customers
- Ability to learn new tools & technologies quickly
- Experience in collecting data, performing analysis on that data, and preparing reports
- 2-3 years previous experience developing and/or testing solutions with an established language
The ideal candidate is an insightful, consultative, strategic thinker who is passionate about the ways that the Web will continue to revolutionize information distribution, interaction and business processes.
It is the tester’s job to produce a chart or a graph that goes out to the WV Load & Performance alias daily and reports on the response time of the features currently being tested.
Dice.com Pos. ID 435336 http://seeker.dice.com/jobsearch/servlet/JobSearch?op=302&dockey=xml/c/8/c83a5c440a9b63ed870393c55c5dcf92@endecaindex&source=19&FREE_TEXT=Performance+ASP
The L and P tester shouldn’t be interrupted by other people too much. He has to keep an eye on the tests. Many (not all) tests are long running and require monitoring. To get the job done in an 8-hour day, he needs other people to just keep feeding him tests. The burden of interacting with the company is taken off the tester by the performance dev and the PM. The tester tests, reports results, and automates. The performance dev engineers the web site and SQL code; the test engineer engineers the test and takes the burden of testing off of the dev.
Load and performance testing spans the entire spectrum of the development cycle and beyond. Testing begins with the unit testing of DLLs and individual features on individual web pages, runs all the way to the mixed load tests in STAGE, and should go all the way up to load tests in the production environment. Security is usually the reason load tests are not conducted in production. If this is the case, the production support team should be doing its own load testing. This is the way it is at large organizations such as MSN and Windows Media. It’s true that there is a separation between test and production. That separation is very necessary. But then the production support team has its own load equipment. The production support team's tests should validate what comes out of the STAGE environment.
NO SURPRISES. That is the mandate of L n P. There should be no surprises when an application is released to production because everything has been tested.
4. Support Person to re-configure equipment. (Technical Support Engineer)
- Reconfigures web servers, network, load generators, WAN simulators according to the needs of the performance team.
- Interfaces with the normal network support and help desk staff
- Examples of the support needs of the performance team include:
- rebuilding FAST Servers,
- adding new web servers,
- moving 30 GB databases from one SQL Server to another
- maintains the performance engineering lab and tools
The ideal candidate for this position should have a combined 3-5 years of experience in any of the following areas: Operations/Production Support, Performance Test Engineering, Systems Engineering, or Systems Analysis
- Knowledge of networking, routers, switches, DB admin skills
- Ability to work effectively and flexibly
- Expert-level knowledge of Windows operating systems, including general troubleshooting
- Understanding of user administration & rights
- Willing to learn Linux setup
- Knowledge of installation and configuration of web server technologies
- Strong in attention to detail and problem solving
- Educational Background / Equivalent Experience: Candidate should possess a Bachelor’s degree in Computer or Information Science or 4-6 years of directly related work experience
Dice Job ID 51615005 http://seeker.dice.com/jobsearch/servlet/JobSearch?op=302&dockey=xml/b/7/b78acce920e689c6995411fa8537a893@endecaindex&source=19&FREE_TEXT=Performance+ASP
Basically, most companies already have most of the elements needed for the team testing process I have described here. What they lack is the idea of structure. As the IBM article I have attached explains, it is the structure which allows performance testing to persist – to have a life beyond the employment of the individuals who comprise the team.
The basic process and workflow need to be put into place, and would look like this (sometimes there can be some cross-over: a tester, if he has the ability, can go work with the developers):
- Get a solid environment
- Testers run load tests against this setup and report their results to an email alias. (It is on these reports, the data, the numbers, the test results, that most of the decision-making about L & P should be done, not on the basis of one Word document which really only rolls up the results.)
- Dev provides custom builds to the testers.
- PM troubleshoots problems in the above process.
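The second step above, testers reporting results to an email alias, is easy to automate with a standard library. A minimal sketch follows; the addresses, alias name, and subject format are placeholders, not any real company convention, and actual delivery (SMTP) is left out.

```python
from email.message import EmailMessage

# Sketch of the daily results mail sent to the monitored L n P alias.
# Addresses and subject format are placeholders for illustration only.
def build_results_email(feature, results_table):
    msg = EmailMessage()
    msg["From"] = "load-tester@example.com"   # placeholder sender
    msg["To"] = "lnp-results@example.com"     # placeholder alias watched by the dev lead
    msg["Subject"] = f"[L&P] Daily load results: {feature}"
    msg.set_content(
        "Daily load test results (raw numbers below).\n\n" + results_table
    )
    return msg

mail = build_results_email(
    "search_page", "median 0.23s | p90 0.32s | max 0.95s"
)
print(mail["Subject"])
```

Sending raw numbers to an alias every day, rather than hand-writing a Word document, is what makes the results queryable and comparable over time, which is the whole point of the daily cadence.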
Both the performance developers AND the performance testers should have developer-quality machines. They should be able to deploy the web site to their own local machines and execute some load tests locally. One reason they should have this ability is that when the performance developer goes on vacation, one of the performance testers should be able to fill in for him.
A Strong Word about Service-Oriented Relationships
One idea which helps make better-performing, more robust applications is for all teams to start immediately moving towards a client-focused, service-oriented model. This includes especially the Load and Performance team. The L n P team is NOT there to slow up development. It is there to provide a service – a service to the company, to other testers, and especially to the developers. Those groups are an L n P team’s customers. If the customers are unhappy, they should have the right to go elsewhere. If the L n P team sucks at providing the service it is chartered to provide, it should be disbanded or reconstructed, and replacements should be looked for.
But the service-oriented, customer-oriented focus goes for all the other teams as well. When the L n P team needs some information about something, every other team should bend over backwards to get that information to the L n P team, just as they should expect the L n P team to bend over backwards for them. The L n P team especially needs all information about the production environment (including disk configuration, machine CPU and RAM, etc.), as well as expert opinions from other specialists such as the networking teams. Sometimes the L n P team may ask a dumb question. Such questions are not meant to imply that the production team members or networking team members are not doing their jobs. It is simply the nature of load and performance testing that it requires a lot of information about all aspects of the infrastructure surrounding the application. We have to configure our tests to simulate production, and then we also have to change the values (e.g., maybe we need to test a different TCP packet size).
Companies generally need to get away from the idea that “a” test (one test) is all that a feature needs. Performance bugs can pop up anywhere along the line including configuration errors when the code is moved from the load or stage environment to production.
Typically, companies hire “a load tester” (role #3) and end up pushing him into roles 1, 2 and 4 (PM, dev lead, and infrastructure support). When this happens the load tester feels stressed. Things calm down when the tester no longer has the exclusive role of planning the test infrastructure and trying to support it.
Historically and industry-wide there has always been an adversarial relationship between developers and testers. This is basically a healthy thing. The L n P team’s job is to send the feature developer back to searching the MSDN website to find a better way to code his feature. The performance dev’s experience on prior coding projects can be of great help in suggesting better ways to code the feature, but quite frankly and realistically, part of having an L n P team is to create a fear factor in the feature developers: the knowledge that their bugs will be caught.