Saturday, May 3, 2014

Important Functions

Once a script is recorded using the Virtual User Generator with the best recording options (please refer to the link), the next important thing is to understand the complete script and the functions recorded in it.

At a very high level, the entire script can be classified into three groups:


- Protocol Specific functions
- LoadRunner functions
- Language specific functions
Protocol Specific functions: 
These functions can be used with a specific protocol only and cannot be used with any other protocol. For the Web (HTTP/HTML) protocol, the commonly seen functions are:


- web_url(), web_image(), web_link() - all these functions are used to simulate a GET request
- web_submit_form() is used to simulate a POST request
- web_submit_data() is used to simulate both GET and POST requests. The attribute "Method" tells whether the request is a GET or a POST.

web_reg_find() and web_reg_save_param() are the service functions used for page verification and correlation respectively.
All the above functions start with the word "web", indicating that they are Web protocol specific functions and cannot be used outside the Web protocol.
A few other Web protocol functions are:

web_set_user(), web_set_max_html_param_len(), web_cache_cleanup(), web_cleanup_cookies(), etc.
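
As a minimal illustration (the URL, form action and field names below are hypothetical), a GET simulated by web_url() and a POST simulated by web_submit_data() typically look like this in a recorded script:

Action()
{
    // A GET request for a page (hypothetical URL)
    web_url("home",
        "URL=http://www.example.com/index.html",
        "Resource=0",
        "RecContentType=text/html",
        "Mode=HTML",
        LAST);

    // A POST request submitting form data; "Method=POST" marks it as a POST
    web_submit_data("login",
        "Action=http://www.example.com/login",
        "Method=POST",
        "RecContentType=text/html",
        "Mode=HTML",
        ITEMDATA,
        "Name=username", "Value=testuser", ENDITEM,
        "Name=password", "Value=secret123", ENDITEM,
        LAST);

    return 0;
}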

LoadRunner functions:
All these functions are LoadRunner specific and can be used across any protocol. All of them start with lr_. A few examples of LoadRunner functions are:

lr_start_transaction() - To start a transaction to measure the response time
lr_end_transaction() - To stop a transaction that measures the response time
lr_think_time() - To wait for a specified duration
lr_eval_string() - To evaluate the value of a LoadRunner parameter placed in {}
lr_save_string() - To save a string to a LoadRunner parameter
lr_save_int() - To save an integer value to a LoadRunner parameter
lr_exit() - To exit a loop/iteration/user during execution
lr_set_debug_message() - To control the log settings of the replay log
lr_output_message() - To write to the output log at information level
lr_error_message() - To write to the output log at error level
etc.
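
A small sketch of how a few of these fit together (the transaction and parameter names are made up for the example):

Action()
{
    // Save a string into a parameter and read it back with lr_eval_string()
    lr_save_string("john", "UserName");
    lr_output_message("Running as user: %s", lr_eval_string("{UserName}"));

    // Measure the response time of a request between start/end transaction
    lr_start_transaction("Home_Page");
    web_url("home", "URL=http://www.example.com/", LAST);
    lr_end_transaction("Home_Page", LR_AUTO);

    // Simulate a user pause of 5 seconds
    lr_think_time(5);

    return 0;
}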

Language specific functions:
These functions are not part of the tool; all the functions available in a language (C or Java) can be used directly in the script, provided that the protocol is supported by that language.
For example, the Web (HTTP/HTML) protocol is supported by the C language, hence all the C functions can be used directly in a Web script.
The commonly used functions are:

rand() - to generate a random number
atoi() - to convert a string into an integer
sprintf() - to save formatted output to a string buffer
strcat() - string concatenation
strcpy() - copying into a string
strtok() - string tokenizer function
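
For instance, the standard C functions mix freely with the LoadRunner functions in a Web script. The sketch below (buffer and parameter names are illustrative, and it assumes a parameter called ItemCount already exists in the script) builds a string with sprintf()/strcat() and converts a parameter to an integer with atoi():

Action()
{
    char buffer[64];
    int  count;

    // Build a formatted string, then append to it
    sprintf(buffer, "order_%d", rand() % 100);   // e.g. "order_42"
    strcat(buffer, "_new");

    // Save the C string into a LoadRunner parameter
    lr_save_string(buffer, "OrderId");

    // Convert a LoadRunner parameter value (assumed to exist) into a C integer
    count = atoi(lr_eval_string("{ItemCount}"));
    lr_output_message("OrderId=%s, ItemCount=%d", lr_eval_string("{OrderId}"), count);

    return 0;
}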

The Java protocol does not support the C language, hence these functions cannot be used in a Java based script.

Hope the information provided about the classification of a LoadRunner script is helpful in understanding how the scripts work.

Wednesday, March 26, 2014

Very Important Document For Performance Testing

                       Performance Testing Scenario Design

After having multiple discussions with the client to understand the application architecture, server configuration and the application itself, the next step is to identify the business scenarios based on the following parameters. (In most cases, the business scenarios are identified by the client; if not, we help them finalize the scenarios.)

1. The scenario that is used by most of the users.
2. The scenario that generates the most revenue for the client.
3. The scenario that is critical to the application.
After finalizing the scenarios, we walk through them to make sure that their functionality is fine (functionality here means whether the screens and the expected output appear, not end-to-end functionality).
Then we have to decide the number of users and the types of performance testing.
                      Performance Testing process

Recording Script using LoadRunner:
Recording: Once the project gets sign-off from the customer, scripting has to start. If the application is web based, the script has to be recorded with the Web (HTTP/HTML) protocol. There are a couple of recording mode options:

HTML based script: This is the general recording mode to select, where the data is captured at the HTML level and appears in the script in the form of GET and POST requests.

There are two advanced options available here:
a. web_link, web_submit_form: This captures the data in terms of link and form names. (In other words, it is page level recording; it is contextual recording, which means each request depends on the previous request.)

b. web_url, web_submit_data: This captures the data in terms of link and form data. (In other words, it is network level recording; it is also called context-less recording, where each request is independent of the other requests.)
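
To make the difference between the two options concrete, the same login-form submission (the form name, action URL and fields are hypothetical) would be generated roughly as follows, contextually in option (a) and context-less in option (b):

// Option a - contextual, page-level: the form is located by name
// in the previous response
web_submit_form("loginForm",
    ITEMDATA,
    "Name=username", "Value=testuser", ENDITEM,
    "Name=password", "Value=secret123", ENDITEM,
    LAST);

// Option b - context-less, network-level: the action URL and method
// are spelled out, independent of the previous response
web_submit_data("loginForm",
    "Action=http://www.example.com/login",
    "Method=POST",
    ITEMDATA,
    "Name=username", "Value=testuser", ENDITEM,
    "Name=password", "Value=secret123", ENDITEM,
    LAST);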

URL based script: This is generally used for component based testing. Each and every request (js, css, img, gif) is captured as a separate request, and hence the script will be huge. This is the reason we always prefer HTML mode recording. The purpose of URL based recording is that if a component communicates with the server occasionally, it may not be recorded in HTML mode (for example, in the case of applets). In such situations, we should go for URL mode.

Once the record button is pressed in VuGen, all information related to the business scenario is saved to a log file called the Recording Log. Based on the recording settings we choose, the script is created and the entries are logged to the Generation Log.
                     Performance Testing Bottlenecks

Bottleneck 1: In a .NET application, Gen 0 should collect most objects during garbage collection. About 1/10th of GC0 should be collected at GC1, and 1/10th of GC1 at GC2. In my application, GC1 was collecting around 1/3rd of GC0, so I inferred that garbage collection was not happening properly.

I suggested they do application profiling for the scenario to find out the objects that were not properly garbage collected.
Below are some application profiling tools:
JProbe – Java
Ant, CLR – .NET

These profilers show us all the objects that were created during the scenario and also show the garbage collection details of those objects. With this, we can find out how the garbage collection has happened.

Bottleneck 2: In a .NET application, I found stored procedures (SPs) that were causing deadlocks at the DB level. I did SQL Server profiling to identify this. SQL profiling gives you the details of the SPs that were run during test execution and the time taken by them, and shows whether any SP is creating a deadlock for another SP. We also check the server logs for deadlocks. With this I inferred that deadlocks were high for this particular transaction and needed to be resolved.

Bottleneck 3: In a Java application, I found that for a particular scenario the CPU hit 95% and stayed there for some time.

I found that this was because of the limited number of threads available in the Tomcat server; as the requests increased, they started queuing up, which affected the CPU processing and ultimately pushed the CPU to a peak of 95%.

I suggested they increase the number of server threads so that fewer threads sit in the queue and peak CPU utilization is reduced. If it still peaks, I advised them to add CPU/RAM capacity.
                      LoadRunner Analysis
Once the test is completed, the results can be viewed by clicking the Analyze Results option. It launches the Analysis component of LoadRunner, which is used to analyze the test results.
The default analysis summary contains details like the test duration, total throughput, average throughput, total hits and average hits per second, passed transactions and failed transactions, and the min, max, avg and 90th percentile of the response times. Filters can be applied on the summary to get the statistics for a specified duration, a specified load generator, a specified transaction or a requested percentile.
The other section of the Analysis is the graphs section; the defaults are: transaction summary, throughput, hits per second, average transaction response time, etc.
New graphs can be added and merged with the existing graphs, and filters can be applied on any graph. These statistics and graphs are helpful in analyzing trends and bottlenecks.

We can generate reports in HTML or Word format, add our suggestions to the graphs, and share them with the client.

           Analyzing the test results and finding bottlenecks: 

This section gives a brief idea on how to start the analysis of the test results. Once the test is completed, the test results should be divided into three sections:

1. Up to where the behavior is normal and expected
2. From where the response times started increasing exponentially and errors started appearing
3. From where the errors increased drastically, until the end of the test

Note down the user loads at all these three levels.

The next classification would be based on the requests:
1. Identify the requests that have high response times
2. Identify the requests with more failed transactions

The next classification should be based on the request type, whether the request is a GET or a POST. All the POSTs send the request to the database. If high response times or failed rounds are observed for the POSTs, it could possibly be due to database issues.
The next classification should be based on the kind of errors: whether we are seeing 500 Internal Server errors, 503 Service Unavailable errors, step download timeout errors or page validation errors. If most of the errors are step-download timeouts, that indicates possible network issues. If most of the errors are 500s or 503s, it could be due to application issues.
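On the scripting side, one way to feed this error classification is to log the HTTP status code of each request. A minimal sketch (the step name and URL are hypothetical) using the standard web_get_int_property() function:

Action()
{
    int status;

    web_url("home", "URL=http://www.example.com/", LAST);

    // HTTP_INFO_RETURN_CODE gives the HTTP status code of the last request
    status = web_get_int_property(HTTP_INFO_RETURN_CODE);
    if (status >= 500)
        lr_error_message("Server error %d on home page", status);   // e.g. 500/503
    else
        lr_output_message("Home page returned status %d", status);

    return 0;
}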
If a particular request is taking a very long time and the response times are high, it can be analyzed using the Web Page Diagnostics graph. This graph provides information at:
1. Different layers – time taken to establish the connection, time taken in the SSL handshake, time taken to send the request, time taken to receive the first byte (Time to First Buffer, an important counter that should be considered while analyzing network issues) and the receive time. If the receive time is relatively high compared to the others, it could be a network issue. If no network issues are observed and most of the time is spent in Time to First Buffer, then the focus should shift to the resource level.

2. Resource level statistics – Web Page Diagnostics also provides statistics at the resource level. If a request with a high response time downloads 15 resources, the time taken for each resource is shown in the graph. If static resources like images, CSS and JavaScript are taking more time, it could be due to a web server issue. If none of these are the reason, then the blame finally falls on the application server/DB server.

Please note that all the analysis described so far is done at the client end to find the bottlenecks. If the servers are also monitored, the bottleneck can be pinpointed: whether it was due to hardware resources, configuration issues on the Web/App/DB server, bad programming, poor SQL, or a table that is not indexed properly.
                                LoadRunner Monitors

If the scope includes monitoring the servers as well, it can be done in two ways.

1. Monitoring from LoadRunner
2. Executing the utilities available on the machines

Performance Monitors in LoadRunner:

LoadRunner has the facility to monitor almost all servers (for hardware and application specific counters). To monitor the hardware resources, the steps are as below:
1. Go to the Run tab of the LoadRunner scenario
2. Click on Monitors -> select Windows/Unix monitoring (based on the OS of the servers)
3. On the right a graph will be shown -> right click -> Add Measurements
4. Provide the IP address of the server and select the counters to be monitored
To monitor the application specific counters, some settings need to be configured at the application end.

Monitoring using other tools: Perfmon, SiteScope
                       LoadRunner Controller

As a best practice, we need to create the scenarios well in advance of the final tests. There are two types of scenarios (do not confuse this with a business scenario; in LoadRunner, a scenario represents the test set-up):
Goal Oriented Scenario in Controller: If the customer's requirement is a specific goal in terms of hits per second, pages per second or transactions per second, this option is used to configure the goal. The user load is updated automatically based on the goal (we do not need to increase the load manually or set up ramp-up patterns).

Manual Scenario in Controller: This is the typical scenario used in the day-to-day life of a performance engineer. The main objective here is the user load. The ramp-up, duration and ramp-down are specified manually for this option. The ramp-up patterns can be specified for the overall scenario (the set of groups) or for an individual group (each individual script). The options are Schedule by Group and Schedule by Scenario.

Schedule by Group in Controller: If the customer's objective is to execute a group after the completion of another group, or 15 minutes after the first group, etc., then this option is set up. It is also used when each group has its own ramp-up and ramp-down patterns. We use this when there is an inter-dependency between scripts.

Ex: Script 1 raises a request for a shift change.

Script 2 accepts that request as some other user. In this case, script 2 can be executed only after script 1 has run at least for some time, or completely.

Schedule by Scenario in Controller: If the customer's objective is to pump the user load at a specific rate irrespective of the groups, then this option is used. In a LoadRunner scenario, we have two tabs:

Design tab: the schedule for test execution can be set here.
Run tab: to start the test, monitor the results and monitor the servers.

Design tab:
Here, we set the range of user load and run the tests.
Other settings in the Design tab are:
1. Load Generators (we can add different load generators by giving the IPs of the machines where the LG is installed)
2. Add Vusers (we can add users during run time using this option)
3. Add Groups: we can add the scripts
4. Runtime Settings: we can modify the run-time settings of a script for a test
5. Details: details of a script
6. View Script: it opens the script in VuGen

Run tab:
Here we run the scenario by giving the path for storing the results.
This tab shows you details like passed transactions, failed transactions, hits/sec and errors.
This tab also shows you the min, max and avg response times of the transactions of a script.
                         
                                Page Validation
What? To verify a static text of a particular web page

Why? When the script is executed for multiple users, it is not possible to verify whether all the pages are successful using the browser display. To overcome this, we can verify a static text of a particular page in the response of the request. If the static text is found, the page is considered a success; else it is considered a failure.

How? The function web_reg_find() can be used to check whether we got the expected page, by giving a static text of the expected page. Once the scripting is done, we do a smoke test with a limited number of users to make sure that the scripts and the application are working fine.
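A minimal web_reg_find() sketch (the text and URL are hypothetical); note that the check is registered before the request it verifies, and "Fail=NotFound" marks the step as failed when the text is missing:

// Register the check BEFORE the request it applies to
web_reg_find("Text=Welcome to My Account",
    "Fail=NotFound",
    LAST);

web_url("account",
    "URL=http://www.example.com/account",
    LAST);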
                  Performance Testing Definition
Testing an application under general or normal conditions to measure the speed of the application, server and network devices.

The main objective behind performance testing is to measure response times, hits/sec, throughput, passed transactions and resource utilization like CPU, RAM and hard disk.
                       Software Performance Testing
Load Testing: Test execution with a specified user load under given conditions. The major focus is on system stability, response times, and successes and failures.
Stress Testing: Test execution where we continuously increase the load on the application until it crashes. The major focus is on the threshold limit of users; response time is not considered at all.
Endurance Testing: Test execution where we load the application continuously for longer durations, like days or months, to identify bottlenecks like memory leaks.
Spike Testing: Test execution where the application is suddenly loaded with a varied user load (sudden increases and decreases) to check the system behavior.
Volume Testing: Test execution where we send continuous data to the DB to check whether it can handle that volume of data at a time. Once we decide all these things, we have to decide other things like think time, ramp-up, duration, server monitoring and analysis.

                  Waterfall Model
The waterfall model follows a 'top down' approach for both software development and Software testing. The basic steps involved in this software testing methodology are:
- Requirement analysis
- Test case design
- Test case implementation
- Testing
- Debugging and validating the code or product
- Deployment
- Maintenance

     This methodology follows a step-by-step process: start with requirement analysis, then test case design, and then the next steps as explained above. There is no scope for jumping backward or forward or performing two steps simultaneously. Also, this model follows a non-iterative approach. The main benefit of this methodology is its simplistic, systematic and orthodox approach. However, it has many shortcomings, since bugs and errors in the code are not discovered until the testing stage is reached. This can often lead to wastage of time, money and valuable resources. This method is mostly used to develop a product; it won't work for service-based companies or application development.
Entry Criteria:
- Create test data in the test database
- Should get sign-off for unit test cases
- Build deployment and configuration changes should be complete

Task: 

- Start execution of test cases
- Compare test results: expected result against actual result
- Record defects in the test log/test management tool

Exit Criteria:

- All test cases/conditions have been satisfied
- All Priority defects discovered have been fixed
- In the next iteration of testing, the defects raised in the previous cycle will be monitored
- Signing off Unit Testing 
                Software Testing Review Types
Below are the 3 types of reviews done within software testing:

- Informal Review or Peer Review
- Semi Formal Review or Walk-through
- Formal Review or Inspection
                         Cost of Defects

Phase           % Cost
Requirements    0
Design          10
Coding          20
Testing         50
Customer Site   100

- If QA finds any document-related or functional issues during requirement analysis, the defect cost is zero, because the document can easily be changed.
- If QA/Dev finds a defect during the design phase, the bug costs 10%, because we have to go back to the requirement phase and update the docs as well as the already implemented design.
- If a defect is found during coding, it costs 20%; during coding most of the defects found by developers are design or requirement document related issues.
- If defects are found during QA testing, they cost 50%, because developer support and time are required.
- If defects are found in production, the cost is huge, because the customer also charges something for the defect and a lot of developer and QA support is required.
                       Why Software Testing?
To discover defects.
To avoid users detecting problems.
To prove that the software has no faults.
To learn about the reliability of the software.
To ensure that the product works as the user expects.
To stay in business.
To avoid being sued by customers.
To detect defects early, which helps in reducing the cost of defect fixing.
                    What Is Software Testing?
An examination of the behavior of a program by executing it on sample data sets. Testing is executing a program with the intent of finding an error/fault and failure.
Difference Between Fault, error and failure?

Fault is a condition that causes the software to fail to perform its required function.
Error refers to the difference between the actual output and the expected output.
Failure is the inability of a system or component to perform required function according to its specification.
Failure is an event; fault is a state of the software, caused by an error.
           Agile Software Development With Scrum

First let me explain briefly what the Agile methodology is: Agile is not a new methodology; it has been around for more than a decade. But still many software professionals don't know exactly what Agile is and how it works. Let's come to the point: Agile methodology is a group of software development methods based on iterative and incremental development, where customer requirements and solutions evolve through collaboration among self-organizing, cross-functional teams.
Agile software development with scrum
The first thing in the Agile methodology is the daily stand-up meeting, called the "Daily Scrum". The daily meeting helps team members discuss day-to-day activities and is extremely beneficial for teams in the long run, as it helps departments remain focused, energized and coordinated. Daily Scrums keep team members responsible to one another. Each team member clearly discusses in the scrum what he/she is working on, which helps the other team members understand what the others are handling and where their code fits into the application. Transparency and connectivity are staples of an efficient team.

In most companies, development is slowed down by issues identified as impediments during the daily meetings or planning and review meetings. With Scrum, these impediments are prioritized and systematically removed, further increasing productivity and quality. Well-run Scrums achieve the Toyota effect: four times industry average productivity and twelve times better quality.  

Scrum removes management pressure from teams. Teams are allowed to select their own work, and then self-organize through close communication and mutual agreement within the team on how best to accomplish the work. In a successful Scrum, this autonomy can significantly improve the quality of life for developers and enhance employee retention for managers.  

Too often, however, daily Scrum meetings become weekly meetings, which in turn become monthly meetings. Pretty soon, you no longer recognize Bill from development. So set a time that works for everyone, re-introduce yourself to Bill, and do your best to stick to the schedule. It also helps to appoint a Scrum Master who will lead the meetings and ensure the Scrum schedule is adhered to.
                             What is a Test Plan
A test plan is a document for the entire project that defines the scope, the approach to be taken, and the schedule of testing activities. It identifies the test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

Test planning can be done well before the actual testing commences, in parallel with the coding and design phases. The inputs for forming the test plan are below:

1. Project plan
2. Requirement specification document
3. Architecture and design document

The requirements document and design document are the basic documents used for selecting the test units and deciding the approaches to be used during testing.
The test plan should also contain the below items:
- Test unit specifications
- Features to be tested
- Approaches for testing
- Test deliverables
- Schedule
- Personnel allocation
    
Test Unit: A test unit is a set of one or more modules, together with associated data, that are from a single computer program and that are the object of testing. A test unit may be a module, a few modules, or a complete system.

Features to be tested: Includes all software features and combinations of features that should be tested. A software feature is a software characteristic specified or implied by the requirements or design document.

Approach for Testing: Specifies the overall approach to be followed in the current project. The technique that will be used to judge the testing effort should also be specified.

Test Deliverables: Should be specified in the test plan before the actual testing begins.
Deliverables could be:
- Test cases that were used
- Detailed results of testing
- Test summary report
In general:
- Test case specification report
- Test summary report
- Test log report
should be specified as deliverables.

Test Summary Report: Defines the items tested, the environment in which testing was done, and any variations from the specifications.
Test Log Report: Provides a chronological record of relevant details about the execution of the test cases.
Schedule: Specifies the amount of time and effort to be spent on different activities of testing, and the testing of the different units that have been identified.
Personnel Allocation: Identifies the persons responsible for performing the different activities.
Test Case Execution and Analysis: The steps to be performed to execute the test cases are specified in a separate document called the 'test procedure specification', which exists for setting up the test environment and describes the methods and formats for reporting the results of testing.
The output of test case execution is:
- Test log report
- Test summary report
- Bug report
Test Log: Describes the details of testing.
Test Summary Report: Gives the total number of test cases executed, the number and nature of bugs found, and a summary of any metrics data.
Bug Report: The summary of all defects found.
                    LoadRunner Diagnostics
Before anyone uses HP Diagnostics with LoadRunner, the Diagnostics Server details should be specified in LoadRunner.

Before viewing HP Diagnostics data in a load test scenario, configure the Diagnostics parameters for that scenario. To use LoadRunner's diagnostics, follow these steps:

1. Prepare for generating diagnostics data. Make sure that the Mediator machine is installed; the Mediator collects and processes the diagnostics data. Configure the server machine to enable the diagnostics feature, and prepare the Controller machine to generate diagnostics data and communicate with the Mediator machines.

2. Collect and prepare the diagnostics data. During the load test, the Mediator collects the data and processes the diagnostics information.

3. Create the results. After the load test, the Controller collects the aggregated data from the Mediator machines and collates the results.

4. Present the data. Use the Analysis graphs to view the diagnostics data and drill down to the problematic areas.

Diagnostics for J2EE & .NET: Diagnostics for J2EE & .NET helps rapidly identify and resolve Java 2 Enterprise Edition (J2EE) or .NET application performance problems. This complete pre-production solution allows you to rapidly drill down to pinpoint problem areas in any layer of the application, all the way down to the method or SQL statement. You can use this diagnostics data to isolate and resolve issues such as synchronization and deadlocking, intermittently slow method instances, memory leaks, and thrashing.

Siebel Diagnostics: Siebel Diagnostics provides performance breakdown of the Siebel application server layers. These graphs provide detailed information about the Siebel layers, areas, sub-areas, servers, and scripts which most commonly require optimization.

Siebel DB Diagnostics: Many of the performance problems on Siebel systems are database-related. Siebel DB Diagnostics helps you rapidly identify and resolve database performance problems. You can view the SQLs for each transaction, identify the problematic SQL queries of each script, and identify at what point the problems occurred.
Oracle 11i Diagnostics: Oracle 11i Diagnostics helps pinpoint performance problems on Oracle NCA systems. The diagnostics information drills down from the transaction, to the SQL statements, and the SQL stages of each statement.

SAP Diagnostics: SAP Diagnostics helps pinpoint performance problems on SAP application server systems. The diagnostics information is broken down from the transaction, into the dialog steps and server time components, and other associated data.
                                       Defect Tracking Standards
Bug Tracking or Defect Tracking Standards:
Defect Title: A short summary of the defect, giving the developer enough of a clue about the nature of the defect.
Defect Description: The detailed description of the defect. It should include the following information:
• Description of the artifact: Detailed description of the defect.
• Steps to reproduce: All the steps required to reproduce the defect
• Expected result.
• Actual result.
• Testing instance where the testing was done
• Build number on which defect was found. 
Status: The current status of the defect, which would be “New” by default.
Category: The category of the defect, which could be TAR, Defect or Issue. “TAR” will be the default value of the field.
Priority: The urgency with which the defect needs to be fixed. This will be assigned to the defect after the defect review meeting. Its default value will be “None”.
Assigned To: The person to whom the defect is to be assigned. When a new defect is logged, it is allocated to the QA TL.
Discovered In: The phase of testing in which the defect was found. The following values can be given under this field:

• Peer Review: Defect found during peer review of the documents.
• Unit Testing: Defect found by developer during Unit testing 
• QA Testing: Defect found by QA during System testing.
• Client Testing: Defect found by the Client during UAT.

Due Date: The tentative date by which the defect will be fixed. This date is given in the defect meeting, only after the defect has been assigned a priority.

Fixed in Iteration – the iteration number in which the defect is fixed.
Impact Severity – the severity of the defect, given by the tester.
Module – the module under which the defect was found.
Opened By – the person logging the defect.
Regression Test – a brief result of the retesting done on the defect by the tester.
Root Cause – the root cause of the defect, to be filled in by the developer fixing the defect.
Root Cause Description – the description of the reason the defect arose. This will help in "Causal Analysis of the Defect" and in taking preventive measures against occurrence of the defect in the future.

Attachment: The screenshot or some other support document that can help out in defect investigation and give better clarity of the defect.
Note: If the developer is not clear about the defect, then the defect needs to be moved/assigned back to the tester with comments. The tester will then try to attach more screenshots or documents and add his comments. There might be cases where the tester is required to reproduce the defect for the developer in the QA region.
                   Defect Priority and Severity
Defect Priority and Defect Severity Definitions:
Defect Priority:
Priority is the urgency attached to analyzing and providing a fix for the fault, while severity is the impact of that fault on the system under test. Each fault should be considered on its own merits. Priority is assigned to a defect during the TAR session, only after adjudging that it is a valid defect that is to be fixed.

Defect Priority – Guidelines
1 – Critical: Must be fixed immediately. Serious effect on testing.
2 – High: Must be fixed before testing is completed.
3 – Medium: Fixed if possible before the application is implemented in production.
4 – Low: Fixed if time is available.

Defect Severity:
Severity is an indication of the impact of the fault in test and/or in production.

Defect Severity – Guidelines – Likely Action
1 – Show Stopper: All/most test activities are suspended due to the fault. The system is unusable and/or major user groups are prevented from using the system. No workaround available. Likely action: all testing stopped until fixed.
2 – Major: Test activities for the module are suspended. A major module of the system is unusable and/or groups of users are unable to work. Workarounds may be available. Likely action: all testing for this group of tests stopped.
3 – Minor: A test script is suspended, but other testing can continue. Individual functions are affected and/or individual users are prevented from completing tasks. Workarounds available. Likely action: a single test stopped until fixed.
4 – Nominal Error/Cosmetic: A minor fault or cosmetic error that has no impact on the testing schedule. Users can continue working. Likely action: no testing stopped.

                     Defect Tracking Software
Defect tracking software is an application designed to help quality assurance engineers and programmers keep track of reported software defects or bugs during system development.
A defect tracking system is also called a bug tracking system. Many bug-tracking systems, such as those used by most open source software projects, allow users to enter bug reports directly. Typically, bug tracking systems are integrated with other software project management applications.

There are a lot of defect tracking tools available in the market; below are a few:
Quality Center
Test Director
Bugzilla
IBM Rational ClearQuest
Jira
Assembla Tickets
Bontq
BugTracker.NET
Cerebro
FogBugz
Microsoft Dynamics CRM
Remedy Action Request System
               What is LoadRunner and its advantages
HP LoadRunner load tests your application by emulating an environment in which multiple users work concurrently. While the application is under load, LoadRunner accurately measures, monitors, and analyzes a system’s performance and functionality.

LoadRunner addresses the drawbacks of manual performance testing:

- LoadRunner reduces personnel requirements by replacing human users with virtual users, or Vusers. These Vusers emulate the behavior of real users operating real applications.
- Because numerous Vusers can run on a single computer, LoadRunner reduces the amount of hardware required for testing.
- The HP LoadRunner Controller allows you to easily and effectively control all the Vusers from a single point of control.
- LoadRunner monitors the application performance online, enabling you to fine-tune your system during test execution.
- LoadRunner automatically records the performance of the application during a test. You can choose from a wide variety of graphs and reports to view the performance data.
- LoadRunner checks where performance delays occur: network or client delays, CPU performance, I/O delays, database locking, or other issues at the database server. LoadRunner monitors the network and server resources to help you improve performance.
- Because LoadRunner tests are fully automated, you can easily repeat them as often as you need.

Understanding various log files in LoadRunner

Once a script is recorded using the LoadRunner tool, one can notice 4 different tabs as part of the output window. This article is all about these tabs and their usage. The 4 tabs are:

- Replay Log
- Recording Log
- Correlation Results
- Generation Log
Let us start with the way these logs are generated.
Recording Log:
When a script is being recorded, the Virtual User Generator records all the communication that happened between the client and the server into a log called the "Recording Log." Though this is not in a very readable format, the recording log is the base for the generation log.
The option Regenerate Script (navigate to Tools -> Regenerate Script) works purely from the recording log. If the recording log is missing, the script cannot be regenerated with different recording options.
Generation Log:
The Generation Log contains information about the recording options used for the script; the request header, request body, response header and response body; and the function that simulates each request. This function may vary based on the recording options used.
The generation log content may change based on the recording options used; the recording log is the input file for the generation log.
Once generated, the contents of the recording and generation logs are not altered.
Replay Log
This log displays the output when the script is replayed. It is helpful for debugging and customizing the script. The contents of this log can be controlled through the run-time settings (Vuser -> Run-Time Settings -> Log -> either standard log or extended log).
Output functions like lr_output_message(), lr_log_message() and lr_error_message() write their content to the Replay Log.
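For example, extended logging can be switched on around a hard-to-debug step and off again; a sketch using the standard lr_set_debug_message() constants:

// Turn extended logging on just for the problematic step
lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG, LR_SWITCH_ON);

lr_output_message("Written to the Replay Log at information level");
lr_error_message("Written to the Replay Log at error level");

// Turn extended logging back off
lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG, LR_SWITCH_OFF);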
Correlation Results
Once a script is recorded and replayed, the script can be verified for dynamic data. The option "Scan for correlations" is used to compare the recording and replay data and to highlight any dynamic data. The script can be correlated directly from this tab; this is one form of auto-correlating the script.
As it compares the recording-time data and the replay data, it is always necessary to have the "data" folder inside the script folder.
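Whether added by "Scan for correlations" or by hand, the resulting code is the same: a web_reg_save_param() placed before the request whose response contains the dynamic value (the boundaries and URLs below are hypothetical):

// Capture the dynamic session id from the NEXT server response
web_reg_save_param("SessionId",
    "LB=sessionid=",
    "RB=\"",
    "Ord=1",
    LAST);

web_url("login_page", "URL=http://www.example.com/login", LAST);

// Re-use the captured value in a later request
web_url("account",
    "URL=http://www.example.com/account?sessionid={SessionId}",
    LAST);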


Context based Recording Vs Context less Recording and HTML Vs URL based recording

To understand "context based recording": if someone asks you the question "How is he doing?", you would definitely ask "Whom are you referring to?". But if the same question is asked during a discussion about one of your friends whose name is Karthik, you would not ask that question, because in the current context HE refers to Karthik. So you can understand who "HE" is.

The context-less question would be "How is Karthik doing?". There won't be any more questions, because you are explicitly pointing to a person and not using generic terms like "he" or "it". This is called context-less mode.

In the above example, "HE" refers to Karthik only for that discussion; in a different discussion, "HE" may refer to someone else. The context lasts only for the discussion.
There are two recording modes available in LoadRunner to record the user actions of a web application. They are:
- HTML mode
- URL mode
HTML mode – In this mode of recording, each USER ACTION is recorded as a separate request. To put it in simple terms, each HTML content (usually a page, except in the case of HTML frames) is recorded as a request.

If all the user actions are recorded into a single Action of the script, then the HTML mode of recording acts as "context based recording".
The Virtual User Generator understands the context by looking at the previous request's response. Hence it identifies the forms, images, etc., and you would notice the below functions in a context based recording of the script:
web_submit_form() - to simulate a POST Request
web_image(), web_link() - to simulate a GET request

In VuGen, the context is applicable only within the Action. If you record a user action into a different Action, the context is reset and has to be created again.
In a case where the tool has to enter some data in a form, and the form is not found in the previous response, the tool will halt the execution of the script. Every request is directly dependent on the previous request's response, and there is a high chance of failure when the UI of the web application changes.
The advantage of using the HTML recording mode is that the script size is very compact and the customization effort is very low. The other side of the coin is that, with UI changes to the web application, these scripts require very high maintenance.
URL mode: In this mode of recording, each resource requested by the user is recorded as a separate request. In other words, whatever content makes up the HTML page (images, CSS, JS, HTML) is recorded as a separate request. When a web site is launched, apart from the HTML content there are a lot of images, JavaScript files and CSS files downloaded. All these are called resources, and each resource is recorded as a separate request.
URL mode is always context-less recording, because this mode refers directly to the data and the URL of a request instead of depending on the previous response. This mode does not depend on the UI of the application, but rather on the requests associated with the user actions performed on the UI. As each resource is recorded, the size of the script is very large, and it also involves a lot of customization. The benefit of URL-mode scripts is that the maintenance cost associated with them is very low, and they can be used across various releases of the product despite a lot of UI changes.
Usually the URL mode of recording is used with non-browser applications; in other words, any thick client activity is recorded using the URL mode.
Trade-off – within the HTML recording mode, another option is available under "Advanced options": the Explicit URLs only mode.
The benefits of the HTML mode and the URL mode are clubbed together in HTML -> Explicit URLs only. With this mode, the size of the script is compact (as only the user action is recorded as a request, not at the UI level but at the request level); it requires a bit more customization effort but has the advantage of high maintainability over longer durations. This is the most recommended recording mode for web applications where the scripts have to be maintained for long periods.
Tip: Have you forgotten to record the script using the HTML -> Explicit URLs only mode? No problem.
Change the recording options to HTML -> Explicit URLs only and then navigate to Tools -> Regenerate Script. The regenerated script is as fresh as a script recorded with HTML Explicit URLs only. But do remember that whatever changes were made to the script will be gone if the script is regenerated.

Date Randomization in LoadRunner

One of the common requirements that a performance tester may experience is randomizing the date.
For example, in online booking of train or air tickets, we may want to book a ticket for a future date, which can be anywhere between tomorrow and 60 days from today. If the date is constant, it can simply be parameterized in LoadRunner. If it is not a fixed date and needs to be randomized, the code below will be helpful.
rand() is a C function that is used to generate a random number.
rand()%30 gives us a random number anywhere between 0 and 29.

int randNum;

randNum = rand() % 30 + 1;  // gives a random number anywhere between 1 and 30
lr_save_datetime("%d/%m/%Y", DATE_NOW + ONE_DAY * randNum, "JrnyDate");
lr_save_datetime() is the LoadRunner function to save a date in a specified format to a LoadRunner parameter.
In the above example, the date in the format Day/Month/Year (25/05/2012) is saved to a LoadRunner parameter called "JrnyDate". DATE_NOW gives us the current date, and ONE_DAY*randNum gives us a future random date.
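
The saved parameter can then be referenced like any other LoadRunner parameter, for example in a hypothetical booking request:

web_submit_data("book_ticket",
    "Action=http://www.example.com/book",
    "Method=POST",
    ITEMDATA,
    "Name=journeyDate", "Value={JrnyDate}", ENDITEM,
    LAST);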