Thursday, February 15, 2007

Loadrunner, QTP and Winrunner

Load / Stress Testing of Websites
1. The Importance of Scalability & Load Testing
Some very high profile websites have suffered from serious outages and/or performance issues due to the number of people hitting their website. E-commerce sites that spent heavily on advertising but not nearly enough on ensuring the quality or reliability of their service have ended up with poor web-site performance, system downtime and/or serious errors, with the predictable result that customers were lost.

In the case of toysrus.com, its web site couldn't handle the approximately 1000 percent increase in traffic that their advertising campaign generated. Similarly, Encyclopaedia Britannica was unable to keep up with the number of users during the weeks immediately following its promotion of free access to its online database. The truth is, these problems could probably have been prevented, had adequate load testing taken place.

When creating an eCommerce portal, companies will want to know whether their infrastructure can handle the predicted levels of traffic, to measure performance and verify stability.

These types of services include Scalability / Load / Stress testing, as well as Live Performance Monitoring.

Load testing tools can be used to test the system behaviour and performance under stressful conditions by emulating thousands of virtual users. These virtual users stress the application even harder than real users would, while monitoring the behaviour and response times of the different components. This enables companies to minimise test cycles and optimise performance, hence accelerating deployment, while providing a level of confidence in the system. Once launched, the site can be regularly checked using Live Performance Monitoring tools to monitor site performance in real time, in order to detect and report any performance problems - before users can experience them.
2. Preparing for a Load Test
The first step in designing a Web site load test is to measure as accurately as possible the current load levels.
Measuring Current Load Levels
The best way to capture the nature of Web site load is to identify and track (e.g. using a log analyzer) a set of key user session variables that are applicable and relevant to your Web site traffic.
Some of the variables that could be tracked include:
the length of the session (measured in pages)
the duration of the session (measured in minutes and seconds)
the type of pages that were visited during the session (e.g., home page, product information page, credit card information page etc.)
the typical/most popular ‘flow’ or path through the website
the % of ‘browse’ vs. ‘purchase’ sessions
the % type of users (new user vs. returning registered user)


Measure how many people visit the site per week/month or day. Then break down these current traffic patterns into one-hour time slices, and identify the peak-hours (i.e. if you get lots of traffic during lunch time etc.), and the numbers of users during those peak hours. This information can then be used to estimate the number of concurrent users on your site.
3. Concurrent Users
Although your site may be handling x number of users per day, only a small percentage of these users would be hitting your site at the same time. For example, if you have 3000 unique users hitting your site on one day, all 3000 are not going to be using the site between 11.01 and 11.05 am.
So, once you have identified your peak hour, divide this hour into 5 or 10 minute slices [you should use your own judgement here, based on the length of the average user session] to get the number of concurrent users for that time slice.
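The arithmetic above can be sketched in a few lines of Python. This is an illustrative estimate only: the figures and the assumption that sessions are spread evenly across the peak hour are mine, not from any particular tool.

```python
def estimate_concurrent_users(sessions_in_peak_hour, avg_session_minutes,
                              slice_minutes=10):
    """Rough estimate of concurrent users within a peak hour.

    Assumes sessions are spread evenly across the hour; a session that
    lasts longer than one slice overlaps neighbouring slices, so the
    count is scaled by the session-length-to-slice-length ratio.
    """
    sessions_per_slice = sessions_in_peak_hour * slice_minutes / 60.0
    overlap_factor = max(avg_session_minutes / slice_minutes, 1.0)
    return round(sessions_per_slice * overlap_factor)

# Example: 600 sessions in the peak hour, 15-minute average session,
# 10-minute slices -> 100 sessions starting per slice, overlapping by
# a factor of 1.5 -> roughly 150 concurrent users.
print(estimate_concurrent_users(600, 15))  # 150
```

In practice you would feed in the figures from your own log analysis rather than these sample numbers.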
4. Estimating Target Load Levels
Once you have identified the current load levels, the next step is to understand as accurately and as objectively as possible the nature of the load that must be generated during the testing.

Using the current usage figures, estimate how many people will visit the site per week/month or day. Then break that number down to arrive at realistic peak-hour scenarios.

It is important to understand the volume patterns, and to determine what load levels your web site might be subjected to (and must therefore be tested for).

There are four key variables that must be understood in order to estimate target load levels:
how the overall amount of traffic to your Web site is expected to grow
the peak load level which might occur within the overall traffic
how quickly the number of users might ramp up to that peak load level
how long that peak load level is expected to last

Once you have an estimate of overall traffic growth, you’ll need to estimate the peak level you might expect within that overall volume.
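A simple projection that combines the growth and peak variables above might look like the following sketch. The growth rate, planning horizon and safety margin are hypothetical placeholders you would replace with your own figures.

```python
def project_peak_load(current_peak_sessions_hr, annual_growth_rate,
                      months_ahead, safety_margin=1.25):
    """Project a target peak load from the current peak level.

    Compounds the expected annual growth out to the planning horizon,
    then applies a safety margin so the test exceeds the expected peak.
    """
    grown = current_peak_sessions_hr * (1 + annual_growth_rate) ** (months_ahead / 12.0)
    return round(grown * safety_margin)

# Example: current peak of 1000 sessions/hr, 50% expected annual growth,
# tested 12 months ahead with a 25% safety margin.
print(project_peak_load(1000, 0.5, 12))  # 1875
```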

5. Estimating Test Duration
The duration of the peak is also very important: a Web site that deals very well with a peak level for five or ten minutes may crumble if that same load level is sustained for longer. You should use the length of the average user session as a base for determining the load test duration.

6. Ramp-up Rate
As mentioned earlier, although your site may be handling x number of users per day, only a small percentage of these users would be hitting your site at the same time.

Therefore, when preparing your load test scenario, you should take into account the fact that users will hit the website at different times, and that during your peak hour the number of concurrent users will likely gradually build up to reach the peak number of users, before tailing off as the peak hour comes to a close.

The rate at which the number of users builds up, the "Ramp-up Rate", should be factored into the load test scenarios (i.e. you should not just jump to the maximum value, but increase the load in a series of steps).
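A stepped ramp-up of this kind can be expressed as a simple schedule. This is a sketch; the step count and hold time are assumptions you would tune to your own peak-hour profile.

```python
def ramp_schedule(peak_users, steps, hold_minutes):
    """Return (minute_offset, user_count) pairs stepping up to the peak.

    Each step adds an equal share of the peak load and holds it for
    hold_minutes before the next increase.
    """
    schedule = []
    for i in range(1, steps + 1):
        users = round(peak_users * i / steps)
        schedule.append((hold_minutes * (i - 1), users))
    return schedule

# Ramp to 200 concurrent users in 4 steps, holding each for 5 minutes:
print(ramp_schedule(200, 4, 5))
# [(0, 50), (5, 100), (10, 150), (15, 200)]
```

Most load test tools (LoadRunner included) let you configure an equivalent ramp directly in the scenario settings.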

7. Scenario Identification
The information gathered during the analysis of the current traffic is used to create the scenarios that are to be used to load test the web site.
The identified scenarios aim to accurately emulate the behavior of real users navigating through the Web site.
For example, a seven-page session that results in a purchase is going to create more load on the Web site than a seven-page session that involves only browsing. A browsing session might only involve the serving of static pages, while a purchase session will involve a number of elements, including the inventory database, the customer database, a credit card transaction with verification going through a third-party system, and a notification email. A single purchase session might put as much load on some of the system's resources as twenty browsing sessions.
Similar reasoning may apply to purchases from new vs. returning users. A new user purchase might involve a significant amount of account setup and verification —something existing users may not require. The database load created by a single new user purchase may equal that of five purchases by existing users, so you should differentiate the two types of purchases.
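One way to reason about such a mix is to assign each session type a relative "load unit" weight and sum over the scenario. The weights below simply restate the ratios suggested above (one purchase ≈ twenty browses; one new-user purchase ≈ five existing-user purchases) and are illustrative, not measured values.

```python
# Hypothetical relative back-end load per session type.
LOAD_UNITS = {
    "browse": 1,               # mostly static pages
    "purchase_existing": 20,   # ~20x a browse session
    "purchase_new": 100,       # ~5x an existing-user purchase
}

def scenario_load_units(session_counts):
    """Total relative load for a mix of session types."""
    return sum(LOAD_UNITS[kind] * n for kind, n in session_counts.items())

# A peak-hour mix of 900 browses, 80 existing-user purchases and
# 20 new-user purchases:
mix = {"browse": 900, "purchase_existing": 80, "purchase_new": 20}
print(scenario_load_units(mix))  # 4500
```

The point of the exercise is that the 100 purchase sessions here contribute far more load than the 900 browse sessions, which is why the scenario mix matters as much as the raw user count.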
8. Script Preparation
Next, program your load test tool to run each scenario with the appropriate numbers and types of users playing back concurrently, to give you the full load scenario.

The key elements of a load test design are:

test objective
pass/fail criteria
script description
scenario description

Load Test Objective
The objective of this load test is to determine if the Web site, as currently configured, will be able to handle the X number of sessions/hr peak load level anticipated. If the system fails to scale as anticipated, the results will be analyzed to identify the bottlenecks.

Pass/Fail Criteria
The load test will be considered a success if the Web site will handle the target load of X number of sessions/hr while maintaining the pre-defined average page response times (if applicable). The page response time will be measured and will represent the elapsed time between a page request and the time the last byte is received.
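The pass/fail check on response times can be sketched as follows. The threshold and sample figures are placeholders; the definition of response time (request sent to last byte received) follows the text above.

```python
def page_response_passes(samples_ms, threshold_ms):
    """Pass if the average page response time (elapsed time from page
    request to last byte received) stays within the pre-defined threshold.
    """
    avg = sum(samples_ms) / len(samples_ms)
    return avg <= threshold_ms

# Three sampled page loads against a hypothetical 1100 ms target:
print(page_response_passes([800, 1200, 1000], 1100))  # True
print(page_response_passes([800, 1200, 1600], 1100))  # False
```

In a real report you would normally also look at percentiles, not just the average, since a good mean can hide a slow tail.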

Since in most cases the user sessions follow just a few navigation patterns, you will not need hundreds of individual scripts to achieve realism—if you choose carefully, a dozen scripts will take care of most Web sites.

9. Script Execution
Scripts should be combined to describe a load testing scenario. A basic scenario includes the scripts that will be executed, the percentages in which those scripts will be executed, and a description of how the load will be ramped up.
By emulating multiple business processes, the load testing can generate a load equivalent to X numbers of virtual users on a Web application. During these load tests, real-time performance monitors are used to measure the response times for each transaction and check that the correct content is being delivered to users. In this way, they can determine how well the site is handling the load and identify any bottlenecks.
The execution of the scripts opens X number of HTTP sessions (each simulating a user) with the target Web site and replays the scripts over and over again. Every few minutes it adds X more simulated users and continues to do so until the web site fails to meet a specific performance goal.
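The "add users until the goal is missed" loop can be simulated to show the logic. The quadratic response-time model below is entirely synthetic, standing in for measurements a real tool would take against the live system.

```python
def find_capacity(step_users, max_users, response_time_at, goal_ms):
    """Keep adding simulated users until the performance goal is missed;
    return the last load level that still met the goal.

    response_time_at is a callable mapping a user count to a measured
    (here: modelled) response time in milliseconds.
    """
    passed = 0
    users = step_users
    while users <= max_users:
        if response_time_at(users) > goal_ms:
            break  # web site failed to meet the performance goal
        passed = users
        users += step_users
    return passed

# Synthetic model: response time grows quadratically with load.
model = lambda u: 200 + 0.02 * u * u
print(find_capacity(50, 1000, model, 2000))  # 300
```

Against this model the site meets a 2000 ms goal up to 300 simulated users and misses it at 350, so 300 is reported as the capacity.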

10. System Performance Monitoring
It is vital during the execution phase to monitor all aspects of the website. This includes measuring and monitoring the CPU usage and performance of the various components of the website – i.e. not just the webserver, but the database and other parts as well (such as firewalls, load balancing tools etc.)
For example, one etailer whose site fell over (apparently due to high load) discovered, when analysing the performance bottlenecks, that the webserver had in fact only been operating at 50% of capacity. Further investigation revealed that the credit card authorisation engine was the cause of the failure: it was not responding quickly enough, and the site fell over while waiting on too many outstanding responses from the authorisation engine. They resolved the issue by changing the authorisation engine, and by amending the website code so that any future problems with authorisation responses would not crash the site.
Similarly, another ecommerce site found that the performance issues that they were experiencing were due to database performance issues – while the webserver CPU usage was only at 25%, the backend db server CPU usage was 86%. Their solution was to upgrade the db server.
Therefore, it is necessary to use (install if necessary) performance monitoring tools to check each aspect of the website architecture during the execution phase.
11. Suggested Execution Strategy:
Start with a test at 50% of the expected virtual user capacity for 15 minutes and a medium ramp rate. The different members of the team [testers will also need to be monitoring the CPU usage during the testing] should be able to check whether your website is handling the load efficiently or some resources are already showing high utilization.
After making any system adjustments, run the test again or proceed to 75% of expected load. Continue with the testing and proceed to 100%; then up to 150% of the expected load, while monitoring and making the necessary adjustments to your system as you go along.
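The staged strategy above reduces to a short list of load levels. A trivial helper makes the steps explicit; the 50/75/100/150% fractions come from the text, while the expected-user figure is a placeholder.

```python
def execution_steps(expected_users, fractions=(0.5, 0.75, 1.0, 1.5)):
    """Load levels for a staged execution strategy: 50%, 75%, 100% and
    150% of the expected virtual user capacity."""
    return [round(expected_users * f) for f in fractions]

# With an expected capacity of 400 virtual users:
print(execution_steps(400))  # [200, 300, 400, 600]
```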
12. Results Analysis
Often the first indication that something is wrong is that end-user response times start to climb. Knowing which pages are failing will help you narrow down where the problem lies.
Whichever load test tool you use, it will need to produce reports that will highlight the following:

• Page response time by load level
• Completed and abandoned sessions by load level
• Page views and page hits by load level
• HTTP and network errors by load level
• Concurrent users by minute
• Missing links report, if applicable
• Full detailed report, including response time by page and by transaction, lost sales opportunities, analysis and recommendations
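Several of these reports are just aggregations of the tool's raw samples by load level. As a sketch (the sample tuples are invented, and a real tool exports far richer data), average page response time by load level could be computed like this:

```python
from collections import defaultdict

def response_time_by_load_level(samples):
    """samples: (load_level, page, response_ms) tuples from a raw test log.

    Returns average response time (ms) keyed by load level.
    """
    buckets = defaultdict(list)
    for level, _page, ms in samples:
        buckets[level].append(ms)
    return {level: sum(times) / len(times)
            for level, times in sorted(buckets.items())}

samples = [(100, "home", 500), (100, "cart", 700), (200, "home", 900)]
print(response_time_by_load_level(samples))  # {100: 600.0, 200: 900.0}
```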

13. Important Considerations
When testing websites, it is critically important to test from outside the firewall. In addition, web-based load testing services, based outside the firewall, can identify bottlenecks that are only found by testing in this manner.
Web-based stress testing of web sites is therefore more accurate when it comes to measuring a site's capacity constraints.
Web traffic is rarely uniformly distributed, and most Web sites exhibit very noticeable peaks in their volume patterns. Typically, there are a few points in time (one or two days out of the week, or a couple of hours each day) when the traffic to the Web site is highest.

Automated Testing Detail Test Plan
Automated Testing DTP Overview
This Automated Testing Detail Test Plan (ADTP) will identify the specific tests that are to be performed to ensure the quality of the delivered product. System/Integration Test ensures the product functions as designed and all parts work together. This ADTP will cover information for Automated testing during the System/Integration Phase of the project and will map to the specification or requirements documentation for the project. This mapping is done in conjunction with the Traceability Matrix document, which should be completed along with the ADTP and is referenced in this document.
This ADTP refers to the specific portion of the product known as PRODUCT NAME. It provides clear entry and exit criteria, and roles and responsibilities of the Automated Test Team are identified such that they can execute the test.
The objectives of this ADTP are:
• Describe the test to be executed.
• Identify and assign a unique number for each specific test.
• Describe the scope of the testing.
• List what is and is not to be tested.
• Describe the test approach detailing methods, techniques, and tools.
• Outline the Test Design including:
• Functionality to be tested.
• Test Case Definition.
• Test Data Requirements.
• Identify all specifications for preparation.
• Identify issues and risks.
• Identify actual test cases.
• Document the design point
Test Identification
This ADTP is intended to provide information for System/Integration Testing for the PRODUCT NAME module of the PROJECT NAME. The test effort may be referred to by its PROJECT REQUEST (PR) number and its project title for tracking and monitoring of the testing progress.

Test Purpose and Objectives
Automated testing during the System/Integration Phase as referenced in this document is intended to ensure that the product functions as designed directly from customer requirements. The testing goal is to identify the quality of the structure, content, accuracy and consistency, some response times and latency, and performance of the application as defined in the project documentation.

Assumptions, Constraints, and Exclusions
Factors which may affect the automated testing effort, and may increase the risk associated with the success of the test include:
• Completion of development of front-end processes
• Completion of design and construction of new processes
• Completion of modifications to the local database
• Movement or implementation of the solution to the appropriate testing or production environment
• Stability of the testing or production environment
• Load Discipline
• Maintaining recording standards and automated processes for the project
• Completion of manual testing through all applicable paths to ensure that reusable automated scripts are valid

Entry Criteria
The ADTP is complete, excluding actual test results. The ADTP has been signed-off by appropriate sponsor representatives indicating consent of the plan for testing. The Problem Tracking and Reporting tool is ready for use. The Change Management and Configuration Management rules are in place.
The environment for testing, including databases, application programs, and connectivity has been defined, constructed, and verified.

Exit Criteria

In establishing the exit/acceptance criteria for the Automated Testing during the System/Integration Phase of the test, the Project Completion Criteria defined in the Project Definition Document (PDD) should provide a starting point. All automated test cases have been executed as documented. The percent of successfully executed test cases met the defined criteria. Recommended criteria: No Critical or High severity problem logs remain open and all Medium problem logs have agreed upon action plans; successful execution of the application to validate accuracy of data, interfaces, and connectivity.
Pass/Fail Criteria
The results for each test must be compared to the pre-defined expected test results, as documented in the ADTP (and DTP where applicable). The actual results are logged in the Test Case detail within the Detail Test Plan if those results differ from the expected results. If the actual results match the expected results, the Test Case can be marked as a passed item, without logging the duplicated results.
A test case passes if it produces the expected results as documented in the ADTP or Detail Test Plan (manual test plan). A test case fails if the actual results produced by its execution do not match the expected results. The source of failure may be the application under test, the test case, the expected results, or the data in the test environment. Test case failures must be logged regardless of the source of the failure. Any bugs or problems will be logged in the DEFECT TRACKING TOOL.
The responsible application resource corrects the problem and tests the repair. Once this is complete, the tester who generated the problem log is notified, and the item is re-tested. If the retest is successful, the status is updated and the problem log is closed.
If the retest is unsuccessful, or if another problem has been identified, the problem log status is updated and the problem description is updated with the new findings. It is then returned to the responsible application personnel for correction and test.
Severity Codes are used to prioritize work in the test phase. They are assigned by the test group and are not modifiable by any other group. The standard Severity Codes used for identifying defects are:
Table 1 Severity Codes
1. Critical – Automated tests cannot proceed further within the applicable test case (no workaround).
2. High – The test case or procedure can be completed, but produces incorrect output when valid information is input.
3. Medium – The test case or procedure can be completed and produces correct output when valid information is input, but produces incorrect output when invalid information is input (e.g. if the specification allows no special characters, but the system lets a user continue after entering one, this is a medium severity).
4. Low – All test cases and procedures passed as written, but there could be minor revisions, cosmetic changes, etc. These defects do not impact functional execution of the system.
The use of the standard Severity Codes produces four major benefits:
• Standard Severity Codes are objective and can be easily and accurately assigned by those executing the test. Time spent in discussion about the appropriate priority of a problem is minimized.
• Standard Severity Code definitions allow an independent assessment of the risk to the on-schedule delivery of a product that functions as documented in the requirements and design documents.
• Use of the standard Severity Codes works to ensure consistency in the requirements, design, and test documentation with an appropriate level of detail throughout.
• Use of the standard Severity Codes promotes effective escalation procedures.

Test Scope
The scope of testing identifies the items which will be tested and the items which will not be tested within the System/Integration Phase of testing.
Items to be tested by Automation (PRODUCT NAME ...)
Items not to be tested by Automation(PRODUCT NAME ...)

Test Approach
Description of Approach
Automated testing is the process of identifying recordable test cases through all appropriate paths of a website, creating repeatable scripts, interpreting test results, and reporting to project management. For the Generic Project, the automation test team will focus on positive testing and will complement the manual testing undergone on the system. Automated test results will be generated, formatted into reports and provided on a consistent basis to Generic project management.
System testing is the process of testing an integrated hardware and software system to verify that the system meets its specified requirements. It verifies proper execution of the entire set of application components including interfaces to other applications. Project teams of developers and test analysts are responsible for ensuring that this level of testing is performed.
Integration testing is conducted to determine whether or not all components of the system are working together properly. This testing focuses on how well all parts of the web site hold together, whether the internal and external parts of the website are working, and whether all parts of the website are connected. Project teams of developers and test analysts are responsible for ensuring that this level of testing is performed.
For this project, the System and Integration ADTP and Detail Test Plan complement each other.
Since the goal of the System and Integration phase testing is to identify the quality of the structure, content, accuracy and consistency, response time and latency, and performance of the application, test cases are included which focus on determining how well this quality goal is accomplished.
Content testing focuses on whether the content of the pages match what is supposed to be there, whether key phrases exist continually in changeable pages, and whether the pages maintain quality content from version to version.
Accuracy and consistency testing focuses on whether today’s copies of the pages download the same as yesterday’s, and whether the data presented to the user is accurate enough.
Response time and latency testing focuses on whether the web site server responds to a browser request within certain performance parameters, whether response time after a SUBMIT is acceptable, or whether parts of a site are so slow that the user discontinues working. Although Loadrunner provides the full measure of this test, various ad hoc time measurements will be taken within certain Winrunner scripts as needed.
Performance testing (Loadrunner) focuses on whether performance varies by time of day or by load and usage, and whether performance is adequate for the application.
Completion of automated test cases is denoted in the test cases with indication of pass/fail and follow-up action.
Test Definition
This section addresses the development of the components required for the specific test. Included are identification of the functionality to be tested by automation, the associated automated test cases and scenarios. The development of the test components parallels, with a slight lag, the development of the associated product components.

Test Functionality Definition (Requirements Testing)
The functionality to be tested by automation is listed in the Traceability Matrix, attached as an appendix. For each function to undergo testing by automation, the Test Case is identified. Automated Test Cases are given unique identifiers to enable cross-referencing between related test documentation, and to facilitate tracking and monitoring the test progress.
As much information as is available is entered into the Traceability Matrix in order to complete the scope of automation during the System/Integration Phase of the test.

Test Case Definition (Test Design)
Each Automated Test Case is designed to validate the associated functionality of a stated requirement. Automated Test Cases include unambiguous input and output specifications. This information is documented within the Automated Test Cases in Appendix 8.5 of this ADTP.

Test Data Requirements
The automated test data required for the test is described below. The test data will be used to populate the data bases and/or files used by the application/system during the System/Integration Phase of the test. In most cases, the automated test data will be built by the OTS Database Analyst or OTS Automation Test Analyst.

Automation Recording Standards
Initial Automation Testing Rules for the Generic Project:
1. Ability to move through all paths within the applicable system
2. Ability to identify and record the GUI Maps for all associated test items in each path
3. Specific times for loading into automation test environment
4. Code frozen between loads into automation test environment
5. Minimum acceptable system stability
Winrunner Menu Settings
1. Default recording mode is CONTEXT SENSITIVE
2. Record owner-drawn buttons as OBJECT
3. Maximum length of list item to record is 253 characters
4. Delay for Window Synchronization is 1000 milliseconds (unless Loadrunner is operating in same environment and then must increase appropriately)
5. Timeout for checkpoints and CS statements is 1000 milliseconds
6. Timeout for Text Recognition is 500 milliseconds
7. All scripts will stop and start on the main menu page
8. All recorded scripts will remain short, as this makes debugging easier. However, the entire script, or portions of scripts, can be added together for long runs once the environment has greater stability.

Winrunner Script Naming Conventions
1. All automated scripts will begin with GE abbreviation representing the Generic Project and be filed under the Winrunner on LAB11 W Drive/Generic/Scripts Folder.
2. GE will be followed by the Product Path name in lower case: air, htl, car
3. After the automated scripts have been debugged, a date for the script will be attached: 0710 for July 10. When significant improvements have been made to the same script, the date will be changed.
4. As incremental improvements have been made to an automated script, version numbers will be attached signifying the script with the latest improvements: eg. XX0710.1 XX0710.2 The .2 version is the most up-to-date
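The naming convention above is mechanical enough to express as a small helper. This is a sketch of the stated convention (GE prefix, lower-case product path, MMDD date, optional version suffix); the function name and example values are mine.

```python
def script_name(product_path, mmdd, version=None):
    """Build a WinRunner script name per the project convention:
    GE + lower-case product path + MMDD date, with an optional
    .n version suffix for incremental improvements."""
    name = f"GE{product_path.lower()}{mmdd}"
    return f"{name}.{version}" if version is not None else name

print(script_name("air", "0710", 2))  # GEair0710.2
print(script_name("HTL", "0710"))     # GEhtl0710
```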

Winrunner GUIMAP Naming Conventions
1. All Generic GUI Maps will begin with XX followed by the area of test, e.g. the XXpond GUI Map represents all pond paths, the XXEmemmainmenu GUI Map represents all membership and main menu concerns, and the XXlogin GUI Map represents all XX login concerns.
2. As there can only be one GUI Map for each Object, etc on the site, they are under constant revision when the site is undergoing frequent program loads.

Winrunner Result Naming Conventions
1. When beginning a script, allow default res## name to be filed
2. After a successful run of a script where the results will be used toward a report, move file to results and rename: XX for project name, res for Test Results, 0718 for the date the script was run, your initials and the original default number for the script. Eg. XXres0718jr.1

Winrunner Report Naming Conventions
1. When the accumulation of test result(s) files for the day are formulated, and the statistics are confirmed, a report will be filed that is accessible by upper management. The daily Report file will be as follows: XXdaily0718 XX for project name, daily for daily report, and 0718 for the date the report was issued.
2. When the accumulation of test result(s) files for the week are formulated, and the statistics are confirmed, a report will be filed that is accessible by upper management. The weekly Report file will be as follows: XXweek0718 XX for project name, week for weekly report, and 0718 for the date the report was issued.
Winrunner Script, Result and Report Repository
1. LAB 11, located within the XX Test Lab, will house the original Winrunner Script, Results and Report Repository for automated testing within the Generic Project. WRITE access is granted Winrunner Technicians and READ ONLY access is granted those who are authorized to run scripts but not make any improvements. This is meant to maintain the purity of each script version.
2. Winrunner on LAB11 W Drive houses all Winrunner related documents, etc for XX automated testing.
3. Project file folders for the Generic Project represent the initial structure of project folders utilizing automated testing. As our automation becomes more advanced, the structure will spread to other appropriate areas.
4. Under each Project file folder, a folder for SCRIPT, RESULT and REPORT can be found.
5. All automated scripts generated for each project will be filed under Winrunner on LAB11 W Drive/Generic/Scripts Folder and moved to folder ARCHIVE SCRIPTS as necessary
6. All GUI MAPS generated will be filed under Winrunner on LAB11 W Drive/Generic/Scripts/gui_files Folder.
7. All automated test results are filed under the individual Script Folder after each script run. Results will be referred to and reports generated utilizing applicable statistics. Automated Test Results referenced by reports sent to management will be kept under the Winrunner on LAB11 W Drive/Generic/Results Folder. Before work on evaluating a new set of test results is begun, all prior results are placed into Winrunner on LAB11 W Drive/Generic/Results/Archived Results Folder. This will ensure all reported statistics are available for closer scrutiny when required.
8. All reports generated from automated scripts and sent to upper management will be filed under Winrunner on LAB11 W Drive/Generic/Reports Folder

Test Preparation Specifications
Test Environment
Environment for Automated Test
Automated Test environment is indicated below. Existing dependencies are entered in comments.

Environment        Test System                           Comments
Test               System/Integration Test (SIT) Cert    Access via http://xxxxx/xxxxx
Production         Production                            Access via http:// www.xxxxxx.xxx
Other (specify)    Development                           Individual Test Environments
Hardware for Automated Test
The following is a list of the hardware needed to create production like environment:
Manufacturer Device Type
Various Personal Computer (486 or Higher) with monitor & required peripherals; with connectivity to internet test/production environments. Must be enabled to ADDITIONAL REQUIREMENTS.
Software
The following is a list of the software needed to create a production like environment:
Software Version (if applicable) Programmer Support
Netscape Navigator ZZZ or higher -
Internet Explorer ZZZ or higher -
Test Team Roles and Responsibilities
Test Team Roles and Responsibilities
Role Responsibilities Name
COMPANY NAME Sponsor Approve project development, handle major issues related to project development, and approve development resources Name, Phone
XXX Sponsor Signature approval of the project, handle major issues Name, Phone
XXX Project Manager Ensures all aspects of the project are being addressed from CUSTOMERS’ point of view Name, Phone
COMPANY NAME Development Manager Manage the overall development of project, including obtaining resources, handling major issues, approving technical design and overall timeline, delivering the overall product according to the Partner Requirements Name, Phone
COMPANY NAME Project Manager Provide PDD (Project Definition Document), project plan, status reports, track project development status, manage changes and issues Name, Phone
COMPANY NAME Technical Lead Provide Technical guidance to the Development Team and ensure that overall Development is proceeding in the best technical direction Name, Phone
COMPANY NAME Back End Services Manager Develop and deliver the necessary Business Services to support the PROJECT NAME Name, Phone
COMPANY NAME Infrastructure Manager Provide PROJECT NAME development certification, production infrastructure, service level agreement, and testing resources Name, Phone
COMPANY NAME Test Coordinator Develops ADTP and Detail Test Plans, tests changes, logs incidents identified during testing, coordinates testing effort of test team for project Name, Phone
COMPANY NAME Tracker Coordinator/ Tester Tracks XXX’s in DEFECT TRACKING TOOL. Reviews new XXX’s for duplicates, completeness and assigns to Module Tech Leads for fix. Produces status documents as needed. Tests changes, logs incidents identified during testing. Name, Phone
COMPANY NAME Automation Engineer Tests changes, logs incidents identified during testing Name, Phone
Test Team Training Requirements
Automation Training Requirements
Training Requirement Training Approach Target Date for Completion Roles/Resources to be Trained
. . . .
. . . .

Automation Test Preparation
1. Write and receive approval of the ADTP from Generic Project management
2. Manually test the cases in the plan to make sure they actually work before recording repeatable scripts
3. Record appropriate scripts and file them according to the naming conventions described within this document
4. Initial order of automated script runs will be to load GUI Maps through a STARTUP script. After the successful run of this script, scripts testing all paths will be kicked off. Once an appropriate number of PNR’s are generated, GenericCancel scripts will be used to automatically take the inventory out of the test profile and system environment. During the automation test period, requests for testing of certain functions can be accommodated as necessary as long as these functions have the ability to be tested by automation.
5. The ability to use Generic Automation will be READ ONLY for anyone outside of the test group. Of course, this is required to maintain the pristine condition of master scripts on our data repository.
6. Generic Test Group will conduct automated tests under the rules specified in our agreement for use of the Winrunner tool marketed by Mercury Interactive.
7. Results filed for each run will be analyzed as necessary, reports generated, and provided to upper management.
Test Issues and Risks
Issues
The table below lists known project testing issues to date. Upon sign-off of the ADTP and Detail Test Plan, this table will no longer be maintained; these issues and all new issues will be tracked through the Issue Management System, as indicated in the project's approved Issue Management Process.
Issue Impact Target Date for Resolution Owner
COMPANY NAME test team is not in possession of market data regarding what browsers are most in use in CUSTOMER target market. Testing may not cover some browsers used by CLIENT customers Beginning of Automated Testing during System and Integration Test Phase CUSTOMER TO PROVIDE
OTHER . . .

Risks
The table below identifies any high impact or highly probable risks that may impact the success of the Automated testing process.
Risk Assessment Matrix
Risk Area Potential Impact Likelihood of Occurrence Difficulty of Timely Detection Overall Threat(H, M, L)
1. Unstable Environment Delayed Start HISTORY OF PROJECT Immediately .
2. Quality of Unit Testing Greater delays taken by automated scripts Dependent upon quality standards of development group Immediately .
3. Browser Issues Intermittent Delays Dependent upon browser version Immediately .
Risk Management Plan
Risk Area Preventative Action Contingency Plan Action Trigger Owner
1. Meet with Environment Group . . . .
2. Meet with Development Group . . . .
3. . . . .
Traceability Matrix
The purpose of the Traceability Matrix is to identify all business requirements and to trace each requirement through the project's completion.
Each business requirement must have an established priority as outlined in the Business Requirements Document.
They are:
Essential - Must satisfy the requirement to be accepted by the customer.
Useful - Value -added requirement influencing the customer's decision.
Nice-to-have - Cosmetic non-essential condition, makes product more appealing.
The Traceability Matrix will change and evolve throughout the entire project life cycle. The requirement definitions, priority, functional requirements, and automated test cases are subject to change and new requirements can be added. However, if new requirements are added or existing requirements are modified after the Business Requirements document and this document have been approved, the changes will be subject to the change management process.
The Traceability Matrix for this project will be developed and maintained by the test coordinator. At the completion of the matrix definition and the project, a copy will be added to the project notebook.

Functional Areas of Traceability Matrix
# Functional Area Priority
B1 Pond E
B2 River E
B3 Lake U
B4 Sea E
B5 Ocean E
B6 Misc U
B7 Modify E
L1 Language E
EE1 End-to-End Testing EE
Legend:
B = Order Engine
L = Language
N = Nice to have
EE = End-to-End
E = Essential
U = Useful
Definitions for Use in Testing
Test Requirement
A test requirement (or scenario) is a prose statement of requirements for the test. Just as there are high-level and detailed requirements in application development, there is a need for detailed requirements in the test development area.

Test Case
A test case is a transaction or list of transactions that will satisfy the requirements statement in a test scenario. The test case must contain the actual entries to be executed as well as the expected results, i.e., what a user entering the commands would see as a system response.

Test Procedure
Test procedures define the activities necessary to execute a test case or set of cases. Test procedures may contain information regarding the loading of data and executables into the test system, directions regarding sign in procedures, instructions regarding the handling of test results, and anything else required to successfully conduct the test.

Automated Test Cases
NAME OF FUNCTION Test Case
_______________________________________________________________________________________
Project Name/Number: Generic Project / Project Request #          Date:
Test Case Description: Check that all drop-down boxes, fill-in boxes and pop-up windows operate according to requirements on the main Pond web page.          Build #:          Run #:
Function / Module Under Test: B1.1          Execution Retry #:
Test Requirement # Case #: AB1.1.1 (A for Automated)
Written by:
Goals: Verify that the Pond module functions as required
Setup for Test: Access browser, Go to .. .
Pre-conditions: Login with name and password. When arrive at Generic Main Menu...
Step / Action / Expected Results / Pass/Fail / Actual Results if Step Fails
Step: Go to Pond and ..
Action: From the Generic Main Menu, click on the Pond gif and go to the Pond web page. Once on the Pond web page, check all drop-down boxes for appropriate information (e.g. Time: 7a, 8a in 1-hour increments), fill-in boxes (remarks allows alpha and numeric but no other special characters), and pop-up windows (e.g. Privacy: ensure it is retrieved, has the correct verbiage and closes).
Expected Results:
Pass/Fail:
Actual Results if Step Fails:
__________________________________________________________________________________


Each automation project team needs to write up an automation standards document covering the following:
• The installation configurations of the automation tool.
• How the client machines' environment will be set up.
• Where the network repositories and manual test plan documents are located.
• The drive letter that all client machines must map to.
• How the automation tool will be configured.
• The servers and databases the automation will run against.
• Any naming standards that the test procedures, test cases and test plans will follow.
• Any recording standards and scripting standards that all scripts must follow.
• What components of the product will be tested.
Installation Configuration
Install Step: Selection: Completed:
Installation Components Full
Destination Directory C:\sqa6
Type Of Repository Microsoft Access
Scripting Language SQA Basic only
Test Station Name Your PC Name
DLL messages Overlay all DLL's the system prompts for. Robot will not run without its own DLL's.

Client Machines Configuration
Configuration Item Setting: Notes:
Lotus Notes Shut down Lotus Notes before using Robot. This will prevent mail notification messages from interrupting your scripts and will allow Robot to have more memory.
Close all applications Close down all applications (except the SQA Robot recorder and the application you are testing). This will free up memory on the PC.
Shut down printing Select printer window from start menu Select File -> Server Properties Select Advance tab Un-check notify check box
Shut down printing Network Bring up dos prompt Select Z drive Type CASTOFF
Turn off Screensavers Select NONE or change it to 90 minutes
Display Settings for PC Set in Control Panel display application Colors - 256 Font Size - small Desktop 800 X 600 pixels
Map a Network drive to {LETTER} Bring up explorer and map a network drive to here.

Repository Creation
Item Information
Repository Name
Location
Mapped Drive Letter
Project Name
Users set up for Project Admin - no password
Sbh files used in projects scripts
Client Setup Options for the SQA Robot tool
Option Window Option Selection
Recording ID list selections by Contents
ID Menu selections by Text
Record unsupported mouse drags as Mouse click if within object
Window positions Record Object as text Auto record window size
While Recording Put Robot in background
Playback Test Procedure Control Delay Between: 5000 milliseconds
Partial Window Caption On Each window search
Caption Matching options Check - Match reverse captions Ignore file extensions Ignore Parenthesis
Test Log Test log Management Output Playback results to test log All details
Update SQA repository View test log after playback
Test Log Data Specify Test Log Info at Playback
Unexpected Window Detect Check
Capture Check
Playback response Select pushbutton with focus
On Failure to remove Abort playback
Wait States Wait Pos/Neg Region Retry - 4 Timeout after 90
Automatic wait Retry - 2 Timeout after 120
Keystroke option Playback delay 100 milliseconds; check record delay after Enter key
Error Recovery On Script command Failure Abort Playback
On test case failure Continue Execution
SQA trap Check all but last 2
Object Recognition Do not change
Object Data Test Definitions Do not change
Editor Leave with defaults
Preferences Leave with defaults
Identify what Servers and Databases the automation will run against.
This {Project name} will use the following Servers:
{Add servers}
On these Servers it will be using the following Databases:
{Add databases}



Naming standards for test procedures, cases and plans
The naming standards for this project are:

Recording standards and scripting standards
In order to ensure that scripts are compatible on the various clients and run with minimum maintenance, the following recording standards have been set for all recorded scripts.

1. Use assisting scripts to open and close applications and activity windows.
2. Use global constants to pass data into scripts and between scripts.
3. Make use of main menu selections over using double clicks, toolbar items and pop up menus whenever possible.
4. Each test procedure should have a manual test plan associated with it.
5. Do not Save in the test procedure unless it is absolutely necessary; this will prevent the need to write numerous clean-up scripts.
6. Do a window existence test for every window you open; this will prevent scripts dying from slow client/server calls.
7. Do not use the mouse for drop-down selections; whenever possible use hotkeys and the arrow keys.
8. When navigating through a window, use the tab and arrow keys instead of the mouse; this will make maintenance of scripts due to UI changes easier in the future.
9. Create a template header file called testproc.tpl. This file will insert template header information on the top of all scripts recorded. This template area can be used for modification tracking and commenting on the script.
10. Comment all major selections or events in the script. This will make debugging easier.
11. Make sure that you maximize all MDI main windows in login initial scripts.
12. When recording make sure you begin and end your scripts in the same position. Ex. On the platform browser always start your script opening the browser tree and selecting your activity (this will ensure that the activity window will always be in the same position), likewise always end your scripts with collapsing the browser tree.
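Item 6 above (a window existence test for every window you open) can be sketched in TSL roughly as follows; the window and button names here are hypothetical placeholders, not part of any real project:

```tsl
# guard every window operation with an existence test so the script
# does not die from a slow client/server call
if (win_exists ("Order Confirmation", 30) == E_OK)
{
    set_window ("Order Confirmation", 10);
    button_press ("OK");
}
else
{
    # report the failure instead of letting the script crash
    tl_step ("open_window", FAIL, "Order Confirmation window did not appear");
}
```

The else branch writes an explicit FAIL entry to the test results, which is far easier to diagnose afterwards than a script that simply aborts.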

Describe what components of the product will be tested.
This project will test the following components:
The objective is to:






WinRunner Fundamentals
The 5 major areas to know for WinRunner are listed below with SOME of the subtopics called out for each of the major topics:
1) GUI Map
- Learning objects
- Mapping custom objects to standard objects
2) Record/Playback
- Record modes: Context Sensitive and Analog
- Playback modes: (Batch), Verify, Update, Debug
3) Synchronization
- Using wait parameter of functions
- Wait window/object info
- Wait Bitmap
- Hard wait()
4) Verification/Checkpoints
- Window/object GUI checkpoints
- Bitmap checkpoints
- Text checkpoints (requires TSL)
5) TSL (Test Script Language)
- To enhance scripts (flow control, parameterization, data-driven tests, user-defined functions, ...)
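As a rough illustration of the synchronization calls listed above, a minimal TSL sketch; the window and object names are hypothetical placeholders:

```tsl
# synchronize on an object property instead of a hard wait:
# wait up to 10 seconds for the button to become enabled
set_window ("Flight Reservation", 10);
obj_wait_info ("Insert Order", "enabled", 1, 10);
button_press ("Insert Order");

# hard wait() is the blunt alternative and should be the last resort
wait (5);
```

Property-based waits resume the moment the condition is met, so playback stays both faster and more reliable than fixed sleeps.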
________________________________________
1. Calling Scripts and Expected Results
When running in non-batch mode, WinRunner will always look in the calling script's \exp directory for the checks. When running in batch mode, WinRunner will look in the called script's \exp directory.
There is a limitation, though. WinRunner will only look in the called script's \exp directory one call level deep. For example, in batch mode:
script1:
gui_check(...); #will look in script1\exp
call "script2" ();

script2:
gui_check(...); #will look in script2\exp
call "script3" ();

script3:
gui_check(...); #will look in script2\exp (and cause an error)

In non-batch mode:

script1:
gui_check(...); #will look in script1\exp
call "script2" ();

script2:
gui_check(...); #will look in script1\exp (and cause an error)
call "script3" ();

script3:
gui_check(...); #will look in script1\exp (and cause an error)
________________________________________
2. Run Modes
Batch mode will write results to the individual called test.
Interactive (non-batch) mode writes results to the main test.

________________________________________
3. Data Types
TSL supports two data types: numbers and strings, and you do not have to declare them. Look at the on-line help topic for some things to be aware of:
"TSL Language", "Variables and Constants", "Type (of variable or constant)"
Generally, you shouldn't see any problems with comparisons.
However, if you perform arithmetic operations you might see some unexpected behavior (again check out the on-line help mentioned above).
var="3abc4";
rc=var + 2; # rc will be 5 :-)
________________________________________
4. Debugging
When using pause(x); for debugging, wrap the variable in brackets to easily see whether "invisible" characters (i.e., \n, \t, space, or Null) are stored in the variable: pause("[" & x & "]");
Alternatively, use the debugging features of WinRunner to watch variables; "invisible" characters will show themselves (i.e., \n, \t, space).
Examples:
Variable pause(x); pause("[" & x & "]");
x="a1"; a1 [a1]
x="a1 "; a1 [a1 ]
x="a1\t"; a1 [a1 ]
x="a1\n"; a1 [a1
]
x=""; []
________________________________________
5. Block Comments
To temporarily comment out a block of code use:
if (TRUE)
{
... block of code to be commented out!!
}
________________________________________
6. Data-Driven Tests: ddt_* functions vs getline/split
Personally I do not care one way or the other about the ddt_* or getline/split functions. They both do almost the same thing. There are some arguably good benefits to using ddt_*, but most of them are focused on data management. In general you can always keep the data in Excel and perform a Save As to convert the file to a delimited text file.
One major difference is the performance of playing back a script that has a huge data file. The ddt_* functions currently cannot compare to the much faster getline/split method.
But here is an area to consider: READABILITY. I personally do not like scripts with too many nested function calls (which the parameterize-value method produces) because it may reduce readability for people without a programming background.
Example:
edit_set("FirstName", ddt_val(table, "FirstName"));
edit_set("LastName", ddt_val(table, "LastName"));
So what I typically do is, declare my own variables at the beginning of the script, assign the values to them, and use the variable names in the rest of the script. It doesn't matter if I'm using the getline/split or ddt_val functions. This also is very useful when I may need to change the value of a variable, because they are all initialized at the top of the script (whenever possible).
Example with ddt_* functions in a script:
FIRSTNAME=ddt_val(table, "FirstName");
LASTNAME=ddt_val(table, "LastName");
...
edit_set("FirstName", FIRSTNAME);
edit_set("LastName", LASTNAME);
And most of the time I have a driving test which calls another test and passes an array of data to be used to update a form.
Example with ddt_* functions before calling another
script:

# Driver script will have
...
MyPersonArray["FIRSTNAME"] = ddt_val(table, "FirstName");
MyPersonArray["LASTNAME"] = ddt_val(table, "LastName");

call "AddPerson" (MyPersonArray);
...

# Called script will have
edit_set("FirstName", Person["FIRSTNAME"]);
edit_set("LastName", Person["LASTNAME"]);
So as you can see, there are many ways to do the same thing. What people must keep in mind is the skill level of the people that may inherit the scripts after they are created. And a consistent method should be used throughout the project.
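For completeness, the getline/split alternative discussed above can be sketched roughly like this; the file path, delimiter, field order, and edit-field names are hypothetical, and the exact file_* signatures should be confirmed against the TSL reference:

```tsl
# open the comma-delimited text file produced by Excel's Save As
# (path and field order are placeholders for this sketch)
file_open ("c:\\testdata\\persons.txt", FO_MODE_READ);

while (file_getline ("c:\\testdata\\persons.txt", line) == E_OK)
{
    # split each record into fields on the comma delimiter
    split (line, fields, ",");
    FIRSTNAME = fields[1];
    LASTNAME  = fields[2];

    edit_set ("FirstName", FIRSTNAME);
    edit_set ("LastName", LASTNAME);
}

file_close ("c:\\testdata\\persons.txt");
```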

________________________________________
7. String vs. Number Comparison
String vs. number comparisons are not a good thing to do. Try this sample to see why:

c1=47.88 * 6;
c2="287.28";

#Prints a decimal value while suppressing non-significant zeros
#and converts the float to a string.
c3 = sprintf ("%g", c1);

pause ("c1 = [" & c1 & "]\nc2 = [" & c2 & "]\nc3 = [" & c3 & "]\n" & "c1 - c2 = [" & c1 - c2 & "]\nc1 - c3 = [" & c1 - c3 & "]\nc2 - c3 = [" & c2 - c3 & "]");

How to Create a Test Using WinRunner (1)

Users can create tests by recording, by programming, or by a combination of both. While recording, each operation performed by the user generates a statement in the Test Script Language (TSL). These statements are displayed as a test script in a test window. The user can then enhance the recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner's visual programming tool, the Function Generator, or the Function Viewer.

There are 2 modes of recording in WinRunner

1. Context Sensitive mode records the operations the user performs on the application by identifying Graphical User Interface (GUI) objects. Context Sensitive test scripts can be reused in future versions of the application because WinRunner writes a unique description of each selected object to a GUI map file. The GUI map files are maintained separately from test scripts, and the same GUI map file (or files) can be used for multiple tests.
For example, if the user clicks the Open button in an Open dialog box, WinRunner records the action and generates a script. When it runs the test, WinRunner looks for the Open dialog box and the Open button represented in the test script. If, in subsequent runs of the test, the button is in a different location in the Open dialog box, WinRunner is still able to find it.

2. Analog mode records mouse clicks, keyboard input, and the exact x- and y-coordinates traveled by the mouse. When the test is run, WinRunner retraces the mouse tracks. Use Analog mode when exact mouse coordinates are important to the test, such as when testing a drawing application.
For example, if the user clicks the Open button in an Open dialog box, WinRunner records the movements of the mouse pointer. If, in subsequent runs of the test, the button is in a different location in the Open dialog box, WinRunner will not be able to find it. When recording in Analog mode, use softkeys rather than the WinRunner menus or toolbars to insert checkpoints, in order to avoid extraneous mouse movements.

There are four recording methods: 1) Record 2) Pass Up 3) As Object 4) Ignore.

1) Record instructs WinRunner to record all operations performed on a GUI object. This is the default record method for all classes. (The only exception is the static class (static text), for which the default is Pass Up.)

2) Pass Up instructs WinRunner to record an operation performed on this class as an operation performed on the element containing the object. Usually this element is a window, and the operation is recorded as win_mouse_click.

3) As Object instructs WinRunner to record all operations performed on a GUI object as though its class were the "object" class.
4) Ignore instructs WinRunner to disregard all operations performed on the class.

Some common settings to configure in the General Options:
1. The default recording mode is Object mode.
2. The synchronization point time is 10 seconds by default.
3. When test execution is in batch mode, ensure all the interactive options are turned off so that the batch test runs uninterrupted.
4. In Text Recognition, if the application text is not recognizable, set the default font group. The text group is identified with a user-defined name and then included in the General Options.


Checkpoints allow the user to compare the current behavior of the application being tested to its behavior in an earlier version. If any mismatches are found, WinRunner captures them as actual results. The user can add four types of checkpoints to test scripts:

GUI Checkpoints
Bitmap Checkpoints
Text checkpoints
Database checkpoints


All mouse operations, including those performed on the WinRunner window or WinRunner dialog boxes, are recorded during an analog recording session. Therefore, don't insert checkpoints or synchronization points, or select other WinRunner menu or toolbar options, during an analog recording session. Note that even if the user chooses to record only on selected applications, checkpoints and all other non-recording operations can still be performed on all applications. No checkpoint should depend on X and Y coordinates: in practical terms, if a checkpoint is defined on X, Y parameters, it is of little use for testing the application. The user cannot insert objects from different windows into a single checkpoint. Don't use Bitmap or GUI checkpoints for dynamic verification; these checkpoints are purely for static verification. There are, of course, workarounds, but they are mostly not worth the effort.

GUI checkpoints verify information about GUI objects. For example, the user can check that a button is enabled or see which item is selected in a list. There are three types of GUI checkpoints:
For Single Property
For Object/Window
For Multiple Objects

GUI checkpoint for single property:-user can check a single property of a GUI object. For example, user can check whether a button is enabled or disabled or whether an item in a list is selected.
GUI checkpoint for object/window:-user can create a GUI checkpoint to check a single object in the application being tested. User can either check the object with its default properties or user can specify multiple properties to check.
GUI checkpoint for multiple objects:-user can create a GUI checkpoint to check multiple objects in the application being tested. User can either check the object with its default properties or user can specify multiple properties of multiple objects to check.
Bitmap Checkpoint checks an object, a window, or an area of a screen in the application as a bitmap. While creating a test, the user indicates what to check. WinRunner captures the specified bitmap, stores it in the expected results folder (exp) of the test, and inserts a checkpoint in the test script. While running the test, WinRunner compares the bitmap currently displayed in the application being tested with the expected bitmap stored earlier. In the event of a mismatch, WinRunner captures the current actual bitmap and generates a difference bitmap. By comparing the three bitmaps (expected, actual, and difference), the user can identify the nature of the discrepancy. There are two types of bitmap checkpoints:
Bitmap Checkpoint for Object/Window: - user can capture a bitmap of any window or object in the application by pointing to it.
Bitmap Checkpoint for Screen Area:-user defines any rectangular area of the screen and captures it as a bitmap for comparison.

Text checkpoints read and check text in GUI objects and in areas of the screen. While creating a test, the user points to an object or a window containing text. WinRunner reads the text and writes a TSL statement to the test script. Later, the user can add simple programming elements to the test script to verify the contents of the text. Use a text checkpoint on a GUI object only when a GUI checkpoint cannot be used to check the text property. There are two types of text checkpoints:
From Object/Window
From Screen Area


Database checkpoints check the contents and the number of rows and columns of a result set, which is based on a query the user creates on the database. There are three types of database checkpoints:
Default Check:-used to check the entire contents of a result set; Default Checks are useful when the expected results can be established before the test run.
Custom Check:-used to check the partial contents, the number of rows, and the number of columns of a result set.
Runtime Record Check:-user can create runtime database record checkpoints in order to compare the values displayed in the application during the test run with the corresponding values in the database.

How to Create a Test Using WinRunner (2)
GUI checkpoint
GUI checkpoint for single property
User can check a single property of a GUI object. For example, user can check whether a button is enabled or disabled, or whether an item in a list is selected. To create a GUI checkpoint for a property value, use the Check Property dialog box to add one of the following functions to the test script:

button_check_info ()
scroll_check_info ()
edit_check_info ()
static_check_info ()
list_check_info ()
win_check_info ()
obj_check_info ()

Syntax:-Function_Name (name, property, property_value)
name: The Logical name of the object to be checked
property: The property to be checked
property_value: The expected property value


The function checks that the current value of the specified property matches the expected property value.
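For instance, a minimal use of one of the functions listed above; the button's logical name here is a hypothetical placeholder:

```tsl
# verify that the hypothetical "Insert Order" button is currently enabled
button_check_info ("Insert Order", "enabled", 1);
```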
To create a GUI checkpoint for a property value:
1. Choose Insert >GUI Checkpoint >For Single Property.
2. The mouse pointer becomes a pointing hand, and the Check Property dialog box opens and shows the default function for the selected object. WinRunner automatically assigns argument values to the function.
3. User can modify the arguments for the property check. To modify assigned argument values, choose a value from the Property list. The expected value is updated in the Expected text box. To choose a different object, click the pointing hand and then click the object to choose.

If the user clicks an object that is not compatible with the selected function, a message states that the current function cannot be applied to the selected object.


GUI checkpoint for object/window
This checkpoint is used to check the state or properties of a single object or window in an application. If the user single-clicks a GUI object, the default checks for that object are included in the GUI checkpoint. If the user double-clicks a GUI object, the Check GUI dialog box opens after WinRunner captures the GUI data, and the user can choose which checks to include for that particular object. When using a GUI Checkpoint command, WinRunner inserts a checkpoint statement into the test script.

For a GUI object class, WinRunner inserts an obj_check_gui statement, which compares current GUI object data to expected data.

obj_check_gui (object, checklist, expected_results_file, time);

object - The logical name or description of the GUI object. The object may belong to any class.
checklist - The name of the checklist defining the GUI checks.
expected_results_file - The name of the file that stores the expected GUI data.
time - The interval, in seconds. This interval is added to the timeout test option during the test run.
For a window, WinRunner inserts a win_check_gui statement, which compares current GUI data to expected GUI data for a window.

win_check_gui (window, checklist, expected_results_file, time);

WinRunner names the first checklist in the test as list1.ckl and the first expected results file gui1.

During test creation, the GUI data is captured and stored. When the user runs the test, the current GUI data is compared to the data stored in the expected_results_file, according to the checklist. A file containing the actual results is also generated.
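Putting the two statements together, a small sketch; the window and object names are hypothetical, and the checklist and expected-results names follow WinRunner's list1.ckl/gui1 naming defaults:

```tsl
# compare the current state of a hypothetical Login window
# against the GUI data captured when the test was created
set_window ("Login", 10);
win_check_gui ("Login", "list1.ckl", "gui1", 16);

# check a single object inside that window
obj_check_gui ("OK", "list2.ckl", "gui2", 5);
```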

GUI checkpoint for multiple objects
The checkpoint statements inserted by WinRunner for a GUI checkpoint for multiple objects and for a GUI checkpoint for object/window are the same.

To create a GUI checkpoint for two or more objects select GUI Checkpoint For Multiple Objects button on the User toolbar. The Create GUI Checkpoint dialog box opens.

To add an object, click the Add button once. If the user clicks a window title bar or menu bar, a window pops up asking "You are currently pointing at a window. What do you wish to check inside the window?" objects or menus. User can continue to choose objects by clicking the Add button.

Click the right mouse button to stop the selection process and to restore the mouse pointer to its original shape. The Create GUI Checkpoint dialog box reopens. The Objects pane contains the name of the window and objects included in the GUI checkpoint. To specify which objects to check, click an object name in the Objects pane. The Properties pane lists all the properties of the object. The default properties are selected.

The checklist file is stored under the test's exp folder. A GUI checklist includes only the objects and the properties to be checked; it does not include the expected results for the values of those properties. WinRunner has an edit checklist file option under the Insert menu: to modify a GUI checklist file, select Edit GUI Checklist. This brings up a dialog box that gives the option to select the checklist file to modify. There is also an option to select the scope of the checklist file, whether it is test-specific or shared.

Bitmap Checkpoint
Bitmap Checkpoint for Object/Window
To create a Bitmap Checkpoint for Object/Window
Choose Insert > Bitmap Checkpoint > For Object/Window.

The WinRunner window is minimized, and the mouse pointer becomes a pointing hand.
Point to the object or window and click it.

WinRunner captures the bitmap and generates a TSL statement in the script.

The TSL statement generated for a window bitmap has the following syntax:

win_check_bitmap (window, bitmap, time);

The TSL statement generated for an object bitmap has the following syntax:
obj_check_bitmap (object, bitmap, time);

window or object - The logical name or description of the window or object.
bitmap - A string expression that identifies the captured bitmap.
time - The interval marking the maximum delay between the previous input event and the capture of the current bitmap, in seconds. This interval is added to the timeout test option before the next statement is executed.

The win_check_bitmap function captures and compares bitmaps of a window or window area. During test creation, the specified window or area is captured and stored. During a test run, the current bitmap is compared to the one stored in the database. If they are different, the actual bitmap is captured. This function is generated while recording a test. If the statement is added or changed manually, the test should first be run in Update mode to capture the expected bitmap.
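For example, a recorded bitmap checkpoint on a hypothetical window with the logical name "Login" might look like this (the window name and bitmap identifier below are illustrative, not from a real script):

# wait up to 5 seconds, then compare the current "Login" window bitmap
# against the stored image "Img1"
win_check_bitmap ("Login", "Img1", 5);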

Bitmap Checkpoint for Screen Area

To create a Bitmap Checkpoint for Screen Area
Choose Insert > Bitmap Checkpoint > For Screen Area.

The WinRunner window is minimized, and the mouse pointer becomes a crosshairs pointer. Mark the area to be captured: press the left mouse button and drag the mouse pointer until a rectangle encloses the area; then release the mouse button.

Press the right mouse button to complete the operation.
WinRunner captures the area and generates a win_check_bitmap statement in the script.

The win_check_bitmap statement for an area of the screen has the following syntax:

win_check_bitmap (window, bitmap, time, x, y, width, height);

x, y For an area bitmap: the coordinates of the upper-left corner, relative to the window in which the selected area is located.
width, height For an area bitmap: the size of the selected area, in pixels.

When an area of the window is captured, the additional parameters x, y, width, and height define the area's location and dimensions.

The analog version of win_check_bitmap is check_window. The syntax is as follows:

check_window (time, bitmap, window, width, height, x, y [, relx1, rely1, relx2, rely2]);

time - The interval between the previous input event and the bitmap capture, in seconds. This interval is added to the timeout_msec testing option.
bitmap - A string identifying the captured bitmap. The string length is limited to 6 characters.
window - A string indicating the name in the window banner.
width, height - The size of the window, in pixels.

x, y - The position of the upper left corner of the window (relative to the screen). In the case of an MDI child window, the position is relative to the parent window.
relx1, rely1 For an area bitmap: the coordinates of the upper left corner of the rectangle, relative to the upper left corner of the client window (the x and y parameters).
relx2, rely2 For an area bitmap: the coordinates of the lower right corner of the rectangle, relative to the lower right corner of the client window (the x and y parameters).

The check_window function captures a bitmap of a window. During recording, the specified bitmap is captured and stored. During a test run, the current bitmap is compared to the bitmap stored in the database, and if it is different, the actual bitmap is captured.

Text checkpoints
Text checkpoints read text in GUI objects and in bitmaps and enable the user to verify the contents. When creating a text checkpoint for an object or a window containing text, WinRunner reads the text and writes a TSL statement to the test script. Using simple programming, the user can then work with the text content.

User can use a text checkpoint to:
1. Read text from a GUI object or window in the application, using obj_get_text or win_get_text. The maximum number of characters that can be captured in one obj_get_text statement is 2048.

obj_get_text (object, out_text [, x1, y1, x2, y2]);

object The logical name or description of the GUI object. The object may belong to any class.
out_text The name of the output variable that stores the captured text.
x1,y1,x2,y2 An optional parameter that defines the location from which text will be read, relative to the specified object. The pairs of coordinates can designate any two diagonally opposite corners of a rectangle.
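A minimal sketch of reading text, assuming a hypothetical object with the logical name "Status":

# store the text of the "Status" object in the variable message
obj_get_text ("Status", message);
# write the captured text to the test results
report_msg ("Status field contains: " & message);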

2. Search for text in an object or window, using obj_find_text or win_find_text, which return the location of a string within an object or window.
obj_find_text (object, string, result_array [, search_area [, string_def]]);
object The logical name or description of the object. The object may belong to any class.

string A valid string expression or the name of a string variable, which can include a regular expression. The regular expression should not include an exclamation mark (!), however, which is treated as a literal character.
result_array The name of the four-element array that stores the location of the string. The elements are numbered 1 to 4. Elements 1 and 2 store the x- and y- coordinates of the upper left corner of the enclosing rectangle; elements 3 and 4 store the coordinates for the lower right corner.

search_area Indicates the area of the object to search, defined by any two diagonal corners of a rectangle, expressed as pairs of x,y coordinates.

string_def Defines the type of search to perform. If no value is specified (0 or FALSE, the default), the search is for a single, complete word only. When 1, or TRUE, is specified, the search is not restricted to a single, complete word. Note that if the string includes a regular expression, it must not contain blank spaces, and only the default value of string_def (FALSE) applies.
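A sketch of searching for a string, assuming a hypothetical object named "Results" and comparing the return value against the TSL constant E_OK:

# look for the whole word "Error" anywhere in the "Results" object
if (obj_find_text ("Results", "Error", loc) == E_OK)
    report_msg ("Found at x=" & loc[1] & ", y=" & loc[2]);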


3. Compare two strings using compare_text (str1, str2 [, chars1, chars2]);
str1, str2 The two strings to be compared.
chars1 One or more characters in the first string that should be considered equivalent to the character(s) specified in chars2.
chars2 One or more characters in the second string that should be considered equivalent to the character(s) specified in chars1.

The compare_text function compares two strings, ignoring any differences specified as equivalent. The two optional parameters indicate characters that should be considered equivalent during the comparison. For instance, if the user specifies "m" and "n", the words "any" and "amy" are considered a match. The two optional parameters must be of the same length. Note that blank spaces are ignored.
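The example from the paragraph above, written as TSL:

# "n" in the first string is treated as equivalent to "m" in the second,
# so the comparison succeeds
if (compare_text ("any", "amy", "n", "m"))
    report_msg ("Strings match");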

WinRunner can read the visible text from the screen in most applications. If the text recognition mechanism is set to driver-based recognition, this process is automatic. However, if the text recognition mechanism is set to image-based recognition, WinRunner must first learn the fonts used by the application.

When using the WinRunner text-recognition mechanism for Windows-based applications, keep in mind that it may occasionally retrieve unwanted text information (such as hidden text and shadowed text, which appears as multiple copies of the same string). The text recognition may behave differently in different run sessions depending on the operating system version, service packs, other installed toolkits, the APIs used, and so on. Therefore, when possible, it is highly recommended to retrieve or check text from an application window by inserting a standard GUI checkpoint and selecting to check the object’s value (or similar) property.

When reading text with a learned font, WinRunner reads a single line of text only. If the captured text exceeds one line, only the leftmost line is read. If two or more lines have the same left margin, then the bottom line is read.

Database Checkpoint

Default Check on a Database
To create default check on a database using ODBC or Microsoft Query:
Choose Insert >Database Checkpoint >Default Check
If Microsoft Query is installed and user is creating a new query, an instruction screen opens for creating a query. If Microsoft Query is not installed, the Database Checkpoint wizard opens to a screen where the user can define the ODBC query manually.
Define a query, copy a query, or specify an SQL statement.

WinRunner takes several seconds to capture the database query and restore the WinRunner window. WinRunner captures the data specified by the query and stores it in the test’s exp folder. WinRunner creates the msqr*.sql query file to store the query, and stores the database checklist in the test’s chklist folder.

A database checkpoint is inserted in the test script as a db_check statement.
Syntax:-
db_check (checklist, expected_results_file [, max_rows [, parameter_array]]);

checklist The name of the checklist specifying the checks to perform.
expected_results_file The name of the file storing the expected database data.
max_rows The maximum number of rows retrieved from the database. If no maximum is specified, the number of rows is not limited by default.
parameter_array The array of parameters for the SQL statement.

The db_check function captures and compares information about a database. During a test run, WinRunner checks the query of the database with the checks specified in the checklist. WinRunner then checks the information obtained during the test run against the expected results contained in the expected_results_file.
Note: When using the Insert > Database Checkpoint command to create a database checkpoint, only the first two (obligatory) parameters are included in the db_check statement (unless the user parameterizes the SQL statement from within Microsoft Query). If the user changes these parameters in a db_check statement recorded in a test script, the user must run the test in Update mode before running it in Verify mode. SQL queries used with db_check are limited to 4 KB in length.
Custom Check on a Database

When the user wants to create a custom check on a database, the user creates a standard database checkpoint in which the user can specify which properties to check on a result set. User can create a custom check on a database using ODBC, Microsoft Query, or Data Junction. User can create a custom check on a database in order to:

Check the contents of part or the entire result set
Edit the expected results of the contents of the result set
Count the rows in the result set
Count the columns in the result set

To create a custom check on a database:
Choose Insert >Database Checkpoint >Custom Check

The Database Checkpoint wizard opens. Use ODBC or Microsoft Query to define a query, copy a query, or specify an SQL statement. WinRunner takes several seconds to capture the database query and restore the WinRunner window.
If the user wants to edit the expected value of a property, first select it; then either click the Edit Expected Value button, or double-click the value in the Expected Value column. WinRunner captures the current property values and stores them in the test’s exp folder. WinRunner stores the database query in the test’s chklist folder. A database checkpoint is inserted in the test script as a db_check statement. If the user is using Microsoft Query and wants to be able to parameterize the SQL statement in the db_check statement, then in the last wizard screen in Microsoft Query, click View data or edit query in Microsoft Query.

The default check for a multiple-column query on a database is a case sensitive check on the entire result set by column name and row index. The default check for a single-column query on a database is a case sensitive check on the entire result set by row position. If the result set contains multiple columns with the same name, WinRunner disregards the duplicate columns and does not perform checks on them. Therefore, user should create a custom check on the database and select the column index option.

Modifying a Standard Database Checkpoint
User can make the following changes to an existing standard database checkpoint:
Make a checklist available to other users by saving it in a shared folder
User can edit an existing database checklist.
User can modify a query in an existing checklist


To save a database checklist in a shared folder:
Choose Insert >Edit Database Checklist.

The Open Checklist dialog box opens.
Select a database checklist and click OK .
Under Scope, click Shared. Type in a name for the shared checklist.

*.sql files are not saved in shared database checklist folders. Database checklists have the .cdl extension, while GUI checklists have the .ckl extension. The Objects pane contains “Database check” and the name of the *.sql query file or *.djs conversion file that will be included in the database checkpoint. The Properties pane lists the different types of checks that can be performed on databases. A check mark indicates that the item is selected and is included in the checkpoint. In the Properties pane, user can edit the database checklist to include or exclude the following types of checks:

ColumnsCount: Counts the number of columns in the result set.
Content : Checks the content of the result set
RowsCount: Counts the number of rows in the result set.

To modify a query in an existing checklist, highlight the name of the query file or the conversion file, and click Modify.
The Modify ODBC Query dialog box opens, and the user can make modifications to the connection string and/or the SQL statement.
After making the modifications user must run all tests that use this checklist in Update mode before running them in Verify mode.

Runtime record checkpoints
Runtime record checkpoints are useful when the information in the database changes from one run to the other. Runtime record checkpoints enable user to verify that the information displayed in the application was correctly inserted into the database or, conversely, that information from the database is successfully retrieved and displayed on the screen. If the comparison does not meet the success criteria the user specifies for the checkpoint, the checkpoint fails.

To add a runtime database record checkpoints
Select Insert >Database Checkpoint >Runtime Record Check .

The Define Query screen pops up, which enables user to select a database and define a query for the checkpoint. User can create a new query using Microsoft Query, or manually define an SQL statement.

The next screen is the Match Database Field screen, which enables user to identify the application control or text in the application that matches the displayed database field.

The next screen is the Matching Record Criteria screen, which enables user to specify the number of matching database records required for a successful checkpoint.

A db_record_check statement is inserted into the script. The db_record_check () function compares information that appears in the application under test during a test run with the current values in the corresponding record(s) in the database.

Syntax of db_record_check ():-
db_record_check (ChecklistFileName, SuccessConditions, RecordNumber [, Timeout]);


ChecklistFileName A file created by WinRunner and saved in the test's checklist folder. The file contains information about the data to be captured during the test run and its corresponding field in the database. The file is created based on the information entered in the Runtime Record Checkpoint wizard.

SuccessConditions Contains one of the following values:

DVR_ONE_OR_MORE_MATCH -
The checkpoint passes if one or more matching database records are found.
DVR_ONE_MATCH -
The checkpoint passes if exactly one matching database record is found.
DVR_NO_MATCH - The checkpoint passes if no matching database records are found.
RecordNumber An out parameter that returns the number of matching records in the database.
Timeout The number of seconds before the query attempt times out.
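A sketch of a manually entered statement, assuming a checklist file named list1.cvr (the default name WinRunner gives runtime record checklists) and a 10-second timeout:

# pass only if exactly one matching database record is found;
# record_num receives the number of matching records
db_record_check ("list1.cvr", DVR_ONE_MATCH, record_num, 10);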


User cannot use an SQL statement of the type "SELECT * from ..." with the db_record_check function. Instead, user must supply the tables and field names. The reason for this is that WinRunner needs to know which database fields should be matched to which variables in the WinRunner script. The expected SQL format is:

SELECT table_name1.field_name1, table_name2.field_name2, ... FROM table_name1, table_name2, ... [WHERE ...]
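For instance, with hypothetical Orders and Customers tables, a query in the expected format might be:

SELECT Orders.Order_Number, Customers.Customer_Name FROM Orders, Customers WHERE Orders.Customer_ID = Customers.Customer_ID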

Editing a Runtime Database Record Checklist
User can make changes to a checklist created for a runtime database record checkpoint. A checklist includes the connection string to the database, the SQL statement or a query, the database fields in the data source, the controls in the application, and the mapping between them. It does not include the success conditions of a runtime database record checkpoint, so the user can’t edit the success conditions there. User can change the success condition of the checkpoint by modifying the second parameter of the db_record_check statement in the test script.

To edit an existing runtime database record checklist:

Choose Insert >Edit Runtime Record Checklist.
Select the checklist name from the Runtime Record Checkpoint wizard. By default, runtime database record checklists are named sequentially in each test, starting with list1.cvr.

The next screen is the Specify SQL statement screen where the user can modify the Connection String and SQL statement. If the user modified the SQL statement or query in Microsoft Query so that it now references an additional database field in the data source, the checklist will now include a new database field.

User must match this database field to an application control. Use the pointing hand in the next screen to identify the control or text that matches the displayed field name. New database fields are marked with a “New” icon.

If user wants several db_record_check statements, each with different success conditions, then user can manually enter a db_record_check statement that references an existing checklist and specify the desired success conditions. User does not need to rerun the Runtime Record Checkpoint wizard for each new checkpoint.
Parameterize Standard Database Checkpoints

While creating a standard database checkpoint using ODBC (Microsoft Query), user can add parameters to an SQL statement to parameterize the checkpoint. A parameterized query is a query in which at least one of the fields of the WHERE clause is parameterized, i.e., the value of the field is specified by a question mark symbol (?).

To execute a parameterized query, user must specify the values for the parameters.

To parameterize the SQL statement in the checkpoint, the db_check function has a fourth, optional argument: the parameter_array argument.

Syntax:-
db_check (checklist, expected_results_file [, max_rows [, parameter_array]]);

The parameter_array argument will contain the values to substitute for the parameters in the parameterized checkpoint. WinRunner cannot capture the expected result set while recording the test. Unlike regular database checkpoints, recording a parameterized checkpoint requires additional steps to capture the expected results set. Therefore, user must use array statements to add the values to substitute for the parameters. User must run the test in Update mode once to capture the expected results set before running the test in Verify mode.
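A sketch of such a statement, assuming a checkpoint whose WHERE clause contains two question-mark parameters (the checklist name, results file name, and substituted values below are illustrative):

# values substituted for the two '?' parameters, in order
param[1] = "John";
param[2] = "Smith";
db_check ("list1.cdl", "dbvf1", NO_LIMIT, param);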

TSL Functions for Working with ODBC (Microsoft Query)
When the user works with ODBC (Microsoft Query), user must perform the following steps in the following order:

Connect to the database.
Execute a query and create a result set based on an SQL statement.
Retrieve information from the database.
Disconnect from the database.
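The four steps above can be sketched as a single TSL sequence (the DSN, table, and field names are hypothetical; error checking is omitted for brevity):

# 1. connect
db_connect ("session1", "DSN=FlightDB;UID=admin;PWD=secret");
# 2. run the query; rec_num receives the number of records
db_execute_query ("session1", "SELECT * FROM Orders", rec_num);
# 3. read one field from the first row
val = db_get_field_value ("session1", "#0", "Customer_Name");
report_msg ("First customer: " & val);
# 4. disconnect
db_disconnect ("session1");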

Connect to the database.
Syntax:-db_connect (session_name, connection_string [, timeout]);
session_name The logical name or description of the database session.
connection_string The connection parameters to the ODBC database.
timeout The number of seconds before the login attempt times out.


The db_connect function creates the new session_name database session and uses the connection_string to establish a connection to an ODBC database. User can use the Function Generator to open an ODBC dialog box, in which user can create the connection string. If user tries to use a session name that has already been used, WinRunner will delete the old session object and create a new one using the new connection string.

Execute a query and create a result set based on an SQL statement.
Syntax:-db_execute_query ( session_name, SQL, record_number );
SQL The SQL statement to be executed
record_number An out parameter returning the number of records in the result set.


The db_execute_query function executes the query based on the SQL statement and creates a record set. User must use a db_connect statement to connect to the database before using this function.

Retrieve information from the database.
Syntax:-db_get_field_value (session_name, row_index, column);
row_index The index of the row, written as a string: "#" followed by the numeric index. (The first row is always numbered "#0".)
column The name of the field in the column.


The db_get_field_value function returns the value of a single field in the specified row_index and column in the session_name database session. In case of an error, an empty string is returned. Before using this function, user must connect to the database with a db_connect statement and execute a query with a db_execute_query statement.


Syntax:-db_get_headers (session_name, header_count, header_content);
header_count The number of column headers in the query.
header_content The column headers concatenated and delimited by tabs. If this string exceeds 1024 characters, it is truncated.

The db_get_headers function returns the header_count and the text in the column headers in the session_name database session. Before using this function, user must connect to the database with a db_connect statement and execute a query with a db_execute_query statement.

Syntax:-db_get_row (session_name, row_index, row_content);
row_index The numeric index of the row. (The first row is always numbered "0".)
row_content The row content as a concatenation of the fields values, delimited by tabs.

The db_get_row function returns the row_content of the specified row_index, concatenated and delimited by tabs, in the session_name database session. Before using this function, user must connect to the database with a db_connect statement and execute a query with a db_execute_query statement.

Syntax:-db_write_records (session_name, output_file [, headers [, record_limit]]);
output_file The name of the text file in which the record set is written.
headers An optional Boolean parameter that includes or excludes the column headers from the record set written into the text file.
record_limit The maximum number of records in the record set to be written into the text file. A value of NO_LIMIT (the default value) indicates there is no maximum limit to the number of records.

The db_write_records function writes the record set of the session_name into an output_file, delimited by tabs. Before using this function, user must connect to the database with a db_connect statement and execute a query with a db_execute_query statement.

Syntax:-db_get_last_error ( session_name, error );
error The error message.

The db_get_last_error function returns the last error message of the last ODBC or Data Junction operation in the session_name database session. If there is no error message, an empty string is returned. User must use a db_connect statement to connect to the database before using this function.

Disconnect from the database.
Syntax:-db_disconnect ( session_name );

The db_disconnect function disconnects from the session_name database session. User must use a db_connect statement to connect to the database before using this function.

Specifying the Verification Method
User can select the verification method to control how WinRunner identifies columns or rows within a result set. The verification method applies to the entire result set. Specifying the verification method is different for multiple-column and single-column result sets.

Specifying the Verification Method for a Multiple-Column Result Set

Column:
Name (default setting) - WinRunner looks for the selection according to the column names. A shift in the position of the columns within the result set does not result in a mismatch.
Index - WinRunner looks for the selection according to the index, or position, of the columns. A shift in the position of the columns within the result set results in a mismatch. Select this option if the result set contains multiple columns with the same name.

Row:
Key - WinRunner looks for the rows in the selection according to the key(s) specified in the Select key columns list box, which lists the names of all columns in the result set. A shift in the position of any of the rows does not result in a mismatch. If the key selection does not identify a unique row, only the first matching row is checked.
Index (default setting) - WinRunner looks for the selection according to the index, or position, of the rows. A shift in the position of any of the rows results in a mismatch.

Specifying the Verification Method for a Single-Column Result Set
By position WinRunner checks the selection according to the location of the items within the column.
By content WinRunner checks the selection according to the content of the items, ignoring their location in the column.


Specifying the Verification Type
WinRunner can verify the contents of a result set in several different ways. User can choose different verification types for different selections of cells.

Case Sensitive (the default)
WinRunner compares the text content of the selection. Any difference in case or text content between the expected and actual data results in a mismatch.

Case Sensitive Ignore Spaces
WinRunner checks the data in the field according to case and content, ignoring differences in spaces. WinRunner reports any differences in case or content as a mismatch.

Case Insensitive WinRunner compares the text content of the selection. Only differences in text content between the expected and actual data result in a mismatch.

Case Insensitive Ignore Spaces
WinRunner checks the content in the cell according to content, ignoring differences in case and spaces. WinRunner reports only differences in content as a mismatch.

Numeric Content WinRunner evaluates the selected data according to numeric values. WinRunner recognizes, for example, that “2” and “2.00” are the same number.

Numeric Range WinRunner compares the selected data against a numeric range. Both the minimum and maximum values are any real number that the user specifies. This comparison differs from text and numeric content verification in that the actual database data is compared against the range that user defined and not against the expected results.
Synchronization points

Synchronization points enable user to solve anticipated timing problems between the test and the application. By inserting a synchronization point in the test script, user can instruct WinRunner to suspend the test run and wait for a cue before continuing the test. It is useful for testing client-server systems, where the response time of the server varies significantly.

For Analog testing, user can also use a synchronization point to ensure that WinRunner repositions a window at a specific location. While running a test, the mouse cursor travels along exact coordinates. Repositioning the window enables the mouse pointer to make contact with the correct elements in the window.

There are three kinds of synchronization points:

Synchronization point for Property Values of Objects or Windows
Synchronization point for Bitmaps of Objects and Windows
Synchronization point for Bitmaps of Screen Areas

Depending on which Synchronization Point command the user has chosen, WinRunner captures either a property value of a GUI object or a bitmap of a GUI object or area of the screen, and stores it in the expected results folder (exp). User can also modify the property value of a GUI object that is captured before it is saved in the expected results folder. When the user runs the test, WinRunner suspends the test run and waits for the expected bitmap or property value to appear. It then compares the current actual bitmap or property value with the expected one saved earlier. When the bitmap or property value appears, the test continues.

Synchronization point for Property Values of Objects or Windows
When the user wants WinRunner to wait for an object or a window to have a specified property, user creates a property value synchronization point. A property value synchronization point is a synchronization point that captures a property value of Objects or Windows. It appears as a _wait_info statement in the test script, such as button_wait_info or list_wait_info.

For example, user can tell WinRunner to wait for a button to become enabled or for an item to be selected from a list.

To create synchronization point for Property Values of Objects or Windows
Go to Insert >Synchronization Point > For Object/Window Property.

When the user passes the mouse pointer over the application, objects and windows flash.

To select a window, user has to click the title bar or the menu bar of the desired window. To select an object, user has to click the object. A dialog box opens containing the name of the selected window or object. User can specify which property of the window or object to check, the expected value of that property, and the amount of time that WinRunner waits at the synchronization point.

Syntax:-button_wait_info (button, property, value, time);
button The logical name or description of the button.
property Any of the properties listed.
value The property value.
time Indicates the maximum interval, in seconds, before the next statement is executed.


The button_wait_info function waits for the value of a button property and then continues test execution. If the property does not return the required value, the function waits until the time expires before continuing the test run. The other function used for synchronization point for Property Values of Objects or Windows are

edit_wait_info Waits for the value of an edit property.
list_wait_info Waits for the value of a list property.
menu_wait_info Waits for the value of a menu property.
obj_wait_info Waits for the value of an object property.
scroll_wait_info Waits for the value of a scroll property.
spin_wait_info Waits for the value of a spin property.
static_wait_info Waits for the value of a static text property.
statusbar_wait_info Waits for the value of a status bar property.
tab_wait_info Waits for the value of a tab property.
win_wait_info Waits for the value of a window property.
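For example, a property value synchronization point on a hypothetical "Insert Order" button might be written as:

# wait up to 10 seconds for the button's enabled property to become 1
button_wait_info ("Insert Order", "enabled", 1, 10);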


Synchronization point for Bitmaps of Objects and Windows
When the user wants WinRunner to wait for a visual cue to be displayed, user has to create a bitmap synchronization point. In a bitmap synchronization point, WinRunner waits for the bitmap of an object or a window to appear. It appears as a win_wait_bitmap or obj_wait_bitmap statement in the test script.

To create a synchronization point for Bitmaps of Objects and Windows
Go to Insert >Synchronization Point>For Object/Window Bitmap.

To select the bitmap of an entire window, user has to click the window’s title bar or menu bar. To select the bitmap of an object, user has to click the object. During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the current bitmap with the expected one captured earlier. If the bitmaps match, then WinRunner continues the test.

Syntax:-obj_wait_bitmap (object, bitmap, time);
object The logical name or description of the object. The object may belong to any class.

bitmap A string expression that identifies the captured bitmap.

time Indicates the interval between the previous input event and the capture of the current bitmap, in seconds. This interval is added to the timeout test option before the next statement is executed.
The obj_wait_bitmap function synchronizes a test run. It ensures that the bitmap of a specified GUI object appears on the screen before the test continues.

Waiting for Bitmaps of Screen Areas

User can create a bitmap synchronization point that waits for a bitmap of a selected area in the application. User can define any rectangular area of the screen and capture it as a bitmap for a synchronization point. It appears as a win_wait_bitmap or obj_wait_bitmap statement in the test script.

Syntax: - obj_wait_bitmap (object, bitmap, time [, x, y, width, height]);
x, y For an area bitmap: the coordinates of the upper left corner, relative to the object in which the selected region is located. width, height For an area bitmap: the size of the selected region, in pixels.

To create a synchronization point for bitmaps of screen areas:
Go to Insert >Synchronization Point >For Screen Area Bitmap.

The mouse pointer becomes a crosshairs pointer; user can use the crosshairs pointer to outline a rectangle around the area. The area can be any size, it can be part of a single window, or it can intersect several windows. WinRunner defines the rectangle using the coordinates of its upper left and lower right corners. These coordinates are relative to the upper left corner of the object or window in which the area is located. If the area intersects several objects in a window, the coordinates are relative to the window. If the selected area intersects several windows, or is part of a window with no title (a popup menu, for example), the coordinates are relative to the entire screen (the root window).

During a test run, WinRunner suspends test execution until the specified bitmap is displayed. It then compares the current bitmap with the expected bitmap. If the bitmaps match, then WinRunner continues the test.

In the event of a mismatch, WinRunner displays an error message when the mismatch_break testing option is on. The user can turn the mismatch_break testing option off by executing the following setvar statement:

setvar ("mismatch_break", "off");
WinRunner disables the mismatch_break testing option. The setting remains in effect during the testing session until it is changed again, either with another setvar statement or from the corresponding Break when verification fails check box in the Run >Settings category of the General Options dialog box. Using the setvar function changes a testing option globally, and this change is reflected in the General Options dialog box. However, user can also use the setvar function to set testing options for a specific test, or even for part of a specific test.
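For example, to disable the option around a single checkpoint and then restore the previous setting, a script can read the current value with getvar first (a common sketch, not tied to any particular test):

```
# save the current setting
old_value = getvar ("mismatch_break");
# do not break on bitmap mismatches from here on
setvar ("mismatch_break", "off");
# ... statements that may produce expected mismatches ...
# restore the original setting
setvar ("mismatch_break", old_value);
```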

The main difference between wait() and a synchronization point is that wait() pauses test execution for the full specified interval, while a synchronization point waits only until the specified bitmap or object is displayed.

Syntax: - wait (seconds [, milliseconds]);
seconds The length of the pause, in seconds. The valid range of this parameter is from 0 to 32,767 seconds.
milliseconds The number of milliseconds that are added to the seconds.
Testing Date Operations

The recommended workflow while checking dates in the application is as follows:

Define the date format(s) currently used in the application.
Create baseline tests by recording tests on the application. While recording, insert checkpoints that will check the dates in the application.
Run the tests (in Debug mode) to check that they run smoothly. If a test incorrectly identifies non-date fields as date fields or reads a date field using the wrong date format, user can override the automatic date recognition on selected fields.
Run the test (in Update mode) to create expected results.
Run the tests (in Verify mode). If the user wants to check how the application performs with future dates, user can age the dates before running the test.


Analyze test results to pinpoint where date-related problems exist in the application. If the user changes date formats in the application, the user should repeat the workflow described above after redefining the date formats used in the application.

To specify date formats:
Go to Date > Set Date Formats. The Set Date Formats dialog box opens. User can select each date format used in the application. User should move the most frequently-used date format in the application to the top of the list. WinRunner considers the top date format first.

Checking Dates in GUI Objects
User can use GUI checkpoints to check dates in GUI objects (such as edit boxes or static text fields).
The default check for edit boxes and static text fields is the date.
The default check for tables performs a case-sensitive check on the entire contents of a table, and checks all the dates in the table.

Overriding Date Settings
When debugging the tests, the user may want to override the date settings. The user can override them in the following ways:

Aging of a specific date format: - User can override the aging of a specific date format so that it will be aged differently than the default aging setting.

To override the aging of a date format:
Go to Date > Set Date Formats. The Set Date Formats dialog box opens.
Click the Advanced button. The Advanced Settings dialog box opens.
In the Format list, select a date format.
Click Change. The Override Aging dialog box opens.

User can increment the date format by a specific number of years, months and days. If the user wants no aging, then use 0. User can choose a specific date for the selected date format by selecting the "Change all date to" option, or the user can keep the default aging.

Overriding the aging or date format of a specific object: - User can override the date format used for a specific object, or define that a specific object that resembles a date should not be treated as a date object.

To override settings for an object:
Go to Date > Override Object Settings. The Override Object Settings dialog box opens.
Click the pointing hand button and then click the date object.
To override date format settings or to specify that the object is not a date object, clear the Use default format conversion check box

Note: When WinRunner runs tests, it first examines the general settings defined in the Date Operations Run Mode dialog box. Then, it examines the aging overrides for specific date formats. Finally, it considers overrides defined for particular objects.

Checking Dates with TSL

User can enhance the recorded test scripts by adding the following TSL date functions:

date_calc_days_in_field (field_name1, field_name2);
field_name1 The name of the 1st date field.
field_name2 The name of the 2nd date field.

The date_calc_days_in_field function calculates the number of days between the dates appearing in two date fields. Note that the specified date fields must be located in the same window.

date_calc_days_in_string (string1, string2);
string1 The name of the 1st string.
string2 The name of the 2nd string.

The date_calc_days_in_string function calculates the number of days between two numeric date strings.

date_field_to_Julian (date_field);
date_field The name of the date field.

The date_field_to_Julian function translates the date in a date field to a Julian number. For example, if the date 121398 (December 13, 1998) appears in the specified date field, WinRunner translates the date to the Julian number 2451162.

date_string_to_Julian (string)
string The numeric date string.

The date_string_to_Julian function translates a date string to a Julian number. For example, it calculates the string 12/13/98 (December 13, 1998) to 2451162.

date_is_field (field_name, min_year, max_year);
field_name The name of the field containing the date.
min_year Determines the minimum year allowed.
max_year Determines the maximum year allowed.

The date_is_field function checks that a field contains a valid date by determining whether the date falls within a specified date range.

date_is_string (string, min_year, max_year);
string The numeric string containing the date.
min_year Determines the minimum year allowed.
max_year Determines the maximum year allowed.

The date_is_string function checks that a numeric string contains a valid date by determining whether the date falls within a specified date range.
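A short sketch combining the date functions above (the literal dates and year range are only examples):

```
# translate a numeric date string to a Julian number
jul = date_string_to_Julian ("12/13/98");

# verify that a date string falls within an allowed range of years
if (date_is_string ("12/13/98", 1990, 2010))
    tl_step ("date check", PASS, "12/13/98 is a valid date");
else
    tl_step ("date check", FAIL, "12/13/98 is out of range");
```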

date_is_leap_year (year);
year A year, for example "1998".

The date_is_leap_year function determines whether a year is a leap year. The function returns "0" if the year is not a leap year or "1" if the year is a leap year.

date_month_language (language);
language The language used for month names.

The date_month_language function enables user to select the language used for month names in the application so that WinRunner can identify dates. User can select English, French, German, Spanish, Portuguese, or Italian. If the application uses a different language, select "Other" and define the names for all 12 months.

Data-Driven Testing

The different stages of the data-driven testing process in WinRunner are:
Creating a test
Converting the test to a data-driven test
Creating a corresponding data table
Running the test
Analyzing the test results

Creating a test

In order to create a data-driven test, the user must first create a basic test by recording it, as usual, with one set of data.
Converting a test to a Data-Driven test

User can convert the test to a data-driven test by using the Data Driver Wizard or by modifying the script manually. The procedure for converting a test to a data-driven test is composed of the following main steps:

Assigning a variable name to the data table (mandatory when using the Data Driver wizard and otherwise optional)
Adding statements to the script that open and close the data table.
Adding statements and functions to the test so that it reads the data from the data table and runs in a loop, reading one iteration of data each time.
Replacing fixed values in checkpoint statements and in recorded statements with parameters.
Creating a data table containing values for the parameters. This is known as parameterizing the test.
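Put together, the loop these steps produce typically looks like the following sketch (the table name, window, field, and column names are illustrative):

```
table = "default.xls";  # the data table assigned to the test

rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open the data table.");

ddt_get_row_count (table, row_count);

for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i);
    # the recorded fixed value is replaced by the parameter value
    set_window ("Login", 5);
    edit_set ("User Name:", ddt_val (table, "user_name"));
    button_press ("OK");
}

ddt_close (table);
```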

To create a data-driven test, select lines in the test script:
Go to Tools > Data Driver Wizard.

The Data Driver Wizard opens, with the "Use a new or existing Excel table" box displaying the name of the Excel file that WinRunner creates to store the data for the data-driven test.

In the “Assign a name to the variable” box, enter a variable name with which to refer to the data table.

Check the “Add statements to create a data-driven test" check box, which automatically adds statements to run the test in a loop. If the user does not select this option, the user will receive a warning that a data-driven test must contain a loop and statements to open and close the data table. The user should not select this option if it was chosen previously while running the Data Driver wizard on the same portion of the test script.

If the user wants to import data from a database, check the "Import data from a database" check box. In order to import data from a database, either Microsoft Query or Data Junction must be installed on the machine.

Check the "Parameterize the test" check box which replaces fixed values in selected checkpoints and in recorded statements with parameters and in the data table, adds columns with variable values for the parameters.

Select the "Line by line" option if the user decide to parameterize a particular line, and if so, whether to add a new column to the data table or use an existing column when parameterize data.

Select the "Automatically" option if the user decides to replaces all data and adds new columns to the data table.

In the next screen, the "Test script line to parameterize" box displays the line of the test script to parameterize; the highlighted value can be replaced by a parameter. The "Argument to be replaced" box displays the argument (value) that the user can replace with a parameter; the arrows can be used to select a different argument to replace. The user has to choose whether and how to replace the selected data. After finishing the parameterization, the final screen of the wizard opens, where the user can view the data table created, if needed.

Assigning the Main Data Table for a Test

The main data table is the table that is selected by default when the user chooses Tools > Data Table or opens the Data Driver wizard. To assign the main data table for a test:
Go to File >Test Properties and click the General tab.

Choose the data table user want to assign from the Main data table list. All data tables that are stored in the test folder are displayed in the list.

Using Data-Driven Checkpoints and Bitmap Synchronization Points

When checking the properties of GUI objects in a data-driven test, it is better to create a single property check than a GUI checkpoint, which contains references to a checklist stored in the test’s chklist folder and expected results stored in the test’s exp folder. A single property check does not use a checklist, so it can be easily parameterized. To parameterize GUI checkpoint, bitmap checkpoint, and bitmap synchronization point statements, first create separate columns for each checkpoint or synchronization point, then enter dummy values in the columns to represent the captured expected results. When running the test in Update mode, WinRunner recaptures expected values for GUI and bitmap checkpoints automatically, prompts the user before recapturing expected values for bitmap synchronization points, and saves all the results in the test’s exp folder.

Using TSL Functions with Data-Driven Tests
Opening a Data Table
ddt_open (data_table_name [, mode]);

data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.

mode The mode for opening the data table: DDT_MODE_READ (read-only) or DDT_MODE_READWRITE (read or write). When the mode is not specified, the default mode is DDT_MODE_READ.

The ddt_open function opens the data table file with the specified data_table_name. The active row becomes row number 1. User must use a ddt_open statement to open the data table before using any other ddt_functions.

Saving a Data Table
ddt_save (data_table_name);

The ddt_save function saves the information in a data table in its existing format. ddt_save does not close the data table. Use the ddt_close to close the data table.

Closing a Data Table
ddt_close (data_table_name);

The ddt_close function closes the specified data table. ddt_close does NOT save changes to the data table. If user makes any changes to the data table, user must use the ddt_save function to save the changes before using ddt_close to close the table. The ddt_close function will not close the table if it is currently open in the table editor, regardless of whether it was opened from the WinRunner menu or using the ddt_show function. The ddt_close function checks if the table editor is displaying the table, and if so, leaves it open.

Displaying the Data Table Editor
ddt_show (data_table_name [, show_flag]);
show_flag The value indicating whether the editor should be shown (default=1) or hidden (0).

The ddt_show function allows the table editor to be shown or hidden. The show_flag value is 1 if the table editor is to be shown and is 0 if the table editor is to be hidden.

Exporting a Data Table
ddt_export (data_table_name1, data_table_name2);
data_table_name1 The source data table filename.
data_table_name2 The destination data table filename.
The ddt_export function sends the contents of data_table_name1 to data_table_name2.

Returning the Number of Rows in a Data Table
ddt_get_row_count (data_table_name, out_rows_count);
out_rows_count The output variable that stores the total number of rows in the data table.
The ddt_get_row_count function retrieves the number of rows in the specified data table.


Changing the Active Row in a Data Table to the Next Row
ddt_next_row (data_table_name);

The ddt_next_row function changes the active row in the specified data table to the next row. If the active row is the last row in a data table, then the E_OUT_OF_RANGE value is returned.

Setting the Active Row in a Data Table
ddt_set_row (data_table_name, row);
row The new active row in the data table.

The ddt_set_row function sets the active row in the specified data table. When the data table is first opened, the active row is the first row.

Setting a Value in the Current Row of the Table
ddt_set_val (data_table_name, parameter, value);
parameter The name of the column into which the value will be inserted.
value The value to be written into the table.

The ddt_set_val function sets a value in a cell of the current row of the data table. User can only use this function if the data table was opened in DDT_MODE_READWRITE (read or write mode).

Setting a Value in a Row of the Table
ddt_set_val_by_row (data_table_name, row, parameter, value);
row The row number in the table. It can be any existing row or the current row number plus 1, which will add a new row to the data table.
parameter The name of the column into which the value will be inserted.
value The value to be written into the table.

The ddt_set_val_by_row function sets a value in a specified cell in the table. User can only use this function if the data table was opened in DDT_MODE_READWRITE (read or write mode).
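For example, to write values back to a table opened in read-write mode and keep the changes (the table and column names are illustrative):

```
table = "results.xls";
ddt_open (table, DDT_MODE_READWRITE);

# write into the "status" column of the current row
ddt_set_val (table, "status", "passed");

# append a new row by addressing the current row number plus 1
ddt_get_current_row (table, cur);
ddt_set_val_by_row (table, cur + 1, "status", "new row");

ddt_save (table);   # without this, ddt_close discards the changes
ddt_close (table);
```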

Retrieving the Active Row of a Data Table
ddt_get_current_row ( data_table_name, out_row );
out_row The output variable that stores the active row in the data table.

The ddt_get_current_row function retrieves the active row in the specified data table and returns this value as out_row.

Determining Whether a Parameter in a Data Table is Valid
ddt_is_parameter (data_table_name, parameter);
parameter The parameter name to check in the data table.

The ddt_is_parameter function returns whether a parameter in the specified data table is valid.

Returning a List of Parameters in a Data Table
ddt_get_parameters (data_table_name, params_list, params_num);
params_list This out parameter returns the list of all parameters in the data table, separated by tabs.
params_num This out parameter returns the number of parameters in params_list.
The ddt_get_parameters function returns a list of all parameters in a data table.

Returning the Value of a Parameter in the Active Row in a Data Table
ddt_val (data_table_name, parameter);

The ddt_val function returns the value of a parameter in the active row in the specified data table.
Returning the Value of a Parameter in a Row in a Data Table
ddt_val_by_row (data_table_name, row_number, parameter);

The ddt_val_by_row function returns the value of a parameter in the specified row of the data table.

WinRunner automated software functionality test tool from Mercury Interactive for functional and regression testing

Q: For new users, how to use WinRunner to test software applications automatically?
A: The following steps may be of help to you when automating tests:
1. MOST IMPORTANT - write a set of manual tests to test your application - you cannot just jump in with WR and expect to produce a set of meaningful tests. Also as you will see from the steps below this set of manual tests will form your plan to tackle automation of your application.
2. Once you have a set of manual tests look at them and decide which ones you can automate using your current level of expertise. NOTE that there will be tests that are not suitable for automation, either because you can't automate them, or they are just not worth the effort.
3. Automate the tests selected in step 2 - initially you will use capture/replay using the steps in the manual test, but you will soon see that to produce meaningful and informative tests you need to add additional code to your test, e.g. use tl_step() to give test results. As this process continues you will soon see that there are operations that you repeatedly do in multiple tests - these are then candidates for user-defined functions and compiled modules.
4. Once you have completed step 3 go back to step 2 and you will find that the knowledge you have gained in step 3 will now allow you to select some more tests that you can do.
If you continue going through this loop you will gradually become more familiar with WR and TSL, in fact you will probably find that eventually you do very little capture/replay and more straight TSL coding.

Q: How to use WinRunner to check whether a record was updated, deleted, or inserted?
A: Use WinRunner's checkpoint features: Create > Database Checkpoint > Runtime Record Check.
Q: How to use WinRunner to test the login screen?
A: When you enter a wrong id or password, you will get a dialog box.
1. Record this dialog box.
2. Use win_exists to check whether the dialog box exists or not.
3. Playback: Enter a wrong id or password; if win_exists returns true, then your application is working correctly. Enter a good id or password; if win_exists returns false, then your application is working correctly.

Q: After clicking on "login" button, they opens other windows of the web application, how to check that page is opened or not
When your expecting "Window1" to come up after clicking on Login...
Capture the window in the GUI Map. No two windows in an web based
application can have the same html_name property. Hence, this would
be the property to check.

First try a simple win_exists("window1", ) in an IF condition.

If that does'nt work, try the function,

win_exists("{ class: window, MSW_class: html_frame, html_name: "window1"}",); :

WinRunner test script for checking all the links at once
location = 0;
set_window("YourWindow",5);

while(obj_exists((link = "{class: object,MSW_class: html_text_link,location: "
& location & "}"))== E_OK)
{
obj_highlight(link);
web_obj_get_info(link,"name",name);
web_link_valid(link,valid);
if(valid)
tl_step("Check web link",PASS,"Web link \"" & name & "\" is valid.");
else
tl_step("Check web link",FAIL,"Web link \"" & name & "\" is not valid.");
location++;
}

Q: How to get the resolution settings
Use get_screen_res(x,y) to get the screen resolution in WR7.5.
or
Use get_resolution (Vert_Pix_int, Horz_Pix_int, Frequency_int) in WR7.01

Q: How to use the physical description directly, WITHOUT the GUI map?
A: It's easy: just take the description straight out of the GUI map, squigglies and all, put it into a variable (or pass it as a string), and use that in place of the object name.

button_press ( "btn_OK" );
becomes
button_press ( "{class: push_button, label: OK}" );

Q: What are the three modes of running the scripts?
WinRunner provides three modes in which to run tests: Verify, Debug, and Update. You use each mode during a different phase of the testing process.
Verify
Use the Verify mode to check your application.
Debug
Use the Debug mode to help you identify bugs in a test script.
Update
Use the Update mode to update the expected results of a test or to create a new expected results folder.

Q: How do you handle unexpected events and errors?
WinRunner uses exception handling to detect an unexpected event when it occurs and act to recover the test run.
WinRunner enables you to handle the following types of exceptions:
Pop-up exceptions: Instruct WinRunner to detect and handle the appearance of a specific window.
TSL exceptions: Instruct WinRunner to detect and handle TSL functions that return a specific error code.
Object exceptions: Instruct WinRunner to detect and handle a change in a property for a specific GUI object.
Web exceptions: When the WebTest add-in is loaded, you can instruct WinRunner to handle unexpected events and errors that occur in your Web site during a test run.

Q: How do you handle pop-up exceptions?
A pop-up exception handler handles the pop-up messages that come up during the execution of the script in the AUT. To handle this type of exception, we make WinRunner learn the window and also specify a handler for the exception. It could be:
Default actions: WinRunner clicks the OK or Cancel button in the pop-up window, or presses Enter on the keyboard. To select a default handler, click the appropriate button in the dialog box.
User-defined handler: If you prefer, specify the name of your own handler. Click User Defined Function Name and type in a name in the User Defined Function Name box.

Q: How do you handle TSL exceptions?
Suppose you are running a batch test on an unstable version of your application. If your application crashes, you want WinRunner to recover test execution. A TSL exception can instruct WinRunner to recover test execution by exiting the current test, restarting the application, and continuing with the next test in the batch.
The handler function is responsible for recovering test execution. When WinRunner detects a specific error code, it calls the handler function. You implement this function to respond to the unexpected error in the way that meets your specific testing needs.
Once you have defined the exception, WinRunner activates handling and adds the exception to the list of default TSL exceptions in the Exceptions dialog box. Default TSL exceptions are defined by the XR_EXCP_TSL configuration parameter in the wrun.ini configuration file.

Q: How to write an email address validation script in TSL?
public function IsValidEMAIL(in strText)
{
auto aryEmail[], aryEmail2[], n;


n = split(strText, aryEmail, "@");
if (n != 2)
return FALSE;

# Ensure the string "@MyISP.Com" does not pass...
if (!length(aryEmail[1]))
return FALSE;

n = split(aryEmail[2], aryEmail2, ".");
if (n < 2)
return FALSE;

return TRUE;
}

Q: How to read and write a Windows registry key from TSL?
extern long RegCloseKey(long);
extern long RegQueryValueExA(long,string,long,long,inout string<1024>,inout long );
extern long RegOpenKeyExA(long,string,long ,long,inout long);
extern long RegSetValueExA(long,string,long,long,string,long);

MainKey = 2147483649; # HKEY_CURRENT_USER
SubKey = "Software\\TestConverter\\TCEditor\\Settings";
# This is where you set your subkey path
const ERROR_SUCCESS = 0;

const KEY_ALL_ACCESS = 983103;
ret = RegOpenKeyExA(MainKey, SubKey, 0, KEY_ALL_ACCESS, hKey); # open the key
if (ret==ERROR_SUCCESS)
{
cbData = 256;
tmp = space(256);
KeyType = 0;
ret = RegQueryValueExA(hKey,"Last language",0,KeyType,tmp,cbData); # replace
"Last language" with the key you want to read
}
pause (tmp);
NewSetting = "SQABASIC";
cbData = length(NewSetting) + 1;
ret = RegSetValueExA(hKey,"Last language",0,KeyType,NewSetting,cbData);
# replace "Last language" with the key you want to write

cbData = 256;
tmp = space(256);
KeyType = 0;
ret = RegQueryValueExA(hKey,"Last language",0,KeyType,tmp,cbData);
# verifies you changed the key

pause (tmp);

RegCloseKey(hKey); # close the key

Q: How to break an infinite loop
set_window("Browser Main Window",1);
text="";
start = get_time();
while(text!="Done")
{
statusbar_get_text("Status Bar",0,text);
now = get_time();
if ( (now-start) >= 60 ) # specify the number of seconds after which to break
{
break;
}
}

Q: User-defined function that would write to the Print-log as well as write to a file
function writeLog(in strMessage){
file_open("C:\FilePath\...", FO_MODE_APPEND);
file_printf("C:\FilePath\...", "%s\n", strMessage);
file_close("C:\FilePath\...");
printf(strMessage);
}
Q: How to do text matching?
You could try embedding it in an if statement. If/when it fails, use a tl_step statement to indicate passage and then do a texit to leave the test. Another idea would be to use win_get_text or web_frame_get_text to capture the text of the object and then do a comparison (using the match function) to determine its existence.
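A minimal sketch of the second approach, assuming an illustrative window name and search string:

```
# capture all the text in the window, then search it
win_get_text ("Flight Reservation", text);
if (match (text, "Insert Done"))
    tl_step ("text check", PASS, "expected text was found");
else
    tl_step ("text check", FAIL, "expected text was not found");
```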

Q: the MSW_id value sometimes changes, rendering the GUI map useless
MSW_Id's will continue to change as long as your developers are modifying your application. Having dealt with this, I determined that each MSW_Id shifted by the same amount and I was able to modify the entries in the gui map rather easily and continue testing.
Instead of using the MSW_id use the "location". If you use your GUI spy it will give you every detail it can. Then add or remove what you don't want.

Q: With a DB checkpoint, it is able to show the current values in the form but it is not showing the values saved in the table
This looks like it is happening because the data has been written to the db after your checkpoint, so you have to do a runtime record check: Create > Database Checkpoint > Runtime Record Check. You may also have to perform some customization using TSL if the data displayed in the application is in a different format than the data in the database. For example, converting radio buttons to a database-readable form involves the following:

# Flight Reservation
set_window ("Flight Reservation", 2);
# edit_set ("Date of Flight:", "06/08/02");

# retrieve the three button states
button_get_state ( "First", first);
button_get_state ( "Business", bus);
button_get_state ( "Economy", econ);

# establish a variable with the correct numeric value
# based on which radio button is set
if (first)
service="1";

if (bus)
service="2";

if (econ)
service="3";

set_window("Untitled - Notepad",3);

edit_set("Report Area",service);

db_record_check("list1.cvr", DVR_ONE_MATCH,record_num);
Increase Capacity Testing
When you begin your stress testing, you will want to increase your capacity testing to make sure you are able to handle the increased load of data such as ASP pages and graphics. When you test the ASP pages, you may want to create a page similar to the original page that will simulate the same items on the ASP page and have it send the information to a test bed with a process that completes just a small data output. By doing this, you will have your processor still stressing the system but not taking up the bandwidth by sending the HTML code along the full path. This will not stress the entire code but will give you a basis from which to work.

Dividing the requests per second by the total number of users or threads will determine the number of transactions per second. It will tell you at what point the server will start becoming less efficient at handling the load. Let's look at an example. Let's say your test with 50 users shows your server can handle 5 requests per second, with 100 users it is 10 requests per second, with 200 users it is 15 requests per second, and eventually with 300 users it is 20 requests per second. Your requests per second are continually climbing, so it seems that you are obtaining steadily improving performance. Let's look at the ratios:
05/50 = 0.1
10/100 = 0.1
15/200 = 0.075
20/300 = 0.067
From this example you can see that the performance of the server is becoming less and less efficient as the load grows. This in itself is not necessarily bad (as long as your pages are still returning within your target time frame). However, it can be a useful indicator during your optimization process and does give you some indication of how much leeway you have to handle expected peaks.

Stateful testing
When you use a Web-enabled application to set a value, does the server respond correctly later on?

Privilege testing
What happens when an everyday user tries to access a control that is authorized only for administrators?

Speed testing
Is the Web-enabled application taking too long to respond?

Boundary Test
Boundary tests are designed to check a program's response to extreme input values. Extreme output values are generated by the input values. It is important to check that a program handles input values and output results correctly at the lower and upper boundaries. Keep in mind that you can create extreme boundary results from non-extreme input values. It is essential to analyze how to generate extremes of both types. In addition, sometimes you know that there is an intermediate variable involved in processing. If so, it is useful to determine how to drive it through the extremes and special conditions such as zero or an overflow condition.

Boundary timing testing
What happens when your Web-enabled application request times out or takes a really long time to respond?

Regression testing
Did a new build break an existing function? Testing is repeated after changes in order to manage the risk associated with product enhancement.
A regression test is performed when the tester wishes to gauge the progress of the testing process by running identical tests before and after a bug has been fixed. A regression test allows the tester to compare expected test results with the actual results.
Regression testing's primary objective is to ensure that all bug-free features stay that way, and that bugs which have been fixed once do not turn up again in subsequent program versions.
After every software modification, or before the next release, all test cases are repeated to check that fixed bugs do not show up again and that new and existing functions all work correctly.
In short, regression testing is used to confirm that fixed bugs have, in fact, been fixed, that new bugs have not been introduced in the process, and that features that were proven correctly functional are intact. Depending on the size of a project, cycles of regression testing may be performed once per milestone or once per build. Some bug regression testing may also be performed during each acceptance test cycle, focusing on only the most important bugs. Regression tests can be automated.
CONDITIONS DURING WHICH REGRESSION TESTS MAY BE RUN
Issue-fixing cycle. Once the development team has fixed issues, a regression test can be run to validate the fixes. Tests are based on the step-by-step test cases that were originally reported:
• If an issue is confirmed as fixed, the issue report status should be changed to Closed.
• If an issue is confirmed as fixed but with side effects, the issue report status should be changed to Closed; however, a new issue should be filed to report the side effect.
• If an issue is only partially fixed, the issue report resolution should be changed back to Unfixed, along with comments outlining the outstanding problems.

Open-status regression cycle. Periodic regression tests may be run on all open issues in the issue-tracking database. During this cycle, each issue's status is confirmed: either the report is reproducible as is with no modification, the report is reproducible with additional comments or modifications, or the report is no longer reproducible.
Closed-fixed regression cycle. In the final phase of testing, a full regression test cycle should be run to confirm the status of all fixed-closed issues.
Feature regression cycle. Each time a new build is cut, or during the final phase of testing (depending on the organizational procedure), a full regression test cycle should be run to confirm that features proven correctly functional are still working as expected.

Database Testing
Items to check when testing a database:

What to test | Environment | Tools/technique
Search results | System test environment | Black Box and White Box techniques
Response time | System test environment | Syntax testing / functional testing
Data integrity | Development environment | White Box testing
Data validity | Development environment | White Box testing
Q:How do you find an object in a GUI map?
The GUI Map Editor provides Find and Show buttons.
To locate in the application a particular object listed in the GUI map file, select the object and click Show; the selected object blinks in the application.
To find a particular object in a GUI map file, click the Find button, which lets you point to the object in the application. When the object is selected, if it has been learned into the GUI map file, it is highlighted in the GUI Map Editor.

Q:What different actions are performed by find and show button?
Show locates an object in the application: select the object in the GUI map file and click Show, and the selected object blinks in the application.
Find locates an object in the GUI map file: click the Find button, then point to the object in the application. If the object has been learned into the GUI map file, it is highlighted in the GUI Map Editor.

Q:How do you identify which files are loaded in the GUI map?
The GUI Map Editor has a drop down GUI File displaying all the GUI Map files loaded into the memory.

Q:How do you modify the logical name or the physical description of the objects in GUI map?
You can modify the logical name or the physical description of an object in a GUI map file using the GUI Map Editor.

Q:When do you feel you need to modify the logical name?
Changing the logical name of an object is useful when the assigned logical name is not sufficiently descriptive or is too long.

Q:When it is appropriate to change physical description?
Changing the physical description is necessary when the property value of an object changes.

Q:How does WinRunner handle varying window labels?
We can handle varying window labels using regular expressions. WinRunner uses two hidden properties in order to use regular expression in an object’s physical description. These properties are regexp_label and regexp_MSW_class.
i. The regexp_label property is used for windows only. It operates behind the scenes to insert a regular expression into a window’s label description.
ii. The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.
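For example, if the application's main window title includes a changing order number (say, "Flight Reservation - Order 37"), the window's physical description in the GUI map can use a regular expression, marked in WinRunner by a leading "!", so that any order number matches. A sketch (the class and label values are hypothetical):

```tsl
# physical description in the GUI map file
{
class: window,
label: "!Flight Reservation.*"
}
```

With this description, statements such as set_window ("Flight Reservation") resolve regardless of the order number appended to the title.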

Q:What is the purpose of regexp_label property and regexp_MSW_class property?
The regexp_label property is used for windows only. It operates behind the scenes to insert a regular expression into a window’s label description.
The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.

Q:How do you suppress a regular expression?
We can suppress the regular expression of a window by replacing the regexp_label property with the label property.
Q:How do you copy and move objects between different GUI map files?
We can copy and move objects between different GUI Map files using the GUI Map Editor. The steps to be followed are:
1. Choose Tools - GUI Map Editor to open the GUI Map Editor.
2. Choose View - GUI Files.
3. Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously.
4. View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists.
5. In one file, select the objects you want to copy or move. Use the Shift and/or Control keys to select multiple objects. To select all objects in a GUI map file, choose Edit - Select All.
6. Click Copy or Move.
7. To restore the GUI Map Editor to its original size, click Collapse.

Q:How do you select multiple objects during merging the files?
Use the Shift and/or Control keys to select multiple objects. To select all objects in a GUI map file, choose Edit - Select All.

Q:How do you clear a GUI map file?
We can clear a GUI Map file using the Clear All option in the GUI Map Editor.

Q:How do you filter the objects in the GUI map?
The GUI Map Editor has a Filter option, which provides three types of filters:
1. Logical name displays only objects with the specified logical name.
2. Physical description displays only objects matching the specified physical description. Use any substring belonging to the physical description.
3. Class displays only objects of the specified class, such as all the push buttons.

Q:How do you configure GUI map?
1. When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the minimum number of properties to provide a unique identification of the object.
2. Many applications also contain custom GUI objects. A custom object is any object not belonging to one of the standard classes used by WinRunner. These objects are therefore assigned to the generic object class. When WinRunner records an operation on a custom object, it generates obj_mouse_ statements in the test script.
3. If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing. The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.

Q:What is the purpose of GUI map configuration?
GUI Map configuration is used to map a custom object to a standard object.

Q:How do you make the configuration and mappings permanent?
The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.

Q:What is the purpose of GUI spy?
Using the GUI Spy, you can view the properties of any GUI object on your desktop. You use the Spy pointer to point to an object, and the GUI Spy displays the properties and their values in the GUI Spy dialog box. You can choose to view all the properties of an object, or only the selected set of properties that WinRunner learns.
Q:What is the purpose of different record methods 1) Record 2) Pass up 3) As Object 4) Ignore.?
1) Record instructs WinRunner to record all operations performed on a GUI object. This is the default record method for all classes. (The only exception is the static class (static text), for which the default is Pass Up.)
2) Pass Up instructs WinRunner to record an operation performed on this class as an operation performed on the element containing the object. Usually this element is a window, and the operation is recorded as win_mouse_click.
3) As Object instructs WinRunner to record all operations performed on a GUI object as though its class were object class.
4) Ignore instructs WinRunner to disregard all operations performed on the class.

Q:How do you find out which is the start up file in WinRunner?
The test script name in the Startup Test box in the Environment tab in the General Options dialog box is the start up file in WinRunner.

Q:What are the virtual objects and how do you learn them?
• Applications may contain bitmaps that look and behave like GUI objects. WinRunner records operations on these bitmaps using win_mouse_click statements. By defining a bitmap as a virtual object, you can instruct WinRunner to treat it like a GUI object such as a push button, when you record and run tests.
• Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name.
To define a virtual object using the Virtual Object wizard:
1. Choose Tools > Virtual Object Wizard. The Virtual Object wizard opens. Click Next.
2. In the Class list, select a class for the new virtual object. If you select the list class, specify the number of rows displayed in the window; for a table class, select the number of visible rows and columns. Click Next.
3. Click Mark Object. Use the crosshairs pointer to select the area of the virtual object. You can use the arrow keys to make precise adjustments to the area you define with the crosshairs. Press Enter or click the right mouse button to display the virtual object’s coordinates in the wizard. If the object marked is visible on the screen, you can click the Highlight button to view it. Click Next.
4. Assign a logical name to the virtual object. This is the name that appears in the test script when you record on the virtual object. If the object contains text that WinRunner can read, the wizard suggests using this text for the logical name. Otherwise, WinRunner suggests virtual_object, virtual_push_button, virtual_list, etc.
5. You can accept the wizard’s suggestion or type in a different name. WinRunner checks that there are no other objects in the GUI map with the same name before confirming your choice. Click Next.

Q:What are the two modes of recording?
There are 2 modes of recording in WinRunner
1. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects.
2. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

Q:What is a checkpoint and what are different types of checkpoints?
Checkpoints allow you to compare the current behavior of the application being tested to its behavior in an earlier version.
You can add four types of checkpoints to your test scripts:
1. GUI checkpoints verify information about GUI objects. For example, you can check that a button is enabled or see which item is selected in a list.
2. Bitmap checkpoints take a snapshot of a window or area of your application and compare this to an image captured in an earlier version.
3. Text checkpoints read text in GUI objects and in bitmaps and enable you to verify their contents.
4. Database checkpoints check the contents and the number of rows and columns of a result set, which is based on a query you create on your database.

Q:What are data driven tests?
When you test your application, you may want to check how it performs the same operations with multiple sets of data. You can create a data-driven test with a loop that runs ten times: each time the loop runs, it is driven by a different set of data. In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table. You can perform these operations manually, or you can use the DataDriver Wizard to parameterize your test and store the data in a data table.
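The parameterized loop the DataDriver Wizard generates typically looks like the following (a sketch; the data table path and the column name "customer_name" are hypothetical):

```tsl
table = "default.xls";                       # the data table driving the test
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open data table.");
ddt_get_row_count (table, table_RowCount);   # number of data rows

for (i = 1; i <= table_RowCount; i++)
{
    ddt_set_row (table, i);                  # make row i the active row
    set_window ("Flight Reservation", 10);
    edit_set ("Name:", ddt_val (table, "customer_name"));
    button_press ("Insert Order");
}
ddt_close (table);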

Q:What are the synchronization points?
Synchronization points enable you to solve anticipated timing problems between the test and your application. For example, if you create a test that opens a database application, you can add a synchronization point that causes the test to wait until the database records are loaded on the screen.
For Analog testing, you can also use a synchronization point to ensure that WinRunner repositions a window at a specific location. When you run a test, the mouse cursor travels along exact coordinates. Repositioning the window enables the mouse pointer to make contact with the correct elements in the window.
Q:What is parameterizing?
In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table.

Q:How do you maintain the document information of the test scripts?
Before creating a test, you can document information about the test in the General and Description tabs of the Test Properties dialog box. You can enter the name of the test author, the type of functionality tested, a detailed description of the test, and a reference to the relevant functional specifications document.

Q:What do you verify with the GUI checkpoint for single property and what command it generates, explain syntax?
You can check a single property of a GUI object. For example, you can check whether a button is enabled or disabled or whether an item in a list is selected. To create a GUI checkpoint for a property value, use the Check Property dialog box to add one of the following functions to the test script:
button_check_info
scroll_check_info
edit_check_info
static_check_info
list_check_info
win_check_info
obj_check_info
Syntax: button_check_info (button, property, property_value );
edit_check_info ( edit, property, property_value );
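For example, to verify that a button is enabled and that an edit field contains a given value (the window, object names, and expected value are hypothetical):

```tsl
set_window ("Flight Reservation", 10);
button_check_info ("Insert Order", "enabled", 1);
edit_check_info ("Name:", "value", "John Smith");
```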

Q:What do you verify with the GUI checkpoint for object/window and what command it generates, explain syntax?
• You can create a GUI checkpoint to check a single object in the application being tested. You can either check the object with its default properties or you can specify which properties to check.
• Creating a GUI Checkpoint using the Default Checks
• You can create a GUI checkpoint that performs a default check on the property recommended by WinRunner. For example, if you create a GUI checkpoint that checks a push button, the default check verifies that the push button is enabled.
• To create a GUI checkpoint using default checks:
1. Choose Create - GUI Checkpoint - For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.
2. Click an object.
3. WinRunner captures the current value of the property of the GUI object being checked and stores it in the test's expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui or win_check_gui statement. Syntax: obj_check_gui ( object, checklist, expected_results_file, time ); win_check_gui ( window, checklist, expected_results_file, time );
• Creating a GUI Checkpoint by Specifying which Properties to Check
• You can specify which properties to check for an object. For example, if you create a checkpoint that checks a push button, you can choose to verify that it is in focus, instead of enabled.
• To create a GUI checkpoint by specifying which properties to check:
• Choose Create - GUI Checkpoint - For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen.
• Double-click the object or window. The Check GUI dialog box opens.
• Click an object name in the Objects pane. The Properties pane lists all the properties for the selected object.
• Select the properties you want to check.
1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.
2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis (three dots) appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.
3. To change the viewing options for the properties of an object, use the Show Properties buttons.
4. Click OK to close the Check GUI dialog box. WinRunner captures the GUI information and stores it in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui or a win_check_gui statement. Syntax: win_check_gui ( window, checklist, expected_results_file, time ); obj_check_gui ( object, checklist, expected_results_file, time );
Q:What do you verify with the GUI checkpoint for multiple objects and what command it generates, explain syntax?
To create a GUI checkpoint for two or more objects:
• Choose Create GUI Checkpoint For Multiple Objects or click the GUI Checkpoint for Multiple Objects button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR MULTIPLE OBJECTS softkey in order to avoid extraneous mouse movements. The Create GUI Checkpoint dialog box opens.
• Click the Add button. The mouse pointer becomes a pointing hand and a help window opens.
• To add an object, click it once. If you click a window title bar or menu bar, a help window prompts you to check all the objects in the window.
• The pointing hand remains active. You can continue to choose objects by repeating step 3 above for each object you want to check.
• Click the right mouse button to stop the selection process and to restore the mouse pointer to its original shape. The Create GUI Checkpoint dialog box reopens.
• The Objects pane contains the name of the window and objects included in the GUI checkpoint. To specify which objects to check, click an object name in the Objects pane. The Properties pane lists all the properties of the object. The default properties are selected.
1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it.
2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects.
3. To change the viewing options for the properties of an object, use the Show Properties buttons.
• To save the checklist and close the Create GUI Checkpoint dialog box, click OK. WinRunner captures the current property values of the selected GUI objects and stores it in the expected results folder. A win_check_gui statement is inserted in the test script.
Syntax: win_check_gui ( window, checklist, expected_results_file, time );
obj_check_gui ( object, checklist, expected_results_file, time );

Q:What information is contained in the checklist file and in which file expected results are stored?
The checklist file contains information about the objects and the properties of the object we are verifying.
The expected results are stored in the gui*.chk file in the test's exp folder.

Q:What do you verify with the bitmap check point for object/window and what command it generates, explain syntax?
• You can check an object, a window, or an area of a screen in your application as a bitmap. While creating a test, you indicate what you want to check. WinRunner captures the specified bitmap, stores it in the expected results folder (exp) of the test, and inserts a checkpoint in the test script. When you run the test, WinRunner compares the bitmap currently displayed in the application being tested with the expected bitmap stored earlier. In the event of a mismatch, WinRunner captures the current actual bitmap and generates a difference bitmap. By comparing the three bitmaps (expected, actual, and difference), you can identify the nature of the discrepancy.
• When working in Context Sensitive mode, you can capture a bitmap of a window, object, or of a specified area of a screen. WinRunner inserts a checkpoint in the test script in the form of either a win_check_bitmap or obj_check_bitmap statement.
• Note that when you record a test in Analog mode, you should press the CHECK BITMAP OF WINDOW softkey or the CHECK BITMAP OF SCREEN AREA softkey to create a bitmap checkpoint. This prevents WinRunner from recording extraneous mouse movements. If you are programming a test, you can also use the Analog function check_window to check a bitmap.
• To capture a window or object as a bitmap:
1. Choose Create - Bitmap Checkpoint - For Object/Window or click the Bitmap Checkpoint for Object/Window button on the User toolbar. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF OBJECT/WINDOW softkey. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens.
2. Point to the object or window and click it. WinRunner captures the bitmap and generates a win_check_bitmap or obj_check_bitmap statement in the script. The TSL statement generated for a window bitmap has the following syntax: win_check_bitmap ( object, bitmap, time );
3. For an object bitmap, the syntax is: obj_check_bitmap ( object, bitmap, time );
4. For example, when you click the title bar of the main window of the Flight Reservation application, the resulting statement might be: win_check_bitmap ("Flight Reservation", "Img2", 1);
5. However, if you click the Date of Flight box in the same window, the statement might be: obj_check_bitmap ("Date of Flight:", "Img1", 1);
Syntax: obj_check_bitmap ( object, bitmap, time [, x, y, width, height] );

Q:What do you verify with the bitmap checkpoint for screen area and what command it generates, explain syntax?
• You can define any rectangular area of the screen and capture it as a bitmap for comparison. The area can be any size: it can be part of a single window, or it can intersect several windows. The rectangle is identified by the coordinates of its upper left and lower right corners, relative to the upper left corner of the window in which the area is located. If the area intersects several windows or is part of a window with no title (for example, a popup window), its coordinates are relative to the entire screen (the root window).
• To capture an area of the screen as a bitmap:
1. Choose Create - Bitmap Checkpoint - For Screen Area or click the Bitmap Checkpoint for Screen Area button. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF SCREEN AREA softkey. The WinRunner window is minimized, the mouse pointer becomes a crosshairs pointer, and a help window opens.
2. Mark the area to be captured: press the left mouse button and drag the mouse pointer until a rectangle encloses the area; then release the mouse button.
3. Press the right mouse button to complete the operation. WinRunner captures the area and generates a win_check_bitmap statement in your script.
4. The win_check_bitmap statement for an area of the screen has the following syntax: win_check_bitmap ( window, bitmap, time, x, y, width, height );
Q:What do you verify with the database checkpoint default and what command it generates, explain syntax?
• By adding runtime database record checkpoints you can compare the information in your application during a test run with the corresponding record in your database. By adding standard database checkpoints to your test scripts, you can check the contents of databases in different versions of your application.
• When you create database checkpoints, you define a query on your database, and your database checkpoint checks the values contained in the result set. The result set is set of values retrieved from the results of the query.
• You can create runtime database record checkpoints in order to compare the values displayed in your application during the test run with the corresponding values in the database. If the comparison does not meet the success criteria you specify for the checkpoint, the checkpoint fails. You can define a successful runtime database record checkpoint as one where one or more matching records were found, exactly one matching record was found, or no matching records were found.
• You can create standard database checkpoints to compare the current values of the properties of the result set during the test run to the expected values captured during recording or otherwise set before the test run. If the expected results and the current results do not match, the database checkpoint fails. Standard database checkpoints are useful when the expected results can be established before the test run.
Syntax: db_check ( checklist_file, expected_results_file );
• You can add a runtime database record checkpoint to your test in order to compare information that appears in your application during a test run with the current value(s) in the corresponding record(s) in your database. You add runtime database record checkpoints by running the Runtime Record Checkpoint wizard. When you are finished, the wizard inserts the appropriate db_record_check statement into your script.
Syntax: db_record_check(ChecklistFileName,SuccessConditions,RecordNumber );
ChecklistFileName ---- A file created by WinRunner and saved in the test's checklist folder. The file contains information about the data to be captured during the test run and its corresponding field in the database. The file is created based on the information entered in the Runtime Record Verification wizard.
SuccessConditions ----- Contains one of the following values:
1. DVR_ONE_OR_MORE_MATCH - The checkpoint passes if one or more matching database records are found.
2. DVR_ONE_MATCH - The checkpoint passes if exactly one matching database record is found.
3. DVR_NO_MATCH - The checkpoint passes if no matching database records are found.
RecordNumber --- An out parameter returning the number of records in the database.

Q:How do you handle dynamically changing area of the window in the bitmap checkpoints?
The "difference between bitmaps" option in the Run tab of the General Options dialog box defines the minimum number of pixels that constitute a bitmap mismatch.

Q:What do you verify with the database check point custom and what command it generates, explain syntax?
• When you create a custom check on a database, you create a standard database checkpoint in which you can specify which properties to check on a result set.
• You can create a custom check on a database in order to:
• check the contents of part or the entire result set
• edit the expected results of the contents of the result set
• count the rows in the result set
• count the columns in the result set
• You can create a custom check on a database using ODBC, Microsoft Query or Data Junction.

Q:What do you verify with the sync point for object/window property and what command it generates, explain syntax?
• Synchronization compensates for inconsistencies in the performance of your application during a test run. By inserting a synchronization point in your test script, you can instruct WinRunner to suspend the test run and wait for a cue before continuing the test.
• You can create a synchronization point that instructs WinRunner to wait for a specified object or window to appear. For example, you can tell WinRunner to wait for a window to open before performing an operation within that window, or you may want WinRunner to wait for an object to appear in order to perform an operation on that object.
• You use the obj_exists function to create an object synchronization point, and you use the win_exists function to create a window synchronization point. These functions have the following syntax:
obj_exists ( object [, time ] ); win_exists ( window [, time ] );
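For example, to wait up to ten seconds for a window to appear before operating on it (the window and button names are hypothetical):

```tsl
# suspend the test until the main window exists, then act on it
if (win_exists ("Flight Reservation", 10) == E_OK)
{
    set_window ("Flight Reservation", 1);
    button_press ("Insert Order");
}
else
    tl_step ("sync", FAIL, "window did not appear within 10 seconds");
```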

Q:What do you verify with the sync point for object/window bitmap and what command it generates, explain syntax?
You can create a bitmap synchronization point that waits for the bitmap of an object or a window to appear in the application being tested.
During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the current bitmap with the expected one captured earlier. If the bitmaps match, then WinRunner continues the test.
Syntax:
obj_wait_bitmap ( object, image, time );
win_wait_bitmap ( window, image, time );
Q:What is the purpose of obligatory and optional properties of the objects?
For each class, WinRunner learns a set of default properties. Each default property is classified obligatory or optional.
1. An obligatory property is always learned (if it exists).
2. An optional property is used only if the obligatory properties do not provide unique identification of an object. These optional properties are stored in a list. WinRunner selects the minimum number of properties from this list that are necessary to identify the object. It begins with the first property in the list, and continues, if necessary, to add properties to the description until it obtains unique identification for the object.

Q:When the optional properties are learned?
An optional property is used only if the obligatory properties do not provide unique identification of an object.

Q:What is the purpose of location indicator and index indicator in GUI map configuration?
In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:
1. A location selector uses the spatial order of objects within the window, from the top left to the bottom right corner, to differentiate among objects with the same description.
2. An index selector uses a unique number, assigned at the time the objects are created, to identify the object in a window. Use this selector if the location of objects with the same description may change within a window.

Q:How do you handle custom objects?
A custom object is any GUI object not belonging to one of the standard classes used by WinRunner. WinRunner learns such objects under the generic object class. WinRunner records operations on custom objects using obj_mouse_ statements (for example, obj_mouse_click).
If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing.

Q:What is the name of custom class in WinRunner and what methods it applies on the custom objects?
WinRunner learns custom class objects under the generic object class, and records operations on them using obj_mouse_ statements (for example, obj_mouse_click).

Q:In a situation when obligatory and optional both the properties cannot uniquely identify an object what method WinRunner applies?
In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available:
i. A location selector uses the spatial position of objects.
ii. An index selector uses a unique number to identify the object in a window.

Q:What do you verify with the sync point for screen area and what command it generates, explain syntax?
For screen area verification we actually capture the screen area into a bitmap and verify the application screen area against the bitmap file during execution.
Syntax: obj_wait_bitmap ( object, image, time, x, y, width, height );

Q:How do you edit checklist file and when do you need to edit the checklist file?
WinRunner has an edit checklist file option under the create menu. Select the Edit GUI Checklist to modify GUI checklist file and Edit Database Checklist to edit database checklist file. This brings up a dialog box that gives you option to select the checklist file to modify. There is also an option to select the scope of the checklist file, whether it is Test specific or a shared one. Select the checklist file, click OK which opens up the window to edit the properties of the objects.
Q:How do you edit the expected value of an object?
We can modify the expected value of the object by executing the script in the Update mode. We can also manually edit the gui*.chk file which contains the expected values which come under the exp folder to change the values.

Q:How do you modify the expected results of a GUI checkpoint?
We can modify the expected results of a GUI checkpoint by running the script containing the checkpoint in Update mode.

Q:How do you handle ActiveX and Visual basic objects?
WinRunner provides add-ins for ActiveX and Visual Basic objects. When loading WinRunner, select those add-ins; they provide a set of functions to work on ActiveX and VB objects.

Q:How do you create ODBC query?
We can create an ODBC query using the database checkpoint wizard. It provides an option to create an SQL file that uses an ODBC DSN to connect to the database. The SQL file will contain the connection string and the SQL statement.
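The same connection can also be made directly from TSL with the db_* functions. A hedged sketch — the session name, the "Flight32" sample DSN and the query are assumptions; adjust them for your setup:

```tsl
# Connect through an ODBC DSN, run a query and read one field.
db_connect ("session1", "DSN=Flight32");
db_execute_query ("session1", "SELECT * FROM Orders", rec_count);
report_msg ("Rows returned: " & rec_count);
name = db_get_field_value ("session1", "#0", "Customer_Name");  # row 0, named column
db_disconnect ("session1");
```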

Q:How do you record a data driven test?
We can create a data-driven test using data from a flat file, a data table or a database.
Flat file: we store the data to be used, in the required format, in the file. We access the file using the file-manipulation commands, read data from the file and assign the data to variables.
Data table: this is an Excel file. We can store test data in these files and manipulate them using the ‘ddt_*’ functions.
Database: we store test data in a database and access it using the ‘db_*’ functions.
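A minimal data-table loop, as a hedged sketch — the table name "default.xls" and the "Name" column are assumptions:

```tsl
table = "default.xls";   # assumed table name; by default stored in the test folder
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause ("Cannot open the data table.");
ddt_get_row_count (table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i);                      # make row i the active row
    set_window ("Flight Reservation", 5);
    edit_set ("Name:", ddt_val (table, "Name")); # read the assumed "Name" column
}
ddt_close (table);
```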

Q:How do you convert a database file to a text file?
You can use Data Junction to create a conversion file which converts a database to a target text file.

Q:How do you parameterize database check points?
When you create a standard database checkpoint using ODBC (Microsoft Query), you can add parameters to an SQL statement to parameterize the checkpoint. This is useful if you want to create a database checkpoint with a query in which the SQL statement defining your query changes.

Q:How do you create parameterize SQL commands?
A parameterized query is a query in which at least one of the fields of the WHERE clause is parameterized, i.e., the value of the field is specified by a question mark symbol ( ? ). For example, the following SQL statement is based on a query on the database in the sample Flight Reservation application:
SELECT Flights.Departure, Flights.Flight_Number, Flights.Day_Of_Week FROM Flights Flights WHERE (Flights.Departure=?) AND (Flights.Day_Of_Week=?)
SELECT defines the columns to include in the query.
FROM specifies the path of the database.
WHERE (optional) specifies the conditions, or filters to use in the query. Departure is the parameter that represents the departure point of a flight.
Day_Of_Week is the parameter that represents the day of the week of a flight.
When creating a database checkpoint, you insert a db_check statement into your test script. When you parameterize the SQL statement in your checkpoint, the db_check function has a fourth, optional, argument: the parameter_array argument. A statement similar to the following is inserted into your test script:
db_check("list1.cdl", "dbvf1", NO_LIMIT, dbvf1_params);
The parameter_array argument will contain the values to substitute for the parameters in the parameterized checkpoint.
Q:What check points you will use to read and check text on the GUI and explain its syntax?
• You can use text checkpoints in your test scripts to read and check text in GUI objects and in areas of the screen. While creating a test you point to an object or a window containing text. WinRunner reads the text and writes a TSL statement to the test script. You may then add simple programming elements to your test scripts to verify the contents of the text.
• You can use a text checkpoint to:
• Read text from a GUI object or window in your application, using obj_get_text and win_get_text
• Search for text in an object or window, using win_find_text and obj_find_text
• Move the mouse pointer to text in an object or window, using obj_move_locator_text and win_move_locator_text
• Click on text in an object or window, using obj_click_on_text and win_click_on_text

Q:How to get Text from object/window ?
We use obj_get_text (logical_name, out_text) function to get the text from an object
We use win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.

Q:How to get Text from screen area ?
We use the same win_get_text (window, out_text, x1, y1, x2, y2) function, passing the coordinates of the screen area, to get the text from that area of the window.

Q:Which TSL functions you will use for Searching text on the window
find_text ( string, out_coord_array, search_area [, string_def ] );
win_find_text ( window, string, result_array [, search_area [, string_def ] ] );

Q:What are the steps of creating a data driven test?
The steps involved in data driven testing are:
Creating a test
Converting to a data-driven test and preparing a database
Running the test
Analyzing the test results.

Q: How to use data driver wizard?
You can use the DataDriver Wizard to convert your entire script or a part of your script into a data-driven test. For example, your test script may include recorded operations, checkpoints, and other statements that do not need to be repeated for multiple sets of data. You need to parameterize only the portion of your test script that you want to run in a loop with multiple sets of data.
To create a data-driven test:
• If you want to turn only part of your test script into a data-driven test, first select those lines in the test script.
• Choose Tools - DataDriver Wizard.
• If you want to turn only part of the test into a data-driven test, click Cancel. Select those lines in the test script and reopen the DataDriver Wizard. If you want to turn the entire test into a data-driven test, click Next.
• The Use a new or existing Excel table box displays the name of the Excel file that WinRunner creates, which stores the data for the data-driven test. Accept the default data table for this test, enter a different name for the data table, or use the browse button to locate the path of an existing data table. By default, the data table is stored in the test folder.
• In the Assign a name to the variable box, enter a variable name with which to refer to the data table, or accept the default name, table.
• At the beginning of a data-driven test, the Excel data table you selected is assigned as the value of the table variable. Throughout the script, only the table variable name is used. This makes it easy for you to assign a different data table to the script at a later time without making changes throughout the script.
• Choose from among the following options:
1. Add statements to create a data-driven test: Automatically adds statements to run your test in a loop: sets a variable name by which to refer to the data table; adds braces ({ and }), a for statement, and a ddt_get_row_count statement to your test script selection to run it in a loop while it reads from the data table; and adds ddt_open and ddt_close statements to your test script to open and close the data table, which are necessary in order to iterate over the rows in the table. Note that you can also add these statements to your test script manually. If you do not choose this option, you will receive a warning that your data-driven test must contain a loop and statements to open and close your data table.
2. Import data from a database: Imports data from a database. This option adds ddt_update_from_db and ddt_save statements to your test script after the ddt_open statement. Note that in order to import data from a database, either Microsoft Query or Data Junction must be installed on your machine. You can install Microsoft Query from the custom installation of Microsoft Office. Note that Data Junction is not automatically included in your WinRunner package. To purchase Data Junction, contact your Mercury Interactive representative. For detailed information on working with Data Junction, refer to the documentation in the Data Junction package.
3. Parameterize the test: Replaces fixed values in selected checkpoints and in recorded statements with parameters, using the ddt_val function, and adds columns with variable values for the parameters to the data table.
4. Line by line: Opens a wizard screen for each line of the selected test script, which enables you to decide whether to parameterize a particular line, and if so, whether to add a new column to the data table or use an existing column when parameterizing data.
5. Automatically: Replaces all data with ddt_val statements and adds new columns to the data table. The first argument of the function is the name of the column in the data table. The replaced data is inserted into the table.
• The Test script line to parameterize box displays the line of the test script to parameterize. The highlighted value can be replaced by a parameter. The Argument to be replaced box displays the argument (value) that you can replace with a parameter. You can use the arrows to select a different argument to replace.
Choose whether and how to replace the selected data:
1. Do not replace this data: Does not parameterize this data.
2. An existing column: If parameters already exist in the data table for this test, select an existing parameter from the list.
3. A new column: Creates a new column for this parameter in the data table for this test, and adds the selected data to this column of the data table. The default name for the new parameter is the logical name of the object in the selected TSL statement above. Accept this name or assign a new name.
• The final screen of the wizard opens.
1. If you want the data table to open after you close the wizard, select Show data table now.
2. To perform the tasks specified in previous screens and close the wizard, click Finish.
3. To close the wizard without making any changes to the test script, click Cancel.
Q: How do you handle object exceptions?
During testing, unexpected changes can occur to GUI objects in the application you are testing. These changes are often subtle but they can disrupt the test run and distort results.
You can use exception handling to detect a change in a property of a GUI object during the test run, recover test execution by calling a handler function, and then continue with the test run.
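A hedged sketch of an object exception — the exception name, handler and the watched "enabled" property of the sample application's Insert Order button are assumptions, and the exact argument order may differ between WinRunner versions (check the TSL reference):

```tsl
# Handler called when the watched property changes; recover and resume.
function recover_handler (win, obj, attr)
{
    report_msg ("Object exception: " & obj & "." & attr & " changed");
    set_window ("Flight Reservation", 10);
}

define_object_exception ("ins_disabled", "recover_handler",
                         "Flight Reservation", "Insert Order", "enabled");
exception_on ("ins_disabled");   # activate the exception for the rest of the run
```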

Q: What is a compile module?
A compiled module is a script containing a library of user-defined functions that you want to call frequently from other tests. When you load a compiled module, its functions are automatically compiled and remain in memory. You can call them directly from within any test.
Compiled modules can improve the organization and performance of your tests. Since you debug compiled modules before using them, your tests will require less error-checking. In addition, calling a function that is already compiled is significantly faster than interpreting a function in a test script.
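A hedged sketch of such a module — the path C:\qa_lib\string_lib and the trim function are assumptions:

```tsl
# Saved e.g. as C:\qa_lib\string_lib, with Test Properties > Test Type
# set to Compiled Module:
function trim (str)
{
    # strip leading and trailing spaces with the built-in substr/length
    while (substr (str, 1, 1) == " ")
        str = substr (str, 2);
    while (substr (str, length (str), 1) == " ")
        str = substr (str, 1, length (str) - 1);
    return str;
}

# In any test, load the module once, then call its functions directly:
load ("C:\\qa_lib\\string_lib", 0, 1);   # user module, closes after loading
report_msg (trim ("  hello  "));
```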

Q: What is the difference between script and compile module?
A test script is an executable entity in WinRunner, while a compiled module is used to store reusable functions; compiled modules are not executable on their own.
WinRunner performs a pre-compilation automatically when it saves a module assigned the property value Compiled Module.
By default, modules containing TSL code have a property value of "main". Main modules are called for execution from within other modules. Main modules are dynamically compiled into machine code only when WinRunner recognizes a "call" statement. Example of a call for the "app_init" script:
call cso_init();
call( "C:\\MyAppFolder\\" & "app_init" );
Compiled modules are loaded into memory to be referenced from TSL code in any module. Example of a load statement:
reload ("C:\\MyAppFolder\\" & "flt_lib");
or load ("C:\\MyAppFolder\\" & "flt_lib");

Q:How do you write messages to the report?
To write message to a report we use the report_msg statement
Syntax: report_msg (message);

Q:What is a command to invoke application?
invoke_application is the function used to invoke an application.
Syntax: invoke_application(file, command_option, working_dir, SHOW);

Q:What is the purpose of tl_step command?
Used to determine whether sections of a test pass or fail.
Syntax: tl_step(step_name, status, description);

Q:Which TSL function you will use to compare two files?
We can compare two files in WinRunner using the file_compare function. Syntax: file_compare (file1, file2 [, save_file]);

Q:What is the use of function generator?
The Function Generator provides a quick, error-free way to program scripts. You can:
Add Context Sensitive functions that perform operations on a GUI object or get information from the application being tested.
Add Standard and Analog functions that perform non-Context Sensitive tasks such as synchronizing test execution or sending user-defined messages to a report.
Add Customization functions that enable you to modify WinRunner to suit your testing environment.

Q:What is the use of putting call and call_close statements in the test script?
You can use two types of call statements to invoke one test from another:
A call statement invokes a test from within another test.
A call_close statement invokes a test from within a script and closes the test when the test is completed.
Q:What is the use of treturn and texit statements in the test script?
The treturn and texit statements are used to stop execution of called tests.
i. The treturn statement stops the current test and returns control to the calling test.
ii. The texit statement stops test execution entirely, unless tests are being called from a batch test. In this case, control is returned to the main batch test.
Both functions provide a return value for the called test. If treturn or texit is not used, or if no value is specified, then the return value of the call statement is 0.
The syntax is: treturn [( expression )]; texit [( expression )];



Q:What does auto, static, public and extern variables means?
auto: An auto variable can be declared only within a function and is local to that function. It exists only for as long as the function is running. A new copy of the variable is created each time the function is called.
static: A static variable is local to the function, test, or compiled module in which it is declared. The variable retains its value until the test is terminated by an Abort command. This variable is initialized each time the definition of the function is executed.
public: A public variable can be declared only within a test or module, and is available for all functions, tests, and compiled modules.
extern: An extern declaration indicates a reference to a public variable declared outside of the current test or module.
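The four classes in a short sketch (the variable and function names are illustrative):

```tsl
public app_name = "Flight Reservation";   # shared by all tests and modules

function next_id ()
{
    static count = 0;   # keeps its value between calls to next_id
    auto tmp;           # fresh copy created on every call
    tmp = count;
    count++;
    return tmp;
}

# In another test or module, reference the public variable:
extern app_name;
set_window (app_name, 10);
```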

Q:How do you declare constants?
The const specifier indicates that the declared value cannot be modified. The class of a constant may be either public or static. If no class is explicitly declared, the constant is assigned the default class public. Once a constant is defined, it remains in existence until you exit WinRunner.
The syntax of this declaration is: [class] const name [= expression];
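For example (the names and values are illustrative):

```tsl
const LOAD_TIMEOUT = 10;                      # public by default
static const APP_TITLE = "Flight Reservation"; # local to this test or module
set_window (APP_TITLE, LOAD_TIMEOUT);
```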

Q:How do you declare arrays?
The following syntax is used to define the class and the initial expression of an array. Array size need not be defined in TSL.
class array_name [ ] [=init_expression]
The array class may be any of the classes used for variable declarations (auto, static, public, extern).
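A sketch, assuming the list-style initialization described in the TSL reference; TSL arrays are associative, so elements can also be added by assignment at any time:

```tsl
static colors[] = {"red", "green", "blue"};   # size comes from the initializer
public results[];                              # empty; grows on assignment
results["login"] = "pass";                     # string subscripts are allowed
results[0] = colors[1];                        # numeric subscripts too
```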

Q:How do you load and unload a compile module?
In order to access the functions in a compiled module you need to load the module. You can load it from within any test script using the load command; all tests will then be able to access the function until you quit WinRunner or unload the compiled module.
You can load a module either as a system module or as a user module. A system module is generally a closed module that is invisible to the tester. It is not displayed when it is loaded, cannot be stepped into, and is not stopped by a pause command. A system module is not unloaded when you execute an unload statement with no parameters (global unload).
load ( module_name [, 1|0 ] [, 1|0 ] );
The module_name is the name of an existing compiled module.
Two additional, optional parameters indicate the type of module. The first parameter indicates whether the function module is a system module or a user module: 1 indicates a system module; 0 indicates a user module.
(Default = 0)
The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded: 1 indicates that the module will close automatically; 0 indicates that the module will remain open.
(Default = 0)
The unload function removes a loaded module or selected functions from memory.
It has the following syntax:
unload ( [ module_name | test_name [ , "function_name" ] ] );

Q:Why you use reload function?
If you make changes in a module, you should reload it. The reload function removes a loaded module from memory and reloads it (combining the functions of unload and load).
The syntax of the reload function is:
reload ( module_name [ , 1|0 ] [ , 1|0 ] );
The module_name is the name of an existing compiled module.
Two additional optional parameters indicate the type of module. The first parameter indicates whether the module is a system module or a user module: 1 indicates a system module; 0 indicates a user module.
(Default = 0)
The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded. 1 indicates that the module will close automatically. 0 indicates that the module will remain open.
(Default = 0)

Q:Write and explain a compiled module?
Write TSL functions for the following interactive modes:
i. Creating a dialog box with any message you specify, and an edit field.
ii. Create dialog box with list of items and message.
iii. Create dialog box with edit field, check box, and execute button, and a cancel button.
iv. Creating a browse dialog box from which user selects a file.
v. Create a dialog box with two edit fields, one for login and another for password input.
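The built-in create_* dialog functions cover modes i–v. A hedged sketch — the literals and the my_handler callback are assumptions, and the exact argument order varies by WinRunner version, so check the TSL reference:

```tsl
name   = create_input_dialog ("Enter your name:");                        # i
choice = create_list_dialog ("Flights", "Pick a flight:", "AA101,BA202"); # ii
button = create_custom_dialog ("my_handler", "Options",
                               "Execute", "Cancel", "edit1", "check1");   # iii
file   = create_browse_file_dialog ("*.txt");                             # iv
status = create_password_dialog ("Login:", "Password:", login, password); # v
```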

Q:Have you used WinRunner in your project?
Yes, I have been using WinRunner for creating automated scripts for GUI, functional and regression testing of the AUT.
Q:Explain WinRunner testing process?
WinRunner testing process involves six main stages
Create GUI Map File so that WinRunner can recognize the GUI objects in the application being tested
Create test scripts by recording, programming, or a combination of both. While recording tests, insert checkpoints where you want to check the response of the application being tested.
Debug Test: run tests in Debug mode to make sure they run smoothly
Run Tests: run tests in Verify mode to test your application.
View Results: determines the success or failure of the tests.
Report Defects: If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.



Q:What is contained in the GUI map?
WinRunner stores information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested. Each of these objects in the GUI Map file will be having a logical name and a physical description. There are 2 types of GUI Map files. Global GUI Map file: a single GUI Map file for the entire application. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.

Q:How does WinRunner recognize objects on the application?
WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested.

Q:Have you created test scripts and what is contained in the test scripts?
Yes I have created test scripts. It contains the statement in Mercury Interactive’s Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner’s visual programming tool, the Function Generator.

Q:How does WinRunner evaluate test results?
Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.

Q:Have you performed debugging of the scripts?
Yes, I have performed debugging of scripts. We can debug the script by executing the script in the debug mode. We can also debug script using the Step, Step Into, Step out functionalities provided by the WinRunner.

Q:How do you run your test scripts?
We run tests in Verify mode to test your application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.

Q:How do you analyze results and report the defects?
Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.

Q:What is the use of Test Director software?
TestDirector is Mercury Interactive’s software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.

Q:Have you integrated your automated scripts from TestDirector?
When you work with WinRunner, you can choose to save your tests directly to your TestDirector database. Alternatively, while creating a test case in TestDirector we can specify whether the script is automated or manual, and if it is an automated script TestDirector will build a skeleton for the script that can later be modified into one which could be used to test the AUT.

Q:What are the different modes of recording?
There are two types of recording in WinRunner. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.
Q:What is the purpose of loading WinRunner Add-Ins?
Add-ins are used in WinRunner to load functions specific to the particular add-in into memory. While creating a script, only the functions in the selected add-in will be listed in the Function Generator, and while executing the script only the functions in the loaded add-in will be executed; otherwise WinRunner will give an error message saying it does not recognize the function.

Q:What are the reasons that WinRunner fails to identify an object on the GUI?
WinRunner fails to identify an object in a GUI for various reasons. The object is not a standard Windows object. If the browser used is not compatible with the WinRunner version, the GUI Map Editor will not be able to learn any of the objects displayed in the browser window.

Q:What is meant by the logical name of the object?
An object’s logical name is determined by its class. In most cases, the logical name is the label that appears on an object.

Q:If the object does not have a name then what will be the logical name?
If the object does not have a name then the logical name could be the attached text.

Q:What is the different between GUI map and GUI map files?
The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files. Global GUI Map file: a single GUI Map file for the entire application. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created. GUI Map file is a file which contains the windows and the objects learned by the WinRunner with its logical name and their physical description.

Q:How do you view the contents of the GUI map?
GUI Map editor displays the content of a GUI Map. We can invoke GUI Map Editor from the Tools Menu in WinRunner. The GUI Map Editor displays the various GUI Map files created and the windows and objects learned in to them with their logical name and physical description.

Q:When you create the GUI map, do you learn all the objects in the window or only the objects you require?
If we are learning a window, WinRunner automatically learns all the objects in the window; otherwise we identify only those objects in the window that need to be learned, since we will be working with only those objects while creating scripts.

Q:How to compare the value of a textbox in WinRunner?
The problem: there is a textbox on page 1; after clicking the 'Submit' button, the value of the textbox is displayed on page 2 as static text. How do we check that the value of the textbox on page 1 equals the one on page 2?
Capture the value from the textbox on page 1 and store it in a variable (say a). After clicking the Submit button, when the value is displayed on page 2 as static text, capture it using a screen area get-text checkpoint and store it in a second variable (say b). Now compare the two variables.
WinRunner with a combo box
Problem: the application has combo boxes; you need to select item 4 in the first combo box to run the test scenario. How do you get the value of the selected combo box?

Answer1:
Use the GUI Spy and compare the values in the Spy with the values in the GUI map for the physical attributes of the TComboBox_* objects. It appears that WinRunner is recording an attribute to differentiate combobox_1 from _0 that is *dynamic* rather than static. You need to find a physical property of all the comboboxes that is constant and unique for each combobox between refreshes of the app (handle is an example of a BAD one). That's the property you need to have recorded in your GUI map (in addition to the physical properties that were recorded for the first combobox).

Answer2:
Go through the following script, it will help .....

function app_data(dof)
{
report_msg ("application data entry");
set_window ("Flight Reservation", 6);
list_get_items_count ("Fly From:" , flyfromc);
list_get_items_count ("Fly To:" , flytoc);
report_msg (flyfromc);
report_msg (flytoc);
j = 0;
m = 0;
for (i = 0; i < flyfromc; i++)
{
list_select_item ("Fly From:", "#" i); # item number i
obj_type ("Fly From:", "<kTab>");
list_select_item ("Fly To:", "#" j); # item number j
obj_mouse_click ("FLIGHT", 42, 20, LEFT);
set_window ("Flights Table", 1);
list_get_items_count ("Flight", flightc);
list_activate_item ("Flight", "#" m); # item number m
set_window ("Flight Reservation", 5);
edit_set ("Name:", "ajay");
button_press ("Insert Order");
m++;
}
}

Q:How do you load the GUI map file at startup?

Answer1:
# load gui file
GUI_unload_all;
if(GUI_load("C:\\Program Files\\Mercury Interactive\\WinRunner\\EMR\\EMR.gui")!=0)
{
pause("unable to open C:\\Program Files\\Mercury
Interactive\\WinRunner\\EMR\\EMR.gui");
texit;
}
#end loading gui
You can't set a path for the GUI map file in WinRunner other than the Temporary GUI Map File.

Answer2:
Might suggest to your boss that the GUI is universal to all machines in spite of the fact that all machines must have their own local script in his view. Even if you are testing different versions of the same software, you can have the local machine "aware" of what software version it is running and know what GUI to load from you server. I run a lab with 30 test machines, each with their own copy of the script(s) from the server, but using one master GUI per software roll.
As far as how to set search path for the local machine, you can force that in the setup of each machine. Go to Tools=>Options=>General Options=> Folders. Once there, you can add, delete or move folders around at will. WinRunner will search in the order in which they are listed, from top down. "Dot" means search in the current directory, whatever that may be at the time.

Q: WinRunner: How to check the tab order?
Using the WinRunner sample application:
set_window ("Flight Reservation", 7);
if (E_OK == obj_type ("Date of Flight:", "<kTab>")) {
if (E_OK == obj_type ("Fly From:", "<kTab>")) {
if (E_OK == obj_type ("Fly To:", "<kTab>")) {
if (E_OK == obj_type ("Name:", "<kTab>")) {
if (E_OK == obj_type ("Date of Flight:", "")) {
report_msg ("Tab order OK");
}
}
}
}
}

Q:WinRunner: Why is a "Bitmap Checkpoint" not working within the framework?
A bitmap checkpoint depends on the monitor resolution; it is tied to the machine on which it was recorded. Unless you run it on a machine with a screen of the same resolution and settings, it will fail. Run it once in Update mode on your machine: it will be updated for your system and will pass from then on.

Q: How to plan automation testing to implement a keyword-driven methodology using WinRunner 8.2?
Keyword-driven testing refers to an application-independent automation framework. This framework requires the development of data tables and keywords, independent of the test automation tool used to execute them and the test script code that "drives" the application-under-test and the data. Keyword-driven tests look very similar to manual test cases. In a keyword-driven test, the functionality of the application-under-test is documented in a table as well as in step-by-step instructions for each test.
Suppose you want to test a simple application like Calculator and want to perform 1+3=4, then you require to design a framework as follows:

Window->Calculator ; Control->Pushbutton ; Action-> Push; Argument->1
Window->Calculator ; Control->Pushbutton ; Action-> Push; Argument->+
Window->Calculator ; Control->Pushbutton ; Action-> Push; Argument->3
Window->Calculator ; Control->Pushbutton ; Action-> Push; Argument->=
Window->Calculator ; Action-> Verify; Argument->4

The steps correspond to manual test case execution. Now write functions for all the common operations this framework requires for your test cases. Your representation may differ depending on your requirements and the tool used.
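The table above can be driven by a small interpreter that reads each row and dispatches on the Action keyword. Here is a minimal sketch in Python (the framework idea is tool-independent, so TSL is not required); the Calculator class is a hypothetical stand-in for the application under test:

```python
# Minimal keyword-driven interpreter: each test step is a row of
# (window, control, action, argument), mirroring the table above.

class Calculator:
    """Hypothetical stand-in for the application under test."""
    def __init__(self):
        self.keys = []

    def push(self, key):
        self.keys.append(key)

    def display(self):
        # Evaluate the keyed-in expression once '=' has been pressed.
        expr = "".join(k for k in self.keys if k != "=")
        return str(eval(expr))

def run_step(app, step, results):
    window, control, action, argument = step
    if action == "Push":
        app.push(argument)
    elif action == "Verify":
        results.append(app.display() == argument)

def run_test(app, steps):
    results = []
    for step in steps:
        run_step(app, step, results)
    return all(results)

steps = [
    ("Calculator", "Pushbutton", "Push",   "1"),
    ("Calculator", "Pushbutton", "Push",   "+"),
    ("Calculator", "Pushbutton", "Push",   "3"),
    ("Calculator", "Pushbutton", "Push",   "="),
    ("Calculator", None,         "Verify", "4"),
]
```

In a real framework the step rows would come from a data table (Excel sheet) rather than a literal list, and run_step would call the automation tool's functions instead of a stub object.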
Q: How does WinRunner invoke tests on a remote machine?
Steps to call WinRunner on a remote machine:
1) Send a file to a particular folder on the remote machine (this may contain your test parameters).
2) Write a shell-script listener and keep it running on the remote host (this script watches the folder mentioned in step 1).
3) Write a batch file that invokes WinRunner with the test name, and keep it on the remote machine.
4) Call the batch file through the shell script whenever the file from step 1 appears.
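The listener in step 2 amounts to a polling loop. This Python sketch stands in for the shell script (the file name and the trigger action are illustrative; in the real setup the trigger would launch the batch file that starts WinRunner):

```python
import os
import time

def watch_folder(folder, trigger, filename="params.txt", polls=10, interval=0.1):
    """Poll 'folder' until 'filename' appears, then fire 'trigger' on it.

    'trigger' is any callable; in the real setup it would launch the
    batch file that invokes WinRunner with the test name, e.g.
    subprocess.call(["invoke_wr.bat", path]).
    """
    for _ in range(polls):
        path = os.path.join(folder, filename)
        if os.path.exists(path):
            trigger(path)
            os.remove(path)    # consume the request so it fires only once
            return True
        time.sleep(interval)
    return False
```

A production listener would loop forever instead of giving up after a fixed number of polls, and would log each trigger.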

Q: WinRunner: How to connect to ORACLE Database without TNS?

The following code would help the above problem.
tblName = getvar("curr_dir")&table;
ddt_close_all_tables();
resConnection = "";
db_disconnect("session");
rc = ddt_open(tblName, DDT_MODE_READ);
if (rc != E_OK)
pause("Unable to open file");
else
{
dvr = ddt_val(tblName,"DRIVERNAME");
tnsName = ddt_val(tblName,"SERVER");
user = tolower(ddt_val(tblName,"UID"));
pass = tolower(ddt_val(tblName,"PWD"));
host = ddt_val(tblName,"HOSTNAME");
port = ddt_val(tblName,"PORT");
pro = toupper(ddt_val(tblName,"PROTOCOL"));
resConnection = db_connect("session", "driver=" & dvr & ";Database=" & tnsName & ";hostname=" & host & ";port=" & port & ";protocol=" & pro & ";uid=" & user & ";pwd=" & pass & ";");

if (resConnection != 0)
{
report_msg("There is a problem in connecting to the Database = "&tnsName&", Check it please..");
treturn;
}
else
{
report_msg("Connection to the Database is successful..");
rsEQ1 = db_execute_query("session","your database query",record_number1);
}
db_disconnect("session");
}
How to use this:
Assume you have saved the script in c:\winrunner as dbconnect.
Save the data table at the same location, i.e. c:\winrunner, as dbdetails.xls.

Call dbconnect from another script that is also saved at the same location, c:\winrunner, as follows: call dbconnect("dbdetails.xls");
Because the script above uses the getvar("curr_dir") function to get the current directory, it looks in that same location for the data table.
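The data-driven part of the script, assembling the connection string from one data-table row, can be sketched in Python. The column names follow the TSL script above; the row values here are illustrative:

```python
def build_conn_string(row):
    """Assemble a db_connect-style connection string from one data-table row.

    'row' carries the same columns the TSL script reads with ddt_val();
    UID/PWD are lowercased and PROTOCOL uppercased, as in the script.
    """
    return ("driver=" + row["DRIVERNAME"] +
            ";Database=" + row["SERVER"] +
            ";hostname=" + row["HOSTNAME"] +
            ";port=" + row["PORT"] +
            ";protocol=" + row["PROTOCOL"].upper() +
            ";uid=" + row["UID"].lower() +
            ";pwd=" + row["PWD"].lower() + ";")

# Illustrative row, standing in for one line of dbdetails.xls:
row = {"DRIVERNAME": "Oracle", "SERVER": "orcl", "HOSTNAME": "dbhost",
       "PORT": "1521", "PROTOCOL": "tcp", "UID": "Scott", "PWD": "Tiger"}
conn = build_conn_string(row)
```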

Q: WinRunner: How to Verify the data in excel spread sheet
[ A list box which is displaying report names and below that there is a multi line text box provided which is displaying the description of the report corresponding to each report. Able get all the descriptions by using below for loop. But have to verify against excel spread sheet where report descriptions are stored . please guide "how to proceed?"

list_get_info("Listbox1","count",count);
table = "E:\\test\\Datadriven\\default.xls";
rc = ddt_open(table, DDT_MODE_READ);
for (num = 1; num <= count; num++)
{
    # select report number 'num', read the displayed description into 'text',
    # then compare it with the matching data-table row
    # (the column name and output file are illustrative):
    ddt_set_row(table, num);
    report_des = ddt_val(table, "Description");
    if (text != report_des)
        file_printf(out_file, "%s\r\n", text);
}
ddt_close_all_tables();

Q: How to define a variable in the script whose name is stored in an Excel sheet, using WinRunner?
[Field A1 contains {Class: push_button, label: OK, ...}
Field B1 contains OK = button_press(OK);
where OK holds the value of field A1.
OK should act as a variable which contains the value of field A1.]

Answer1:
There is no need to declare a variable before using it in the test script; you can just start using it directly.
So, if you want to assign a value to a dynamic variable whose name is taken from the data table, you can use the "eval" function for this.
Example:
eval( ddt_val(Table,"Column1") & "=\"water\";" );
# The above statement takes the variable name from Data table and assigns "water" as value to it.


Answer2:
Write a function that looks down a column in a table, grabs the value in the adjacent cell and returns it. You would then need to call
button_press(tbl_convert("OK"));
rather than
button_press("OK");
where tbl_convert takes the value from A1 (in your example) and returns the value in B1.
One other difficulty arises if you want to use the same name for objects from different windows (e.g., an "OK" button in multiple windows). You could expand your function to handle this by adding a separate column that carries the window name.
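The tbl_convert idea, keyed by window name to disambiguate duplicate labels, can be sketched like this in Python (the table contents are illustrative):

```python
# Two-column lookup (window, label) -> stored value, mirroring the
# A1/B1 table idea with an extra window column for duplicate labels.
name_table = {
    ("Login",  "OK"): "{class: push_button, label: OK, MSW_id: 1}",
    ("Search", "OK"): "{class: push_button, label: OK, MSW_id: 2}",
}

def tbl_convert(window, label):
    """Return the stored description for a logical label in a given window."""
    return name_table[(window, label)]
```

In the spreadsheet version, the dictionary would be populated by reading the window, label and value columns row by row.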

Q: WinRunner: How to change the physical description? [Problem: the application contains different objects, but the location property is different/changing. For example, there is one HTML table that contains objects with these physical properties:
for one object
{
class: object,
MSW_class: html_text_link,
html_name: "View/Edit"
location:0
}
and for other objects.
{
class: object,
MSW_class: html_text_link,
html_name: "View/Edit"
location:1
}
When recording the scripts it gives view/edit as the logical name:
Code: web_image_click("view/edit", 11, 7);
When running the script, WinRunner cannot identify which object to click and gives an error message.
P.S. WinRunner 7.5 with Java and web add-ins, on Windows XP with the IE 6.0 browser (SP2).

Answer1:
When the name of the html_table changes dynamically, you have to swap the physical description. While recording, the name clicked inside the table becomes the name of the html_table in the GUI map. Change only the logical name in the GUI map. Then, in code, use the gui_* functions to get the logical name of this html_table and its physical description, delete the object from the GUI map through code, and finally re-add the logical name and physical description you saved, using the GUI_add function.

Answer2:
Just change the logical names to unique names.
winrunner will recognize each object separately using the physical name and the location property.


Answer3:
i = 0;
web_link_click("{ class: object, MSW_class: html_text_link, html_name: \"View/Edit\", location: " & i & "}");
i = 1;
web_link_click("{ class: object, MSW_class: html_text_link, html_name: \"View/Edit\", location: " & i & "}");

Q: Is there any function in winrunner which will clear the history of the browser?
[Actually the script is working fine when you execute for the first time. But when you execute in the second time it is directly going inside the application without asking for login credentials by taking the path from the browser history. So the script fails. It is working fine if I clear the history of the browser before each run. ]
This is not a matter of clearing the history. In any case, the application should not let you log in without entering login credentials; this looks like an application bug.
To clear the history, call DOS_system with:
del "C:\Documents and Settings\%USERNAME%\Cookies\*your_cookie_site_name*"
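The same cleanup can be done without shelling out to del. A Python sketch using glob (the folder and name pattern are illustrative):

```python
import glob
import os

def delete_cookies(folder, pattern):
    """Delete every file in 'folder' whose name contains 'pattern'.

    Equivalent to: del "<folder>\*<pattern>*" from the answer above.
    Returns the number of files removed.
    """
    removed = 0
    for path in glob.glob(os.path.join(folder, "*" + pattern + "*")):
        os.remove(path)
        removed += 1
    return removed
```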

Q: WinRunner: How to read dynamic names of html_link

Answer1:
Use the following steps:
1) Read the link using the web_tbl_get_cell_data function.
2) Use the GUI_add function to add it to the GUI map editor.
3) Use the GUI_save function to save it.
4) Call web_link_click() and pass it the variable that you got in step 1.

Answer2:
You can try this method. It reduces the complexity and there is no need to update the GUI map file. Use the web_tbl_get_cell_data() function to get the description of the link, and use that variable in the web_link_click() function.
web_tbl_get_cell_data("Tablename","#Rowno","#columnnumber",0,cell_value,cell_value_len);
web_link_click(cell_value);

Answer3:
1. Get the number of rows in your table: tbl_get_rows_count ("tableName", rows);
2. Write a for loop: for (i = 0; i <= rows; i++)
3. Get the text of the specified cell by column and row: tbl_get_cell_data ("Name", "#"&i, column, var1);
4. Compare with an if condition.
5. If true, set a flag and store the row number in variable m.
6. End the loop and write: tbl_set_selected_cell ("tableName", "#"&m, column); type ("");
Example:
tbl_get_cols_count("Name",cols);
tbl_get_rows_count("Name",rows);
for (i = 2; i <= rows; i++)
{
    for (j = 1; j <= cols; j++)
    {
        tbl_get_cell_data("Name","#"&i,"#"&j,var1);
        if (var1 == Supplier)
        {
            m = i;
        }
    }
}
tbl_set_selected_cell("Name","#"&m,"#"&j);
type ("");
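The nested-loop scan above, finding the row that contains a value, reduces to the following in Python (a list of rows stands in for the grid):

```python
def find_row(table, value):
    """Return the index of the last row containing 'value', or -1.

    Mirrors the TSL loop above, which keeps overwriting m, so the
    last match wins; rows and columns are scanned in order.
    """
    m = -1
    for i, row in enumerate(table):
        for cell in row:
            if cell == value:
                m = i
    return m

grid = [["Name", "City"],
        ["Supplier", "Pune"],
        ["Customer", "Delhi"]]
```

In the TSL version the returned index would then be fed to tbl_set_selected_cell to put the cursor on the matching row.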
Q: Is it possible to use WinRunner for testing .aspx forms or .NET forms?
You cannot test a .NET application using WinRunner 7.6 or any prior version, because WinRunner does not have an add-in for .NET.
ASP.NET forms are code for the server-side part of an application; if the front end they generate is normal HTML/JavaScript/Java/ActiveX, it should not be a problem to test the application using WinRunner.

Q: Can WinRunner put the test results in a file?
Yes, you can put the results into a text file (the file extension is .txt). In the Test Results window, select the Tools menu, then Text Report, to get a text file.
Another option is to write the results out into an HTML file.

Q: WinRunner: What is the difference between a virtual object and a custom object?

Answer1:
A virtual object is an object that WinRunner does not recognize. A virtual object recorded as, e.g., obj_mouse_click works for that instance only. To make it work at any time, you must explicitly instruct WinRunner to recognize the virtual object with the help of the Virtual Object Wizard.
Note: the virtual object must be mapped to a relevant standard class available in WinRunner. For example, a button on a toolbar in an application window can be mapped to the standard class PUSH_BUTTON. Once this is done, you will observe that the TSL statement becomes button_press("logicalName"), which is permanent in your WinRunner.
GUI map configuration:
This helps when WinRunner is not able to locate an object. For example, when two or more objects have the same logical name and physical properties, how does WinRunner locate the specific object? In that case you should instruct WinRunner to uniquely identify the specific object by setting the obligatory, optional and MSW_id properties with the help of the GUI Map Configuration.

Answer2:
We use the Virtual Object Wizard in WinRunner to map a bitmap object; while recording, WinRunner generates obj_mouse_click.
A custom object is an object that does not belong to one of the standard WinRunner classes. We use the GUI map configuration to map the custom object to a standard WinRunner object.

Answer3:
Virtual object: an image or a portion of the window is made a virtual object so that the functions available for standard objects can be used, purely for convenience in scripting.
A virtual object captures the coordinates of the object.
Custom object: a general object that does not belong to a WinRunner class; we map this general object to a WinRunner standard object.

Q: How to create an Object of an Excel File in WinRunner?
The object part, or the actual Excel table, is created via the WinRunner data table and stored in the same directory as the WinRunner script. Of course, you may create the Excel spreadsheet yourself and reference it from your script manually. This is also mentioned in the User Guide.
The Data Table Wizard mentioned earlier will link this object to the script and assist in parameterizing the data from the Excel table object.

Q: How to use values returned by VB script in winrunner?
From your VB script create a file system object to write output to a text file:
Dim fso, MyFile
Set fso = CreateObject("Scripting.FileSystemObject")
Set MyFile = fso.CreateTextFile("c:\testfile.txt", True)
MyFile.WriteLine("This is a test.")
MyFile.Close
Then use file_open and file_getline functions in WinRunner to read the file.

Q: WinRunner: What tag is required to allow me to identify a html table?
Give the table an id attribute so WinRunner can identify it uniquely.
Indeed, it is better to ask the developer to put an ID in every place where it is possible. It will avoid a lot of trouble and help the reusability of your script (consider localization).

Q: WinRunner: How to work with file type using WinRunner functions?
When recording, WinRunner does not record file-type objects. However, you can manually insert file-type statements into your test script using the web_file_browse and web_file_set functions.
Q: WinRunner: Are Java add-ins required for a web-based application?
You do not need any Java add-in to test simple JSP pages. If you are using Java applets with Swing or AWT components drawn on the applet, then you need the Java add-in; otherwise the simple web add-in will serve the purpose.

Q: How to generate unique name?
function unique_str()
{
auto t, tt, leng, i;
t = get_time();
leng = length(t);
tt = "";
for (i = 1; i <= leng; i++)
{
    tt = tt & (sprintf("%c", 97 + i + substr(t, i, 1)));
}
return tt;
}

Q: WinRunner: How to access the last window brought up?
[set_window("{class: window, active: 1}");
rc = win_get_info("{class: window, active: 1}", property, result);
Is there something, or some script, that can determine the LAST WINDOW DISPLAYED or OPENED on the desktop, in order to use that information to gather the label?]
There are a couple of solutions, depending on what you know about the window. If you know distinguishing characteristics of the window, use them and just directly describe the GUI attributes. I assume that you do not have these, or you would likely have already done so. If not, there is a brute-force method. Iterate over all of the open windows prior to the new window opening and grab their handles. After your new window opens, iterate again. The 'extra' handle points to your new window. You can use it in the GUI description directly to manipulate the new window. As I said, a bit brutish, but it works. You can use the same technique when you have multiple windows with essentially the same descriptors and need to iterate over them in the order in which they appeared.
Any object (or window) can be described by its class and its iterator. Ask yourself: if I wanted to address each of the individuals in a room and had no idea what their names were, but would like to do so in a consistent way, would it not be sufficient to say 'person who came into the room first', 'person who came into the room second', or alternately 'person who is nearest the front on the left', 'person who is second nearest the front on the left'? These are perfectly good ways of describing the individuals because we do two things: limit the elements we want to describe (people) and then give an unambiguous way of enumerating them. So, to apply this to your issue: you want to do an 'exist' on a dynamically described element (a window, in your case).
So you make a loop and ask 'window #0, do you exist?'; if the answer is yes, you ask for the handle, store it and repeat the loop. Eventually you get to window n, you ask if it exists, the answer is no, and you now have a list of the handles of all of the existing windows. You should note that there will be n windows (0 to n-1 makes a count of n). You may need to brush up on programmatically describing an object (or window); the syntax is a little lengthy but extremely useful once you get the feel for it. It really frees you from only accessing objects that are already described in the GUI map. Try this as a starting point; you'll need to add storing and sorting the handles yourself:
i = 0;
finished = FALSE;
while (finished == FALSE)
{
    if (win_exists("{class: window, location: \"" & i & "\"}") == E_OK)
    {
        win_get_info("{class: window, location: \"" & i & "\"}", "handle", handle);
        printf(" handle was " & handle);
        i++;
    }
    else
    {
        finished = TRUE;
    }
}

Q: WinRunner: How to identify dynamic objects in web applications?
Check whether the object is present inside a table. If yes, get the table name and the location of that object. Then, by using the web_obj_get_child_item function, you can get the description of the object. Once you have the description, you can do any operation on that object.

Q: WinRunner: How to delete files from a drive?
Here is a simple method using DOS, where speech_path_file is a variable.
Example:
# -- initialize vars
speech_path_file = "C:\\speech_path_verified.txt";
. . .
dos_system("del " & speech_path_file);

Q: WinRunner: Could we start automation before getting the build?
The manual test cases should be written BEFORE the application is available, and the same goes for the automation process. Automation itself is a development process, and you do start development BEFORE everything is ready: you can start to draw up the structure and maybe some basic code.
And there are benefits to starting automation early; e.g., if two windows have the same name and structure and you think it will be trouble, you may ask the developer to put in some unique identifiers (for example, a static control with a different MSW_id). If you (and your boss) really treat automation as part of development, you should start it as early as possible; in this phase it is like the analysis and design phase of the product.

Q: How to create a GUI map dynamically?
gmf = "c:\\new_file_name.gui";
GUI_save_as ( "", gmf );
rc = GUI_add(gmf, "First_Window" , "" , "");
rc = GUI_add(gmf, "First_Window" , "new_obj" , "");
rc = GUI_add(gmf, "First_Window" , "new_obj" , "{label: Push_Me}");

Q: WinRunner script for WaitBusy
# only need to load once, best in a startup script or wherever
load( getenv("M_ROOT") & "\\lib\\win32api", 1, 1 );

# returns 1 if app has busy cursor, 0 otherwise
public function IsBusy(hwnd)
{
    const HTCODE = 33554433; # 0x2000001
    const WM_SETCURSOR = 32;
    return SendMessageLong(hwnd, WM_SETCURSOR, hwnd, HTCODE);
}

# wait for app to not be busy, optional timeout
public function WaitBusy(hwnd, timeout)
{
    const HTCODE = 33554433; # 0x2000001
    const WM_SETCURSOR = 32;
    if (timeout)
        timeout *= 4;
    while (--timeout)
    {
        if (SendMessageLong(hwnd, WM_SETCURSOR, hwnd, HTCODE) == 0)
            return E_OK;
        wait(0, 250); # 1/4 second
    }
    return -1; # timeout error code
}

# wait busy, provide window instead of hwnd
public function WinWaitBusy(win, timeout)
{
    auto hwnd;
    win_get_info(win, "handle", hwnd);
    return WaitBusy(hwnd, timeout);
}

# example of how to use it...
set_window(win);
WinWaitBusy(win);

Q: WinRunner script to get Min and Max
public function fnMinMaxWinrunner (in action)
{
    auto handle;
    const SW_MAXIMIZE = 3;
    const SW_MINIMIZE = 6;
    load_dll("user32.dll");
    # extern int ShowWindow(long, int);
    win_get_info("{class: window, label: \"!WinRunner.*\"}", "handle", handle);
    switch (action)
    {
        case "SW_MINIMIZE":
        {
            # Minimizing WinRunner
            ShowWindow(handle, SW_MINIMIZE);
            wait(2);
            break;
        }
        case "SW_MAXIMIZE":
        {
            # Maximizing WinRunner
            ShowWindow(handle, SW_MAXIMIZE);
            wait(2);
            break;
        }
    }
    unload_dll("user32.dll");
}

Q: Type special chars in WinRunner
Type special chars as they are, instead of interpreting them.
# data can be read from a data file and then typed into an app
#
# escape the following chars: <> - +
# in a string, quote " and backslash \ will already be escaped
#
# generally won't be a lot of special chars, so
# use index instead of looping through each character
#
function no_special(data)
{
    auto esc_data, i, p;
    esc_data = "";
    while (1)
    {
        # find the position p of the first special character in data
        p = 32000;
        i = index(data, "-"); p = i ? (i < p ? i : p) : p;
        i = index(data, "+"); p = i ? (i < p ? i : p) : p;
        i = index(data, "<"); p = i ? (i < p ? i : p) : p;
        i = index(data, ">"); p = i ? (i < p ? i : p) : p;
        . . . # escape the character at position p, append to esc_data, continue
    }
    return esc_data;
}
win_activate("Untitled - Notepad");
win_type("Untitled - Notepad", no_special(data));
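Going back to the unique_str function above: its idea, deriving a letters-only string from the current time, can be sketched in Python. This is an illustrative variant that wraps the offset into 'a'..'z' (the TSL original simply adds position and digit to 97):

```python
import time

def unique_str(t=None):
    """Build a letters-only string from a timestamp.

    Each digit of the epoch time is shifted into 'a'..'z' by its
    position, so calls in different seconds produce different strings.
    """
    if t is None:
        t = int(time.time())
    digits = str(t)
    return "".join(chr(97 + (i + int(d)) % 26)
                   for i, d in enumerate(digits))
```

Such strings are handy for generating unique record names in data-entry tests, so repeated runs do not collide.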

Q: Clean up script/function from WinRunner
public function cleanup(in win)
{
    auto i;
    auto edit;
    auto atti;
    set_window(win);
    for (i = 0; ; i++)
    {
        edit = "{class: edit, index: " & i & "}";
        if (obj_exists(edit) != E_OK)
            break;
        obj_get_info(edit, "displayed", atti);
        if (atti == 0)
            break;
        obj_get_info(edit, "enabled", atti);
        if (atti == 0)
            continue;
        edit_get_text(edit, atti);
        if (atti != "")
            edit_set_text(edit, "");
    }
}


Q: How to convert a variable from ASCII codes to a string?
If you want to generate characters from their ascii codes, you can use the sprintf() function, example:
sprintf("%c",65) will generate "A"
If you want to add a number onto the end of a string, you can simply stick it next to the string, example:
ball=5;
print "and the winning number is: " ball;
Putting them together can get some interesting effects, example:
public arr[] = {72,101,108,108,111,32,102,114,111,109,32,77,105,115,104,97};
msg = "";
for(i in arr) msg = msg sprintf("%c",arr[i]);
print msg;
Hmmm, interesting effect from the elements not being in order. I'll try it again:
msg = "";
for (i = 0; i < 16; i++) msg = msg sprintf("%c", arr[i]);
print msg;

==============================================
Q: WinRunner: How to get a column value from a database query?
GetDBColumnValue(strSql, strColumn, ...) is a helper built on db_connect, db_execute_query, db_get_field_value, db_get_row, db_get_headers and db_disconnect; usage:
gstrConnString = "DRIVER={Oracle in OraHome92};SERVER=MANOJ;UID=BASECOLL;PWD=BASECOLL;DBA=W;APA=T;EXC=F;XSM=Default;FEN=T;QTO=T;FRC=10;FDL=10;LOB=T;RST=T;GDE=F;FRL=Lo;BAM=IfAllSuccessful;MTS=F;MDI=Me;CSR=F;FWC=F;PFC=10;TLO=O;";
strSql = "Select PRODUCT_CODE from PRODUCT_MASTER where PRODUCT_NAME = 'WINE'";
strColumn = "PRODUCT_CODE";
rc = GetDBColumnValue(strSql, strColumn, strVal, strLastError);
. . .

Q: WinRunner function to convert a time difference into a readable duration
public function GetDuration(in oldTime, in newTime, out strDuration)
{
auto SECOND, MINUTE, HOUR, DAY, YEAR;
auto plural, singular, timeDiff, remainder;
auto years, days, hours, minutes, seconds;
SECOND = 1;
MINUTE = 60 * SECOND;
HOUR = 60 * MINUTE;
DAY = 24 * HOUR;
YEAR = 365 * DAY;
plural = "s, ";
singular = ", ";
years = days = hours = minutes = 0;
timeDiff = newTime - oldTime;

if(timeDiff >= YEAR)
{
remainder = timeDiff % YEAR;
years = (timeDiff - remainder) / YEAR;
timeDiff = remainder;
}

if(timeDiff >= DAY)
{
remainder = timeDiff % DAY;
days = (timeDiff - remainder) / DAY;
timeDiff = remainder;
}

if(timeDiff >= HOUR)
{
remainder = timeDiff % HOUR;
hours = (timeDiff - remainder) / HOUR;
timeDiff = remainder;
}

if(timeDiff >= MINUTE)
{
remainder = timeDiff % MINUTE;
minutes = (timeDiff - remainder) / MINUTE;
timeDiff = remainder;
}

seconds = timeDiff;

strDuration = "";

if (years)
{
strDuration = years & " Year";
if (years > 1)
strDuration = strDuration & plural;
else
strDuration = strDuration & singular;
}

if (days)
{
strDuration = strDuration & days & " Day";
if (days > 1)
strDuration = strDuration & plural;
else
strDuration = strDuration & singular;
}

if (hours)
{
strDuration = strDuration & hours & " Hour";
if (hours > 1)
strDuration = strDuration & plural;
else
strDuration = strDuration & singular;
}

if (minutes)
{
strDuration = strDuration & minutes & " Minute";
if (minutes > 1)
strDuration = strDuration & plural;
else
strDuration = strDuration & singular;
}

if (seconds)
{
strDuration = strDuration & seconds & " Second";
if (seconds > 1)
strDuration = strDuration & "s.";
else
strDuration = strDuration & ".";
}

return E_OK;
}
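The duration logic above reduces to repeated divmod steps. A compact Python sketch, using the same 365-day year and unit labels as the TSL function (a tidier variant that always ends with a period):

```python
def format_duration(time_diff):
    """Render a second count as 'N Year(s), N Day(s), ... N Second(s).'"""
    units = [("Year", 365 * 24 * 3600), ("Day", 24 * 3600),
             ("Hour", 3600), ("Minute", 60), ("Second", 1)]
    parts = []
    for name, size in units:
        count, time_diff = divmod(time_diff, size)
        if count:
            parts.append("%d %s%s" % (count, name, "s" if count > 1 else ""))
    return ", ".join(parts) + "." if parts else ""
```

For example, a 3661-second difference renders as one hour, one minute and one second.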

Q: Working with QTP on a web application developed on .NET. Trying to prepare scripts, but when recording and running the script, most of the links are not recognized by QTP; those links are dynamically generated and also appear in different places. What should we do about it?
Try changing the Web Event Recording Configurations. Go to Tools > Web Event Recording Configurations, and change the setting to high.
If the links are dynamically generated, try changing the recorded object properties. After recording, right click on the recorded object and select object properties. From this screen you can add/remove attributes for playback that were previously recorded. Focus on attributes of the object that are not specific to location and do not change (html ID maybe).

How to verify the animations (GIF files) present in the application using WinRunner?
WinRunner doesn't support testing that technology. You will need to find another tool to do that; QuickTest may be a possible choice for you. Go to the Mercury site and look at the list of supported technologies for QuickTest Pro 6.5 and above (not Astra).
WinRunner: Should I sign up for a course at a nearby educational institution?
When you're employed, the cheapest or free education is sometimes provided on the job, by your employer, while you are getting paid to do a job that requires the use of WinRunner and many other software testing tools.
If you're employed but have little or no time, you could still attend classes at nearby educational institutions.
If you're not employed at the moment, then you've got more time than everyone else, so that's when you definitely want to sign up for courses at nearby educational institutions. Classroom education, especially non-degree courses in local community colleges, tends to be cheap.

How important is QTP in automated testing? Is manual testing alone (with TestDirector) enough, or do we require automated tools in each and every project? What are the advantages of QTP?
Most projects that are being automated should not be, because they are not ready for it. Many managers assume that automated functional GUI testing will replace testers. It won't; it just runs the same tests over, and over, and over. When changes are made to the system under test, those changes either break the existing automated tests or are not covered by them.
Automated functional GUI testing is usually a waste of time.
TestDirector is not used for executing any actual test activity but it is a test management tool used for Requirements Management, Test Plan, Test Lab, and Defects Management. Even if the individual test cases are not automated, TestDirector can make life much easier during the test cycles.
These two are also good reads on the topic:
Automation Myths
Test Automation Snake Oil
You can find information about QTP here:
http://www.mercury.com/us/products/quality-center/functional-testing/...

Tell me about the TestDirector®
The TestDirector® is a software tool that helps software QA professionals to gather requirements, to plan, schedule and run tests, and to manage and track defects/issues/bugs. It is a single browser-based application that streamlines the software QA process.
The TestDirector's "Requirements Manager" links test cases to requirements, ensures traceability, and calculates what percentage of the requirements are covered by tests, how many of these tests have been run, and how many have passed or failed.
As to planning, the test plans can be created, or imported, for both manual and automated tests. The test plans then can be reused, shared, and preserved.
The TestDirector’s "Test Lab Manager" allows you to schedule tests to run unattended, or run even overnight.
The TestDirector's "Defect Manager" supports the entire bug life cycle, from initial problem detection through fixing the defect, and verifying the fix.
Additionally, the TestDirector can create customizable graphs and reports, including test execution reports and release status assessments.

What is a backward compatible design?
A design is backward compatible if it continues to work with earlier versions of a language, program, or piece of software. When the design is backward compatible, changes to signals or data formats do not break the existing code.
For instance, a (mythical) web designer decides he should make some changes, because the fun of using Javascript and Flash is more important (to his customers) than his backward compatible design. Or, alternatively, he decides, he has to make some changes because he doesn't have the resources to maintain multiple styles of backward compatible web design. Therefore, our mythical web designer's decision will inconvenience some users, because some of the earlier versions of Internet Explorer and Netscape will not display his web pages properly (as there are some serious improvements in the newer versions of Internet Explorer and Netscape that make the older versions of these browsers incompatible with, for example, DHTML). This is when we say, "Our (mythical) web designer's code fails to work with earlier versions of browser software, therefore his design is not backward compatible".
On the other hand, if the same mythical web designer decides that backward compatibility is more important than fun, or, if he decides that he does have the resources to maintain multiple styles of backward compatible code, then, obviously, no user will be inconvenienced when Microsoft or Netscape make some serious improvements in their web browsers. This is when we can say, "Our mythical web designer's design is backward compatible".

Q: How to get the compiler to create a DLL ?
In the Borland compiler, create a "console DLL".
A console application is one that does not have a GUI window message-queue component. This approach works well and has a very small footprint.

Q: How to export DLL functions so that WinRunner could recognise them?
Created the following definition in the standard header file:
#define WR_EXPORTED extern "C" __stdcall __declspec(dllexport)
and when you write a function, it looks something like this:
WR_EXPORTED UINT WrGetComputerName( )
{
. . .
}

Q: How to pass parameters between WinRunner and the DLL function?
Passing Strings (a DLL function):
In WinRunner,
extern int WrTestFunction1( in string );
In the DLL,
WR_EXPORTED int WrTestFunction1( char *lcStringArg1 )
{
. . .
return( {some int value} ); }
And then to use it in WinRunner,
WrTestFunction1( "Fred" );
Receiving Strings:
In WinRunner,
extern int WrTestFunction1( out string <10>); #The <10> tells WinRunner how much space to use for a buffer for the returned string.
In the DLL,
WR_EXPORTED int WrTestFunction1( char *lcStringArg1 )
{
. . .
{some code that populates lcStringArg1};
. . .
return( {some int value} );
}
And then to use it in WinRunner,
WrTestFunction1( lcString1 );
# lcString1 now contains a value passed back from the DLL function

Passing Numbers (a DLL function)
In WinRunner,
extern int WrTestFunction1( in int );
In the DLL,
WR_EXPORTED int WrTestFunction1( int lnIntegerArg1 )
{
. . .
return( {some int value} );
}
And then to use it in WinRunner,
WrTestFunction1( 2 );
Receiving Numbers
In WinRunner,
extern int WrTestFunction1( out int );
In the DLL,
WR_EXPORTED int WrTestFunction1( int *lnIntegerArg1 )
{
. . .
*lnIntegerArg1 = {some number};
return( {some int value} );
}
And then to use it in WinRunner,
WrTestFunction1( lnNum );
# lnNum now contains a value passed back from the DLL function

Here are some example functions.
#include <windows.h>
#define WR_EXPORTED extern "C" __stdcall __declspec(dllexport)
#define WR_SUCCESS 0
#define WR_FAILURE 100000
#define FAILURE 0
#define WR_STAGE_1 10000
#define WR_STAGE_2 20000
#define WR_STAGE_3 30000
#define WR_STAGE_4 40000
#define WR_STAGE_5 50000
#define WR_STAGE_6 60000
#define WR_STAGE_7 70000
#define WR_STAGE_8 80000
#define WR_STAGE_9 90000
#define MAX_USERNAME_LENGTH 256
#define HOST_NAME_SIZE 64
WR_EXPORTED UINT WrGetComputerName( LPTSTR lcComputerName )
{
BOOL lbResult;
DWORD lnNameSize = MAX_COMPUTERNAME_LENGTH + 1;
// Stage 1
lbResult = GetComputerName( lcComputerName, &lnNameSize );
if( lbResult == FAILURE )
return( WR_FAILURE + WR_STAGE_1 + GetLastError() );
return( WR_SUCCESS );
}
WR_EXPORTED UINT WrCopyFile( LPCTSTR lcSourceFile, LPCTSTR lcDestFile, BOOL lnFailIfExistsFlag )
{
BOOL lbResult;
// Stage 1
lbResult = CopyFile( lcSourceFile, lcDestFile, lnFailIfExistsFlag );
if( lbResult == FAILURE )
return( WR_FAILURE + WR_STAGE_1 + GetLastError() );
return( WR_SUCCESS );
}
WR_EXPORTED UINT WrGetDiskFreeSpace( LPCTSTR lcDirectoryName,
LPDWORD lnUserFreeBytesLo,
LPDWORD lnUserFreeBytesHi,
LPDWORD lnTotalBytesLo,
LPDWORD lnTotalBytesHi,
LPDWORD lnTotalFreeBytesLo,
LPDWORD lnTotalFreeBytesHi )
{
BOOL lbResult;
ULARGE_INTEGER lsUserFreeBytes,
lsTotalBytes,
lsTotalFreeBytes;
// Stage 1
lbResult = GetDiskFreeSpaceEx( lcDirectoryName,
&lsUserFreeBytes,
&lsTotalBytes,
&lsTotalFreeBytes );
if( lbResult == FAILURE )
return( WR_FAILURE + WR_STAGE_1 + GetLastError() );
*lnUserFreeBytesLo = lsUserFreeBytes.LowPart;
*lnUserFreeBytesHi = lsUserFreeBytes.HighPart;
*lnTotalBytesLo = lsTotalBytes.LowPart;
*lnTotalBytesHi = lsTotalBytes.HighPart;
*lnTotalFreeBytesLo = lsTotalFreeBytes.LowPart;
*lnTotalFreeBytesHi = lsTotalFreeBytes.HighPart;
return( WR_SUCCESS );
}
Q: Why Have TSL Test Code Conventions
TSL Test Code conventions are important to TSL programmers for a number of reasons:
. 80% of the lifetime cost of a piece of software goes to maintenance.
. Hardly any software is maintained for its whole life by the original author.
. TSL Code conventions improve the readability of the software, allowing engineers to understand new code more quickly and thoroughly.
. If you ship your source code as a product, you need to make sure it is as well packaged and clean as any other product you create.

Q: Test Script Naming
Test type + Project Name + Version Number + Module name + Test script Function .
For example:
Test type = UAT
Project Name = MHE
Version of the Project = 3.2
Module Name = Upload
Function Name = Excel_File
So the entire file name would be UAT_MHE_3.2_Upload_Excel_File
Note & Caution:
. Make sure the entire file name is below 255 characters.
. Use the underscore "_" character instead of a hyphen "-" or a space for separation.
. It is highly recommended to store the test scripts remotely in a common folder or in the TestDirector repository, where the test team can access them at any time.
. Do not use special characters such as "*&^#@!" in the test script name.
. In this document, "script" and "test script (TSL)" mean the same thing.

Q: Test script Directory structure:
WinRunner stores each test script as a directory in the operating system. The script's TSL code, header information, checklist files, results, expected results, etc. are stored in these directories for each and every script.
. Do not modify or delete anything inside these directories manually without consulting an expert.
. Try to keep scripts to 500 lines or fewer.
. While creating multiple scripts, make sure they follow a directory and subdirectory structure, i.e., every script is stored under its respective module folder, with a main script that calls all these scripts kept in a parent folder above them. In a nutshell, all the scripts must be organized and should follow a hierarchy.
. If a module contains more than 2 scripts, keep an Excel file in the respective folder giving a short description of the test scripts and their functionality. E.g., the sheet can contain fields like Test Plan No., Test Script No., Description of the Test Script, Status of Last Run, and Negative or Non-negative Test.
. Also make sure that every script has a text file that contains the test results of the last run.
. Script folders that accumulate unwanted files, and the results folders, must be cleaned periodically.
. Back up all the scripts (zipped) to a hard drive, CD-ROM, Zip drive, etc., and keep the backup safely.

Q: Comments
All the TSL script files should begin with a comment that lists the Script name, Description of the script, version information, date, and copyright notice:

#################################################################
# Script Name: #
# #
# Script Description: #
# #
# Version information: #
# #
# Date created and modified: #
# #
# Copyright notice #
# #
# Author: #
#################################################################
Comments generated by WinRunner:
WinRunner automatically generates some comments during recording. If they make sense, leave them; otherwise, modify them accordingly.
Single line comment at the end of line.
Accessfile = create_browse_file_dialog ("*.mdb"); # Opens an Open dialog for an Access table.
It is mandatory to add a comment for your test call:
call crea_org0001 (); # Call test to create organization
It is mandatory to add comments when you are using a public variable that is not defined in the present script:
web_browser_invoke (NETSCAPE, strUrl); # strUrl is a variable defined in the init script
Note: The frequency of comments sometimes reflects poor quality of code. When you feel compelled to add a comment, consider rewriting the code to make it clearer. Comments should never include special characters such as form-feed.

Q: Creating C DLLs for use with WinRunner
These are the steps to create a DLL that can be loaded and called from WinRunner.
1. Create a new Win32 Dynamic Link Library project, name it, and click OK.
2. On Step 1 of 1, select "An empty DLL project," and click Finish.
3. Click OK in the New Project Information dialog.
4. Select File > New from the VC++ IDE.
5. Select "C++ Source File," name it, and click OK.
6. Close the newly created C++ source file window.
7. In Windows Explorer, navigate to the project directory and locate the .cpp file you created.
8. Rename the .cpp file to a .c file.
9. Back in the VC++ IDE, select the FileView tab and expand the tree under the project's Files node.
10. Select the Source Files folder in the tree and select the .cpp file you created.
11. Press the Delete key; this will remove that file from the project.
12. Select Project > Add To Project > Files from the VC++ IDE menu.
13. Navigate to the project directory if you are not already there, and select the .c file that you renamed above.
14. Select the .c file and click OK. The file will now appear under the Source Files folder.
15. Double-click on the .c file to open it.
16. Create your functions in the following format:

#include "include1.h"
#include "include2.h"
.
.
.
#include "includen.h"
#define EXPORTED __declspec(dllexport)
EXPORTED <return type> <function name>( <type> <arg1>,
<type> <arg2>,
…,
<type> <argn> )
{
<function body>
return <value>;
}
.
.
.
EXPORTED <return type> <function name>( <type> <arg1>,
<type> <arg2>,
…,
<type> <argn> )
{
<function body>
return <value>;
}
17. Choose Build > Build <project name>.dll from the VC++ IDE menu.
18. Fix any errors and repeat step 17.
19. Once the DLL has compiled successfully, it will be built in either a Debug directory or a Release directory under your project folder, depending on your settings when you built the DLL.
20. To change this setting, select Build > Set Active Configuration from the VC++ IDE menu, and select the configuration you want from the dialog. Click OK, then rebuild the project (step 17).
21. All the DLL types that you create are loaded and called in the same way in WinRunner. This process will be covered once in a later section.
Q: Creating C++ DLLs for use with WinRunner
Here are the steps for creating a C++ DLL:
1. Create a new Win32 Dynamic Link Library project, name it, and click OK.
2. On Step 1 of 1, select "An empty DLL project," and click Finish.
3. Click OK in the New Project Information dialog.
4. Select File > New from the VC++ IDE.
5. Select "C++ Source File," name it, and click OK.
6. Double-click on the .cpp file to open it.
7. Create your functions in the following format:

#include "include1.h"
#include "include2.h"
.
.
.
#include "includen.h"

#define EXPORTED extern "C" __declspec(dllexport)

EXPORTED <return type> <function name>( <type> <arg1>,
<type> <arg2>,
…,
<type> <argn> )
{
<function body>
return <value>;
}
.
.
.
EXPORTED <return type> <function name>( <type> <arg1>,
<type> <arg2>,
…,
<type> <argn> )
{
<function body>
return <value>;
}

8. Choose Build > Build <project name>.dll from the VC++ IDE menu.
9. Fix any errors and repeat step 8.
10. Once the DLL has compiled successfully, it will be built in either a Debug directory or a Release directory under your project folder, depending on your settings when you built the DLL.
11. To change this setting, select Build > Set Active Configuration from the VC++ IDE menu, and select the configuration you want from the dialog. Click OK, then rebuild the project (step 8).
12. All the DLL types that you create are loaded and called in the same way in WinRunner. This process will be covered once in a later section.

Q: Creating MFC DLLs for use with WinRunner
1. Create a new MFC AppWizard (DLL) project, name it, and click OK.
2. In the MFC AppWizard Step 1 of 1, accept the default settings and click Finish.
3. Click OK in the New Project Information dialog.
4. Select the ClassView tab in the ProjectView and expand the classes tree. You will see a class named C<ProjectName>App; expand this branch.
5. You should see the constructor function C<ProjectName>App(); double-click on it.
6. This should open the .cpp file for the project. At the very end of this file, add the following definition:
#define EXPORTED extern "C" __declspec( dllexport )
7. Below you will add your functions in the following format:
EXPORTED <return type> <function name>( <type> <arg1>,
<type> <arg2>,
…,
<type> <argn> )
{
<function body>
return <value>;
}
.
.
.
EXPORTED <return type> <function name>( <type> <arg1>,
<type> <arg2>,
…,
<type> <argn> )
{
<function body>
return <value>;
}

8. You will see the functions appear under the Globals folder in the ClassView tab in the ProjectView.
9. Choose Build > Build <project name>.dll from the VC++ IDE menu.
10. Fix any errors and repeat step 9.
11. Once the DLL has compiled successfully, it will be built in either a Debug directory or a Release directory under your project folder, depending on your settings when you built the DLL.
12. To change this setting, select Build > Set Active Configuration from the VC++ IDE menu, and select the configuration you want from the dialog. Click OK, then rebuild the project (step 9).
13. All the DLL types that you create are loaded and called in the same way in WinRunner. This process will be covered once in a later section.

Q: Creating MFC Dialog DLLs for use with WinRunner
1. Create a new MFC AppWizard (DLL) project, name it, and click OK.
2. In the MFC AppWizard Step 1 of 1, accept the default settings and click Finish.
3. Click OK in the New Project Information dialog.
4. Select the ClassView tab in the ProjectView and expand the classes tree. You will see a class named C<ProjectName>App; expand this branch also.
5. You should see the constructor function C<ProjectName>App(); double-click on it.
6. This should open the .cpp file for the project. At the very end of this file, add the following definition:
#define EXPORTED extern "C" __declspec( dllexport )

7. Switch to the ResourceView tab in the ProjectView.
8. Select Insert > Resource from the VC++ IDE menu.
9. Select Dialog from the Insert Resource dialog and click New.
10. The Resource Editor will open, showing you the new dialog. Add the controls you want to the dialog, and set the properties of the controls you added.
11. Switch to the ClassView tab in the ProjectView and select View > ClassWizard from the VC++ IDE menu, or double-click on the dialog you are creating.
12. The Class Wizard should appear with an "Adding a Class" dialog in front of it. Select "Create a new class" and click OK.
13. In the New Class dialog that comes up, give your new class a name and click OK.
14. In the Class Wizard, change to the Member Variables tab and create new variables for the controls you want to pass information to and from. Do this by selecting the control, clicking Add Variable, typing in the variable name, selecting the variable type, and clicking OK. Do this for each variable you want to create.
15. Switch to the Message Maps tab in the Class Wizard. Select the dialog class from the Object IDs list, then select the WM_PAINT message from the Messages list. Click Add Function, then Edit Code. This should bring up the function body for the OnPaint function.
16. Add the following lines to the OnPaint function so it looks like the following:
void <DialogClassName>::OnPaint()
{
CPaintDC dc(this); // device context for painting
this->BringWindowToTop();
UpdateData(FALSE);
// Do not call CDialog::OnPaint() for painting messages
}

17. Select IDOK from the Object IDs list, then select the BN_CLICKED message from the Messages list. Click Add Function, accept the default name, and click OK.
18. Add the line UpdateData(TRUE); to the function, so it looks like this:
void <DialogClassName>::OnOK()
{
UpdateData(TRUE);
CDialog::OnOK();
}
19. When you are done with this, click OK to close the Class Wizard dialog and apply your changes. Your new class should appear in the ProjectView on the ClassView tab.
20. In the tree on the ClassView tab, double-click on the constructor function for the C<ProjectName>App class (see step 5).
21. At the top of the file, along with the other includes, add an include statement for the header file of your dialog class. It should have the same name as the class you created in step 13, with a .h appended. If you are unsure of the name, you can look it up on the FileView tab under the Header Files folder.
22. At the very end of the file, after the #define you created in step 6, create a function that looks something like this:
EXPORTED int create_dialog( char* thestring )
{
AFX_MANAGE_STATE( AfxGetStaticModuleState() );
<DialogClassName> theDlg;
theDlg.<member variable> = <initial value>;
theDlg.DoModal();
strcpy( thestring, theDlg.<member variable> ); // this will pass the value back to WinRunner
return 0;
}
23. Choose Build > Build <project name>.dll from the VC++ IDE menu.
24. Fix any errors and repeat step 23.
25. Once the DLL has compiled successfully, it will be built in either a Debug directory or a Release directory under your project folder, depending on your settings when you built the DLL.
26. To change this setting, select Build > Set Active Configuration from the VC++ IDE menu, then select the configuration you want from the dialog. Click OK, then rebuild the project (step 23).
27. All the DLL types that you create are loaded and called in the same way in WinRunner. This process will be covered once in a later section.

Q: Loading and Calling the Above DLLs from WinRunner
Loading and calling DLLs from WinRunner is really very simple. There are only 3 steps.
1. Load the DLL using the command load_dll.
2. Declare the function in the DLL as an external function using the extern function.
3. Call the function as you would any other TSL function.
As simple as this is, there are some things you need to be aware of.
1. WinRunner has a limited number of variable types; basically, there are string, int, and long. Windows has many different types. Two common types which may confuse you are HWND and DWORD. Which WinRunner type do you choose for these? You should declare these as long.
2. If you are building a function in a DLL and you are testing it in WinRunner, make sure you unload the DLL in WinRunner using the unload_dll function before you try to recompile the DLL. If you leave the DLL loaded in WinRunner and try to recompile the DLL, you will receive an error message in VC++ that looks like this:
LINK : fatal error LNK1104: cannot open file "Debug/<project name>.dll"
Error executing link.exe.
To resolve this error, step through the unload_dll line in WinRunner, then compile the DLL.
3. Before shipping a DLL make sure you compile it in Release mode. This will make the DLL much smaller and optimized.
Q: Definition of Tests
As a prime entry point, defining the tests requires an approach for classifying the scripts into finer functional elements, each contributing to the various aspects of the automation technique.
From this perspective, the elements of an automation script involve record/playback techniques, details of the application (better understood as objects in the tools), execution of business logic using loop constructs, and test data accessibility for batch processes or back-end operations. Ultimately, all these salient features need to function at the right point of time with the right inputs. To satisfy these criteria, a lot of planning is required before we start automating the test scripts.


Q: Test Recorder about Object Vs Actions
In automation tools the test recorder has two modes: object based and action (analog) mode. Choosing which mode to use requires a meticulous yet simplified approach. Though action mode is sometimes unavoidable, and is still used for many TE (terminal emulator) based applications, object based recording is the widely accepted and mandatory mode of operation in test automation. To the extent possible, we will avoid action based functions and stick to the object mode of operation.

Q: Test Recorder about Generic Test Environment Options
Some common settings we need to set in the General Options:
1. The default recording mode is Object mode.
2. The synchronization point time is 10 seconds by default.
3. When test execution is in batch mode, ensure all the options are set off so that the batch test runs uninterrupted.
4. For text recognition, if the application text is not recognizable, set the default font group. The font group is identified with a user-defined name and then included in the General Options.

Q: Test Recorder about Test Properties
1. Before recording any script, ensure that the test properties are set to Main Test with the defaults.
2. Do not use any parameters for the Main Test.
3. It is not a good practice to load the object library from the test options (if any). Rather, load the object library from the script using the suitable tool commands. This avoids hidden settings in the script, and loading and unloading the object library can be done dynamically in the test script rather than manually every time the test suite is run.
4. Ensure the add-ins are correct on the Add-ins tab.

Q: Test Recorder about Script Environment
The basic idea of setting up the test bed is that the test suite must be portable and can readily be run in any environment, given the initial conditions. For this to happen, the automation tool supports a lot of functions for evolving a generic methodology, where we can wrap up all the built-ins to run before the test suite starts executing the script. In other words, the way the test scripts are organized lets the automation developer anticipate issues and hurdles that can be avoided with little or no extra programming.
Q: Test Recorder about Script Environment: Automation Inits ()
Common functions that go into the initialization script:
1. Use built-in commands to keep the test path dynamically loaded; this rules out hard-coded test path definitions.
2. Close all the object files and data files in the initialization script.
3. Establish the database connections in the Inits script.
4. Always unload and load the object library, and do it only in the Inits script.
5. Define all the "public" variables in the Inits script.

Q: Test Recorder about Script Environment: Test Scripts Elements:
Prior to the development of test scripts, the arrangement of the test scripts needs proper planning. Let's look at a few inputs on arranging the testware within the test repository:
. Test Suite - should contain sub folders, exception handlers, global object files, set data files, driver scripts, and initialization & termination scripts.
. Driver Script - object checks, bitmap checks, text checks, web checks, user defined functions, and the global test report folder.
. Driven Script - GUI/bitmap/text checks, external libraries, and I/O handlers.

Q: Test Recorder about Control Points
In any given automation tool, the overall control of the AUT is through object identification. By this unique feature, the tool treats the application as a medium it can interrogate with tester-supplied inputs, exercising the business logic. Using this object identification technique, the test tool provides control features that check the application at various points in time. Numerous criteria, myriad object handlers, and plenty of predefined conditions make up the object based features of the functional checkpoints. Each tester has a different perspective on defining the control points.

Q: Test Recorder about Control Points - If. … Else:
1. Before we start the "if else" construct, the nature of the control point is commented alongside. For example:
# Home Page Validation
if ( <return code> == 0 )
print ("Successfully Launched");
else
print ("Operation Unsuccessful");
2. For all Data Table operation the return-code of the Open function should be handled in the "if else" construct.

Q: Test Recorder about Data Access
In automation, test data becomes very critical to control, supplement, and transfer into the application. In automation tools the test data is handled in data sheets in Excel format or in a .csv file, which is basically a character-separated file, using data-driven technology. In most regression batch testing, the test data is handled in data tables with proper allocation of test data in the sheets.
Q: Test Recorder about Control Points - Check Points
1. Checkpoints should not depend on X and Y coordinates. In practical terms, if a checkpoint is defined on X,Y parameters, the checkpoint is of little use once the application layout changes. The following criteria denote the do's and don'ts of checkpoints:
. Text Check - include: capturing the text; exclude: position of the text, font & font size, text area.
. Bitmap Check - include: only the picture; exclude: the window or screen that holds the picture, x-y coordinates.
. Web Check - include: URL checks, orphan page checks; exclude: any text validation.
2. As a case study, the WinRunner automation tool is used here as an example for creating checkpoints. Avoid using obj_check_info or win_check_info; instead, always create the GUI checkpoint with multiple properties. The advantage is identifying every small object with its class, its properties, and its relation to previous versions. This not only enables regression comparisons but also gives you the flexibility of defining GUI checks for all physical states of the object.


Q: Test Recorder about Data Handlers
Test data can be accessed by built-in data functions. Some common practices help an automation tester use the data tables properly.
1. SINGLE DATA TABLE: By default, every automation tool provides the data table as an input file, which can be created using a tool wizard or, alternatively, as a character-separated file. The wizard helps create a data sheet whose column names come from the objects used in the test. With this concept, we can evolve a technique to load any file or manipulate the AUT with a predefined set of cases.
2. MULTIPLE DATA TABLES: It's a common practice to use the single default data file for many test scripts, and the usage of data tables is often restricted to one file at a time. Handling multiple data tables in one script is not advisable and incurs a lot of redundant code for the table manipulations. As a general practice, a data file is mapped to every script. This means every test script has a unique data table for easier data access, and the data operations become easier to maintain.
In Compuware's QARun following is the code used.

// Run a test script
TestData ("CreditLogon.csv")
Call TestFunc1

E.g., in Mercury Interactive's WinRunner,
call_close "Test_Script1" (dTable1.xls) ;
#
call_close "Test_Script2" (dTable2.xls);

3. Data files should be initialized before starting, using simple tool commands that copy a standard template data table over the actual data table. This practice avoids having to delete data from the data table after every run.
In Mercury Interactive's WinRunner, the piece of code below shows the data table initialization.
#/***************Data Table Initialization*****************
ddt_open(Template, DDT_MODE_READ);
ddt_open(dTable, DDT_MODE_READWRITE);
ddt_export(Template,dTable);
ddt_save(dTable);
ddt_close(dTable);
ddt_close(Template);
ddt_close_all_tables();
#/***************Data Table Initialization*****************

4. Dynamically loading data from the database is the most advisable practice to follow, yet handling the db operations with some meticulous programming will always benefit the tester, avoiding a variety of operational hazards and reducing the data access time from a remote database server to the local data table.
Some tips to follow in WinRunner TSL when using the db commands:
Set the row before writing the data values into the data table, i.e., use the following TSL commands:
public count;
count = 1;
ddt_set_row (dTable, count);
Now use the set-value-by-row command to write the values:
ddt_set_val_by_row (dTable, count, "CTS_EMP_NAME", value);
Needless to mention, but to avoid confusion it is better to use the same column names as found in the database table. Never insert columns before, after, or in between the column names in the WinRunner data table. It is a better practice to load the data table with the data as found in the back-end database.
Fig. 1 shows the automation test plan, its pre-requisites, initial conditions, and the test repository. This figure also gives an idea of how to build any automation test plan.

Q: Online Vs Batch Execution - Online Test Scripts
How do we use online scripts?
Interactive testing can be accomplished using dialog functions.

In Mercury Interactive's WinRunner:
SSN = create_input_dialog ("Please Enter the SSN Number");
In Compuware's QARun:
Dialog "Array_A" Array_A []
USER = Array_A["Userid"]
Pass = Array_A["Password"]

Q: Online Vs Batch Execution - User Input
. Where should the input_dialog_box function exist - in the driver file or in individual script?
. The input dialog function should be used within the driver files (Master driver and within each of the Type driver files)


Q: Online Vs Batch Execution - Test Results
. Is it necessary to pass results back to the driver script even if scripts are not dependent? How should the results be passed back?
There is no need to pass the result back to the driver script if your scripts are independent.

Q: Online Vs Batch Execution - Re-Runnable Tests
Should setup scripts be made re-runnable? If yes, then why? Also, what is the best way to make them re-runnable (should it be attaching a random-number string, or 'if' statements to check whether data already exists)?
It is best to create scripts that are re-runnable, but we understand that this may not be possible in all cases for set-up type scripts.

Q: Online Vs Batch Execution - Nested Driver Files
Is calling a driver file from within a driver file advisable?
No.

Q: Online Vs Batch Execution - Functions & Compiled Modules - Load Library
Loading libraries and memory issues: if a library contains 100 functions and only one function is used, then we are unnecessarily loading all the functions into memory. Should we make multiple smaller libraries and load and unload them frequently, or just have one big library and keep it loaded throughout the execution of the master driver?
Known issue: we will run into memory issues when loading 100 functions into memory.

Q: Online Vs Batch Execution - Functions & Compiled Modules - Data Fetch
Should we open and read from the data table in driver scripts? Why or why not?
The purpose of the driver script is to set up the application and then call each individual script. Opening, reading, and closing the data file should happen at the individual test script level.

Q: Online Vs Batch Execution - Functions & Compiled Modules - User Defined Functions
Creating user-defined libraries and functions: How do we assess whether a script should be made a function? What are the pros and cons of making a script a function versus just using it as a script and calling it from the driver file?
You have to load the function library before you can call any of the functions defined in it. Using user-defined functions is more efficient in the sense that they are compiled and loaded into memory before being called, and a function can be used over and over again without having to recompile the function library.

WinRunner: Test Director
• Test Director is a one-stop solution for organizing the entire test cycle.
• It has four main tabs (categories) corresponding to the various phases in the testing cycle, namely Requirements, Test Plan, Test Lab, and Defects.
• Requirements can be entered and organized into various categories like login operations, database operations and so on.
• After setting up requirements, test cases corresponding to (covering) these requirements can be defined and associated with the requirements. A requirement can be covered by multiple test cases, and a test case can cover multiple requirements.
• The test plan can be defined with test cases each with test steps and can be manual or automated.
• Test sets are created to group similar test cases and then the test sets can be run.
• If a particular test set fails the run, after examination, the tester/QA can enter a defect in the associated defect tracking system. Attributes such as severity can be assigned.
• Test Director allows two modes of operation - user and administrator. The administrator can create and update user and group accounts, configure mail, customize project lists, customize project entities, and set up workflow, whereas the user doesn't have these privileges.
• The six project entities are Requirement, Test, Test Step, Test Set(Execution), Run and Defect.
• Test Director allows attachments (file, URL, snapshot) with requirements, test steps, test cases, test runs, or defects.
• Test Director is flexible and can be customized within certain limits. Additional fields can be added in requirements, test case, test step, test plan and defects.
• Test Director has what is known as favorite views wherein any view or report or graph can be made to look as the user wants it to. The user can make only certain columns viewable and make it a favorite view.
• Test Director also has filters to filter test cases, requirements, defects by any of the various attributes like severity, assigned to etc.
• Test Director also has an Execution Flow option which is used to schedule automated test cases.
• Work flow is setup by the administrator which includes creating Visual Basic modules to save data before entering a bug in a defect tracking system, to perform operations before opening a bug form, after a bug field is changed and so on.
• Test Director also has a comprehensive document generator utility to develop professional reports for the testing process.
• Also reports and graphs corresponding to requirements, test plan, test lab and defects can be created.
• The host machine can also be configured while running the test sets.

WinRunner: Test Director - Test Repositories
. A separate test repository will be created for each group's project. The test repositories will be created as common directories and will be located on a network server (this area should be a shared area where the group stores the rest of its files and documents).
. Initially all test repositories will be created using a Microsoft Access Database. In the future we may change this to SQL Server.
. The path to the network area cannot be more than 47 characters (a Test Director restriction).
. The path cannot contain any special characters such as $, &, -, or %.
. All folders that contain the test repositories should start with TD_ .
. All test repositories should start with TD_ .

TD_NameofProject
Reports - Created automatically by Test Director, this is where it stores results of tests etc.
Tests - This is where the test scripts will reside if use WinRunner also.
GUImap - This is where the GUI map files will reside
Datafile - This is where all data flat files and excel spreadsheets will reside
Docs - This is where copies of documents that pertain to this project will reside
Fonts - This is where a copy of the font groups will reside
Functions - This is where the function library for the project will reside.
. Within Test Director various folders can be created as a way to organize your project and tests. These folders are stored in the database and may or may not be apparent on the file system.
TD_NameofProject
FolderName - Folder for functional regression tests
SubFolder - Sub folder for Specific Window
FolderName - Folder for SC functional regression tests
. It is not recommended to nest the folders more than 3 levels deep.

WinRunner: Test Director - Steps to take before creating Test Projects:
. Before starting Test Director, you should close all applications that are not required for testing. (Mail, Explorer, Screen Savers, CD Player etc).
. After installing a new version of Test Director and WinRunner, it is a good idea to make a backup copy of the following ini files to another location (tester's choice). This allows the tester to easily reset their WinRunner/TestDirector environment in the event of system corruption.
c:\windows\wrun.ini
c:\windows\mercury.ini
c:\~\TestDirector\bin\td.ini
c:\~\TestDirector\bin\filters.ini
c:\~\TestDirector\bin\forms.ini
c:\~\TestDirector\bin\grids.ini

. Your Test Director Application comes with a full set of on-line manuals. The manuals can be accessed using the Help Menu in the Test Director application. The on-line manuals can be viewed using Adobe Acrobat Reader 4.0.

WinRunner: Test Director - Set Up Recommendations:
Before a tester starts creating folders and test scripts, they should configure their Test project using the Administration menu.
1. Create various users and user groups for your project through the Administration -> Setup Users… menu item. Test Director comes with the following pre-defined users and groups. We recommend that you create a user id that is similar to your network login. You also have the option to create a password or leave it blank (the default). You can also create your own groups or use the default groups provided by Test Director.
Default Users and Groups:
Users:
. Admin
. Guest
Groups:
. TDAdmin Has full privileges in a TestDirector project. This is the only type of user which can make changes to the information in the Setup Users dialog box. It is recommended to assign this user type to one person per group who will serve as the TestDirector Administrator.
. QATester Can create and modify tests in Plan Tests mode, and create test sets, run tests, delete test runs, and report defects in Run Tests mode. This user type is recommended for a quality assurance tester.
. Project Manager Can report new defects, delete defects, and modify a defect's status. This user type is recommended for a project manager or quality assurance manager.
. Developer Can report new defects, and change a defect's status to Fixed. This user type is recommended for a software developer.
. Viewer Has read-only privileges in a project.
2. Test Director also gives you the option to customize your projects by creating user-defined fields for the dialog boxes and grids, creating categories and modifying drop-down lists. These options enable you to add information that is relevant to your project. Modifications to your project are made using the Administration -> Customize Project menu item. For more details on how to customize your project, please see Chapter 3 of the Test Director Administrator's Guide.
3. Decide on Script Naming Convention and consistently use the naming convention for all tests created within the project. Please reference the Naming Conventions section for more information.
4. Create Test Folders to organize your tests into various sections. Examples of possible folder(s) names could be the types of testing you are doing ie: (functional, negative, integration ) or you could base your folder names on the specific modules or windows you are testing.

5. Create test scripts in the Plan Tests folder using the New button in the test frame or the menu item Plan -> New Test. The Test window has four tabs: Details, Design Steps, Test Script and Attach.
. The Details tab should be used to list all the information regarding the test. Test Director defaults to displaying Status: Design; Created: date and time; Designer: your id.
. The Design Steps tab should be used to list detailed instructions on how to execute your test.
. The Test Script tab is used for tests that are turned into automated tests; the automated WinRunner code will appear on this page.
. The Attach tab can be used to attach bitmaps or other files required for testing to the script.
6. When creating folders, tests and test sets in Test Director make sure every item has a description.
7. Create a "Documentation" test to document how to set up your testing environment and run your tests.
8. It is recommended that you write your test scripts in as much detail as possible and not assume that the executor of your test "knows how to use your module". Detailed scripts allow people from outside your project to understand how to execute your tests, in the event that they have to run them.
9. Create Test Sets to group like tests together, or to specify the order in which your tests should run. Test Director has a limit of 99 test scripts per test set.
10. Export the test scripts into Word via the Document Generator, menu item Report -> Document Generator.
WinRunner: Test Director - Documentation Standards:
. Use a consistent naming convention for all test scripts and tests.
. Put Detailed descriptions on all test folders and test scripts to explain the purpose of the tests.
. Each test script should have a detailed test step associated with it.
. Before any automation project is started, the tester should write an automation standards document. The automation standards document will describe the following:
. Automation Environment
. Installation requirements
. Client Machines Configurations
. Project Information
. WinRunner and Test Director Option Settings
. Identify Servers, Database and Sub System tests will run against
. Naming Convention for project
. Specific Recording Standards that apply to individual project.

WinRunner: Test Director - Naming Conventions:
. Never call a test "tests"; WinRunner/TestDirector has problems distinguishing the test name "tests" from the tests directory.
. The project automation document will specify any naming conventions used for the individual projects.

WinRunner: Test Director - Importing WinRunner tests into Test Director
1. Bring up Test Director.
2. Select Plan -> Import Automated Tests.
3. Select your tests and import them.
4. Select the test grid button.
5. Change each test's subject to point to the folder you want it in.
6. Now copy all the tests from Unattached to the folder.
7. Close Test Director.
8. Bring up Test Director again.
9. If after all this the tests are still not there, create a dummy test to refresh the tree view; the tree window does not seem to refresh very well.

WinRunner: Test Director - How to delete Mercury Toolbar out of MS Word
If you ever use the Test Director import-into-Word feature, a Test Director toolbar is automatically created in your MS Word application. The best way to get rid of this toolbar is to:
1. Shut down word if open
2. Bring up windows explorer
3. Navigate to C:\Program Files\Microsoft Office\Office\STARTUP
4. Delete all instances of Tdword.*
5. Restart word and verify it is now gone.

WinRunner: Other Test Director Features
The Test Director application has a number of other features:
1. Running manual tests via the Test Director application. Test Director has a feature that allows you to run your manual tests through the Mini Step Utility. This feature allows you to compare the actual outcome to the expected results and record the results in the Test Director database at run time.
2. Test Director also has the capability of converting your manual tests into Automated Tests in the WinRunner application.
3. Test Director also provides reporting and graphing capabilities that will assist you in reviewing the test planning and test execution process. Test Director provides a number of standard report and graph formats, and also allows the user to create customized reports and graphs.
4. Defect Tracking. Test Director also includes a built-in defect tracking tool.

WinRunner: How to see the internal version of WebTest in your machine?
To see the internal version in your machine, right-click the ns_ext.dll file, select Properties, and click the Version tab. The ns_ext.dll file is located in the arch subdirectory of your WinRunner directory.

WinRunner: Web sites contain ActiveX controls
If your web site contains ActiveX controls, you must install ActiveX add-in support when you install the WebTest add-in.
WinRunner: Web sites contain Java applets
If your web site contains Java applets, you need to install Java add-in support for WinRunner.

WinRunner: To Record the Web Application
Recommendation: Set your browser to accept cookies. This will prevent a pop up window asking about the cookie from interfering with your script.

WinRunner: Steps to take before recording:
. Before starting to record, you should close all applications that are not required for testing. (Mail, Explorer, Screen Savers, CD Player etc).
. After installing a new version of Test Director and WinRunner, it is a good idea to make a backup copy of the following ini files to another location (tester's choice). This allows the tester to easily reset their WinRunner/TestDirector environment in the event of system corruption.
c:\windows\wrun.ini
c:\windows\mercury.ini
c:\~\TestDirector\bin\td.ini
c:\~\TestDirector\bin\filters.ini
c:\~\TestDirector\bin\forms.ini
c:\~\TestDirector\bin\grids.ini
c:\~\WinRunner\dat\ddt_func.ini
. Make sure your system is set up with all the necessary library functions that have been created.
. Make sure you create a GUI map and font group for each project.
. In the tsl_init file add the command GUI_close_all();. This command will make sure that no GUI maps are loaded when you bring up the WinRunner application. The benefit of this approach is that it will force the tester to load the correct GUI map for their testing, thus preventing scripting errors and other complications.
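For illustration, the relevant portion of a tsl_init startup file following this recommendation might look like the sketch below:

```
# tsl_init sketch - runs automatically when WinRunner starts.
# Close any GUI maps left loaded from a previous session, forcing
# each tester to load the correct map for their own testing.
GUI_close_all();
```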

WinRunner: Libraries Needed for automation:
A number of libraries have been created to aid in the automation projects. Below is a list of the libraries that should be installed on each individual machine.
. csolib32 - This is a library full of many useful functions. This library was created by Mercury Customer Support and can be found in the following zip file csolib.zip. In order to access the library functions, the tsl_init file needs to be modified to run the cso_init file (which will load the libraries when the WinRunner application boots up).
. WebFunctions - This library contains functions designed to run on the YOUR-COMPANY Web systems.

WinRunner: Commands and Checkpoint Verification information for Web:
. Execute a set_window() command for each action on a new window; this helps the script recognize/reset the window state and prevents scripts from failing due to slow network/system performance.
. Add a report_msg or tl_step command after each test to record what happens in the test.
. An obj_check_gui statement checks only one object in the window; a win_check_gui statement checks multiple objects in the window.
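A sketch combining the first two rules above (the window and object names are hypothetical examples, not from any real application):

```
# Reset the window context before acting on a new window,
# then log the outcome of the step explicitly.
set_window ("Login Page", 10);           # hypothetical window name
edit_set ("UserName", "tester01");       # hypothetical edit field
button_press ("Login");
if (win_exists ("Main Menu", 10) == E_OK)    # hypothetical window
    tl_step ("login", 0, "Main Menu appeared after login");   # 0 = pass
else
    tl_step ("login", 1, "Main Menu did not appear");         # non-zero = fail
```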
. The single property check allows you to check a single property of an object. The single property check dialog will add one of the following functions to your script.
button_check_info
edit_check_info
list_check_info
obj_check_info
scroll_check_info
static_check_info
The Object/Window check allows you to check every default object in the window. After it has completed it inserts an obj_check_gui statement in your script.
The Multiple Objects check allows you to check two or more objects in the window. This selection works best because it first brings up the checkpoint window; then, after the user selects Add, you can navigate to the AUT. Also, the data in the object is retrieved with this feature but not with the Object/Window check. After it has completed, it inserts a win_check_gui statement in your script.
. There are 3 main types of GUI checks you can do with WinRunner: Single Property, Object/Window, and Multiple Objects.
. There are 35 Web functions that come with the WebTest add-in. For the full list please see the TSL reference guide. The table below lists the most commonly used functions.

Function Description
web_browser_invoke Invokes the browser and opens a specified site.
web_image_click Clicks a hypergraphic link or an image.
web_label_click Clicks the specified label.
web_link_click Clicks a hypertext link.
web_link_valid Checks whether the URL name of a link is valid (not broken).
web_obj_get_info Returns the value of an object property. (A set_window command must be run before it is used.)
web_obj_get_text Returns a text string from an object.
web_obj_text_exists Returns a text value if it is found in an object.
web_sync Waits for the navigation of a frame to be completed.
web_url_valid Checks whether a URL is valid.
web_find_text Returns the location of text within a page.
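A brief sketch of two of the functions above in use (the page title and object name are made up for illustration):

```
# A set_window call must precede web_obj_get_info (see table above).
set_window ("Home Page", 5);                    # hypothetical page title
web_sync (10);                                  # wait for frame navigation to finish
web_obj_get_info ("CompanyLogo", "type", val);  # hypothetical image object
report_msg ("Logo object type: " & val);
```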

. Most of the Web Test functions do not return a value to the test log. In order to record a pass or fail, conditional logic has to be added to your code below the web function to send a tl_step or report_msg to the log.
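For example, web_url_valid by itself only fills its output parameter; a hedged sketch of wrapping it so the result reaches the test log (the URL is a placeholder):

```
# web_url_valid fills 'valid' but writes nothing to the test log,
# so log the result ourselves with tl_step (0 = pass).
rc = web_url_valid ("http://www.example.com", valid);   # placeholder URL
if (rc == E_OK && valid)
    tl_step ("url_check", 0, "URL responded and is valid");
else
    tl_step ("url_check", 1, "URL is broken or could not be checked");
```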

WinRunner: How to Structure tests for Web:
Create Begin and End Scripts. This will ensure that WinRunner starts and stops from the same place.
. Mercury recommends using smaller, specific GUI maps for your testing rather than one large GUI map that encompasses your whole application.
. Comment all major selections or events in the script. This will make debugging easier
. Create an init script to load the correct GUI map and font group and to set option variables.
A few of the options you should set are:
# Turns off real time error message reporting if the test case
# fails. The error is still logged in the test results window.
setvar ("mismatch_break", "off");
# Turn off beeping
setvar ("beep", "off");
setvar ("sync_fail_beep", "off");
# Make sure context sensitive errors don't trigger real time
# failures that stop the script. The error is still logged in the
# test results window.
setvar ("cs_fail", "off");
# Sets time winrunner waits between executing statements
# (Mercury default is 0)
setvar ("cs_run_delay", "500");
# Sets time winrunner waits to make sure window is stable
# (Mercury default is 1000)
setvar ("delay_msec", "500");
# Sets the fail-test-when-single-property-check-fails option
# (bug - recommend setting it to unchecked)
setvar ("single_prop_check_fail", "0");
. Determine all paths to start up directory and then set them in the options window.
. In your closing/ending scripts use the GUI_unload_all command to unload all GUI maps in memory.
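A minimal closing/ending script along these lines might look like this sketch (the option values shown simply restore the Mercury defaults noted earlier):

```
# closeenv sketch: unload every GUI map so the next session starts
# clean, then restore default option settings.
GUI_unload_all ();
setvar ("mismatch_break", "on");
setvar ("beep", "on");
setvar ("cs_run_delay", "0");      # Mercury default per the notes above
setvar ("delay_msec", "1000");     # Mercury default per the notes above
```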
WinRunner: Recording tips:
. Always record in Context Sensitive mode.
. WinRunner is case sensitive, so be careful in your scripting regarding what is put in upper/lower case.
. If using the full check text test case, make sure you add a filter to block items such as dates or user IDs (which might vary depending upon when the script is run and who is running it).
When recording in Analog mode, avoid holding down the mouse button if this results in a repeated action. For example, do not hold down the mouse button to scroll a window. Instead, scroll by clicking the scrollbar arrow repeatedly. This enables WinRunner to accurately execute the test.
Before switching from Context Sensitive mode to Analog mode during a recording session, always move the current window to a new position on the desktop. This ensures that when you run the test, the mouse pointer will reach the correct areas of the window during the Analog portion of the test.
When recording, if you click a non- standard GUI object, WinRunner generates a generic obj_ mouse_ click statement in the test script. For example, if you click a graph object, it records: obj_ mouse_ click (GS_ Drawing, 8, 53, LEFT); If your application contains a non- standard GUI object which behaves like a standard GUI object, you can map this object to a standard object class so that WinRunner will record more intuitive statements in the test script.
Do not save in the test procedure unless it is absolutely necessary; this will prevent the need to write numerous clean-up scripts.
Do not use the mouse for drop-down selections; whenever possible use hotkeys and the arrow keys. When navigating through a window, use the Tab and arrow keys instead of the mouse; this will make maintenance of scripts easier when the UI changes in the future.
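A sketch of keyboard-driven navigation instead of mouse clicks (the window and object names are hypothetical):

```
set_window ("Order Form", 5);        # hypothetical window name
obj_type ("Country", "<kDown>");     # arrow down through a drop-down list
type ("<kTab>");                     # Tab to the next control
type ("<kReturn>");                  # confirm with Enter instead of clicking
```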
. If recording on a PC, make sure the environment settings are set up correctly. Use the Control Panel -> Regional Settings window to make sure that the date format, number formatting, currency, time and date are set the same for all PCs that will be using the scripts. This ensures that playback of test cases does not fail due to date, currency or time differences. The best way to handle this is to record a small script to set the correct settings in the Control Panel Regional Settings window.
. If recording on a PC, make sure that all workstations running the scripts have the same Windows display settings. Setting the PCs' window appearance and display settings the same helps ensure that bitmap comparisons and other graphic tests do not fail due to color and size differences. The best way to handle this is to record a small script to set the correct settings in the Control Panel Display Settings window.
When recording, if you click on an object whose description was not learned by the RapidTest Script wizard, WinRunner learns a description of the object and adds it to a temporary GUI map file.
. WinRunner does not compile the scripts until run time, so be careful to check your code before running it. Another option is to put your script in debug mode and step through the code to make sure it will compile correctly.
. Please Indent "if" statements and loops, to help make the code more understandable.
. To add a new object(s) to a GUI map that already exists; perform the following steps:
1. Ensure that no GUI maps are loaded in the GUI Map Editor.
2. Do a simple recording that will include the object you need added to the GUI Map. This will put the object into the TEMP GUI Map.
3. Go into the Temp GUI Map and delete objects that are already contained in the existing GUI Map.
4. Go into the GUI Map Editor and load the existing GUI Map.
5. Use the Expand button to display two panels on the window.

6. Using the Copy button, copy the values in TEMP into the existing GUI Map.
7. Save the GUI Map file on the network in J:\CorpQATD\TD_Daybr\GUImap (or substitute the folder for whichever project you are currently working on).
. While scripting and debugging your script it is a good idea to put a load command for the Web Function script at the top of your script and an unload at the bottom of your script. This code will automatically load the function library when you run your scripts, thus saving you the extra step when you try to debug your scripts. It is very important to remember to comment out the lines when you are done debugging/developing.
Below is a sample of code you can use.
#this is here for debugging only, when run in shell script will comment out.
#reload ("J:\\CorpQATD\\TD_web\\functions\\Webfunctions");
#this is here for debugging only, when run in shell script will comment out.
#unload ("J:\\CorpQATD\\TD_web\\functions\\Webfunctions");
. If you create a script by copying an existing one via Save As, make sure to go into Windows Explorer and delete the exp and res folders; otherwise you may carry along extra files you don't need.

WinRunner: Documentation:
. Each test procedure should have a manual test plan associated with it.
. Use Word to write the detailed test plan information.
. When test planning is completed, cut and paste (or translate) the test plans into Test Director.
. When creating folders, tests and test sets in Test Director, make sure every item has a description.
. When creating test scripts, cut and paste the default header script:

###########################################################
# Script Name:
# Description:
#
# Project:
# #########################################################
# Revision History:
# Date: Initials: Description of change:
#
###########################################################
# Explanation of what script does:
#
#
###########################################################
#this is here for debugging only, when run in shell script
#will comment out.
#reload ("J:\\CorpQATD\\TD_Daybr\\functions\\functions");
{put code here}

#this is here for debugging only, when run in shell script
#will comment out.
#unload ("J:\\CorpQATD\\TD_Daybr\\functions\\functions");
. Before any automation project is started, the tester will write an automation standard document. The automation standards document will describe the following:
. Automation Environment
. Installation requirements
. Client Machines Configurations
. Project Information
. WinRunner Option Settings
. Identify Servers, Database and Sub System tests will run against
. Naming Convention for project
. Specific Recording Standards that apply to individual project.
. While Scripting please comment major portions of the script using WinRunner's comment command "#" . (example: #This is a comment.)

WinRunner: Naming Conventions:
. Never call a test "tests"; WinRunner/TestDirector has problems distinguishing the test name "tests" from the tests directory.
. The project automation document will specify any naming conventions used for the individual projects.
. Test Script names should be in UPPER CASE.

WinRunner: When Running Scripts:
When you make your shell script, remember to run it the first time in update mode to create all the expected results, and then run it in verify mode. This is necessary because the expected results reside under each specific test script, and for shell scripts WinRunner creates subfolders for each test it runs. The expected results are not pulled from the individual test area to the shell script area, so the shell script needs to be run in update mode to re-create them. Another option is to use Windows Explorer to copy all your expected results folders to the directory containing the shell script.
WinRunner: How to test to see if the window is maximized
If you want to test to see if the window is maximized here is a sample of how to code it. This code would be best used in the start up script of any automation project.
#first grab the window handle for the browser main window
win_get_info("Browser Main Window ","handle",value);
#now test to see if window can be maximized
if(win_check_info("Browser Main Window ","maximizable",FALSE)!=E_OK)
{
#Now run maximize function and pass in the handle's value
if (is_maximized(value) == E_OK)
{
report_msg("Ran Max window test and maxed the window");
win_max("Browser Main Window ");
}
else
{
report_msg("Ran Max window test and did not have to do anything");
}
}
# end of script

WinRunner: How to determine which window you are on:
Each time a new browser window appears, you need to test to make sure the correct window is activated. To do this, use the following code:
#test to make sure on browser
win_check_info("Browser Main Window_1","enabled",1);
# check to make sure the menu says Menu Selection
menu = obj_check_gui("title_shad", "list5.ckl", "gui5", 5);
if (menu == 0)
report_msg("On Menu Window");
else
{
report_msg("not on right window");
texit;
}

WinRunner: How to test if a link exists and is valid
Use the web_link_valid command, then add some conditional logic to say whether or not the test passed.
# verify the link is valid
set_window("Default Menu", 1);
yes = web_link_valid("YOUR PRODUCT APPLICATION", valid);
if (yes == 0)
report_msg("link exists on page");
else
report_msg("no link");

WinRunner: How to select a link on a web page
In order to select a link, you need to use the web_link_click command.
win_activate ("Browser Main Window");
set_window ("Default Menu", 0);
web_link_click("YOUR PRODUCT APPLICATION");
web_sync(5);

WinRunner: How to check a property of an object on the web page
The most flexible and dynamic GUI checkpoint is the Multiple Objects checkpoint. This feature allows you to view the objects before selecting them, and then gives you the opportunity to select which properties of the object you want to test.
Steps to Verify contents of a list box:
1. Turn on recording in Context Sensitive mode (this will create GUI objects for you; if you use Insert Function instead, only the code will be created and you will then have to run in update mode to generate the GUI checks).
2. Select Create -> GUI Check -> Multiple Objects
3. Next the Create GUI check point window will come up
4. Press the Add button
5. Now move the cursor around the screen and select the object(s) you want to test.
6. When done selecting the objects, press the right mouse button.
7. Now you will be brought back to the Create GUI Checkpoint window. Listed in the window will be the object(s) you selected, and for each object a list of properties will be shown. Using the check boxes on the left, select which values you want to check. To view the content of the values, click on the < … > in the expected results.
8. By clicking on the < … >, the Edit Check window will come up, allowing you to edit the values.
9. When done press OK on all windows. Then the following code will be added to your script.
win_activate ("Browser Main Window_0");
win_check_gui("State Selection", "list1.ckl", "gui1", 1);
10. To modify or edit your GUI check point select Create -> Edit GUI checklist, and the Create GUI check point window will come back up.

WinRunner: Parameterization rules:
. Do not call the Excel sheet default.xls; name it the same as your script (or calling script).
. If you want to change the start row of the table, change table_Row = 1 on the line
for(table_Row = 1; table_Row <= table_RowCount; table_Row ++).
. The c:\~\WinRunner\dat\ddt_func.ini file lists which functions will work with data-driven testing. No web functions are listed in this file; if you want to data-drive a web function you will have to add it to the file.
. Any Excel file used for data-driven testing must be saved in Excel format.
. The Excel files can have only one worksheet and no formatting.
. The maximum character length for a number in a cell is 10 characters. Anything over becomes scientific notation and does not work. There are two workarounds to this problem: option one is to use concatenation, and option two is to use a ' in the field to make the value a string.
Workaround 1: Use the & (concatenation) operator to make your values larger. Here is a code sample: edit_set("1" & ddt_val(table,"SalesNumMax"));
Workaround 2: In the data table, instead of typing in the number as 12345678901, type it in as '12345678901. The ' in front of the number makes it a string (and the string character limit is 255).
. A field also cannot start with leading 0's. To work around this, use either of the methods shown above.
. When defining a new table in the DataDriver Wizard, the table will not be saved if an .xls extension is not specified. Workaround: when typing the name of a new table, give it an .xls extension.
. If you use the Parameterize Data function, DO NOT highlight the row; just put your cursor on the space you want to overlay and it will work. If you highlight the whole row, it comes back garbled.
Here are some steps that explain how to use the data-driven functionality in WinRunner:
1. Record your script and save it.
2. Select Tools -> Data Driven Wizard.
3. Press Next button
4. At Use a new or existing Excel file box: Navigate to data file area and select data file or enter name for new file.
5. On Assign table name variable: Enter the name to call table in script.
6. Check off Add statements and create a data-driven test
7. Check Parameterize test by line
8. Press Next button
9. On Test Script line to parameterize: either do not replace the line (if you don't want to) or select a new column (you can change the column name if you want).
10. Repeat for all lines that appear (this depends upon how many lines you have in the script).
11. When done press Finish
12. Your script will come back and it is all parameterized for you.

Here is the code:
1 table = "path to excel file";
2 rc = ddt_open(table, DDT_MODE_READ);
3 if (rc != E_OK && rc != E_FILE_OPEN)
4 pause("Cannot open table.");
5 ddt_get_row_count(table,table_RowCount);
6 for(table_Row = 1; table_Row <= table_RowCount; table_Row ++)
7 {
8 ddt_set_row(table,table_Row);
9 edit_set("Log",ddt_val(table,"Log"));
10 obj_type("Log","");
11 edit_set("password",ddt_val(table, "password"));
12 button_press("Login");
13 }
14 ddt_close(table);

Manual
1. Create an xls file (using WinRunner's Tools -> Data Table)
2. Make the Columns names be your variable name, make the rows be your data.
3. Save the Xls file
4. At the top of your script, type line 1 (taken from the example above); this sets the table name for you.
5. Type lines 2 - 5 exactly; this tells the script to open the table, handles errors in case it can't open the table, and gets the row count of the table.
6. Now move the cursor to the area you want to parameterize.
7. Type lines 6 - 8. If you do not want the script to start on row 1, change table_Row = (row to start on).
If you want to run numerous times, then create a loop here.
8. Now move the cursor to the line you want to parameterize. You parameterize by replacing the value in the edit_set statement with ddt_val(table, "variable").
Before:
edit_set("log", "STORE");
After parameterization will look like:
edit_set("Log", ddt_val(table,"Log"));
9. Repeat for all lines you want to parameterize.
10. Then add the closing }.
11. Add the last line (14) to close the table.
12. Repeat steps 7 - 11 for all areas you need to parameterize in code.

ddt_func.ini file:
The Data Wizard functionality uses the ddt_func.ini file to determine which functions you can parameterize. If you run the wizard and find that a certain function does not parameterize, the workaround is to add it to the ddt_func.ini file. Here are the steps:
1. Shut down the WinRunner application.
2. Open the ddt_func.ini file located in your \~\winrunner\dat directory.
3. Add the function you want and the parameter of the function you want the Data Wizard to change.
4. Save the file.
5. Bring up WinRunner again.
6. Your function should now work with the Data Wizard.
WinRunner: Use the following templates to assist in your scripting
As a default header for all scripts.
###########################################################
# Script Name:
# Description:
#
# Project:
# #########################################################
# Revision History:
# Date: Initials: Description of change:
#
###########################################################
# Explanation of what script does:
#
#
###########################################################
#this is here for debugging only, when run in shell script
#will comment out.
#reload ("J:\\CorpQATD\\TD_Daybr\\functions\\functions");
{put code here}

As a default script that resets WinRunner's environment to the correct option settings and loads the correct GUI maps.
###########################################################
# Script Name: Setenv
# Description: This script sets up the environment for the
# automated testing suite ( ).
# Project:
# #########################################################
# Revision History:
#
# Date: Initials: Description of change:
#
#
###########################################################
# Load the Gui map
#GUI_unload ("c:\\ \\ ");
# remember to use double slashes
# Load Functions
#font group
# Load any dll's
# set any option parameters for this particular script.
# Turns off error message if the test case fails.
setvar ("mismatch_break", "off");
# Turn off beeping
setvar ("beep", "off");
setvar ("sync_fail_beep", "off");
# Make sure context sensitive errors don't trigger failure
setvar ("cs_fail", "off");
# Sets time winrunner waits between executing statements
setvar ("cs_run_delay", "2000");
# Sets time winrunner waits to make sure window is stable
setvar ("delay_msec", "2000");
# Declare any Constant Declarations
# Declare any Variable Declarations


As a default script to use for calling all the scripts in your project.
###########################################################
# Script Name: OpenClose
# Description: This is the calling (main) script that runs ....
#
# Project:
# #########################################################
# Revision History:
#
# Date: Initials: Description of change:
#
###########################################################
status=0;
passed=0;
failed=1;
#Run the set up environment script
call "c:\\ "();
#Run the begin script
call "c:\\ "();
# Run
call "c:\\ "();
# Run end script
call "c:\\ "();
# Run the closeenv script
call "c:\\ "();

As a default script to reset your WinRunner environment to the generic default settings.
###########################################################
# Script Name: closeenv
# Description: This script resets the environment to the
# default settings.
# Project:
# #########################################################
# Revision History:
#
# Date: Initials: Description of change:
# 1
#
###########################################################
# Load the Gui map
#GUI_unload ("c:\\ \\ "); # remember to use double slashes
# Load Functions
#font group
# Load any dll's
# set any option parameters for this particular script.
# Turns off error message if the test case fails.
setvar ("mismatch_break", "off");
# Turn off beeping
setvar ("beep", "off");
setvar ("sync_fail_beep", "off");
# Make sure context sensitive errors don't trigger failure
setvar ("cs_fail", "off");
# Sets time winrunner waits between executing statements
setvar ("cs_run_delay", "2000");
# Sets time winrunner waits to make sure window is stable
setvar ("delay_msec", "2000");
# Declare any Constant Declarations
# Declare any Variable Declarations

WinRunner: The following code replaces WinDiff, which WinRunner uses to show differences in file comparison checks.
Written by Misha Verplak
INSTRUCTIONS
. Place these files into the WinRunner \arch directory:
wdiff_replace.exe
wdiff_replace.ini

. Rename wdiff.exe to wdiff_orig.exe

. Rename wdiff_replace.exe to wdiff.exe

. Edit wdiff_replace.ini to specify the new difference program

FILES
wdiff_replace.exe compiled program
wdiff_replace.ini settings
wdiff_replace.c C source code
wdiff_readme.txt this file :)

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <process.h>
#define TITLE_NAME TEXT("wdiff_replace")
#define CLASS_NAME TEXT("funny_class")
#define BC2_APP_PATH TEXT("C:\\Program Files\\Beyond Compare 2\\BC2.exe")
#define WDIFF_INI TEXT("wdiff.ini")
#define WDIFF_REPL_INI TEXT("wdiff_replace.ini")
#define EMPTY_TXT TEXT("[EMPTY]")
extern char** _argv;
int WINAPI
WinMain (HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
DWORD res;
TCHAR sLeftFile[MAX_PATH], sRightFile[MAX_PATH];
TCHAR sAllArgs[MAX_PATH*2];
TCHAR sPathDrive[MAX_PATH], sPathDir[MAX_PATH], sPathFile[MAX_PATH], sPathExt[MAX_PATH];
TCHAR sDiffReplIni[MAX_PATH];
TCHAR sDiffApp[MAX_PATH];
TCHAR sErrMsg[MAX_PATH*2];
TCHAR sArgN[10], sArg[MAX_PATH];
int n;
/* using argv[0] (current fullpath exe), extract directory name, expect to find ini there */
_splitpath (_argv[0], sPathDrive, sPathDir, sPathFile, sPathExt);
sprintf(sDiffReplIni, "%s%s%s", sPathDrive, sPathDir, WDIFF_REPL_INI);
/* read wdiff.ini for WinRunner's two files */
res = GetPrivateProfileString(TEXT("WDIFF"), TEXT("LeftFile"), EMPTY_TXT, sLeftFile, MAX_PATH, WDIFF_INI);
res = GetPrivateProfileString(TEXT("WDIFF"), TEXT("RightFile"), EMPTY_TXT, sRightFile, MAX_PATH, WDIFF_INI);
/* check if got the default string, this means didn't get a value */
if (!strcmp(sLeftFile, EMPTY_TXT) || !strcmp(sRightFile, EMPTY_TXT)) {
MessageBox (NULL, TEXT("Problem reading LeftFile or RightFile from wdiff.ini"), TITLE_NAME, MB_ICONERROR | MB_OK);
return(0);
}
/* read wdiff_replace.ini for file & path of replacement to wdiff */
res = GetPrivateProfileString(TEXT("Diff"), TEXT("diff_app"), EMPTY_TXT, sDiffApp, MAX_PATH, sDiffReplIni);
if (!strcmp(sDiffApp, EMPTY_TXT)) {
sprintf(sErrMsg, "Problem reading diff_app from:\n\n%s", sDiffReplIni);
MessageBox (NULL, sErrMsg, TITLE_NAME, MB_ICONERROR | MB_OK);
return(0);
}
/*
* read wdiff_replace.ini for args
* add the arguments together, with quotes, eg. "arg1" "arg2"
* also substitute for LeftFile and RightFile
*/
sprintf(sAllArgs, "");
n=1;
while(1) {
sprintf(sArgN, "arg%d", n);
res = GetPrivateProfileString(TEXT("Diff"), sArgN, EMPTY_TXT, sArg, MAX_PATH, sDiffReplIni);
if (!strcmp(sArg, EMPTY_TXT)) break;
if (!strcmp(sArg, TEXT("LeftFile"))) strcpy(sArg, sLeftFile);
if (!strcmp(sArg, TEXT("RightFile"))) strcpy(sArg, sRightFile);
if (n == 1) {
sprintf(sAllArgs, "\"%s\"", sArg);
}
else {
/* append to sAllArgs (sprintf into its own source buffer is undefined) */
strcat(sAllArgs, " \"");
strcat(sAllArgs, sArg);
strcat(sAllArgs, "\"");
}
n++;
}
/* Run alternative diff application with its args (could use spawn here?) */
res = execlp (sDiffApp, TEXT("dummy"), sAllArgs, NULL);
/* exec replaces current app in the same env, so only get to here if problems */
sprintf(sErrMsg, "Problem running diff_app:\n\n%s", sDiffApp);
MessageBox (NULL, sErrMsg, TITLE_NAME, MB_ICONERROR | MB_OK);
return 0;
}
WinRunner: The following script and DLL provide WinRunner with Perl-like regular expression search and match functions that can be used with any GUI property.
By Misha Verplak
# regular expressions from DLL
extern int re_match(string str, string re, out int m_pos, out int m_len, inout string detail <252>);
extern int re_search(string str, string re, out int m_pos, out int m_len, inout string detail <252>);
public function re_func_init()
{ auto re_func_dll, html_name;
# location of dll
re_func_dll = getenv("M_ROOT") "\\arch\\rexp.dll";
# to access exported functions
load_dll(re_func_dll);
# function generator declarations
generator_add_function("re_search","Search a string for a regular expression.\n"
"Returns 0 no match, 1 found match, gets position and length.\n"
"Submatch results in 'detail', use re_get_details() or re_get_match().",5,
"search_string","type_edit","\"string to search\"",
"regular_expression","type_edit","\"regexp\"", "Out position","type_edit","position",
"Out length","type_edit","len", "Out detail","type_edit","detail");
generator_add_category("regex");
generator_add_function_to_category("regex","re_search");
generator_set_default_function("regex","re_search");

generator_add_function("re_match","Match a regular expression to a whole string.\n"
"Returns 0 no match, 1 found match, gets position and length.\n"
"Submatch results in 'detail', use re_get_details() or re_get_match().",5,
"match_string","type_edit","\"string to match\"",
"regular_expression","type_edit","\"regexp\"", "Out position","type_edit","position",
"Out length","type_edit","len", "Out detail","type_edit","detail");
generator_add_function_to_category("regex","re_match");

generator_add_function("re_get_detail","Get the (sub)match position and length from the detail.\n"
"Typically used after re_search() or re_match()\nsubmatch can be 0 for whole match",6,
"detail","type_edit","detail", "submatch","type_edit","0", "Out nsubs","type_edit","nsubs",
"Out line","type_edit","line", "Out position","type_edit","position", "Out length","type_edit","len");
generator_add_function_to_category("regex","re_get_detail");

generator_add_function("re_get_match","Get the (sub)matched string from the detail.\n"
"Typically used after re_search() or re_match()\nsubmatch can be 0 for whole match",4,
"original_string","type_edit","orig_str", "detail","type_edit","detail",
"submatch","type_edit","0", "Out match_str","type_edit","match_str");
generator_add_function_to_category("regex","re_get_match");

generator_add_function("re_print_detail","Print the re match details to the debug window.\n"
"Typically used after re_search() or re_match().",1, "detail","type_edit","detail");
generator_add_function_to_category("regex","re_print_detail");

generator_add_function("matche","Replacement for the builtin match() function.",2,
"match_string","type_edit","\"string to match\"", "regular_expression","type_edit","\"regexp\"");
generator_add_function_to_category("string","matche");
generator_add_function_to_category("regex","matche");
generator_add_function("match","Do not use this function. Use matche() instead.",0);
}

# replacement for the builtin match() function
public function matche(search_string, regexp)
{
extern RSTART, RLENGTH;
auto rc, m_pos, m_len, detail;
if(re_search(search_string, regexp, m_pos, m_len, detail))
{
rc = m_pos+1;
RSTART = m_pos+1;
RLENGTH = m_len;
}
else
{
rc = 0;
RSTART = 0;
RLENGTH = 0;
}
return rc;
}

# internal function to decode detail from DLL
function _detail_decode(detail, position, nbytes)
{
auto v, v_hi;
v = int(ascii(substr(detail, position, 1))/2);
if(nbytes == 2)
{
v_hi = int(ascii(substr(detail, position+1, 1))/2);
v += v_hi*256;
}
return v;
}

# dump the detail to WinRunner's debug window
#
# structure of the detail string:
# (1 byte ) size of this detail, ie. number of submatches + 1
# (2 bytes) line number where match occurred, counting from 1
# [(2 bytes) position of (sub)match, 0-th submatch is whole match
# [(2 bytes) length of (sub)match
# [--------- repeated to a maximum of 50 submatches ---]
#
public function re_print_detail(detail)
{
auto size, line, i, pos, len, s;

size = _detail_decode(detail, 1, 1);
print "size " size;
if (size == 0) return E_OK;
print "submatches " (size-1);
line = _detail_decode(detail, 2, 2);
print "line " line;

for (s=0; s<size; s++)
{
pos = _detail_decode(detail, s*4+4, 2);
len = _detail_decode(detail, s*4+6, 2);
print "match " s ": pos " pos " len " len;
}
return E_OK;
}

# get the (sub)match position, length and line from the detail
public function re_get_detail(in detail, in submatch, out nsubs, out line, out position, out len)
{
auto size;

size = _detail_decode(detail, 1, 1);
nsubs = size-1;
if (submatch+1 > size) return E_OUT_OF_RANGE;

line = _detail_decode(detail, 2, 2);
position = _detail_decode(detail, submatch*4+4, 2);
len = _detail_decode(detail, submatch*4+6, 2);
return E_OK;
}

# get the (sub)matched string from the detail
public function re_get_match(in orig_str, in detail, in submatch, out match_str)
{
auto rc, nsubs, position, len, line;

match_str = "";

rc = re_get_detail(detail, submatch, nsubs, line, position, len);
if (rc != E_OK) return rc;

match_str = substr(orig_str, position+1, len);
return E_OK;
}

Q: Online Vs Batch Execution - Functions & Compiled Modules - Wild Card Characters
. Every time there is a change in an application object I need to change the object name and rerun the test script with the new object name. Any suggestions?
If there is only a minimal change in the application object, it is better to wildcard the object's properties in the GUI map so the existing logical name still matches.

Q:Coming up soon for the following Questions. If you know the answers, please email to us !
How do you call a function from external libraries (dll).
What is the purpose of load_dll?
How do you load and unload external libraries?
How do you declare external functions in TSL?
How do you call windows APIs, explain with an example?
What is the purpose of step, step into, step out, step to cursor commands for debugging your script?
How do you update your expected results?
How do you run your script with multiple sets of expected results?
How do you view and evaluate test results for various check points?
How do you view the results of file comparison?
What is the purpose of Wdiff utility?
What are batch tests and how do you create and run batch tests ?
How do you store and view batch test results?
How do you execute your tests from windows run command?
Explain different command line options?
What TSL function you will use to pause your script?
What is the purpose of setting a break point?
What is a watch list?
During debugging how do you monitor the value of the variables?
Describe the process of planning a test in WinRunner?
How do you record a new script?
Can you e-mail a WinRunner script?
How can a person run a previously saved WinRunner script?
How can you synchronize WinRunner scripts?
What is a GUI map? How does it work?
How can you verify application behavior?
Explain in detail how WinRunner checkpoints work. What are standard checkpoints?
What is a data-driven test? What are the benefits of a data-driven test?
How do you modify logical names on GUI map?
Why would you use batch testing under WinRunner? Explain advantages and disadvantages. Give an example of one project where you used batch testing.
How do you pass parameter values between tests?
Have you used WinRunner Recovery Manager?
What is an exception handler? Why would you define one in WinRunner?
We’re testing an application that returns a graphical object (i.e., a map) as a result of the user query. Explain how you’d teach WinRunner to recognize and analyze the returned object.
What is a TSL? Write a simple script in TSL.


Load Testing

1. What is load testing?
Load testing checks that the application works correctly under the load that results from a large number of simultaneous users and transactions, and determines whether it can handle peak usage periods.

2. What is Performance testing?
Timing for both read and update transactions should be gathered to determine whether system functions are being performed in an acceptable timeframe. This should be done standalone and then in a multi user environment to determine the effect of multiple transactions on the timing of a single transaction.

3. What is LoadRunner?
LoadRunner works by creating virtual users who take the place of real users operating client software, such as sending requests using the HTTP protocol to IIS or Apache web servers. Requests from many virtual user clients are generated by Load Generators in order to create a load on various servers under test.
These load generator agents are started and stopped by Mercury's Controller program. The Controller controls load test runs based on Scenarios invoking compiled Scripts and associated Run-time Settings.
Scripts are crafted using Mercury's Virtual User script Generator ("VuGen"). It generates C-language script code to be executed by virtual users by capturing network traffic between Internet application clients and servers.
With Java clients, VuGen captures calls by hooking within the client JVM. During runs, the status of each machine is monitored by the Controller.
At the end of each run, the Controller combines its monitoring logs with logs obtained from load generators, and makes them available to the "Analysis" program, which can then create run result reports and graphs for Microsoft Word, Crystal Reports, or an HTML webpage browser.

Each HTML report page generated by Analysis includes a link to results in a text file which Microsoft Excel can open to perform additional analysis.
Errors during each run are stored in a database file which can be read by Microsoft Access.

4. What are Virtual Users?
Unlike a WinRunner workstation which emulates a single user's use of a client, LoadRunner can emulate thousands of Virtual Users.
Load generators are controlled by VuGen scripts which issue non-GUI API calls using the same protocols as the client under test. But WinRunner GUI Vusers emulate keystrokes, mouse clicks, and other User Interface actions on the client being tested.
Only one GUI user can run from a machine unless LoadRunner Terminal Services Manager manages remote machines with Terminal Server Agent enabled and logged into a Terminal Services Client session.
During run-time, threaded Vusers share a common memory pool, so threading supports more Vusers per load generator.
The status of Vusers on the load generators moves to "Ready" after going through the init section of the script, then to "Running". Vusers are "Finished" with a passed or failed end status, and are automatically "Stopped" when the load generator is overloaded.
To use Web Services monitors for SOAP and XML, a separate license is needed, and Vusers require the Web Services add-in installed with Feature Pack (FP1).
No additional license is needed for standard web (HTTP) server monitors Apache, IIS, and Netscape.




5. Using Windows Remote Desktop Connection
To keep Windows Remote Desktop Connection sessions from timing out during a test, the Terminal Services on each machine should be configured as follows:
1. Click Start, point to Programs (or Control Panel), then Administrative Tools, and choose Terminal Services Configuration.
2. Open the Connections folder in the tree by clicking it once.
3. Right-click RDP-Tcp and select Properties.
4. Click the Sessions tab.
5. Make sure "Override user settings" is checked.
6. Set the Idle session limit to the maximum of 2 days instead of the default 2 hours.
7. Click Apply.
8. Click OK to confirm the message "Configuration changes have been made to the system registry; however, the user session now active on the RDP-Tcp connection will not be changed."
6. Explain the Load testing process (Version 7.2)?
Step 1: Planning the test. Here, we develop a clearly defined test plan to ensure the test scenarios we develop will accomplish load-testing objectives.
Step 2: Creating Vusers. Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by Vusers as a whole, and tasks measured as transactions.
Step 3: Creating the scenario. A scenario describes the events that occur during a testing session. It includes a list of machines, scripts, and Vusers that run during the scenario. We create scenarios using LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In manual scenarios, we define the number of Vusers, the load generator machines, and percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve. LoadRunner automatically builds a scenario for us.
Step 4: Running the scenario. We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously. Before the testing, we set the scenario configuration and scheduling. We can run the entire scenario, Vuser groups, or individual Vusers.
Step 5: Monitoring the scenario. We monitor scenario execution using the LoadRunner online runtime, transaction, system resource, Web resource, Web server resource, Web application server resource, database server resource, network delay, streaming media resource, firewall server resource, ERP server resource, and Java performance monitors.
Step 6: Analyzing test results. During scenario execution, LoadRunner records the performance of the application under different loads. We use LoadRunner’s graphs and reports to analyze the application’s performance.

7. When do you do load and performance Testing?
We perform load testing once we are done with interface (GUI) testing. Modern system architectures are large and complex. Whereas single-user testing focuses primarily on the functionality and user interface of a system component, application testing focuses on the performance and reliability of an entire system. For example, a typical application-testing scenario might depict 1000 users logging in simultaneously to a system. This gives rise to issues such as: what is the response time of the system, does it crash, will it work with different software applications and platforms, can it handle hundreds or thousands of users, etc. This is when we do load and performance testing.

8. What are the components of LoadRunner?
The components of LoadRunner are The Virtual User Generator, Controller, and the Agent process, LoadRunner Analysis and Monitoring, LoadRunner Books Online.


9. What Component of LoadRunner would you use to record a Script?
The Virtual User Generator (VuGen) component is used to record a script. It enables you to develop Vuser scripts for a variety of application types and communication protocols.
12. What Component of LoadRunner would you use to play Back the script in multi user mode?
The Controller component is used to playback the script in multi-user mode. This is done during a scenario run where a vuser script is executed by a number of vusers in a group.

13. What is a rendezvous point?
You insert rendezvous points into Vuser scripts to emulate heavy user load on the server. Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to arrive at a certain point, in order that they may simultaneously perform a task. For example, to emulate peak load on the bank server, you can insert a rendezvous point instructing 100 Vusers to deposit cash into their accounts at the same time.

14. What is a scenario?
A scenario defines the events that occur during each testing session. For example, a scenario defines and controls the number of users to emulate, the actions to be performed, and the machines on which the virtual users run their emulations.
15. Explain the recording mode for web Vuser script?
We use VuGen to develop a Vuser script by recording a user performing typical business processes on a client application. VuGen creates the script by recording the activity between the client and the server. For example, in web based applications, VuGen monitors the client end of the database and traces all the requests sent to, and received from, the database server. We use VuGen to: Monitor the communication between the application and the server; Generate the required function calls; and Insert the generated function calls into a Vuser script.

16. Why do you create parameters?
Parameters are like script variables. They are used to vary input to the server and to emulate real users: different sets of data are sent to the server each time the script is run. Parameters also better simulate the usage model for more accurate testing from the Controller, since one script can emulate many different users on the system.

17. What is correlation? Explain the difference between automatic correlation and manual correlation?
Correlation is used to obtain data that is unique for each run of the script, such as values generated by the server in response to earlier requests. Correlating these values avoids errors arising from duplicate or stale values and also optimizes the code (avoiding nested queries). Automatic correlation is where we set rules for correlation, which can be application-server specific; recorded values are then replaced with data created by these rules. In manual correlation, we scan for the value we want to correlate and use create correlation to correlate it ourselves.

18. How do you find out where correlation is required?
Two ways: First, we can scan for correlations and see the list of values that can be correlated; from this we pick a value to be correlated. Secondly, we can record two scripts and compare them; we can look at the difference file to find the values that need to be correlated.

19. Where do you set automatic correlation options?
Automatic correlation for web can be set in the Recording Options, Correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for a database can be done by using the show output window, scanning for correlations, picking the correlate query tab, and choosing which query value we want to correlate. If we know the specific value to be correlated, we just do a create correlation for the value and specify how the value is to be created.

20. What is a function to capture dynamic values in the web Vuser script?
The web_reg_save_param function saves dynamic data information to a parameter.

21. VuGen Recording and Scripting?
Recording produces LoadRunner script code in ANSI C syntax, represented by icons in icon view until you click Script View.
22. What are Scenarios?
Scenarios encapsulate the Vuser Groups and scripts to be executed on load generators at run-time.
Manual scenarios can distribute the total number of Vusers among scripts based on the analyst-specified percentage (evenly among load generators).
Goal Oriented scenarios are automatically created based on a specified transaction response time or number of hits/transactions-per-second (TPS). Test analysts specify the % of Target among scripts.

23. What are the typical settings for each type of run scenario ?
24. When do you disable log in Virtual User Generator, When do you choose standard and extended logs?
Once we debug our script and verify that it is functional, we can enable logging for errors only. When we add a script to a scenario, logging is automatically disabled. Standard Log Option: When you select Standard log, it creates a standard log of functions and messages sent during script execution, to use for debugging. Disable this option for large load testing scenarios. Extended Log Option: Select Extended log to create an extended log, including warnings and other messages. Disable this option for large load testing scenarios as well. We can specify which additional information should be added to the extended log using the Extended log options.

25. How do you debug a LoadRunner script?
VuGen contains two options to help debug Vuser scripts-the Run Step by Step command and breakpoints. The Debug settings in the Options dialog box allow us to determine the extent of the trace to be performed during scenario execution. The debug information is written to the Output window. We can manually set the message class within your script using the lr_set_debug_message function. This is useful if we want to receive debug information about a small section of the script only.

26. How do you write user defined functions in LR?
Before we create a user-defined function, we need to create the external library (DLL) containing the function. We add this library to the VuGen bin directory. Once the library is added, we can assign the user-defined function as a parameter. The function should have the following format: __declspec (dllexport) char* (char*, char*)

27. What are the changes you can make in run-time settings?
The Run-Time Settings that we make are:
1. Pacing - contains the iteration count.
2. Log - under this we have Disable logging, Standard log, and Extended log.
3. Think Time - two options: Ignore think time and Replay think time.
4. General - under the General tab we can set the Vusers to run as a process or as multithreaded, and whether to define each step as a transaction.
28. Where do you set Iteration for Vuser testing?
We set Iterations in the Run Time Settings of the VuGen. The navigation for this is Run time settings, Pacing tab, set number of iterations.

29. How do you perform functional testing under load?
Functionality under load can be tested by running several Vusers concurrently. By increasing the amount of Vusers, we can determine how much load the server can sustain.

30. Using network drive mappings
If several load generators need to access the same physical files, rather than having to remember to copy the files each time they change, each load generator can reference a common folder using a mapped drive. But since drive mappings are associated with a specific user:
1. Log on to the load generator as the user the load generator service will use.
2. Open Windows Explorer and under Tools select Map Network Drive and create a drive. It saves time and hassle to have consistent drive letters across load generators, so some organizations reserve certain drive letters for specific locations.
3. Open the LoadRunner service within Services (accessed from Control Panel, Administrative Tools).
4. Click the "Log On" tab.
5. Specify the username and password the load generator service will use. (A dot appears in front of the username if the userid is for the local domain.)
6. Stop and start the service again.
31. What is Ramp up? How do you set this?
This option is used to gradually increase the amount of Vusers/load on the server. An initial value is set and a value to wait between intervals can be specified. To set Ramp Up, go to ‘Scenario Scheduling Options’

32. What is the advantage of running the Vuser as thread?
VuGen provides the facility to use multithreading. This enables more Vusers to be run per generator. If the Vuser is run as a process, the same driver program is loaded into memory for each Vuser, thus taking up a large amount of memory. This limits the number of Vusers that can be run on a single generator. If the Vuser is run as a thread, only one instance of the driver program is loaded into memory for the given number of Vusers (say 100). Each thread shares the memory of the parent driver program, thus enabling more Vusers to be run per generator.

33. If you want to stop the execution of your script on error, how do you do that?
The lr_abort function aborts the execution of a Vuser script. It instructs the Vuser to stop executing the Actions section, execute the vuser_end section and end the execution. This function is useful when you need to manually abort a script execution as a result of a specific error condition. When you end a script using this function, the Vuser is assigned the status "Stopped". For this to take effect, we have to first uncheck the Continue on error option in Run-Time Settings.

34. What is the relation between Response Time and Throughput?
The Throughput graph shows the amount of data in bytes that the Vusers received from the server in a second. When we compare this with the transaction response time, we will notice that as throughput decreased, the response time also decreased. Similarly, the peak throughput and highest response time would occur approximately at the same time.
35. Explain the Configuration of your systems?
The configuration of our systems refers to that of the client machines on which we run the Vusers. The configuration of any client machine includes its hardware settings, memory, operating system, software applications, development tools, etc. This system component configuration should match with the overall system configuration that would include the network infrastructure, the web server, the database server, and any other components that go with this larger system so as to achieve the load testing objectives.

36. How do you identify the performance bottlenecks?
Performance bottlenecks can be detected by using monitors. These monitors might be application server monitors, web server monitors, database server monitors, and network monitors. They help in finding the problem areas in our scenario that cause increased response time. The measurements made are usually response time, throughput, hits/sec, network delay graphs, etc.

37. If web server, database and Network are all fine where could be the problem?
The problem could be in the system itself, in the application server, or in the code written for the application.

38. How did you find web server related issues?
Using web resource monitors, we can measure the performance of web servers. With these monitors we can analyze the throughput on the web server, the number of hits per second that occurred during the scenario, the number of HTTP responses per second, and the number of downloaded pages per second.

39. How did you find database related issues?
By running the database monitor with the help of the Database Resource graphs, we can find database-related issues. For example, you can specify the resources you want to measure before running the Controller, and then view the database-related issues in the resulting graphs.

40. What is the difference between Overlay graph and Correlate graph?
Overlay Graph: overlays the content of two graphs that share a common x-axis. The left y-axis on the merged graph shows the current graph's values, and the right y-axis shows the values of the graph that was merged. Correlate Graph: plots the y-axes of two graphs against each other. The active graph's y-axis becomes the merged graph's x-axis, and the y-axis of the graph that was merged becomes the merged graph's y-axis.

41. How did you plan the Load? What are the Criteria?
A load test is planned to decide the number of users, the kind of machines we are going to use, and where they are run. It is based on two important documents: the Task Distribution Diagram and the Transaction Profile. The Task Distribution Diagram gives us information on the number of users for a particular transaction and the time of the load; the peak usage and off-usage periods are decided from this diagram. The Transaction Profile gives us information about the transaction names and their priority levels with regard to the scenario we are designing.

42. What does vuser_init action contain?
The vuser_init action contains procedures to log in to a server.

43. What does vuser_end action contain?
The vuser_end section contains log-off procedures.

44. What is think time? How do you change the threshold?
Think time is the time that a real user waits between actions. For example, when a user receives data from a server, the user may wait several seconds to review the data before responding; this delay is known as the think time. Changing the threshold: the threshold level is the level below which recorded think time will be ignored. The default value is five (5) seconds. We can change the think time threshold in the Recording options of VuGen.
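As a sketch, a pause recorded above the threshold shows up in the script as an lr_think_time call. The stub below stands in for the LoadRunner runtime (it records the pause instead of sleeping) purely so the fragment compiles outside VuGen; the transaction name is invented:

```c
/* Stub standing in for the LoadRunner runtime (illustration only):
   records the requested pause instead of actually sleeping. */
static double total_think_time = 0.0;
static void lr_think_time(double secs) { total_think_time += secs; }

static void view_search_results(void)
{
    /* ... submit the search request ... */
    lr_think_time(8);   /* recorded: an 8 s pause is above the 5 s default threshold */
    /* a 3 s pause would have been dropped at the default threshold */
}
```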

45. What is the difference between standard log and extended log?
The standard log sends a subset of the functions and messages sent during script execution to a log; the subset depends on the Vuser type. The extended log sends detailed script execution messages to the output log. It is mainly used during debugging, when we want information about: parameter substitution, data returned by the server, and advanced trace.

46. What is lr_debug_message ?
The lr_debug_message function sends a debug message to the output log when the specified message class is set.

47. What is lr_output_message ?
The lr_output_message function sends notifications to the Controller Output window and the Vuser log file.

48. What is lr_error_message ?
The lr_error_message function sends an error message to the LoadRunner Output window.
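A sketch contrasting the three message functions. In a real script the runtime routes these to the Output window and Vuser log; the stubs below merely capture the last message so the fragment is self-contained, and the message texts are invented:

```c
#include <stdio.h>
#include <string.h>

/* Stubs standing in for the LoadRunner runtime (illustration only):
   capture the last message instead of writing to the Output window. */
static char last_msg[128];
static unsigned active_msg_class = 0;   /* debug message class disabled by default */

static void lr_output_message(const char *msg) { snprintf(last_msg, sizeof last_msg, "OUT: %s", msg); }
static void lr_error_message(const char *msg)  { snprintf(last_msg, sizeof last_msg, "ERR: %s", msg); }
static void lr_debug_message(unsigned msg_class, const char *msg)
{
    /* only emitted when the specified message class is currently set */
    if (msg_class & active_msg_class)
        snprintf(last_msg, sizeof last_msg, "DBG: %s", msg);
}
```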

49. What is lrd_stmt?
The lrd_stmt function associates a character string (usually a SQL statement) with a cursor. This function sets a SQL statement to be processed.
50. What is lrd_fetch?
The lrd_fetch function fetches the next row from the result set.

51. What is Throughput?
Throughput is the amount of data, in bytes, that the Vusers receive from the server per second. If the throughput scales upward as time progresses and the number of Vusers increases, this indicates that the bandwidth is sufficient. If the graph were to remain relatively flat as the number of Vusers increased, it would be reasonable to conclude that the bandwidth is constraining the volume of data delivered.


52. Types of Goals in Goal-Oriented Scenario
Load Runner provides you with five different types of goals in a goal oriented scenario:
1. The number of concurrent Vusers
2. The number of hits per second
3. The number of transactions per second
4. The number of pages per minute
5. The transaction response time that you want your scenario to achieve

Analysis Scenario (Bottlenecks): In the Running Vusers graph correlated with the Response Time graph, you can see that as the number of Vusers increases, the average response time of the check-itinerary transaction gradually increases; in other words, the average response time steadily increases as the load increases. At 56 Vusers, there is a sudden, sharp increase in the average response time. We say that the test broke the server; that point is the mean time between failures (MTBF). The response time clearly began to degrade when there were more than 56 Vusers running simultaneously.

53. What is correlation? Explain the difference between automatic correlation and manual correlation?
Correlation is used to capture data that is unique for each run of the script, such as values generated by the server in response to earlier requests. Correlating these values avoids errors arising out of duplicate or stale values and also optimizes the code (avoiding nested queries). Automatic correlation is where we set rules for correlation; it can be application-server specific, and values are replaced with data created by these rules. In manual correlation, we scan for the value we want to correlate and use Create Correlation to correlate it.

54. Where do you set automatic correlation options?
Automatic correlation for the web can be set in the Recording Options, on the Correlation tab. Here we can enable correlation for the entire script and choose either issue online messages or offline actions, where we can define rules for that correlation. Automatic correlation for a database can be done by using the Show Output window, scanning for correlations, picking the Correlate Query tab, and choosing which query value we want to correlate. If we know the specific value to be correlated, we just use Create Correlation for the value and specify how the value is to be created.

55. What is the function to capture dynamic values in a web Vuser script?
The web_reg_save_param function saves dynamic data information to a parameter.
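The capture idea can be sketched as follows. The boundary strings, parameter name, and canned server response are all invented for illustration, and the stub emulates only the boundary-matching behaviour; in a real script, web_reg_save_param registers the capture for the next request, and the captured value is then reused as {SessionId} in later requests:

```c
#include <string.h>

/* Stub emulating just enough of web_reg_save_param to show the idea:
   capture the text between a left and right boundary in the server
   response. Here the boundaries are applied to a canned response. */
#define LAST ((const char *)0)
static char captured[64];

static void web_reg_save_param(const char *param_name, const char *lb,
                               const char *rb, const char *last)
{
    const char *response = "<input name=\"sessionId\" value=\"ABC123\">";
    const char *start = strstr(response, lb + 3);          /* skip the "LB=" prefix */
    if (start) {
        start += strlen(lb + 3);
        const char *end = strstr(start, rb + 3);           /* skip the "RB=" prefix */
        if (end) {
            size_t n = (size_t)(end - start);
            if (n < sizeof captured) { memcpy(captured, start, n); captured[n] = '\0'; }
        }
    }
    (void)param_name; (void)last;
}
```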


QuickTest Professional (QTP)

What is QTP ?
QuickTest is a graphical-interface record-and-playback automation tool. It is able to work with any web, Java, or Windows client application. QuickTest enables you to test standard web objects and ActiveX controls. In addition to these environments, QuickTest Professional also enables you to test Java applets and applications and multimedia objects in applications, as well as standard Windows applications, Visual Basic 6 applications, and .NET framework applications.
QTP is Mercury Interactive's functional testing tool. QTP stands for QuickTest Professional.
Mercury QuickTest Professional: provides the industry's best solution for functional test and regression test automation - addressing every major software application and environment. This next-generation automated testing solution deploys the concept of Keyword-driven testing to radically simplify test creation and maintenance. Unique to QuickTest Professional’s Keyword-driven approach, test automation experts have full access to the underlying test and object properties, via an integrated scripting and debugging environment that is round-trip synchronized with the Keyword View.

QuickTest Professional enables you to test standard Windows applications, Web objects, ActiveX controls, and Visual Basic applications. You can also acquire additional QuickTest add-ins for a number of special environments (such as Java, Oracle, SAP Solutions, .NET Windows and Web Forms, Siebel, PeopleSoft, Web services, and terminal emulator applications).

What’s the basic concept of QTP?
QTP is based on two concept-
* Recording
* Playback

Which scripting language used by QTP?
QTP uses VBScript.

How many types of recording facility are available in QTP?
QTP provides three types of recording methods-
* Context Recording (Normal)
* Analog Recording
* Low Level Recording

How many types of Parameters are available in QTP?
QTP provides three types of Parameter-
* Method Argument
* Data Driven
* Dynamic

What’s the QTP testing process?
QTP testing process consist of seven steps-
* Preparing to record
* Recording
* Enhancing your script
* Debugging
* Run
* Analyze
* Report Defects

How to Start recording using QTP?
Choose Test > Record or click the Record button.
When the Record and Run Settings dialog box opens:
1. In the Web tab, select Open the following browser when a record or run session begins.
2. In the Windows Applications tab, confirm that Record and run on these applications (opened on session start) is selected, and that there are no applications listed.

How to insert a checkpoint on an image to check the enabled property in QTP?

Answer1:
As you are saying that all the images are push buttons, you can check the enabled/disabled property. If you are not able to find that property, then go to the object repository for that object and click Add/Remove to add the available properties to that object. If you treat it as an image, then you need to check the visible/invisible property instead, since there are no enable or disable properties for the image object.

Answer2:
The Image Checkpoint does not have any property to verify the enable/disable property.
One thing you need to check is:
* Find out from the developer if he is showing different images for activating/deactivating, i.e. a greyed-out image. That is the only way a developer can show deactivate/activate if he is using an "image". Otherwise he might be using a button displayed with an image heads-up.
* If it is a button displayed with an image heads-up, you would need to use the object properties as a checkpoint.
How to Save your test using QTP?
Select File > Save or click the Save button. The Save dialog box opens to the Tests folder.
Create a folder which you want to save to, select it, and click Open.
Type your test name in the File name field.
Confirm that Save Active Screen files is selected.
Click Save. Your test name is displayed in the title bar of the main QuickTest window.

How to Run a Test using QTP?
1 Start QuickTest and open your test.

If QuickTest is not already open, choose Start > Programs > QuickTest Professional > QuickTest Professional.

. If the Welcome window opens, click Open Existing.
. If QuickTest opens without displaying the Welcome window, choose File > Open or click the Open button.
In the Open Test dialog box, locate and select your test, then click Open.

2 Confirm that all images are saved to the test results.
QuickTest allows you to determine when to save images to the test results.

Choose Tools > Options and select the Run tab. In the Save step screen capture to test results option, select Always.

Click OK to close the Options dialog box.

3 Start running your test.

Click Run or choose Test > Run. The Run dialog box opens.
Select New run results folder. Accept the default results folder name.
Click OK to close the Run dialog box.

How to open a new test using QTP?
1. If QuickTest is not currently open, choose Start > Programs > QuickTest Professional > QuickTest Professional. If the Welcome window opens, click Blank Test. Otherwise, choose File > New, or click the New button. A blank test opens.
2. If QuickTest is already open, check which add-ins are loaded by selecting Help > About QuickTest Professional. If the Web Add-in is not loaded, you must exit and restart QuickTest. When the Add-in Manager opens, select the Web Add-in and clear all other add-ins. Choose File > New, or click the New button. A blank test opens.

How to do load testing for a web-based application?
1. Record a scenario of the web-based application in QTP.
2. Make 100 copies of that scenario and run the test (the scenario runs 100 times).
3. This puts the corresponding load of the application on the server.
4. The basic logic of running the copy 100 times is to create the same scenario as if 100 users were working.

What is the extension of script and object repository files?
Shared object repository: .tsr; per-action object repository: .mtr; script: .mts; Excel data table: Default.xls.

How to suppress warnings from the Test Results page?
In the Test Results viewer, "Tools > Filters > Warnings" must be unchecked.

When we try to use the test run option "Run from Step", the browser does not launch automatically. Why?
This is the default behaviour.
What's Checkpoints for QTP?
A checkpoint verifies that expected information is displayed in your application while the test is running.

QuickTest Professional offers the following types of checkpoints:
Standard Checkpoint: checks the values of an object's properties. Example: check that a radio button is selected.
Image Checkpoint: checks the property values of an image. Example: check that the image source file is correct.
Table Checkpoint: checks information in a table. Example: check that the value in a table cell is correct.
Page Checkpoint: checks the characteristics of a Web page. Example: check how long a Web page takes to load, or whether a Web page contains broken links.
Text / Text Area Checkpoint: checks that a text string is displayed in the appropriate place in a Web page or application window. Example: check whether the expected text string is displayed in the expected location.
Bitmap Checkpoint: checks an area of a Web page or application after capturing it as a bitmap. Example: check that a Web page (or any portion of it) is displayed as expected.
Database Checkpoint: checks the contents of databases accessed by an application or Web site. Example: check that the value returned by a database query is correct.
Accessibility Checkpoint: identifies areas of a Web site to check for Section 508 compliance. Example: check whether the images on a Web page include ALT properties, as required by the W3C Web Content Accessibility Guidelines.
XML Checkpoint: checks the data content of XML documents. Note: XML file checkpoints are used to check a specified XML file; XML application checkpoints are used to check an XML document within a Web page.

How to add a standard checkpoint in your test ?
1. Start QuickTest and open your test.
In the Open Test dialog box, locate and select your test, then click Open.

2. Save the test as Checkpoint.
Select File > Save As. Save the test as Checkpoint.

3. Confirm that the Active Screen option is enabled.
If you do not see the Active Screen at the bottom of the QuickTest window, click the Active Screen button, or choose View > Active Screen.

4. Locate the page where you want to add a standard checkpoint.

5 Create a standard checkpoint.
In the Active Screen, right-click element in your application and choose Insert Standard Checkpoint.

6 Save the test.

How to add a page checkpoint to your test?
The page checkpoint checks that the number of links and images in the page when you run your test is the same as when you recorded your test.
1 Locate the page where you want to add a page checkpoint.
2 Create a page checkpoint.
Right-click anywhere in the Active Screen, and choose Insert Standard Checkpoint. The Object Selection - Checkpoint Properties dialog box opens. Note that this dialog box may include different elements, depending on where you click in the Active Screen.
3 Save the test.
How is run-time data (parameterization) handled in QTP?
You can enter test data into the Data Table, an integrated spreadsheet with the full functionality of Excel, to manipulate data sets and create multiple test iterations, without programming, to expand test case coverage. Data can be typed in or imported from databases, spreadsheets, or text files.

What is keyword view and Expert view in QTP?
With QuickTest's Keyword-Driven approach, test automation experts have full access to the underlying test and object properties, via an integrated scripting and debugging environment that is round-trip synchronized with the Keyword View. Advanced testers can view and edit their tests in the Expert View, which reveals the underlying industry-standard VBScript that QuickTest Professional automatically generates. Any changes made in the Expert View are automatically synchronized with the Keyword View.



How QTP recognizes Objects in AUT?
QuickTest stores the definitions for application objects in a file called the Object Repository. As you record your test, QuickTest will add an entry for each item you interact with. Each Object Repository entry will be identified by a logical name (determined automatically by QuickTest), and will contain a set of properties (type, name, etc) that uniquely identify each object. Each line in the QuickTest script will contain a reference to the object that you interacted with, a call to the appropriate method (set, click, check) and any parameters for that method (such as the value for a call to the set method). The references to objects in the script will all be identified by the logical name, rather than any physical, descriptive properties.

What are the types of Object Repositorys in QTP?
QuickTest has two types of object repositories for storing object information: shared object repositories and action object repositories. You can choose which type of object repository you want to use as the default type for new tests, and you can change the default as necessary for each new test. The object repository per-action mode is the default setting. In this mode, QuickTest automatically creates an object repository file for each action in your test so that you can create and run tests without creating, choosing, or modifying object repository files. However, if you do modify values in an action object repository, your changes do not have any effect on other actions. Therefore, if the same test object exists in more than one action and you modify an object's property values in one action, you may need to make the same change in every action (and any test) containing the object.

If I give some thousand tests to execute in 2 days, what do you do?
Ad-hoc testing is done. It covers at least the basic functionalities to verify that the system is working fine.
Is QTP "Unicode" compatible?
QTP 6.5 is not, but QTP 8.0 was expected to be Unicode-compatible by the end of December 2004.

How to "Turn Off" QTP results after running a Script?
Goto "Tools > Options > Run Tab" and Deselect "View results when run session ends". But this supresses only the result window, but a og will be created and can viewed manulaly which cannot be restricted from getting created.

Explain about the Test Fusion Report of QTP ?
Once a tester has run a test, a TestFusion report displays all aspects of the test run: a high-level results overview, an expandable Tree View of the test specifying exactly where application failures occurred, the test data used, application screen shots for every step that highlight any discrepancies, and detailed explanations of each checkpoint pass and failure. By combining TestFusion reports with QuickTest Professional, you can share reports across an entire QA and development team.

To which environments does QTP supports ?
QuickTest Professional supports functional testing of all enterprise environments, including Windows, Web, .NET, Java/J2EE, SAP, Siebel, Oracle, PeopleSoft, Visual Basic, ActiveX, mainframe terminal emulators, and Web services.

What's QuickTest Window?
Before you begin creating tests, you should familiarize yourself with the main QuickTest window.
The QuickTest window contains the following key elements:
. Title bar—Displays the name of the currently open test.
. Menu bar—Displays menus of QuickTest commands.
. File toolbar—Contains buttons to assist you in managing your test.
. Testing toolbar—Contains buttons to assist you in the testing process.
. Debug toolbar—Contains buttons to assist you in debugging tests.
. Test pane—Contains the Keyword View and Expert View tabs.
. Active Screen—Provides a snapshot of your application as it appeared when you performed a certain step during the recording session.
. Data Table—Assists you in parameterizing your test.
. Debug Viewer pane—Assists you in debugging your test. The Debug Viewer pane contains the Watch Expressions, Variables, and Command tabs. (The Debug Viewer pane is not displayed when you open QuickTest for the first time. You can display it by choosing View > Debug Viewer.)

How does QuickTest record a step?
Suppose that, while recording, you click a Find button. QuickTest identifies the object that you clicked as a WebButton test object. It creates a WebButton object with the name Find and records a set of identifying properties and values for the Find WebButton. It also records that you performed a Click method on the WebButton. QuickTest displays your step in the Keyword View, and in the Expert View it appears like this:
Browser("Mercury Interactive").Page("Mercury Interactive").WebButton("Find").Click
How to analyze test results using QTP?
When QuickTest finishes running the test, the Test Results window opens.
Initially, the Test Results window contains two panes for displaying the key elements of your test run.
. The left pane displays the results tree, an icon-based view of the steps that were performed while the test was running. The results tree is organized according to the Web pages visited during the test run and can be expanded (+) to view each step. The steps performed during the test run are represented by icons in the tree. You can instruct QuickTest to run a test or action more than once using different sets of data in each run. Each test run is called an iteration, and each iteration is numbered. (The test you ran had only one iteration.)
. The right pane displays the test results details. The iteration summary table indicates which iterations passed and which failed. The status summary table indicates the number of checkpoints or reports that passed, failed, and raised warnings during the test.
1 View the test results for a specific step.
In the results tree, expand (+) Test Recording Summary > Recording Iteration 1 (Row 1) > Action1 Summary > your application > your test name .
The Test Results window now contains three panes, displaying:
. the results tree, with one step highlighted
. the test results details of the highlighted step
. the Active Screen, showing a screen capture of the Web page on which the step was performed.

When you click a page in the results tree, QuickTest displays the corresponding page in the application view. When you click a step (an operation performed on an object) in the results tree, the corresponding object is highlighted in the application view. In this case, the Departing From text box is highlighted.

Explain the check points in QTP?
A checkpoint verifies that expected information is displayed in an application while the test is running. You can add eight types of checkpoints to your test for standard web objects using QTP.

• A page checkpoint checks the characteristics of a web page.
• A text checkpoint checks that a text string is displayed in the appropriate place in the application.
• An object checkpoint (standard) checks the values of an object in the application.
• An image checkpoint checks the values of an image in the application.
• A table checkpoint checks information within a table in the application.
• An accessibility checkpoint checks the web page for Section 508 compliance.
• An XML checkpoint checks the contents of individual XML data files or XML documents that are part of your web application.

In how many ways can we add checkpoints to an application using QTP?
We can add checkpoints while recording the application, or we can add them after recording is completed using the Active Screen. (Note: for the second option, the Active Screen must have been enabled while recording.)
Explain in brief about the QTP Automation Object Model.
Essentially all configuration and run functionality provided via the QuickTest interface is in some way represented in the QuickTest automation object model via objects, methods, and properties. Although a one-on-one comparison cannot always be made, most dialog boxes in QuickTest have a corresponding automation object, most options in dialog boxes can be set and/or retrieved using the corresponding object property, and most menu commands and other operations have corresponding automation methods. You can use the objects, methods, and properties exposed by the QuickTest automation object model, along with standard programming elements such as loops and conditional statements to design your program.

Discuss QTP Environment.
The QuickTest Pro environment provides a graphical interface and ActiveScreen technologies, a testing process for creating test scripts, the ability to relate manual test requirements to automated verification features, and data-driving, which lets one test script run against several sets of data.

Explain the concept of how QTP identifies object.
During recording, QTP looks at each object and stores it as a test object. For each test object, QuickTest learns a set of default properties called mandatory properties and checks whether these properties are enough to uniquely identify the object. During the test run, QuickTest searches for the run-time objects that match the test objects it learned while recording.
Object Repositories types, Which & when to use?
Deciding Which Object Repository Mode to Choose
To choose the default object repository mode and the appropriate object repository mode for each test, you need to understand the differences between the two modes.
In general, the object repository per-action mode is easiest to use when you are creating simple record and run tests, especially under the following conditions:

You have only one, or very few, tests that correspond to a given application, interface, or set of objects.
You do not expect to frequently modify test object properties.
You generally create single-action tests.
Conversely, the shared object repository mode is generally the preferred mode when:

You have several tests that test elements of the same application, interface, or set of objects.
You expect the object properties in your application to change from time to time and/or you regularly need to update or modify test object properties.
You often work with multi-action tests and regularly use the Insert Copy of Action and Insert Call to Action options.

Can we Script any test case with out having Object repository? or Using Object Repository is a must?
No, using the object repository is not a must. You can script without the object repository by knowing the window handles and by spying and recognizing the objects' logical names and available properties.

How to execute a WinRunner Script in QTP?
(a) TSLTest.RunTest TestPath, TestSet [, Parameters] --> used in QTP 6.0 for backward compatibility. TestSet: the test set within Quality Center in which test runs are stored. Note that this argument is relevant only when working with a test in a Quality Center project; when the test is not saved in Quality Center, this parameter is ignored.
e.g.: TSLTest.RunTest "D:\test1", ""
(b) TSLTest.RunTestEx TestPath, RunMinimized, CloseApp [, Parameters]
e.g.: TSLTest.RunTestEx "C:\WinRunner\Tests\basic_flight", TRUE, FALSE, "MyValue"
CloseApp: indicates whether to close the WinRunner application when the WinRunner test run ends. Parameters: up to 15 WinRunner function arguments.
Why divide a test into three action calls?
When you create a new test, it contains a call to one action. By dividing your tests into calls to multiple actions, you can design more modular and efficient tests.

How To clear the AutoComplete?
1 In your Internet Explorer’s menu bar, choose Tools > Internet Options > Content tab.
2 Click AutoComplete in the Personal information area. The AutoComplete Settings dialog box opens.
3 In the Use AutoComplete for area, clear the User names and passwords on forms option.
4 Click OK to save your changes and close the AutoComplete Settings dialog box, then click OK again to close the Internet Options dialog box.

What is Object Spy in QTP?
Using the Object Spy, you can view the properties of any object in an open application. You use the Object Spy pointer to point to an object. The Object Spy displays the selected object’s hierarchy tree and its properties and values in the Properties tab of the Object Spy dialog box.
What is the Diff between Image check-point and Bit map Check point?
Image checkpoints enable you to check the properties of a Web image. You can check an area of a Web page or application as a bitmap. While creating a test or component, you specify the area you want to check by selecting an object. You can check an entire object or any area within an object. QuickTest captures the specified object as a bitmap, and inserts a checkpoint in the test or component. You can also choose to save only the selected area of the object with your test or component in order to save disk Space For example, suppose you have a Web site that can display a map of a city the user specifies. The map has control keys for zooming. You can record the new map that is displayed after one click on the control key that zooms in the map. Using the bitmap checkpoint, you can check that the map zooms in correctly.
You can create bitmap checkpoints for all supported testing environments (as long as the appropriate add-ins are loaded).
Note: The results of bitmap checkpoints may be affected by factors such as operating system, screen resolution, and color settings.

How many ways we can parameterize data in QTP ?
There are four types of parameters:
Test, action or component parameters enable you to use values passed from your test or component, or values from other actions in your test.
Data Table parameters enable you to create a data-driven test (or action) that runs several times using the data you supply. In each repetition, or iteration, QuickTest uses a different value from the Data Table.
Environment variable parameters enable you to use variable values from other sources during the run session. These may be values you supply, or values that QuickTest generates for you based on conditions and options you choose.
Random number parameters enable you to insert random numbers as values in your test or component. For example, to check how your application handles small and large ticket orders, you can have QuickTest generate a random number and insert it in a number of tickets edit field.

How do you do batch testing in WinRunner, and is it possible in QTP? If so, explain.
Batch testing in WinRunner is nothing but running the whole test set by selecting "Run Testset" from the Execution Grid. The same is possible with QTP: if our test cases are automated, then by selecting "Run Testset" all the test scripts can be executed. In this process the scripts get executed one by one, keeping all the remaining scripts in "Waiting" mode.



How to use the Object spy in QTP 8.0 version?
There are two ways to spy objects in QTP:
1) Through the File toolbar: click the last toolbar button (an icon showing a person with a hat).
2) Through the Object Repository dialog: click the "Object Spy..." button.

In the Object Spy dialog, click the button showing the hand symbol. The pointer then changes into a hand symbol, and you point at the object to spy its state. If the object is not visible, or its window is minimized, hold the Ctrl key, activate the required window, and then release the Ctrl key.

What are the file extensions of the code file and the object repository file in QTP?
File extensions:
-- Per-test object repository: filename.mtr
-- Shared object repository: filename.tsr
-- Code file: Script.mts

How to create an output value using QTP?
1 Start QuickTest and open the Parameter test.
2 Save the test as Output.
3 Confirm that the Active Screen option is enabled.
4 Select the text you want to use as an output value.
5 Set the output value settings.
6 Modify the table checkpoint.
7 Save the test.
What does it mean when a checkpoint is in red colour? What do you do?
A red colour indicates failure. We then analyze the cause of the failure: whether it is a script issue, an environment issue, or an application issue.

What do you call the TestDirector-TestLab window?
The "Execution Grid". It is the place from which we run all manual and automated scripts.

How do you create new test sets in TD?
Login to TD.
Click on the "Test Lab" tab.
Select the desired folder under which the test set should be created (test sets can be grouped per module). Click the "New Test Set" icon (or press Ctrl+N) to create a test set.

Explain the concept of the object repository, and how QTP recognises objects.
Object Repository: displays a tree of all objects in the current component, the current action, or the entire test (depending on the object repository mode you selected). We can view or modify the test object description of any test object in the repository, or add new objects to it. QuickTest learns the default property values of a recorded object and determines which test object class it fits. If that is not enough to identify the object uniquely, it adds assistive properties, one by one, to the description until it has compiled a unique description. If no assistive properties are available, it adds a special ordinal identifier, such as the object's location on the page or in the source code.

What are the properties you would use for identifying a browser & page when using descriptive programming ?
"name" would be another property apart from "title" that we can use. OR We can also use the property "micClass". ex: Browser("micClass:=browser").page("micClass:=page")....

I want to open a Notepad window without recording a test, and I do not want to use the SystemUtil.Run command either. How do I do this?
You can still have Notepad open without recording and without SystemUtil, simply by entering the path of notepad.exe (i.e., where it is stored on the system) in the "Windows Applications" tab of the "Record and Run Settings" dialog.

If an application's name changes frequently, i.e. it has one name while recording and another while running, how does QTP handle this?
For example, the window is named "Window1" while recording and "Window2" while running. QTP handles such situations using regular expressions in the object's identification properties.

How does QTP identify an object in the application?
QTP identifies an object in the application by its logical name and class. For example, an edit box could be identified by Logical Name: PSOPTIONS_BSE_TIME20 and Class: WebEdit.

If we use batch testing, the result is shown for the last action only. How can I get the result for every action?
You can click on the action's icon in the tree view of the Test Results window to view the result of every action.
WinRunner Compared to QuickTest Pro

Environment Coverage Comparison:

Common environments shared by both WinRunner and QuickTest Pro:

Web-Related Environments: IE, Netscape, AOL
Java Environments: JDK, Java Foundation Classes, AWT, Symantec Visual Café
ActiveX Controls
ERP/CRM: Oracle (Jinitiator, 11i, NCA)
Custom Client/Server: Windows, C++/C, Visual Basic
Operating Systems: Windows 98, 2000, NT, ME, XP
Legacy: 3270 and 5250 emulators, VT100



WinRunner Only Environments:
Custom Client/Server: PowerBuilder, Forte, Delphi, Centura, Stingray, SmallTalk
ERP/CRM: Baan, PeopleSoft Windows, Siebel 5 and 6 GUI clients, Oracle GUI Forms
QuickTest Pro Only Environments:
ERP/CRM: SAP, Siebel 7.x, PeopleSoft 8.x
.Net: WinForms, WebForms, .Net controls
Web Services: XML, HTTP, WSDL, SOAP, J2EE, .Net
Multimedia: RealAudio/Video, Flash
Feature Comparison:

Common features found in both WinRunner and QuickTest Pro:
Record/Replay
ODBC & Excel Connectivity
Code Editor & Debugger
Recovery Manager
Shared Object Repository
Rapid Object Import
Numerous Checkpoints
Analog Mode
Script & Function Libraries

WinRunner Only Features:
Function Generator
Database Integration
Run Wizard
TSL
MDI


QuickTest Pro Only Features:
ActiveScreen
TestGuard
Tree View
ScriptFusion
Data Table
VBScript
Function Generator* (coming in v7.0)
Run Wizard* (coming in v7.0)

How to import data from an ".xls" file into the Data Table during runtime?
DataTable.Import "...XLS file name..."
DataTable.ImportSheet(FileName, SheetSource, SheetDest)
For example: DataTable.ImportSheet "C:\name.xls", 1, "name"

How to export the data present in the Data Table to an ".xls" file?
DataTable.Export "....xls file name..."

What is the syntax to call one script from another, and the syntax to call one action from another?
RunAction ActionName, [IterationMode, IterationRange, Parameters]
An action must be marked reusable before it can be called this way from another action or test.
IterationRange (String, optional): indicates the rows for which action iterations will be performed. Valid only when IterationMode is rngIterations. Enter a row range (e.g. "1-7"), or enter rngAll to run iterations on all rows.
If the action called by the RunAction statement includes an ExitAction statement, the RunAction statement can return the value of the ExitAction's RetVal argument.
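
Following the syntax above, a minimal sketch of such a call (the action name "Login" is an assumption):

```vbscript
' Run the reusable action "Login" for rows 1-3 of its Data Table sheet.
' The return value comes from the ExitAction statement inside the action.
retVal = RunAction("Login", rngIterations, "1-3")
MsgBox "Login action returned: " & retVal
```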

How to export QTP results to an ".xls" file?
By default, QTP stores the results as an "XML" file and displays them in the Test Results window.

How can exception handling be done using QTP?
It can be done using the Recovery Scenario Manager, which provides a wizard that guides you through the process of defining a recovery scenario. The wizard can be accessed in QTP via Tools > Recovery Scenario Manager.

How many types of Actions are there in QTP?
There are three kinds of actions:
-- Non-reusable action: an action that can be called only in the test with which it is stored, and can be called only once.
-- Reusable action: an action that can be called multiple times by the test with which it is stored (the local test) as well as by other tests.
-- External action: a reusable action stored with another test. External actions are read-only in the calling test, but you can choose to use a local, editable copy of the Data Table information for the external action.

Analyzing checkpoint results
Standard checkpoint: by adding standard checkpoints to your tests or components, you can compare the expected values of object properties to the object's current values during a run session. If the results do not match, the checkpoint fails.


How to handle Run-time errors?
On Error Resume Next: causes execution to continue with the statement immediately following the one that caused the run-time error, or with the statement immediately following the most recent call out of the procedure containing the On Error Resume Next statement. This allows execution to continue despite a run-time error, so you can build the error-handling routine inline within the procedure.
The "Err" object can then be inspected, e.g.: MsgBox "Error no: " & Err.Number & " " & Err.Description & " " & Err.Source & " " & Err.HelpContext

What are the different scripting languages you could use when working with QTP ?
QTP test scripts themselves are written in VBScript. (QTP can, however, work with applications built with technologies such as Visual Basic, Java, HTML/JavaScript, and XML.)

How to handle dynamic objects in QTP?
QTP has a unique feature called Smart Identification. QTP generally identifies an object by matching its test object and run-time object properties, and may therefore fail to recognise dynamic objects whose properties change during run time. Hence it has the option of enabling Smart Identification, which can identify objects even if their properties change during the run. From the documentation:
If QuickTest is unable to find any object that matches the recorded object description, or if it finds more than one object that fits the description, then QuickTest ignores the recorded description and uses the Smart Identification mechanism to try to identify the object. While the Smart Identification mechanism is more complex, it is more flexible; if configured logically, a Smart Identification definition can help QuickTest identify an object, if it is present, even when the recorded description fails.

The Smart Identification mechanism uses two types of properties:
Base filter properties: the most fundamental properties of a particular test object class, whose values cannot be changed without changing the essence of the original object. For example, if a Web link's tag were changed from <A> to any other value, you could no longer call it the same object.
Optional filter properties: other properties that can help identify objects of a particular class. They are unlikely to change on a regular basis, but can be ignored if they are no longer applicable.

Explain the keyword CreateObject with an example.
CreateObject creates and returns a reference to an Automation object.
Syntax: CreateObject(servername.typename [, location])
Arguments:
servername: Required. The name of the application providing the object.
typename: Required. The type or class of the object to create.
location: Optional. The name of the network server where the object is to be created.
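
For example, creating a FileSystemObject (a standard Windows Automation object) from a script:

```vbscript
' Create a FileSystemObject and use it to check for a folder.
Set fso = CreateObject("Scripting.FileSystemObject")
If fso.FolderExists("C:\Temp") Then
    MsgBox "C:\Temp exists"
End If
Set fso = Nothing   ' release the reference
```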

What is the Run-Time Data Table? Where can I find and view this table?
- In QTP there is a Data Table that is used at run time.
- To view it, select View > Data Table.
- It is essentially an Excel file stored in the folder of the test; by default its name is Default.xls.

How is scripting done? Are there any inbuilt functions in QTP? How do you handle script issues?
Yes, there is built-in functionality called the "Step Generator" (Insert > Step > Step Generator, or F7), which generates script steps as you enter the appropriate operations.



What is the difference between a checkpoint and an output value?
A checkpoint compares a captured value with an expected value and reports a pass/fail result. An output value is simply a value captured during the test run and written at run time to a specified location for later use, e.g. a Data Table location (Global sheet / local sheet).
Types of properties that QuickTest learns while recording?
(a) Mandatory (b) Assistive . In addition to recording the mandatory and assistive properties specified in the Object Identification dialog box, QuickTest can also record a backup ordinal identifier for each test object. The ordinal identifier assigns the object a numerical value that indicates its order relative to other objects with an otherwise identical description (objects that have the same values for all properties specified in the mandatory and assistive property lists). This ordered value enables QuickTest to create a unique description when the mandatory and assistive properties are not sufficient to do so.

Differences between QTP & Winrunner?
(a) QTP uses object-based VBScript scripting, whereas WinRunner uses TSL (C-based) scripting.
(b) QTP supports automation of ".NET" applications, which is not available in WinRunner.
(c) QTP has "Active Screen" support, which captures the application; this is not available in WR.
(d) QTP has a "Data Table" to store script values and variables, which WR does not have.
(e) Using a "point and click" capability, you can easily interface with objects and their definitions and create checkpoints after having recorded a script, without having to navigate back to that location in your application as you have to with WinRunner. This greatly speeds up script development.

A few basic questions on commonly used Excel VBA functions.
Common operations are:
-- colouring a cell
-- auto-fitting a cell
-- setting navigation from a link in one cell to another
-- saving the workbook

How does Parameterization and Data-Driving relate to each other in QTP?
To data-drive, we have to parameterize, i.e. replace a constant value with a parameter, so that in each iteration (cycle) it takes a value supplied in the run-time Data Table. Only through parameterization can we drive a transaction (action) with different sets of data; running the script with the same set of data several times is neither advisable nor useful.

What is the difference between Call to Action and Copy Action?
Call to Action: changes made in the called action are reflected in the original action (from which the script is called). With Copy Action, changes made to the copied script do not affect the original action.

How to verify the cursor focus of a certain field?
Use the "focus" property with the GetROProperty method.

Are there any limitations to XML checkpoints?
Mercury has determined that 1.4 MB is the maximum size of an XML file that QTP 6.5 can handle.

How to make arguments optional in a function?
This is not possible, as VBScript does not support optional arguments. Instead, you can pass a blank string and substitute a default value inside the function when the argument is not required.
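
A minimal sketch of this workaround in plain VBScript (the function and default value are invented for illustration):

```vbscript
' Emulate an optional argument: the caller passes "" to mean "use the default".
Function Greet(name)
    If name = "" Then name = "Guest"   ' substitute the default
    Greet = "Hello, " & name
End Function

MsgBox Greet("")       ' displays "Hello, Guest"
MsgBox Greet("Admin")  ' displays "Hello, Admin"
```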


How to add a text checkpoint to your test to check whether 'welcome' is displayed on your welcome page?
1 Locate the page where you want to add a text checkpoint.

2 Create a text checkpoint.
In the Active Screen, under your page highlight the text welcome. Right-click the highlighted text and choose Insert Text Checkpoint. The Text Checkpoint Properties dialog box opens.
When Checked Text appears in the list box, the Constant field displays the text string you highlighted. This is the text QuickTest looks for when running the test.
Click OK to accept the default settings in this dialog box.
QuickTest adds the text checkpoint to your test. It is displayed in the Keyword View as a checkpoint operation on your welcome page

3 Save the test.

How to run and analyze a test with checkpoints?
1 Expand the test and review your test.
Choose View > Expand All or use the * shortcut key on your number keypad.

2 Start running your test.
Click Run or choose Test > Run. The Run dialog box opens. Ensure that New run results folder is selected. Accept the default results folder name. Click OK. When the test run is completed, the Test Results window opens.

3 View the test results.
When QuickTest finishes running the test, the Test Results window opens. The test result should be Passed, indicating that all checkpoints passed. If one or more checkpoints had failed, the test result would be Failed.

4 View the results of the page checkpoint.
In the Details pane, you can review the details of the page checkpoint, which lists the items checked.

5 View the results of the table checkpoint.
In the Details pane, you can review the details of the table checkpoint. You can also review the values of the table cells (cell values that were checked are displayed in black; cell values that were not checked are displayed in gray).

6 View the results of the standard checkpoint.
In the Details pane, you can review the details of the standard checkpoint, which lists the properties that were checked and their values. The checkpoint passed because the actual values of the checked properties match the expected values.

7 View the results of the text checkpoint.
In the Details pane, you can review the details of the text checkpoint. The checkpoint passed because the actual text matches the expected text.

8 Close the Test Results window. Choose File > Exit.

How to define a Data Table parameter in QTP?
1 Start QuickTest and open the Checkpoint test.
2 Save the test as Parameter.
3 Confirm that the Active Screen option is enabled.
4 Confirm that the Data Table option is enabled.
5 Select the text to parameterize.
6 Set the parameterization properties.

How to add a runtime parameter to a datasheet?
DataTable.LocalSheet
The following example uses the LocalSheet property to return the local sheet of the run-time Data Table in order to add a parameter (column) to it.
MyParam=DataTable.LocalSheet.AddParameter("Time", "5:45")

How to change the run-time value of a property for an object?
SetTOProperty changes the property values used to identify an object during the test run. Only properties that are included in the test object description can be set
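
A small sketch under assumed object names (a "Login" dialog with an "OK" button):

```vbscript
' Change the "text" property used to identify the button for this run only,
' then read the test-object property back with GetTOProperty.
Dialog("Login").WinButton("OK").SetTOProperty "text", "Submit"
MsgBox Dialog("Login").WinButton("OK").GetTOProperty("text")
```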

How to retrieve a property of an object?
Using the GetROProperty method.

How to open an application during scripting?
SystemUtil is the object used to open and close applications and processes during a run session.
(a) A SystemUtil.Run statement is automatically added to your test when you run an application from the Start menu or the Run dialog box while recording a test.
E.g.: SystemUtil.Run "Notepad.exe"
SystemUtil.CloseDescendentProcesses closes all the processes opened by QTP.

How to convert a string to an integer?
CInt() --- a conversion function available in VBScript.
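
For example:

```vbscript
Dim s, n
s = "42"
If IsNumeric(s) Then   ' guard against non-numeric input
    n = CInt(s)        ' n is now the Integer 42
End If
MsgBox n + 1           ' displays 43
```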

Inserting a call to an action does not import all the columns of the global sheet's Data Table. Why?
Inserting a call to an action only imports the columns of the action being called.

Differentiate the two object repository types of QTP.
The object repository is used to store all the objects of the application being tested. There are two types of object repository: per-action and shared. A shared repository is one centralised repository for all the tests, whereas with per-action repositories a separate repository is created for each test's actions.

What are the differences, and the best practical application of each?
Per-action: one object repository is created for each action. Shared: one object repository is used by the entire application.

Explain what the difference between Shared Repository and Per_Action Repository
Shared repository: the entire application uses one object repository, similar to the Global GUI Map file in WinRunner. Per-action: one object repository is created for each action, like a GUI map file per test in WinRunner.

Have you ever written a compiled module? If yes tell me about some of the functions that you wrote.
Yes. I used functions for capturing dynamic data during runtime, e.g. functions for capturing the desktop, the browser, and pages.

What projects have you used WinRunner on? Tell me about some of the challenges that arose and how you handled them.
Problems: WR sometimes fails to identify an object in the GUI. If there is a non-standard window object, WR cannot recognize it; we use the GUI Spy to handle such situations.
Can you do more than just capture and playback?
I have dynamically captured objects during runtime, with no recording, no playback, and no use of the repository at all.
It was done through Windows scripting, using the DOM (Document Object Model) of the windows.
Summary: QuickTest Pro
QuickTest Professional provides an interactive, visual environment for test development.
Here is the description from the Mercury Interactive “How it Works” section of the QuickTest Pro web page:
Mercury QuickTest Professional™ allows even novice testers to be productive in minutes. You can create a test script by simply pressing a Record button and using an application to perform a typical business process. Each step in the business process is automatically documented with a plain-English sentence and screen shot. Users can easily modify, remove, or rearrange test steps in the Keyword View.

QuickTest Professional can automatically introduce checkpoints to verify application properties and functionality, for example to validate output or check link validity. For each step in the Keyword View, there is an ActiveScreen showing exactly how the application under test looked at that step. You can also add several types of checkpoints for any object to verify that components behave as expected, simply by clicking on that object in the ActiveScreen.

You can then enter test data into the Data Table, an integrated spreadsheet with the full functionality of Excel, to manipulate data sets and create multiple test iterations, without programming, to expand test case coverage. Data can be typed in or imported from databases, spreadsheets, or text files.

Advanced testers can view and edit their test scripts in the Expert View, which reveals the underlying industry-standard VBScript that QuickTest Professional automatically generates. Any changes made in the Expert View are automatically synchronized with the Keyword View.

Once a tester has run a script, a TestFusion report displays all aspects of the test run: a high-level results overview, an expandable Tree View of the test script specifying exactly where application failures occurred, the test data used, application screen shots for every step that highlight any discrepancies, and detailed explanations of each checkpoint pass and failure. By combining TestFusion reports with Mercury Quality Management, you can share reports across an entire QA and development team.

QuickTest Professional also facilitates the update process. As an application under test changes, such as when a “Login” button is renamed “Sign In,” you can make one update to the Shared Object Repository, and the update will propagate to all scripts that reference this object. You can publish test scripts to Mercury Quality Management, enabling other QA team members to reuse your test scripts, eliminating duplicative work.

QuickTest Professional supports functional testing of all popular environments, including Windows, Web, .Net, Visual Basic, ActiveX, Java, SAP, Siebel, Oracle, PeopleSoft, terminal emulators, and Web services.

- QuickTest Pro “How it Works” webpage from Mercury:
http://www.mercury.com/us/products/quality-center/functional-testing/quicktest-professional/works.html

We like QuickTest Pro and now prefer implementing it over WinRunner. When you get into advance testing scenarios, QuickTest Pro has more options and they are easier to implement compared to WinRunner in our opinion.

Due to the similarities in concept and features, an experienced WinRunner user can easily convert to QuickTest Pro and quickly become an efficient Test Automation Engineer!

We recommend that existing customers begin all new development with QuickTest Pro and use the built-in feature of calling WinRunner scripts from QuickTest Pro for all existing WinRunner scripts that they already have. As older scripts require updates and time permits, we recommend replacing them with QuickTest Pro scripts. Eventually you will be able to convert your test script library with all QuickTest Pro scripts.
Pros:
* Will be getting the initial focus on development of all new features and supported technologies.
* Ease of use.
* Simple interface.
* Presents the test case as a business workflow to the tester (simpler to understand).
* Numerous features.
* Uses a real programming language (Microsoft’s VBScript) with numerous resources available.
* QuickTest Pro is significantly easier for a non-technical person to adapt to and create working test cases, compared to WinRunner.
* Data table integration better and easier to use than WinRunner.
* Test run iterations/data-driving a test is easier and better implemented with QuickTest.
* Parameterization easier than WinRunner.
* Can enhance existing QuickTest scripts without the “Application Under Test” being available; by using the ActiveScreen.
* Can create and implement the Microsoft Object Model (Outlook objects, ADO objects, FileSystem objects, supports DOM, WSH, etc.).
* Better object identification mechanism.
* Numerous existing functions available for implementation – both from within QuickTest Pro and VBScript.
* QTP supports .NET development environment (currently WinRunner 7.5 does not).
* XML support (currently WinRunner 7.5 does not).
* The Test Report is more robust in QuickTest compared to WinRunner.
* Integrates with TestDirector and WinRunner (can kick off WinRunner scripts from QuickTest).
Cons:
* Currently there are fewer resources (consultants and expertise) available, because QTP is a newer product on the market and demand is greater than supply.
* Must know VBScript in order to program at all.
* Must be able to program in VBScript in order to implement the real advance testing tasks and to handle very dynamic situations.
* Need training to implement properly.
* The Object Repository (OR) and “testing environment” (paths, folders, function libraries, OR) can be difficult to understand and implement initially.

Explain the terms “TEST” and “ Business Component”
Test—A collection of steps organized into one or more actions, which are used to verify that your application performs as expected. By default, each test begins with a single action. Business Component—A collection of steps representing a single task in your application. Business components (also known as components) are combined into specific scenarios to build business process tests in Mercury Quality Center with Business Process Testing. A component does not contain actions; you add steps directly to a component.

What is check point?
A checkpoint checks specific values or characteristics of a page, object, or text string and enables you to identify whether or not your Web site or application is functioning correctly. A checkpoint compares the value of an element captured in your test when you recorded your test, with the value of the same element captured during the test run.


What do you mean by iteration ?
Each run session that uses a different set of parameterized data is called an iteration.

What is output value?
An output value is a value retrieved during the run session and entered into your Data Table or saved as a variable or a parameter. Each run session that uses a different set of parameterized data is called an iteration.

How many add-ins come with QTP by default?
Three add-ins come with QTP by default: (1) ActiveX (2) Visual Basic (3) Web

What are the views available in QTP?
(1) Keyword View (2) Expert View

What is Active Screen?
The Active Screen provides a snapshot of your application as it appeared when you performed a certain step during a recording session.
What are the key elements of a QTP window?
The QuickTest window contains the following key elements:
- QuickTest title bar: displays the name of the currently open test or component.
- Menu bar: displays menus of QuickTest commands.
- File toolbar: contains buttons to assist you in managing your test or component.
- Testing toolbar: contains buttons to assist you in the testing process.
- Debug toolbar: contains buttons to assist you in debugging your test or component (not displayed by default).
- Action toolbar: contains buttons and a list of actions, enabling you to view the details of an individual action or the entire test flow.
- Test pane: contains the Keyword View and Expert View tabs.
- Active Screen: provides a snapshot of your application as it appeared when you performed a certain step during the recording session.
- Data Table: assists you in parameterizing your test or component. For a test, the Data Table contains the Global tab and a tab for each action; for a component, it contains a single tab.
- Debug Viewer pane: assists you in debugging your test or component; it contains the Watch Expressions, Variables, and Command tabs (not displayed by default).
- Status bar: displays the status of the QuickTest application.

How many tabs are available in Debug Viewer Pane?
The Debug Viewer pane contains three tabs to assist you in debugging your test or component: Watch Expressions, Variables, and Command.
- Watch Expressions: enables you to view the current value of any variable or other VBScript expression.
- Variables: enables you to view the current value of all variables that have been recognized up to the last step performed in the run session.
- Command: enables you to execute a line of script in order to set or modify the current value of a variable or VBScript object in your test or component. When you continue the run session, QuickTest uses the new value that was set in the command.

How many toolbars QTP has?
QuickTest has four built-in toolbars:
1. The File toolbar
2. The Testing toolbar
3. The Debug toolbar
4. The Action toolbar
The Action toolbar is available in the Keyword View and contains options that enable you to view all actions in the test flow or to view the details of a selected action. It is not available for components.



Explain the terms Password Encoder, Remote Agent, Test Batch Runner, Test Results Deletion tool?
Password Encoder—enables you to encode passwords. You can use the resulting strings as method arguments or Data Table parameter values.
Remote Agent—determines how QuickTest behaves when a test or component is run by a remote application such as Quality Center.
Test Batch Runner—enables you to set up QuickTest to run several tests in succession.
Test Results Deletion Tool—enables you to delete unwanted or obsolete results from your system according to specific criteria that you define.

Explain the terms Test Object Model, Test Object & Run-Time object?
The test object model is a large set of object types or classes that QuickTest uses to represent the objects in your application. Each test object class has a list of properties that can uniquely identify objects of that class and a set of relevant methods that QuickTest can record for it. A test object is an object that QuickTest creates in the test or component to represent the actual object in your application. QuickTest stores information about the object that will help it identify and check the object during the run session. A run-time object is the actual object in your Web site or application on which methods are performed during the run session.

What are assistive properties or an ordinal identifier?
When mandatory property values are not sufficient to uniquely identify the object within its parent object, QuickTest adds some assistive properties and/or an ordinal identifier to create a unique description. Note: You can retrieve or modify property values of the test object during the run session by adding GetTOProperty and SetTOProperty statements in the Keyword View or Expert View. You can retrieve property values of the runtime object during the run session by adding GetROProperty statements. If the available test object methods or properties for an object do not provide the functionality you need, you can access the internal methods and properties of any run-time object using the Object property. You can also use the attribute object property to identify Web objects in your application according to user-defined properties.
What is object Repository ?Explain different types of Object Repositories?
QuickTest identifies objects in your application based on a set of test object properties. It stores the object data it learns in the object repository. You can save your objects either in a shared object repository or in an action object repository. In shared object repository mode, you can use one object repository file for multiple tests or components. In object repository per-action mode, QuickTest automatically creates an object repository file for each action in your test. Object repository per-action mode is not available for components.

How you can enhance your test?
There are a variety of options to enhance your test:
(1) You can add checkpoints. A checkpoint is a step in your test that compares the values of the specified property during a test run with the values stored for the same test object property within the test. This enables you to identify whether or not your Web site or application is functioning correctly.
(2) You can parameterize your test to replace fixed values with values from an external source during your test run. The values can come from a Data Table, environment variables you define, or values that QuickTest generates during the test run.
(3) You can retrieve values from your test and store them in the Data Table as output values. You can subsequently use these values as an input parameter in your test. This enables you to use data retrieved during a test in other parts of the test.
(4) You can divide your test into actions to streamline the testing process of your Web site or application.
(5) You can use special QuickTest options to enhance your test with programming statements. The Step Generator guides you step-by-step through the process of adding recordable and non-recordable methods to your test. You can also synchronize your test to ensure that your application is ready for QuickTest to perform the next step, and you can measure the amount of time it takes for your application to perform steps in a test by defining and measuring transactions.
(6) You can also manually enter standard VBScript statements, as well as statements using QuickTest test objects and methods, in the Expert View.

Explain the different recording modes.
Normal Recording - QuickTest's default mode records the objects in your application and the operations performed on them. It takes full advantage of QuickTest's test object model, recognizing the objects in your application regardless of their location on the screen.
Analog Recording - enables you to record the exact mouse and keyboard operations you perform in relation to either the screen or the application window. In this mode, QuickTest records and tracks every movement of the mouse as you drag it around a screen or window. This mode is useful for recording operations that cannot be recorded at the level of an object, for example, recording a signature produced by dragging the mouse. Note: You cannot edit analog recording steps from within QuickTest.
Low-Level Recording - enables you to record on any object in your application, whether or not QuickTest recognizes the specific object or the specific operation. This mode records at the object level and records all run-time objects as Window or WinObject test objects. Use low-level recording for recording tests in an environment or on an object not recognized by QuickTest. You can also use low-level recording if the exact coordinates of the object are important for your test. Note: Steps recorded using low-level mode may not run correctly on all objects.

Explain the different types of checkpoints.
There are 10 types of checkpoints you can insert:
Standard Checkpoint - checks the property value of an object in your application or Web page. It can check a variety of objects such as buttons, radio buttons, combo boxes, lists, etc.
Image Checkpoint - checks the value of an image in your application or Web page. For example, you can check that a selected image's source file is correct.
Bitmap Checkpoint - checks an area of your Web page or application as a bitmap.
Table Checkpoint - checks information within a table. For example, suppose your application or Web site contains a table listing all available flights from New York to San Francisco. You can add a table checkpoint to check that the time of the first flight in the table is correct.
Text Checkpoint - checks that a text string is displayed in the appropriate place in your application or on a Web page.
Text Area Checkpoint - checks that a text string is displayed within a defined area in a Windows application, according to specified criteria.
Accessibility Checkpoint - identifies areas of your Web site that may not conform to the World Wide Web Consortium (W3C) Web Content Accessibility Guidelines.
Page Checkpoint - checks the characteristics of a Web page. For example, you can check how long a Web page takes to load or whether it contains broken links.
Database Checkpoint - checks the contents of a database accessed by your application.
XML Checkpoint - checks the data content of XML documents in XML files or in Web pages and frames.

What is a parameter?
A parameter is a variable that is assigned a value from an external data source or generator. If you wish to parameterize the same value in several steps in your test or component, you may want to consider using the Data Driver rather than adding parameters manually.
How many types of parameters are there?
There are four types of parameters:
1. Test, action or component parameters - enable you to use values passed from your test or component, or values from other actions in your test.
2. Data Table parameters - enable you to create a data-driven test (or action) that runs several times using the data you supply. In each repetition, or iteration, QuickTest uses a different value from the Data Table.
3. Environment variable parameters - enable you to use variable values from other sources during the run session. These may be values you supply, or values that QuickTest generates for you based on conditions and options you choose.
4. Random number parameters - enable you to insert random numbers as values in your test or component.
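As a sketch, the four parameter types map to the following statements in the Expert View (the names "Password", "UserName" and "AppURL" are hypothetical):

```vbscript
' 1. Action parameter: reads an input parameter defined on the current action
pwd = Parameter("Password")

' 2. Data Table parameter: a different value is used per iteration
userName = DataTable("UserName", dtGlobalSheet)

' 3. Environment variable parameter (assumes "AppURL" is defined in Test Settings)
SystemUtil.Run "iexplore.exe", Environment("AppURL")

' 4. Random number parameter: a random value between 1 and 100
orderQty = RandomNumber(1, 100)
```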

How is run-time data (parameterization) handled in QTP?
You can enter test data into the Data Table, an integrated spreadsheet with the full functionality of Excel, to manipulate data sets and create multiple test iterations, without programming, to expand test case coverage. Data can be typed in or imported from databases, spreadsheets, or text files.
Explain the Test Fusion report of QTP.
Once a tester has run a test, a TestFusion report displays all aspects of the test run: a high-level results overview, an expandable Tree View of the test specifying exactly where application failures occurred, the test data used, application screen shots for every step that highlight any discrepancies, and detailed explanations of each checkpoint pass and failure. By combining TestFusion reports with QuickTest Professional, you can share reports across an entire QA and development team.

How QTP recognizes Objects in AUT?
QuickTest stores the definitions for application objects in a file called the Object Repository. As you record your test, QuickTest will add an entry for each item you interact with. Each Object Repository entry will be identified by a logical name (determined automatically by QuickTest), and will contain a set of properties (type, name, etc) that uniquely identify each object. Each line in the QuickTest script will contain a reference to the object that you interacted with, a call to the appropriate method (set, click, check) and any parameters for that method (such as the value for a call to the set method). The references to objects in the script will all be identified by the logical name, rather than any physical, descriptive properties.

In how many ways can we add checkpoints to an application using QTP?
We can add checkpoints while recording the application, or after recording is completed using the Active Screen. (Note: for the second method, the Active Screen must be enabled while recording.)

What are the file extensions of the code file and the object repository file in QTP?
(1) For the code file the extension is .vbs. (2) For a shared object repository file the extension is .tsr; a per-action object repository file uses .mtr.
How to merge object repositories?
QTP 8.2 ships with the QTP Plus setup, which provides the Object Repository Merge Utility. This utility enables the user to merge object repository files into a single object repository file.

What are the different scripting languages you could use when working with QTP?
QTP test scripts are written in VBScript. (Visual Basic, XML, JavaScript, Java and HTML are technologies and environments that QTP works with, not its scripting language.)

Can you do more than just capture and playback?
Yes. For example, objects can be captured dynamically at run time, with no recording, no playback and no use of the object repository at all. This can be done with Windows scripting, using the DOM (Document Object Model).

How many types of actions are there in QTP?
There are three kinds of actions:
(1) Non-reusable action - an action that can be called only in the test with which it is stored, and can be called only once.
(2) Reusable action - an action that can be called multiple times by the test with which it is stored (the local test) as well as by other tests.
(3) External action - a reusable action stored with another test. External actions are read-only in the calling test, but you can choose to use a local, editable copy of the Data Table information for the external action.
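Calling these actions from a script can be sketched as follows (the action and test names are hypothetical):

```vbscript
' Call a reusable action stored in this test, running one iteration
RunAction "Login", oneIteration

' Call an external reusable action stored in the test "OrderTests",
' passing an input parameter and receiving an output parameter
RunAction "CreateOrder [OrderTests]", oneIteration, "book123", orderNumber
```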

How can we write scripts without a GUI (i.e., the GUI is not available and you want to write a script in QTP)?
By descriptive programming.



What is descriptive programming? What is the use of descriptive programming?
QTP uses its object repository to refer to objects present in your test which have been recorded. If you wish to use objects that were not recorded and are not present in your object repository, you use descriptive programming: QTP does not refer to the object repository; instead the property names and values are given in the code itself. For example, this is not descriptive programming:
Browser("Mercury Tours").Page("Mercury Tours").WebEdit("username")
This is descriptive programming:
Browser("Title:=Mercury Tours").Page("Title:=Mercury Tours").WebEdit("Name:=Author", "Index:=3").Set "Mark Twain"

Explain the need to use analog recording in QTP.
This mode records the exact mouse and keyboard operations you perform in relation to the screen or application window. It is useful for operations that cannot be recorded at the object level, such as drawing a picture or recording a signature. The steps recorded in analog mode are saved in a separate data file; QuickTest adds to your test a RunAnalog statement that calls the recorded analog file. This file is stored with the action in which the analog steps were created. Steps recorded in analog mode cannot be edited within QuickTest.
Descriptive programming in QTP
Whenever QTP records an action on any object of an application, it adds a description of how to recognize that object to a repository of objects called the object repository. QTP cannot take action on an object unless its object description is in the object repository. Descriptive programming provides a way to perform actions on objects which are not in the object repository.

Object Identification:
To identify an object during playback of a script, QTP stores some properties which help it uniquely identify the object on a page. For example, consider an object repository in which two properties were added to recognize a radio button on a page: the name of the radio button and its html tag.

The name shown in the left tree view is the logical name given by QTP for the object. This can be changed for the convenience of the person writing the test case. QTP only allows a UNIQUE logical name under the same level of hierarchy. For example, two objects under the same Browser->Page node, such as "WebTable" and "testPath", cannot have the same logical name, but an object under some other node can have the same name. With the current repository, we can only write operations on objects which are in the repository. Some example operations are given below:

Browser("Browser").Page("Page").WebRadioGroup ("testPath").Select "2"
cellData = Browser("Browser").Page("Page").WebTable ("WebTable").GetCellData (1,1)
Browser("Example2").Page("Page").WebEdit("testPath").Set "Test text"

When and why to use descriptive programming?
Below are some of the situations when descriptive programming can be considered useful:

1. The objects in the application are dynamic in nature and need special handling to be identified. The best example would be clicking a link which changes according to the user of the application, e.g. "Logout <username>".

2. The object repository is getting huge due to the number of objects being added. If the size of the object repository increases too much, it decreases QTP's performance while recognizing an object.

3. You don't want to use the object repository at all. Why not? Consider the following scenarios:

Scenario 1: Suppose we have a web application that has not been developed yet. QTP needs the application to be up in order to record the script and add objects to the repository, which would mean waiting for the application to be deployed before we can start making QTP scripts. But if we know the descriptions of the objects that will be created, we can still start writing the scripts for testing.

Scenario 2: Suppose an application has 3 navigation buttons on each and every page. Let the buttons be "Cancel", "Back" and "Next". Recording an action on these buttons would add 3 objects per page to the repository. For a 10-page flow this would mean 30 objects, which could have been represented by just 3. So instead of adding these 30 objects to the repository, we can write 3 descriptions for the objects and use them on any page.

4. A modification to a test case is needed, but the object repository for it is read-only or shared, i.e. changes may affect other scripts as well.

5. You want to take action on similar types of objects, i.e. suppose we have 20 textboxes on the page and their names are of the form txt_1, txt_2, txt_3 and so on. Adding all 20 to the object repository would not be a good programming approach.

How to use Descriptive programming?
There are two ways in which descriptive programming can be used
By creating properties collection object for the description.
By giving the description in form of the string arguments.

By creating properties collection object for the description.

To use this method you first need to create an empty description:

Dim obj_Desc 'Not necessary to declare
Set obj_Desc = Description.Create

Now we have a blank description in "obj_Desc". Each property in a description has three attributes: "Name", "Value" and "RegularExpression".

obj_Desc("html tag").Value = "INPUT"

When you use a property name for the first time, the property is added to the collection; when you use it again, the property is modified. By default each property value that is defined is treated as a regular expression. Suppose we have the following description:

obj_Desc("html tag").Value = "INPUT"
obj_Desc("name").Value = "txt.*"

This would mean an object with html tag INPUT and name starting with "txt", because ".*" is interpreted as a regular expression. So, if you do not want the "name" property to be treated as a regular expression, you need to set its "RegularExpression" attribute to False:

obj_Desc("html tag").Value = "INPUT"
obj_Desc("name").Value = "txt.*"
obj_Desc("name").RegularExpression = False

This is how we create a description. Below is the way we can use it:

Browser("Browser").Page("Page").WebEdit(obj_Desc).Set "Test"

When we say .WebEdit(obj_Desc), we add one more property to our description that was not defined earlier: that it is a text box (because QTP's WebEdit objects map to text boxes in a web page).

If we know that we have more than one element with the same description on the page, then we must define the "index" property for that description.

Consider the HTML code given below, with two text boxes that share the same name:

<INPUT type="text" name="txt_Name">
<INPUT type="text" name="txt_Name">
Now the HTML code has two objects with the same description, so to distinguish between these two objects we will use the "index" property. Here is the description for both objects:

For the 1st textbox:
obj_Desc("html tag").Value = "INPUT"
obj_Desc("name").Value = "txt_Name"
obj_Desc("index").Value = "0"

For the 2nd textbox:
obj_Desc("html tag").Value = "INPUT"
obj_Desc("name").Value = "txt_Name"
obj_Desc("index").Value = "1"

Consider the HTML code given below, with a text box and a radio button that share the same name:

<INPUT type="text" name="txt_Name">
<INPUT type="radio" name="txt_Name">

We can use the same description for both objects and still distinguish between them:
obj_Desc("html tag").Value = "INPUT"
obj_Desc("name").Value = "txt_Name"

When I want to refer to the textbox, I will use the description object inside a WebEdit object; to refer to the radio button, I will use the description object with a WebRadioGroup object.

Browser("Browser").Page("Page").WebEdit(obj_Desc).Set "Test" 'Refers to the text box
Browser("Browser").Page("Page").WebRadioGroup(obj_Desc).Select "Test" 'Refers to the radio button

But if we use the WebElement object with this description, then we must define the "index" property, because for a WebElement the current description would return two objects.

Hierarchy of test description:
When using programmatic descriptions from a specific point within a test object hierarchy, you must continue to use programmatic descriptions from that point onward within the same statement. If you specify a test object by its object repository name after other objects in the hierarchy have been described using programmatic descriptions, QuickTest cannot identify the object.

For example, you can use Browser(Desc1).Page(Desc1).Link(desc3), since it uses programmatic descriptions throughout the entire test object hierarchy. You can also use Browser("Index").Page(Desc1).Link(desc3), since it uses programmatic descriptions from a certain point in the description (starting from the Page object description).

However, you cannot use Browser(Desc1).Page(Desc1).Link("Example1"), since it uses programmatic descriptions for the Browser and Page objects but then attempts to use an object repository name for the Link test object (QuickTest tries to locate the Link object based on its name, but cannot locate it in the repository because the parent objects were specified using programmatic descriptions).

Getting Child Object:
We can use a description object to get all the objects on the page that match that specific description. Suppose we have to check all the checkboxes present on a web page. We will first create an object description for a checkbox and then get all the checkboxes from the page:

Dim obj_ChkDesc

Set obj_ChkDesc = Description.Create
obj_ChkDesc("html tag").Value = "INPUT"
obj_ChkDesc("type").Value = "checkbox"

Dim allCheckboxes, singleCheckBox

Set allCheckboxes = Browser("Browser").Page("Page").ChildObjects(obj_ChkDesc)

For Each singleCheckBox In allCheckboxes
    singleCheckBox.Set "ON"
Next
The above code will check all the check boxes present on the page. To get all the child objects we must specify an object description, i.e. we can't use the string arguments that are discussed later in the second way of using descriptive programming.

Possible Operation on Description Object

Consider the code below for all the solutions:

Dim obj_ChkDesc

Set obj_ChkDesc = Description.Create
obj_ChkDesc("html tag").Value = "INPUT"
obj_ChkDesc("type").Value = "checkbox"

How to get the number of properties defined in a collection:
obj_ChkDesc.Count 'Will return 2 in our case

How to remove a property from the collection:
obj_ChkDesc.Remove "html tag" 'Deletes the "html tag" property from the collection

How do I check whether a property exists in the collection?
It is not possible directly, because whenever we try to access a property which is not defined, it is automatically added to the collection. The only way is to check its value, using an If statement: If obj_ChkDesc("html tag").Value = Empty Then ...

How to browse through all the properties of a properties collection?
There are two ways.
1st:
For Each desc In obj_ChkDesc
    Name = desc.Name
    Value = desc.Value
    RE = desc.RegularExpression
Next
2nd:
For i = 0 To obj_ChkDesc.Count - 1
    Name = obj_ChkDesc(i).Name
    Value = obj_ChkDesc(i).Value
    RE = obj_ChkDesc(i).RegularExpression
Next


By giving the description in form of the string arguments.
You can describe an object directly in a statement by specifying property:=value pairs describing the object instead of specifying an object’s name. The general syntax is:

TestObject("PropertyName1:=PropertyValue1", "..." , "PropertyNameX:=PropertyValueX")

TestObject—the test object class could be WebEdit, WebRadioGroup etc….

PropertyName:=PropertyValue—the test object property and its value. Each property:=value pair should be separated by commas and quotation marks. Note that you can enter a variable name as the property value if you want to find an object based on property values you retrieve during a run session.

Consider the HTML code given below, with a text box and a radio button that share the same name:

<INPUT type="text" name="txt_Name">
<INPUT type="radio" name="txt_Name">
Now to refer to the textbox the statement would be as given below

Browser("Browser").Page("Page").WebEdit("Name:=txt_Name", "html tag:=INPUT").Set "Test"

And to refer to the radio button the statement would be as given below:

Browser("Browser").Page("Page").WebRadioGroup("Name:=txt_Name", "html tag:=INPUT").Select "Test"

If we refer to them as WebElement objects, then we will have to distinguish between the two using the index property:

Browser("Browser").Page("Page").WebElement("Name:=txt_Name", "html tag:=INPUT", "Index:=0").Click 'Refers to the textbox
Browser("Browser").Page("Page").WebElement("Name:=txt_Name", "html tag:=INPUT", "Index:=1").Click 'Refers to the radio button
QuickTest Professional (QTP) 8.2 Tips and Tricks (1)

Data Table
Two Types of data tables
Global data sheet: Accessible to all the actions
Local data sheet: Accessible to the associated action only

Usage:
DataTable("Column Name",dtGlobalSheet) for Global data sheet
DataTable("Column Name",dtLocalSheet) for Local data sheet

If we change anything in the Data Table at run time, the data is changed only in the run-time data table. The run-time data table is accessible only through the test results. It can also be exported using DataTable.Export or DataTable.ExportSheet.
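A minimal sketch of changing and exporting the run-time table (the column name and file paths are illustrative):

```vbscript
' Change a value in the run-time Data Table only
DataTable("Status", dtGlobalSheet) = "Passed"

' Export all sheets, or one named sheet, of the run-time table
DataTable.Export "C:\Results\RunTimeData.xls"
DataTable.ExportSheet "C:\Results\GlobalOnly.xls", "Global"
```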

How can I save the changes to my DataTable in the test itself?
QTP does not provide a way to save run-time changes back to the design-time data sheet. The only workaround is to share the spreadsheet and then access it using the Excel COM APIs.

How can I check if a parameter exists in DataTable or not?
The best way would be to use the below code:
code:
on error resume next
val=DataTable("ParamName",dtGlobalSheet)
if err.number<> 0 then
'Parameter does not exist
else
'Parameter exists
end if

How can I make some rows colored in the data table?
You can't do it through QTP directly, but you can use the Excel COM APIs to do the same. The code below illustrates some aspects of the Excel COM APIs.

code:
Set xlApp=Createobject("Excel.Application")
set xlWorkBook=xlApp.workbooks.add
set xlWorkSheet=xlWorkBook.worksheets.add
xlWorkSheet.Range("A1:B10").interior.colorindex = 34 'Change the color of the cells
xlWorkSheet.Range("A1:A10").value="text" 'Will set values of all 10 rows to "text"
xlWorkSheet.Cells(1,1).value="Text" 'Will set the value of first row and first col


rowsCount=xlWorkSheet.Evaluate("COUNTA(A:A)") 'Will count the # of rows which have non blank value in the column A
colsCount=xlWorkSheet.Evaluate("COUNTA(1:1)") 'Will count the # of non blank columns in 1st row

xlWorkbook.SaveAs "C:\Test.xls"
xlWorkBook.Close
Set xlWorkSheet=Nothing
Set xlWorkBook=Nothing
set xlApp=Nothing

SMART Identification
Smart Identification is an algorithm used by QTP when it is not able to recognize one of the objects. A generic example from the QTP manual: take a photograph of an 8-year-old girl and boy; QTP records the identification properties of the girl at age 8. When both are 10 years old, QTP would no longer recognize the girl from those properties, but something is still the same: there is only one girl in the photograph. So it is a kind of PI (programmed intelligence), not AI.

When should I use Smart Identification?
This is something people don't think about much, but you should disable SI while creating your test cases, so that you can recognize the objects that are dynamic or inconsistent in their properties. Once the script has been created, SI should be enabled so that the script does not fail on small changes; but the developer of the script should always check the test results to verify whether the SI feature was used to identify an object. Sometimes SI needs to be disabled for particular objects in the OR; this is advisable when you use SetTOProperty to change any of the TO properties of an object, especially ordinal identifiers like index, location and creationtime.

Descriptive Programming
Descriptive programming is a technique by which operations can be performed on AUT objects that are not present in the OR. For more details refer to http://bondofus.tripod.com/QTP/DP_in_QTP.doc (right click and use Save As...).

What is a Recovery Scenario?
A recovery scenario gives you an option to take some action to recover from a fatal error in the test. The errors range from occasional to typical: an occasional error would be an "Out of paper" popup while printing something, while typical errors would be "object is disabled" or "object not found". A test can have more than one scenario associated with it, along with the priority or order in which the scenarios should be checked.

What does a recovery scenario consist of?
Trigger: the cause for initiating the recovery scenario. It could be a popup window, a test error, a particular state of an object, or an application error.
Action: defines what needs to be done if the scenario has been triggered. It can consist of a mouse/keyboard event, closing the application, calling a recovery function defined in a library file, or restarting Windows. You can have a series of these actions.
Post-recovery operation: defines what needs to be done after the recovery action has been taken. It could be to repeat the step, move to the next step, etc.
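Recovery scenarios themselves are built in the Recovery Scenario Wizard rather than in code, but the Recovery reserved object lets a script control them at run time. A sketch:

```vbscript
' Temporarily disable the associated recovery scenarios for steps
' where an expected popup should not trigger recovery
Recovery.Enabled = False

' ... steps that may legitimately show the popup ...

Recovery.Enabled = True

' Ask QTP to evaluate the recovery triggers immediately
Recovery.Activate
```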

When to use a recovery scenario and when to use On Error Resume Next?
Recovery scenarios are used when you cannot predict at what step the error may occur, or when you know that the error won't occur in your QTP script but could occur in the world outside QTP; again, the example would be "out of paper", as this error is raised by the printer device driver. On Error Resume Next should be used when you know an error is expected and don't want to raise it, or when you want different actions depending on which error occurred. Use Err.Number and Err.Description to get more details about the error.
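A typical On Error Resume Next pattern for an expected, non-fatal error might look like this (object names are illustrative):

```vbscript
On Error Resume Next
Browser("App").Page("Page").WebButton("Save").Click
If Err.Number <> 0 Then
    ' Log a warning instead of failing the test
    Reporter.ReportEvent micWarning, "Save step", "Click failed: " & Err.Description
    Err.Clear
End If
On Error GoTo 0 ' Restore normal error handling
```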

Library Files or VBScript Files
How do we associate a library file with a test?
Library files are files containing normal VBScript code: functions, sub procedures, classes, etc. To associate a library file with your script, go to Test -> Settings... and add the library file on the Resources tab. You can also use the ExecuteFile function to include a file at run time.

When to associate a library file with a test and when to use ExecuteFile?
When we associate a library file with the test, all the functions within that library are available to all the actions present in the test. But when we use the ExecuteFile function to load a library file, the functions are available only in the action that called ExecuteFile. Associating a library with a test lets us share variables across actions (global variables, basically); it also makes it possible to execute code as soon as the script runs, because while loading the script on startup QTP executes all the code in the global scope. We can use ExecuteFile in a library file associated with the test to load dynamic files, and they will be available to all the actions in the test.
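For example (the file path and the function name are hypothetical):

```vbscript
' Load a library at run time; its functions are available
' only in the action that executes this statement
ExecuteFile "C:\QTP\Libs\CommonUtils.vbs"

' Call a function defined in the loaded file
LoginToApp "admin", "secret"
```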

What is the difference between test objects and run-time objects?
Test objects are basic and generic objects that QTP recognizes. A run-time object is the actual object in the application to which a test object maps.
QuickTest Professional (QTP) 8.2 Tips and Tricks (2)

Can I change properties of a test object?
Yes. You can use SetTOProperty to change the test object properties. It is recommended that you switch off the Smart Identification for the object on which you use SetTOProperty function.

Can I change properties of a run-time object?
No (but also yes). You can use GetROProperty("outerText") to get the outerText of an object, but there is no SetROProperty function to change it. However, you can use WebElement().Object.outerText = "Something" to change the property through the native object.
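The three cases can be sketched as follows (object names are illustrative):

```vbscript
' Read a run-time property of the actual object
txt = Browser("App").Page("Page").WebElement("banner").GetROProperty("outertext")

' Change the test object description held in memory
' (switch Smart Identification off for this object first)
Browser("App").Page("Page").WebElement("banner").SetTOProperty "outertext", txt

' Change the live object itself through the native .Object property
Browser("App").Page("Page").WebElement("banner").Object.outerText = "Something"
```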

What is the difference between an action and a function?
An action is specific to QTP, while functions are a generic feature of VBScript. An action can have an object repository associated with it, while a function can't. A function is just lines of code with zero or more parameters and a single return value, while an action can have more than one output parameter.

Where to use a function or an action?
The answer depends on the scenario. If you want to use the OR feature, then you have to go for an action. If the functionality is not automation-specific, e.g. getting a string between two specific characters, that is pure VBScript and should be a function, not an action. Code specific to QTP can also be put into a function using DP. The decision also depends on what one is comfortable using in a given situation.

What is a checkpoint?
A checkpoint is basically a point in the test which validates a specific thing in the AUT. There are different types of checkpoints depending on the type of data that needs to be tested: text, image/bitmap, attributes, XML, etc.

What's the difference between a checkpoint and an output value?
A checkpoint only checks a specific attribute of an object in the AUT, while an output value writes that attribute's value to a column in the data table.

How can I check if a checkpoint passes or not?
code:
chk_PassFail = Browser(...).Page(...).WebEdit(...).Check (Checkpoint("Check1"))
if chk_PassFail then
MsgBox "Check Point passed"
else
MsgBox "Check Point failed"
end if

My test fails due to a checkpoint failing. Can I validate a checkpoint without my test failing due to the checkpoint failure?
code:
Reporter.Filter = rfDisableAll 'Disables all the reporting stuff
chk_PassFail = Browser(...).Page(...).WebEdit(...).Check (Checkpoint("Check1"))
Reporter.Filter = rfEnableAll 'Enable all the reporting stuff
if chk_PassFail then
MsgBox "Check Point passed"
else
MsgBox "Check Point failed"
end if

How can I import environment from a file on disk?
Environment.LoadFromFile "C:\Env.xml"

How can I check whether an environment variable exists?
When we use Environment("Param1").Value, QTP expects the environment variable to be already defined. But when we use Environment.Value("Param1"), QTP will create a new internal environment variable if it does not exist already. So to be sure that a variable exists in the environment, use Environment("Param1").Value.
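A sketch of a safe existence check based on that behaviour (the variable name and URL are illustrative):

```vbscript
On Error Resume Next
appURL = Environment("AppURL").Value
If Err.Number <> 0 Then
    ' Variable does not exist yet; create it as an internal variable
    Err.Clear
    Environment.Value("AppURL") = "http://example.com"
End If
On Error GoTo 0
```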

How to connect to a database?
code:
Const adOpenStatic = 3
Const adLockOptimistic = 3
Const adUseClient = 3
Set objConnection = CreateObject("ADODB.Connection")
Set objRecordset = CreateObject("ADODB.Recordset")
objConnection.Open "DRIVER={Microsoft ODBC for Oracle};UID=;PWD="
objRecordset.CursorLocation = adUseClient
objRecordset.CursorType = adOpenStatic
objRecordset.LockType = adLockOptimistic
objRecordset.Source = "select field1,field2 from testTable"
objRecordset.ActiveConnection = objConnection
objRecordset.Open 'This will execute your Query
If objRecordset.RecordCount > 0 Then
Field1 = objRecordset("Field1").Value
Field2 = objRecordset("Field2").Value
End If
QuickTest Pro Questions Only (1)
1. how many maximum actions can be performed in a single test
2. How to create the dynamic object repository in QTP?
3. What is the difference between the global sheet and the action sheet?
4. How to pass parameters from one action to another action.
5. How to perform cross-platform testing and cross-browser testing using QTP? Can you explain with some example
6. how to connect the database through QTP
7. How to open multiple instances of an application from QTP? 2. How to recognize each instance and setting
8. In what conditions do we go for reusable scripts, and how do we create and
9. Can we call QTP test from another test using scripting.Suppose there are 4 tests and i want to call these
10. Is it possible to test a web application(java) with winrunner?otherwise is it possible to check with
11. How can we insert a text checkpoint and a bitmap checkpoint? If you provide an example script, it would be appreciated
12. Can we mask a Code In .vbs file so that it is not viewable to others?
13. Is there any function to double click a particular row in a webtable?
14. what is the use of command tab in Debug viewer ?can we execute any user defined queries
15. What is Analog recording,What is the difference between analog recording and low level recording
16. what is database check point.
17. What is the use of function and sub function in QTP?
18. What is the new Version of QTP which is recently released in the Market?
19. How to call a function present in a DLL file from a QTP script?
20. I have n iterations of a test run in QTP. I want to see the results of not only the latest ('n'th) iteration…
21. How to call from one action to another action in QTP?
22. What is the difference between a bitmap checkpoint and an image checkpoint? Please explain in detail…
23. How can we do block commenting in QTP?
24. How to get the column count and column names from a resultset in a database connection program?
25. 1. How to write QTP scripts? 2. Any related website resources to learn QTP? 3. What steps are to be followed…
26. How to check an XML schema (XML schema validation from an XML file)? Tell me about the .XSD file format.
27. How to schedule tests in QTP?
28. Are add-ins enough to work with Flex-based applications, or do we have to get a multimedia license for…
29. How to identify a 'WebElement' class object while recording and running in the 'Event' mode settings?
30. What are the new features available in QTP 8.2 compared with earlier versions?
31. Hi forum. Could anybody tell me what AMDOCS is, and what are its models? Best regards.
32. What is difference between window(" ") and dialog(" ") in QTP while creating script?
33. How do you retrieve the Class name of a Test Object programmatically from within a script?
34. What is the best way to test UNIX (QTP, WinRunner or XRunner)? If QTP supports it, let me know briefly…
35. What recovery scenario can be applied for any telephone line connection (e.g. BSNL, Airtel, etc.)?
36. What are the advantages and disadvantages between Internet Explorer and Netscape Navigator (or) Internet…
37. Anybody with experience of testing mainframe applications? I usually check the ActiveX and VB add-ins…
38. Difference Between text and Textarea checkpoints in QTP
39. How to handle a dynamic WebList in QTP when the values in the WebList are different?
40. What is the hotkey that can be used for hitting the spacebar?
41. We are trying to prevent anybody from seeing our script after we write it. Does anybody know how to do this?
42. How can I insert a database checkpoint in QTP 6.5?
43. What are different execution modes available in QTP & explain them.
44. How can we recognize objects in a Flex application using QTP? When I record scripts, it takes all objects…
45. For an AS/400 application that takes data only through keyboard input, without a single mouse click…
46. How to write QTP test results to an Excel application?
47. How to write a recovery scenario for the following, and what steps would we follow? If I click…
48. If WinRunner and QTP are both functional testing tools from the same company, why is QTP a separate tool?
49. How do we test Links using Quick Test Professional and confirm that the page we are requesting is seen
50. How do you test DLL files using QTP?
51. After importing an external .xls datasheet into the QTP Datatable, how to set the number of iterations to run…
52. How to test dynamic web pages using QTP?
53. How to record Flex (1.0) objects using QTP? Post code that works for this.
54. Advantage of using Mercury Quality Center over Test Director
55. How do we connect to Oracle database from QTP ?
56. Can anyone please tell me how to configure environment variables in QTP and how to use the variables…
57. What is the process for creating an automated test script using QTP, assuming you have reviewed the manual…
58. How to use a data driver in QTP?
59. What method is used to focus on a particular field? I need the script. I will give an example from Flights…
60. 1) What are the advantages and disadvantages of using the Virtual Object Wizard? 2) How efficiently can we use…
61. Without recording objects in Object Repository are we able to run scripts?
62. Can we call a QTP script from a main script without making it reusable?
63. What is XML schema validation, and how to perform schema validation for a file? What is the *.XSD extension…
64. Can anybody explain the differences between a reusable and an external action, with an example?
65. I need information on using FSO (FileSystemObject) and its significance.
66. How to run a script recorded in the English flavor of my application and re-run the same script on a different…
67. How to write QTP test results to an Excel application? Please provide the exact code if possible ASAP. Thanks…
68. What are the limitations for XML Checkpoints in QTP 8.0?
69. How good is QTP for testing Siebel applications? Does QTP recognize Siebel objects, or something else…
70. How do I use a text checkpoint in QTP? Every time I use this checkpoint in the Excel sheet and highlight…
71. How is automation used in QTP for regression testing? Please give me a sample script.
72. Can anybody explain the concept of checkpoint declaration in QTP, mainly for objects, pages, text…
73. How can we validate PDF file recognition and its content with the help of the Mercury product QTP (Quick…
74. What is Expert View in QTP? Can you explain with an example?
75. What is the best way to do regression testing using QTP?
76. What is the use of functions in QTP (public, private)?
77. How can we return values from a user-defined function? A small code example would be great.
78. How to retrieve/update a database by writing code in Expert View? In my case the database is Access and my DSN name is "try"…
79. How can I import and/or merge an existing repository into my current test?
81. Hi, I set the repository per test mode and recorded my script. Now I want to change the repository…
82. What are the limitations of QTP?
83. What is the difference between a stub and a driver?
84. What is meant by source control?
85. What is descriptive programming?
86. How to automate editing an XML file? When I record the editing of an XML file and run it, some…
87. What are the disadvantages or drawbacks in QTP?
88. Each test that you run is displayed on the screen… I'm looking for a way to run a test in the background…
89. I faced one question in an interview: given one screen with one bitmap and one edit box, the original…
90. What is QTP Plus? How do we merge files in QTP? What is a feasibility study in automation?
91. When a script is recorded in QuickTest for connecting 10 rows in the database, can we change the script…
92. How do you test a Siebel application using QTP?
93. How to get a traceability matrix from TD?
94. How to import a test case present in a ".xls" file into TD under a test set?
95. How to attach a file to TD?
96. What do you do to the script when objects are removed from the application?
97. How do you data drive an external spreadsheet?
98. Give me an example where you have used a COM interface in your QTP project?
99. How long have you used the product?
100. How to get "FontSize" of a "WebEdit"?
101. Is there anyway to automatically update the Datasource name in Database Checkpoints object when we migrate tests to a new release?
102. How to create a Runtime property for an object?
103. How to handle the exceptions using recovery scenario manager in QTP ?
104. What is the use of Text output value in QTP?
105. Have you ever written a compiled module? If yes tell me about some of the features.
106. I want to open a Notepad window without recording a test, and I do not want to use the SystemUtil.Run command either. How do I do this?
107. What is the command in QTP to invoke IE Browser?
108. How to execute QTP script from command prompt?
109. How will we count the total number of web links on a page?
110. How will you create an object in VBScript?
111. What is the difference between TSL & VBScript?
112. Write a script to check whether a button is enabled or not.

QuickTest Pro Questions Only (2)
1. List the shortcut keys for common functionality in QTP, for example to record, to run, etc.
2. What challenges do we face while testing web-based applications using QTP or any automation tool?
3. Do we get any issues if we run the test script on different browsers? What options do we need to set in QTP?
4. How can we add actions to a test using QTP?
5. Does QTP provide any tools for parameterization?
6. What are recovery management techniques?
7. How can we merge object repositories? Say we have two or three object repositories; is there any option in QTP to merge them?
8. What is the difference between a link and a hyperlink?
9. Can someone help me compare the values from one sheet to the values in another sheet? Suppose I have a field called temp in Action1 and also temp in Action2. I want to compare the values of those two action sheets in the data table. How can I do it using descriptive programming?
10. How to do a batch run in QTP? Provide the steps.
11. Can we do QTP testing without creating objects in the Object Repository? Can we do it completely by writing code, i.e. in Expert View only? Are there any books for this?
12. In QTP, how does the tool interact with the application build?
13. Scenario: 2 combo boxes. The 1st combo contains a, b, c, d; the 2nd contains 10-20, 20-30, 30-40, … Selecting 'a' should display 10-20, 'b' should display 20-30, and so on. What is the script? Also: random testing of 500 test cases in QTP?
14. How to do batch testing using QTP?
15. What is the use of the Function Generator in QTP?
16. What are environment variables?
17. What is the use of a virtual object? Explain.
18. Difference between WinRunner 8.2 and…? How to integrate with some other tools?
19. What is the procedure for batch testing in TestDirector using a manual testing procedure?
20. What is a test strategy, and what is the difference between a test strategy and a test plan?
21. What is descriptive programming? When is it useful, and when should it be used?
22. How to invoke a recorded script in QTP without using the record & playback concept?
23. If two text boxes are in a form, and a table contains some records with usernames & passwords, what script do we need to write using the descriptive programming concept in QTP (simply a data-driven test script)?
24. Can we run scripts from QTP 8.2 in QTP 7.0?
25. How to capture data from images in QTP and produce it in an Excel sheet?
26. How to handle a recovery scenario in QTP? Give a detailed explanation.
27. How can we take the data in the username text box of a web application into a variable? Explain with an example and give all functions to get data into variables.
28. What are the file extensions for per-action and shared object repository files, and what is the extension for library files?
29. What are the most frequent errors you faced while executing your scripts?
30. Among all the checkpoints, which is the most important checkpoint?
31. How to write a script in QTP (VBScript) without the application deployed, and how to call Script1 from Script2?
32. How do you handle multiple banners in a web page (the banner scrolling at the top of the page)? Do not use the name property (regular expression).
33. How do you handle XML exceptions in QTP (an exception here, not the checkpoint)?
34. What is an iteration? How is it related to test results in QTP?
35. Is it possible to map an image as a standard object, or must it be treated as virtual? How to map a dynamic image to a standard object?
36. What is the exact difference between a test and a component in QTP?
37. Difference between an image checkpoint and a bitmap checkpoint?
38. What is Description.Create() in QTP?
39. What is TOM in QTP?
40. What does the .mtr extension stand for in the action repository?
41. I am new to QTP; please suggest any books for QTP. My company uses QTP, so I need to learn it.
42. Tell me about descriptive programming in QTP 8.2.
43. Tell me about the Automation Object Model (AOM) in QTP.
44. How to merge object repositories?
45. What command is used to start QTP from Start -> Run?
46. What does a .vbs file contain? What is a .vbs file?
47. What is Unicode compatibility? How does this differ from WinRunner?
48. What is the exact meaning of environment variables? What is a compiled module in QTP, and what exactly does it contain: functions or actions?
49. What is a compiled module in QTP? What exactly does it contain: functions or actions?
50. What is the exact meaning of environment variables?
51. How to watch the current value of an object, like an OK button or an edit box with an "Agent Name:" label, in the Watch Expressions tab?
52. Which functionalities of QTP were used in a banking project?
53. I am a beginner in QTP and want to try the software. I checked the Mercury site for a QTP trial version and found QTP 9.0, but it supports Windows 2000 and XP, not Windows 98. I am using Windows 98 and it is not possible for me to upgrade to Windows 2000. Guide me to a trial version of QTP that supports the Windows 98 operating system.
54. Tell me the advantages and disadvantages of QTP.
55. How do you configure QTP and TestDirector?
56. How to get the column count and column names from the result set in the database connection program?
57. Under which tool does scalability testing come?
58. What is the difference between Rational Rose and QTP? Which tool is better to learn?
59. How to test a mainframe application? (Tell me a few basic things.)
60. What is a throw object?
61. How will you handle the situation when an object is not captured during recording?
62. What kinds of errors can be handled in QTP in a real-time scenario?
63. Can objects be recognized without a repository?
64. What is Smart Identification?
65. What is the difference between normal mode and fast mode?
66. In how many ways can you perform batch testing?
67. What is an API?
68. What is the difference between an action and a script?
69. Synchronization types in QTP?
70. In how many ways can you perform batch testing in QTP?
71. Approach for installation, compatibility, and system testing?
72. User-defined functions in QTP?
73. How do you perform exception handling in QTP, and what is the other name for it?
74. How do you call functions in QTP?
75. How do you connect Bugzilla with QTP?
76. How are you using QTP in your project?
77. How do you automate test scripts: one by one, module-wise, or all at once?
78. Can we directly automate test scripts according to requirements?
79. How do you automate test scripts?
80. What do you do if QTP doesn't recognize an object? What action should be taken?
81. After running scripts, how do you report results? Is there any specific report form?
82. In the object repository there are two actions: Action1 is named A.tsr and Action2 is named B.tsr. Is this possible? If yes, what is the output of A+B? If no, why?
83. Can anybody explain the concept of checkpoint declaration in QTP, mainly for objects, pages, text and tables?
84. How to use regular expressions in QTP? Give an example.
85. Consider a situation where you are working with QTP and suddenly the system crashes, so you restart it. My question is: how can QTP be opened directly when the system desktop appears?
86. Give me a descriptive programming code sample for the Flight application in QTP.
87. Can you put checkpoints on moving images?
88. How to test background color and dynamic images during runtime?
89. Where is a bitmap checkpoint saved?
90. What happens in a shared object repository if we call an existing action from an external action? And what happens in a per-action object repository if we call an existing action from an external action?
91. How can we retrieve ten rows from the data table using a loop?
92. How to convert a string to an integer?
93. How to "turn off" QTP results after running a script?
94. Is QTP "Unicode" compatible?
95. How to suppress warnings from the "test results" page?
96. What are the extensions of the script and object repository files in QTP?
97. How to open an application during scripting in QTP?
98. How to retrieve a property of an object in QTP?
99. How to change the run-time value of a property for an object in QTP?
100. How to handle run-time errors?

Credit Card Validation - Check Digits
This document outlines procedures and algorithms for verifying the accuracy and validity of credit card numbers. Most credit card numbers are encoded with a "check digit": a digit added to a number (either at the end or the beginning) that validates the authenticity of the number. A simple algorithm applied to the other digits of the number yields the check digit. By running the algorithm and comparing the check digit it produces with the check digit encoded in the credit card number, you can verify that you have correctly read all of the digits and that they form a valid combination.
Possible uses for this information:
• When a user has keyed in a credit card number (or scanned it) and you want to validate it before sending it out for debit authorization.
• When issuing cards, say an affinity card, you might want to add a check digit using the MOD 10 method.
1.Prefix, Length, and Check Digit Criteria
Here is a table outlining the major credit cards that you might want to validate.
CARD TYPE                   Prefix            Length   Check digit algorithm
MASTERCARD                  51-55             16       mod 10
VISA                        4                 13, 16   mod 10
AMEX                        34, 37            15       mod 10
Diners Club/Carte Blanche   300-305, 36, 38   14       mod 10
Discover                    6011              16       mod 10
enRoute                     2014, 2149        15       any
JCB                         3                 16       mod 10
JCB                         2131, 1800        15       mod 10
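As a sketch, the prefix and length rules in the table above can be checked in code before running the check digit algorithm. The rule table and function name below are my own illustration, not any card network's API:

```python
# Prefix/length rules transcribed from the table above:
# (card type, accepted prefixes, accepted lengths)
CARD_RULES = [
    ("MASTERCARD", ["51", "52", "53", "54", "55"], [16]),
    ("VISA", ["4"], [13, 16]),
    ("AMEX", ["34", "37"], [15]),
    ("Diners Club/Carte Blanche",
     ["300", "301", "302", "303", "304", "305", "36", "38"], [14]),
    ("Discover", ["6011"], [16]),
    ("enRoute", ["2014", "2149"], [15]),
    ("JCB", ["3"], [16]),
    ("JCB", ["2131", "1800"], [15]),
]

def card_type(number):
    """Return the first card type whose prefix and length both match, else None."""
    for name, prefixes, lengths in CARD_RULES:
        if len(number) in lengths and any(number.startswith(p) for p in prefixes):
            return name
    return None
```

Note that the rules are checked in table order, so length is what separates overlapping prefixes such as AMEX "34" (15 digits) from the 16-digit JCB "3" range.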
2. LUHN Formula (Mod 10) for Validation of Primary Account Number
The following steps are required to validate the primary account number:
Step 1: Double the value of alternate digits of the primary account number, beginning with the second digit from the right (the first right-hand digit is the check digit).
Step 2: Add the individual digits comprising the products obtained in Step 1 to each of the unaffected digits in the original number.
Step 3: The total obtained in Step 2 must be a number ending in zero (30, 40, 50, etc.) for the account number to be validated.
For example, to validate the primary account number 49927398716:
Step 1:
 4  9  9  2  7  3  9  8  7  1  6
   x2    x2    x2    x2    x2
------------------------------
   18     4     6    16     2

Step 2: 4 +(1+8)+ 9 + (4) + 7 + (6) + 9 +(1+6) + 7 + (2) + 6
Step 3: Sum = 70 : Card number is validated
Note: the card is valid because 70/10 yields no remainder.
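The three steps above map directly to code. As a minimal sketch (the function name is mine), the mod 10 check can be written as:

```python
def luhn_valid(number):
    """Mod 10 (LUHN) check, working right to left as described above."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        digit = int(ch)
        if i % 2 == 1:       # every second digit; the check digit (i == 0) is not doubled
            digit *= 2
            if digit > 9:    # e.g. 8*2 = 16 -> add 1 + 6, not 16;
                digit -= 9   # subtracting 9 equals summing the two digits
        total += digit
    return total % 10 == 0   # the total must end in zero
```

For the worked example, luhn_valid("49927398716") returns True (the total is 70); changing the check digit makes the check fail.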
The great folks at ICVERIFY are the original source of this data, I only formatted it in HTML.
If you are in the market, I wrote a set of FoxPro modules for Windows/Dos that interface nicely with ICVERIFY in a multi-user LAN setup. You just set up ICVERIFY on a single station, and all stations on the LAN can authorize credit cards with a single FOXBASE function call. Of course, you have to license ICVERIFY by the node, but it is very reasonable. I also wrote a couple of simple functions to perform pre-authorization, card screening, etc.
Here is a Microsoft Excel worksheet that will validate a number for you (useful for understanding the algorithm, it is in a .ZIP compressed format)
Horace Vallas made a NeoWebScript (Tcl really) procedure that implements it.
Check it out at https://enterprise.neosoft.com/secureforms/hav/
Because I get at least a letter a week regarding this routine, here are some additional helpful notes:
Make sure that you:
1. Have started with the rightmost digit (including the check digit). Figure odd and even based upon the rightmost digit being odd, regardless of the length of the credit card. ALWAYS work right to left.
2. The check digit counts as digit #1 (assuming that the rightmost digit is the check digit) and is not doubled.
3. Double every second digit (starting with digit #2 from the right).
4. Remember that when you double a digit over 4 (6, for example), you don't add the result to your total, but rather the sum of the digits of the result: 6*2=12, so you would add 1+2, not 12.
5. Always include the Visa or M/C prefix.

12 comments:

Unknown said...

Its Very Nice

Unknown said...

Thank you for this. I'm new to testing. It's very informative, although some of the pages are cut off. How do I get in contact with you?

Unknown said...

Thank you for modifying the pages. Looks great!

Ashok said...

hello sir
thanks for this beautiful trial
really this gives me lot of help
to improving my testing skill.
sir i am fresher so i need objective type question if you will provide us such type of question then it will be a great help for us.
AshokRathore
Pune

Santosh PM said...

Kuldeep, I have this simple issue when I test a Borland application. The combo boxes seem to have a different MSW_ID each time I run it and hence the winrunner seems unable to choose items from the combo box.

Is there a work around for the same?

I defined the combo box as a virtual list box and it identifies the box, but is still not able to choose the values from the listbox!

Unknown said...

Hello Kuldeep-
Thank you for your thorough explanation. I am impressed with your knowledge of testing methods among other things. If you want to talk more about this and a possibility of working a premier international internet company, please contact me at sarahkurien@google.com.

thank you,
Sarah

Shweta... said...

Your blog was very informative. I had a question and thought you might be of help. I am scripting test cases in QA Run and I want to add 2 variables. '+' in QA run is used for Concatenation. Can you please help? Thanks

Dmitry Motevich said...

I recommend studying my LoadRunner visual tutorials:
http://motevich.blogspot.com/search/label/visual%20tutorials

To simplify understanding, I add screenshots and pictures to my posts, so I hope they will be useful for you.

In any case, feel free to contact me, if you have ideas for further LoadRunner topics, to be explained, or any LoadRunner questions.

neelima said...

helloooooo.. good information on testing… can anyone say how to select multiple objects in a WebList, as I have many values in the list… my coding is
Browser("name").Page("name").WebList("html id:=name").Select "EMP501"
my necessity is how to select all the employees likewise, EMP502, EMP503…???? only one value is selected… so I need help

neelima said...

hi kuldeep very good information on testing

Ram - Toronto Tiger XI (A Hex team) said...

Hi Kuldeep,

Very good information on testing, i need certain information with regards to Winrunner, when i am recording winrunner doesn't recognize a hidden type field in the web page, how should i handle this. Pls let me know.

Thanks,
Ramu

Neelam said...

Hi Kuldeep,

Can i have your contact info. Actually I am looking for Advanced QTP Traning.

Thank you,
