Load / Stress Testing of Websites

1. The Importance of Scalability & Load Testing

Some very high-profile websites have suffered serious outages and performance problems caused by the sheer number of people hitting their site. E-commerce sites that spent heavily on advertising, but not nearly enough on ensuring the quality or reliability of their service, have ended up with poor website performance, system downtime and serious errors - with the predictable result that customers are lost.
In the case of toysrus.com, the web site couldn't handle the approximately 1000 percent increase in traffic that its advertising campaign generated. Similarly, Encyclopaedia Britannica was unable to keep up with the number of users in the weeks immediately following its promotion of free access to its online database. The truth is that these problems could probably have been prevented, had adequate load testing taken place.
When creating an eCommerce portal, companies will want to know whether their infrastructure can handle the predicted levels of traffic, to measure performance and verify stability.
These types of services include Scalability / Load / Stress testing, as well as Live Performance Monitoring.
Load testing tools can be used to test system behaviour and performance under stressful conditions by emulating thousands of virtual users. These virtual users stress the application even harder than real users would, while the behaviour and response times of the different components are monitored. This enables companies to minimise test cycles and optimise performance, hence accelerating deployment, while providing a level of confidence in the system. Once launched, the site can be checked regularly using Live Performance Monitoring tools, in order to detect and report any performance problems in real time - before users experience them.

2. Preparing for a Load Test

The first step in designing a Web site load test is to measure the current load levels as accurately as possible.

Measuring Current Load Levels

The best way to capture the nature of Web site load is to identify and track (e.g. using a log analyzer) a set of key user session variables that are applicable and relevant to your Web site traffic. Variables that could be tracked include:

• the length of the session (measured in pages)
• the duration of the session (measured in minutes and seconds)
• the types of pages that were visited during the session (e.g. home page, product information page, credit card information page)
• the typical/most popular 'flow' or path through the website
• the percentage of 'browse' vs. 'purchase' sessions
• the percentage of each user type (new user vs. returning registered user)
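As a sketch of how these session variables might be summarised once a log analyzer has grouped requests into sessions - the record layout below is hypothetical, and a real tool would produce its own schema:

```python
# Summarise key session variables from pre-sessionized log data.
# The session records here are illustrative assumptions.
from collections import Counter

sessions = [
    {"pages": 7, "seconds": 310, "type": "purchase", "user": "new"},
    {"pages": 3, "seconds": 95,  "type": "browse",   "user": "returning"},
    {"pages": 5, "seconds": 180, "type": "browse",   "user": "returning"},
    {"pages": 9, "seconds": 420, "type": "purchase", "user": "new"},
]

n = len(sessions)
avg_length   = sum(s["pages"] for s in sessions) / n      # session length in pages
avg_duration = sum(s["seconds"] for s in sessions) / n    # session duration in seconds
mix          = Counter(s["type"] for s in sessions)       # browse vs. purchase mix
pct_purchase = 100.0 * mix["purchase"] / n

print(avg_length, avg_duration, pct_purchase)
```

Averages like these feed directly into the concurrency and test-duration estimates in the following sections.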
Measure how many people visit the site per day, week and month. Then break these current traffic patterns down into one-hour slices, identify the peak hours (e.g. if you get a lot of traffic around lunch time), and establish the number of users during those peak hours. This information can then be used to estimate the number of concurrent users on your site.

3. Concurrent Users

Although your site may be handling x number of users per day, only a small percentage of these users will be hitting your site at the same time. For example, if you have 3000 unique users hitting your site in one day, all 3000 will not be using the site between 11.01 and 11.05 am. So, once you have identified your peak hour, divide this hour into 5- or 10-minute slices (use your own judgement here, based on the length of the average user session) to get the number of concurrent users for each time slice.

4. Estimating Target Load Levels

Once you have identified the current load levels, the next step is to understand, as accurately and objectively as possible, the nature of the load that must be generated during the testing.
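The concurrency arithmetic described above can be sketched as follows. The peak-hour share and session length are illustrative assumptions; substitute your own measured figures:

```python
# Rough concurrent-user estimate from daily traffic.
daily_users     = 3000    # unique users/day (the example figure above)
peak_hour_share = 0.20    # assumed fraction of daily traffic falling in the peak hour
avg_session_min = 5.0     # assumed average session duration in minutes

peak_hour_sessions = daily_users * peak_hour_share   # sessions during the peak hour

# Each session occupies avg_session_min of the 60-minute hour, so expected
# concurrency = arrival rate x session duration (Little's law).
concurrent_users = peak_hour_sessions * avg_session_min / 60.0

print(round(concurrent_users))
```

With these assumed numbers, 600 peak-hour sessions of 5 minutes each translate to roughly 50 users on the site at any instant - far fewer than the 3000 daily visitors.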
Using the current usage figures, estimate how many people will visit the site per day, week and month, then break that number down to arrive at realistic peak-hour scenarios.
It is important to understand the volume patterns, and to determine what load levels your web site might be subjected to (and must therefore be tested for).
There are four key variables that must be understood in order to estimate target load levels:

• how the overall amount of traffic to your Web site is expected to grow
• the peak load level which might occur within that overall traffic
• how quickly the number of users might ramp up to that peak load level
• how long that peak load level is expected to last
Once you have an estimate of overall traffic growth, you’ll need to estimate the peak level you might expect within that overall volume.
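A simple way to combine current figures with growth expectations is to multiply them out. The growth and campaign factors below are illustrative assumptions, not recommendations:

```python
# Turn a measured current peak into a target peak load for testing.
current_peak_sessions_hr = 600    # measured current peak-hour session rate
annual_growth            = 0.80   # assumed: traffic expected to grow 80% this year
campaign_multiplier      = 3.0    # assumed: an advertising push may triple peak traffic

target_peak_sessions_hr = current_peak_sessions_hr * (1 + annual_growth) * campaign_multiplier
print(round(target_peak_sessions_hr))   # target sessions/hr to test against
```

The resulting figure (3240 sessions/hr here) becomes the "X sessions/hr" referred to in the test objective and pass/fail criteria later in this document.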
5. Estimating Test Duration

The duration of the peak is also very important - a Web site that deals very well with a peak level for five or ten minutes may crumble if that same load level is sustained for longer. You should use the length of the average user session as the basis for determining the load test duration.
6. Ramp-up Rate

As mentioned earlier, although your site may be handling x number of users per day, only a small percentage of these users will be hitting your site at the same time.
Therefore, when preparing your load test scenario, you should take into account the fact that users will hit the website at different times, and that during your peak hour the number of concurrent users will likely gradually build up to reach the peak number of users, before tailing off as the peak hour comes to a close.
The rate at which the number of users builds up - the "ramp-up rate" - should be factored into the load test scenarios (i.e. you should not jump straight to the maximum value, but increase the load in a series of steps).
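A stepped ramp-up profile of this kind might be sketched as follows (a minimal example; real tools let you configure ramp-up directly):

```python
# Build a stepped load profile: ramp up to the peak in equal steps,
# hold at the peak, then tail off as the peak hour closes.
def ramp_profile(peak_users, steps, hold_steps):
    """Return a list of virtual-user counts: ramp up, hold, ramp down."""
    up   = [round(peak_users * i / steps) for i in range(1, steps + 1)]
    hold = [peak_users] * hold_steps
    down = list(reversed(up[:-1]))
    return up + hold + down

print(ramp_profile(100, 4, 2))
# [25, 50, 75, 100, 100, 100, 75, 50, 25]
```

Each entry would correspond to one time slice of the test (e.g. five minutes), mirroring the gradual build-up and tail-off of real peak-hour traffic.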
7. Scenario Identification

The information gathered during the analysis of the current traffic is used to create the scenarios that will be used to load test the web site. The identified scenarios aim to accurately emulate the behavior of real users navigating through the Web site. For example, a seven-page session that results in a purchase will create more load on the Web site than a seven-page session that involves only browsing. A browsing session might involve only the serving of static pages, while a purchase session will involve a number of elements, including the inventory database, the customer database, a credit card transaction with verification going through a third-party system, and a notification email. A single purchase session might put as much load on some of the system's resources as twenty browsing sessions.

Similar reasoning applies to purchases from new vs. returning users. A new user purchase might involve a significant amount of account setup and verification - something existing users may not require. The database load created by a single new user purchase may equal that of five purchases by existing users, so you should differentiate the two types of purchases.

8. Script Preparation

Next, program your load test tool to run each scenario with the appropriate number of each type of user playing back concurrently, to produce the required load scenario.
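The weighting logic above (one purchase roughly equal to twenty browsing sessions, a new-user purchase roughly five times an existing-user purchase) can be made explicit when sizing a scenario mix. The weights below are the article's illustrative ratios, not universal constants:

```python
# Relative back-end cost of one session of each type, in "browse units".
# These ratios are the examples from the text, not measured values.
LOAD_UNITS = {
    "browse":            1,
    "purchase_existing": 20,    # ~20x a browsing session
    "purchase_new":      100,   # ~5x an existing-user purchase
}

def weighted_load(session_counts):
    """Total back-end load of a scenario mix, in browse-equivalent units."""
    return sum(LOAD_UNITS[kind] * n for kind, n in session_counts.items())

mix = {"browse": 900, "purchase_existing": 80, "purchase_new": 20}
print(weighted_load(mix))   # 900 + 1600 + 2000 = 4500 units
```

Note that the 100 purchases in this mix (10% of sessions) account for 80% of the weighted load - which is why the browse/purchase split matters more than raw session counts.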
The key elements of a load test design are:
• test objective
• pass/fail criteria
• script description
• scenario description
Load Test Objective

The objective of this load test is to determine whether the Web site, as currently configured, can handle the anticipated peak load of X sessions/hr. If the system fails to scale as anticipated, the results will be analyzed to identify the bottlenecks.
Pass/Fail Criteria

The load test will be considered a success if the Web site handles the target load of X sessions/hr while maintaining the pre-defined average page response times (if applicable). Page response time will be measured as the elapsed time between a page request and receipt of the last byte.
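A sketch of how this pass/fail criterion might be evaluated once the measurements are in. The page names, thresholds, and measured values are hypothetical:

```python
# Apply the pass/fail rule: the target session rate must be sustained AND
# every page's average response time must stay within its pre-defined limit.
def load_test_passed(achieved_sessions_hr, target_sessions_hr,
                     avg_page_times, page_time_limits):
    """avg_page_times / page_time_limits: seconds per page, keyed by page name."""
    if achieved_sessions_hr < target_sessions_hr:
        return False
    return all(avg_page_times[p] <= page_time_limits[p] for p in page_time_limits)

limits   = {"home": 2.0, "product": 3.0, "checkout": 5.0}   # assumed SLAs
measured = {"home": 1.4, "product": 2.7, "checkout": 4.1}   # assumed results

print(load_test_passed(3300, 3240, measured, limits))
```

Encoding the criteria this way keeps pass/fail decisions objective and repeatable across test runs.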
Since in most cases user sessions follow just a few navigation patterns, you will not need hundreds of individual scripts to achieve realism - if you choose carefully, a dozen scripts will take care of most Web sites.
9. Script Execution

Scripts should be combined to describe a load testing scenario. A basic scenario includes the scripts that will be executed, the percentages in which those scripts will be executed, and a description of how the load will be ramped up. By emulating multiple business processes, the load test can generate a load equivalent to X virtual users on a Web application. During these load tests, real-time performance monitors are used to measure the response times for each transaction and to check that the correct content is being delivered to users. In this way, testers can determine how well the site is handling the load and identify any bottlenecks. The execution of the scripts opens X HTTP sessions (each simulating a user) with the target Web site and replays the scripts over and over again. Every few minutes it adds X more simulated users, and continues to do so until the web site fails to meet a specific performance goal.
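The stepped execution loop described above - keep adding simulated users until a performance goal is missed - can be sketched as follows. The measurement function here is a stub standing in for a real load tool driving the target site:

```python
# Step up the virtual-user count until the response-time goal is missed,
# and report the last load level at which the goal was still met.
def find_capacity(start_users, step, max_users, goal_secs, measure_response_time):
    """Return the highest tested user count at which the goal was met."""
    users, last_ok = start_users, 0
    while users <= max_users:
        if measure_response_time(users) > goal_secs:
            break                     # goal missed: stop ramping
        last_ok = users
        users += step
    return last_ok

# Stub: pretend response time degrades linearly with load (purely illustrative).
fake_measure = lambda users: 0.5 + users * 0.01

print(find_capacity(50, 50, 1000, 4.0, fake_measure))
```

In a real run, `measure_response_time` would execute the recorded scripts at the given concurrency and return the observed average page time.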
10. System Performance Monitoring

It is vital during the execution phase to monitor all aspects of the website. This includes measuring and monitoring the CPU usage and performance of the various components of the website - i.e. not just the webserver, but the database and other parts as well (such as firewalls, load balancing tools etc.).

For example, one e-tailer whose site fell over (apparently due to a high load) discovered, when analysing the performance bottlenecks, that the webserver had in fact been operating at only 50% of capacity. Further investigation revealed that the credit card authorisation engine was the cause of the failure - it was not responding quickly enough for the website, which then fell over while waiting for too many outstanding responses from the authorisation engine. They resolved the issue by changing the authorisation engine, and amending the website code so that any future problems with authorisation responses would not crash the site.

Similarly, another ecommerce site found that the performance issues they were experiencing were due to the database - while the webserver CPU usage was only at 25%, the back-end database server CPU usage was at 86%. Their solution was to upgrade the database server.

It is therefore necessary to use (and install if necessary) performance monitoring tools to check each aspect of the website architecture during the execution phase.

11. Suggested Execution Strategy

Start with a test at 50% of the expected virtual user capacity for 15 minutes and a medium ramp rate. The members of the team (testers will also need to be monitoring CPU usage during the testing) should be able to check whether the website is handling the load efficiently or whether some resources are already showing high utilisation. After making any system adjustments, run the test again or proceed to 75% of the expected load.
Continue with the testing and proceed to 100%, and then up to 150%, of the expected load, monitoring and making the necessary adjustments to your system as you go.

12. Results Analysis

Often the first indication that something is wrong is that end-user response times start to climb. Knowing which pages are failing will help you narrow down where the problem lies. Whichever load test tool you use, it should produce reports that highlight the following:
• Page response time by load level
• Completed and abandoned sessions by load level
• Page views and page hits by load level
• HTTP and network errors by load level
• Concurrent users by minute
• Missing links report, if applicable
• A full detailed report, including response time by page and by transaction, lost sales opportunities, analysis and recommendations
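The first two report lines above amount to grouping raw samples by load level. A minimal sketch, assuming a hypothetical sample layout of (load level, page, seconds, success flag):

```python
# Aggregate raw measurement samples into "response time by load level"
# and "errors by load level" report figures.
from collections import defaultdict

samples = [
    (100, "home", 1.1, True), (100, "checkout", 2.0, True),
    (200, "home", 1.8, True), (200, "checkout", 4.9, False),
    (200, "home", 2.2, True),
]

by_level = defaultdict(list)
errors   = defaultdict(int)
for level, page, secs, ok in samples:
    by_level[level].append(secs)
    if not ok:
        errors[level] += 1

report = {lvl: round(sum(ts) / len(ts), 2) for lvl, ts in by_level.items()}
print(report, dict(errors))   # avg response time and error count per load level
```

A rising average, or a growing error count, between adjacent load levels is exactly the "response times start to climb" signal the text describes.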
13. Important Considerations

When testing websites, it is critically important to test from outside the firewall. Web-based load testing services, based outside the firewall, can identify bottlenecks that are only found by testing in this manner; web-based stress testing of web sites is therefore more accurate when it comes to measuring a site's capacity constraints. Web traffic is rarely uniformly distributed, and most Web sites exhibit very noticeable peaks in their volume patterns. Typically, there are a few points in time (one or two days of the week, or a couple of hours each day) when traffic to the Web site is at its highest.
Automated Testing Detail Test Plan

Automated Testing DTP Overview

This Automated Testing Detail Test Plan (ADTP) identifies the specific tests that are to be performed to ensure the quality of the delivered product. System/Integration Test ensures the product functions as designed and that all parts work together. This ADTP covers Automated testing during the System/Integration Phase of the project and maps to the specification or requirements documentation for the project. This mapping is done in conjunction with the Traceability Matrix document, which should be completed along with the ADTP and is referenced in this document.

This ADTP refers to the specific portion of the product known as PRODUCT NAME. It provides clear entry and exit criteria, and identifies the roles and responsibilities of the Automated Test Team so that they can execute the test. The objectives of this ADTP are:

• Describe the tests to be executed.
• Identify and assign a unique number to each specific test.
• Describe the scope of the testing.
• List what is and is not to be tested.
• Describe the test approach, detailing methods, techniques, and tools.
• Outline the Test Design, including:
  • Functionality to be tested.
  • Test Case Definition.
  • Test Data Requirements.
• Identify all specifications for preparation.
• Identify issues and risks.
• Identify actual test cases.
• Document the design point.

Test Identification

This ADTP is intended to provide information for System/Integration Testing for the PRODUCT NAME module of the PROJECT NAME. The test effort may be referred to by its PROJECT REQUEST (PR) number and its project title for tracking and monitoring of the testing progress.
Test Purpose and Objectives

Automated testing during the System/Integration Phase, as referenced in this document, is intended to ensure that the product functions as designed, directly from customer requirements. The testing goal is to identify the quality of the structure, content, accuracy and consistency, response times and latency, and performance of the application, as defined in the project documentation.
Assumptions, Constraints, and Exclusions

Factors which may affect the automated testing effort, and may increase the risk associated with the success of the test, include:

• Completion of development of front-end processes
• Completion of design and construction of new processes
• Completion of modifications to the local database
• Movement or implementation of the solution to the appropriate testing or production environment
• Stability of the testing or production environment
• Load Discipline
• Maintaining recording standards and automated processes for the project
• Completion of manual testing through all applicable paths to ensure that reusable automated scripts are valid
Entry Criteria

The ADTP is complete, excluding actual test results. The ADTP has been signed off by appropriate sponsor representatives, indicating consent to the plan for testing. The Problem Tracking and Reporting tool is ready for use. The Change Management and Configuration Management rules are in place. The environment for testing, including databases, application programs, and connectivity, has been defined, constructed, and verified.
Exit Criteria
In establishing the exit/acceptance criteria for Automated Testing during the System/Integration Phase, the Project Completion Criteria defined in the Project Definition Document (PDD) provide a starting point. All automated test cases must have been executed as documented, and the percentage of successfully executed test cases must meet the defined criteria. Recommended criteria: no Critical or High severity problem logs remain open, all Medium problem logs have agreed-upon action plans, and the application has been executed successfully to validate the accuracy of data, interfaces, and connectivity.

Pass/Fail Criteria

The results of each test must be compared to the pre-defined expected test results, as documented in the ADTP (and DTP where applicable). Actual results are logged in the Test Case detail within the Detail Test Plan if they differ from the expected results. If the actual results match the expected results, the Test Case can be marked as a passed item without logging the duplicated results.

A test case passes if it produces the expected results as documented in the ADTP or Detail Test Plan (manual test plan). A test case fails if the actual results produced by its execution do not match the expected results. The source of a failure may be the application under test, the test case, the expected results, or the data in the test environment. Test case failures must be logged regardless of the source of the failure.

Any bugs or problems will be logged in the DEFECT TRACKING TOOL. The responsible application resource corrects the problem and tests the repair. Once this is complete, the tester who generated the problem log is notified, and the item is re-tested. If the retest is successful, the status is updated and the problem log is closed. If the retest is unsuccessful, or if another problem has been identified, the problem log status is updated and the problem description is updated with the new findings.
It is then returned to the responsible application personnel for correction and retest.

Severity Codes are used to prioritize work in the test phase. They are assigned by the test group and are not modifiable by any other group. The standard Severity Codes used for identifying defects are:

Table 1 - Severity Codes

1. Critical - Automated tests cannot proceed further within the applicable test case (no workaround).
2. High - The test case or procedure can be completed, but produces incorrect output when valid information is input.
3. Medium - The test case or procedure can be completed and produces correct output when valid information is input, but produces incorrect output when invalid information is input (e.g. if the specifications state that no special characters are allowed, but the system lets a user continue after entering a special character, this is a medium severity).
4. Low - All test cases and procedures passed as written, but there could be minor revisions, cosmetic changes, etc. These defects do not impact the functional execution of the system.

The use of the standard Severity Codes produces four major benefits:

• Standard Severity Codes are objective and can be easily and accurately assigned by those executing the test, minimizing time spent discussing the appropriate priority of a problem.
• Standard Severity Code definitions allow an independent assessment of the risk to the on-schedule delivery of a product that functions as documented in the requirements and design documents.
• Standard Severity Codes help ensure consistency, at an appropriate level of detail, across the requirements, design, and test documentation.
• Standard Severity Codes promote effective escalation procedures.
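Because the severity definitions are objective, assignment can be made mechanical. A sketch encoding Table 1 as a lookup - the boolean flags are a simplification of the textual definitions:

```python
# Assign a severity code (1-4) per Table 1 from three observable facts
# about a test-case run.
def severity(can_proceed, valid_input_correct, invalid_input_correct):
    if not can_proceed:
        return 1    # Critical: test blocked, no workaround
    if not valid_input_correct:
        return 2    # High: incorrect output for valid input
    if not invalid_input_correct:
        return 3    # Medium: incorrect handling of invalid input
    return 4        # Low: passed as written; only minor/cosmetic issues

print(severity(True, True, False))   # e.g. a disallowed special character accepted
```

The order of the checks matters: a blocked test is Critical regardless of output correctness, mirroring the priority order of the table.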
Test Scope

The scope of testing identifies the items which will be tested and the items which will not be tested within the System/Integration Phase of testing.

Items to be tested by Automation (PRODUCT NAME ...)

Items not to be tested by Automation (PRODUCT NAME ...)
Test Approach

Description of Approach

The mission of Automated Testing is to identify recordable test cases through all appropriate paths of a website, create repeatable scripts, interpret test results, and report to project management. For the Generic Project, the automation test team will focus on positive testing and will complement the manual testing performed on the system. Automated test results will be generated, formatted into reports, and provided on a consistent basis to Generic project management.

System testing is the process of testing an integrated hardware and software system to verify that the system meets its specified requirements. It verifies proper execution of the entire set of application components, including interfaces to other applications. Project teams of developers and test analysts are responsible for ensuring that this level of testing is performed.

Integration testing is conducted to determine whether or not all components of the system are working together properly. This testing focuses on how well all parts of the web site hold together, whether the inside and outside of the website are working, and whether all parts of the website are connected. Project teams of developers and test analysts are responsible for ensuring that this level of testing is performed.

For this project, the System and Integration ADTP and Detail Test Plan complement each other. Since the goal of System and Integration phase testing is to identify the quality of the structure, content, accuracy and consistency, response time and latency, and performance of the application, test cases are included which focus on determining how well this quality goal is accomplished.

Content testing focuses on whether the content of the pages matches what is supposed to be there, whether key phrases are consistently present in changeable pages, and whether the pages maintain quality content from version to version.
Accuracy and consistency testing focuses on whether today's copies of the pages download the same as yesterday's, and whether the data presented to the user is sufficiently accurate.

Response time and latency testing focuses on whether the web site server responds to a browser request within certain performance parameters, whether response time after a SUBMIT is acceptable, and whether parts of the site are so slow that the user stops working. Although Loadrunner provides the full measure of this test, there will be various ad hoc time measurements within certain Winrunner scripts as needed.

Performance testing (Loadrunner) focuses on whether performance varies by time of day or by load and usage, and whether performance is adequate for the application. Completion of automated test cases is denoted in the test cases with an indication of pass/fail and follow-up action.

Test Definition

This section addresses the development of the components required for the specific test, including identification of the functionality to be tested by automation and the associated automated test cases and scenarios. The development of the test components parallels, with a slight lag, the development of the associated product components.
Test Functionality Definition (Requirements Testing)

The functionality to be tested by automation is listed in the Traceability Matrix, attached as an appendix. For each function to undergo automated testing, the Test Case is identified. Automated Test Cases are given unique identifiers to enable cross-referencing between related test documentation, and to facilitate tracking and monitoring of the test progress. As much information as is available is entered into the Traceability Matrix in order to complete the scope of automation during the System/Integration Phase of the test.
Test Case Definition (Test Design)

Each Automated Test Case is designed to validate the associated functionality of a stated requirement. Automated Test Cases include unambiguous input and output specifications. This information is documented within the Automated Test Cases in Appendix 8.5 of this ADTP.
Test Data Requirements

The automated test data required for the test is described below. The test data will be used to populate the databases and/or files used by the application/system during the System/Integration Phase of the test. In most cases, the automated test data will be built by the OTS Database Analyst or OTS Automation Test Analyst.
Automation Recording Standards

Initial Automation Testing Rules for the Generic Project:

1. Ability to move through all paths within the applicable system
2. Ability to identify and record the GUI Maps for all associated test items in each path
3. Specific times for loading into the automation test environment
4. Code frozen between loads into the automation test environment
5. Minimum acceptable system stability

Winrunner Menu Settings

1. Default recording mode is CONTEXT SENSITIVE
2. Record owner-drawn buttons as OBJECT
3. Maximum length of list item to record is 253 characters
4. Delay for Window Synchronization is 1000 milliseconds (unless Loadrunner is operating in the same environment, in which case it must be increased appropriately)
5. Timeout for checkpoints and CS statements is 1000 milliseconds
6. Timeout for Text Recognition is 500 milliseconds
7. All scripts will stop and start on the main menu page
8. All recorded scripts will remain short, as debugging is easier; however, entire scripts, or portions of scripts, can be chained together for long runs once the environment has greater stability
Winrunner Script Naming Conventions

1. All automated scripts will begin with the GE abbreviation, representing the Generic Project, and be filed under the Winrunner on LAB11 W Drive/Generic/Scripts Folder.
2. GE will be followed by the Product Path name in lower case: air, htl, car.
3. After an automated script has been debugged, a date will be attached to the script name: 0710 for July 10. When significant improvements have been made to the same script, the date will be changed.
4. As incremental improvements are made to an automated script, version numbers will be attached to identify the script with the latest improvements, e.g. XX0710.1, XX0710.2 - the .2 version is the most up-to-date.
Winrunner GUIMAP Naming Conventions

1. All Generic GUI Maps will begin with XX followed by the area of test. E.g. the XXpond GUI Map represents all pond paths, the XXEmemmainmenu GUI Map represents all membership and main menu concerns, and the XXlogin GUI Map represents all XX login concerns.
2. As there can only be one GUI Map for each Object etc. on the site, the GUI Maps are under constant revision when the site is undergoing frequent program loads.
Winrunner Result Naming Conventions

1. When beginning a script, allow the default res## name to be filed.
2. After a successful run of a script whose results will be used toward a report, move the file to results and rename it: XX for the project name, res for Test Results, 0718 for the date the script was run, your initials, and the original default number for the script. E.g. XXres0718jr.1
Winrunner Report Naming Conventions

1. When the test result files for the day have been compiled and the statistics confirmed, a report will be filed that is accessible to upper management. The daily report file name will be as follows: XXdaily0718 - XX for the project name, daily for daily report, and 0718 for the date the report was issued.
2. When the test result files for the week have been compiled and the statistics confirmed, a report will be filed that is accessible to upper management. The weekly report file name will be as follows: XXweek0718 - XX for the project name, week for weekly report, and 0718 for the date the report was issued.

Winrunner Script, Result and Report Repository

1. LAB 11, located within the XX Test Lab, will house the original Winrunner Script, Results and Report Repository for automated testing within the Generic Project. WRITE access is granted to Winrunner Technicians, and READ ONLY access is granted to those who are authorized to run scripts but not to make any improvements. This is meant to maintain the purity of each script version.
2. Winrunner on LAB11 W Drive houses all Winrunner-related documents etc. for XX automated testing.
3. Project file folders for the Generic Project represent the initial structure of project folders utilizing automated testing. As automation becomes more advanced, the structure will spread to other appropriate areas.
4. Under each Project file folder, folders for SCRIPT, RESULT and REPORT can be found.
5. All automated scripts generated for each project will be filed under the Winrunner on LAB11 W Drive/Generic/Scripts Folder and moved to the ARCHIVE SCRIPTS folder as necessary.
6. All GUI MAPS generated will be filed under the Winrunner on LAB11 W Drive/Generic/Scripts/gui_files Folder.
7. All automated test results are filed under the individual Script Folder after each script run. Results will be referred to, and reports generated, utilizing applicable statistics.
Automated Test Results referenced by reports sent to management will be kept under the Winrunner on LAB11 W Drive/Generic/Results Folder. Before work begins on evaluating a new set of test results, all prior results are placed into the Winrunner on LAB11 W Drive/Generic/Results/Archived Results Folder. This ensures all reported statistics are available for closer scrutiny when required.
8. All reports generated from automated scripts and sent to upper management will be filed under the Winrunner on LAB11 W Drive/Generic/Reports Folder.
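The naming conventions above can be captured in small helpers so that file names stay consistent across the team. A sketch - the GE/XX prefixes and mmdd date form follow the conventions, but the helper names themselves are hypothetical:

```python
# Build Winrunner script, result, and report file names per the
# project's naming conventions.

def script_name(path, mmdd, version):
    """GE + product path + date + version, e.g. 'GEair0710.2'."""
    return f"GE{path}{mmdd}.{version}"

def result_name(mmdd, initials, run_no):
    """XX + 'res' + date + tester initials + run number, e.g. 'XXres0718jr.1'."""
    return f"XXres{mmdd}{initials}.{run_no}"

def report_name(period, mmdd):
    """period is 'daily' or 'weekly' -> 'XXdaily0718' / 'XXweek0718'."""
    return f"XX{'daily' if period == 'daily' else 'week'}{mmdd}"

print(script_name("air", "0710", 2),
      result_name("0718", "jr", 1),
      report_name("weekly", "0718"))
```

Generating names this way avoids the drift that creeps in when each tester types the convention by hand.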
Test Preparation Specifications

Test Environment

Environment for Automated Test

The Automated Test environment is indicated below. Existing dependencies are entered in the comments.
Environment / Test System / Comments:
• Test - System/Integration Test (SIT) Cert - Access via http://xxxxx/xxxxx
• Production - Production - Access via http:// www.xxxxxx.xxx
• Other (specify) - Development - Individual Test Environments

Hardware for Automated Test

The following is a list of the hardware needed to create a production-like environment:

Manufacturer / Device Type:
• Various - Personal Computer (486 or higher) with monitor and required peripherals, with connectivity to internet test/production environments. Must be enabled to ADDITIONAL REQUIREMENTS.

Software

The following is a list of the software needed to create a production-like environment:

Software / Version (if applicable) / Programmer Support:
• Netscape Navigator - ZZZ or higher - -
• Internet Explorer - ZZZ or higher - -

Test Team Roles and Responsibilities

Role / Responsibilities / Name:
• COMPANY NAME Sponsor - Approve project development, handle major issues related to project development, and approve development resources - Name, Phone
• XXX Sponsor - Signature approval of the project, handle major issues - Name, Phone
• XXX Project Manager - Ensures all aspects of the project are being addressed from the CUSTOMERS' point of view - Name, Phone
• COMPANY NAME Development Manager - Manage the overall development of the project, including obtaining resources, handling major issues, approving technical design and overall timeline, and delivering the overall product according to the Partner Requirements - Name, Phone
• COMPANY NAME Project Manager - Provide PDD (Project Definition Document), project plan and status reports; track project development status; manage changes and issues - Name, Phone
• COMPANY NAME Technical Lead - Provide technical guidance to the Development Team and ensure that overall development is proceeding in the best technical direction - Name, Phone
• COMPANY NAME Back End Services Manager - Develop and deliver the necessary Business Services to support the PROJECT NAME - Name, Phone
• COMPANY NAME Infrastructure Manager - Provide PROJECT NAME development certification, production infrastructure, service level agreement, and testing resources - Name, Phone
• COMPANY NAME Test Coordinator - Develops the ADTP and Detail Test Plans, tests changes, logs incidents identified during testing, and coordinates the testing effort of the test team for the project - Name, Phone
• COMPANY NAME Tracker Coordinator / Tester - Tracks XXX's in the DEFECT TRACKING TOOL. Reviews new XXX's for duplicates and completeness and assigns them to Module Tech Leads for fix. Produces status documents as needed. Tests changes, logs incidents identified during testing. - Name, Phone
• COMPANY NAME Automation Engineer - Tests changes, logs incidents identified during testing - Name, Phone

Test Team Training Requirements

Automation Training Requirements

Training Requirement / Training Approach / Target Date for Completion / Roles/Resources to be Trained:
. . . .
Automation Test Preparation

1. Write and receive approval of the ADTP from Generic Project management.
2. Manually test the cases in the plan to make sure they actually work before recording repeatable scripts.
3. Record the appropriate scripts and file them according to the naming conventions described within this document.
4. The initial order of automated script runs will be to load GUI maps through a STARTUP script. After the successful run of this script, scripts testing all paths will be kicked off. Once an appropriate number of PNR's are generated, GenericCancel scripts will be used to automatically take the inventory out of the test profile and system environment. During the automation test period, requests for testing of certain functions can be accommodated as necessary, as long as those functions can be tested by automation.
5. Access to Generic Automation will be READ ONLY for anyone outside of the test group. This is required to maintain the pristine condition of the master scripts in our data repository.
6. The Generic Test Group will conduct automated tests under the rules specified in our agreement for use of the WinRunner tool marketed by Mercury Interactive.
7. Results filed for each run will be analyzed as necessary; reports will be generated and provided to upper management.

Test Issues and Risks

Issues

The table below lists known project testing issues to date. Upon sign-off of the ADTP and Detail Test Plan, this table will not be maintained; these issues and all new issues will be tracked through the Issue Management System, as indicated in the project's approved Issue Management Process.

(Issue | Impact | Target Date for Resolution | Owner)
The COMPANY NAME test team is not in possession of market data regarding which browsers are most used in the CUSTOMER target market. | Testing may not cover some browsers used by CLIENT customers. | Beginning of automated testing during the System and Integration Test phase. | CUSTOMER TO PROVIDE
OTHER | . | . | .
Risks

The table below identifies any high-impact or highly probable risks that may affect the success of the automated testing process.

Risk Assessment Matrix
(Risk Area | Potential Impact | Likelihood of Occurrence | Difficulty of Timely Detection | Overall Threat (H, M, L))
1. Unstable Environment | Delayed start | HISTORY OF PROJECT | Immediately | .
2. Quality of Unit Testing | Greater delays taken by automated scripts | Dependent upon the quality standards of the development group | Immediately | .
3. Browser Issues | Intermittent delays | Dependent upon browser version | Immediately | .

Risk Management Plan
(Risk Area | Preventative Action | Contingency Plan | Action Trigger | Owner)
1. | Meet with Environment Group | . | . | .
2. | Meet with Development Group | . | . | .
3. | . | . | . | .

Traceability Matrix

The purpose of the Traceability Matrix is to identify all business requirements and to trace each requirement through the project's completion. Each business requirement must have an established priority, as outlined in the Business Requirements Document. The priorities are:

Essential - Must satisfy the requirement to be accepted by the customer.
Useful - Value-added requirement influencing the customer's decision.
Nice-to-have - Cosmetic, non-essential condition that makes the product more appealing.

The Traceability Matrix will change and evolve throughout the entire project life cycle. The requirement definitions, priority, functional requirements, and automated test cases are subject to change, and new requirements can be added. However, if new requirements are added or existing requirements are modified after the Business Requirements document and this document have been approved, the changes will be subject to the change management process. The Traceability Matrix for this project will be developed and maintained by the test coordinator. At the completion of the matrix definition and the project, a copy will be added to the project notebook.
Functional Areas of Traceability Matrix

(# | Functional Area | Priority)
B1 | Pond | E
B2 | River | E
B3 | Lake | U
B4 | Sea | E
B5 | Ocean | E
B6 | Misc | U
B7 | Modify | E
L1 | Language | E
EE1 | End-to-End Testing | EE

Legend: B = Order Engine; L = Language; EE = End-to-End; E = Essential; U = Useful; N = Nice-to-have

Definitions for Use in Testing

Test Requirement
A test requirement, or scenario, is a prose statement of requirements for the test. Just as there are high-level and detailed requirements in application development, there is a need to provide detailed requirements in the test development area.
Test Case A test case is a transaction or list of transactions that will satisfy the requirements statement in a test scenario. The test case must contain the actual entries to be executed as well as the expected results, i.e., what a user entering the commands would see as a system response.
Test Procedure Test procedures define the activities necessary to execute a test case or set of cases. Test procedures may contain information regarding the loading of data and executables into the test system, directions regarding sign in procedures, instructions regarding the handling of test results, and anything else required to successfully conduct the test.
Automated Test Cases

NAME OF FUNCTION Test Case

Project Name/Number: Generic Project / Project Request #
Date:
Test Case Description: Check that all drop-down boxes, fill-in boxes, and pop-up windows on the main Pond web page operate according to requirements.
Build #:
Run #:
Function/Module Under Test: B1.1
Execution Retry #:
Test Requirement # / Case #: AB1.1.1 ("A" for Automated)
Written by:
Goals: Verify that the Pond module functions as required.
Setup for Test: Access the browser, go to ...
Pre-conditions: Log in with name and password. Upon arriving at the Generic Main Menu...

Steps (Step | Action | Expected Results | Pass/Fail | Actual Results if Step Fails):
1. From the Generic Main Menu, click on the Pond gif and go to the Pond web page.
2. Once on the Pond web page, check all drop-down boxes for appropriate information (e.g., Time: 7a, 8a in 1-hour increments), fill-in boxes (Remarks allows alpha and numeric characters but no other special characters), and pop-up windows (e.g., Privacy: ensure it is retrieved, has the correct verbiage, and closes).
Each automation project team needs to write up an automation standards document stating the following:
• The installation configuration of the automation tool.
• How the client machines' environment will be set up.
• Where the network repositories and manual test plan documents are located.
• The drive letter that all client machines must map to.
• How the automation tool will be configured.
• The servers and databases the automation will run against.
• Any naming standards that the test procedures, test cases, and test plans will follow.
• Any recording standards and scripting standards that all scripts must follow.
• What components of the product will be tested.

Installation Configuration

(Install Step | Selection | Completed)
Installation Components | Full |
Destination Directory | C:\sqa6 |
Type of Repository | Microsoft Access |
Scripting Language | SQA Basic only |
Test Station Name | Your PC name |
DLL Messages | Overlay all DLLs the system prompts for; Robot will not run without its own DLLs. |
Client Machine Configuration

(Configuration Item | Setting | Notes)
Lotus Notes | Shut down Lotus Notes before using Robot. | Prevents mail notification messages from interrupting your scripts and allows Robot to have more memory.
Close all applications | Close down all applications (except the SQA Robot recorder and the application you are testing). | Frees up memory on the PC.
Shut down printing | Open the Printers window from the Start menu; select File > Server Properties; select the Advanced tab; un-check the Notify check box. |
Network | Bring up a DOS prompt, select the Z drive, and type CASTOFF. |
Turn off screensavers | Select NONE, or change the timeout to 90 minutes. |
Display settings for PC | Set in the Control Panel Display applet: Colors 256; font size small; desktop 800 x 600 pixels. |
Map a network drive to {LETTER} | Bring up Explorer and map a network drive. |
Repository Creation

Repository Name:
Location:
Mapped Drive Letter:
Project Name:
Users set up for Project: Admin (no password)
.sbh files used in project scripts:

Client Setup Options for the SQA Robot Tool

Recording:
- ID list selections by: Contents
- ID menu selections by: Text
- Record unsupported mouse drags as: Mouse click if within object
- Window positions: Record object as text; auto-record window size
- While recording: Put Robot in background

Playback:
- Test procedure control: Delay between commands: 5000 milliseconds
- Partial window caption: On each window search
- Caption matching options: Check "Match reverse captions", "Ignore file extensions", "Ignore parenthesis"

Test Log:
- Test log management: Output playback results to test log; All details; Update SQA repository; View test log after playback
- Test log data: Specify test log info at playback

Unexpected Window:
- Detect: checked
- Capture: checked
- Playback response: Select pushbutton with focus
- On failure to remove: Abort playback

Wait States:
- Wait positive/negative region: Retry: 4; Timeout after: 90
- Automatic wait: Retry: 2; Timeout after: 120

Keystroke Option:
- Playback delay: 100 milliseconds; check "Record delay after Enter key"

Error Recovery:
- On script command failure: Abort playback
- On test case failure: Continue execution
- SQA trap: Check all but the last 2

Object Recognition: Do not change
Object Data Test Definitions: Do not change
Editor: Leave with defaults
Preferences: Leave with defaults

Servers and Databases

Identify what servers and databases the automation will run against. This {Project name} will use the following servers: {Add servers}. On these servers it will use the following databases: {Add databases}.
Naming standards for test procedures, cases and plans The naming standards for this project are:
Recording standards and scripting standards To ensure that scripts are compatible across the various clients and run with minimal maintenance, the following recording standards have been set for all recorded scripts.
1. Use assisting scripts to open and close applications and activity windows.
2. Use global constants to pass data into scripts and between scripts.
3. Prefer main menu selections over double-clicks, toolbar items, and pop-up menus whenever possible.
4. Each test procedure should have a manual test plan associated with it.
5. Do not save in the test procedure unless it is absolutely necessary; this prevents the need to write numerous clean-up scripts.
6. Do a window-existence test for every window you open; this prevents scripts from dying on slow client/server calls.
7. Do not use the mouse for drop-down selections; whenever possible, use hotkeys and the arrow keys.
8. When navigating through a window, use the Tab and arrow keys instead of the mouse; this makes future script maintenance due to UI changes easier.
9. Create a template header file called testproc.tpl. This file will insert template header information at the top of all recorded scripts. The template area can be used for modification tracking and commenting on the script.
10. Comment all major selections or events in the script; this makes debugging easier.
11. Make sure you maximize all MDI main windows in the initial login scripts.
12. When recording, make sure you begin and end your scripts in the same position. For example, on the platform browser, always start your script by opening the browser tree and selecting your activity (this ensures that the activity window is always in the same position); likewise, always end your scripts by collapsing the browser tree.
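Standards 6 and 7 above can be illustrated with a short WinRunner TSL sketch (the window and object names here are hypothetical, not taken from this project):

```tsl
# Standard 6: test that the window exists before acting on it,
# so a slow client/server call cannot kill the script.
if (win_exists("Flight Reservation", 10) == E_OK)
{
    set_window("Flight Reservation", 10);
    # Standard 7: use the keyboard instead of mouse drags
    # to make a drop-down selection.
    obj_type("Fly From:", "<kDown><kDown><kReturn>");
}
else
    pause("Window \"Flight Reservation\" not found - aborting.");
```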
Describe what components of the product will be tested. This project will test the following components: The objective is to:
WinRunner Fundamentals The five major areas to know for WinRunner are listed below, with SOME of the subtopics called out for each of the major topics:

1) GUI Map
- Learning objects
- Mapping custom objects to standard objects
2) Record/Playback
- Record modes: Context Sensitive and Analog
- Playback modes: (Batch), Verify, Update, Debug
3) Synchronization
- Using the wait parameter of functions
- Wait for window/object info
- Wait for bitmap
- Hard wait()
4) Verification/Checkpoints
- Window/object GUI checkpoints
- Bitmap checkpoints
- Text checkpoints (requires TSL)
5) TSL (Test Script Language)
- To enhance scripts (flow control, parameterization, data-driven tests, user-defined functions, ...)
________________________________________
1. Calling Scripts and Expected Results When running in non-batch mode, WinRunner will always look in the calling script's \exp directory for the checks. When running in batch mode, WinRunner will look in the called script's \exp directory. There is a limitation, though: WinRunner will only look in the called script's \exp directory one call level deep. For example, in batch mode:

script1:
gui_check(...);  # will look in script1\exp
call "script2" ();
script2:
gui_check(...);  # will look in script2\exp
call "script3" ();
script3:
gui_check(...);  # will look in script2\exp (and cause an error)
In non-batch mode:
script1:
gui_check(...);  # will look in script1\exp
call "script2" ();
script2:
gui_check(...);  # will look in script1\exp (and cause an error)
call "script3" ();
script3:
gui_check(...);  # will look in script1\exp (and cause an error)
________________________________________
2. Run Modes
- Batch mode writes results to the individual called test.
- Interactive (non-batch) mode writes results to the main test.
________________________________________
5. Data Types TSL supports two data types, numbers and strings, and you do not have to declare them. Look at the online help topics for some things to be aware of: "TSL Language", "Variables and Constants", "Type (of variable or constant)". Generally, you shouldn't see any problems with comparisons. However, if you perform arithmetic operations you might see some unexpected behavior (again, check out the online help mentioned above):

var = "3abc4";
rc = var + 2;  # rc will be 5 :-)

________________________________________
6. Debugging When using pause(x); for debugging, wrap the variable with brackets to easily see whether "invisible" characters (\n, \t, space, or Null) are stored in the variable:

pause("[" & x & "]");

Use the debugging features of WinRunner to watch variables; "invisible" characters (\n, \t, space) will show themselves. Examples:

Variable     pause(x);   pause("[" & x & "]");
x="a1";      a1          [a1]
x="a1 ";     a1          [a1 ]
x="a1\t";    a1          [a1 ]
x="a1\n";    a1          [a1 ]
x="";                    []

________________________________________
7. Block Comments To temporarily comment out a block of code, wrap it in a conditional that never runs (the original used if (TRUE), which would still execute the block):

if (FALSE) {
    ... block of code to be commented out!!
}

________________________________________
8. Data-Driven Tests ddt_* functions vs. getline/split: Personally, I do not care one way or the other about the ddt_* or getline/split functions; they both do almost the same thing. There are some arguably good benefits to using ddt_*, but most of them are focused on data management. In general you can always keep the data in Excel and perform a Save As to convert the file to a delimited text file. One major difference is the performance of playing back a script that has a huge data file: the ddt_* functions currently cannot compare to the much faster getline/split method.
But here is an area to consider: READABILITY. I personally do not like scripts with too many nested function calls (which the parameterize-value method produces), because they may reduce readability for people without a programming background. Example:

edit_set("FirstName", ddt_val(table, "FirstName"));
edit_set("LastName", ddt_val(table, "LastName"));

So what I typically do is declare my own variables at the beginning of the script, assign the values to them, and use the variable names in the rest of the script. It doesn't matter whether I'm using the getline/split or ddt_val functions. This is also very useful when I need to change the value of a variable, because they are all initialized at the top of the script (whenever possible). Example with ddt_* functions in a script:

FIRSTNAME = ddt_val(table, "FirstName");
LASTNAME = ddt_val(table, "LastName");
...
edit_set("FirstName", FIRSTNAME);
edit_set("LastName", LASTNAME);

And most of the time I have a driving test which calls another test and passes an array of data to be used to update a form. Example with ddt_* functions before calling another script:
# Driver script will have
...
MyPersonArray["FIRSTNAME"] = ddt_val(table, "FirstName");
MyPersonArray["LASTNAME"] = ddt_val(table, "LastName");
call "AddPerson" (MyPersonArray);
...
# Called script will have
edit_set("FirstName", Person["FIRSTNAME"]);
edit_set("LastName", Person["LASTNAME"]);

So as you can see, there are many ways to do the same thing. What people must keep in mind is the skill level of the people who may inherit the scripts after they are created, and a consistent method should be used throughout the project.
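Putting the pieces together, a complete data-driven loop with the ddt_* functions might look like the following sketch (the table, window, and field names are assumptions for illustration only):

```tsl
table = "default.xls";                      # hypothetical data table
rc = ddt_open(table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause("Cannot open the data table.");

ddt_get_row_count(table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row(table, i);                  # move to the next data row
    # Initialize all variables at the top, as recommended above.
    FIRSTNAME = ddt_val(table, "FirstName");
    LASTNAME  = ddt_val(table, "LastName");

    set_window("Add Person", 10);           # assumed window name
    edit_set("FirstName", FIRSTNAME);
    edit_set("LastName", LASTNAME);
    button_press("OK");
}
ddt_close(table);
```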
________________________________________
9. String vs. Number Comparison String-vs.-number comparisons are not a good thing to do. Try this sample to see why:

c1 = 47.88 * 6;
c2 = "287.28";

# Prints a decimal value while suppressing non-significant zeros
# and converts the float to a string.
c3 = sprintf("%g", c1);
The user can create tests by recording, by programming, or by both. While recording, each operation performed by the user generates a statement in the Test Script Language (TSL). These statements are displayed as a test script in a test window. The user can then enhance the recorded test script, either by typing in additional TSL functions and programming elements, by using WinRunner's visual programming tool (the Function Generator), or by using the Function Viewer.
There are two modes of recording in WinRunner:
1. Context Sensitive mode records the operations the user performs on the application by identifying Graphical User Interface (GUI) objects. Context Sensitive test scripts can be reused against future versions of the application, because WinRunner writes a unique description of each selected object to a GUI map file. GUI map files are maintained separately from test scripts, and the same GUI map file (or files) can be used for multiple tests. For example, if the user clicks the Open button in an Open dialog box, WinRunner records the action and generates a script. When it runs the test, WinRunner looks for the Open dialog box and the Open button represented in the test script. If, in subsequent runs of the test, the button is in a different position in the Open dialog box, WinRunner is still able to find it.
2. Analog mode records mouse clicks, keyboard input, and the exact x- and y-coordinates traveled by the mouse. When the test is run, WinRunner retraces the mouse tracks. Use Analog mode when exact mouse coordinates are important to the test, such as when testing a drawing application. For example, if the user clicks the Open button in an Open dialog box, WinRunner records the movements of the mouse pointer. If, in subsequent runs of the test, the button is in a different position in the Open dialog box, WinRunner will not be able to find it. When recording in Analog mode, use softkeys rather than the WinRunner menus or toolbars to insert checkpoints, in order to avoid extraneous mouse movements.
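As a rough illustration of the difference, the same click on the Open button might be recorded as follows (the Analog statements are representative only; the actual recorded coordinates and timing will differ):

```tsl
# Context Sensitive mode: the object is identified by name,
# so its position in the dialog does not matter.
set_window("Open", 5);
button_press("Open");

# Analog mode: only raw mouse movement and clicks are recorded,
# tied to the exact screen coordinates at recording time.
move_locator_track(1);
mtype("<T110><kLeft>-<kLeft>+");
```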
There are four recording methods: 1) Record, 2) Pass Up, 3) As Object, and 4) Ignore.
1) Record instructs WinRunner to record all operations performed on a GUI object. This is the default record method for all classes. (The only exception is the static class (static text), for which the default is Pass Up.)
2) Pass Up instructs WinRunner to record an operation performed on this class as an operation performed on the element containing the object. Usually this element is a window, and the operation is recorded as win_mouse_click.
3) As Object instructs WinRunner to record all operations performed on a GUI object as though its class were the "object" class. 4) Ignore instructs WinRunner to disregard all operations performed on the class.
Some common settings to set in the General Options: 1. The default recording mode is Object mode. 2. The synchronization-point time is 10 seconds by default. 3. When test execution is in batch mode, ensure all interactive options are turned off so that the batch test runs uninterrupted. 4. In Text Recognition, if the application's text is not recognizable, set the default font group: the text group is identified with a user-defined name and then included in the General Options.
Checkpoints allow the user to compare the current behavior of the application being tested to its behavior in an earlier version. If any mismatches are found, WinRunner captures them as actual results. The user can add four types of checkpoints to test scripts:
GUI Checkpoints Bitmap Checkpoints Text checkpoints Database checkpoints
All mouse operations, including those performed on the WinRunner window or WinRunner dialog boxes, are recorded during an Analog recording session. Therefore, don't insert checkpoints or synchronization points, or select other WinRunner menu or toolbar options, during an Analog recording session. Note that even if the user chooses to record only on selected applications, the user can still create checkpoints and perform all other non-recording operations on all applications. Checkpoints should not depend on x- and y-coordinates: in practical terms, a checkpoint defined on x, y parameters is of little use for testing the application. The user cannot insert objects from different windows into a single checkpoint. Don't use bitmap or GUI checkpoints for dynamic verification; these checkpoints are purely for static verification. There are, of course, workarounds, but they are mostly not worth the effort.
GUI checkpoints verify information about GUI objects. For example, the user can check that a button is enabled, or see which item is selected in a list. There are three types of GUI checkpoints: For Single Property, For Object/Window, and For Multiple Objects.
GUI checkpoint for single property: the user can check a single property of a GUI object, for example whether a button is enabled or disabled, or whether an item in a list is selected. GUI checkpoint for object/window: the user can create a GUI checkpoint to check a single object in the application being tested, either with its default properties or with multiple user-specified properties. GUI checkpoint for multiple objects: the user can create a GUI checkpoint to check multiple objects in the application being tested, either with their default properties or with multiple user-specified properties per object. A Bitmap Checkpoint checks an object, a window, or an area of a screen in the application as a bitmap. While creating a test, the user indicates what to check. WinRunner captures the specified bitmap, stores it in the expected results folder (exp) of the test, and inserts a checkpoint in the test script. While running the test, WinRunner compares the bitmap currently displayed in the application being tested with the expected bitmap stored earlier. In the event of a mismatch, WinRunner captures the current actual bitmap and generates a difference bitmap. By comparing the three bitmaps (expected, actual, and difference), the user can identify the nature of the discrepancy. There are two types of bitmap checkpoints. Bitmap Checkpoint for Object/Window: the user can capture a bitmap of any window or object in the application by pointing to it. Bitmap Checkpoint for Screen Area: the user defines any rectangular area of the screen and captures it as a bitmap for comparison.
Text checkpoints read and check text in GUI objects and in areas of the screen. While creating a test, the user points to an object or a window containing text. WinRunner reads the text and writes a TSL statement to the test script. Later, the user can add simple programming elements to the test script to verify the contents of the text. Use a text checkpoint on a GUI object only when a GUI checkpoint cannot be used to check the text property. There are two types of text checkpoints: From Object/Window and From Screen Area.
Database checkpoints check the contents, and the number of rows and columns, of a result set based on a query the user creates on a database. There are three types of database checkpoints. Default Check: used to check the entire contents of a result set; default checks are useful when the expected results can be established before the test run. Custom Check: used to check the partial contents, the number of rows, and the number of columns of a result set. Runtime Record Check: the user can create runtime database record checkpoints to compare the values displayed in the application during the test run with the corresponding values in the database.
How to Create a Test Using WinRunner (2) GUI checkpoint GUI checkpoint for single property: the user can check a single property of a GUI object, for example whether a button is enabled or disabled, or whether an item in a list is selected. To create a GUI checkpoint for a property value, use the Check Property dialog box to add one of the following functions to the test script:
Syntax: function_name (name, property, property_value);

name: the logical name of the object to be checked
property: the property to be checked
property_value: the expected property value
The function checks that the current value of the specified property matches the expected property value. To create a GUI checkpoint for a property value: 1. Choose Insert > GUI Checkpoint > For Single Property. 2. The mouse pointer becomes a pointing hand, and the Check Property dialog box opens, showing the default function for the selected object. WinRunner automatically assigns argument values to the function. 3. The user can modify the arguments for the property check. To modify assigned argument values, choose a value from the Property list; the expected value is updated in the Expected text box. To choose a different object, click the pointing hand and then click the desired object.
If the user clicks an object that is not compatible with the selected function, a message states that the current function cannot be applied to the selected object.
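For example, checking that a (hypothetical) Open button is enabled might generate a statement like the following sketch:

```tsl
# Checks the single property "enabled"; expected value 1 (enabled).
button_check_info("Open", "enabled", 1);

# The generic form works for any object class:
obj_check_info("File Name:", "focused", 0);
```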
GUI checkpoint for object/window This checkpoint is used to check the state or properties of a single object or window in an application. If the user single-clicks a GUI object, the default checks for that object are included in the GUI checkpoint. If the user double-clicks a GUI object, the Check GUI dialog box opens after WinRunner captures the GUI data, and the user can choose which checks to include for that particular object. When a GUI Checkpoint command is used, WinRunner inserts a checkpoint statement into the test script.
For a GUI object class, WinRunner inserts an obj_check_gui statement, which compares current GUI object data to expected data.
object: the logical name or description of the GUI object; the object may belong to any class.
checklist: the name of the checklist defining the GUI checks.
expected_results_file: the name of the file that stores the expected GUI data.
time: the interval, in seconds, added to the timeout test option during the test run.

For a window, WinRunner inserts a win_check_gui statement, which compares current GUI data to expected GUI data for a window.
WinRunner names the first checklist in the test as list1.ckl and the first expected results file gui1.
During test creation, the GUI data is captured and stored. When the user runs the test, the current GUI data is compared to the data stored in the expected_results_file, according to the checklist. A file containing the actual results is also generated.
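A recorded object/window GUI checkpoint therefore looks something like this sketch (the window, object, checklist, and results-file names are illustrative):

```tsl
set_window("Open", 10);

# Compare the current state of the Open button against the checks
# defined in list1.ckl, using expected data stored in gui1.
obj_check_gui("Open", "list1.ckl", "gui1", 1);

# The window-level equivalent:
win_check_gui("Open", "list2.ckl", "gui2", 1);
```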
GUI checkpoint for multiple objects The checkpoint statements WinRunner inserts for a GUI checkpoint for multiple objects and for a GUI checkpoint for object/window are the same.
To create a GUI checkpoint for two or more objects, click the GUI Checkpoint for Multiple Objects button on the User toolbar. The Create GUI Checkpoint dialog box opens.
To add an object, click the Add button once. If the user clicks a window title bar or menu bar, a window pops up asking "You are currently pointing at a window. What do you wish to check inside the window?" (objects or menus). The user can continue to choose objects by clicking the Add button.
Click the right mouse button to stop the selection process and to restore the mouse pointer to its original shape. The Create GUI Checkpoint dialog box reopens. The Objects pane contains the name of the window and objects included in the GUI checkpoint. To specify which objects to check, click an object name in the Objects pane. The Properties pane lists all the properties of the object. The default properties are selected.
The expected values are stored under the test's exp folder. A GUI checklist includes only the objects and the properties to be checked; it does not include the expected results for the values of those properties. WinRunner has an edit-checklist option under the Insert menu: to modify a GUI checklist file, select Edit GUI Checklist. This brings up a dialog box with the option to select the checklist file to modify, as well as an option to select the scope of the checklist file, whether test-specific or shared.
Bitmap Checkpoints

Bitmap Checkpoint for Object/Window To create a bitmap checkpoint for an object or window, choose Insert > Bitmap Checkpoint > For Object/Window.
The WinRunner window is minimized and the mouse pointer becomes a pointing hand. Point to the object or window and click it.
WinRunner captures the bitmap and generates a TSL statement in the script.
The TSL statement generated for a window bitmap has the following syntax:
win_check_bitmap (window, bitmap, time);
The TSL statement generated for an object bitmap has the following syntax: obj_check_bitmap (object, bitmap, time);
window or object The logical name or description of the window or object. bitmap A string expression that identifies the captured bitmap.
time The interval marking the maximum delay between the previous input event and the capture of the current bitmap, in seconds. This interval is added to the timeout test option before the next statement is executed.
The win_check_bitmap function captures and compares bitmaps of a window or window area. During test creation, the specified window or area is captured and stored. During a test run, the current bitmap is compared to the one stored in the database; if they are different, the actual bitmap is captured. This function is generated while recording a test. Because the checkpoint needs expected results to compare against, the test should first be run in Update mode to capture them.
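Typical recorded statements look like the following sketch (the window, object, and bitmap names are illustrative):

```tsl
set_window("Paint", 10);

# Compare a single object against the stored bitmap "Img1".
obj_check_bitmap("ToolBox", "Img1", 10);

# Compare the whole window against the stored bitmap "Img2".
win_check_bitmap("Paint", "Img2", 10);
```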
Bitmap Checkpoint for Screen Area
To create a Bitmap Checkpoint for Screen Area, choose Insert > Bitmap Checkpoint > For Screen Area.
The WinRunner window is minimized and the mouse pointer becomes a crosshairs pointer. Mark the area to be captured: press the left mouse button and drag the mouse pointer until a rectangle encloses the area; then release the mouse button.
Press the right mouse button to complete the operation. WinRunner captures the area and generates a win_check_bitmap statement in the script.
The win_check_bitmap statement for an area of the screen has the following syntax:
win_check_bitmap (window, bitmap, time, x, y, width, height);
x, y For an area bitmap: the coordinates of the upper-left corner, relative to the window in which the selected area is located. width, height For an area bitmap: the size of the selected area, in pixels.
When an area of the window is captured, the additional parameters x, y, width, and height define the area's location and dimensions.
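For an area checkpoint, the same call simply carries the extra location parameters. A minimal sketch (the window name, bitmap identifier, and coordinates are illustrative):

```
# Check a 150x50-pixel area whose upper-left corner is at (10, 60),
# relative to the (hypothetical) "Flight Reservation" window.
win_check_bitmap ("Flight Reservation", "Img2", 8, 10, 60, 150, 50);
```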
The analog version of win_check_bitmap is check_window. Based on the parameters described below, the syntax is as follows:
check_window ( time, bitmap, window, width, height, x, y [, relx1, rely1, relx2, rely2] );
time - Indicates the interval between the previous input event and the bitmap capture, in seconds. This interval is added to the timeout_msec testing option. The sum is the interval between the previous event and the bitmap capture, in seconds. bitmap - A string identifying the captured bitmap. The string length is limited to 6 characters. window - A string indicating the name in the window banner. width, height - The size of the window, in pixels.
x, y - The position of the upper left corner of the window (relative to the screen). In the case of an MDI child window, the position is relative to the parent window. relx1, rely1 For an area bitmap: the coordinates of the upper left corner of the rectangle, relative to the upper left corner of the client window (the x and y parameters). relx2, rely2 For an area bitmap: the coordinates of the lower right corner of the rectangle, relative to the lower right corner of the client window (the x and y parameters).
The check_window function captures a bitmap of a window. During recording, the specified bitmap is captured and stored. During a test run, the current bitmap is compared to the bitmap stored in the database, and if it is different, the actual bitmap is captured.
Text checkpoints
Text checkpoints read text in GUI objects and in bitmaps and enable the user to verify the contents. When creating a text checkpoint for an object or a window containing text, WinRunner reads the text and writes a TSL statement to the test script. Using simple programming, the user can then make use of the text content.
User can use a text checkpoint to: 1. Read text from a GUI object or window in the application, using obj_get_text or win_get_text. The maximum number of characters that can be captured in one obj_get_text statement is 2048.
object The logical name or description of the GUI object. The object may belong to any class. out_text The name of the output variable that stores the captured text. x1,y1,x2,y2 An optional parameter that defines the location from which text will be read, relative to the specified object. The pairs of coordinates can designate any two diagonally opposite corners of a rectangle.
2. Search for text in an object or window, using obj_find_text or win_find_text, which return the location of a string within an object or window. obj_find_text (object, string, result_array [, search_area [, string_def]]); object The logical name or description of the object. The object may belong to any class.
string A valid string expression or the name of a string variable, which can include a regular expression. The regular expression should not include an exclamation mark (!), however, which is treated as a literal character. result_array The name of the four-element array that stores the location of the string. The elements are numbered 1 to 4. Elements 1 and 2 store the x- and y- coordinates of the upper left corner of the enclosing rectangle; elements 3 and 4 store the coordinates for the lower right corner.
search_area Indicates the area of the screen to search, expressed as the coordinates of any two diagonally opposite corners of a rectangle (two pairs of x,y coordinates).
string_def Defines the type of search to perform. If no value is specified (0 or FALSE, the default), the search is for a single, complete word only. When 1, or TRUE, is specified, the search is not restricted to a single, complete word. Note that any regular expression used in the string must not contain blank spaces, and in that case only the default value of string_def (FALSE) is in effect.
3. Compare two strings, using compare_text (str1, str2 [, chars1, chars2]); str1, str2 The two strings to be compared. chars1 One or more characters in the first string that should be considered equivalent to the character(s) specified in chars2. chars2 One or more characters in the second string that should be considered equivalent to the character(s) specified in chars1.
The compare_text function compares two strings, ignoring any differences specified. The two optional parameters are used to indicate characters that should be considered equivalent during the comparison. For instance, if the user specifies "m" and "n", the words "any" and "amy" will be considered a match. The two optional parameters must be of the same length. Note that blank spaces are ignored.
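The three text-checkpoint operations above can be sketched together as follows (the window, object, and variable names are illustrative, not from the original text):

```
# Read the visible text of a field into a variable.
set_window ("Flight Reservation", 5);
obj_get_text ("Order No:", order_text);

# Search for a string; pos_array receives the enclosing rectangle
# (elements 1-4: upper-left x,y and lower-right x,y).
win_find_text ("Flight Reservation", "Insert Order", pos_array);

# Compare two strings, treating "m" and "n" as equivalent,
# so "any" and "amy" are treated as matching.
match = compare_text ("any", "amy", "m", "n");
```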
WinRunner can read the visible text from the screen in most applications. If the Text Recognition mechanism is set to driver-based recognition, this process is automatic. However, if the Text Recognition mechanism is set to image-based recognition, WinRunner must first learn the fonts used by the application. When using the WinRunner text-recognition mechanism for Windows-based applications, keep in mind that it may occasionally retrieve unwanted text information (such as hidden text and shadowed text, which appears as multiple copies of the same string). The text recognition may behave differently in different run sessions depending on the operating system version, service packs, other installed toolkits, the APIs used, and so on. Therefore, when possible, it is highly recommended to retrieve or check text from an application window by inserting a standard GUI checkpoint and selecting to check the object's value (or similar) property.
When reading text with a learned font, WinRunner reads a single line of text only. If the captured text exceeds one line, only the leftmost line is read. If two or more lines have the same left margin, then the bottom line is read.
Database Checkpoint
Default Check on a Database
To create a default check on a database using ODBC or Microsoft Query, choose Insert > Database Checkpoint > Default Check. If Microsoft Query is installed and the user is creating a new query, an instruction screen opens for creating a query. If Microsoft Query is not installed, the Database Checkpoint wizard opens to a screen where the user can define the ODBC query manually. Define a query, copy a query, or specify an SQL statement.
WinRunner takes several seconds to capture the database query and restore the WinRunner window. WinRunner captures the data specified by the query and stores it in the test's exp folder. WinRunner creates the msqr*.sql query file to store the query, and stores the database checklist in the test's chklist folder.
A database checkpoint is inserted in the test script as a db_check statement. Syntax:- db_check (checklist, expected_results_file [, max_rows [, parameter_array]]);
checklist The name of the checklist specifying the checks to perform. expected_results_file The name of the file storing the expected database data. max_rows The maximum number of rows retrieved in a database. If no maximum is specified, then by default the number of rows is not limited. parameter_array The array of parameters for the SQL statement.
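A minimal db_check call using the two obligatory parameters might look like this (the file names shown are illustrative of the defaults WinRunner typically generates, not taken from the original text):

```
# Compare the current query results against the stored expected results.
db_check ("list1.cdl", "dbvf1");
```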
The db_check function captures and compares information about a database. During a test run, WinRunner checks the query of the database with the checks specified in the checklist. WinRunner then checks the information obtained during the test run against the expected results contained in the expected_results_file. Note: when using the Create > Database Checkpoint command to create a database checkpoint, only the first two (obligatory) parameters are included in the db_check statement (unless the user parameterizes the SQL statement from within Microsoft Query). If the user changes a parameter in a db_check statement recorded in the test script, the user must run the test in Update mode before running it in Verify mode. SQL queries used with db_check are limited to 4 KB in length.
Custom Check on a Database
When the user wants to create a custom check on a database, user creates a standard database checkpoint in which user can specify which properties to check on a result set. User can create a custom check on a database using ODBC, Microsoft Query or Data Junction. User can create a custom check on a database in order to:
Check the contents of part or the entire result set Edit the expected results of the contents of the result set Count the rows in the result set Count the columns in the result set
To create a custom check on a database: Choose Insert >Database Checkpoint >Custom Check
The Database Checkpoint wizard opens. Use ODBC or Microsoft Query to define a query, copy a query, or specify an SQL statement. WinRunner takes several seconds to capture the database query and restore the WinRunner window. If the user wants to edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column. WinRunner captures the current property values and stores them in the test's exp folder. WinRunner stores the database query in the test's chklist folder. A database checkpoint is inserted in the test script as a db_check statement. If the user is using Microsoft Query and wants to be able to parameterize the SQL statement in the db_check statement, then in the last wizard screen in Microsoft Query, click View data or edit query in Microsoft Query.
The default check for a multiple-column query on a database is a case-sensitive check on the entire result set by column name and row index. The default check for a single-column query on a database is a case-sensitive check on the entire result set by row position. If the result set contains multiple columns with the same name, WinRunner disregards the duplicate columns and does not perform checks on them. In that case, the user should create a custom check on the database and select the column index option.
Modifying a Standard Database Checkpoint
User can make the following changes to an existing standard database checkpoint: make a checklist available to other users by saving it in a shared folder, edit an existing database checklist, or modify a query in an existing checklist.
To save a database checklist in a shared folder: Choose Insert >Edit Database Checklist.
The Open Checklist dialog box opens. Select a database checklist and click OK. Under Scope, click Shared. Type in a name for the shared checklist.
*.sql files are not saved in shared database checklist folders. Checklists have the .cdl extension, while GUI checklists have the .ckl extension. The Objects pane contains “Database check” and the name of the *.sql query file or *.djs conversion file that will be included in the database checkpoint. The Properties pane lists the different types of checks that can be performed on databases. A check mark indicates that the item is selected and is included in the checkpoint. In the Properties pane, user can edit the database checklist to include or exclude the following types of checks:
ColumnsCount: Counts the number of columns in the result set. Content: Checks the content of the result set. RowsCount: Counts the number of rows in the result set.
To modify a query in an existing checklist, highlight the name of the query file or the conversion file, and click Modify. The Modify ODBC Query dialog box opens and the user can make modifications to the connection string and/or the SQL statement. After making the modifications, the user must run all tests that use this checklist in Update mode before running them in Verify mode.
Runtime record checkpoints
Runtime record checkpoints are useful when the information in the database changes from one run to the next. They enable the user to verify that the information displayed in the application was correctly inserted into the database or, conversely, that information from the database is successfully retrieved and displayed on the screen. If the comparison does not meet the success criteria the user specifies for the checkpoint, the checkpoint fails.
To add a runtime database record checkpoint, select Insert > Database Checkpoint > Runtime Record Check.
The Define Query screen pops up which enables user to select a database and define a query for the checkpoint. User can create a new query from database using Microsoft Query, or manually define an SQL statement.
The Next screen is the Match Database Field screen which enables user to identify the application control or text in application that matches the displayed database field.
The Next screen is the Matching Record Criteria screen which enables user to specify the number of matching database records required for a successful checkpoint.
A db_record_check statement is inserted into the script. The db_record_check() function compares information that appears in the application under test during a test run with the current values in the corresponding record(s) in the database.
Syntax of db_record_check ():- db_record_check (ChecklistFileName, SuccessConditions, RecordNumber [, Timeout]);
ChecklistFileName A file created by WinRunner and saved in the test's checklist folder. The file contains information about the data to be captured during the test run and its corresponding field in the database. The file is created based on the information entered in the Runtime Record Checkpoint wizard.
SuccessConditions Contains one of the following values:
DVR_ONE_OR_MORE_MATCH - The checkpoint passes if one or more matching database records are found. DVR_ONE_MATCH - The checkpoint passes if exactly one matching database record is found. DVR_NO_MATCH - The checkpoint passes if no matching database records are found. RecordNumber An out parameter that returns the number of matching records found in the database. Timeout The number of seconds before the query attempt times out.
User cannot use an SQL statement of the type "SELECT * FROM ..." with the db_record_check function. Instead, user must supply the table and field names explicitly. The reason for this is that WinRunner needs to know which database fields should be matched to which variables in the WinRunner script. The expected SQL format is:
SELECT table_name1.field_name1, table_name2.field_name2, ... FROM table_name1, table_name2, ...
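Putting the pieces together, a runtime record checkpoint call might be sketched as follows (the checklist name is the typical default list1.cvr; the variable name is illustrative):

```
# Pass only if exactly one matching database record is found;
# give the query attempt up to 10 seconds.
db_record_check ("list1.cvr", DVR_ONE_MATCH, record_num, 10);
# record_num now holds the number of matching records found.
```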
Editing a Runtime Database Record Checklist
User can make changes to a checklist created for a runtime database record checkpoint. A checklist includes the connection string to the database, the SQL statement or a query, the database fields in the data source, the controls in the application, and the mapping between them. It does not include the success conditions of a runtime database record checkpoint, so the user cannot edit the success conditions there. The user can change the success condition of the checkpoint by modifying the second parameter in the db_record_check statement in the test script.
To edit an existing runtime database record checklist:
Choose Insert > Edit Runtime Record Checklist. Select the checklist name from the Runtime Record Checkpoint wizard. By default, runtime database record checklists are named sequentially in each test, starting with list1.cvr.
The next screen is the Specify SQL statement screen where the user can modify the Connection String and SQL statement. If the user modified the SQL statement or query in Microsoft Query so that it now references an additional database field in the data source, the checklist will now include a new database field.
User must match this database field to an application control. Use the pointing hand in the next screen to identify the control or text that matches the displayed field name. New database fields are marked with a “New” icon.
If the user wants several db_record_check statements, each with different success conditions, the user can manually enter a db_record_check statement that references an existing checklist and specify the desired success conditions. User does not need to rerun the Runtime Record Checkpoint wizard for each new checkpoint.
Parameterize Standard Database Checkpoints
While creating a standard database checkpoint using ODBC (Microsoft Query), user can add parameters to an SQL statement to parameterize the checkpoint. A parameterized query is a query in which at least one of the fields of the WHERE clause is parameterized, i.e., the value of the field is specified by a question mark symbol (?).
To execute a parameterized query, user must specify the values for the parameters.
To parameterize the SQL statement in the checkpoint, the db_check function takes a fourth, optional argument: the parameter_array argument.
The parameter_array argument contains the values to substitute for the parameters in the parameterized checkpoint. Unlike regular database checkpoints, recording a parameterized checkpoint requires additional steps, because WinRunner cannot capture the expected result set while recording the test. The user must use array statements to supply the values to substitute for the parameters, and must run the test in Update mode once to capture the expected results set before running the test in Verify mode.
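As a sketch of those extra steps (the array contents and file names are illustrative, not from the original text), the parameter values are supplied through the fourth argument:

```
# Values substituted for the "?" placeholders in the parameterized WHERE clause.
param_values[1] = "New York";
param_values[2] = "1998";
# NO_LIMIT leaves the number of retrieved rows unrestricted.
db_check ("list1.cdl", "dbvf1", NO_LIMIT, param_values);
```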
TSL Functions for Working with ODBC (Microsoft Query)
When the user works with ODBC (Microsoft Query), user must perform the following steps in the following order:
Connect to the database. Execute a query and create a result set based on an SQL statement. Retrieve information from the database. Disconnect from the database.
Connect to the database. Syntax:-db_connect (session_name, connection_string [, timeout]); session_name The logical name or description of the database session. connection_string The connection parameters to the ODBC database. timeout The number of seconds before the login attempt times out.
The db_connect function creates the new session_name database session and uses the connection_string to establish a connection to an ODBC database. User can use the Function Generator to open an ODBC dialog box, in which user can create the connection string. If user tries to use a session name that has already been used, WinRunner will delete the old session object and create a new one using the new connection string.
Execute a query and create a result set based on an SQL statement. Syntax:-db_execute_query ( session_name, SQL, record_number ); SQL The SQL statement to be executed record_number An out parameter returning the number of records in the result query.
The db_execute_query function executes the query based on the SQL statement and creates a record set. User must use a db_connect statement to connect to the database before using this function.
Retrieve information from the database. Syntax:-db_get_field_value (session_name, row_index, column); row_index The index of the row, written as a string: "#" followed by the numeric index. (The first row is always numbered "#0".) column The name of the field in the column.
The db_get_field_value function returns the value of a single field in the specified row_index and column in the session_name database session. In case of an error, an empty string will be returned. Before using this function, the user must first connect to the database with a db_connect statement and execute a query with a db_execute_query statement.
Syntax:-db_get_headers (session_name, header_count, header_content); header_count The number of column headers in the query. header_content The column headers concatenated and delimited by tabs. If this string exceeds 1024 characters, it is truncated.
The db_get_headers function returns the header_count and the text in the column headers in the session_name database session. Before using this function, the user must first connect to the database with a db_connect statement and execute a query with a db_execute_query statement.
Syntax:-db_get_row (session_name, row_index, row_content); row_index The numeric index of the row. (The first row is always numbered "0".) row_content The row content as a concatenation of the fields values, delimited by tabs.
The db_get_row function returns the row_content of the specified row_index, concatenated and delimited by tabs, in the session_name database session. Before using this function, the user must first connect to the database with a db_connect statement and execute a query with a db_execute_query statement.
Syntax:-db_write_records (session_name, output_file [, headers [, record_limit]]); output_file The name of the text file in which the record set is written. headers An optional Boolean parameter that will include or exclude the column headers from the record set written into the text file. record_limit The maximum number of records in the record set to be written into the text file. A value of NO_LIMIT (the default value) indicates there is no maximum limit to the number of records in the record set.
The db_write_records function writes the record set of the session_name session into an output_file, delimited by tabs. Before using this function, the user must first connect to the database with a db_connect statement and execute a query with a db_execute_query statement.
Syntax:-db_get_last_error ( session_name, error ); error The error message.
The db_get_last_error function returns the last error message of the last ODBC or Data Junction operation in the session_name database session. If there is no error message, an empty string will be returned. User must use a db_connect statement to connect to the database before using this function.
Disconnect from the database. Syntax:-db_disconnect ( session_name );
The db_disconnect function disconnects from the session_name database session. User must use a db_connect statement to connect to the database before using this function.
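The four steps can be combined into one short session sketch (the DSN, table, column, and variable names below are illustrative assumptions, not from the original text):

```
# 1. Connect to the database.
db_connect ("session1", "DSN=Flight32", 10);

# 2. Execute a query; rec_num receives the number of records.
db_execute_query ("session1", "SELECT Customer_Name FROM Orders", rec_num);

# 3. Retrieve information from the result set.
first_value = db_get_field_value ("session1", "#0", "Customer_Name");
db_get_headers ("session1", header_count, header_content);
db_get_row ("session1", 0, row_content);
db_write_records ("session1", "output.txt", TRUE, NO_LIMIT);

# 4. Disconnect from the database.
db_disconnect ("session1");
```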
Specifying the Verification Method User can select the verification method to control how WinRunner identifies columns or rows within a result set. The verification method applies to the entire result set. Specifying the verification method is different for multiple-column and single-column result sets.
Specifying the Verification Method for a Multiple-Column Result Set
For columns:
Column Name (default setting) WinRunner looks for the selection according to the column names. A shift in the position of the columns within the result set does not result in a mismatch.
Index WinRunner looks for the selection according to the index, or position, of the columns. A shift in the position of the columns within the result set results in a mismatch. Select this option if the result set contains multiple columns with the same name.
For rows:
Row Key WinRunner looks for the rows in the selection according to the key(s) specified in the Select key columns list box, which lists the names of all columns in the result set. A shift in the position of any of the rows does not result in a mismatch. If the key selection does not identify a unique row, only the first matching row will be checked.
Index (default setting) WinRunner looks for the selection according to the index, or position, of the rows. A shift in the position of any of the rows results in a mismatch.
Specifying the Verification Method for a Single-Column Result Set
By position WinRunner checks the selection according to the location of the items within the column.
By content WinRunner checks the selection according to the content of the items, ignoring their location in the column.
Specifying the Verification Type WinRunner can verify the contents of a result set in several different ways. User can choose different verification types for different selections of cells.
Case Sensitive (the default) WinRunner compares the text content of the selection. Any difference in case or text content between the expected and actual data results in a mismatch.
Case Sensitive Ignore Spaces WinRunner checks the data in the field according to case and content, ignoring differences in spaces. WinRunner reports any differences in case or content as a mismatch.
Case Insensitive WinRunner compares the text content of the selection. Only differences in text content between the expected and actual data result in a mismatch.
Case Insensitive Ignore Spaces WinRunner checks the content in the cell according to content, ignoring differences in case and spaces. WinRunner reports only differences in content as a mismatch.
Numeric Content WinRunner evaluates the selected data according to numeric values. WinRunner recognizes, for example, that “2” and “2.00” are the same number.
Numeric Range WinRunner compares the selected data against a numeric range. Both the minimum and maximum values are any real number that the user specifies. This comparison differs from text and numeric content verification in that the actual database data is compared against the range that the user defined and not against the expected results.
Synchronization points
Synchronization points enable the user to solve anticipated timing problems between the test and the application. By inserting a synchronization point in the test script, the user can instruct WinRunner to suspend the test run and wait for a cue before continuing. This is useful for testing client-server systems, where the response time of the server varies significantly.
For Analog testing, user can also use a synchronization point to ensure that WinRunner repositions a window at a specific location. While running a test, the mouse cursor travels along exact coordinates. Repositioning the window enables the mouse pointer to make contact with the correct elements in the window.
There are three kinds of synchronization points:
Synchronization point for Property Values of Objects or Windows Synchronization point for Bitmaps of Objects and Windows Synchronization point for Bitmaps of Screen Areas
Depending on which Synchronization Point command the user chooses, WinRunner captures either the property value of a GUI object or a bitmap of a GUI object or area of the screen, and stores it in the expected results folder (exp). User can also modify the property value of a GUI object that is captured before it is saved in the expected results folder. When the user runs the test, WinRunner suspends the test run and waits for the expected bitmap or property value to appear. It then compares the current actual bitmap or property value with the expected bitmap or property value saved earlier. When the bitmap or property value appears, the test continues.
Synchronization point for Property Values of Objects or Windows When the user wants WinRunner to wait for an object or a window to have a specified property, user creates a property value synchronization point. A property value synchronization point is a synchronization point that captures a property value of Objects or Windows. It appears as a _wait_info statement in the test script, such as button_wait_info or list_wait_info.
For example, user can tell WinRunner to wait for a button to become enabled or for an item to be selected from a list.
To create a synchronization point for Property Values of Objects or Windows, go to Insert > Synchronization Point > For Object/Window Property.
When the user passes the mouse pointer over the application, objects and windows flash.
To select a window, user has to click the title bar or the menu bar of the desired window. To select an object, user has to click the object. A dialog box opens containing the name of the selected window or object. User can specify which property of the window or object to check, the expected value of that property, and the amount of time that WinRunner waits at the synchronization point.
Syntax:-button_wait_info (button, property, value, time); button The logical name or description of the button. property Any of the properties listed. value The property value. time Indicates the maximum interval, in seconds, before the next statement is executed.
The button_wait_info function waits for the value of a button property and then continues test execution. If the property does not return the required value, the function waits until the time expires before continuing the test run. The other functions used for synchronization points for Property Values of Objects or Windows are:
edit_wait_info Waits for the value of an edit property. list_wait_info Waits for the value of a list property. menu_wait_info Waits for the value of a menu property. obj_wait_info Waits for the value of an object property. scroll_wait_info Waits for the value of a scroll property. spin_wait_info Waits for the value of a spin property. static_wait_info Waits for the value of a static text property. statusbar_wait_info Waits for the value of a status bar property. tab_wait_info Waits for the value of a tab property. win_wait_info Waits for the value of a window property.
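For example, the enabled-button case mentioned above might be sketched as follows (the window and button names are illustrative, not from the original text):

```
set_window ("Flight Reservation", 5);
# Suspend the run until the "Insert Order" button's enabled property
# becomes 1, waiting at most 10 seconds before continuing anyway.
button_wait_info ("Insert Order", "enabled", 1, 10);
```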
Synchronization point for Bitmaps of Objects and Windows
When the user wants WinRunner to wait for a visual cue to be displayed, the user has to create a bitmap synchronization point. In a bitmap synchronization point, WinRunner waits for the bitmap of an object or a window to appear. It appears as a win_wait_bitmap or obj_wait_bitmap statement in the test script.
To create a synchronization point for Bitmaps of Objects and Windows, go to Insert > Synchronization Point > For Object/Window Bitmap.
To select the bitmap of an entire window, user has to click the window’s title bar or menu bar. To select the bitmap of an object, user has to click the object. During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the current bitmap with the expected one captured earlier. If the bitmaps match, then WinRunner continues the test.
Syntax:-obj_wait_bitmap (object, bitmap, time); object The logical name or description of the object. The object may belong to any class.
bitmap A string expression that identifies the captured bitmap.
time Indicates the interval between the previous input event and the capture of the current bitmap, in seconds. This parameter is added to the timeout testing option.
The obj_wait_bitmap function synchronizes a test run. It ensures that the bitmap of a specified GUI object appears on the screen before the test continues.
Waiting for Bitmaps of Screen Areas
User can create a bitmap synchronization point that waits for a bitmap of a selected area in the application. User can define any rectangular area of the screen and capture it as a bitmap for a synchronization point. It appears as a win_wait_bitmap or obj_wait_bitmap statement in the test script.
Syntax: - obj_wait_bitmap (object, bitmap, time [, x, y, width, height]); x, y For an area bitmap: the coordinates of the upper left corner, relative to the object in which the selected region is located. width, height For an area bitmap: the size of the selected region, in pixels.
To create a synchronization point for Bitmaps of Screen Areas, go to Insert > Synchronization Point > For Screen Area Bitmap.
The mouse pointer becomes a crosshairs pointer; user can use the crosshairs pointer to outline a rectangle around the area. The area can be any size: it can be part of a single window, or it can intersect several windows. WinRunner defines the rectangle using the coordinates of its upper left and lower right corners. These coordinates are relative to the upper left corner of the object or window in which the area is located. If the area intersects several objects in a window, the coordinates are relative to the window. If the selected area intersects several windows, or is part of a window with no title (a popup menu, for example), the coordinates are relative to the entire screen (the root window).
During a test run, WinRunner suspends test execution until the specified bitmap is displayed. It then compares the current bitmap with the expected bitmap. If the bitmaps match, then WinRunner continues the test.
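A recorded area-bitmap synchronization point might therefore look like this (the object name, bitmap identifier, and coordinates are illustrative):

```
# Wait up to 15 seconds for a 100x60-pixel area at (5, 25),
# relative to the (hypothetical) "Graph" object, to match the stored bitmap.
obj_wait_bitmap ("Graph", "Img3", 15, 5, 25, 100, 60);
```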
In the event of a mismatch, WinRunner displays an error message when the mismatch_break testing option is on. User can turn the mismatch_break testing option off by executing the following setvar statement:
setvar ("mismatch_break", "off"); WinRunner disables the mismatch_break testing option. The setting remains in effect during the testing session until it is changed again, either with another setvar statement or from the corresponding Break when verification fails check box in the Run >Settings category of the General Options dialog box. Using the setvar function changes a testing option globally, and this change is reflected in the General Options dialog box. However, user can also use the setvar function to set testing options for a specific test, or even for part of a specific test.
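As a sketch of scoping a testing option to part of a test, user can save the current value with getvar, change it, and restore it afterwards (the object and bitmap names here are hypothetical):

```
old_val = getvar("mismatch_break");     # remember the current setting
setvar("mismatch_break", "off");        # do not break on a mismatch here
obj_wait_bitmap("Status", "Img1", 10);  # hypothetical synchronization point
setvar("mismatch_break", old_val);      # restore the previous setting
```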
The main difference between wait () and a synchronization point is that wait () pauses test execution for the full specified interval, whereas a synchronization point waits only until the specified bitmap or object is displayed.
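The difference can be seen in a short sketch (object and bitmap names are illustrative):

```
wait(10);                               # always pauses for the full 10 seconds

obj_wait_bitmap("Status", "Img1", 10);  # continues as soon as the bitmap of
                                        # "Status" matches "Img1"; waits at most
                                        # the specified time before reporting
```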
Syntax: - wait (seconds [, milliseconds]); seconds The length of the pause, in seconds. The valid range of this parameter is from 0 to 32,767 seconds. milliseconds The number of milliseconds that are added to the seconds.
Testing Date Operations
The recommended workflow while checking dates in the application is as follows:
Define the date format(s) currently used in the application.
Create baseline tests by recording tests on the application. While recording, insert checkpoints that will check the dates in the application.
Run the tests (in Debug mode) to check that they run smoothly. If a test incorrectly identifies non-date fields as date fields, or reads a date field using the wrong date format, user can override the automatic date recognition on selected fields.
Run the tests (in Update mode) to create expected results.
Run the tests (in Verify mode). If the user wants to check how the application performs with future dates, user can age the dates before running the test.
Analyze test results to pinpoint where date-related problems exist in the application. If the user changes date formats in the application, user should repeat the workflow described above after redefining the date formats used in the application.
To specify date formats: Go to Date > Set Date Formats. The Set Date Formats dialog box opens. User can select each date format used in the application. User should move the most frequently-used date format in the application to the top of the list. WinRunner considers the top date format first.
Checking Dates in GUI Objects User can use GUI checkpoints to check dates in GUI objects (such as edit boxes or static text fields). The default check for edit boxes and static text fields is the date. The default check for tables performs a case-sensitive check on the entire contents of a table, and checks all the dates in the table.
Overriding Date Settings When debugging the tests, user may want to override the date settings. User can override them in the following ways:
Aging of a specific date format: - User can override the aging of a specific date format so that it will be aged differently than the default aging setting.
To override the aging of a date format: Go to Date > Set Date Formats. The Set Date Formats dialog box opens. Click the Advanced button. The Advanced Settings dialog box opens. In the Format list, select a date format. Click Change. The Override Aging dialog box opens.
User can increment the date format by a specific number of years, months and days. If the user wants no aging then use 0. User can choose a specific date for the selected date format by selecting the "Change all date to" option or user can stick to the default aging.
Overriding Aging or date format of a specific object: - User can define that a specific object that resembles a date should not be treated as a date object.
To override settings for an object: Go to Date > Override Object Settings. The Override Object Settings dialog box opens. Click the pointing hand button and then click the date object. To override date format settings or to specify that the object is not a date object, clear the Use default format conversion check box
Note: When WinRunner runs tests, it first examines the general settings defined in the Date Operations Run Mode dialog box. Then, it examines the aging overrides for specific date formats. Finally, it considers overrides defined for particular objects.
Checking Dates with TSL
User can enhance the recorded test scripts by adding the following TSL date functions:
date_calc_days_in_field (field_name1, field_name2); field_name1 The name of the 1st date field. field_name2 The name of the 2nd date field.
The date_calc_days_in_field function calculates the number of days between the dates appearing in two date fields. Note that the specified date fields must be located in the same window.
date_calc_days_in_string (string1, string2); string1 The name of the 1st string. string2 The name of the 2nd string.
The date_calc_days_in_string function calculates the number of days between two numeric date strings. Note that the specified strings must be located in the same window.
date_field_to_Julian (date_field); date_field The name of the date field.
The date_field_to_Julian function translates a date string to a Julian number. For example, if the date 121398 (December 13, 1998) appears in the specified date field, WinRunner translates the date to the Julian number 2451162.
date_string_to_Julian (string) string The numeric date string.
The date_string_to_Julian function translates a date string to a Julian number. For example, it translates the string 12/13/98 (December 13, 1998) to the Julian number 2451162.
date_is_field (field_name, min_year, max_year); field_name The name of the field containing the date. min_year Determines the minimum year allowed. max_year Determines the maximum year allowed.
The date_is_field function checks that a field contains a valid date by determining whether the date falls within a specified date range.
date_is_string (string, min_year, max_year); string The numeric string containing the date. min_year Determines the minimum year allowed. max_year Determines the maximum year allowed.
The date_is_string function checks that a numeric string contains a valid date by determining whether the date falls within a specified date range.
date_is_leap_year (year); year A year, for example "1998".
The date_is_leap_year function determines whether a year is a leap year. The function returns "0" if the year is not a leap year or "1" if the year is a leap year.
date_month_language (language); language The language used for month names.
The date_month_language function enables user to select the language used for month names in the application so that WinRunner can identify dates. User can select English, French, German, Spanish, Portuguese, or Italian. If the application uses a different language, select "Other" and define the names for all 12 months.
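A minimal sketch combining some of these functions (the window and field names are hypothetical, and the handling of return values is an assumption for illustration):

```
set_window("Flight Reservation", 5);

# number of days between two date fields in the same window
days = date_calc_days_in_field("Order Date:", "Fly Date:");

# validate a numeric date string against a year range
if (date_is_string("12/13/98", 1990, 2099))
    tl_step("date check", PASS, "Julian number: " & date_string_to_Julian("12/13/98"));
else
    tl_step("date check", FAIL, "12/13/98 is not a valid date.");
```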
Data-Driven Testing
The different stages of the data-driven testing process in WinRunner are: creating a test; converting the test to a data-driven test and creating a corresponding data table; running the test; and analyzing the test results.
Creating a test
In order to create a data-driven test, user must create a basic test by recording a test, as usual, with one set of data.
Converting a test to a Data-Driven test
User can convert the test to a Data-Driven test by Data Driver Wizard or by modifying the script manually. The procedure for converting a test to a data-driven test is composed of the following main steps:
Assigning a variable name to the data table (mandatory when using the Data Driver wizard and otherwise optional).
Adding statements to the script that open and close the data table.
Adding statements and functions to the test so that it reads the data from the data table and runs in a loop as it reads each iteration of data.
Replacing fixed values in checkpoint statements and in recorded statements with parameters, and creating a data table containing values for the parameters. This is known as parameterizing the test.
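Put together, a converted test typically looks like this sketch, which mirrors the statements the Data Driver wizard adds (the table, column, and object names are hypothetical):

```
table = "default.xls";
rc = ddt_open(table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause("Cannot open the data table.");
ddt_get_row_count(table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row(table, i);
    # the recorded fixed value is replaced with a parameter from the table
    edit_set("Name:", ddt_val(table, "Name"));
    button_press("OK");
}
ddt_close(table);
```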
To create data-driven tests select lines in the test script: Go to Choose Table >Data Driver Wizard.
The Data Driver Wizard pop up opens with the "Use a new or existing Excel table" box which displays the name of the Excel file that WinRunner creates, which stores the data for the data-driven test.
In the “Assign a name to the variable” box, enter a variable name with which to refer to the data table.
Check the “Add statements to create a data-driven test” check box to automatically add statements that run the test in a loop. If the user does not select this option, user will receive a warning that a data-driven test must contain a loop and statements to open and close the data table. User should not select this option if it was chosen previously while running the Data Driver wizard on the same portion of the test script.
If the user wants to import data from a database, check the "Import data from a database" check box. In order to import data from a database, either Microsoft Query or Data Junction must be installed on the machine.
Check the "Parameterize the test" check box which replaces fixed values in selected checkpoints and in recorded statements with parameters and in the data table, adds columns with variable values for the parameters.
Select the "Line by line" option if the user decides to confirm the parameterization of each line individually, deciding for each whether to add a new column to the data table or use an existing column when parameterizing data.
Select the "Automatically" option if the user decides to replace all data and add new columns to the data table automatically.
In the next screen, the "Test script line to parameterize" box displays the line of the test script being parameterized, with the replaceable value highlighted. The “Argument to be replaced” box displays the argument (value) that user can replace with a parameter; user can use the arrows to select a different argument to replace, and has to choose whether and how to replace the selected data. After the parameterization is finished, the final screen of the wizard opens, where the user can view the data table created if needed.
Assigning the Main Data Table for a Test
The main data table is the table that is selected by default when user choose Tools >Data Table or open the Data Driver wizard. To assign the main data table for a test: Go to File >Test Properties and click the General tab.
Choose the data table user want to assign from the Main data table list. All data tables that are stored in the test folder are displayed in the list.
Using Data-Driven Checkpoints and Bitmap Synchronization Points
When checking the properties of GUI objects in a data-driven test, it is better to create a single property check than a GUI checkpoint, which contains references to a checklist stored in the test’s chklist folder and expected results stored in the test’s exp folder. A single property check does not use a checklist, so it can be easily parameterized. To parameterize GUI checkpoint, bitmap checkpoint, and bitmap synchronization point statements, first create separate columns for each checkpoint or synchronization point, then enter dummy values in those columns to represent the captured expected results. While running the test in Update mode, WinRunner recaptures expected values for GUI and bitmap checkpoints automatically; it prompts user before recapturing expected values for bitmap synchronization points, and saves all the results in the test’s exp folder.
Using TSL Functions with Data-Driven Tests Opening a Data Table ddt_open (data_table_name [, mode]);
data_table_name The name of the data table. The name may be the table variable name, the Microsoft Excel file or a tabbed text file name, or the full path and file name of the table. The first row in the file contains the names of the parameters. This row is labeled row 0.
mode The mode for opening the data table: DDT_MODE_READ (read-only) or DDT_MODE_READWRITE (read or write). When the mode is not specified, the default mode is DDT_MODE_READ.
The ddt_open function opens the data table file with the specified data_table_name. The active row becomes row number 1. User must use a ddt_open statement to open the data table before using any other ddt_functions.
Saving a Data Table ddt_save (data_table_name);
The ddt_save function saves the information in a data table in its existing format. ddt_save does not close the data table. Use the ddt_close to close the data table.
Closing a Data Table ddt_close (data_table_name);
The ddt_close function closes the specified data table. ddt_close does NOT save changes to the data table. If user makes any changes to the data table, user must use the ddt_save function to save the changes before using ddt_close to close the table. The ddt_close function will not close the table if it is currently open in the table editor, regardless of whether it was opened from the WinRunner menu or using the ddt_show function. The ddt_close function checks if the table editor is displaying the table, and if so, leaves it open.
Displaying the Data Table Editor ddt_show (data_table_name [, show_flag]); show_flag The value indicating whether the editor should be shown (default=1) or hidden (0).
The ddt_show function allows the table editor to be shown or hidden. The show_flag value is 1 if the table editor is to be shown and is 0 if the table editor is to be hidden.
Exporting a Data Table ddt_export (data_table_name1, data_table_name2); data_table_name1 The source data table filename. data_table_name2 The destination data table filename. The ddt_export function sends the contents of data_table_name1 to data_table_name2
Returning the Number of Rows in a Data Table ddt_get_row_count (data_table_name, out_rows_count); out_rows_count The output variable that stores the total number of rows in the data table. The ddt_get_row_count function retrieves the number of rows in the specified data table.
Changing the Active Row in a Data Table to the Next Row ddt_next_row (data_table_name);
The ddt_next_row function changes the active row in the specified data table to the next row. If the active row is the last row in a data table, then the E_OUT_OF_RANGE value is returned.
Setting the Active Row in a Data Table ddt_set_row (data_table_name, row); row The new active row in the data table.
The ddt_set_row function sets the active row in the specified data table. When the data table is first opened, the active row is the first row.
Setting a Value in the Current Row of the Table ddt_set_val (data_table_name, parameter, value); parameter The name of the column into which the value will be inserted. value The value to be written into the table.
The ddt_set_val function sets a value in a cell of the current row of the data table. User can only use this function if the data table was opened in DDT_MODE_READWRITE (read or write mode).
Setting a Value in a Row of the Table ddt_set_val_by_row (data_table_name, row, parameter, value); row The row number in the table. It can be any existing row or the current row number plus 1, which will add a new row to the data table. parameter The name of the column into which the value will be inserted. value The value to be written into the table.
The ddt_set_val_by_row function sets a value in a specified cell in the table. User can only use this function if the data table was opened in DDT_MODE_READWRITE (read or write mode).
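A short sketch of writing results back to a table opened in read-write mode (the file and column names are hypothetical):

```
rc = ddt_open("results.xls", DDT_MODE_READWRITE);
ddt_get_row_count("results.xls", row_count);
ddt_set_row("results.xls", 1);
ddt_set_val("results.xls", "Status", "PASS");   # write into the active row
ddt_set_val_by_row("results.xls", row_count + 1, "Status", "NEW");  # appends a new row
ddt_save("results.xls");                        # save before closing
ddt_close("results.xls");
```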
Retrieving the Active Row of a Data Table ddt_get_current_row ( data_table_name, out_row ); out_row The output variable that stores the active row in the data table.
The ddt_get_current_row function retrieves the active row in the specified data table and returns this value as out_row.
Determining Whether a Parameter in a Data Table is Valid ddt_is_parameter (data_table_name, parameter); parameter The parameter name to check in the data table.
The ddt_is_parameter function returns whether a parameter in the specified data table is valid.
Returning a List of Parameters in a Data Table ddt_get_parameters (data_table_name, params_list, params_num); params_list This out parameter returns the list of all parameters in the data table, separated by tabs. params_num This out parameter returns the number of parameters in params_list. The ddt_get_parameters function returns a list of all parameters in a data table.
Returning the Value of a Parameter in the Active Row in a Data Table ddt_val (data_table_name, parameter);
The ddt_val function returns the value of a parameter in the active row in the specified data table. Returning the Value of a Parameter in a Row in a Data Table ddt_val_by_row (data_table_name, row_number, parameter); The ddt_val_by_row function returns the value of the parameter in the specified row in the data table.
WinRunner is an automated software testing tool from Mercury Interactive for functional and regression testing.
Q: For new users, how to use WinRunner to test software applications automatically? A: The following steps may be of help to you when automating tests.
1. MOST IMPORTANT - write a set of manual tests to test your application - you cannot just jump in with WR and expect to produce a set of meaningful tests. As you will see from the steps below, this set of manual tests will form your plan for tackling automation of your application.
2. Once you have a set of manual tests, look at them and decide which ones you can automate using your current level of expertise. NOTE that there will be tests that are not suitable for automation, either because you can't automate them, or they are just not worth the effort.
3. Automate the tests selected in step 2 - initially you will use capture/replay following the steps in the manual test, but you will soon see that to produce meaningful and informative tests you need to add additional code to your test, e.g. use tl_step() to give test results. As this process continues, you will soon see that there are operations you repeatedly perform in multiple tests - these are then candidates for user-defined functions and compiled modules.
4. Once you have completed step 3, go back to step 2, and you will find that the knowledge gained in step 3 now allows you to select some more tests that you can automate. If you continue going through this loop, you will gradually become more familiar with WR and TSL; in fact you will probably find that eventually you do very little capture/replay and more straight TSL coding.
Q: How to use WinRunner to check whether a record was updated, deleted, or inserted? A: Use WinRunner's checkpoint features: Create > Database Checkpoint > Runtime Record Check.
Q: How to use WinRunner to test the login screen? A: When you enter a wrong id or password, you will get a dialog box.
1. Record this dialog box.
2. Use win_exists to check whether the dialog box exists or not.
3. Playback: enter a wrong id or password; if win_exists is true, then your application is working correctly. Enter a good id or password; if win_exists is false, then your application is working correctly.
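The login-screen steps above can be sketched in TSL (the window and object names are hypothetical):

```
set_window("Login", 5);
edit_set("User ID:", "bad_user");
edit_set("Password:", "bad_pass");
button_press("OK");
# expect the error dialog for invalid credentials
if (win_exists("Login Error", 5) == E_OK)
    tl_step("login check", PASS, "Error dialog appeared for invalid credentials.");
else
    tl_step("login check", FAIL, "No error dialog for invalid credentials.");
```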
Q: After clicking on the "login" button, the web application opens another window; how to check that the page opened? A: When you are expecting "Window1" to come up after clicking on Login, capture the window in the GUI Map. No two windows in a web-based application can have the same html_name property; hence, this would be the property to check.
First try a simple win_exists("window1") in an IF condition.
WinRunner test script for checking all the links at a time:
location = 0;
set_window("YourWindow", 5);
while (obj_exists((link = "{class: object, MSW_class: html_text_link, location: " & location & "}")) == E_OK)
{
    obj_highlight(link);
    web_obj_get_info(link, "name", name);
    web_link_valid(link, valid);
    if (valid)
        tl_step("Check web link", PASS, "Web link \"" & name & "\" is valid.");
    else
        tl_step("Check web link", FAIL, "Web link \"" & name & "\" is not valid.");
    location++;
}
Q: How to get the resolution settings? A: Use get_screen_res(x, y) to get the screen resolution in WR 7.5, or use get_resolution(Vert_Pix_int, Horz_Pix_int, Frequency_int) in WR 7.01.
Q: How to address objects WITHOUT the GUI map, using the physical description directly? A: It's easy - just take the description straight out of the GUI map, squigglies and all, put it into a variable (or pass it as a string), and use that in place of the object name.
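For example (the descriptions shown are illustrative):

```
# physical description used directly; no GUI map entry needed
ok_btn = "{class: push_button, label: OK}";
button_press(ok_btn);

# works for windows too
set_window("{class: window, label: Login}", 5);
```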
Q: What are the three modes of running the scripts? WinRunner provides three modes in which to run tests: Verify, Debug, and Update. You use each mode during a different phase of the testing process. Verify Use the Verify mode to check your application. Debug Use the Debug mode to help you identify bugs in a test script. Update Use the Update mode to update the expected results of a test or to create a new expected results folder.
Q: How do you handle unexpected events and errors? WinRunner uses exception handling to detect an unexpected event when it occurs and act to recover the test run. WinRunner enables you to handle the following types of exceptions: Pop-up exceptions: Instruct WinRunner to detect and handle the appearance of a specific window. TSL exceptions: Instruct WinRunner to detect and handle TSL functions that return a specific error code. Object exceptions: Instruct WinRunner to detect and handle a change in a property for a specific GUI object. Web exceptions: When the WebTest add-in is loaded, you can instruct WinRunner to handle unexpected events and errors that occur in your Web site during a test run.
Q: How do you handle pop-up exceptions? A pop-up exception handler handles the pop-up messages that come up during the execution of the script in the AUT. To handle this type of exception, we make WinRunner learn the window and also specify a handler for the exception. It could be: Default actions: WinRunner clicks the OK or Cancel button in the pop-up window, or presses Enter on the keyboard. To select a default handler, click the appropriate button in the dialog box. User-defined handler: If you prefer, specify the name of your own handler. Click User Defined Function Name and type a name in the User Defined Function Name box.
Q: How do you handle TSL exceptions? Suppose you are running a batch test on an unstable version of your application. If your application crashes, you want WinRunner to recover test execution. A TSL exception can instruct WinRunner to recover test execution by exiting the current test, restarting the application, and continuing with the next test in the batch. The handler function is responsible for recovering test execution. When WinRunner detects a specific error code, it calls the handler function. You implement this function to respond to the unexpected error in the way that meets your specific testing needs. Once you have defined the exception, WinRunner activates handling and adds the exception to the list of default TSL exceptions in the Exceptions dialog box. Default TSL exceptions are defined by the XR_EXCP_TSL configuration parameter in the wrun.ini configuration file.
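A sketch of defining a TSL exception and its handler; the error code, application path, and window name are assumptions for illustration, and the handler's parameter list follows the usual WinRunner handler convention:

```
# handler called when the watched error code is returned by a TSL function
public function recover_app(in rc, in func_name)
{
    win_close("Application Error");   # hypothetical crash dialog
    invoke_application("c:\\myapp\\app.exe", "", "c:\\myapp", SW_SHOW);
}

define_tsl_exception("app_crash", "recover_app", E_GENERAL_ERROR, "");
exception_on("app_crash");
# ... run the batch tests ...
exception_off("app_crash");
```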
Q: How to write an email address validation script in TSL? public function IsValidEMAIL(in strText) { auto aryEmail[], aryEmail2[], n;
n = split(strText, aryEmail, "@"); if (n != 2) return FALSE;
# Ensure the string "@MyISP.Com" does not pass... if (!length(aryEmail[1])) return FALSE;
n = split(aryEmail[2], aryEmail2, ".");
# Ensure the domain has at least two parts, e.g. "MyISP.Com"
if (n < 2)
    return FALSE;
return TRUE;
}
Q: How to read and write Windows registry values from TSL? Declare the Windows API functions as externs:
extern long RegCloseKey(long);
extern long RegQueryValueExA(long, string, long, long, inout string<1024>, inout long);
extern long RegOpenKeyExA(long, string, long, long, inout long);
extern long RegSetValueExA(long, string, long, long, string, long);
MainKey = 2147483649; # HKEY_CURRENT_USER SubKey = "Software\\TestConverter\\TCEditor\\Settings"; # This is where you set your subkey path const ERROR_SUCCESS = 0;
const KEY_ALL_ACCESS = 983103;
ret = RegOpenKeyExA(MainKey, SubKey, 0, KEY_ALL_ACCESS, hKey); # open the key
if (ret == ERROR_SUCCESS)
{
    cbData = 256;
    tmp = space(256);
    KeyType = 0;
    ret = RegQueryValueExA(hKey, "Last language", 0, KeyType, tmp, cbData); # replace "Last language" with the key you want to read
}
pause (tmp);
NewSetting = "SQABASIC";
cbData = length(NewSetting) + 1;
ret = RegSetValueExA(hKey, "Last language", 0, KeyType, NewSetting, cbData); # replace "Last language" with the key you want to write
cbData = 256; tmp = space(256); KeyType = 0; ret = RegQueryValueExA(hKey,"Last language",0,KeyType,tmp,cbData); # verifies you changed the key
pause (tmp);
RegCloseKey(hKey); # close the key
Q: How to break an infinite loop?
set_window("Browser Main Window", 1);
text = "";
start = get_time();
while (text != "Done")
{
    statusbar_get_text("Status Bar", 0, text);
    now = get_time();
    if ((now - start) >= 60) # specify the number of seconds after which to break
        break;
}
Q: User-defined function that would write to the print log as well as write to a file
function writeLog(in strMessage)
{
    file_open("C:\FilePath\...", FO_MODE_APPEND);
    file_printf("C:\FilePath\...", "%s\n", strMessage);
    printf(strMessage);
    file_close("C:\FilePath\...");
}
Q: How to do text matching? You could try embedding it in an if statement. If/when it fails, use a tl_step statement to indicate passage and then do a texit to leave the test. Another idea would be to use win_get_text or web_frame_get_text to capture the text of the object and then do a comparison (using the match function) to determine its existence.
Q: The MSW_id value sometimes changes, rendering the GUI map useless. MSW_id values will continue to change as long as your developers are modifying your application. Having dealt with this, I determined that each MSW_id shifted by the same amount, and I was able to modify the entries in the GUI map rather easily and continue testing. Instead of using the MSW_id, use the "location" property. If you use the GUI Spy, it will give you every detail it can; then add or remove what you don't want.
Q: With a DB checkpoint, it is able to show the current values in the form but not the values saved in the table. A: This looks like it is happening because the data is written to the db after your checkpoint, so you have to do a runtime record check: Create > Database Checkpoint > Runtime Record Check. You may also have to perform some customization with TSL if the data displayed in the application is in a different format than the data in the database. For example, converting radio buttons to database-readable form involves the following:
# retrieve the three button states button_get_state ( "First", first); button_get_state ( "Business", bus); button_get_state ( "Economy", econ);
# establish a variable with the correct numeric value based on which radio button is set if (first) service="1";
if (bus) service="2";
if (econ) service="3";
set_window("Untitled - Notepad",3);
edit_set("Report Area",service);
db_record_check("list1.cvr", DVR_ONE_MATCH, record_num);
Increase Capacity Testing
When you begin your stress testing, you will want to increase your capacity testing to make sure you are able to handle the increased load of data such as ASP pages and graphics. When you test the ASP pages, you may want to create a page similar to the original page that will simulate the same items on the ASP page and have it send the information to a test bed with a process that completes just a small data output. By doing this, you will have your processor still stressing the system but not taking up the bandwidth by sending the HTML code along the full path. This will not stress the entire code but will give you a basis from which to work.
Dividing the requests per second by the total number of users or threads determines the number of transactions per second per user. It tells you at what point the server starts becoming less efficient at handling the load. Let's look at an example. Say your test with 50 users shows your server can handle 5 requests per second; with 100 users it is 10 requests per second; with 200 users it is 15 requests per second; and eventually with 300 users it is 20 requests per second. Your requests per second are continually climbing, so it seems that you are obtaining steadily improving performance. Now look at the ratios:
5/50 = 0.1
10/100 = 0.1
15/200 = 0.075
20/300 = 0.067
From this example you can see that the performance of the server is becoming less and less efficient as the load grows. This in itself is not necessarily bad (as long as your pages are still returning within your target time frame). However, it can be a useful indicator during your optimization process, and it does give you some indication of how much leeway you have to handle expected peaks.
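The efficiency ratios above can be computed with a short TSL sketch (the numbers are the illustrative figures from the example):

```
# requests per second observed at each load level
users[1] = 50;  rps[1] = 5;
users[2] = 100; rps[2] = 10;
users[3] = 200; rps[3] = 15;
users[4] = 300; rps[4] = 20;
for (i = 1; i <= 4; i++)
    printf("users = %d, requests/sec per user = %f", users[i], rps[i] / users[i]);
```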
Stateful testing When you use a Web-enabled application to set a value, does the server respond correctly later on?
Privilege testing What happens when an everyday user tries to access a control that is authorized only for administrators?
Speed testing Is the Web-enabled application taking too long to respond?
Boundary Test Boundary tests are designed to check a program's response to extreme input values. Extreme output values are generated by the input values. It is important to check that a program handles input values and output results correctly at the lower and upper boundaries. Keep in mind that you can create extreme boundary results from non-extreme input values. It is essential to analyze how to generate extremes of both types. In addition, sometimes you know that there is an intermediate variable involved in processing. If so, it is useful to determine how to drive it through its extremes and special conditions, such as zero or an overflow condition.
Boundary timing testing What happens when your Web-enabled application request times out or takes a really long time to respond?
Regression testing Did a new build break an existing function? Repeat testing after changes to manage the risks related to product enhancement. A regression test is performed when the tester wishes to see the progress of the testing process by performing identical tests before and after a bug has been fixed. A regression test allows the tester to compare expected test results with the actual results. Regression testing's primary objective is to ensure that all bug-free features stay that way. In addition, bugs which have been fixed once should not turn up again in subsequent program versions.
Regression testing: after every software modification, or before the next release, we repeat all test cases to check that fixed bugs do not show up again and that new and existing functions are all working correctly. Regression testing is used to confirm that fixed bugs have, in fact, been fixed, that new bugs have not been introduced in the process, and that features that were proven correctly functional are intact. Depending on the size of a project, cycles of regression testing may be performed once per milestone or once per build. Some bug regression testing may also be performed during each acceptance test cycle, focusing on only the most important bugs. Regression tests can be automated.
CONDITIONS DURING WHICH REGRESSION TESTS MAY BE RUN
Issue-fixing cycle. Once the development team has fixed issues, a regression test can be run to validate the fixes. Tests are based on the step-by-step test cases that were originally reported:
• If an issue is confirmed as fixed, then the issue report status should be changed to Closed.
• If an issue is confirmed as fixed, but with side effects, then the issue report status should be changed to Closed. However, a new issue should be filed to report the side effect.
• If an issue is only partially fixed, then the issue report resolution should be changed back to Unfixed, along with comments outlining the outstanding problems.
Open-status regression cycle. Periodic regression tests may be run on all open issues in the issue-tracking database. During this cycle, each issue's status is confirmed: either the report is reproducible as is with no modification, the report is reproducible with additional comments or modifications, or the report is no longer reproducible.
Closed-fixed regression cycle. In the final phase of testing, a full regression test cycle should be run to confirm the status of all fixed-closed issues.
Feature regression cycle. Each time a new build is cut, or in the final phase of testing (depending on the organizational procedure), a full regression test cycle should be run to confirm that the features proven correctly functional are still working as expected.
Database Testing Items to check when testing a database:

What to test | Environment | Tools/technique
Search results | System test environment | Black box and white box techniques
Response time | System test environment | Syntax testing / functional testing
Data integrity | Development environment | White box testing
Data validity | Development environment | White box testing

Q:How do you find an object in a GUI map? The GUI Map Editor provides Find and Show buttons. To locate a selected object in the application, select the object in the GUI Map file and click the Show button; this makes the selected object blink in the application. To find a particular object in a GUI Map file, click the Find button, which lets you point to the object in the application. When the object is selected, if it has been learned into the GUI Map file, it is highlighted in the GUI Map file.
Q:What different actions are performed by the Find and Show buttons? The Show button blinks a selected object: select the object in the GUI Map file and click Show, and the corresponding object blinks in the application. The Find button locates an object in a GUI Map file: click Find, then point to the object in the application; if the object has been learned into the GUI Map file, it is highlighted in the GUI Map file.
Q:How do you identify which files are loaded in the GUI map? The GUI Map Editor has a GUI File drop-down list displaying all the GUI map files loaded into memory.
Q:How do you modify the logical name or the physical description of the objects in GUI map? You can modify the logical name or the physical description of an object in a GUI map file using the GUI Map Editor.
Q:When do you feel you need to modify the logical name? Changing the logical name of an object is useful when the assigned logical name is not sufficiently descriptive or is too long.
Q:When is it appropriate to change the physical description? Changing the physical description is necessary when a property value of an object changes.
Q:How does WinRunner handle varying window labels? We can handle varying window labels using regular expressions. WinRunner uses two hidden properties in order to use a regular expression in an object's physical description. These properties are regexp_label and regexp_MSW_class. i. The regexp_label property is used for windows only. It operates behind the scenes to insert a regular expression into a window's label description. ii. The regexp_MSW_class property inserts a regular expression into an object's MSW_class. It is obligatory for all types of windows and for objects of the object class.
Q:What is the purpose of regexp_label property and regexp_MSW_class property? The regexp_label property is used for windows only. It operates behind the scenes to insert a regular expression into a window’s label description. The regexp_MSW_class property inserts a regular expression into an object’s MSW_class. It is obligatory for all types of windows and for the object class object.
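As an illustration, a GUI map entry for a window whose title varies might use regexp_label roughly as follows; the window name and pattern are hypothetical, and the exact pattern syntax should be checked against what the GUI Map Editor records:

```tsl
# Hypothetical GUI map entry: matches "fax order no. 1", "fax order no. 2", etc.
Fax_Order:
{
    class: window,
    regexp_label: "fax order no\\. [0-9]*"
}
```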
Q:How do you suppress a regular expression? We can suppress the regular expression of a window by replacing the regexp_label property with the label property. Q:How do you copy and move objects between different GUI map files? We can copy and move objects between different GUI Map files using the GUI Map Editor. The steps to be followed are: 1. Choose Tools - GUI Map Editor to open the GUI Map Editor. 2. Choose View - GUI Files. 3. Click Expand in the GUI Map Editor. The dialog box expands to display two GUI map files simultaneously. 4. View a different GUI map file on each side of the dialog box by clicking the file names in the GUI File lists. 5. In one file, select the objects you want to copy or move. Use the Shift and/or Control keys to select multiple objects. To select all objects in a GUI map file, choose Edit - Select All. 6. Click Copy or Move. 7. To restore the GUI Map Editor to its original size, click Collapse.
Q:How do you select multiple objects when merging the files? Use the Shift and/or Control keys to select multiple objects. To select all objects in a GUI map file, choose Edit - Select All.
Q:How do you clear a GUI map file? We can clear a GUI Map file using the Clear All option in the GUI Map Editor.
Q:How do you filter the objects in the GUI map? GUI Map Editor has a Filter option. This provides for filtering with 3 different types of options. 1. Logical name displays only objects with the specified logical name. 2. Physical description displays only objects matching the specified physical description. Use any substring belonging to the physical description. 3. Class displays only objects of the specified class, such as all the push buttons.
Q:How do you configure GUI map? 1. When WinRunner learns the description of a GUI object, it does not learn all its properties. Instead, it learns the minimum number of properties to provide a unique identification of the object. 2. Many applications also contain custom GUI objects. A custom object is any object not belonging to one of the standard classes used by WinRunner. These objects are therefore assigned to the generic object class. When WinRunner records an operation on a custom object, it generates obj_mouse_ statements in the test script. 3. If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing. The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.
Q:What is the purpose of GUI map configuration? GUI Map configuration is used to map a custom object to a standard object.
Q:How do you make the configuration and mappings permanent? The mapping and the configuration you set are valid only for the current WinRunner session. To make the mapping and the configuration permanent, you must add configuration statements to your startup test script.
Q:What is the purpose of GUI spy? Using the GUI Spy, you can view the properties of any GUI object on your desktop. You use the Spy pointer to point to an object, and the GUI Spy displays the properties and their values in the GUI Spy dialog box. You can choose to view all the properties of an object, or only the selected set of properties that WinRunner learns. Q:What is the purpose of the different record methods 1) Record 2) Pass Up 3) As Object 4) Ignore? 1) Record instructs WinRunner to record all operations performed on a GUI object. This is the default record method for all classes. (The only exception is the static class (static text), for which the default is Pass Up.) 2) Pass Up instructs WinRunner to record an operation performed on this class as an operation performed on the element containing the object. Usually this element is a window, and the operation is recorded as win_mouse_click. 3) As Object instructs WinRunner to record all operations performed on a GUI object as though its class were the object class. 4) Ignore instructs WinRunner to disregard all operations performed on the class.
Q:How do you find out which is the start up file in WinRunner? The test script name in the Startup Test box in the Environment tab in the General Options dialog box is the start up file in WinRunner.
Q:What are virtual objects and how do you learn them? • Applications may contain bitmaps that look and behave like GUI objects. WinRunner records operations on these bitmaps using win_mouse_click statements. By defining a bitmap as a virtual object, you can instruct WinRunner to treat it like a GUI object such as a push button, when you record and run tests. • Using the Virtual Object wizard, you can assign a bitmap to a standard object class, define the coordinates of that object, and assign it a logical name. To define a virtual object using the Virtual Object wizard: 1. Choose Tools > Virtual Object Wizard. The Virtual Object wizard opens. Click Next. 2. In the Class list, select a class for the new virtual object. For a list class, select the number of visible rows that are displayed in the window; for a table class, select the number of visible rows and columns. Click Next. 3. Click Mark Object. Use the crosshairs pointer to select the area of the virtual object. You can use the arrow keys to make precise adjustments to the area you define with the crosshairs. Press Enter or click the right mouse button to display the virtual object’s coordinates in the wizard. If the object marked is visible on the screen, you can click the Highlight button to view it. Click Next. 4. Assign a logical name to the virtual object. This is the name that appears in the test script when you record on the virtual object. If the object contains text that WinRunner can read, the wizard suggests using this text for the logical name. Otherwise, WinRunner suggests virtual_object, virtual_push_button, virtual_list, etc. 5. You can accept the wizard’s suggestion or type in a different name. WinRunner checks that there are no other objects in the GUI map with the same name before confirming your choice. Click Next.
Q:What are the two modes of recording? There are 2 modes of recording in WinRunner 1. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects. 2. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.
Q:What is a checkpoint and what are different types of checkpoints? Checkpoints allow you to compare the current behavior of the application being tested to its behavior in an earlier version. You can add four types of checkpoints to your test scripts: 1. GUI checkpoints verify information about GUI objects. For example, you can check that a button is enabled or see which item is selected in a list. 2. Bitmap checkpoints take a snapshot of a window or area of your application and compare this to an image captured in an earlier version. 3. Text checkpoints read text in GUI objects and in bitmaps and enable you to verify their contents. 4. Database checkpoints check the contents and the number of rows and columns of a result set, which is based on a query you create on your database.
Q:What are data driven tests? When you test your application, you may want to check how it performs the same operations with multiple sets of data. You can create a data-driven test with a loop that runs ten times: each time the loop runs, it is driven by a different set of data. In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table. You can perform these operations manually, or you can use the DataDriver Wizard to parameterize your test and store the data in a data table.
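A typical parameterized loop, close to what the DataDriver Wizard generates, is sketched below in TSL; the table name and the "Name" column are hypothetical:

```tsl
table = "default.xls";                     # the data table attached to the test
rc = ddt_open(table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
    pause("Cannot open the data table.");
ddt_get_row_count(table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row(table, i);                 # make row i the active row
    set_window("Flight Reservation", 10);
    edit_set("Name:", ddt_val(table, "Name"));   # "Name" column is assumed
}
ddt_close(table);
```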
Q:What are the synchronization points? Synchronization points enable you to solve anticipated timing problems between the test and your application. For example, if you create a test that opens a database application, you can add a synchronization point that causes the test to wait until the database records are loaded on the screen. For Analog testing, you can also use a synchronization point to ensure that WinRunner repositions a window at a specific location. When you run a test, the mouse cursor travels along exact coordinates. Repositioning the window enables the mouse pointer to make contact with the correct elements in the window. Q:What is parameterizing? In order for WinRunner to use data to drive the test, you must link the data to the test script which it drives. This is called parameterizing your test. The data is stored in a data table.
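A window synchronization point of the kind described above can be written with win_exists; the window name and timeout here are illustrative:

```tsl
# Wait up to 30 seconds for the window before operating on it.
if (win_exists("Flight Reservation", 30) == E_OK)
{
    set_window("Flight Reservation");
    button_press("Insert Order");
}
else
    tl_step("sync", 1, "Flight Reservation window did not appear in time.");
```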
Q:How do you maintain the document information of the test scripts? Before creating a test, you can document information about the test in the General and Description tabs of the Test Properties dialog box. You can enter the name of the test author, the type of functionality tested, a detailed description of the test, and a reference to the relevant functional specifications document.
Q:What do you verify with the GUI checkpoint for single property and what command it generates, explain syntax? You can check a single property of a GUI object. For example, you can check whether a button is enabled or disabled or whether an item in a list is selected. To create a GUI checkpoint for a property value, use the Check Property dialog box to add one of the following functions to the test script: button_check_info scroll_check_info edit_check_info static_check_info list_check_info win_check_info obj_check_info Syntax: button_check_info (button, property, property_value ); edit_check_info ( edit, property, property_value );
Q:What do you verify with the GUI checkpoint for object/window and what command it generates, explain syntax? • You can create a GUI checkpoint to check a single object in the application being tested. You can either check the object with its default properties or you can specify which properties to check. • Creating a GUI Checkpoint using the Default Checks • You can create a GUI checkpoint that performs a default check on the property recommended by WinRunner. For example, if you create a GUI checkpoint that checks a push button, the default check verifies that the push button is enabled. • To create a GUI checkpoint using default checks: 1. Choose Create - GUI Checkpoint - For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen. 2. Click an object. 3. WinRunner captures the current value of the property of the GUI object being checked and stores it in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui statement. Syntax: obj_check_gui ( object, checklist, expected_results_file, time ); • Creating a GUI Checkpoint by Specifying which Properties to Check • You can specify which properties to check for an object. For example, if you create a checkpoint that checks a push button, you can choose to verify that it is in focus, instead of enabled. • To create a GUI checkpoint by specifying which properties to check: • Choose Create - GUI Checkpoint - For Object/Window, or click the GUI Checkpoint for Object/Window button on the User toolbar. 
If you are recording in Analog mode, press the CHECK GUI FOR OBJECT/WINDOW softkey in order to avoid extraneous mouse movements. Note that you can press the CHECK GUI FOR OBJECT/WINDOW softkey in Context Sensitive mode as well. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens on the screen. • Double-click the object or window. The Check GUI dialog box opens. • Click an object name in the Objects pane. The Properties pane lists all the properties for the selected object. • Select the properties you want to check. 1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it. 2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis (three dots) appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects. 3. To change the viewing options for the properties of an object, use the Show Properties buttons. 4. Click OK to close the Check GUI dialog box. WinRunner captures the GUI information and stores it in the test’s expected results folder. The WinRunner window is restored and a GUI checkpoint is inserted in the test script as an obj_check_gui or a win_check_gui statement. Syntax: win_check_gui ( window, checklist, expected_results_file, time ); obj_check_gui ( object, checklist, expected_results_file, time ); Q:What do you verify with the GUI checkpoint for multiple objects and what command it generates, explain syntax? 
To create a GUI checkpoint for two or more objects: • Choose Create GUI Checkpoint For Multiple Objects or click the GUI Checkpoint for Multiple Objects button on the User toolbar. If you are recording in Analog mode, press the CHECK GUI FOR MULTIPLE OBJECTS softkey in order to avoid extraneous mouse movements. The Create GUI Checkpoint dialog box opens. • Click the Add button. The mouse pointer becomes a pointing hand and a help window opens. • To add an object, click it once. If you click a window title bar or menu bar, a help window prompts you to check all the objects in the window. • The pointing hand remains active. You can continue to choose objects by repeating step 3 above for each object you want to check. • Click the right mouse button to stop the selection process and to restore the mouse pointer to its original shape. The Create GUI Checkpoint dialog box reopens. • The Objects pane contains the name of the window and objects included in the GUI checkpoint. To specify which objects to check, click an object name in the Objects pane. The Properties pane lists all the properties of the object. The default properties are selected. 1. To edit the expected value of a property, first select it. Next, either click the Edit Expected Value button, or double-click the value in the Expected Value column to edit it. 2. To add a check in which you specify arguments, first select the property for which you want to specify arguments. Next, either click the Specify Arguments button, or double-click in the Arguments column. Note that if an ellipsis appears in the Arguments column, then you must specify arguments for a check on this property. (You do not need to specify arguments if a default argument is specified.) When checking standard objects, you only specify arguments for certain properties of edit and static text objects. You also specify arguments for checks on certain properties of nonstandard objects. 3. 
To change the viewing options for the properties of an object, use the Show Properties buttons. • To save the checklist and close the Create GUI Checkpoint dialog box, click OK. WinRunner captures the current property values of the selected GUI objects and stores them in the expected results folder. A win_check_gui statement is inserted in the test script. Syntax: win_check_gui ( window, checklist, expected_results_file, time ); obj_check_gui ( object, checklist, expected_results_file, time );
Q:What information is contained in the checklist file and in which file are expected results stored? The checklist file contains information about the objects and the properties of the objects we are verifying. The gui*.chk file, stored in the exp folder, contains the expected results.
Q:What do you verify with the bitmap check point for object/window and what command it generates, explain syntax? • You can check an object, a window, or an area of a screen in your application as a bitmap. While creating a test, you indicate what you want to check. WinRunner captures the specified bitmap, stores it in the expected results folder (exp) of the test, and inserts a checkpoint in the test script. When you run the test, WinRunner compares the bitmap currently displayed in the application being tested with the expected bitmap stored earlier. In the event of a mismatch, WinRunner captures the current actual bitmap and generates a difference bitmap. By comparing the three bitmaps (expected, actual, and difference), you can identify the nature of the discrepancy. • When working in Context Sensitive mode, you can capture a bitmap of a window, object, or of a specified area of a screen. WinRunner inserts a checkpoint in the test script in the form of either a win_check_bitmap or obj_check_bitmap statement. • Note that when you record a test in Analog mode, you should press the CHECK BITMAP OF WINDOW softkey or the CHECK BITMAP OF SCREEN AREA softkey to create a bitmap checkpoint. This prevents WinRunner from recording extraneous mouse movements. If you are programming a test, you can also use the Analog function check_window to check a bitmap. • To capture a window or object as a bitmap: 1. Choose Create - Bitmap Checkpoint - For Object/Window or click the Bitmap Checkpoint for Object/Window button on the User toolbar. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF OBJECT/WINDOW softkey. The WinRunner window is minimized, the mouse pointer becomes a pointing hand, and a help window opens. 2. Point to the object or window and click it. WinRunner captures the bitmap and generates a win_check_bitmap or obj_check_bitmap statement in the script. 
The TSL statement generated for a window bitmap has the following syntax: win_check_bitmap ( window, bitmap, time ); 3. For an object bitmap, the syntax is: obj_check_bitmap ( object, bitmap, time ); 4. For example, when you click the title bar of the main window of the Flight Reservation application, the resulting statement might be: win_check_bitmap ("Flight Reservation", "Img2", 1); 5. However, if you click the Date of Flight box in the same window, the statement might be: obj_check_bitmap ("Date of Flight:", "Img1", 1); Syntax: obj_check_bitmap ( object, bitmap, time [, x, y, width, height] );
Q:What do you verify with the bitmap checkpoint for screen area and what command it generates, explain syntax? • You can define any rectangular area of the screen and capture it as a bitmap for comparison. The area can be any size: it can be part of a single window, or it can intersect several windows. The rectangle is identified by the coordinates of its upper left and lower right corners, relative to the upper left corner of the window in which the area is located. If the area intersects several windows or is part of a window with no title (for example, a popup window), its coordinates are relative to the entire screen (the root window). • To capture an area of the screen as a bitmap: 1. Choose Create - Bitmap Checkpoint - For Screen Area or click the Bitmap Checkpoint for Screen Area button. Alternatively, if you are recording in Analog mode, press the CHECK BITMAP OF SCREEN AREA softkey. The WinRunner window is minimized, the mouse pointer becomes a crosshairs pointer, and a help window opens. 2. Mark the area to be captured: press the left mouse button and drag the mouse pointer until a rectangle encloses the area; then release the mouse button. 3. Press the right mouse button to complete the operation. WinRunner captures the area and generates a win_check_bitmap statement in your script. 4. The win_check_bitmap statement for an area of the screen has the following syntax: win_check_bitmap ( window, bitmap, time, x, y, width, height ); Q:What do you verify with the database checkpoint default and what command it generates, explain syntax? • By adding runtime database record checkpoints you can compare the information in your application during a test run with the corresponding record in your database. By adding standard database checkpoints to your test scripts, you can check the contents of databases in different versions of your application. 
• When you create database checkpoints, you define a query on your database, and your database checkpoint checks the values contained in the result set. The result set is the set of values retrieved from the results of the query. • You can create runtime database record checkpoints in order to compare the values displayed in your application during the test run with the corresponding values in the database. If the comparison does not meet the success criteria you specify for the checkpoint, the checkpoint fails. You can define a successful runtime database record checkpoint as one where one or more matching records were found, exactly one matching record was found, or no matching records were found. • You can create standard database checkpoints to compare the current values of the properties of the result set during the test run to the expected values captured during recording or otherwise set before the test run. If the expected results and the current results do not match, the database checkpoint fails. Standard database checkpoints are useful when the expected results can be established before the test run. Syntax: db_check(checklist_file, expected_results_file); • You can add a runtime database record checkpoint to your test in order to compare information that appears in your application during a test run with the current value(s) in the corresponding record(s) in your database. You add runtime database record checkpoints by running the Runtime Record Checkpoint wizard. When you are finished, the wizard inserts the appropriate db_record_check statement into your script. Syntax: db_record_check(ChecklistFileName, SuccessConditions, RecordNumber); ChecklistFileName ---- A file created by WinRunner and saved in the test's checklist folder. The file contains information about the data to be captured during the test run and its corresponding field in the database. The file is created based on the information entered in the Runtime Record Verification wizard. 
SuccessConditions ----- Contains one of the following values: 1. DVR_ONE_OR_MORE_MATCH - The checkpoint passes if one or more matching database records are found. 2. DVR_ONE_MATCH - The checkpoint passes if exactly one matching database record is found. 3. DVR_NO_MATCH - The checkpoint passes if no matching database records are found. RecordNumber --- An out parameter returning the number of records in the database.
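Putting the pieces above together, a statement generated by the Runtime Record Checkpoint wizard might look like this; the checklist file name is hypothetical:

```tsl
# Passes only if exactly one matching database record is found;
# record_num receives the number of records found.
db_record_check("dbrc1.cvr", DVR_ONE_MATCH, record_num);
```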
Q:How do you handle a dynamically changing area of the window in bitmap checkpoints? The "Difference between bitmaps" option in the Run tab of the General Options dialog box defines the minimum number of pixels that constitute a bitmap mismatch.
Q:What do you verify with the database check point custom and what command it generates, explain syntax? • When you create a custom check on a database, you create a standard database checkpoint in which you can specify which properties to check on a result set. • You can create a custom check on a database in order to: • check the contents of part or the entire result set • edit the expected results of the contents of the result set • count the rows in the result set • count the columns in the result set • You can create a custom check on a database using ODBC, Microsoft Query or Data Junction.
Q:What do you verify with the sync point for object/window property and what command it generates, explain syntax? • Synchronization compensates for inconsistencies in the performance of your application during a test run. By inserting a synchronization point in your test script, you can instruct WinRunner to suspend the test run and wait for a cue before continuing the test. • You can insert a synchronization point that instructs WinRunner to wait for a specified object or window to appear. For example, you can tell WinRunner to wait for a window to open before performing an operation within that window, or you may want WinRunner to wait for an object to appear in order to perform an operation on that object. • You use the obj_exists function to create an object synchronization point, and you use the win_exists function to create a window synchronization point. These functions have the following syntax: obj_exists ( object [, time ] ); win_exists ( window [, time ] );
Q:What do you verify with the sync point for object/window bitmap and what command it generates, explain syntax? You can create a bitmap synchronization point that waits for the bitmap of an object or a window to appear in the application being tested. During a test run, WinRunner suspends test execution until the specified bitmap is redrawn, and then compares the current bitmap with the expected one captured earlier. If the bitmaps match, then WinRunner continues the test. Syntax: obj_wait_bitmap ( object, image, time ); win_wait_bitmap ( window, image, time ); Q:What is the purpose of obligatory and optional properties of the objects? For each class, WinRunner learns a set of default properties. Each default property is classified as obligatory or optional. 1. An obligatory property is always learned (if it exists). 2. An optional property is used only if the obligatory properties do not provide unique identification of an object. These optional properties are stored in a list. WinRunner selects the minimum number of properties from this list that are necessary to identify the object. It begins with the first property in the list, and continues, if necessary, to add properties to the description until it obtains unique identification for the object.
Q:When the optional properties are learned? An optional property is used only if the obligatory properties do not provide unique identification of an object.
Q:What is the purpose of location indicator and index indicator in GUI map configuration? In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available: A location selector uses the spatial position of objects. The location selector uses the spatial order of objects within the window, from the top left to the bottom right corners, to differentiate among objects with the same description. An index selector uses a unique number to identify the object in a window. The index selector uses numbers assigned at the time of creation of objects to identify the object in a window. Use this selector if the location of objects with the same description may change within a window.
Q:How do you handle custom objects? A custom object is any GUI object not belonging to one of the standard classes used by WinRunner. WinRunner learns such objects under the generic object class. WinRunner records operations on custom objects using obj_mouse_ statements. If a custom object is similar to a standard object, you can map it to one of the standard classes. You can also configure the properties WinRunner uses to identify a custom object during Context Sensitive testing.
Q:What is the name of custom class in WinRunner and what methods it applies on the custom objects? WinRunner learns custom class objects under the generic object class. WinRunner records operations on custom objects using obj_ statements.
Q:In a situation when obligatory and optional both the properties cannot uniquely identify an object what method WinRunner applies? In cases where the obligatory and optional properties do not uniquely identify an object, WinRunner uses a selector to differentiate between them. Two types of selectors are available: i. A location selector uses the spatial position of objects. ii. An index selector uses a unique number to identify the object in a window.
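When WinRunner falls back on a selector, it appears as an extra property in the object's physical description. The two entries below are illustrative only, showing how a second identical OK button might be distinguished:

```tsl
# Location selector: position in spatial order, top left to bottom right.
{ class: push_button, label: "OK", location: 1 }

# Index selector: number assigned when the object was created.
{ class: push_button, label: "OK", index: 1 }
```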
Q:What do you verify with the sync point for screen area and what command it generates, explain syntax? For screen-area verification we capture the screen area into a bitmap and verify the application's screen area against the bitmap file during execution. Syntax: obj_wait_bitmap(object, image, time, x, y, width, height);
Q:How do you edit a checklist file and when do you need to edit it? WinRunner has an edit checklist file option under the Create menu. Select Edit GUI Checklist to modify a GUI checklist file and Edit Database Checklist to edit a database checklist file. This brings up a dialog box that gives you the option to select the checklist file to modify. There is also an option to select the scope of the checklist file, whether it is test-specific or shared. Select the checklist file and click OK, which opens the window to edit the properties of the objects. Q:How do you edit the expected value of an object? We can modify the expected value of an object by executing the script in Update mode. We can also manually edit the gui*.chk file under the exp folder, which contains the expected values.
Q: How do you modify the expected results of a GUI checkpoint? We can modify the expected results of a GUI checkpoint by running the script containing the checkpoint in Update mode.
Q: How do you handle ActiveX and Visual Basic objects? WinRunner provides add-ins for ActiveX and Visual Basic objects. When loading WinRunner, select those add-ins; they provide a set of functions for working with ActiveX and VB objects.
Q: How do you create an ODBC query? We can create an ODBC query using the database checkpoint wizard. It provides an option to create an SQL file that uses an ODBC DSN to connect to the database. The SQL file will contain the connection string and the SQL statement.
Q: How do you record a data-driven test? We can create a data-driven test using data from a flat file, a data table, or a database. Flat file: we store the data in the required format in the file, access the file using the file-manipulation commands, read the data from it, and assign the data to variables. Data table: an Excel file in which we can store and manipulate test data, using the ddt_* functions. Database: we store test data in a database and access it using the db_* functions.
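A minimal data-table-driven sketch using the ddt_* functions (the table name, window, and column name are illustrative):

```
table = "default.xls";                    # data table in the test folder
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK)
    pause ("Cannot open the data table.");
ddt_get_row_count (table, row_count);
for (i = 1; i <= row_count; i++)
{
    ddt_set_row (table, i);               # make row i the active row
    set_window ("Login", 5);
    edit_set ("Name:", ddt_val (table, "Name"));   # read column "Name"
    button_press ("OK");
}
ddt_close (table);
```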
Q:How do you convert a database file to a text file? You can use Data Junction to create a conversion file which converts a database to a target text file.
Q:How do you parameterize database check points? When you create a standard database checkpoint using ODBC (Microsoft Query), you can add parameters to an SQL statement to parameterize the checkpoint. This is useful if you want to create a database checkpoint with a query in which the SQL statement defining your query changes.
Q: How do you create parameterized SQL commands? A parameterized query is a query in which at least one of the fields of the WHERE clause is parameterized, i.e., the value of the field is specified by a question mark symbol ( ? ). For example, the following SQL statement is based on a query on the database in the sample Flight Reservation application: SELECT Flights.Departure, Flights.Flight_Number, Flights.Day_Of_Week FROM Flights Flights WHERE (Flights.Departure=?) AND (Flights.Day_Of_Week=?) SELECT defines the columns to include in the query. FROM specifies the database table(s) to query. WHERE (optional) specifies the conditions, or filters, to use in the query. Departure is the parameter that represents the departure point of a flight, and Day_Of_Week is the parameter that represents the day of the week of a flight. When creating a database checkpoint, you insert a db_check statement into your test script. When you parameterize the SQL statement in your checkpoint, the db_check function takes a fourth, optional argument: the parameter_array argument. A statement similar to the following is inserted into your test script: db_check("list1.cdl", "dbvf1", NO_LIMIT, dbvf1_params); The parameter_array argument contains the values to substitute for the parameters in the parameterized checkpoint.

Q: What checkpoints will you use to read and check text on the GUI, and what is the syntax? You can use text checkpoints in your test scripts to read and check text in GUI objects and in areas of the screen. While creating a test you point to an object or a window containing text; WinRunner reads the text and writes a TSL statement to the test script. You may then add simple programming elements to your test scripts to verify the contents of the text.
• You can use a text checkpoint to: • Read text from a GUI object or window in your application, using obj_get_text and win_get_text • Search for text in an object or window, using win_find_text and obj_find_text • Move the mouse pointer to text in an object or window, using obj_move_locator_text and win_move_locator_text • Click on text in an object or window, using obj_click_on_text and win_click_on_text
Q: How do you get text from an object or window? We use the obj_get_text (logical_name, out_text) function to get the text from an object, and the win_get_text (window, out_text [, x1, y1, x2, y2]) function to get the text from a window.
Q: How do you get text from a screen area? We use the win_get_text (window, out_text [, x1, y1, x2, y2]) function; the optional x1, y1, x2, y2 arguments define the screen area of the window from which to read the text.
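A short sketch of both functions (logical names and coordinates are illustrative):

```
set_window ("Flight Reservation", 5);
obj_get_text ("Name:", name_text);                                # text of a GUI object
win_get_text ("Flight Reservation", area_text, 10, 10, 200, 40);  # text in a screen area
report_msg ("Object text: " & name_text);
report_msg ("Area text: " & area_text);
```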
Q: Which TSL functions will you use for searching text on the window? find_text ( string, out_coord_array, search_area [, string_def ] ); win_find_text ( window, string, result_array [, search_area [, string_def ] ] );
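For example, win_find_text returns E_OK and fills the result array when the string is found (window and search string are illustrative):

```
set_window ("Flight Reservation", 5);
rc = win_find_text ("Flight Reservation", "Insert Order", result);
if (rc == E_OK)
    report_msg ("Text found.");
else
    report_msg ("Text not found.");
```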
Q: What are the steps of creating a data-driven test? The steps involved in data-driven testing are: creating a test; converting it to a data-driven test and preparing a database; running the test; and analyzing the test results.
Q: How do you use the DataDriver Wizard? You can use the DataDriver Wizard to convert your entire script, or a part of it, into a data-driven test. For example, your test script may include recorded operations, checkpoints, and other statements that do not need to be repeated for multiple sets of data; you need to parameterize only the portion of your test script that you want to run in a loop with multiple sets of data. To create a data-driven test: • If you want to turn only part of your test script into a data-driven test, first select those lines in the test script. • Choose Tools > DataDriver Wizard. If you opened the wizard without selecting the relevant lines first, click Cancel, select those lines in the test script, and reopen the DataDriver Wizard. If you want to turn the entire test into a data-driven test, click Next. • The "Use a new or existing Excel table" box displays the name of the Excel file that WinRunner creates to store the data for the data-driven test. Accept the default data table for this test, enter a different name for the data table, or use the browse button to locate the path of an existing data table. By default, the data table is stored in the test folder. • In the "Assign a name to the variable" box, enter a variable name with which to refer to the data table, or accept the default name, table. At the beginning of a data-driven test, the Excel data table you selected is assigned as the value of the table variable; throughout the script, only the table variable name is used. This makes it easy to assign a different data table to the script later without making changes throughout the script. • Choose from among the following options:
1. Add statements to create a data-driven test: automatically adds the statements needed to run your test in a loop. It sets a variable name by which to refer to the data table; adds braces ({ and }), a for statement, and a ddt_get_row_count statement around your test script selection so that it runs in a loop while reading from the data table; and adds ddt_open and ddt_close statements to your test script to open and close the data table, which are necessary in order to iterate over the rows of the table. You can also add these statements manually; if you do not choose this option, you will receive a warning that your data-driven test must contain a loop and statements to open and close your data table. 2. Import data from a database: imports data from a database. This option adds ddt_update_from_db and ddt_save statements to your test script after the ddt_open statement. Note that in order to import data from a database, either Microsoft Query or Data Junction must be installed on your machine. You can install Microsoft Query from the custom installation of Microsoft Office. Data Junction is not automatically included in your WinRunner package; to purchase it, contact your Mercury Interactive representative, and for detailed information on working with it, refer to the documentation in the Data Junction package. 3. Parameterize the test: replaces fixed values in selected checkpoints and in recorded statements with parameters, using the ddt_val function, and adds columns with variable values for the parameters to the data table. Line by line: opens a wizard screen for each line of the selected test script, which enables you to decide whether to parameterize a particular line and, if so, whether to add a new column to the data table or use an existing column when parameterizing the data. 4. Automatically: replaces all data with ddt_val statements and adds new columns to the data table.
The first argument of the ddt_val function is the name of the column in the data table; the replaced data is inserted into that column. • The "Test script line to parameterize" box displays the line of the test script being parameterized, with the replaceable value highlighted. The "Argument to be replaced" box displays the argument (value) that you can replace with a parameter; you can use the arrows to select a different argument to replace. Choose whether and how to replace the selected data: 1. Do not replace this data: does not parameterize this data. 2. An existing column: if parameters already exist in the data table for this test, select an existing parameter from the list. 3. A new column: creates a new column for this parameter in the data table for this test and adds the selected data to this column. The default name for the new parameter is the logical name of the object in the selected TSL statement above; accept this name or assign a new one. • The final screen of the wizard opens. 1. If you want the data table to open after you close the wizard, select Show data table now. 2. To perform the tasks specified in the previous screens and close the wizard, click Finish. 3. To close the wizard without making any changes to the test script, click Cancel.

Q: How do you handle object exceptions? During testing, unexpected changes can occur to GUI objects in the application you are testing. These changes are often subtle, but they can disrupt the test run and distort results. You can use exception handling to detect a change in a property of a GUI object during the test run, recover test execution by calling a handler function, and continue with the test execution.
Q: What is a compile module? A compiled module is a script containing a library of user-defined functions that you want to call frequently from other tests. When you load a compiled module, its functions are automatically compiled and remain in memory. You can call them directly from within any test. Compiled modules can improve the organization and performance of your tests. Since you debug compiled modules before using them, your tests will require less error-checking. In addition, calling a function that is already compiled is significantly faster than interpreting a function in a test script.
Q: What is the difference between a script and a compiled module? A test script contains the executable code in WinRunner, while a compiled module is used to store reusable functions. Compiled modules are not executable. WinRunner performs a pre-compilation automatically when it saves a module assigned a property value of "Compiled Module". By default, modules containing TSL code have a property value of "main". Main modules are called for execution from within other modules; they are dynamically compiled into machine code only when WinRunner recognizes a "call" statement. Example of a call to the "app_init" script: call cso_init(); call( "C:\\MyAppFolder\\" & "app_init" ); Compiled modules are loaded into memory to be referenced from TSL code in any module. Example of a load statement: reload ("C:\\MyAppFolder\\" & "flt_lib"); or load ("C:\\MyAppFolder\\" & "flt_lib");
Q: How do you write messages to the report? To write a message to a report we use the report_msg statement. Syntax: report_msg (message);
Q: What is the command to invoke an application? invoke_application is the function used to invoke an application. Syntax: invoke_application(file, command_option, working_dir, SHOW);
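For example, to launch the sample Flight Reservation application (the path is illustrative):

```
# file, command-line options, working directory, and show mode
invoke_application ("C:\\WinRunner\\samples\\flight\\app\\flight1a.exe", "", "C:\\", SHOW);
```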
Q: What is the purpose of the tl_step command? It is used to determine whether sections of a test pass or fail. Syntax: tl_step(step_name, status, description);
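A sketch of pass/fail reporting with tl_step, where a status of 0 reports a pass and any other value reports a failure (the step name and window are illustrative):

```
set_window ("Login", 5);
rc = button_press ("OK");
if (rc == E_OK)
    tl_step ("login_step", 0, "Login button pressed.");          # pass
else
    tl_step ("login_step", 1, "Could not press Login button.");  # fail
```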
Q: Which TSL function will you use to compare two files? We can compare two files in WinRunner using the file_compare function. Syntax: file_compare (file1, file2 [, save_file]);
Q: What is the use of the Function Generator? The Function Generator provides a quick, error-free way to program scripts. You can: add Context Sensitive functions that perform operations on a GUI object or get information from the application being tested; add Standard and Analog functions that perform non-Context Sensitive tasks such as synchronizing test execution or sending user-defined messages to a report; and add Customization functions that enable you to modify WinRunner to suit your testing environment.
Q: What is the use of putting call and call_close statements in the test script? You can use two types of call statements to invoke one test from another: a call statement invokes a test from within another test; a call_close statement invokes a test from within a script and closes the test when it is completed.

Q: What is the use of treturn and texit statements in the test script? The treturn and texit statements are used to stop execution of called tests. i. The treturn statement stops the current test and returns control to the calling test. ii. The texit statement stops test execution entirely, unless tests are being called from a batch test; in that case, control is returned to the main batch test. Both functions provide a return value for the called test. If treturn or texit is not used, or if no value is specified, then the return value of the call statement is 0. The syntax is: treturn [( expression )]; texit [( expression )];
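A sketch of a called test returning control to its caller with treturn (the test path and parameter are illustrative; E_GENERAL_ERROR is one of TSL's standard return codes):

```
# in the calling test:
call "C:\\tests\\open_order" (order_num);

# inside the called test "open_order":
if (order_num <= 0)
    treturn (E_GENERAL_ERROR);   # stop this test and return a value to the caller
```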
Q: What do the auto, static, public, and extern variable classes mean? auto: an auto variable can be declared only within a function and is local to that function. It exists only for as long as the function is running; a new copy of the variable is created each time the function is called. static: a static variable is local to the function, test, or compiled module in which it is declared. The variable retains its value until the test is terminated by an Abort command. public: a public variable can be declared only within a test or module, and is available to all functions, tests, and compiled modules. extern: an extern declaration indicates a reference to a public variable declared outside of the current test or module.
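A sketch illustrating the four classes (names are illustrative):

```
public total = 0;            # available to all tests and compiled modules

function add_order (amount)
{
    auto local_sum;          # new copy on each call, local to the function
    static calls;            # local to this function, keeps its value between calls
    calls++;
    local_sum = amount;
    total += local_sum;
    return calls;
}

# in another test or module:
# extern total;              # reference to the public variable declared elsewhere
```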
Q:How do you declare constants? The const specifier indicates that the declared value cannot be modified. The class of a constant may be either public or static. If no class is explicitly declared, the constant is assigned the default class public. Once a constant is defined, it remains in existence until you exit WinRunner. The syntax of this declaration is: [class] const name [= expression];
Q:How do you declare arrays? The following syntax is used to define the class and the initial expression of an array. Array size need not be defined in TSL. class array_name [ ] [=init_expression] The array class may be any of the classes used for variable declarations (auto, static, public, extern).
Q: How do you load and unload a compiled module? In order to access the functions in a compiled module you need to load the module. You can load it from within any test script using the load command; all tests will then be able to access its functions until you quit WinRunner or unload the compiled module. You can load a module either as a system module or as a user module. A system module is generally a closed module that is invisible to the tester: it is not displayed when it is loaded, cannot be stepped into, and is not stopped by a pause command. A system module is not unloaded when you execute an unload statement with no parameters (global unload). load (module_name [, 1|0] [, 1|0]); The module_name is the name of an existing compiled module. Two additional, optional parameters indicate the type of module. The first parameter indicates whether the module is a system module or a user module: 1 indicates a system module; 0 indicates a user module (default = 0). The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded: 1 indicates that the module will close automatically; 0 indicates that the module will remain open (default = 0). The unload function removes a loaded module or selected functions from memory. It has the following syntax: unload ( [ module_name | test_name [ , "function_name" ] ] );
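A minimal sketch (the module path is illustrative):

```
# load a user module (first 0) and keep it open in the WinRunner window (second 0)
load ("C:\\MyAppFolder\\flt_lib", 0, 0);

# ... call functions defined in flt_lib here ...

unload ("C:\\MyAppFolder\\flt_lib");   # remove the module from memory
```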
Q: Why do you use the reload function? If you make changes in a module, you should reload it. The reload function removes a loaded module from memory and reloads it (combining the functions of unload and load). The syntax of the reload function is: reload ( module_name [, 1|0] [, 1|0] ); The module_name is the name of an existing compiled module. Two additional optional parameters indicate the type of module. The first parameter indicates whether the module is a system module or a user module: 1 indicates a system module; 0 indicates a user module (default = 0). The second optional parameter indicates whether a user module will remain open in the WinRunner window or will close automatically after it is loaded: 1 indicates that the module will close automatically; 0 indicates that the module will remain open (default = 0).
Q: Write and explain a compiled module. Write TSL functions for the following interactive modes: i. Creating a dialog box with any message you specify and an edit field. ii. Creating a dialog box with a list of items and a message. iii. Creating a dialog box with an edit field, a check box, an execute button, and a cancel button. iv. Creating a browse dialog box from which the user selects a file. v. Creating a dialog box with two edit fields, one for login and another for password input.
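TSL provides built-in functions for most of these dialogs; a sketch follows (prompts and item lists are illustrative; mode iii is typically built with create_custom_dialog, whose longer argument list is omitted here):

```
# i. message plus an edit field
name = create_input_dialog ("Enter the agent name:");

# ii. list of items with a message
flight = create_list_dialog ("Select Flight", "Available flights:", "F100,F101,F102");

# iv. browse dialog from which the user selects a file
data_file = create_browse_file_dialog ("*.xls");

# v. login and password input fields
create_password_dialog ("Login:", "Password:", login_out, password_out);
```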
Q: How have you used WinRunner in your project? Yes, I have been using WinRunner to create automated scripts for GUI, functional, and regression testing of the AUT.

Q: Explain the WinRunner testing process? The WinRunner testing process involves six main stages: Create the GUI map file so that WinRunner can recognize the GUI objects in the application being tested. Create test scripts by recording, programming, or a combination of both; while recording tests, insert checkpoints where you want to check the response of the application being tested. Debug tests: run tests in Debug mode to make sure they run smoothly. Run tests: run tests in Verify mode to test your application. View results: determine the success or failure of the tests. Report defects: if a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window.
Q:What is contained in the GUI map? WinRunner stores information it learns about a window or object in a GUI Map. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested. Each of these objects in the GUI Map file will be having a logical name and a physical description. There are 2 types of GUI Map files. Global GUI Map file: a single GUI Map file for the entire application. GUI Map File per Test: WinRunner automatically creates a GUI Map file for each test created.
Q:How does WinRunner recognize objects on the application? WinRunner uses the GUI Map file to recognize objects on the application. When WinRunner runs a test, it uses the GUI map to locate objects. It reads an object’s description in the GUI map and then looks for an object with the same properties in the application being tested.
Q: Have you created test scripts, and what is contained in the test scripts? Yes, I have created test scripts. A test script contains statements in Mercury Interactive's Test Script Language (TSL). These statements appear as a test script in a test window. You can then enhance your recorded test script, either by typing in additional TSL functions and programming elements or by using WinRunner's visual programming tool, the Function Generator.
Q:How does WinRunner evaluate test results? Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window.
Q: Have you performed debugging of the scripts? Yes, I have performed debugging of scripts. We can debug a script by executing it in Debug mode. We can also debug a script using the Step, Step Into, and Step Out functionalities provided by WinRunner.
Q: How do you run your test scripts? We run tests in Verify mode to test the application. Each time WinRunner encounters a checkpoint in the test script, it compares the current data of the application being tested to the expected data captured earlier. If any mismatches are found, WinRunner captures them as actual results.
Q:How do you analyze results and report the defects? Following each test run, WinRunner displays the results in a report. The report details all the major events that occurred during the run, such as checkpoints, error messages, system messages, or user messages. If mismatches are detected at checkpoints during the test run, you can view the expected results and the actual results from the Test Results window. If a test run fails due to a defect in the application being tested, you can report information about the defect directly from the Test Results window. This information is sent via e-mail to the quality assurance manager, who tracks the defect until it is fixed.
Q:What is the use of Test Director software? TestDirector is Mercury Interactive’s software test management tool. It helps quality assurance personnel plan and organize the testing process. With TestDirector you can create a database of manual and automated tests, build test cycles, run tests, and report and track defects. You can also create reports and graphs to help review the progress of planning tests, running tests, and tracking defects before a software release.
Q: Have you integrated your automated scripts with TestDirector? When you work with WinRunner, you can choose to save your tests directly to your TestDirector database; alternatively, while creating a test case in TestDirector, we can specify whether the script is automated or manual. If it is an automated script, TestDirector will build a skeleton for the script that can later be modified into one that can be used to test the AUT.

Q: What are the different modes of recording? There are two types of recording in WinRunner. Context Sensitive recording records the operations you perform on your application by identifying Graphical User Interface (GUI) objects. Analog recording records keyboard input, mouse clicks, and the precise x- and y-coordinates traveled by the mouse pointer across the screen.

Q: What is the purpose of loading WinRunner add-ins? Add-ins are used in WinRunner to load functions specific to the particular add-in into memory. While creating a script, only the functions in the selected add-in will be listed in the function generator, and while executing the script only the functions in the loaded add-in will be executed; otherwise WinRunner will give an error message saying it does not recognize the function.

Q: What are the reasons that WinRunner fails to identify an object on the GUI? WinRunner can fail to identify an object in a GUI for various reasons: the object is not a standard Windows object, or, if the browser used is not compatible with the WinRunner version, the GUI Map Editor will not be able to learn any of the objects displayed in the browser window.
Q:What is meant by the logical name of the object? An object’s logical name is determined by its class. In most cases, the logical name is the label that appears on an object.
Q:If the object does not have a name then what will be the logical name? If the object does not have a name then the logical name could be the attached text.
Q: What is the difference between the GUI map and GUI map files? The GUI map is actually the sum of one or more GUI map files. There are two modes for organizing GUI map files. Global GUI Map file: a single GUI map file for the entire application. GUI Map File per Test: WinRunner automatically creates a GUI map file for each test created. A GUI map file is a file containing the windows and objects learned by WinRunner, with their logical names and physical descriptions.
Q:How do you view the contents of the GUI map? GUI Map editor displays the content of a GUI Map. We can invoke GUI Map Editor from the Tools Menu in WinRunner. The GUI Map Editor displays the various GUI Map files created and the windows and objects learned in to them with their logical name and physical description.
Q: Do you learn all the objects in a window or only selected ones? If we are learning a window, WinRunner automatically learns all the objects in the window; otherwise we identify only those objects in the window that need to be learned, since we will be working with only those objects while creating scripts.
Q: How do you compare the value of a textbox in WinRunner? The problem: there is a textbox on page 1; after clicking the Submit button, the value of the textbox is displayed on page 2 as static text. How do you compare the value of the textbox from page 1 with the value on page 2? Capture the value from the textbox on page 1 and store it in a variable (say a). After clicking the Submit button, when the value is displayed on page 2 as static text, capture it from the screen area using a get-text checkpoint and store it in a second variable (say b). Then compare the two variables.

Q: WinRunner with a combo box. Problem: the application has combo boxes containing values, and you need to select item 4 in the first combo box to run the test scenario. How do you get the value of the selected combo box?
Answer1: Use the GUI Spy and compare the values in the Spy with the values in the GUI map for the physical attributes of the TComboBox_* objects. It appears that WinRunner is recording an attribute to differentiate combobox_1 from combobox_0 that is *dynamic* rather than static. You need to find a physical property of all the combo boxes that is constant between refreshes of the app and unique for each combo box (handle is an example of a BAD one). That is the property you need to have recorded in your GUI map, in addition to the physical properties that were recorded for the first combo box.
Answer2: Go through the following script; it will help:
function app_data(dof)
{
    report_msg ("application data entry");
    set_window ("Flight Reservation", 6);
    list_get_items_count ("Fly From:", flyfromc);
    list_get_items_count ("Fly To:", flytoc);
    report_msg (flyfromc);
    report_msg (flytoc);
    # nested loops over every Fly From / Fly To combination
    # (the loop structure was garbled in the original posting and is reconstructed)
    for (i = 0; i < flyfromc; i++)
    {
        for (j = 0; j < flytoc; j++)
        {
            set_window ("Flight Reservation", 5);
            list_select_item ("Fly From:", "#"i);   # item number i
            list_select_item ("Fly To:", "#"j);     # item number j
            obj_mouse_click ("FLIGHT", 42, 20, LEFT);
            set_window ("Flights Table", 1);
            list_get_items_count ("Flight", flightc);
            list_activate_item ("Flight", "#"0);    # first flight in the table
            set_window ("Flight Reservation", 5);
            edit_set ("Name:", "ajay");
            button_press ("Insert Order");
        }
    }
}

Answer1 (loading the GUI map file from a startup script):
# startup: load the GUI map file
GUI_unload_all;
if (GUI_load ("C:\\Program Files\\Mercury Interactive\\WinRunner\\EMR\\EMR.gui") != 0)
{
    pause ("unable to open C:\\Program Files\\Mercury Interactive\\WinRunner\\EMR\\EMR.gui");
    texit;
}
# end loading GUI
Note: you cannot set a path for the GUI map file in WinRunner other than by using a Temporary GUI Map File.
Answer2: You might suggest to your boss that the GUI map is universal to all machines, even though in his view each machine must have its own local script. Even if you are testing different versions of the same software, the local machine can be made "aware" of which software version it is running and know which GUI map to load from your server. I run a lab with 30 test machines, each with its own copy of the script(s) from the server, but using one master GUI map per software roll. As for setting the search path on the local machine, you can force that in the setup of each machine: go to Tools > Options > General Options > Folders. There you can add, delete, or move folders around at will. WinRunner searches in the order in which the folders are listed, from the top down. "Dot" means search in the current directory, whatever that may be at the time.
Q: WinRunner: How do you check the tab order? For the WinRunner sample application:
set_window ("Flight Reservation", 7);
# tab through the fields in the expected order; each obj_type call returns E_OK
# only if it succeeds (the "<kTab>" keystrokes were lost in the original posting
# and are assumed here)
if (E_OK == obj_type ("Date of Flight:", "<kTab>"))
 if (E_OK == obj_type ("Fly From:", "<kTab>"))
  if (E_OK == obj_type ("Fly To:", "<kTab>"))
   if (E_OK == obj_type ("Name:", "<kTab>"))
    if (E_OK == obj_type ("Date of Flight:", "<kTab>"))
    {
        report_msg ("Ok");
    }
Q: WinRunner: Why is a "Bitmap Checkpoint" not working with the framework? A bitmap checkpoint is dependent on the monitor resolution; it depends on the machine on which it was recorded. Unless you are using a machine with a screen of the same resolution and settings, it will fail. Run it once in Update mode on your machine: it will be updated for your system, and from then on it will pass.
Q: How do you plan automation testing to implement a keyword-driven methodology using WinRunner 8.2? Keyword-driven testing refers to an application-independent automation framework. This framework requires the development of data tables and keywords, independent of the test automation tool used to execute them and of the test script code that "drives" the application under test and the data. Keyword-driven tests look very similar to manual test cases: the functionality of the application under test is documented in a table, as well as in step-by-step instructions for each test. Suppose you want to test a simple application like Calculator and want to perform 1+3=4; you would then design a framework as follows:
The steps are associated with manual test case execution. Now write functions for all the common framework steps required by your test cases. Your representation may differ according to your requirements and the tool used.

Q: How does WinRunner invoke tests on a remote machine? Steps to call WinRunner on a remote machine: 1) Send a file to a particular folder on the remote machine (this may contain your test parameters). 2) Write a shell-script listener and keep it running at all times on the remote host (this script watches for the file in the folder mentioned in step 1). 3) Write a batch file that invokes WinRunner with the test name, and keep it on the remote machine. 4) Call the batch file through the shell script whenever the file from step 1 appears.
Q: WinRunner: How to connect to ORACLE Database without TNS?
The following code would help with the above problem.
tblName = getvar("curr_dir") & table;
ddt_close_all_tables();
resConnection = "";
db_disconnect("session");
rc = ddt_open(tblName, DDT_MODE_READ);
if (rc != E_OK)
    pause("Unable to open file");
else
{
    dvr = ddt_val(tblName, "DRIVERNAME");
    tnsName = ddt_val(tblName, "SERVER");
    user = tolower(ddt_val(tblName, "UID"));
    pass = tolower(ddt_val(tblName, "PWD"));
    host = ddt_val(tblName, "HOSTNAME");
    port = ddt_val(tblName, "PORT");
    pro = toupper(ddt_val(tblName, "PROTOCOL"));
    resConnection = db_connect("session", "driver=" dvr ";Database=" tnsName ";hostname=" host ";port=" port ";protocol=" pro "; uid=" user "; pwd=" pass ";");
    if (resConnection != 0)
    {
        report_msg("There is a problem in connecting to the Database = " & tnsName & ", please check it..");
        treturn;
    }
    else
    {
        report_msg("Connection to the Database is successful..");
        rsEQ1 = db_execute_query("session", "your database query", record_number1);
    }
    db_disconnect("session");
}
How to use this: assume you have saved the script in c:\winrunner as dbconnect, and save the data table at the same location, i.e., c:\winrunner, as dbdetails.xls.
Call dbconnect from another script that is also saved at the same location, c:\winrunner: call dbconnect("dbdetails.xls"); Because the above script uses the getvar("curr_dir") function to get the current directory, it looks in that same location for the data table.
Q: WinRunner: How to verify data in an Excel spreadsheet? [A list box displays report names, and below it a multi-line text box displays the description corresponding to each report. All the descriptions can be retrieved with a for loop, but they have to be verified against an Excel spreadsheet where the report descriptions are stored. How to proceed?]
Q: How to define a variable in a script whose value is stored in an Excel sheet, using WinRunner? [Field A1 contains {class: push_button, label: OK, ...}. Field B1 contains OK = button_press(OK); where OK holds the value of field A1. OK should act as a variable containing the value of field A1.]
Answer1: There is no need to define any variable that is going to be used in the test script; you can just start using it directly. So, if you want to assign a value to a dynamic variable whose name is taken from the data table, you can use the "eval" function. Example: eval( ddt_val(Table,"Column1") & "=\"water\";" ); # The statement above takes the variable name from the data table and assigns "water" as its value.
Answer2: Write a function that looks down a column in a table, grabs the value in the next cell, and returns it. However, you would then need to call button_press(tbl_convert("OK")); rather than button_press("OK");, where tbl_convert takes the value from A1 (in your example) and returns the value in B1. One other difficulty would arise if you wanted to use the same name for objects in different windows (e.g., an "OK" button in multiple windows). You could expand your function to handle this with a separate column that carries the window name.
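The tbl_convert idea in Answer2 is just a keyed lookup, extended by the window name. A hypothetical sketch in C++ (window names and descriptors here are invented for illustration):

```cpp
#include <map>
#include <string>
#include <utility>

// Lookup table keyed by (window name, logical name) -> physical descriptor,
// mirroring the extra window-name column suggested in the answer.
using ObjKey = std::pair<std::string, std::string>;

static const std::map<ObjKey, std::string> kObjectTable = {
    {{"Login",   "OK"}, "{class: push_button, label: OK}"},
    {{"Confirm", "OK"}, "{class: push_button, label: Confirm_OK}"},
};

// Returns the descriptor for the given window/name pair, or "" when absent.
std::string tbl_convert(const std::string& window, const std::string& name) {
    auto it = kObjectTable.find(ObjKey(window, name));
    return it == kObjectTable.end() ? std::string("") : it->second;
}
```

The same "OK" key can now resolve to different physical descriptions per window, which is exactly the ambiguity the answer warns about.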
Q: WinRunner: How to change a physical description? [Problem: the application contains different objects, but the location property is different/changing. For example, there is one HTML table containing objects with these physical properties. For one object: { class: object, MSW_class: html_text_link, html_name: "View/Edit", location: 0 } and for another: { class: object, MSW_class: html_text_link, html_name: "View/Edit", location: 1 }. When recording, WinRunner gives "View/Edit" as the logical name, e.g.: web_image_click("View/Edit", 11, 7); When running the script, WinRunner cannot identify which object to click and reports an error. P.S. WinRunner 7.5 with Java and web add-ins on the Windows XP operating system with the IE 6.0 browser (SP2).]
Answer1: When the name of the html_table changes dynamically, we have to adjust the physical description. While recording, the name clicked inside the table becomes the name of the html_table in the GUI map. Change only the logical name in the GUI map. Then, in code, use the gui_* functions to get the logical name of this html_table and its physical description, delete the object from the GUI map through code, and re-add the logical name and the physical description you retrieved using the GUI_add function.
Answer2: Just change the logical names to unique names. WinRunner will then recognize each object separately using the physical description and the location property.
Answer3:
i = 0;
web_link_click("{ class: object, MSW_class: html_text_link, html_name: \"View/Edit\", location: " & i & "}");
i = 1;
web_link_click("{ class: object, MSW_class: html_text_link, html_name: \"View/Edit\", location: " & i & "}");
Q: Is there any function in WinRunner that will clear the history of the browser? [Actually the script works fine when you execute it the first time. But when you execute it a second time, it goes directly into the application without asking for login credentials, taking the path from the browser history, so the script fails. It works fine if I clear the history of the browser before each run.] This is not a matter of clearing the history. In any case it should not allow you to log in to the application without entering login credentials; I think this is an application bug. To clear the history/cookies, use dos_system with: del "C:\Documents and Settings\%USERNAME%\Cookies\*your_cookie\site_name*"
Q: WinRunner: How to read dynamic names of html_link
Answer1: Use the following steps: 1) Using the function web_tbl_get_cell_data, read the link. 2) Use the GUI_add function to add it to the GUI map. 3) Use the GUI_save function to save it. 4) Now call web_link_click() and pass the variable that you got in step 1.
Answer2: You can try this method; it reduces the complexity and there is no need to update the GUI map file. Use the web_tbl_get_cell_data() function to get the description of the link, and use the variable in the web_link_click() function. web_tbl_get_cell_data("Tablename", "#RowNo", "#ColumnNo", 0, cell_value, cell_value_len); web_link_click(cell_value);
Answer3:
1. Get the number of rows in your table: tbl_get_rows_count("tableName", rows);
2. Write a for loop: for (i = 0; i <= rows; i++)
3. Get the text of the specified cell by column and row: tbl_get_cell_data("Name", "#" & i, column, var1);
4. Compare with an if condition.
5. If true, set a flag and store the row number in variable m.
6. End the loop and write: tbl_set_selected_cell("tableName", "#" & m, column); type("");
Example:
tbl_get_cols_count("Name", cols);
tbl_get_rows_count("Name", rows);
for (i = 2; i <= rows; i++)
{
    for (j = 1; j <= cols; j++)
    {
        tbl_get_cell_data("Name", "#" & i, "#" & j, var1);
        if (var1 == Supplier)
        {
            m = i;
        }
    }
}
tbl_set_selected_cell("Name", "#" & m, "#" & j);
type("");
(Note: after the loop ends, j holds cols + 1; store the matching column in its own variable if you need the exact cell.)
Q: Is it possible to use WinRunner for testing .aspx or .NET forms? You cannot test a .NET application using WinRunner 7.6 or prior versions, because WinRunner has no add-in for .NET. ASP.NET forms are code for the server-side part of an application; if what is generated on the front end is normal HTML/JavaScript/Java/ActiveX, it should not be a problem to test the application using WinRunner.
Q: Can WinRunner put the test results in a file? Yes, you can put the results into file format (the file extension is .txt). In the Test Results window, select Tools menu > Text Report, and you will get a text file. Another option is to write the results out to an HTML file.
Q: WinRunner: What is the difference between a virtual object and a custom object?
Answer1: A virtual object is an object that is not recognized by WinRunner. Recording against it produces statements like obj_mouse_click, which work for that instance only. To make the object work every time, we must explicitly instruct WinRunner to recognize it, with the help of the Virtual Object Wizard. Note: the virtual object must be mapped to a relevant standard class available in WinRunner. Example: a button on the toolbar of an application window can be mapped to the standard class PUSH_BUTTON. Once this is done, the recorded TSL statement becomes button_press("logicalName"), which is permanent in your WinRunner setup. GUI Map Configuration: this helps when WinRunner is not able to locate an object, e.g., when two or more objects have the same logical name and physical properties. In that case you instruct WinRunner to uniquely identify the specific object by setting obligatory and optional properties and the MSW_id, with the help of GUI Map Configuration.
Answer2: We use the Virtual Object Wizard in WinRunner to map a bitmap object; while recording, WinRunner generates obj_mouse_click for it. A custom object is an object that does not belong to one of WinRunner's standard classes. We use GUI Map Configuration to map a custom object to a standard WinRunner class.
Answer3: Virtual object: an image or a portion of the window is made a virtual object so that the functions available for standard objects can be used, simply for convenience in scripting; the virtual object captures the coordinates of the object. Custom object: a general object that does not belong to a WinRunner class; we map this general object to a WinRunner standard class.
Q: How to create an object of an Excel file in WinRunner? The object part, i.e. the actual Excel table, is created via the WinRunner Data Table and is stored in the same directory as the WinRunner script. Of course, you may create the Excel spreadsheet yourself and reference it from your script manually; this is also mentioned in the User Guide. The Data Table Wizard mentioned earlier will link this object to the script and assist in parameterizing the data from the Excel table object.
Q: How to use values returned by VB script in winrunner? From your VB script create a file system object to write output to a text file: Dim fso, MyFile Set fso = CreateObject("Scripting.FileSystemObject") Set MyFile = fso.CreateTextFile("c:\testfile.txt", True) MyFile.WriteLine("This is a test.") MyFile.Close Then use file_open and file_getline functions in WinRunner to read the file.
Q: WinRunner: What tag is required to allow me to identify an HTML table?
Indeed, it is better to ask the developer to put an ID every place it is possible. This avoids a lot of trouble and helps the reusability of your script (consider localization).
Q: WinRunner: How to work with file-type objects using WinRunner functions? When recording, WinRunner does not record file-type objects. However, you can manually insert file-type statements into your test script using the web_file_browse and web_file_set functions. Q: WinRunner: Are Java add-ins required for a web-based application? You do not need any Java add-in to test simple JSP pages. If you are using Java applets with Swing or AWT components drawn on the applet, then you need the Java add-in; otherwise the simple web add-in will serve the purpose.
Q: How to generate a unique name?
function unique_str()
{
    auto t, tt, leng, i;
    t = get_time();
    leng = length(t);
    tt = "";
    for (i = 1; i <= leng; i++)
    {
        tt = tt & (sprintf("%c", 97 + i + substr(t, i, 1)));
    }
    return tt;
}
Q: WinRunner: How to access the last window brought up? [set_window("{class: window, active: 1}"); rc = win_get_info("{class: window, active: 1}", property, result); Is there something, or some script, that can determine the LAST WINDOW DISPLAYED or OPENED on the desktop, in order to use that information to gather the label?] There are a couple of solutions, depending on what you know about the window. If you know distinguishing characteristics of the window, use them and just directly describe the GUI attributes. I assume that you do not have these, or you would likely have already done so. If not, there is a brute-force method. Iterate over all of the open windows prior to the new window opening and grab their handles. After your new window opens, iterate again. The 'extra' handle points to your new window. You can use it in the GUI description directly to manipulate the new window. As I said, a bit brutish, but it works. You can use the same technique when you have multiple windows with essentially the same descriptors and need to iterate over them in the order in which they appeared. Any object (or window) can be described by its class and its iterator. Ask yourself: if I wanted to address each of the individuals in a room and had no idea what their names were, but would like to do so in a consistent way, would it not be sufficient to say 'person who came into the room first', 'person who came into the room second', or alternately 'person who is nearest the front on the left', 'person who is second nearest the front on the left'? These are perfectly good ways of describing the individuals because we do two things: limit the elements we want to describe (people) and then give an unambiguous way of enumerating them.
So, to apply this to your issue: you want to do an 'exist' check on a dynamically described element (a window, in your case). So you make a loop and ask 'window #0, do you exist?'; if the answer is yes, you ask for the handle, store it, and repeat the loop. Eventually you get to window n, you ask if it exists, the answer is no, and you now have a list of the handles of all of the existing windows. Note that there will be n windows (0 to n-1 makes a count of n). You may need to brush up on programmatically describing an object (or window); the syntax is a little lengthy but extremely useful once you get the feel for it. It really frees you from only accessing objects that are already described in the GUI map. Try this as a starting point; you'll need to add storing and sorting the handles yourself:
i = 0;
finished = FALSE;
while (finished == FALSE)
{
    if (win_exists("{class: window, location: " & i & "}") == E_OK)
    {
        win_get_info("{class: window, location: " & i & "}", "handle", handle);
        printf(" handle was " & handle);
        i++;
    }
    else
    {
        finished = TRUE;
    }
}
Q: WinRunner: How to identify dynamic objects in web applications? Check whether the object is present inside a table. If yes, get the table name and the location of that object. Then, by using the web_obj_get_child_item function, you can get the description of the object. Once you have the description, you can do any operation on that object. Q: WinRunner: How to delete files from a drive? Here is a simple method using DOS, where speech_path_file is a variable. Example:
# -- initialize vars
speech_path_file = "C:\\speech_path_verified.txt";
dos_system("del " & speech_path_file);
Q: WinRunner: Can we start automation before getting the build? The manual test cases should be written BEFORE the application is available, and so should the automation.
Automation itself is a development process; you start the development BEFORE everything is ready. You can start to draw up the structure and maybe some basic code. And there are benefits to starting automation early: e.g., if two windows have the same name and structure and you think that will be trouble, you may ask the developer to put in some unique identifiers (for example, a static with a different MSW_id). If you (and your boss) really treat automation as part of development, you should start as early as possible; in this phase it is like the analysis and design phase of the product.
Q: How to create a GUI map dynamically?
gmf = "c:\\new_file_name.gui";
GUI_save_as("", gmf);
rc = GUI_add(gmf, "First_Window", "", "");
rc = GUI_add(gmf, "First_Window", "new_obj", "");
rc = GUI_add(gmf, "First_Window", "new_obj", "{label: Push_Me}");
Q: WinRunner script for WaitBusy
# only needs to be loaded once, best in a startup script or wherever
load(getenv("M_ROOT") & "\\lib\\win32api", 1, 1);

# returns 1 if the app has a busy cursor, 0 otherwise
public function IsBusy(hwnd)
{
    const HTCODE = 33554433; # 0x2000001
    const WM_SETCURSOR = 32;
    return SendMessageLong(hwnd, WM_SETCURSOR, hwnd, HTCODE);
}

# wait for the app to not be busy, with optional timeout
public function WaitBusy(hwnd, timeout)
{
    const HTCODE = 33554433; # 0x2000001
    const WM_SETCURSOR = 32;
    if (timeout)
        timeout *= 4;
    while (--timeout)
    {
        if (SendMessageLong(hwnd, WM_SETCURSOR, hwnd, HTCODE) == 0)
            return E_OK;
        wait(0, 250); # 1/4 second
    }
    return -1; # timeout error code
}

# wait busy, given a window instead of an hwnd
public function WinWaitBusy(win, timeout)
{
    auto hwnd;
    win_get_info(win, "handle", hwnd);
    return WaitBusy(hwnd, timeout);
}

# example of how to use it...
set_window(win);
WinWaitBusy(win);
Q: WinRunner script to get Min and Max
public function fnMinMaxWinrunner(in action)
{
    auto handle;
    const SW_MAXIMIZE = 3;
    const SW_MINIMIZE = 6;
    load_dll("user32.dll");
    # extern int ShowWindow(long, int);
    win_get_info("{class: window, label: \"!WinRunner.*\"}", "handle", handle);
    switch (action)
    {
        case "SW_MINIMIZE":
        {
            # Minimizing WinRunner
            ShowWindow(handle, SW_MINIMIZE);
            wait(2);
            break;
        }
        case "SW_MAXIMIZE":
        {
            # Maximizing WinRunner
            ShowWindow(handle, SW_MAXIMIZE);
            wait(2);
            break;
        }
    }
    unload_dll("user32.dll");
}
Q: Type special chars in WinRunner
# type special chars as they are, instead of interpreting them
# data can be read from a data file and then typed into an app
#
# escape the following chars: <> - +
# in a string, quote " and backslash \ will already be escaped
#
# generally there won't be a lot of special chars, so
# use index instead of looping through each character
#
function no_special(data)
{
    auto esc_data, i, p;
    esc_data = "";
    while (1)
    {
        p = 32000;
        i = index(data, "-");
        p = i ? (i
Q: Clean-up script/function from WinRunner
public function cleanup(in win)
{
    auto i;
    auto edit;
    auto atti;
    set_window(win);
    for (i = 0; ; i++)
    {
        edit = "{class: edit, index: " i "}";
        if (obj_exists(edit) != E_OK)
            break;
        obj_get_info(edit, "displayed", atti);
        if (atti == 0)
            break;
        obj_get_info(edit, "enabled", atti);
        if (atti == 0)
            continue;
        edit_get_text(edit, atti);
        if (atti != "")
            edit_set_text(edit, "");
    }
}
Q: How to convert a variable from ASCII to string? If you want to generate characters from their ASCII codes, you can use the sprintf() function. Example: sprintf("%c", 65) will generate "A". If you want to add a number onto the end of a string, you can simply stick it next to the string. Example: ball = 5; print "and the winning number is: " ball; Putting them together can get some interesting effects. Example:

public arr[] = {72,101,108,108,111,32,102,114,111,109,32,77,105,115,104,97};
msg = "";
for (i in arr)
    msg = msg sprintf("%c", arr[i]);
print msg;

Hmmm, an interesting effect from the elements not being visited in order (a TSL for-in loop does not guarantee index order). I'll try it again with an ordered loop:

msg = "";
for (i = 0; i < 16; i++)
    msg = msg sprintf("%c", arr[i]);
print msg;

Q: How to read database values in WinRunner through helper functions? (outline) Define an ODBC connection string, e.g.:

gstrConnString = "DRIVER={Oracle in OraHome92};SERVER=MANOJ;UID=BASECOLL;PWD=BASECOLL;DBA=W;APA=T;EXC=F;XSM=Default;FEN=T;QTO=T;FRC=10;FDL=10;LOB=T;RST=T;GDE=F;FRL=Lo;BAM=IfAllSuccessful;MTS=F;MDI=Me;CSR=F;FWC=F;PFC=10;TLO=O;";

then a query and a target column:

strSql = "Select PRODUCT_CODE from PRODUCT_MASTER where PRODUCT_NAME = 'WINE'";
strColumn = "PRODUCT_CODE";
rc = GetDBColumnValue(strSql, strColumn, …);

The helper functions in this section wrapped db_connect(), db_execute_query(), db_get_field_value(), db_get_row(), db_get_headers() and db_disconnect(), returning an error code and a message in strLastError when the record count was zero or a call failed. The section also included a time-difference utility built on the constants SECOND = 1, MINUTE = 60 * SECOND, HOUR, DAY and YEAR, reducing a difference step by step, e.g.: if (timeDiff >= YEAR) { remainder = timeDiff % YEAR; years = (timeDiff - remainder) / YEAR; timeDiff = remainder; }
Q: Working with QTP on a web application developed in .NET. Trying to prepare scripts, but when recording and running the script, most of the links are not recognized by QTP; the links are dynamically generated and also appear in different places. What should we do? Try changing the Web Event Recording Configuration: go to Tools > Web Event Recording Configuration, and change the setting to High. If the links are dynamically generated, try changing the recorded object's properties. After recording, right-click on the recorded object and select Object Properties. From this screen you can add/remove attributes for playback that were previously recorded. Focus on attributes of the object that are not specific to location and do not change (HTML ID, maybe).
Q: How to verify animations (GIF files) present in applications using WinRunner? WinRunner doesn't support testing that technology. You will need to find another tool to do that; QuickTest may be a possible choice. Go to the Mercury site and look at the list of supported technologies for QuickTest Pro 6.5 and above (not Astra). Q: WinRunner: Should I sign up for a course at a nearby educational institution? When you're employed, the cheapest or free education is sometimes provided on the job, by your employer, while you are getting paid to do a job that requires the use of WinRunner and many other software testing tools. If you're employed but have little or no time, you could still attend classes at nearby educational institutions. If you're not employed at the moment, then you've got more time than everyone else, so that's when you definitely want to sign up for courses at nearby educational institutions. Classroom education, especially non-degree courses at local community colleges, tends to be cheap.
Q: How important is QTP in automated testing? Is manual testing alone (with TestDirector) enough, or do we require automated tools in each and every project? What are the advantages of QTP? Most projects that are being automated should not be, because they're not ready to be. Many managers assume that automated functional GUI testing will replace testers. It won't. It just runs the same tests over, and over, and over. When changes are made to the system under test, those changes either break the existing automated tests or are not covered by them. Automated functional GUI testing is often a waste of time. TestDirector is not used for executing any actual test activity; it is a test management tool used for requirements management, test planning, test lab, and defects management. Even if the individual test cases are not automated, TestDirector can make life much easier during the test cycles. These two are also good reads on the topic: Automation Myths; Test Automation Snake Oil. You can find information about QTP here: http://www.mercury.com/us/products/quality-center/functional-testing/...
Q: Tell me about TestDirector®. TestDirector® is a software tool that helps software QA professionals gather requirements; plan, schedule, and run tests; and manage and track defects/issues/bugs. It is a single browser-based application that streamlines the software QA process. TestDirector's Requirements Manager links test cases to requirements, ensures traceability, and calculates what percentage of the requirements are covered by tests, how many of these tests have been run, and how many have passed or failed. As to planning, test plans can be created or imported for both manual and automated tests; the test plans can then be reused, shared, and preserved. TestDirector's Test Lab Manager allows you to schedule tests to run unattended, even overnight. TestDirector's Defect Manager supports the entire bug life cycle, from initial problem detection through fixing the defect and verifying the fix. Additionally, TestDirector can create customizable graphs and reports, including test execution reports and release status assessments.
Q: What is a backward compatible design? A design is backward compatible if it continues to work with earlier versions of a language, program, code, or software; changes to signals or data formats do not break existing code. For instance, a (mythical) web designer decides he should make some changes, because the fun of using JavaScript and Flash is more important (to his customers) than his backward compatible design. Or, alternatively, he decides he has to make some changes because he doesn't have the resources to maintain multiple styles of backward compatible web design. Therefore, our mythical web designer's decision will inconvenience some users, because some earlier versions of Internet Explorer and Netscape will not display his web pages properly (as there are some serious improvements in the newer versions of Internet Explorer and Netscape that make the older versions of these browsers incompatible with, for example, DHTML). This is when we say, "Our (mythical) web designer's code fails to work with earlier versions of browser software; therefore his design is not backward compatible." On the other hand, if the same mythical web designer decides that backward compatibility is more important than fun, or if he decides that he does have the resources to maintain multiple styles of backward compatible code, then obviously no user will be inconvenienced when Microsoft or Netscape make serious improvements in their web browsers. This is when we can say, "Our mythical web designer's design is backward compatible."
Q: How to get the compiler to create a DLL? In the Borland compiler, create a "console DLL". A console application is one that does not have a GUI window message queue component. This works well and has a very small footprint.
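As an illustrative sketch of such a DLL source (the function names are invented, and the #ifdef fallback exists only so the snippet also compiles with non-Windows compilers):

```cpp
#include <cstring>

// Export decoration applies when building the DLL on Windows;
// the fallback lets the same source compile elsewhere for illustration.
#ifdef _WIN32
  #define EXPORTED extern "C" __declspec(dllexport)
#else
  #define EXPORTED extern "C"
#endif

// Minimal exported functions: no window, no message queue, so the
// resulting DLL stays "console" style with a small footprint.
EXPORTED int WrAddNumbers(int a, int b) {
    return a + b;
}

EXPORTED int WrStringLength(const char* s) {
    return s ? (int)std::strlen(s) : 0;
}
```

Functions exported this way can then be declared in WinRunner with extern statements, as the following answers show.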
Q: How to export DLL functions so that WinRunner can recognize them? Create the following definition in the standard header file: #define WR_EXPORTED extern "C" __stdcall __declspec(dllexport) A function written with it then looks something like this: WR_EXPORTED UINT WrGetComputerName( ) { . . . }
Q: How to pass parameters between WinRunner and the DLL function? Passing Strings (a DLL function): In WinRunner, extern int WrTestFunction1( in string ); In the DLL, WR_EXPORTED int WrTestFunction1( char *lcStringArg1 ) { . . . return( {some int value} ); } And then to use it in WinRunner, WrTestFunction1( "Fred" ); Receiving Strings: In WinRunner, extern int WrTestFunction1( out string <10>); #The <10> tells WinRunner how much space to use for a buffer for the returned string. In the DLL, WR_EXPORTED int WrTestFunction1( char *lcStringArg1 ) { . . . {some code that populates lcStringArg1}; . . . return( {some int value} ); } And then to use it in WinRunner, WrTestFunction1( lcString1 ); # lcString1 now contains a value passed back from the DLL function
Passing Numbers (a DLL function): In WinRunner, extern int WrTestFunction1( in int ); In the DLL, WR_EXPORTED int WrTestFunction1( int lnIntegerArg1 ) { . . . return( {some int value} ); } And then to use it in WinRunner, WrTestFunction1( 2 ); Receiving Numbers: In WinRunner, extern int WrTestFunction1( out int ); In the DLL, WR_EXPORTED int WrTestFunction1( int *lnIntegerArg1 ) { . . . *lnIntegerArg1 = {some number}; return( {some int value} ); } And then to use it in WinRunner, WrTestFunction1( lnNum ); # lnNum now contains a value passed back from the DLL function
Here are some example functions.

#define WR_EXPORTED extern "C" __stdcall __declspec(dllexport)
#define WR_SUCCESS 0
#define WR_FAILURE 100000
#define FAILURE 0
#define WR_STAGE_1 10000
#define WR_STAGE_2 20000
#define WR_STAGE_3 30000
#define WR_STAGE_4 40000
#define WR_STAGE_5 50000
#define WR_STAGE_6 60000
#define WR_STAGE_7 70000
#define WR_STAGE_8 80000
#define WR_STAGE_9 90000
#define MAX_USERNAME_LENGTH 256
#define HOST_NAME_SIZE 64

WR_EXPORTED UINT WrGetComputerName( LPTSTR lcComputerName )
{
    BOOL lbResult;
    DWORD lnNameSize = MAX_COMPUTERNAME_LENGTH + 1;

    // Stage 1
    lbResult = GetComputerName( lcComputerName, &lnNameSize );
    if( lbResult == FAILURE )
        return( WR_FAILURE + WR_STAGE_1 + GetLastError() );

    return( WR_SUCCESS );
}

WR_EXPORTED UINT WrCopyFile( LPCTSTR lcSourceFile, LPCTSTR lcDestFile, BOOL lnFailIfExistsFlag )
{
    BOOL lbResult;

    // Stage 1
    lbResult = CopyFile( lcSourceFile, lcDestFile, lnFailIfExistsFlag );
    if( lbResult == FAILURE )
        return( WR_FAILURE + WR_STAGE_1 + GetLastError() );

    return( WR_SUCCESS );
}

WR_EXPORTED UINT WrGetDiskFreeSpace( LPCTSTR lcDirectoryName, LPDWORD lnUserFreeBytesLo, LPDWORD lnUserFreeBytesHi, LPDWORD lnTotalBytesLo, LPDWORD lnTotalBytesHi, LPDWORD lnTotalFreeBytesLo, LPDWORD lnTotalFreeBytesHi )
{
    BOOL lbResult;
    ULARGE_INTEGER lsUserFreeBytes, lsTotalBytes, lsTotalFreeBytes;

    // Stage 1
    lbResult = GetDiskFreeSpaceEx( lcDirectoryName, &lsUserFreeBytes, &lsTotalBytes, &lsTotalFreeBytes );
    if( lbResult == FAILURE )
        return( WR_FAILURE + WR_STAGE_1 + GetLastError() );

    *lnUserFreeBytesLo = lsUserFreeBytes.LowPart;
    *lnUserFreeBytesHi = lsUserFreeBytes.HighPart;
    *lnTotalBytesLo = lsTotalBytes.LowPart;
    *lnTotalBytesHi = lsTotalBytes.HighPart;
    *lnTotalFreeBytesLo = lsTotalFreeBytes.LowPart;
    *lnTotalFreeBytesHi = lsTotalFreeBytes.HighPart;
    return( WR_SUCCESS );
}

Q: Why have TSL test code conventions? TSL code conventions are important to TSL programmers for a number of reasons:
- 80% of the lifetime cost of a piece of software goes to maintenance.
- Hardly any software is maintained for its whole life by the original author.
- TSL code conventions improve the readability of the software, allowing engineers to understand new code more quickly and thoroughly.
- If you ship your source code as a product, you need to make sure it is as well packaged and clean as any other product you create.
Q: Test script naming: Test Type + Project Name + Version Number + Module Name + Test Script Function. For example: Test Type = UAT; Project Name = MHE; Version of the Project = 3.2; Module Name = Upload; Function Name = Excel_File. So the entire file name would be UAT_MHE_3.2_Upload_Excel_File. Notes and cautions:
- Make sure the entire saved file name is below 255 characters.
- Use the underscore "_" character instead of the hyphen "-" or space character for separation.
- It is highly recommended to store the test scripts remotely in a common folder or in the TestDirector repository, accessible to the test team at any time.
- Do not use any special characters in the test script name, such as "*&^#@!", etc.
- In this document, "script" and "test script" (TSL) mean the same thing; please don't get confused.
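The convention above is mechanical enough to express as a small helper. A hypothetical C++ sketch (the field values are the ones from the example; the 255-character cap is the caution from the note):

```cpp
#include <string>

// Build a test script name per the convention:
// TestType_Project_Version_Module_Function, underscores as separators.
std::string script_name(const std::string& type, const std::string& project,
                        const std::string& version, const std::string& module,
                        const std::string& function) {
    std::string name = type + "_" + project + "_" + version + "_" +
                       module + "_" + function;
    // Caution from the convention: keep the full name under 255 characters.
    if (name.size() >= 255) name.resize(254);
    return name;
}
```

For the example fields this yields UAT_MHE_3.2_Upload_Excel_File; a real implementation might also reject the forbidden special characters rather than just truncating.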
Q: Test script directory structure: WinRunner treats a test script as a file that is stored as a directory in the operating system. The script's TSL code, header information, checklist files, results, expected results, etc. are stored in these directories for each and every script.
- Do not modify or delete anything inside these directories manually without consulting an expert.
- Try to keep scripts to 500 lines or fewer.
- While creating multiple scripts, make sure they follow the directory and subdirectory structure, i.e. every script is stored under a folder for its respective module, and a main script that calls all these scripts sits in a parent folder above them. In a nutshell: all the scripts must be organized and should follow a hierarchy.
- If a module contains more than 2 scripts, keep an Excel file in the respective folder giving details of the test scripts and a short description of their functionality. E.g., the Excel sheet can contain fields like Test Plan No., Test Script No., Description of the Test Script, Status of Last Run, and Negative or Non-negative Test.
- Also make sure that every script has a text file containing the test results of the last run.
- Script folders containing unwanted files, and results folders, must be cleaned periodically.
- Back up all the scripts (zipped) to a hard drive, CD-ROM, zip drive, etc., and keep the backup safe.
Q: Comments. All TSL script files should begin with a comment that lists the script name, description of the script, version information, date, and copyright notice:
#################################################################
# Script Name: #
# #
# Script Description: #
# #
# Version information: #
# #
# Date created and modified: #
# #
# Copyright notice #
# #
# Author: #
#################################################################
Comments generated by WinRunner: WinRunner automatically generates some comments during recording. If they make sense, leave them; otherwise, modify them accordingly. Single-line comment at the end of a line: Accessfile = create_browse_file_dialog("*.mdb"); # Opens an Open dialog for an Access table. It is mandatory to add a comment for your test call: call crea_org0001(); # Call test to create organization. It is mandatory to add a comment when you are using a public variable that is not defined in the present script: web_browser_invoke(NETSCAPE, strUrl); # strUrl is a variable defined in the init script. Note: the frequency of comments sometimes reflects poor quality of code; when you feel compelled to add a comment, consider rewriting the code to make it clearer. Comments should never include special characters such as form-feed.
Q: Creating C DLLs for use with WinRunner
These are the steps to create a DLL that can be loaded and called from WinRunner.
1. Create a new Win32 Dynamic Link Library project, name it, and click OK.
2. On Step 1 of 1, select "An empty DLL project," and click Finish.
3. Click OK in the New Project Information dialog.
4. Select File > New from the VC++ IDE.
5. Select "C++ Source File," name it, and click OK.
6. Close the newly created C++ source file window.
7. In Windows Explorer, navigate to the project directory and locate the .cpp file you created.
8. Rename the .cpp file to a .c file.
9. Back in the VC++ IDE, select the FileView tab and expand the tree under the Project Files node.
10. Select the Source Files folder in the tree and select the .cpp file you created.
11. Press the Delete key; this will remove that file from the project.
12. Select Project > Add To Project > Files from the VC++ IDE menu.
13. Navigate to the project directory if you are not already there, and select the .c file that you renamed above.
14. Select the .c file and click OK. The file will now appear under the Source Files folder.
15. Double-click on the .c file to open it.
16. Create your functions in the following format:
#include "include1.h"
#include "include2.h"
...
#include "includen.h"

#define EXPORTED __declspec(dllexport)

EXPORTED <return type> <function name> (<arg 1>, <arg 2>, ..., <arg n>)
{
    return <return value>;
}
...
EXPORTED <return type> <function name> (<arg 1>, <arg 2>, ..., <arg n>)
{
    return <return value>;
}

17. Choose Build > Build <project name>.dll from the VC++ IDE menu.
18. Fix any errors and repeat step 17.
19. Once the DLL has compiled successfully, it will be built in either a Debug or a Release directory under your project folder, depending on your settings when you built the DLL.
20. To change this setting, select Build > Set Active Configuration from the VC++ IDE menu, and select the configuration you want from the dialog. Click OK, then rebuild the project (step 17).
21. All the DLL types that you are going to create are loaded and called in the same way in WinRunner. This process will be covered in a later section.

Q: Creating C++ DLLs for use with WinRunner
Here are the steps for creating a C++ DLL:
1. Create a new Win32 Dynamic Link Library project, name it, and click OK.
2. On Step 1 of 1, select "An Empty DLL Project," and click Finish.
3. Click OK in the New Project Information dialog.
4. Select File > New from the VC++ IDE.
5. Select C++ Source File, name it, and click OK.
6. Double-click on the .cpp file to open it.
7. Create your functions in the following format:
In a C++ source file the functions must also be declared extern "C" so their names are not mangled (the same format used in the MFC sections below):

#define EXPORTED extern "C" __declspec(dllexport)

EXPORTED <return type> <function name> (<arg 1>, <arg 2>, ..., <arg n>)
{
    return <return value>;
}

8. Choose Build > Build <project name>.dll from the VC++ IDE menu.
9. Fix any errors and repeat step 8.
10. Once the DLL has compiled successfully, it will be built in either a Debug or a Release directory under your project folder, depending on your settings when you built the DLL.
11. To change this setting, select Build > Set Active Configuration from the VC++ IDE menu, and select the configuration you want from the dialog. Click OK, then rebuild the project (step 8).
12. All the DLL types that you are going to create are loaded and called in the same way in WinRunner. This process will be covered in a later section.
Q: Creating MFC DLLs for use with WinRunner
1. Create a new MFC AppWizard (DLL) project, name it, and click OK.
2. In the MFC AppWizard Step 1 of 1, accept the default settings and click Finish.
3. Click OK in the New Project Information dialog.
4. Select the ClassView tab in the Project View and expand the classes tree. You will see a class named C<ProjectName>App; expand this branch.
5. You should see the constructor function C<ProjectName>App(); double-click on it.
6. This should open the .cpp file for the project. At the very end of this file, add the following definition:

#define EXPORTED extern "C" __declspec(dllexport)

7. Below it, add your functions in the following format:

EXPORTED <return type> <function name> (<arg 1>, <arg 2>, ..., <arg n>)
{
    return <return value>;
}
...
EXPORTED <return type> <function name> (<arg 1>, <arg 2>, ..., <arg n>)
{
    return <return value>;
}
8. You will see the functions appear under the Globals folder in the ClassView tab in the Project View.
9. Choose Build > Build <project name>.dll from the VC++ IDE menu.
10. Fix any errors and repeat step 9.
11. Once the DLL has compiled successfully, it will be built in either a Debug or a Release directory under your project folder, depending on your settings when you built the DLL.
12. To change this setting, select Build > Set Active Configuration from the VC++ IDE menu, and select the configuration you want from the dialog. Click OK, then rebuild the project (step 9).
13. All the DLL types that you are going to create are loaded and called in the same way in WinRunner. This process will be covered in a later section.
Q: Creating MFC Dialog DLLs for use with WinRunner
1. Create a new MFC AppWizard (DLL) project, name it, and click OK.
2. In the MFC AppWizard Step 1 of 1, accept the default settings and click Finish.
3. Click OK in the New Project Information dialog.
4. Select the ClassView tab in the Project View and expand the classes tree. You will see a class named C<ProjectName>App; expand this branch also.
5. You should see the constructor function C<ProjectName>App(); double-click on it.
6. This should open the .cpp file for the project. At the very end of this file, add the following definition:

#define EXPORTED extern "C" __declspec(dllexport)
7. Switch to the ResourceView tab in the Project View.
8. Select Insert > Resource from the VC++ IDE menu.
9. Select Dialog from the Insert Resource dialog and click New.
10. The Resource Editor will open, showing you the new dialog. Add the controls you want to the dialog, and set the properties of the controls you added.
11. Switch to the ClassView tab in the Project View and select View > ClassWizard from the VC++ IDE menu, or double-click on the dialog you are creating.
12. The Class Wizard should appear with an "Adding a Class" dialog in front of it. Select "Create a New Class" and click OK.
13. In the New Class dialog that comes up, give your new class a name and click OK.
14. In the Class Wizard, change to the Member Variables tab and create new variables for the controls you want to pass information to and from. Do this by selecting the control, clicking Add Variable, typing in the variable name, selecting the variable type, and clicking OK. Do this for each variable you want to create.
15. Switch to the Message Maps tab in the Class Wizard. Select the dialog class from the Object IDs list, then select the WM_PAINT message from the Messages list. Click Add Function, then Edit Code. This should bring up the function body for the OnPaint function.
16. Add the following lines to the OnPaint function so it looks like the following:

void <YourDialogClass>::OnPaint()
{
    CPaintDC dc(this); // device context for painting

    this->BringWindowToTop();
    UpdateData(FALSE);

    // Do not call CDialog::OnPaint() for painting messages
}
17. Select IDOK from the Object IDs list, then select the BN_CLICKED message from the Messages list. Click Add Function, accept the default name, and click OK.
18. Add the line UpdateData(TRUE); to the function, so it looks like this:

void <YourDialogClass>::OnOK()
{
    UpdateData(TRUE);
    CDialog::OnOK();
}

19. When you are done with this, click OK to close the Class Wizard dialog and apply your changes. Your new class should appear in the Project View on the ClassView tab.
20. In the tree on the ClassView tab, double-click on the constructor function for the C<ProjectName>App class (see step 5).
21. At the top of the file, along with the other includes, add an include statement for the header file of your dialog class. It should have the same name as the class you created in step 13, with .h appended. If you are unsure of the name, you can look it up on the FileView tab under the Header Files folder.
22. At the very end of the file, after the #define you created in step 6, create a function that looks something like this:

EXPORTED int create_dialog(char* thestring)
{
    AFX_MANAGE_STATE(AfxGetStaticModuleState());
12 comments:
It's very nice.
Thank you for this. I'm new to testing. It's very informative, although some of the pages are cut off. How do I get in contact with you?
Thank you for modifying the pages. Looks great!
Hello sir, thanks for this beautiful tutorial; it really gives me a lot of help in improving my testing skills. Sir, I am a fresher, so I need objective-type questions; if you provide such questions it will be a great help for us.
AshokRathore
Pune
Kuldeep, I have this simple issue when I test a Borland application. The combo boxes seem to have a different MSW_ID each time I run the test, and hence WinRunner seems unable to choose items from the combo box.
Is there a workaround for this?
I defined the combo box as a virtual list box and it identifies the box, but it is still not able to choose the values from the list box!
Hello Kuldeep-
Thank you for your thorough explanation. I am impressed with your knowledge of testing methods, among other things. If you want to talk more about this and the possibility of working at a premier international internet company, please contact me at sarahkurien@google.com.
thank you,
Sarah
Your blog was very informative. I had a question and thought you might be of help. I am scripting test cases in QA Run and I want to add 2 variables, but '+' in QA Run is used for concatenation. Can you please help? Thanks
I recommend studying my LoadRunner visual tutorials:
http://motevich.blogspot.com/search/label/visual%20tutorials
To simplify understanding, I add screenshots and pictures to my posts, so I hope they will be useful for you.
In any case, feel free to contact me, if you have ideas for further LoadRunner topics, to be explained, or any LoadRunner questions.
Hello, good information on testing. Can anyone say how to select multiple objects in a WebList, as I have many values in the list? My coding is:
Browser("name").Page("name").WebList("html id:=name").Select "EMP501"
What I need is how to select all the employees likewise (EMP502, EMP503, ...); only one value is selected, so I need help.
Hi Kuldeep, very good information on testing.
Hi Kuldeep,
Very good information on testing. I need certain information with regard to WinRunner: when I am recording, WinRunner doesn't recognize a hidden-type field in the web page. How should I handle this? Please let me know.
Thanks,
Ramu
Hi Kuldeep,
Can I have your contact info? Actually, I am looking for advanced QTP training.
Thank you,