Test Automation Report

Table of Contents

Introduction

Objective

Focuses

Strategy

Unit Test Bed

Unit Testing

White Box Testing - Basis Path Testing of Tree Repository Module

Black Box Testing - Equivalence Partitioning

System Testing

System-Level Testing - Equivalence Partitioning

System-Level Testing - Boundary Value Testing

System-Level Testing - Performance Testing

Problem Reporting

Tool Usage

JUnit

Custom Automation Tool

eValid

Element Tool

Experience and Lessons Learned

Overall Experience

JUnit

Custom Automation Tool

eValid

Element Tool

Comparison of Manual vs. Automated Testing

References

Introduction

This document describes the objective, focuses, and strategy used in our test automation. In addition, it covers the test bed structure and tools used for unit-level white box testing, as well as the test environment and supporting tools for the system-level testing. The results of our automated testing, and the planning it involved, are compared with those of manual testing. The actual code written for the automation and the raw results are included in accompanying documentation.

Objective

The primary objective of our test automation was to become familiar with the process of automating test cases and generating test data. Through this process, we were able to compare automated testing with manual testing. In determining how much automation to attempt, we took into account the time available as well as the availability of tools to assist in the automation. Since all of the testing had previously been performed manually, there was no need to develop new test cases; we were able to reuse the test cases from manual testing. We kept track of the number of hours spent developing the automated tests for the selected features, and in our conclusion we compare this with the number of hours spent testing the same features manually. These time studies provide two measures that can be used for setting future goals and predicting the percentage of coverage from test automation:

Time to automate per feature

Time to automate as compared to time to manually test

We anticipate, however, that subsequent projects would take less time because of the experience gained.

Focuses

Because none of our team had any experience in developing automation tools, determining a focus for our testing was difficult. We spent some time deciding what to automate and agreed to perform the following automation:

Perform test data generation

Automate test case execution (using both an existing tool and our own routine)

System-level testing using eValid

Problem report generation using an existing tool (Element Tool)

We then had to plan the test automation and determine how we would automate the test cases and generate test case data. The ability to control test data in the test environment is important in both manual and automated testing to ensure the validity of expected results. When tests are automated, it is important to be able to reset the data prior to test execution and to simulate feeding the data. We decided to prepare a separate set of data for each test case to ensure that the result of one test case did not impact the result of another. In addition, we decided to use random inputs that were generated using a specific set of rules for each required input. In generating the test case functions, we developed and enforced a set of coding standards and naming conventions to promote efficiency, so that the results of each test case could be easily evaluated. A sketch of such a generator follows.
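As a rough illustration of this rule-based generation (the class name and details below are our own sketch, not the project's actual code), a generator in Java might draw unique keys from the application's valid input range of -999 to 999:

    import java.util.HashSet;
    import java.util.Random;
    import java.util.Set;

    // Sketch of rule-based random test data generation. The value range
    // matches the application's documented input limits, and the
    // uniqueness rule reflects the "no duplicate keys" requirement.
    public class TestDataGenerator {
        private final Random random = new Random();
        private final Set<Integer> used = new HashSet<Integer>();

        // Returns a random key in the valid range that has not been
        // generated before. The tree holds at most 25 keys, far below
        // the 1999 possible values, so this loop terminates quickly.
        public int nextUniqueKey() {
            int key;
            do {
                key = random.nextInt(1999) - 999; // uniform in [-999, 999]
            } while (!used.add(key));
            return key;
        }

        // Resets the generator between test cases so each case starts
        // from a known, independent data state.
        public void reset() {
            used.clear();
        }
    }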

Strategy

Our strategy was to provide automated coverage for as many areas as possible where manual testing had already taken place. This allowed us to perform a direct comparison of the two.

Unit Test Bed

This section addresses the tools used to automate the white box testing for the repository portion of the program.

Unit Testing

Unit testing is done at the source or code level to catch language-specific programming errors such as bad syntax and logic errors, and to test particular functions or code modules. The unit test cases were designed to verify the program's correctness.

White Box Testing - Basis Path Testing of Tree Repository Module

In white box testing, the user interface is bypassed. Inputs and outputs are tested directly at the code level, and the results are compared against specifications. This form of testing ignores the function of the program under test and focuses only on its code and the structure of that code. The test cases that were generated cause each condition to be executed at least once.

Each function of the binary tree repository is executed independently; therefore, a program flow graph for each function was derived from the code. Using the program flow graph for each function in our tree repository module, we were able to determine all of the paths that needed to be tested and developed the corresponding test cases. In order to test the success of each path, we used JUnit to create a test suite that included all of our white box test cases. Any preconditions needed to exercise a path were created upon execution of each test case. The test cases were all executed independently, so the test data was reloaded with each test case.

When the complete suite is run, a random number generator is used upon initialization to set the input data. It sets each variable according to the rules associated with it (i.e., root, right child, left child, etc.). We also created a version of the JUnit test suite that uses the specific values called out in the test cases. Both sets of results are shown in the Test Automation Results. To make the internal tests visible in the program, assertion checking was used. If the path executed correctly, the test was set up to return true. A sketch of one such test case is shown below; the test cases included in the JUnit test suite follow in the subsequent tables.
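For illustration, here is a minimal JUnit 3 style sketch of one case (W-1.1). The small Tree class is a stand-in included only so the sketch compiles; the real repository module is richer than this.

    import junit.framework.TestCase;

    // Minimal stand-in for the repository's tree (assumption for the sketch).
    class Tree {
        private boolean hasKey;
        private int key;
        private Tree left, right;

        boolean insert(int k) {
            if (!hasKey) { key = k; hasKey = true; return true; }
            if (k == key) return false;          // duplicate keys are rejected
            if (k < key) {
                if (left == null) left = new Tree();
                return left.insert(k);
            }
            if (right == null) right = new Tree();
            return right.insert(k);
        }
    }

    // Sketch of one basis-path case (W-1.1); names are illustrative.
    public class InsertPathTest extends TestCase {
        private Tree tree;

        // Each case rebuilds its own precondition so no case depends on another.
        protected void setUp() {
            tree = new Tree();
            tree.insert(28); // root
            tree.insert(27); // left child
        }

        // Path 1.1: the key is less than the existing left child (27), which
        // forces one recursive descent before the node is attached.
        public void testInsertPath1_1() {
            assertTrue("key 25 should be inserted", tree.insert(25));
        }
    }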

Insert

Path ID   Path
1.1       1, 2, 3, 5, 1, 2, 3, 6, 7, 11, 12
1.2       1, 2, 3, 6, 7, 11, 12
1.3       1, 2, 4, 8, 10, 11, 12
1.4       1, 2, 4, 9, 1, 2, 4, 8, 10, 11, 12

Test Cases

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-1.1
Test Case Precondition: root (28) and left child (27) exist and the key to be inserted is less than 27
Item(s) to be tested: Repository Module: Insert-Path 1.1
Expected Output/Result: key 25 inserted: return true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-1.2
Test Case Precondition: root (28) exists and key to be inserted is less than 28
Item(s) to be tested: Repository Module: Insert-Path 1.2
Expected Output/Result: key 27 inserted: return true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-1.3
Test Case Precondition: root (28) exists and key to be inserted is greater than 28
Item(s) to be tested: Repository Module: Insert-Path 1.3
Expected Output/Result: key 50 inserted: return true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-1.4
Test Case Precondition: root (28) and right child (50) exist and the key to be inserted is greater than 50
Item(s) to be tested: Repository Module: Insert-Path 1.4
Expected Output/Result: key 55 inserted: return true

Delete

Path ID   Path
2.1       1, 2, 3, 4, 5, 7, 17, 18, 19, 21, 22, 24, 25, 35, 36
2.2       1, 2, 3, 4, 5, 7, 17, 18, 19, 21, 23, 24, 25, 35, 36
2.3       1, 2, 4, 5, 6, 7, 17, 26, 27, 29, 30, 32, 25, 35, 36
2.4       1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 35, 36
2.5       1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 14, 16, 35, 36
2.6       1, 2, 4, 5, 7, 33, 34, 1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 15, 16, 35, 36

Test Cases

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-2.1
Test Case Precondition: tree contains root (28), left child (27)
Item(s) to be tested: Repository Module: Delete-Path 2.1
Expected Output/Result: 28 deleted: return true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-2.2
Test Case Precondition: tree contains root (28), right child (50)
Item(s) to be tested: Repository Module: Delete-Path 2.2
Expected Output/Result: 28 deleted: return true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-2.3
Test Case Precondition: tree contains root (28), right child (50), right child (55)
Item(s) to be tested: Repository Module: Delete-Path 2.3
Expected Output/Result: 28 deleted: return true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-2.4
Test Case Precondition: tree contains root (28)
Item(s) to be tested: Repository Module: Delete-Path 2.4
Expected Output/Result: return null

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-2.5
Test Case Precondition: tree contains root (28), right child (50)
Item(s) to be tested: Repository Module: Delete-Path 2.5
Expected Output/Result: 50 deleted: return true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-2.6
Test Case Precondition: tree contains root (28), left child (27)
Item(s) to be tested: Repository Module: Delete-Path 2.6
Expected Output/Result: 27 deleted: return true

Search

Path ID   Path
3.1       1, 2, 3, 10
3.2       1, 2, 4, 5, 1, 2, 3, 10
3.3       1, 2, 4, 6, 10
3.4       1, 2, 7, 8, 10
3.5       1, 2, 7, 9, 1, 2, 3, 10

Test Cases

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-3.1
Test Case Precondition: tree contains root (28)
Item(s) to be tested: Repository Module: Search-Path 3.1
Expected Output/Result: 28 found: return true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-3.2
Test Case Precondition: tree contains root (28), left child (27)
Item(s) to be tested: Repository Module: Search-Path 3.2
Expected Output/Result: 27 found: return found true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-3.3
Test Case Precondition: tree contains root (28), left child (27), right child (50)
Item(s) to be tested: Repository Module: Search-Path 3.3
Expected Output/Result: 5 not found: return not found true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-3.4
Test Case Precondition: tree contains root (28), left child (27), right child (50)
Item(s) to be tested: Repository Module: Search-Path 3.4
Expected Output/Result: 60 not found: return not found true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-3.5
Test Case Precondition: tree contains root (28), right child (50)
Item(s) to be tested: Repository Module: Search-Path 3.5
Expected Output/Result: 50 found: return found true

List - Ascending

Path ID   Path
4.1       1, 2, 5
4.2       1, 2, 3, 1, 2, 5
4.3       1, 2, 3, 4, 5
4.4       1, 2, 3, 4, 1, 2, 5

Test Cases

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-4.1
Test Case Precondition: there are no nodes in the tree
Item(s) to be tested: Repository Module: List Ascending-Path 4.1
Input: execute ascend, passing null root
Expected Output/Result: return tree traversed true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-4.2
Test Case Precondition: tree contains root (28)
Item(s) to be tested: Repository Module: List Ascending-Path 4.2
Input: execute ascend, passing root value of 28
Expected Output/Result: return tree traversed true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-4.3
Test Case Precondition: tree contains root (28), left child (27), right child (50)
Item(s) to be tested: Repository Module: List Ascending-Path 4.3
Input: execute ascend, passing root value of 28
Expected Output/Result: return tree traversed true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-4.4
Test Case Precondition: tree contains root (28), right child (50)
Item(s) to be tested: Repository Module: List Ascending-Path 4.4
Input: execute ascend, passing root value of 28
Expected Output/Result: return tree traversed true

List - Descending

Path ID   Path
5.1       1, 2, 5
5.2       1, 2, 3, 1, 2, 5
5.3       1, 2, 3, 4, 5
5.4       1, 2, 3, 4, 1, 2, 5

Test Cases

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-5.1
Test Case Precondition: there are no nodes in the tree
Item(s) to be tested: Repository Module: List Descending-Path 5.1
Input: execute descend, passing null root
Expected Output/Result: return tree traversed true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-5.2
Test Case Precondition: tree contains root (28)
Item(s) to be tested: Repository Module: List Descending-Path 5.2
Input: execute descend, passing root value of 28
Expected Output/Result: return tree traversed true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-5.3
Test Case Precondition: tree contains root (28), left child (27), right child (50)
Item(s) to be tested: Repository Module: List Descending-Path 5.3
Input: execute descend, passing root value of 28
Expected Output/Result: return tree traversed true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-5.4
Test Case Precondition: tree contains root (28), left child (27)
Item(s) to be tested: Repository Module: List Descending-Path 5.4
Input: execute descend, passing root value of 28
Expected Output/Result: return tree traversed true

Read

Path ID   Path
6.1       1, 2, 6
6.2       1, 2, 3, 4, 5, 2, 6

Test Cases

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-6.1
Test Case Precondition: no items in the file
Item(s) to be tested: Repository Module: Read-Path 6.1
Input: empty file: test.txt
Expected Output/Result: return file read true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-6.2
Test Case Precondition: 2 items in the file
Item(s) to be tested: Repository Module: Read-Path 6.2
Input: file test.txt containing 2 items
Expected Output/Result: return file read true

Store

Path ID   Path
7.1       1, 2, 6
7.2       1, 2, 3, 4, 1, 2, 6
7.3       1, 2, 3, 4, 5, 1, 2, 6

Test Cases

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-7.1
Test Case Precondition: there are no nodes in the tree
Item(s) to be tested: Repository Module: Store-Path 7.1
Input: execute store, passing null root
Expected Output/Result: return tree traversed true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-7.2
Test Case Precondition: tree contains root (28), left child (27)
Item(s) to be tested: Repository Module: Store-Path 7.2
Input: execute store, passing root value of 28
Expected Output/Result: return tree traversed true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-7.3
Test Case Precondition: tree contains root (28), left child (27), right child (50)
Item(s) to be tested: Repository Module: Store-Path 7.3
Input: execute store, passing root value of 28
Expected Output/Result: return tree traversed true

Write

Path ID   Path
8.1       1, 2, 3, 4
8.2       1, 2, 3, 2, 3, 4

Test Cases

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-8.1
Test Case Precondition: create vector containing 1 integer, root (28)
Item(s) to be tested: Repository Module: Write-Path 8.1
Input: execute write
Expected Output/Result: return file written true

Test to be Performed By: Executed by JUnit
Test Type: White Box Basis Path
Test Case Number: W-8.2
Test Case Precondition: create vector containing 2 integers (28, 50)
Item(s) to be tested: Repository Module: Write-Path 8.2
Input: execute write
Expected Output/Result: return file written true

Black Box Testing - Equivalence Partitioning

The following table represents the automated equivalence classes, both valid and invalid, for the repository. There are many other equivalence classes at the system level that were not within the scope of this automation.

Input/Output Event: Input maximum number of allowed values
Valid Equivalence Classes: 1: 25 values
Invalid Equivalence Classes: 2: > 25 values

In addition to executing the official test cases, we added some other miscellaneous function testing to this tool. Since a number of values were loaded (25 integers), we decided to also test the functionality of the ascending and descending list functions as well as search, delete, and clear. A sketch of how the routine drives the capacity classes appears after the test cases below.

Test Cases

Test to be Performed By: Custom Tool
Test Type: Black Box Equivalence Partitioning
Test Case Number: B-1
Test Case Precondition: Tree is empty; input the maximum number of allowed values (25)
Item(s) to be tested: Unit Level Input maximum number of allowed values (Valid Case 1)
Input: Insert 80, 24, 91, 25, 88, 54, 97, 22, 45, 101, 82, 99, 12, 15, 87, 27, 65, 32, 94, 105, 72, 46, 34, 8, 77
Expected Output/Result: display message: The tree is full
Procedural Steps:
1. Enter unique valid integer and click insert button
2. Repeat 24 times
3. Halt entry and view displayed tree

Test to be Performed By: Custom Tool
Test Type: Black Box Equivalence Partitioning
Test Case Number: B-2
Test Case Precondition: Tree is empty; input greater than the maximum number of allowed values (30)
Item(s) to be tested: Unit Level Input maximum number of allowed values (Invalid Case 2)
Input: Insert 80, 24, 91, 25, 88, 54, 97, 22, 45, 101, 82, 99, 12, 15, 87, 27, 65, 32, 94, 105, 72, 46, 34, 8, 77, 20, 200, 1, 150, 7
Expected Output/Result: display message: The tree is full
Procedural Steps:
1. Enter unique valid integer and click insert button
2. Repeat 29 times
3. Halt entry and view displayed tree
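As a rough illustration of how a custom routine can drive these two classes, the sketch below inserts the B-2 value list and stops when the capacity rule fires. A java.util.TreeSet stands in for our repository so the sketch runs on its own; MAX_NODES comes from the developers' limit of 25 nodes.

    import java.util.TreeSet;

    // Sketch of the capacity cases B-1/B-2; the stand-in container and
    // class name are our own, not the project's actual tool.
    public class CapacityCheck {
        private static final int MAX_NODES = 25;

        public static void main(String[] args) {
            int[] keys = {80, 24, 91, 25, 88, 54, 97, 22, 45, 101, 82, 99, 12, 15,
                          87, 27, 65, 32, 94, 105, 72, 46, 34, 8, 77, 20, 200, 1, 150, 7};
            TreeSet<Integer> tree = new TreeSet<Integer>();
            for (int key : keys) {
                if (tree.size() == MAX_NODES) {
                    // Matches the expected result of B-1 and B-2.
                    System.out.println("The tree is full");
                    break;
                }
                // Every step is logged so the run can be evaluated afterwards.
                System.out.println("insert " + key + " -> " + tree.add(key));
            }
            System.out.println("nodes in tree: " + tree.size());
        }
    }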

System Testing

The goal of system testing is to detect faults that can only be exposed by testing the entire integrated system or some major part of it. System testing is mainly concerned with areas such as performance, security, validation, load/stress, and configuration sensitivity. We performed the system-level testing supported by eValid.

System-Level Testing - Equivalence Partitioning

The valid and invalid classes are shown below along with the corresponding valid and invalid test values. By using eValid to test the equivalence classes, we can verify that invalid equivalence class values do not cause any unexpected problems within the applet.

Input/Output Event: Input integers

Valid Equivalence Classes:
  3: Integers between -999 and 999

Invalid Equivalence Classes:
  4: Integers > 999
  5: Integers < -999
  6: Non-integers (characters)
  7: Non-integers (decimal values)
  8: Duplicate integers
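The logic behind these classes can be stated as a simple classifier. The classify() method below is our own stand-in for the applet's input validation, not its actual code; it shows how classes 3 through 7 partition the raw text-field input, while duplicates (class 8) are detected by the tree itself.

    // Sketch of input classification for equivalence classes 3-7.
    public class InputClassifier {
        public static String classify(String text) {
            int value;
            try {
                value = Integer.parseInt(text.trim());
            } catch (NumberFormatException e) {
                // Classes 6 and 7: characters and decimals both fail to parse.
                return "Input error non-integer value";
            }
            if (value < -999 || value > 999) {
                return "Integer is out of range"; // classes 4 and 5
            }
            return "valid";                       // class 3
        }

        public static void main(String[] args) {
            System.out.println(classify("25"));   // valid
            System.out.println(classify("a"));    // non-integer (character)
            System.out.println(classify("2.5"));  // non-integer (decimal)
            System.out.println(classify("1000")); // out of range
        }
    }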

Test Cases

Test to be Performed By: eValid
Test Type: Black Box Equivalence Partitioning
Test Case Number: B-3
Test Case Description: Try to insert integer between -999 and 999
Item(s) to be tested: System Level Input integers (Valid Case 3)
Input: 25
Expected Output/Result: 25 displayed in tree

Test to be Performed By: eValid
Test Type: Black Box Equivalence Partitioning
Test Case Number: B-4
Test Case Description: Try to insert integer > 999
Item(s) to be tested: System Level Input integers (Invalid Case 4)
Expected Output/Result: display message: Integer is out of range

Test to be Performed By: eValid
Test Type: Black Box Equivalence Partitioning
Test Case Number: B-5
Test Case Description: Try to insert integer < -999
Item(s) to be tested: System Level Input integers (Invalid Case 5)
Expected Output/Result: display message: Integer is out of range

Test to be Performed By: eValid
Test Type: Black Box Equivalence Partitioning
Test Case Number: B-6
Test Case Description: Try to insert non-integer value (character)
Item(s) to be tested: System Level Input integers (Invalid Case 6)
Input: a
Expected Output/Result: display message: Input error non-integer value

Test to be Performed By: eValid
Test Type: Black Box Equivalence Partitioning
Test Case Number: B-7
Test Case Description: Try to insert non-integer value (decimal)
Item(s) to be tested: System Level Input integers (Invalid Case 7)
Expected Output/Result: display message: Input error non-integer value

Test to be Performed By: eValid
Test Type: Black Box Equivalence Partitioning
Test Case Number: B-8
Test Case Description: Tree contains the value 5; try to reinsert 5
Item(s) to be tested: System Level Input integers (Invalid Case 8)
Input: 5
Expected Output/Result: display message: The integer has already been inserted

System-Level Testing - Boundary Value Testing

The acceptable range of values for this application was set by the development team. Due to the limitations of the GUI, the developers also limited the size of the input values to three-digit integers. The valid and invalid ranges are shown below along with the corresponding valid and invalid boundary test values. By using eValid to test the boundary values, we can verify that values outside of the allowed boundaries do not cause any unexpected problems within the applet. A unit-level sketch of the boundary checks appears after the list of test values.

Acceptable Range: -999 ≤ x ≤ 999

Invalid Range: -∞ < x < -999 and 999 < x < +∞

Valid Boundary Tests:

Boundary1: x = -999

Boundary2: x = 0

Boundary3: x = 999

Invalid Boundary Tests:

Boundary4: x = 1000

Boundary5: x = -1000

Boundary6: x = 999999

Boundary7: x = -999999
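Although these checks were driven through the GUI with eValid, the underlying rule is simple enough to express as a unit-level sketch in JUnit 3 style. The isInRange() helper below is an assumed stand-in for the applet's validation, not its actual code.

    import junit.framework.TestCase;

    // Sketch of the boundary checks B-9 through B-15.
    public class BoundaryValueTest extends TestCase {

        // Stand-in for the applet's range check (assumption).
        private boolean isInRange(int x) {
            return x >= -999 && x <= 999;
        }

        public void testValidBoundaries() {
            int[] valid = {-999, 0, 999};                  // Boundary1..Boundary3
            for (int i = 0; i < valid.length; i++) {
                assertTrue(valid[i] + " should be accepted", isInRange(valid[i]));
            }
        }

        public void testInvalidBoundaries() {
            int[] invalid = {1000, -1000, 999999, -999999}; // Boundary4..Boundary7
            for (int i = 0; i < invalid.length; i++) {
                assertFalse(invalid[i] + " should be rejected", isInRange(invalid[i]));
            }
        }
    }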

Test Cases

Test to be Performed By: eValid
Test Type: Black Box Boundary Value Analysis
Test Case Number: B-9
Test Case Description: Try to insert valid boundary integer value (-999)
Item(s) to be tested: System Level Valid Boundary
Input: -999
Expected Output/Result: -999 displayed in tree

Test to be Performed By: eValid
Test Type: Black Box Boundary Value Analysis
Test Case Number: B-10
Test Case Description: Try to insert valid mid-boundary integer value (0)
Item(s) to be tested: System Level Valid Boundary
Input: 0
Expected Output/Result: 0 displayed in tree

Test to be Performed By: eValid
Test Type: Black Box Boundary Value Analysis
Test Case Number: B-11
Test Case Description: Try to insert valid boundary integer value (999)
Item(s) to be tested: System Level Valid Boundary
Input: 999
Expected Output/Result: 999 displayed in tree

Test to be Performed By: eValid
Test Type: Black Box Boundary Value Analysis
Test Case Number: B-12
Test Case Description: Try to insert invalid boundary integer (1000)
Item(s) to be tested: System Level Invalid Boundary
Input: 1000
Expected Output/Result: display message: Integer is out of range

Test to be Performed By: eValid
Test Type: Black Box Boundary Value Analysis
Test Case Number: B-13
Test Case Description: Try to insert invalid boundary integer (-1000)
Item(s) to be tested: System Level Invalid Boundary
Input: -1000
Expected Output/Result: display message: Integer is out of range

Test to be Performed By: eValid
Test Type: Black Box Boundary Value Analysis
Test Case Number: B-14
Test Case Description: Try to insert distant invalid boundary integer (999999)
Item(s) to be tested: System Level Invalid Boundary
Input: 999999
Expected Output/Result: display message: Integer is out of range

Test to be Performed By: eValid
Test Type: Black Box Boundary Value Analysis
Test Case Number: B-15
Test Case Description: Try to insert distant invalid boundary integer (-999999)
Item(s) to be tested: System Level Invalid Boundary
Input: -999999
Expected Output/Result: display message: Integer is out of range

System-Level Testing - Performance Testing

This test was conducted to evaluate whether the system fulfills its specified performance requirements. It was done using the eValid test tool and was performed on every button and text field in the GUI. eValid helps confirm that there are no missing links and that all buttons function properly. It also displays the results, which allowed us to verify the functionality of our system. (All functional specifications are included in the Software Specification.) The items exercised were:

  • Load a file
  • Insert an integer
    • Text field
    • Insert button
  • Ascending list
    • Ascending button
  • Descending list
    • Descending button
  • Delete an integer in the tree and try to delete an integer not in the tree
    • Text field
    • Delete button
    • Text field
    • Delete button
  • Search an integer in the tree and try to search an integer not in the tree
    • Text field
    • Search button
    • Text field
    • Search button
  • Store
    • Store button
  • Clear
    • Clear button

Problem Reporting

The online freeware tool Element Tool was used for problem reporting. Since all of our bugs had been fixed by the time we did the automated testing, we entered the bugs found during manual testing.

Tool Usage

We were able to apply tools to all aspects of testing: white box, black box, and system-level.

JUnit

JUnit was used to perform white box testing on the Binary Search Tree repository code. We were able to create a test suite that exercises every path identified using the basis path testing approach, 31 test cases in total. We were also able to implement automated test data generation within this tool.
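A sketch of how such a suite can be assembled in JUnit 3 is shown below. InsertPathTest is the sketch shown earlier in this report; the commented-out class names are illustrative placeholders, not our project's actual class names.

    import junit.framework.Test;
    import junit.framework.TestSuite;

    // Sketch of bundling the white box cases so the whole set runs in one step.
    public class WhiteBoxSuite {
        public static Test suite() {
            TestSuite suite = new TestSuite("Tree repository basis-path tests");
            suite.addTestSuite(InsertPathTest.class);      // from the earlier sketch
            // suite.addTestSuite(DeletePathTest.class);   // paths 2.1-2.6
            // suite.addTestSuite(SearchPathTest.class);   // paths 3.1-3.5
            // ...and so on for list, read, store, and write
            return suite;
        }
    }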

Custom Automation Tool

We used our custom automation tool to perform black box equivalence partitioning for the Binary Search Tree repository code. In addition, we were able to do some unit-level functional testing to see how the separate pieces worked together. The tool produced detailed comments, both written to a file and displayed on the screen, to verify every step in the automation routine.
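The dual output might be handled by a small logger like the sketch below; the class, method, and file handling are our own assumptions about how such a tool could be structured, using only plain java.io.

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    // Sketch of dual logging: every step is echoed to the screen and
    // appended to a results file for later evaluation.
    public class StepLogger {
        private final PrintWriter file;

        public StepLogger(String path) throws IOException {
            file = new PrintWriter(new FileWriter(path, true)); // append mode
        }

        public void log(String step) {
            System.out.println(step); // visible while the run progresses
            file.println(step);       // kept for the test record
            file.flush();
        }

        public void close() {
            file.close();
        }
    }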

eValid

The eValid tool was used at the system level to test the functionality of the applet. It can confirm that there are no broken links and that each button works properly. The advantage of using eValid is that it can record a string of operations that can be repeated over and over using the playback feature. This is useful when changes are made to the code, since the recorded operations can simply be replayed.

Element Tool

Element Tool was used for bug tracking. All of the bugs from the manual testing phase were entered into Element Tool for tracking.

Experience and Lessons Learned

Overall Experience

Upon completion of our automation project, there were many lessons learned about the automation process, as well as various pros and cons of working within an automation paradigm. One of the biggest challenges of automation is integrating a specific application into an automation infrastructure. Because all software is different, third-party automation vendors can only make their best predictions about how an application might be structured in order to automate it successfully. This can cause problems in any automation project, because software does not always use standard objects. In fact, software engineers sometimes use rather clever methodologies in creating certain modules that a third-party vendor never thought of designing their automation tool around. This problem affected our own binary search tree application when we tried to automate it within eValid. eValid could not find a certain Java class, and so it would not allow any automation scripts to run. It took weeks of communication with the vendor before we were given a solution to the problem.

Other problems with automation can occur while writing the scripts themselves. If a script is not written properly or is not robust enough, it may work correctly, but often only temporarily. As a software product evolves over time, scripts that once worked will cease to work and will require either modification or abandonment. This creates a constant maintenance requirement for all automation scripts produced, past, present, and future.

Although automation can be difficult to set up and maintain, it has definite and strong benefits. During the manual testing phase of our project, it took many hours to determine requirements and to write each black box and white box test case. Furthermore, after the test cases were completed, it took even longer to go through each one and manually test each part of the product. Once the test cases were automated, this process became nearly effortless: what initially took hours to complete was done in a matter of seconds. On account of this, the tests could be repeated over and over again, which is a great benefit because, as the product changes over time, the entire test collection can be rerun as many times as management requires. Furthermore, test data generation could be made truly random, exercising the product more thoroughly. Overall, the difference in time is so significant that testing the legacy areas of a product becomes faster and easier.

JUnit

JUnit was our favorite tool, as it was easy to use and provided just the right features to perform white box testing in an organized, simple manner. Since white box testing can sometimes be difficult to perform, this tool is especially useful. It took only a few minutes to figure out how to apply it, and the coding of the test cases was very straightforward. It would also be a useful tool if changes were made to the code: the complete suite of white box tests can be run in one step. In addition, by implementing random number generation, all of the test cases could be run against completely different sets of values in a very short time. This provided even more assurance of the correctness of the code.

Custom Automation Tool

The custom-generated test tool was more of an experiment to see how difficult it would be to develop our own tool. Since the white box testing was already covered by JUnit, we decided to aim this tool at black box testing. With the understanding of tool design gained from JUnit, the custom tool was not too difficult to create. It turned out to be a great way not only to perform two of the black box test cases, but also to test how all of the functions in the tree repository work together.

eValid

The eValid tool turned out to be by far the most frustrating. Not only did we have to convert our application into an applet just to use the tool, we also spent two weeks emailing back and forth with eValid to get the tool to recognize our applet. Then, even after the tool recognized the applet, it was difficult to configure the system so that it would operate correctly. It would initially not recognize our mouse clicks or text entry, and we finally received word from eValid that we had to use the absolute mode for it to record. Because it took so long to work everything out with eValid, we ended up using another tool, WebCream, which converted the applet/application into JSP/HTML pages that eValid could recognize. Once we did this, we were finally able to create and play back a script. Automation tools are only useful if they make testing easier or provide better coverage; this tool really did neither. The documentation was not adequate for solving the problems on our own, and relying on the support team resulted in numerous delays in our testing. Now that we have the tool working, future testing would be simplified, as we could just play back our recorded series of mouse clicks to retest the whole set of system tests. Unfortunately, we are not sure it was worth all the trouble.

Element Tool

Our team did not feel that this tool added any value. Since we were only using the freeware version, it did not provide enough detail or flexibility to effectively manage our problem reports. It was, however, very easy to use, and would probably be more worthwhile if we were able to use one of the more robust versions.

Comparison of Manual vs. Automated Testing

Since this was our first experience with test automation, we weren't as efficient as we could have been. It took us a while to understand how to use JUnit, we had numerous problems with eValid that increased testing time, and we also needed to spend time figuring out where automation could be used. All of these tasks would have taken less time with previous experience. With the automation in place, however, subsequent testing will be much quicker and easier: we can make changes to the software and easily rerun all of the unit tests, or change the input data and rerun the test suite. Based on our best estimates, the following timetable provides a direct comparison. The times below do not reflect the time spent developing the test cases, which took much longer than the actual testing.

Item Tested                                 Time for Manual Testing   Time for Automated Testing
Basis-Path Testing - Repository             6 hours                   4 hours
Equivalence Partitioning - Repository       30 minutes                2 hours
Functional Testing - System-Level           1 hour                    no tool available
Equivalence Partitioning - System-Level     15 minutes                15 minutes*
Boundary Value - System-Level               15 minutes                15 minutes*
Performance Testing - System-Level          no tool available         15 minutes*
Element Tool                                1 hour                    30 minutes

* does not include the 20-30 hours spent working out the eValid problems

References

Pressman, Roger S. Software Engineering: A Practitioner's Approach. 5th ed. McGraw-Hill, 2001.

Kaner, C., Falk, J., and Nguyen, H. Q. Testing Computer Software. 2nd ed. Wiley Computer Publishing, 1999.


