7.15 Building a Multistage Test Case

echo -n "step [${step} "

${CURL} -s -L -A "${UA}" -c "${JAR}" -b "${JAR}" -e ";auto" \

-o "step-${step}.html" http://www.ebay.com/

if [ $? = 0 ]; then

step=$step+1

echo -n "OK] [${step} "

else

echo "FAIL]"

exit 1

fi

# Next, click the sign-in link to bring up the sign-in page.
# Observation tells us that this non-SSL link usually results in a 300-series
# redirection. We'll use -L to follow the redirection and fetch the other
# page, too.
${CURL} -s -L -A "${UA}" -c "${JAR}" -b "${JAR}" -e ";auto" \
    -o "step-${step}.html" \
    'http://signin.ebay.com/ws/eBayISAPI.dll?SignIn'
if [ $? = 0 ]; then
    step=$((step + 1))
    echo -n "OK] [${step} "
else
    echo "FAIL]"
    exit 1
fi

# Now log in. This is a POST. Observation tells us that this probably
# results in a 200-series "OK" page when successful. We should probably
# figure out what happens on failure and handle that case, huh?
${CURL} -s -L -A "${UA}" -c "${JAR}" -b "${JAR}" -e ";auto" \
    -d MfcISAPICommand=SignInWelcome \
    -d siteid=0 -d co_partnerId=2 -d UsingSSL=1 \
    -d ru= -d pp= -d pa1= -d pa2= -d pa3= \
    -d i1=-1 -d pageType=-1 -d rtmData= \
    -d userid="${USER}" \
    -d pass="${PASS}" \
    -o "step-${step}.html" \
    "https://signin.ebay.com/ws/eBayISAPI.dll?co_partnerid=2&siteid=0&UsingSSL=1"
if [ $? = 0 ]; then
    step=$((step + 1))
    echo -n "OK] [${step} "
else
    echo "FAIL]"
    exit 1
fi

# Prove we're logged in by fetching the "My eBay" page
${CURL} -s -L -A "${UA}" -c "${JAR}" -b "${JAR}" -e ";auto" \
    -o "step-${step}.html" \
    'http://my.ebay.com/ws/eBayISAPI.dll?MyEbay'
if [ $? = 0 ]; then
    echo "OK]"
else
    echo "FAIL]"
    exit 1
fi

# Check the output of the most recent step. Our userid will appear in
# the HTML if we are logged in. It will not if we aren't.
count=$(grep -c "${USER}" "step-${step}.html")
if [ $count -gt 0 ]
then
    echo "PASS: ${USER} appears $count times in step-${step}.html"
else
    echo "FAIL: ${USER} does not appear in step-${step}.html"
fi



Discussion

Note that Example 7-13 is tailored very carefully to work with eBay. It is not really a general-purpose solution, but rather it shows you the steps necessary to perform a single test case on a real website. You can see how, using this basic script as a framework, it is relatively easy to build variations on it that surf through different paths of the application.
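One way to build such variations, sketched below and not part of the original example, is to wrap the repeated fetch-and-check boilerplate in a small shell function. The name fetch_step is ours, and the sketch assumes every step is a plain GET (the login POST would still need its own command):

fetch_step () {
    # Fetch one URL into step-N.html, bump the counter on success, abort on failure.
    # Assumes CURL, UA, JAR, and step are already defined as in Example 7-13.
    url="$1"
    ${CURL} -s -L -A "${UA}" -c "${JAR}" -b "${JAR}" -e ";auto" \
        -o "step-${step}.html" "$url"
    if [ $? = 0 ]; then
        step=$((step + 1))
        echo -n "OK] [${step} "
    else
        echo "FAIL]"
        exit 1
    fi
}

# A different path through the application then becomes a different list of URLs:
echo -n "step [${step} "
fetch_step 'http://www.ebay.com/'
fetch_step 'http://signin.ebay.com/ws/eBayISAPI.dll?SignIn'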



Notes on execution

This script is pretty simple. It fetches four pages and then quits. Example 7-14 shows output from a successful execution of the script.

Example 7-14. Output from the solution to Recipe 7.15
step [1 OK] [2 OK] [3 OK] [4 OK]
PASS: eBay-Test-User appears 5 times in step-4.html



You should almost always see "OK" in the output, because cURL will only exit with a failure when something major is wrong. For example, if you type the URL incorrectly, cURL will fail to find the server and you will see "FAIL" instead of "OK." Even if you visit a URL that does not exist (e.g., you get a 404 "not found" error or a 302 "moved" response), cURL still exits 0 (indicating success).

To be more sophisticated about checking success or failure, you could add the -i flag to the cURL options in the script and then parse the very first line of the file (which will contain a string like HTTP/1.1 404 Not Found). If you get the code you expect, continue; otherwise, fail.
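As a sketch of that idea (not part of the original script), the fragment below checks the final HTTP status of one step. Rather than parsing the saved headers, it uses cURL's -w '%{http_code}' write-out variable, which reports the status of the last response even when -L follows redirects; the expected code of 200 is an assumption you would adjust per step.

# Hypothetical per-step check: capture the final HTTP status code with -w.
code=$(${CURL} -s -L -A "${UA}" -c "${JAR}" -b "${JAR}" -e ";auto" \
    -w '%{http_code}' -o "step-${step}.html" \
    'http://my.ebay.com/ws/eBayISAPI.dll?MyEbay')
if [ "$code" = "200" ]; then
    echo -n "OK] "
else
    echo "FAIL] (HTTP ${code})"
    exit 1
fi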



The pages that are fetched

The first page is just eBay's main page. In a sense, fetching this page is not necessary at all. We could go straight to the login page and log in without first fetching the main eBay page. We are trying to simulate a regression test, however. A good regression test includes all the steps in the use case, not just those known to invoke interesting business logic. The second page visited is the link that you would click on that says "sign in" on eBay. Note that it is an HTTP link (i.e., non-SSL), but it immediately redirects your web browser to a secure URL. Again, we could jump straight to that secure URL, but that would not be following the use case. I also did not take the time to learn exactly which cookies are important and which stage of the process sets them. Thus, I don't want to shortcut the use case, only to discover that my test fails because the test is bad.

The third page visited is where something interesting happens. We visit the signin.ebay.com server and send our user ID and password. At this point the server updates our cookies with good cookies that correspond to a signed-in user. Any further invocations of cURL with that cookie jar will be authorized as our user.

The final page visited is our proof that we are logged in. If we are logged in and we visit the "My eBay" page, then our user ID will appear in the HTML somewhere. If we have not successfully logged in, we will receive generic HTML that directs us to log in.



How to build this script

Building a script like this requires a lot of patience and detailed observation of your web browser's interaction with the website.

1. Start Firefox and TamperData.
   We are not going to use TamperData's ability to tamper with requests, but rather its ability to capture all the data exchanged between the browser and server, and then store that data to an easily parsed XML file. Don't click the Start Tamper button; TamperData passively gathers the information we need without our doing anything else.

2. Visit the website and perform the actions we want to model.
   At this point TamperData will record many, many more URLs than you want. That's normal. Try to do as little as possible other than the test case you want to model. It is also helpful to have no other tabs or windows open while you are capturing. With so many websites using AJAX and other JavaScript techniques, just inadvertently mousing over elements can produce dozens of spurious HTTP requests. Every resource your browser requests (advertisements, banners, images, CSS files, icons, etc.) will appear in the list. Even though our test script only makes 4 requests to eBay, TamperData captured 167 individual requests when gathering this data. I routinely use Firefox's Adblock Plus extension, which blocks virtually all advertisements. Without that extension, my browser would have requested many more resources while recording the 4 I needed.

3. Export all the requests to an XML file.
   Figure 7-3 shows the TamperData "Ongoing Requests" window. If you right-click on an entry, you can choose "Export XML - All." This will produce an XML file with each request clearly encapsulated. If you're comfortable with XML, you can probably write an XSLT parser that will extract the data you want in a format suitable for our purposes. I'm not a whiz with XML, so I use good old grep and Perl.

   Figure 7-3. Exporting from TamperData

4. Find interesting requests and extract them.
   This is a bit involved, but it basically boils down to excluding the requests you are not interested in and learning more about the requests you are interested in. You can do this in a couple of ways. You can either write grep patterns that exclude all the things you're not interested in (e.g., .gif, .jpg, .css) or you can write a grep pattern that finds the things you are interested in. In my case, I know a little bit about eBay. The requests that I'm most interested in probably have eBayISAPI.dll somewhere in them. If I grep for that pattern, I happen to get what I'm looking for. Of course, I have to include the request for http://www.ebay.com/, too.

5. Turn interesting requests into curl commands.
   For GET requests, this is pretty straightforward. You simply copy the URI from the tdRequest element in the XML file.
   For POST requests, you have to dig into the tdRequest XML structure and find all the tdPostElement elements. Each tdPostElement becomes a -d argument. Sometimes, as in the eBay case, you find empty elements. They should still be present, if only to maintain the accuracy of the test. A rough sketch of this extraction is shown after the list.
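The following Perl fragment is a rough sketch of that extraction, not part of the original recipe. It assumes the export was saved as export.xml and that each POST field appears on one line as <tdPostElement name="...">value</tdPostElement>, possibly with the value wrapped in a CDATA section; verify those element and attribute names against your own export before relying on it.

#!/usr/bin/perl
# Rough sketch: list the POST fields from a TamperData export as curl -d arguments.
# Assumptions (verify against your own export): the file is named export.xml and
# each field looks like <tdPostElement name="...">value</tdPostElement>.
use strict;
use warnings;

my $file = shift || 'export.xml';
open my $xml, '<', $file or die "cannot open $file: $!\n";

my @args;
while ( my $line = <$xml> ) {
    while ( $line =~ m/<tdPostElement\s+name="([^"]*)"[^>]*>(?:<!\[CDATA\[)?(.*?)(?:\]\]>)?<\/tdPostElement>/g ) {
        push @args, "-d '$1=$2'";
    }
}
close $xml;

# Print the arguments one per line, ready to paste into a curl command.
print join( " \\\n    ", @args ), "\n";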




7.16 Conclusion

The single most important feature of cURL is its ability to focus specifically on very small bits of application logic. Executing a test case in a web browser leaves many variables uncontrolled: JavaScript, implicit behavior such as fetching remote images, and browser-specific idiosyncrasies. There is also the fundamental limitation of using a browser that plays by the rules. Using cURL, we can ignore JavaScript and browser-based behaviors and focus exclusively on what the server-side logic does or does not do.

There are notable differences, summarized in Table 7-2, between what cURL does when visiting a page and what a browser does. The overall theme is minimalism. The only thing cURL does is fetch the page. It does not consider any of the content.

Table 7-2. Summary of differences between cURL and web browsers

What browsers do: Fetch images and cascading style sheets (CSS) referenced in the web page, and favorite icons (favicon.ico).
What cURL does: Fetch exactly the one page that you tell it. It can follow redirects, but only if they are HTTP redirects (not JavaScript document.location() redirects).
Impact on test accuracy: Frequently the differences mean nothing when testing server-side logic. If important calculations occur in JavaScript in a web browser, they will not take place during a cURL simulation.

What browsers do: Fetch remote scripting resources and execute client-side scripts.
What cURL does: Fetch HTML, but it cannot execute any JavaScript, VBScript, or other client-side instructions in it.
Impact on test accuracy: Sites that perform significant logic in the browser (e.g., AJAX) will look and work very different from cURL's perspective. cURL may not be a good choice for simulating requests to such sites.

What browsers do: Allow clicks on graphical image maps.
What cURL does: Transmit x/y coordinates as parameters.
Impact on test accuracy: If your website has graphical image maps (e.g., a map of a country), you will have to determine x/y coordinate pairs to send as parameters to simulate clicking on the image.

In addition, because cURL fetches only a single page, a series of cURL requests imposes significantly less load on a web server than a browser session.



The conclusion, then, is that cURL is very good at highly specialized jobs and tasks that need automation. You don't use it for user-acceptance testing (UAT), but you can get a lot of mileage out of it on tedious, repetitive tasks.






CHAPTER 8

Automating with LibWWWPerl

I have not failed. I've just found 10,000 ways that won't work.
—Thomas Alva Edison



Anyone who has spent a little time with Perl knows that it does a few things really, really well: it handles strings and pattern matching, it allows for rapid development of scripts, it is portable across platforms, and it can make use of a wealth of third-party modules that save you a lot of time. When you bring Perl to bear on your scripting, you leverage not only your own programming, but also the programming of thousands of others. Perl is also supported in major commercial testing systems, such as HP's Quality Center.

To be fair, Perl has some disadvantages, too, which we will mention up front. Perl has been accused of being a "write-only" language. That is, writing Perl that does what you need is one thing; writing working Perl code that you can read six months later is something else. Perl's motto is "there's more than one way to do it." That's great most of the time, but it also means that there are a lot of variations in the modules you might use. One programmer thinks that procedural functions are the best way to express his solutions, while another thinks that an object-oriented approach is the best way for his module. You'll find that you need to understand and live with the many paradigms Perl supports if you want to leverage other people's work.

A Perl guru looking at the examples in this chapter may find them unnecessarily verbose. We're trying to make them readable and understandable for you. Resist the temptation of Perl machismo: don't worry that you wrote in five lines what can be written in one. Most of the time that doesn't matter. What matters most is whether you or your teammates can read and understand it six months or a year from now.

In this chapter, we are going to focus on specific recipes that solve specific problems. We assume you understand the basics of Perl syntax and usage. If you aren't familiar with Perl, we recommend any of the O'Reilly books on Perl. They range from the basics (Learning Perl) to intermediate (Programming Perl) to advanced (Mastering Perl). There are too many books on special topics in Perl to name here. Suffice it to say that there are ample books, both general and specialized, that can lay the foundation for what we're talking about here. Like Chapter 6, this chapter gradually builds from basic tasks to complicated ones. It culminates in a somewhat difficult task: programmatically editing a page on Wikipedia.

We also talked about how to install Perl and Perl modules in Chapter 2. Before you embark on any of the recipes here, make sure you have a basic installation of Perl. For those that require specific modules, we'll highlight the requirements in each recipe.

We will start with the basics of fetching web pages and add in variations like capturing cookies, parsing pages, and generating malicious inputs. The discussion section in many recipes will show you how you can programmatically generate malicious inputs or programmatically analyze the response from your application to determine the next security-oriented test to send.

Note that we'll be doing things in this chapter "the hard way." That is, we will be building up functionality one feature at a time. There are many features that have been optimized or bundled into shortcuts in the LibWWWPerl (LWP) library. Chances are, however, that you will need some pretty fine-grained control over the way your scripts interact with your web application. Thus, the recipes in this chapter show you how to control each detail of the process. Be sure to look through the documentation for LWP (run perldoc lwpcook) to learn about shortcuts that you can use (e.g., getstore() and getprint()) when you have simple needs like fetching a page and storing it to a file.



8.1 Writing a Basic Perl Script to Fetch a Page

Problem

For basic testing, or as a basis for something larger, you want a Perl script that fetches a page from an application and stores the response in a Perl data structure you can use. This is a basic GET request.

Solution

This is what LibWWWPerl (LWP) is all about. You need to install the following Perl modules (see Chapter 2 or CPAN; one way to install them is sketched just after this list):

• LWP
• HTTP::Request
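As a rough sketch (assuming you use the stock CPAN client; your platform may prefer its package manager instead), the modules can be installed like this:

# Install the modules from CPAN (run as a user allowed to install modules)
perl -MCPAN -e 'install "LWP"'
perl -MCPAN -e 'install "HTTP::Request"'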

Example 8-1 shows a basic script that issues a request for a page and checks the return value. If the return code is successful, it prints the response contents to standard output. If the return code indicates failure, just the return code and error message are printed.



Example 8-1. Basic Perl script to fetch a page
#!/usr/bin/perl
use LWP::UserAgent;
use HTTP::Request::Common qw(GET);

$UA   = LWP::UserAgent->new();
$req  = HTTP::Request->new( GET => "http://www.example.com/" );
$resp = $UA->request($req);

# check for error. Print page if it's OK
if ( ( $resp->code() >= 200 ) && ( $resp->code() < 400 ) ) {
    print $resp->decoded_content;
} else {
    print "Error: " . $resp->status_line . "\n";
}



Discussion

This script is a fundamental building block for all kinds of basic web requests. Throughout the rest of this chapter we will make more complex requests, but they will all begin much the same way Example 8-1 begins.

There are many kinds of requests you might make. Example 8-1 shows a GET request. POST is the other common request type. Additional request types are defined in HTTP and are supported by LWP. They include PUT, DELETE, OPTIONS, and PROPFIND, among others. One interesting set of security tests would be to determine your application's response to some of these less frequently used methods. You may be surprised to find that, instead of a simple "405 Method Not Allowed" response, you receive a response that a hacker can use, like an error 500 with debugging information.



Other Useful LWP Scripts

It's worth noting here that, in true Perl style, "there's more than one way to do it," and in fact Example 8-1 is a bit redundant. There are pre-made scripts that come with the LWP library that do basic jobs like this. When you're building a test case, you might be more interested in using one of these pre-built scripts unless you need some special behavior. So that you're aware of them, here's a brief list. Each has its own man page or online documentation for more detailed information.

lwp-download
Use lwp-download to simply fetch something using a GET request and store it to a file. Similar to curl (see Chapter 7), it takes the URL from the command line. Unlike curl (or lwp-request), it has no ability to do anything sophisticated like cookies, authentication, or following redirects.

lwp-mirror
If you want to download a local copy of a file, but only if you don't have the latest version, lwp-mirror can do that. That's really its purpose: to be like lwp-download, but to check the server for the modification date of the file and only download it if the file has been modified.
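For example, typical invocations look roughly like this (the URL and filenames are placeholders; check each tool's man page for the options your LWP version supports):

# Fetch a file with a GET request; it is saved under its own name (logo.png)
lwp-download http://www.example.com/logo.png

# Keep local.html in sync with the server copy, downloading only when it changes
lwp-mirror http://www.example.com/index.html local.html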



