5.14 Bypassing User-Interface Restrictions


Figure 5-11. Removing the disabled attribute

Figure 5-12. Adding additional options

The option is now enabled in our web browser. Whatever choices would normally be
available if the control were enabled are now available to us.


If you see an option in a web form that is grayed-out or otherwise disabled, it is an

excellent candidate for this kind of testing. It represents an obvious place that the developers do not expect input. That doesn’t mean they won’t handle it properly, but it

means you should check.

Fortunately, the Kelley Blue Book application properly validates input and does not do

anything bad if you bypass its user-interface restrictions. This is a very common web

application flaw, however. When business logic or security features depend on the

consistency of the HTML in the browser, their assumptions can be subverted.
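The defense is to revalidate on the server rather than trust the rendered form. Below is a minimal sketch of such a server-side whitelist check; the field name and the set of allowed values are hypothetical, chosen only to illustrate the idea.

```python
# The server must enforce its own whitelist of allowed option values;
# the browser's "disabled" attribute is advisory only.
# ALLOWED_CONDITIONS and the field it guards are hypothetical.
ALLOWED_CONDITIONS = {"excellent", "good", "fair"}

def validate_condition(submitted: str) -> bool:
    """Reject any value the form did not legitimately offer,
    including options that were rendered but disabled."""
    return submitted in ALLOWED_CONDITIONS
```

With this check in place, a value from a re-enabled (or newly injected) option such as `"salvage"` is rejected even though the browser happily submitted it.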

In an application the authors assessed, changing existing medical records was prohibited. Yet changes to other data, such as address and billing information, were allowed.

The only security measure preventing changes to medical data was this technique of

disabling screen elements. Tweaking one small attribute enabled these forms, which

could then be submitted for changes just like a changed address.

This recipe can go further. We can actually add values. By right-clicking to edit the

HTML for the object, we can insert additional values as shown in Figure 5-12.

Then, we can select our new value and see how the system handles this malicious input,

as shown in Figure 5-13.
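From the server's perspective, an option value injected through the DOM editor arrives as an ordinary form parameter. The sketch below uses Python's standard library to show what the submitted request body looks like for a legitimate value versus an injected one; the parameter name is hypothetical.

```python
from urllib.parse import urlencode

# An option value injected via the browser's HTML editor is submitted
# exactly like a legitimate one. "vehicleClass" is a hypothetical
# parameter name used only for illustration.
legit = urlencode({"vehicleClass": "sedan"})
injected = urlencode({"vehicleClass": "<script>alert(1)</script>"})

print(legit)     # vehicleClass=sedan
print(injected)  # the same parameter, carrying a percent-encoded payload
```

Nothing in the request distinguishes the injected value from one the form actually offered, which is why server-side validation is the only reliable control.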


Figure 5-13. Modified, enabled content

This technique allows you both to bypass restrictions in the user interface and to
insert malicious strings into parts of the interface that a developer may have

overlooked. At first glance, most web developers and testers assume that the only valid

values a browser will ever send are the ones that the user was offered. Clearly, with

tools like these, that is not the case.



Automated Bulk Scanning

For many years it was believed that countless monkeys

working on countless typewriters would eventually reproduce the genius of Shakespeare. Now, thanks to the

World Wide Web, we know this to be false.

—Robert Wilensky

Automation is a tester’s friend. It gives you repeatability, consistency, and better coverage over the software. From a security point of view, you have so much to test that

you have to automate in order to have any confidence that you’re covering enough

interesting security test cases.

In Chapter 1, we talked about how vital it is to narrow our focus and to get a manageable

number of security tests. Even after narrowing our focus, we’ve got a small slice of

infinity to test. That’s where automation comes in. This chapter gives you some tools

that can help you automate by programmatically exploring your web application. There

are two kinds of tools we’ll discuss: those that systematically map a website and those

that try to automatically find security problems.

Mapping tools are typically called "spiders," and they come in a variety of shapes and
sizes. They fetch a starting page that you specify and parse it, looking for every link
on the page. They then follow each link, read the page it leads to, record all the links
from it, and so on. Their goal is to visit every web page in your application.
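The core of that loop is extracting and resolving every link on a fetched page. Here is a minimal sketch using only Python's standard library; the page content and `example.com` URLs are invented for illustration, and a real spider would go on to fetch each discovered link and repeat.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collects href targets from anchor tags, resolving each one
    against the page's own URL -- the heart of any spider."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.add(urljoin(self.base_url, value))

page = '<a href="/about">About</a> <a href="contact.html">Contact</a>'
collector = LinkCollector("http://www.example.com/index.html")
collector.feed(page)
# collector.links now holds both resolved absolute URLs:
# the site-root link /about and the relative link contact.html
```

A full spider wraps this in a queue: pop a URL, fetch it, collect its links, enqueue any not yet visited, and stop when the queue is empty.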

There are a few benefits to mapping your website with a tool like this. You get an

inventory of all the web pages and interfaces that are available in the application—or

at least those that the tool can find. By having an inventory of web pages and interfaces

(forms, parameters, etc.), you can organize your tests and make a respectable effort at

determining the extent of your test coverage.

Figure 6-1. Injecting known cookies in WebScarab

Security assessment software does the same sort of work as a spider, but it performs
some of the testing for you. Security assessment tools spider a website and record the
various web pages that they find. However, rather than just recording the pages they
find, a security tool will apply well-known tests for well-known vulnerabilities. Such
tools typically have a lot of canned tests for pretty obvious flaws and a handful of
subtle variations. By systematically crawling the website and then applying the
well-known tests,

these tools can sniff out common weaknesses quickly. Although you cannot use them
as the only security tool in your arsenal, such tools are still useful as part of your
overall testing strategy.
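One way to picture what such a scanner does is to take every URL the spider found and substitute canned payloads into each query parameter, producing a batch of test-case requests. The sketch below shows that generation step only (no requests are sent); the payload list and URL are illustrative, not a real scanner's ruleset.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# A couple of canned payloads a scanner might try against every
# parameter it discovers. Real tools carry far larger libraries.
PAYLOADS = ["<script>alert(1)</script>", "' OR '1'='1"]

def generate_test_urls(url):
    """For each query parameter, yield one URL per canned payload,
    substituting the payload for that parameter's value."""
    scheme, netloc, path, query, frag = urlsplit(url)
    params = parse_qsl(query)
    for i, (name, _) in enumerate(params):
        for payload in PAYLOADS:
            mutated = params[:i] + [(name, payload)] + params[i + 1:]
            yield urlunsplit((scheme, netloc, path, urlencode(mutated), frag))

cases = list(generate_test_urls("http://www.example.com/search?q=car&page=1"))
# 2 parameters x 2 payloads = 4 test-case URLs
```

A scanner would then fetch each generated URL and inspect the response for evidence that the payload was reflected or executed.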


6.1 Spidering a Website with WebScarab


“Spidering” a website is the process of systematically visiting every page and following

every link. You most commonly do this when you want to enumerate all the pages that

need to be tested for security issues. This is useful for functional testing, too,
since coverage is a useful metric there as well. By connecting a web "spider" to the

site, we will make an inventory of most of the site and be able to use it to generate test cases.


1. Launch WebScarab.

2. Configure your web browser to use WebScarab (see Recipe 3.4).

3. Configure WebScarab to “Get cookies from responses” and “Inject known cookies” into requests as shown in Figure 6-1.

a. Choose the Proxy pane from the top row of buttons.

b. Choose the Miscellaneous pane of Proxy settings.

c. Make sure the two check boxes are checked.

4. Browse to the start page where you want to begin spidering. If necessary, log in first.


Figure 6-2. WebScarab spidering options

5. In WebScarab’s Spider pane, find the request that corresponds to your starting

point. Figure 6-2 shows the Spider pane with the root of a web server highlighted.

It will be the starting point for our scan.

6. Check the “Fetch Recursively” box and enter the domain you want to scan, as

shown in Figure 6-2. In this example, we’re going to scan http://www.nova.org/. In

this scan, we are not logged in as an authorized user, but are instead browsing as

an anonymous public user.

7. With your starting point highlighted (http://www.nova.org:80/ in this example),

click the Fetch Tree button. WebScarab will fetch everything within the domain

you specify, and it will follow all links on all the pages it fetches.

8. Switch to the Messages pane in WebScarab’s interface to watch the Spider’s progress. Although there is no explicit and obvious indicator that it is finished, messages will stop scrolling in that pane when it is done. Depending on the depth and

complexity of your target site, this could take a few seconds to many minutes.

