3.3 Observing Live Request Headers with Firebug


Figure 3-4. Firebug inspecting request headers



[Figure: a web browser sends a Request across the Internet to a web server, and the web server returns a Response.]

Figure 3-5. Basic web request model



Discussion

Threat modeling and trust boundary diagrams are a great exercise for assessing the security of an application, but they are a subject worthy of a book unto themselves. The first steps, however, are to understand the application's dependencies and how its portions fit together. This basic understanding provides quite a bit of security awareness without the effort of a full assessment. For our purposes, we're looking at something as simple as what is shown in Figure 3-5: a browser makes a request, the server thinks about it, and then responds.
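The round trip in Figure 3-5 can be reproduced in miniature with Python's standard library. This is only a sketch: a throwaway local server stands in for the real web server, so no real site is contacted.

```python
import http.client
import http.server
import threading

# Stand-in for the real web server: serve the current directory on a
# random free port on the loopback interface.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The browser's half of Figure 3-5: send a request...
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")

# ...and receive the server's response.
resp = conn.getresponse()
print(resp.status, resp.reason)
server.shutdown()
```

Everything between those two calls, headers included, is exactly what Firebug lets you watch in a real browser.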

In fact, you’ll notice that your browser makes many requests on your behalf, even

though you requested only one page. These additional requests retrieve components

of the page such as graphics or style sheets. You may even see some variation just visiting

the same page twice. If your browser has already cached some elements (graphics, style




sheets, etc.), it won’t request them again. On the other hand, by clearing the browser

cache and observing the request headers, you can observe every item on which this

page depends.
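Those dependent requests can also be enumerated statically. As a sketch (the HTML snippet below is made up), a small parser built on Python's standard library pulls out the extra URLs a browser would fetch:

```python
from html.parser import HTMLParser

class DependencyFinder(HTMLParser):
    """Collect the URLs a browser would request while rendering a page."""
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and "src" in attrs:
            self.resources.append(attrs["src"])
        elif tag == "link" and "href" in attrs:  # style sheets, icons
            self.resources.append(attrs["href"])

finder = DependencyFinder()
finder.feed('<html><link href="site.css">'
            '<img src="http://cdn.example.net/logo.png"></html>')
print(finder.resources)  # ['site.css', 'http://cdn.example.net/logo.png']
```

Note that the second resource here points at a different host, which is precisely the kind of external dependency discussed next.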

You may notice the website requesting images from locations other than its own. This

is perfectly valid behavior, but does reveal an external dependency. This is exactly the

sort of trust issue that a test like this can reveal. What would happen if the origin site

changed the image? Even more dangerous is fetching JavaScript from an external site,

which we’ll talk about in Chapter 12. If you’re retrieving confidential data, can someone

else do the same? Often, relying broadly on external resources like this is a warning

sign—it may not appear to be a security threat, but it hands control of your content

over to a third party. Are they trustworthy?

The request URL also includes any information in the query string, a common way to

pass parameters along to the web server. On the server side, they’re typically referred

to as GET parameters. These are perhaps the easiest items to tamper with, as typically
you can change any query string parameter right in the address bar of your browser.

Relying on the accuracy of the query string can be a security mistake, particularly when

values are easily predictable.

Relying on the query string

What happens if a user increments the following ID variable? Can she

see documents that might not be intended for her? Could she edit them?

http://example.com?docID=19231&permissions=readonly
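A test for this is easy to script. The following sketch reuses the hypothetical URL above and only builds the URLs; nothing is fetched here:

```python
from urllib.parse import urlencode

def doc_url(doc_id, permissions="readonly"):
    # Rebuild the query string from the example URL above.
    return "http://example.com?" + urlencode(
        {"docID": doc_id, "permissions": permissions})

# Each URL could then be fetched (e.g., with urllib.request.urlopen)
# while logged in as a low-privilege user, checking which documents
# the server actually returns.
for doc_id in range(19231, 19234):
    print(doc_url(doc_id))
```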



Dissecting the request headers, you will find that the following are the most common:

Host
User-Agent
Accept
Connection
Keep-Alive

Sometimes you’ll see Referer or Cookie, as well. The request header

specifications can be found at http://www.w3.org/Protocols/rfc2616/rfc2616-sec5.html.



User-Agent is a particularly interesting request header, as it is used to identify which

browser you’re using. In this case, yours will probably include the words Mozilla and

Firefox somewhere in the string. Different browsers will have different User-Agent

strings. Ostensibly, this is so that a server may automatically customize a web page to

display properly or use specially configured JavaScript. But this request header, like

most, is easily spoofed. If you change it, you can browse the web as a Google Search Spider would see it, which is useful for search engine optimization. Or perhaps you're testing a

web application intended to be compatible with mobile phone browsers—you could

find out what User-Agent these browsers send and test your application via a desktop

computer rather than a tiny mobile phone. This could save on thumb cramps, at least.

We discuss malicious applications of this spoofing in Recipe 7.8.
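Spoofing the User-Agent takes one line with Python's standard library. This is a sketch; the Googlebot string below is illustrative and may not match what Google's crawler actually sends today.

```python
import urllib.request

# Illustrative spider User-Agent string (an assumption, not authoritative).
GOOGLEBOT = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
             "+http://www.google.com/bot.html)")

# Build a request that claims to come from a search spider; passing it
# to urllib.request.urlopen() would transmit the spoofed header.
req = urllib.request.Request("http://example.com/",
                             headers={"User-Agent": GOOGLEBOT})
print(req.get_header("User-agent"))  # urllib normalizes the header name
```

The same trick with a mobile browser's User-Agent string lets you exercise a phone-oriented application from a desktop machine.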

The Cookie headers may potentially reveal some very interesting insights as well. See

Chapter 4 to better identify basic encodings.



Proxying

Web proxies are a valuable tool for security testing. WebScarab, used in the next recipe,

is a web proxy. If you’re new to the concept of web proxies, read on.

Proxies were originally conceived (and are still frequently used) to aggregate web traffic

through a single inbound or outbound server. That server then performs some kind of

processing on the web traffic before passing the browser’s request to the ultimate web

server. Web browsers (e.g., Internet Explorer and Firefox) explicitly understand the

idea of using a proxy. That is, they have a configuration option for it and allow you to

configure the browser to route all its traffic through the proxy. The browser actually

connects to the proxy and effectively says “Mr. Proxy, please make a request to http://

www.example.com/ for me and give me the results.”

Because they are in between browsers and the real web server, proxies can intercept

messages and either stop them or alter them. For instance, many workplaces block

“inappropriate” web traffic via a proxy. Other proxies redirect traffic to ensure optimal

usage among many servers. They can be used maliciously for intermediary attacks,

where an attacker might read (or change) confidential email and messages. Figure 3-6 shows a generic proxy architecture, with the browser directing its requests

through the proxy, and the proxy making the requests to the web server.
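In code, pointing a client at a proxy is a one-time configuration. A sketch with Python's standard library, assuming an intercepting proxy such as WebScarab listening on localhost port 8008 (the port used later in this chapter):

```python
import urllib.request

# Route all HTTP and HTTPS traffic through a local intercepting proxy.
proxy_handler = urllib.request.ProxyHandler({
    "http": "http://localhost:8008",
    "https": "http://localhost:8008",
})
opener = urllib.request.build_opener(proxy_handler)
urllib.request.install_opener(opener)

# From here on, urllib.request.urlopen(...) hands each request to the
# proxy, which can record or modify it before the real server sees it.
```

Configuring Firefox's manual proxy settings, as in the next recipe, accomplishes the same thing for the browser.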



[Figure: the web browser sends its Request to WebScarab, which stores it in a database of requests and forwards it across the Internet to the web server; the Response returns along the same path.]

Figure 3-6. General proxy concept



As testing tools, particularly security testing tools, they allow us to deeply inspect and

have complete control over the messages flowing between our web browser and the

web application. You will see them used in many recipes in this book.

3.3 Observing Live Request Headers with Firebug | 39



WebScarab is one such security-focused web proxy. WebScarab differs slightly from

the typical web proxy in two distinct ways. First of all, WebScarab is typically running

on the same computer as the web client, whereas normal proxies are set up as part of

the network environment. Secondly, WebScarab is built to reveal, store, and manipulate security-related aspects of HTTP requests and responses.



3.4 Observing Live Post Data with WebScarab

Problem

POST requests are the most common method for submitting large or complex forms.

Unlike GET values, we can’t just look at the URL at the top of our web browser window

to see all the parameters that are passed. Parameters are passed over the connection

from our browser to the server. We will have to use a tool to observe the input instead.

This test can help you identify inputs, including hidden fields and values that are calculated by JavaScript that runs in the web browser. Knowing the various input types

(such as integers, URLs, HTML formatted text) allows you to construct appropriate

security test cases or abuse cases.



Solution

POST data can be elusive, in that many sites will redirect you to another page after receiving the data itself. This redirect can be helpful, as it prevents you from submitting the same form twice when you press the Back button. However, it also makes it difficult to grab the POST data directly in Firebug, so instead we'll try another tool: WebScarab.

WebScarab requires you to adjust your Firefox settings, as seen in Figure 3-7. Once it

has been configured to intercept data, it can be used for any recipe in this chapter. It’s

that powerful, and we highly recommend it.

In order to configure Firefox to use WebScarab, follow these steps:

1. Launch WebScarab.

2. Select Tools → Options from the menu (Windows, Linux) or press ⌘-, (Cmd-comma) to activate Firefox preferences on Mac OS. The Firefox preferences menus

are shown in Figure 3-7.

3. Select the Advanced tab, and then the Network tab inside that.

4. From there, click Settings, and set up a manual proxy to localhost, with port 8008.

5. Apply this proxy server to all protocols.



40 | Chapter 3: Basic Observation



Figure 3-7. Setting up Firefox to use the WebScarab proxy



Then, to use WebScarab to observe POST data:

1. Browse to a page that uses a POST form. You can recognize such a form by viewing

its source (see Recipe 3.1) and looking for specific HTML. If you find a <form> tag, look for its method attribute. If it says method="post", you have found a form

that uses POST data.

2. Enter some sample information into the form and submit it.

3. Switch to WebScarab, and you should see several entries revealing your last few

page requests.
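Step 1, recognizing a POST form, can itself be automated. A sketch using Python's standard library; the HTML snippet and the /login action are made up:

```python
from html.parser import HTMLParser

class PostFormFinder(HTMLParser):
    """Record the action of every form submitted via POST."""
    def __init__(self):
        super().__init__()
        self.post_actions = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Forms default to GET when no method is given.
        if tag == "form" and attrs.get("method", "get").lower() == "post":
            self.post_actions.append(attrs.get("action", ""))

finder = PostFormFinder()
finder.feed('<form method="POST" action="/login">'
            '<input name="user"></form>')
print(finder.post_actions)  # ['/login']
```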

WebScarab picked up what you can see in Figure 3-8.

Double-click any request where the method is set to POST. You’ll be presented with

all the details for this page request. Underneath the request headers, you’ll find a section

containing all the POST variables and their values.

These POST variables follow the same format as request headers, just name-value pairs, but they carry the form values submitted by the browser rather than protocol metadata. For an example, see the bottom of Figure 3-9, where URL-encoded POST data is displayed.
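Decoding a URL-encoded POST body like the one shown in Figure 3-9 is straightforward. A sketch with Python's standard library; the field names and values here are invented:

```python
from urllib.parse import parse_qs

# A URL-encoded POST body as it appears on the wire ('+' means space).
body = "username=alice&comment=hello+world&docID=19231"

fields = parse_qs(body)
print(fields)
# {'username': ['alice'], 'comment': ['hello world'], 'docID': ['19231']}
```

Each value comes back as a list because a parameter may legally appear more than once in a single submission.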



Discussion

WebScarab is a powerful tool. As a proxy it reveals everything there is to see between

your browser and the web server. This is unlike Firebug, which resets every time you

click a link. WebScarab will keep a record for as long as it is open. You can save this

history, in order to resubmit an HTTP request (with certain values modified). In essence,

with WebScarab, you can observe and change anything the web server sends you.





