7.7 Interactively Impersonating Another Device
Figure 7-1. User Agent Switcher menu option
Use Chris Pederick’s User Agent Switcher extension for Firefox. It can be found at http:
//chrispederick.com/work/useragentswitcher/. It is installed like any Firefox extension
(see Recipe 2.2).
Once installed, it provides an option on the Tools menu, as shown in Figure 7-1. From
there you can easily choose another User-Agent. Firefox will continue to masquerade
as that user agent until you choose something else.
To change your User-Agent to Googlebot, for example, simply select Tools → User Agent
Switcher → Googlebot.
To add a user agent, go to Tools → User Agent Switcher → Options → Options... and
then choose the “Agents” option on the left. Figure 7-2 shows the dialog box where
you can manage your existing User-Agent strings and add new ones.
7.7 Interactively Impersonating Another Device | 137
Figure 7-2. User Agent Switcher agents dialog
There are several online databases of User-Agent strings available on the Web.
As a quick reference, Table 7-1 lists several popular web browsers and their User-Agent strings, for use in your tests. Note that these strings are pretty long and may be presented here across multiple lines. In actuality, each is a single string, with no line breaks or special characters in it.
Table 7-1. Popular User-Agent strings
Internet Explorer 6.0 on Windows XP SP2
Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727)
Safari 2.0.4 on MacOS X 10.4.9
Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en) AppleWebKit/419 (KHTML, like Gecko)
Firefox 2.0.0.3 on Windows XP
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.3) Gecko/20070309 Firefox/2.0.0.3
Treo 600 Smartphone (“Blazer” web browser)
Mozilla/4.0 (compatible; MSIE 6.0; Windows 95; PalmSource; Blazer 3.0)
Motorola RAZR V3
MOT-V3/0E.40.3CR MIB/2.2.1 Profile/MIDP-2.0 Configuration/CLDC-1.0
138 | Chapter 7: Automating Specific Tasks with cURL
Googlebot (Google’s search spiders)
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
cURL on MacOS X 10.4.9
curl/7.15.4 (i386-apple-darwin8.9.1) libcurl/7.15.4 OpenSSL/0.9.7l zlib/1.2.3
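The last entry in Table 7-1 is cURL's own default User-Agent. You can reconstruct its leading token from your local copy of curl; note that recent curl releases send just this short `curl/<version>` token, while older ones, like the 7.15.4 entry above, appended library details. A quick sketch:

```shell
# Derive the "curl/<version>" token that the local curl command-line
# tool uses to identify itself in its default User-Agent header.
# The first line of `curl --version` looks like: curl 8.5.0 (...) ...
CURL_VERSION=$(curl --version | head -n 1 | awk '{print $2}')
echo "curl/${CURL_VERSION}"
```

Comparing this output against what your server logs record for a test request is an easy way to confirm which tool actually made the request.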
The User Agent Switcher dialog box will prompt you for a variety of things: appversion, description, platform, useragent, vendor, and vendorsub. These things roughly
correspond to the historical components of the User-Agent header. You don’t need to
worry about them, however. You can simply put the entire string in the useragent field
and it will work as you expect.
Some developers will wrongly view cURL as a “hacker tool” and will
want to recognize its User-Agent and deny access to anyone using cURL.
This is a misguided security effort, as you should realize from reading
this recipe. Anyone using cURL (or wget, or fetch, or a Perl script) can
change their User-Agent to impersonate anything they want. Rejecting
requests from cURL doesn’t really keep a competent hacker out at all.
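To make the note concrete, here is a minimal sketch of how trivially the User-Agent is chosen by the client across common tools. The URL is a placeholder, so the commands are only printed rather than executed; the Googlebot string is the one from Table 7-1.

```shell
# Any HTTP client lets the caller pick the User-Agent, so filtering
# on it blocks only the laziest attackers. URL below is a placeholder.
UA='Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'
URL='http://www.example.com/'

# cURL spells the option -A (long form --user-agent):
CURL_CMD="curl -A '$UA' $URL"
# wget spells it -U (long form --user-agent):
WGET_CMD="wget -U '$UA' $URL"

echo "$CURL_CMD"
echo "$WGET_CMD"
```

Either command line produces a request that is indistinguishable, at the User-Agent level, from a genuine Googlebot crawl.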
7.8 Imitating a Search Engine with cURL
Your web application reacts to the User-Agent header, and you want to see how the
web page looks when Google, Yahoo!, MSN, or some other robot crawls your site. This
may be necessary, especially from a security standpoint, to be sure that no confidential
information is being leaked when a robot crawls the site or application.
See Example 7-6.
Example 7-6. Fetching a page as googlebot
# Attempt to fetch. Get a registration page instead.
curl -o curl-normal.html http://www.linux-mag.com/id/744/
# Fetch as Google. Get the article content.
curl -o curl-google.html -A \
'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)' \
http://www.linux-mag.com/id/744/
The authors have found a few interesting websites that react to different User-Agent
strings, and they have good reasons. One of those reasons is to remain visible to search
engines, like Google and Yahoo!, but to require normal users to pay or register to view
the content. Linux Magazine (http://www.linux-mag.com/), at the time of this writing,
was one such site. If you search Google for an article that is published at Linux Magazine, Google will be able to find it. That’s because Google actually sees all the content
at the website. If you naïvely click Google’s link with your web browser, you’ll find that
you don’t go to the article. Rather you go to a web page that prompts you to register.
How is it that Google gets the contents of the article, but you don’t? Google sees things
that the average browser does not. The http://www.linux-mag.com web server distinguishes between Google and you by the User-Agent string. Your browser identifies itself
as Firefox or Safari or Internet Explorer. Google identifies itself as “Googlebot.” If we
tell cURL to fetch the page with a Google User-Agent, we will actually get the content
of the article. If we tell cURL to fetch the page with its normal User-Agent or a normal
browser User-Agent, we’ll receive a registration page instead. Run the script in Example 7-6 and compare the two output files it produces.
As a security tester, you would want to fetch pages this way and be sure that nothing
confidential was leaked to the search engines. The value in something like Example 7-6 is that you can automate the process and add it to your regression tests.
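One way to automate that comparison is a small shell check that fetches the same page under two User-Agents and compares the results. The sketch below exercises the plumbing against a local file:// URL so it runs offline; in a real regression test you would point URL at a page of your own application and use the Googlebot string from Example 7-6.

```shell
#!/bin/sh
# Fetch the same URL twice, once per User-Agent, and compare the bodies.
GOOGLEBOT='Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'

fetch_as () {
    # $1 = URL, $2 = User-Agent string, $3 = output file
    curl -s -A "$2" -o "$3" "$1"
}

# Offline demonstration; replace with an http:// URL on your own site.
echo "article body" > /tmp/ua-check-page.txt
URL="file:///tmp/ua-check-page.txt"

fetch_as "$URL" "plain-browser" /tmp/ua-normal.html
fetch_as "$URL" "$GOOGLEBOT"    /tmp/ua-google.html

if cmp -s /tmp/ua-normal.html /tmp/ua-google.html; then
    echo "SAME: content does not vary by User-Agent"
else
    echo "DIFFERENT: inspect ua-google.html for content leaked to spiders"
fi
```

A "DIFFERENT" result is not automatically a bug, but it tells you exactly which pages deserve a manual look for leaked confidential content.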
7.9 Faking Workflow by Forging Referer Headers
As a means of protection, or to aid in workflow, some web applications consider the
referer header. When a page is loaded in a normal web browser, the web browser will
send a referer header that indicates what page it had previously been viewing. Thus,
some applications read what is in the referer and make decisions about whether or
not to allow a request, based on whether the referer is what they expected. If your
application works this way, you will need to pretend that you loaded a prerequisite
page prior to loading the page you’re testing.
The referer is intentionally misspelled. The official RFC standards were inadvertently published with this misspelling, and it has been perpetuated ever since.
# Fetch login page
curl -o login.php http://www.example.com/login.php
# Fetch reports page, with login page as Referer
curl -o reports.php -e http://www.example.com/login.php \
    http://www.example.com/reports.php
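If this two-step pattern recurs across your tests, it can be wrapped in a small helper. The function name and URLs below are invented for illustration; the actual Referer forgery is done by cURL's -e option.

```shell
# fetch_via PREREQ_URL TARGET_URL OUTFILE
# Fetch TARGET_URL into OUTFILE while claiming, via the Referer
# header (curl's -e option), that we arrived from PREREQ_URL.
fetch_via () {
    curl -s -e "$1" -o "$3" "$2"
}

# Hypothetical usage against the example application:
# fetch_via http://www.example.com/login.php \
#           http://www.example.com/reports.php reports.php
```

Each protected page in the workflow then needs only one line in the test script, with its expected prerequisite page as the forged Referer.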