Chapter 26. Performance Testing with Selenium and JavaScript


First, we store the time at which the server side script started, right before we invoke

the MVC framework:

// store script start time in microseconds

define('START_TIME', microtime(TRUE));


Second, when the MVC framework is ready to buffer the page back to the browser,

we insert some inline javascript code that includes:

• The captured start time (“request time”)

• The current time (“response time”)

• The total time spent doing backend calls (how do we know this? Our web service

client keeps track of the time spent doing webservice calls, and with each webservice

response, the backend includes the time it spent doing database queries)


In addition to those metrics, we include some jQuery code to capture:

• The document ready event time

• The window onload event time

• The time of the last click (which we store in a cookie for the next page load)

In other words, in our HTML document (somewhere toward the end), we have a

few lines of javascript that look like this:
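The snippet itself is omitted here, so below is a minimal, hypothetical sketch of what such an inline block might look like. The `Perf` object, its field names, and the jQuery cookie handling are assumptions; in practice the server would echo the real timestamps in place of the hard-coded example values.

```javascript
// Hypothetical sketch: the server emits this inline, substituting real values
// for the hard-coded examples below. All names here are illustrative.
var root = typeof window !== 'undefined' ? window : globalThis;

root.Perf = {
  requestTime: 1324900000.123,  // PHP START_TIME, echoed by the server
  responseTime: 1324900000.456, // microtime(TRUE) when the page was buffered
  backendTime: 0.180            // seconds spent in web service (and DB) calls
};

// Derived metric: server-side time not spent waiting on backend calls
root.Perf.appTime =
  root.Perf.responseTime - root.Perf.requestTime - root.Perf.backendTime;

// Browser-side events, captured with jQuery when running in a page
if (typeof jQuery !== 'undefined') {
  jQuery(function () {
    root.Perf.readyTime = Date.now() / 1000;   // document ready
  });
  jQuery(root).on('load', function () {
    root.Perf.loadTime = Date.now() / 1000;    // window onload
  });
  jQuery(document).on('click', function () {
    // remember the last click for the *next* page load
    document.cookie = 'lastClick=' + Date.now() / 1000 + '; path=/';
  });
}
```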

Finally, we insert a couple more javascript lines in the head tag, so that we can record

an approximate time at which the page was received by the browser. As Alois Reitbauer

pointed out in Timing the Web (http://calendar.perfplanet.com/2011/timing-the-web/),

this is an approximation as it does not account for things like DNS lookups.

[...] more code [...]
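The elided head snippet might look roughly like this (a hypothetical sketch; the `PerfHead` name is an assumption):

```javascript
// Placed as early as possible in <head>: the earlier this runs, the closer the
// timestamp is to when the browser actually started receiving the page
// (per the approximation above, DNS lookup time etc. is not included).
var root = typeof window !== 'undefined' ? window : globalThis;
root.PerfHead = { receivedTime: Date.now() / 1000 }; // seconds, like the server times
```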

Now that we have some metrics for a given request in the browser, how do we retrieve

them so that we can examine them?

Collecting and Analyzing the Data

This is where Selenium comes into play. We use Selenium to simulate a person using

our web application. Again, this is technology-agnostic, as you can control Selenium

from various languages (we use PHP and PHPUnit, but you could do the same with

Python or Ruby).

Selenium has an API that you can call to invoke some javascript snippet and get back

the output of the executed code. This API is called getEval.

Within our test code, we first open a page we want to analyze, then use the getEval

API to retrieve the metrics we recorded, and finish by storing the metrics for later use:


class ExampleSeleniumTest extends PHPUnit_Extensions_SeleniumTestCase
{
    public function testLoadSomePage()
    {
        // Open our web application ('/' is an assumed entry URL)
        $this->open('/');

        // Click a link to load the page we want to analyze
        $this->clickAndWait('link=Some Page');

        // Use the getEval API to retrieve the metrics we recorded
        // (getEval returns a string, so serializing the object, e.g. with
        // JSON.stringify, may be needed depending on the Selenium version)
        $metrics = $this->getEval('window.Perf');

        // Call our internal method that will store the metrics for later use
        // Note: we include a reference to the page or use case we are testing
        $this->saveMetrics('some-page', $metrics);
    }
}



We use this pattern for multiple use cases in our application. Also note that while I

used the example of a full page load, our framework also supports collecting metrics

for AJAX interactions, which we do quite a lot (for instance remotely loading content

triggered by a user click).
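As a rough illustration of the AJAX case, a helper like the following (hypothetical; not our actual framework code) could record the duration of a remotely loaded piece of content under a use-case name, alongside the page-load metrics:

```javascript
// Hypothetical helper: record how long an asynchronous interaction took,
// keyed by a use-case name, so Selenium can read it back via getEval.
var root = typeof window !== 'undefined' ? window : globalThis;
root.Perf = root.Perf || {};
root.Perf.ajax = {};

function timeAjax(name, doRequest) {
  var start = Date.now();
  // doRequest performs the actual call and invokes its callback when done
  doRequest(function onDone() {
    root.Perf.ajax[name] = (Date.now() - start) / 1000; // seconds
  });
}

// Usage: wrap the click-triggered remote load
timeAjax('load-remote-content', function (done) {
  // ... fire the real request here and call done() in its success handler;
  // stubbed as an immediate completion for illustration ...
  done();
});
```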

One of the great things about using Selenium is multiple browser support. We have a

set of virtual machines running various versions of Internet Explorer and Firefox. This

enables our performance test suite to run across multiple platforms.

The last piece of the puzzle is analyzing the data we collected. For this purpose, we

built a small database-driven application that reads the metrics we collected and plots



them. We can apply filters such as specific browser vendor or version, specific use case,

specific version of our software, etc. We can then look at the complete data over time.

Figure 26-1 shows the logic we use to plot the data we collected.

Figure 26-1. Web request times

Sample Results

Figure 26-2 is an example of a chart generated after collecting data.

Figure 26-2. Web timings sample

In the above sample, we can observe a client-side performance issue in Sample 1, some

inefficient code in the backend web services in Sample 2, and a slow database query in

Sample 3.




When we built this framework in 2009, we had multiple goals in mind:

• Monitor performance between our software releases and catch any regressions

• Monitor performance of upcoming features

• Monitor the scalability of the software as we add more users/more data

Looking back, this tool has yielded some great results; here are a few examples:

• Discovered bugs in our javascript code that would result in much higher load

times in IE

• Found issues in the way we were manipulating HTML with javascript and were

able to improve the responsiveness of the impacted user interactions

• Eliminated bottlenecks in our backend web services as we increased the amount of

data: we were able to pinpoint exactly where the problem was (inefficient backend

code, slow database queries, etc.)

Closing Words

In conclusion, I’d like to look into some ideas we have in mind to improve our setup.

I’d like to use the tool more often. We currently run the test suite several times during

our development process and before each release, but this is a manual process. It would

be great to tie the test suite into our Jenkins CI builds. A different idea would be to

ship the tool as part of our product and run it in production, providing us with some

analytics on real world usage of our platform.

As I mentioned, we are using virtual machines to test on multiple platforms. This adds

a bit of overhead in terms of maintenance. Maybe we should look at the hosted Selenium solution from Sauce Labs?

When we built the product, the performance landscape was a bit different and there

are tools today that were not available back then. Would we see any benefits if we were

to leverage WebPageTest, boomerang, etc.?


I’d like to acknowledge Bill Scott for his presentation on RUM at Netflix, which inspired

us to build our framework.

To comment on this chapter, please visit http://calendar.perfplanet.com/



published on Dec 26, 2011.





Chapter 27. A Simple Way to Measure Website Performance


Pavel Paulau

Not so long ago, folks from Neustar demonstrated at the Velocity Conference the possibility of effective client-side performance testing using only free, open-source solutions.

They introduced a bundle of tools, including Selenium and BrowserMob Proxy. The first

is intended to automate the emulation of user interactions; the second is good at

capturing metrics. It was a really inspiring presentation.

The greatest feature of their approach was that all performance data are consolidated

into a single container, an HTTP Archive (HAR). This makes further processing of test

results more controlled and predictable, thanks to strict format standardization.

However, at that moment there were no advanced tools for dealing with HAR files.

HAR Viewer is wonderful but not suitable for a common testing workflow. ShowSlow,

on the other hand, is a perfect example of a repository for automated performance

measurements. Unfortunately, handling HAR files is not its strongest trait. So a new

project, HAR Storage (http://code.google.com/p/harstorage/), appeared.


The testing process is rather straightforward. All you need is to create a Selenium script

that describes common user actions. Then you arm your script with methods to control

a proxy server via its API. This means not only capturing and storing streams of HTTP

requests, but also customizing network characteristics (e.g., bandwidth and latency)

and filtering traffic. The last point is extremely important for analyzing the impact of

third-party components on overall site performance.
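As a sketch of what "arming the script" might involve: BrowserMob Proxy exposes a REST API for creating a proxy instance, starting HAR capture, and limiting bandwidth and latency. The helpers below only build the requests, so the wiring can be shown without a proxy actually running; the endpoint paths follow the proxy's documented API, while the host and example values are assumptions.

```javascript
// Hedged sketch: request builders for the BrowserMob Proxy REST API.
const PROXY_API = 'http://localhost:8080'; // assumed proxy control host

function createProxyRequest() {
  // POST /proxy starts a new proxy instance and returns its port
  return { method: 'POST', url: PROXY_API + '/proxy' };
}

function startHarRequest(port, pageRef) {
  // PUT /proxy/<port>/har begins capturing traffic into a new HAR
  return {
    method: 'PUT',
    url: PROXY_API + '/proxy/' + port + '/har',
    body: 'initialPageRef=' + encodeURIComponent(pageRef)
  };
}

function limitRequest(port, downstreamKbps, latencyMs) {
  // PUT /proxy/<port>/limit customizes bandwidth and latency
  return {
    method: 'PUT',
    url: PROXY_API + '/proxy/' + port + '/limit',
    body: 'downstreamKbps=' + downstreamKbps + '&latency=' + latencyMs
  };
}
```

A test would issue these requests with any HTTP client, point the browser that Selenium drives at the returned proxy port, and finally GET /proxy/&lt;port&gt;/har to collect the archive.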

Finally, you can send the HAR of each page or asynchronous event to a local repository:

HAR Storage. HAR Storage (http://harstorage.com/) is a simple web application built

on Pylons and MongoDB. It allows extracting detailed metrics from HAR files, storing

test results, and visualizing all gathered data.


The key advantage is high flexibility. With BrowserMob Proxy, you can test a website

in any modern browser that supports custom proxy settings. You can even test with

mobile browsers.

Selenium, in turn, makes it possible to simulate any sophisticated user scenario. Therefore, you can analyze both the speed of a single page and the performance of complex

business transactions.

HAR Storage has cool features too. For instance, you can compare the results of different

tests. This is a great help for analyzing third-party content or for investigating the

relationship between site speed and network quality (Figure 27-1).

Figure 27-1. Performance Trends

Last but not least, with HAR Storage you can continuously track the performance of

your website or application at any development phase.


Nothing is perfect in this world. BrowserMob Proxy runs outside the browser: on the

one hand, it has minimal impact on browser performance; on the other hand, internal

browser events are inaccessible. Thus you can’t estimate the performance of rendering

or JavaScript parsing. Tools like dynaTrace AJAX Edition are more suitable for such tasks.

This approach may seem too complicated to some people. In fact, it isn’t. WebPagetest.org lets you simply put in a URL and enjoy the result. But if you need real cross-browser testing, measurements over time, and implementation of complex use cases,

this method will work for you.


