Chapter 21. Introducing mod_spdy: A SPDY Module for the Apache HTTP Server


Figure 21-1. Apache’s connection and request processing

This works well for HTTP, but it presents a problem for multiplexed protocols like SPDY because in this flow, each connection can only process one request at a time. Once Apache starts processing a request, control is transferred to the request handler and does not return to the connection handler until the request is complete.

To allow for SPDY multiplexing, mod_spdy separates connection processing and request processing into different threads. The connection thread is responsible for decoding SPDY frames and dispatching new SPDY requests to the mod_spdy request thread pool. Each request thread can process a different HTTP request concurrently.

Figure 21-2 shows the high-level architecture.
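As a rough illustration of this split (not mod_spdy's actual code, which is C++ running inside Apache), the model can be sketched as a connection loop that decodes frames and hands each new stream to a pool of concurrent handlers; all names here (`Frame`, `RequestPool`, `dispatch`) are illustrative:

```javascript
// Conceptual sketch of mod_spdy's separation of connection and request
// processing. The connection loop decodes frames and never blocks on a
// request; each dispatched request runs concurrently, so one connection
// can serve many streams at once.
class RequestPool {
  constructor(handler) {
    this.handler = handler;
    this.responses = [];
  }
  // Run a request handler without blocking the connection loop.
  dispatch(streamId, request) {
    return this.handler(request).then((body) => {
      this.responses.push({ streamId, body });
    });
  }
}

async function connectionLoop(frames, pool) {
  const inFlight = [];
  for (const frame of frames) {
    // A SYN_STREAM frame opens a new SPDY stream: hand it off and keep
    // reading further frames instead of waiting for the response.
    if (frame.type === 'SYN_STREAM') {
      inFlight.push(pool.dispatch(frame.streamId, frame.request));
    }
  }
  await Promise.all(inFlight);
  return pool.responses;
}
```

In the single-threaded flow of Figure 21-1, the loop could not read the second frame until the first request finished; here the handoff returns immediately.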

To learn more about how mod_spdy works within Apache, consult our wiki (http://


Help to Improve mod_spdy

You can help us to make mod_spdy better by doing compatibility and performance testing, by reviewing the code (http://code.google.com/p/mod-spdy/source/browse/trunk/src#src%2Fmod_spdy%2Fcommon), and by sending us feedback on the mod_spdy discussion list (https://groups.google.com/group/mod-spdy-discuss). We look forward to your contributions and feedback!

To comment on this chapter, please visit http://calendar.perfplanet.com/ -server/. Originally published on Dec 21, 2011.



Figure 21-2. High-level architecture





Lazy Evaluation of CommonJS Modules

Tobie Langel

About two years ago, the mobile Gmail team posted an article focused on reducing the startup latency (http://googlecode.blogspot.com/2009/09/gmail-for-mobile-html5-series-reducing.html) of their HTML5 application. It described a technique which enabled bypassing parsing and evaluation of JavaScript until it was needed by placing it inside comments. Charles Jolley (http://www.okito.net/) of SproutCore (http://sproutcore.com/) fame was quick to jump on the idea. He experimented with it (http://blog.sproutcore.com/faster-loading-through-eval/) and found that similar performance gains could be achieved by putting the code inside of a string rather than commenting it. Then, despite promises (http://www.okito.net/post/8409610016/on-sproutcore-2-0) of building it into SproutCore, this technique pretty much fell into oblivion. That’s a shame because it’s an interesting alternative to lazy loading that suits CommonJS modules really well.
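The core of the string variant can be sketched in a few lines. This is a hypothetical illustration, not Gmail's or SproutCore's actual implementation: module bodies ship as strings, so the engine pays no parse or evaluation cost for them at page load, and the cost is incurred only on first require:

```javascript
// Hypothetical sketch of lazy evaluation for CommonJS-style modules.
// Module source arrives as a plain string; new Function() parses and
// evaluates it only the first time the module is required.
const moduleSources = {
  math: "exports.square = function (n) { return n * n; };"
};
const moduleCache = {};

function lazyRequire(name) {
  if (!moduleCache[name]) {
    const exports = {};
    // Parsing and evaluation happen here, on demand, not at startup.
    new Function('exports', moduleSources[name])(exports);
    moduleCache[name] = exports;
  }
  return moduleCache[name];
}
```

The first `lazyRequire('math')` call pays the parse/eval cost; subsequent calls return the cached exports object, mirroring ordinary CommonJS require semantics.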

Close Encounters of the Text/JavaScript Type

To understand how this technique works, let’s look at what happens when the browser’s parser encounters a script element with a valid src attribute. First, a request is sent to the server. Hopefully the server responds, and the browser proceeds to download (and cache) the requested file. Once these steps are completed, the file still needs to be parsed and evaluated (Figure 22-1).

Figure 22-1. Uncached JavaScript resource fetching, parsing, and evaluation



For comparison, Figure 22-2 shows the same request hitting a warm HTTP cache.

Figure 22-2. Cached JavaScript resource fetching, parsing, and evaluation

What’s worth noticing here—other than the obvious benefits of caching—is that parsing and evaluation of the JavaScript file still happen on every page load, regardless of caching. While these steps are blazing fast on modern desktop computers, they aren’t on mobile, even on recent, high-end devices. Consider the graph in Figure 22-3, which compares the cost of parsing and evaluating jQuery on the iPhone 3, 4, 4S, iPad, iPad 2, a Nexus S, and a MacBook Pro. (Note that these results are indicative only. They were gathered using the test hosted at lazyeval.org (http://lazyeval.org/), which at this point is still very much alpha.)
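A crude version of such a measurement (the lazyeval.org harness is presumably more careful about warm-up, repetition, and engine caching) simply times an `eval()` of the library source:

```javascript
// Rough parse + evaluate timing, in the spirit of the lazyeval.org test.
// eval() forces the engine to parse and run the string, so the elapsed
// time covers both steps. A real benchmark would repeat the run many
// times and discard warm-up iterations.
function measureEval(source) {
  const start = Date.now();
  eval(source); // parsing and evaluation both happen inside this call
  return Date.now() - start;
}
```

Feeding this function the concatenated source of jQuery approximates the per-page-load cost the graph reports.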

Remember that these times come on top of whatever networking costs you’re already facing. Furthermore, they’ll be incurred on every single page load, regardless of whether or not the file was cached. Yes, you’re reading this right. On an iPhone 4, parsing and evaluating jQuery takes over 0.3 seconds, every single time the page is loaded. Arguably, those results have substantially improved with more recent devices, but you can’t count on your whole user base owning last-generation smartphones, can you?

Lazy Loading

A commonly suggested solution to the problem of startup latency is to load scripts on demand (for example, following a user interaction). The main advantage of this technique is that it delays the cost of downloading, parsing, and evaluating until the script is needed. Note that in practice—and unless you can delay all your JavaScript files—you’ll end up having to pay round-trip costs twice (Figure 22-4).
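In the browser, this pattern usually looks something like the following sketch (standard DOM API; the callback names are illustrative):

```javascript
// Typical on-demand script loading: inject a <script> element and wire
// up callbacks. Delivery is asynchronous and can fail outright, which
// is exactly the fragility this approach introduces.
function loadScript(url, onLoad, onError) {
  const script = document.createElement('script');
  script.src = url;
  script.async = true;
  script.onload = onLoad;
  // The network or server may have become unavailable by the time the
  // user triggers the load.
  script.onerror = onError;
  document.head.appendChild(script);
}
```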

There are a number of downsides to this approach, however. First of all, the code isn’t guaranteed to be delivered: the network or the server can become unavailable in the meantime. Secondly, the speed at which the code is transferred is subject to the network’s quality and can thus vary widely. Lastly, the code is delivered asynchronously. These downsides force the developer to build both defensively and with asynchronicity in mind, irremediably tying the implementation to its delivery mechanism in the process. Unless the whole codebase is built on these premises—which is probably


