Programming Microsoft ASP.NET 4 - Dino Esposito
Output caching refers to caching, for performance reasons, some of the semi-dynamic content served by the Web server. Semi-dynamic content is any content that changes only partially from request to request. It is the opposite of static content, such as JPEG images or HTML pages, and also different from classic ASP.NET pages that must be entirely regenerated for every request.

The whole point of output caching is skipping the processing of a given ASP.NET page for a number of seconds. For each interval, the first request is served as usual; however, its response is cached at the IIS level so that subsequent requests for the same resource that arrive within the interval are served as if they were for static content. When the interval expires, the next incoming request is served by processing the page as usual, its response is cached again, and so forth. I’ll say a lot more about output caching in Chapter 17.
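Anticipating Chapter 17, the "number of seconds" is what the Duration attribute of the @OutputCache page directive expresses in ASP.NET. A minimal example (the 60-second duration is arbitrary):

```aspx
<%@ Page Language="C#" %>
<%-- Cache this page's response for 60 seconds; serve one copy
     regardless of query-string or form parameters. --%>
<%@ OutputCache Duration="60" VaryByParam="None" %>
```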

When it comes to configuring output caching in IIS, you first define the extensions (for example, .aspx) you intend to cache, and then you choose between user-mode and kernel-mode caching. What’s the difference?
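In configuration terms, this maps to the &lt;caching&gt; section under system.webServer. Here's a sketch, with attribute values per the IIS 7 schema (the 30-second duration is illustrative):

```xml
<!-- web.config fragment: cache .aspx responses for 30 seconds,
     in both the user-mode and the kernel-mode (http.sys) cache. -->
<system.webServer>
  <caching enabled="true" enableKernelCache="true">
    <profiles>
      <add extension=".aspx"
           policy="CacheForTimePeriod"
           kernelCachePolicy="CacheForTimePeriod"
           duration="00:00:30" />
    </profiles>
  </caching>
</system.webServer>
```

The same settings can be applied interactively through the Output Caching feature of IIS Manager, which writes an equivalent profile for you.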

It all depends on where IIS ends up storing your cached data. If you opt for user-mode caching, any content will be stored in the memory of the IIS worker process. If you go for kernel-mode caching, it is then the http.sys driver that holds the cache.

Using the kernel cache can give you over ten times the throughput you would get with a user-mode cache. Additionally, the latency of responses is dramatically better. There are some drawbacks, too.

Kernel caching is available only for pages requested through a GET verb, which means that no kernel caching is possible on ASP.NET postbacks. Furthermore, pages whose semi-dynamic content must be cached based on form values or query string parameters are not stored in the kernel cache; the kernel cache supports multiple copies of a response only when they vary by HTTP headers. Finally, note that the ASP.NET Request/Cache performance counters will not be updated for pages served by the kernel cache.

Application Warm-up and Preloading


As mentioned, an ASP.NET application is hosted in an IIS application pool and run by an instance of the IIS worker process. An application pool is started on demand when the first request for the first of the hosted applications arrives. The first request, therefore, sums up different types of delay. There’s the delay for the application pool startup; there’s the delay for the ASP.NET first-hit dynamic compilation; and finally, the request might experience the time costs of its own initialization. These delays add up again any time the application pool is recycled, or the entire IIS machine is rebooted.

In IIS 7.5, with the IIS Application Warm-up module (also available as an extension to IIS 7), any initialization of the application pool is performed behind the scenes so that it doesn’t add delays for the user. The net effect of the warm-up module is simply to improve the user experience; the same number of system operations is performed with and without warm-up.
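On IIS 7.5 with ASP.NET 4, the preloading side of this feature surfaces through the System.Web.Hosting.IProcessHostPreloadClient interface: IIS instantiates a registered type and calls its Preload method while the application is spinning up, before user requests are served. A minimal sketch follows; the class name and the body of Preload are illustrative, not prescribed by the framework:

```csharp
using System.Web.Hosting;

// Hypothetical warm-up type. It must be registered in applicationHost.config
// under <serviceAutoStartProviders> and referenced by the application's
// serviceAutoStartProvider attribute for IIS to invoke it.
public class AppWarmUp : IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // Runs before IIS starts serving requests for the application.
        // Typical work: prime caches, open database connections,
        // force compilation/JIT of hot code paths.
    }
}
```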

Behavior of a Warmed-up Application Pool


You apply the warm-up feature to an application pool. An application pool configured in this way behaves slightly differently when the whole IIS publishing service is restarted and when the worker process is recycled.

In the case of an IIS service restart, any application pools configured for warm-up are started immediately, without waiting for the first request to come in, as would be the case without warm-up.
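In configuration terms, this auto-start behavior maps to the startMode and serviceAutoStart* attributes of applicationHost.config. A sketch, with the pool, site, provider, and type names all illustrative (the provider type is expected to implement System.Web.Hosting.IProcessHostPreloadClient):

```xml
<!-- applicationHost.config fragments; names are illustrative. -->
<applicationPools>
  <!-- Start the pool with the IIS service, not on first request. -->
  <add name="MyAppPool" startMode="AlwaysRunning" />
</applicationPools>

<sites>
  <site name="MySite">
    <!-- Run the registered preload provider during startup. -->
    <application path="/"
                 serviceAutoStartEnabled="true"
                 serviceAutoStartProvider="AppPreload" />
  </site>
</sites>

<serviceAutoStartProviders>
  <add name="AppPreload" type="MyCompany.AppPreload, MyAssembly" />
</serviceAutoStartProviders>
```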

When warm-up is enabled, IIS also handles the recycling of the worker process differently. Normally, recycling consists of killing the current instance of the worker process and starting a new one. For the time the whole operation takes, IIS keeps receiving requests, which of course experience some delay. With warm-up enabled, the two operations occur in the reverse order: first a new worker process is started up, and then the old one is killed.

When the new process

