Tweaking IIS settings eases large client uploads in Windows Server

Making the proper metabase settings in Microsoft IIS can ease the process of large client uploads in Windows Server. Here are some changes to consider.

Many websites offer a way for users to upload content directly through a Web browser, but the lack of interactivity among the client, the remote server and the user doesn’t make it easy. For one, there is little or no feedback during the upload process, and sometimes the only feedback you get after a particularly lengthy upload is a nasty error.

Still, uploading files through the browser has become a widely accepted method of file transfer, since users prefer its simplicity to mucking around with File Transfer Protocol (FTP).

While this is widely accepted, it doesn't guarantee immunity from problems. One issue Microsoft's Internet Information Server (IIS) has been known to have with file uploads is a timeout error that can crop up when uploading a file larger than 48K. Sometimes this simply results in a broken upload; other times it can throw the browser into a loop where it keeps trying to resend data, since there's no standard response for this particular situation.

The reason for the slipup has to do with the way IIS deals with input from a client that's meant for an application, such as an ASP page. When a client starts submitting data, IIS reads the first 48K into a buffer, which is then passed to the application for processing. Anything beyond that 48K sits in the network socket and stays there unless the application specifically requests it -- typically via a call like Request.BinaryRead(Request.TotalBytes). If the application doesn't request it, the rest of the data still sitting in the socket is treated as a whole new request from the client. This typically results in a 413 error that reads "Request entity too large."
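The good-coding-practice side of this can be sketched in a classic ASP page. Request.TotalBytes and Request.BinaryRead are the real ASP intrinsics; the page and the parsing step are illustrative only:

```asp
<%
' Illustrative sketch: drain the entire request body so IIS doesn't
' mistake leftover socket data for a second, malformed request.
Dim totalBytes, binData
totalBytes = Request.TotalBytes

If totalBytes > 0 Then
    ' Read everything the client sent, not just the first 48K buffer.
    binData = Request.BinaryRead(totalBytes)
    ' ... parse binData (e.g., multipart/form-data) as needed ...
End If
%>
```

An application that reads the full body this way never leaves unread data in the socket, so the 413 scenario described above doesn't arise.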

Normally, this sort of issue can be avoided with good coding practices, as described above, but certain situations might call for specific property settings. For example, if you're administering a site where the uploads are being processed by a third-party ISAPI extension that doesn't follow such practices, some tweaking of the 48K limit may be in order. This limit isn't set in stone, but is instead a soft default set by IIS through a metabase property known as UploadReadAheadSize (in IIS 7 and later, the equivalent attribute lives in the serverRuntime configuration section). The default is 49,152 bytes (48K), but you can bump this up to as high as 4GB if needed. It can also be set on an individual website or for the entire IIS service if need be.
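As a sketch of how the change might look on either generation of IIS -- the site name and the 10MB value here are example choices, not recommendations:

```bat
REM IIS 7 and later: raise uploadReadAheadSize (in bytes) for one site.
REM "Default Web Site" and 10485760 (10 MB) are example values.
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" ^
  -section:system.webServer/serverRuntime /uploadReadAheadSize:10485760 /commit:apphost

REM IIS 6: set the UploadReadAheadSize metabase property service-wide.
cscript %systemdrive%\inetpub\adminscripts\adsutil.vbs set w3svc/UploadReadAheadSize 10485760
```

Setting the property at the w3svc level applies it to every site; targeting a single site, as the appcmd line does, keeps the larger buffer confined to where it's actually needed.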

This may not be the only property you need to set. You might also need to change the maxRequestLength attribute (used by ASP.NET applications) or the maxAllowedContentLength attribute (in IIS 7 and later) to allow for a larger upload size; their defaults are roughly 4MB and 30MB, respectively, so large uploads can trip either one.
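Both attributes can be set in a site's web.config. This fragment is illustrative; the 100MB limits are example values, and note that the two attributes use different units:

```xml
<!-- Illustrative web.config fragment; 100 MB limits are example values. -->
<configuration>
  <system.web>
    <!-- ASP.NET: maxRequestLength is measured in kilobytes -->
    <httpRuntime maxRequestLength="102400" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- IIS 7+: maxAllowedContentLength is measured in bytes -->
        <requestLimits maxAllowedContentLength="104857600" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```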

In some instances, it actually helps to set UploadReadAheadSize explicitly to zero. This forces IIS to stream the contents of the submission directly to the ISAPI extension processing the request. It may be worth it to try this first to see if it resolves the problem, but watch for possible side effects from IIS applications that don’t deal well with the read-ahead buffer turned off.
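On IIS 7 and later, disabling the read-ahead buffer is the same appcmd operation with a value of zero (again, the site name is an example):

```bat
REM Stream uploads straight to the ISAPI extension by disabling read-ahead.
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" ^
  -section:system.webServer/serverRuntime /uploadReadAheadSize:0 /commit:apphost
```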

Finally, keep in mind that increasing UploadReadAheadSize can create an attack vector. If someone decides to bombard your system with uploads that do nothing but eat up bandwidth, they’ll be able to do it more easily with this setting’s threshold ramped up. To avoid attacks, use a realistically high value that reflects the size of uploads users are likely to make to a site and whenever possible, insist on some kind of authentication from the client before allowing uploads to ensure they come from a trusted party.

Serdar Yegulalp has been writing about computers and information technology for more than 15 years for a variety of publications, including InformationWeek and Windows Magazine.
