Dynamic or static memory allocation in SQL Server?

This tip compares the two basic administrative options for controlling the memory SQL Server uses: dynamic and static allocation.

SQL Server has two basic administrative options that control how much memory the program uses: dynamic and static allocation. Dynamic allocation lets the administrator declare a range of memory sizes; SQL Server can allocate up to the maximum amount allowed for its use and (in theory) releases memory when it is not in use. Static allocation creates a fixed memory space for SQL Server to work with -- no more, no less.
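Both behaviors are driven by a pair of server settings. As a minimal sketch (assuming SQL Server 2000-era sp_configure options; the 64 MB and 512 MB figures are hypothetical placeholders), dynamic allocation is a matter of giving the floor and ceiling different values:

    -- Make the advanced memory options visible
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Dynamic allocation: let SQL Server float between 64 MB and 512 MB
    EXEC sp_configure 'min server memory', 64;
    EXEC sp_configure 'max server memory', 512;
    RECONFIGURE;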

By default, SQL Server is set to dynamically allocate up to all of the available physical memory on the computer it's run on. Many administrators who see SQL Server's memory usage swelling over time may attribute this to a malfunction or a memory leak, but this is simply the program's design. SQL Server is meant to be run on its own computer whenever possible, and so makes use of all of the available memory for the best possible performance. If SQL Server is running on its own machine, then let it allocate and release memory as needed.
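To confirm that a swelling working set is design rather than a leak, compare the memory SQL Server currently holds against the amount it is aiming for. A minimal sketch, assuming the SQL Server 2000 master.dbo.sysperfinfo view (later releases expose the same counters through sys.dm_os_performance_counters):

    -- 'Total Server Memory' is what SQL Server holds right now;
    -- 'Target Server Memory' is what it would like to hold.
    SELECT counter_name, cntr_value AS kb
    FROM master.dbo.sysperfinfo
    WHERE object_name LIKE '%Memory Manager%'
      AND counter_name LIKE '%Server Memory%';

If the total tracks the target closely, the growth is simply the buffer cache doing its job.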

On a small-business-server box, where SQL may run in conjunction with other programs like IIS, administrators may be tempted to set SQL Server to a static memory size to keep it from eating up shared memory. This doesn't always pay off as intended. For one thing, it's all too easy to set the memory ceiling too low and not give SQL Server enough available memory for things like the transaction log or the query execution cache. The only way for SQL Server to get the memory it needs for those operations is to swap other pages out, which is a slow process.
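If you settle on a static size anyway, the usual technique is to pin the floor and the ceiling to the same value. A sketch, using a hypothetical 256 MB figure -- err on the high side for the reasons above:

    -- Fix SQL Server's memory at 256 MB: no growth, no shrinkage
    EXEC sp_configure 'min server memory', 256;
    EXEC sp_configure 'max server memory', 256;
    RECONFIGURE;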

There are a number of ways to calculate the best memory allocation. If you have a predictable user load, go by the maximum number of users: one formula Microsoft has recommended for the dynamic maximum is at least 4 MB plus 24 KB per user. If your user load varies widely -- as may be the case when you're serving a database connected to the public Internet through an IIS front end -- live statistics will serve you better than guesswork. During peak hours, gather the cache-hit ratio and page-faults-per-second performance counters for SQL Server. If these statistics show that SQL Server is swapping a great deal, increase the maximum memory size until the swapping drops off. Swapping once per second or more is a bad sign.
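As a worked example of the formula: 500 concurrent users works out to 4 MB + (500 x 24 KB), or roughly 16 MB -- a floor, not a target. For the live-statistics route, here is a sketch of checking the cache-hit ratio from T-SQL (again assuming sysperfinfo; the ratio counter must be divided by its base counter to yield a percentage):

    -- The buffer cache hit ratio is stored as two raw counters;
    -- divide one by the other to get the actual percentage.
    SELECT (a.cntr_value * 100.0) / NULLIF(b.cntr_value, 0)
        AS buffer_cache_hit_pct
    FROM master.dbo.sysperfinfo a
    JOIN master.dbo.sysperfinfo b
      ON b.counter_name = 'Buffer cache hit ratio base'
    WHERE a.counter_name = 'Buffer cache hit ratio';

Page faults per second are an operating-system counter (the Memory object in System Monitor) rather than a SQL Server one, so gather that half of the picture from Performance Monitor.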

Another alternative is to enable the "Reserve physical memory for SQL Server" option, which prevents SQL Server's allocated memory from being swapped out to disk even when other applications could use it. This can be a double-edged sword: it can either enhance performance considerably or detract from it. On systems with plenty of RAM to spare (1 GB or more) it's worth trying, but it should not be used where other critical processes may suddenly need large amounts of memory, since SQL Server will no longer give up any of its own to accommodate them. If SQL Server runs on its own box, this option is worth enabling to maximize performance.
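In SQL Server 2000, this Enterprise Manager checkbox corresponds to the 'set working set size' advanced option. A sketch of enabling it from T-SQL (it only makes sense alongside a fixed memory size, and takes effect after the service restarts):

    -- Lock SQL Server's working set in physical RAM.
    -- Pair this with min/max server memory set to the same value;
    -- the change takes effect after a restart.
    EXEC sp_configure 'set working set size', 1;
    RECONFIGURE;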


Serdar Yegulalp is the editor of the Windows 2000 Power Users Newsletter. Check out his Windows 2000 blog for his latest advice and musings on the world of Windows network administrators -- please share your thoughts as well!

This was first published in December 2004
