TechEd 2010: Fine Tuning Your SharePoint Server 2010 Environment

I chose Fine Tuning Your Microsoft SharePoint Server 2010 Environment as my first session of the day, a 300-level presentation by Shannon Bray (Technical Architect with Planet Technologies) and Mike Watson (a Microsoft alumnus, and now owner and Principal Consultant with SeriousLabz).  The starting point for their session was the acknowledgment that SharePoint 2010 is awesome but, as Mike said, "It's very easy to get bit by this new functionality [since] this functionality comes at a cost: performance."

(L to R) Shannon Bray and Mike Watson presenting at TechEd 2010

Offering their expert advice on how to minimize disruptions to the performance of your SharePoint environment, Shannon led off with recommendations for optimizing the front end, stating that "If you're in SharePoint, performance is your job."  He surveyed some of the available latency tools before demonstrating two of them: the JavaScript Profiler built into IE8's developer tools and Fiddler 2, a Web debugging proxy.  Using a team site pre-loaded with lots of pictures and videos (to put some load on the system) for his demo, Shannon walked through several features of the developer tools, including the View Image Report, which shows each image's actual file size and file type, along with its actual and adjusted height and width, and more.  Shannon then demonstrated Fiddler 2, which intercepts requests to show the cost of each one, returning both bytes sent and bytes received among the actual performance data for a given page load.
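
A quick way to sanity-check the same two numbers Fiddler surfaces (how long a request took and how many bytes came back) is a small script against a single page.  This is not one of the tools shown in the session, just an illustrative sketch; the URL is a placeholder, and authentication against a real SharePoint site is omitted.

```python
# Minimal sketch: time one page request and count the bytes received.
# The URL is a placeholder; a real SharePoint site would also need
# NTLM/Kerberos authentication, which is omitted here.
import time

import requests

TEAM_SITE_URL = "http://sharepoint.example.com/sites/teamsite/default.aspx"  # placeholder

def measure(url: str) -> None:
    start = time.perf_counter()
    response = requests.get(url)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(url)
    print(f"  status:         {response.status_code}")
    print(f"  wall time:      {elapsed_ms:.0f} ms")
    print(f"  bytes received: {len(response.content):,}")

if __name__ == "__main__":
    measure(TEAM_SITE_URL)
```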

Moving on to the subject of throughput tools, Shannon demonstrated two related tools, both of which are built into SharePoint 2010 out of the box: the Logging Database and the Developer Dashboard.  Shannon explained that with the Logging Database, "You can set queries against the database and execute [those queries]," and that with the Developer Dashboard, "You get to see a lot of the real costs of running a particular page," seeing all of the raw data associated with having rendered the page.  Most importantly, as Shannon explained, the Developer Dashboard allows you to "take a look at the costs of the page and start isolating your issues."
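
As a rough illustration of setting queries against the logging database, the sketch below pulls the slowest recorded requests out of it.  The server name is a placeholder, and the view and column names (dbo.RequestUsage, Duration) are assumptions about the default WSS_Logging schema, so verify the actual object names in your farm before relying on them.

```python
# Hypothetical query against the SharePoint 2010 logging database.
# Assumptions: default database name WSS_Logging, a dbo.RequestUsage view,
# and a Duration column -- check these against your own farm.
import pyodbc

CONN_STR = (
    "DRIVER={SQL Server};"
    "SERVER=SQLSERVER01;"      # placeholder SQL Server instance
    "DATABASE=WSS_Logging;"    # default logging database name (assumption)
    "Trusted_Connection=yes;"
)

def slowest_requests(top: int = 20):
    """Return the slowest requests recorded by usage logging, longest first."""
    query = f"SELECT TOP {top} * FROM dbo.RequestUsage ORDER BY Duration DESC"
    conn = pyodbc.connect(CONN_STR)
    try:
        cursor = conn.cursor()
        cursor.execute(query)
        columns = [col[0] for col in cursor.description]
        return [dict(zip(columns, row)) for row in cursor.fetchall()]
    finally:
        conn.close()

if __name__ == "__main__":
    for row in slowest_requests():
        print(row)
```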

After walking through some of the available means of "tweaking the system," Shannon began his final front end optimization demo by opening the web.config file and enabling the BLOB cache, cautioning that you should "Always make a copy of your Web Config" before making any changes.  Shannon explained that by enabling the BLOB cache (in essence, flipping the flag in web.config from "false" to "true"), you're able "to eliminate another trip to SQL Server."  Shannon then cleared his cache before requesting the page again, and showed the results in Fiddler 2 where, voila, with the BLOB cache enabled, "a lot of the bytes received for these are a lot smaller now than they were before."
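
For readers who want to see what that change looks like, here is a minimal sketch of the "back up web.config, then flip the flag" step.  The file path is a placeholder for one Web application's web.config; the BlobCache element and its "enabled" attribute are the standard SharePoint 2010 settings, but rewriting the file with ElementTree drops comments and reformats the XML, so treat this as an illustration of the change rather than a production editing tool.

```python
# Sketch: back up web.config, then set <BlobCache enabled="true">.
# The path is a placeholder; rewriting with ElementTree strips comments,
# so in practice you would edit the file by hand or with a config tool.
import shutil
import xml.etree.ElementTree as ET

WEB_CONFIG = r"C:\inetpub\wwwroot\wss\VirtualDirectories\80\web.config"  # placeholder

def enable_blob_cache(path: str) -> None:
    shutil.copy2(path, path + ".bak")              # always make a copy first
    tree = ET.parse(path)
    blob_cache = tree.getroot().find("./SharePoint/BlobCache")
    if blob_cache is None:
        raise RuntimeError("No <BlobCache> element found in " + path)
    blob_cache.set("enabled", "true")              # shipped as "false" out of the box
    tree.write(path, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    enable_blob_cache(WEB_CONFIG)
```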

Mike took over at this point to address the optimization of your back end, focusing on SQL Server.  Beginning with a few rules of thumb, Mike said that "the more processors, the better (4-8 cores)," "the more memory, the better (16-128GB)," and "the more disks, the better (>2,000 IOPS)."  Mike then segued to his primary recommendation for back end optimization, following on Shannon's enabling of the BLOB cache: "You want to minimize calls to BLOBs."  Noting that SharePoint 2010 can now store BLOBs remotely, Mike said that since "BLOBs don't need to have high throughput performance … getting the BLOBs out of SQL allows you to focus on performance where it counts: rendering the schema of SharePoint."
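
To put the ">2,000 IOPS" figure in context, here is a rough worked example.  The arithmetic and the per-disk numbers are general storage sizing rules of thumb rather than anything from the session: a 15K RPM spindle is commonly rated at roughly 175-200 IOPS, and writes carry a RAID penalty (about 2x for RAID 10, 4x for RAID 5).

```python
# Back-of-the-envelope IOPS estimate for a disk array.
# Per-disk IOPS and RAID write penalties are common rules of thumb,
# not figures from the session.
def effective_iops(disks: int, iops_per_disk: float,
                   read_fraction: float, write_penalty: float) -> float:
    raw = disks * iops_per_disk
    reads = raw * read_fraction
    writes = raw * (1.0 - read_fraction) / write_penalty
    return reads + writes

if __name__ == "__main__":
    # Example: 14 x 15K disks in RAID 10 with an assumed 70/30 read/write mix.
    print(f"{effective_iops(14, 180, 0.7, 2):.0f} effective IOPS")  # ~2,142, just over the rule of thumb
```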

Mike then moved into his demo.  Having already uploaded a WSP file to a SharePoint library, he opened SQL Server Management Studio to show everything in the content database that is stored as a BLOB.  Your goal should be to get these out of SQL Server, and this is made possible through remote BLOB storage (RBS), which "takes a BLOB and moves it to some secondary system."  Mike explained that a provider is necessary to accomplish this, and SQL Server 2008 R2 provides a built-in FILESTREAM provider that enables this capability.  Mike then walked through the steps necessary to set up such a process, including running an MSI file on the front end (via the command line) and enabling the capability.  Mike also mentioned that there is a blog post on TechNet which walks through the steps necessary to do this, and that post is available here.
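
For the first part of that demo (the look in SQL Server Management Studio at what is stored as a BLOB), a rough equivalent is the script below.  The server and database names are placeholders, and the table and column names (dbo.AllDocStreams, Content) are assumptions about the SharePoint 2010 content database schema; querying a content database directly is not supported in production, so treat this as lab-only inspection.

```python
# Lab-only sketch: how many BLOBs are sitting in a content database, and how big?
# dbo.AllDocStreams and its Content column are assumed schema details;
# querying content databases directly is unsupported in production.
import pyodbc

CONN_STR = (
    "DRIVER={SQL Server};"
    "SERVER=SQLSERVER01;"      # placeholder SQL Server instance
    "DATABASE=WSS_Content;"    # placeholder content database name
    "Trusted_Connection=yes;"
)

def blob_footprint() -> None:
    query = ("SELECT COUNT(*), SUM(CAST(DATALENGTH(Content) AS bigint)) "
             "FROM dbo.AllDocStreams")
    conn = pyodbc.connect(CONN_STR)
    try:
        count, total_bytes = conn.cursor().execute(query).fetchone()
        total_bytes = total_bytes or 0
        print(f"{count} BLOBs totaling {total_bytes / (1024 * 1024):.1f} MB in SQL Server")
    finally:
        conn.close()

if __name__ == "__main__":
    blob_footprint()
```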

Mike explained that, once the FILESTREAM provider has been installed and remote BLOB storage enabled, every new BLOB written to SQL Server will automatically be passed off to the specified storage location.  Mike did point out, however, that there are some limitations to be aware of, such as the fact that you can't use database mirroring and you can't specify multiple BLOB locations per database.  That said, if you get the BLOBs out of your SQL database, as Mike summed up, "it allows you to do some very important stuff."
