SharePoint 2010 Cookbook: Using ULS Logging

As many of you experienced in SharePoint 2007 (WSS and/or MOSS), even though ULS (Unified Logging Service) shipped with the platform, extra work was required to write directly to the log files or to monitor the health of your SharePoint farm servers. The next generation, SharePoint 2010, improves the situation considerably. Let’s explore.

Challenge:

Where can I read, and how can I write to SharePoint 2010 ULS logs?

Solution:

I. Reading SharePoint 2010 ULS Logs

SharePoint logs every event and operation to trace files. In SharePoint 2007, this information is written to the ULS log files under the 12 hive; SharePoint 2010 writes its event information under the 14 hive, {SharePoint Root}\LOGS, on each SharePoint server in the farm.

These files contain raw data which, depending on how Event Throttling was configured, traces everything happening in your SharePoint site. As a result, it can be challenging to find a specific entry in the files. SharePoint 2010 addresses this by assigning some uniqueness to each log entry. To quote MSDN, “This uniqueness, called a correlation token, is a GUID that is presented to the user when an error occurs.” When an error occurs, we simply copy this GUID and search the log files for it.

See the error message below:

Then, go back to your 14\LOGS folder, open the most recently modified file, and search for the entry by its correlation ID:

Now, you can analyze the entry and diagnose what caused the error.
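If you would rather script the lookup than search by hand, the sketch below shows one way to do it. This is my own minimal helper, not a SharePoint API; it assumes SharePoint 2010’s tab-delimited ULS columns (Timestamp, Process, TID, Area, Category, EventID, Level, Message, Correlation) with the correlation GUID in the last column, so adjust the index if your files differ.

```python
# Minimal sketch: filter ULS trace lines by correlation ID.
# Assumes the correlation GUID is the last tab-delimited field;
# adjust if your log layout differs.

def find_by_correlation(lines, correlation_id):
    """Return the rows whose Correlation column matches the given GUID."""
    wanted = correlation_id.strip().lower()
    matches = []
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        if len(fields) > 1 and fields[-1].strip().lower() == wanted:
            matches.append(fields)
    return matches
```

You would pass it an open file handle for the newest file under 14\LOGS along with the GUID copied from the error page.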

However, reading log files on a local drive doesn’t tell you the whole story. Someone may ask, “How can I monitor and maintain a healthy, stable farm? Do I have to watch the logs on each server?” The answer is “If you’re doing it properly, no.” Microsoft has improved SharePoint 2010 by providing a central logging location called the WSS_Logging database, which was not shipped with WSS 3.0/MOSS 2007. The WSS_Logging database contains the logging information from every server, which we can use to monitor what is happening across the farm.

By default, it stores information about ULS Logs, Event Logs, Selected Performance Monitor Counters, Blocking SQL Queries, SQL DMV Queries, Timer Job Usage, and so on.

Now, let’s spend a little time diving deeper to see what happens behind the scenes, and where the data comes from.

1. Open “Configure usage and health data collection” from SharePoint 2010 Central Administration:

2. Now let’s see what events are checked in the section, “Events to log”:

 

This section allows you to specify what events to trace on the local drive of each front-end Web server. The information is then imported to the logging database. Now you know where the data comes from, but what is happening behind the scenes? Keep reading!

Did you notice the “Log Collection Schedule” section? Click on it and open the link Microsoft SharePoint Foundation Usage Data Import.

SharePoint 2010 creates a timer job named “Microsoft SharePoint Foundation Usage Data Import” to collect the log files located under the 14 hive on each server and copy the events you specified into your central logging database (which can later be exposed for other purposes, such as reporting).

You can even schedule this timer job based on the load patterns of your server:

Now you understand where the data comes from and what happens behind the scenes, but the story doesn’t end there. Let’s continue!

II. Writing to SharePoint 2010 ULS Logs

Writing to log files is always a best practice. Whether it’s for diagnostic purposes or error handling, logging allows you to track the health of your applications when things don’t work as they were designed to. Fortunately, Microsoft has made life easier by providing additional logging and debugging capabilities in SharePoint 2010.

To write to ULS log files, we simply use the SPDiagnosticsService class:

try
{
    throw new Exception("Test ULS logs");
}
catch (Exception ex)
{
    // Write the exception to the ULS trace log under an ad hoc category.
    SPDiagnosticsService.Local.WriteTrace(0,
        new SPDiagnosticsCategory("My Category", TraceSeverity.Unexpected, EventSeverity.Error),
        TraceSeverity.Unexpected, ex.Message, ex.StackTrace);
}

Everything seems OK, but there is one minor shortcoming in the trace log: the Product column of the entry reads “Unknown.”

That is confusing when many custom components write to the same log file at the same time; we can’t determine which entry belongs to which component.

SharePoint 2010 doesn’t make it easy for us to provide the name of the product that logged the message; we need a full-trust farm solution to interact with SPDiagnosticsService.

Therefore, in order to write the product name to the ULS log files, we have to create a custom logging service that inherits from the SPDiagnosticsServiceBase class.

using System;
using System.Collections.Generic;
using Microsoft.SharePoint.Administration;

namespace Bamboo.WebParts
{
    class USLCustomLogService : SPDiagnosticsServiceBase
    {
        private const string PRODUCT_NAME = "Bamboo_PMC";

        private USLCustomLogService() : base("Custom ULS Logging Service", SPFarm.Local) { }

        // Lazily created singleton instance of the service.
        private static USLCustomLogService logService;
        public static USLCustomLogService LogService
        {
            get
            {
                if (logService == null)
                {
                    logService = new USLCustomLogService();
                }
                return logService;
            }
        }

        // Register the product name as a diagnostics area, along with the
        // categories (and default trace/event levels) it supports.
        protected override IEnumerable<SPDiagnosticsArea> ProvideAreas()
        {
            List<SPDiagnosticsArea> areas = new List<SPDiagnosticsArea>
            {
                new SPDiagnosticsArea(PRODUCT_NAME, new List<SPDiagnosticsCategory>
                {
                    new SPDiagnosticsCategory("BB_Info", TraceSeverity.Verbose, EventSeverity.Information),
                    new SPDiagnosticsCategory("BB_Error", TraceSeverity.Unexpected, EventSeverity.Warning),
                })
            };
            return areas;
        }

        public static void DisplayInfo(string methodName, string message)
        {
            SPDiagnosticsCategory category = LogService.Areas[PRODUCT_NAME].Categories["BB_Info"];
            LogService.WriteTrace(0, category, TraceSeverity.Verbose, methodName + "::" + message);
        }

        public static void DisplayError(string methodName, string errorMessage)
        {
            SPDiagnosticsCategory category = LogService.Areas[PRODUCT_NAME].Categories["BB_Error"];
            LogService.WriteTrace(0, category, TraceSeverity.Unexpected, methodName + "::" + errorMessage);
        }
    }
}

Now, let’s test the custom logging service with the following example:

protected override void OnInit(EventArgs e)
{
    base.OnInit(e);
    try
    {
        throw new ApplicationException("Test my ULS custom log service.");
    }
    catch (ApplicationException ex)
    {
        USLCustomLogService.DisplayError("OnInit", ex.Message);
    }
}

Next, go to the 14\LOGS folder, open the most recently modified log file, and search for the key “Bamboo_PMC” to see whether the entry was traced.

As you can see, using a custom logging service, we can log the name of our product, which makes it easier to track our own messages and to locate an entry quickly in a mass of text.

Note that when implementing a custom logging service, we have to provide the list of categories and logging levels supported by the service. We do this by overriding the ProvideAreas method. In the example above, I implemented two categories, but you can add as many as necessary.
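To make the throttling implied by those levels concrete, here is an illustrative model (my own sketch, not SharePoint’s code): each category has a configured trace level, and a message is written only when it is at least as severe as that level.

```python
# Illustrative model of ULS trace throttling (not SharePoint's code):
# a category has a configured TraceSeverity threshold, and a message is
# traced only if it is at least as severe as that threshold.
# Rank 0 is the most severe level.
SEVERITY_RANK = {"Unexpected": 0, "Monitorable": 1, "High": 2, "Medium": 3, "Verbose": 4}

def should_trace(category_level, message_level):
    """True when the message clears the category's throttling threshold."""
    return SEVERITY_RANK[message_level] <= SEVERITY_RANK[category_level]
```

Under this model, the BB_Error category above (defaulting to Unexpected) would admit only Unexpected messages, while BB_Info (defaulting to Verbose) would trace everything.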

Now we know exactly how SharePoint 2010 ULS can help us measure the stability of our SharePoint farm servers and how to use it effectively and efficiently. I hope that you found this article useful!

See Also: