Getting more from SharePoint, Part 3: Metrics, content processes and governance policies

The prior two installments of this series on SharePoint discussed architecture and user adoption. This part will review the role of governance and content processes in SharePoint and discuss ways of measuring results to create a virtuous cycle of improvement.

There are three main considerations for governance and metrics in SharePoint implementations:

  • metrics to gauge maturity, success, adoption, compliance and progress in your program;
  • mechanisms for managing content across the full lifecycle including compliance with standards for tagging; and
  • governance processes and policies to control site and content ownership.

A chief goal of governance is to keep SharePoint from becoming a dumping ground by separating collaboration spaces from content intended for reuse and by enforcing standards for curation and tagging.

Metrics—success, adoption, compliance and progress

What is measured can be managed. Without objective ways to measure how well a program is functioning, it is not possible to course-correct or improve it. In the world of SharePoint, or any content program for that matter, it is essential to monitor how things are going so changes can be made to serve the needs of the program.

Maturity
The first metric to consider is overall maturity and capability. Maturity in the SharePoint space can be considered across multiple dimensions, from the level of intentionality and structure of a process to the formal presence and level of sophistication of governing bodies. Consider a maturity model in which each dimension is mapped to a set of capabilities and characteristics that indicate a general level of maturity. Based on the overall characteristics of those processes (reflected in the rating for each dimension), the maturity of the organization’s SharePoint implementation can be measured at the start of a program and throughout its life. As processes are put in place, maturity increases. That snapshot in time is a good indicator of the state of the program and can be used as a general measure of success.
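As a rough illustration of how dimension ratings can be rolled up into a single maturity score, the sketch below averages ratings across a handful of dimensions; the dimension names and the 1-to-5 scale are assumptions made for illustration, not part of any formal model.

    # Hypothetical maturity rollup: each dimension is rated on an assumed
    # 1 (chaotic) to 5 (choreographed) scale; the overall score is the average.
    ratings = {
        "search": 2,
        "user experience": 3,
        "content management": 2,
        "governance": 1,          # no governing bodies or policies yet
        "tagging/metadata": 2,
    }

    overall = sum(ratings.values()) / len(ratings)
    print(f"Overall maturity: {overall:.1f} / 5")

    # Re-running the same rollup after interventions gives a comparable
    # snapshot, so progress can be tracked over the life of the program.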

Because SharePoint success is indicated by the ability to locate information (“findability”), and findability is the result of a combination of factors, it is possible to describe those factors in terms of existing practices and processes as well as to benchmark the level of functionality or activity (for example, content quality measures, the presence of a process or the measure of the effectiveness of that process). One governance maturity measure is whether any governing bodies or policies are in place. Another might be the participation levels in governance meetings.

The table (Download PDF or see page 15, KMWorld, October 2016, Vol.25 Issue 9) shows maturity ratings for selected SharePoint functions, including search, user experience and content management. The ratings range from chaotic to choreographed, with each higher level indicating greater proficiency and coordination with other dimensions.

Use cases and usability
Maturity alone does not equate to value. A second important measure of value is overall usability based on use cases for specific classes of users. Use cases should be part of every content and information program, and there should be a library of them available for testing. Use cases are tasks that are part of day-to-day work processes and support specific business outcomes. At the start of the program, assessing users’ ability to complete their job tasks, which requires being able to locate content, provides a practical baseline score to compare against later interventions.

User satisfaction is a subjective measure of acceptance. Although subjective, if it is measured in the same way before an intervention or redesign and then after, the results will show a comparative improvement or decline in perceived usability. Perception is shaped by more than design; training and socialization can also have a large impact on user satisfaction.

Adoption
One simple metric for adoption is the volume of e-mail messages containing attachments compared with the volume containing links. As users post their files on SharePoint and send links within messages rather than e-mailing attachments, they are clearly demonstrating use of the environment. Taking that metric as a baseline and then tracking it periodically on a department-by-department basis as well as enterprisewide provides a valuable proof point regarding adoption.
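As a minimal sketch of how that baseline and follow-up comparison might be computed, the example below works out the share of link-based messages by department; the departments and counts are hypothetical.

    # Hypothetical per-department counts of messages with file attachments
    # versus messages containing SharePoint links, sampled at two points in time.
    baseline = {"Sales": (1200, 300), "Legal": (800, 150)}   # (attachments, links)
    current  = {"Sales": (700, 900),  "Legal": (650, 400)}

    def link_share(counts):
        attachments, links = counts
        return links / (attachments + links)

    for dept in baseline:
        before = link_share(baseline[dept])
        after = link_share(current[dept])
        print(f"{dept}: link share {before:.0%} -> {after:.0%}")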

Other adoption metrics include the number of collaboration spaces or sites that are set up and actively managed, the numbers of documents uploaded or downloaded, the degree of completeness of metadata, the accuracy of tagging and the number of documents being reviewed based on defined lifecycles.
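One of those measures, the degree of completeness of metadata, lends itself to a simple calculation. The sketch below assumes document properties have been exported as records and that the required fields come from the organization’s tagging standard; the field names and values are illustrative.

    # Hypothetical export of document metadata; None or "" means the field is untagged.
    documents = [
        {"title": "Q3 proposal", "document_type": "Proposal", "owner": "jsmith", "review_date": None},
        {"title": "Onboarding guide", "document_type": None, "owner": "", "review_date": "2016-10-01"},
    ]

    REQUIRED_FIELDS = ["document_type", "owner", "review_date"]  # assumed tagging standard

    def completeness(doc):
        filled = sum(1 for field in REQUIRED_FIELDS if doc.get(field))
        return filled / len(REQUIRED_FIELDS)

    average = sum(completeness(d) for d in documents) / len(documents)
    print(f"Average metadata completeness: {average:.0%}")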

It is important to have self-paced tutorials regarding your particular environment and to monitor the number of people who have completed this kind of training. Participation in “lunch-and-learns,” webinars or conference calls on the use of the environment is another engagement metric that can be tracked.

Socialization includes building a narrative of success by sharing stories about the value of knowledge contained in knowledgebases, problems being solved and collaboration that leads to new sales or cost savings. Publicizing new functionality, along with examples showing how it can be used in day-to-day work processes, will help people see the positive aspects of the program and help overcome the inevitable challenges of any new deployment. Those successes need to be communicated through different mechanisms and by emphasizing themes appropriate to the audience and process. An application for executives may not resonate with line-of-business users.

Alignment with business outcomes
A more challenging but also more powerful approach to metrics is to link SharePoint functionality to a business process that can be impacted and measured. One example is a proposal process: salespeople can sell more when they are able to turn proposals around more quickly, which allows more selling time or reduces the cost of highly compensated subject matter experts. Employee self-service knowledgebases can be linked to help desk call volume. Those metrics are more challenging because they require a model that predicts the impact of one action on another, or at least an understanding of the causality involved, but they can also be a strong indication of success.
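A back-of-the-envelope version of the proposal example might look like the sketch below; every figure in it is an assumption supplied for illustration, not a benchmark.

    # Assumed inputs: all figures are illustrative, not benchmarks.
    proposals_per_year = 200
    hours_saved_per_proposal = 4      # faster turnaround attributed to content reuse
    sme_hourly_cost = 150             # fully loaded cost of a subject matter expert
    win_rate_lift = 0.01              # assumed improvement from faster responses
    average_deal_value = 50_000

    cost_savings = proposals_per_year * hours_saved_per_proposal * sme_hourly_cost
    added_revenue = proposals_per_year * win_rate_lift * average_deal_value

    print(f"Estimated SME cost savings: ${cost_savings:,.0f}")
    print(f"Estimated added revenue: ${added_revenue:,.0f}")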
