Fresh Links Sundae – April 29, 2012 Edition

Fresh Links Sundae encapsulates some pieces of information I have come across during the past week. They may or may not be ITSM related. Often they are from people whose work I admire, and I hope you will find something of value.

Managing the Business of IT Needs More Than Just Good Project Management Robert Stroud discussed the three key elements of the “Business of IT” (Portfolio Analysis, Financial Transparency, and Performance Management) and why it is critical to execute them well. (CA on Service Management)

End users: should we put them in padded cells? David Johnson discussed the term “end user” and why people-oriented considerations are important in any infrastructure design decision. (Computerworld UK)

Do you have a people strategy? Seth Godin argued that strategies for communication media such as email, web, and mobile do not address the most important strategy of all. (Seth’s Blog)

Help Desk 101 – 10 Things to Consider for your EMAIL ONLY Support Team Joshua Simon gave ten solid suggestions on running an email-only support operation. (ITSM Lens)

What is Service Management? Rob England gave a detailed run-down of the service management concepts using a railway example. (The ITSM Review)

ITSM Customer Relationships: Mad Customer Disease Julie Montgomery talked about ways to help customers get things done effectively, efficiently, economically, and equitably so they get value for money. (Plexent)

SDITS 12 – A New Beginning? James Finister shared his recent experience at SDITS 12. (Core ITSM)

The cult of innovation Rob England discussed why innovation for its own sake is counter-productive and why instead we need to concentrate on the efficiency and effectiveness of what we do for the organization. (The IT Skeptic)

You Don’t Need This “Recovery” Umair Haque discussed why we might be in what he terms a eudaimonic depression and suggested what to do about it. (Harvard Business Review)

Overcome the Addiction to Winning Marshall Goldsmith discussed the importance of not having to win at everything, including the meaningless or trivial stuff. (Marshall Goldsmith)

COBIT 5 and What You Can Leverage for ITSM Work

ISACA recently released COBIT 5, a governance and management framework that can help organizations create optimal value from IT. If you are familiar with COBIT, hopefully you have already downloaded the framework documents. If you are not familiar with COBIT or ISACA, follow this link to get more information on the framework. In this post, I will outline some of the useful information you can leverage from COBIT to help you in your ITSM journey, based on my early perusal of the framework.

Good Practices

For a number of processes we use in ITSM, there is a corresponding one in COBIT. For example, DSS02 in COBIT, “Manage Service Requests and Incidents,” maps approximately to the Incident Management and Service Request Management processes in ITIL. Within DSS02, COBIT breaks the process down further into seven management practices, each with a number of associated activities. If you want to implement or improve an ITIL Incident Management process for your organization and wonder what is considered good practice, these activities can provide valuable insight for your effort. Tailor those activities further into exactly what you would do in your organization and you have a list of good practices for your shop.

Metrics

For each process, COBIT 5 outlines various IT-related and process goals that the process contributes directly toward. Next to each goal, COBIT lists recommended metrics for measuring progress against those goals. Of course, depending on your organization and the availability of certain service management data, you will have to fine-tune those metrics for your environment. Still, the list offers an excellent starting point for defining the metrics you plan to capture.
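
To make that tailoring step concrete, here is a minimal sketch in Python. The goals, metrics, and data sources below are entirely hypothetical examples in the spirit of a goal-to-metric mapping; they are not quoted from COBIT 5. The idea is simply to start from the framework’s recommended metrics and prune to what your environment can actually measure today.

    # Hypothetical goal-to-metric mapping; names are illustrative, not from COBIT 5.
    goal_metrics = {
        "Incidents are resolved within agreed service levels": [
            "Percentage of incidents resolved within SLA target",
            "Mean time to restore service (MTRS)",
            "Number of incidents reopened after closure",
        ],
        "Users are satisfied with request fulfilment": [
            "User satisfaction score for fulfilled requests",
            "Percentage of requests fulfilled within agreed time",
        ],
    }

    # Metrics your current tooling can actually produce (again, hypothetical).
    measurable = {
        "Percentage of incidents resolved within SLA target",
        "Mean time to restore service (MTRS)",
        "User satisfaction score for fulfilled requests",
    }

    # Keep the candidate metrics you can capture now; flag the gaps for later.
    for goal, metrics in goal_metrics.items():
        print(goal)
        for metric in metrics:
            status = "capture now" if metric in measurable else "data source needed"
            print(f"  - {metric} [{status}]")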

RACI Chart

For each process, COBIT 5 has a RACI chart that shows who is responsible, accountable, consulted, or informed for certain key management practices within the process. Granted, the RACI chart can be high-level and somewhat generic. It nevertheless offers a good starting point for those who are working on a process design exercise or simply want to better define the roles and responsibilities within their environment.
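
As a rough illustration, a RACI chart is easy to capture and sanity-check in a few lines of code. The sketch below uses Python with made-up practice names and roles rather than the actual COBIT 5 chart, and it checks a common RACI rule: each practice should have exactly one accountable role.

    # Hypothetical RACI chart; practice names and roles are illustrative only.
    # R = Responsible, A = Accountable, C = Consulted, I = Informed.
    raci = {
        "Define incident and request classification schemes": {
            "Service Desk Manager": "A",
            "Incident Manager": "R",
            "Process Owner": "C",
        },
        "Record, classify and prioritise incidents": {
            "Incident Manager": "A",
            "Service Desk Analyst": "R",
            "Problem Manager": "I",
        },
    }

    # Sanity check: every practice should have exactly one accountable role.
    for practice, assignments in raci.items():
        accountable = [role for role, code in assignments.items() if code == "A"]
        if len(accountable) == 1:
            print(f"OK: '{practice}' is accountable to {accountable[0]}")
        else:
            print(f"Check '{practice}': found {len(accountable)} accountable roles")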

In summary, I must say I like what I have seen from COBIT 5 so far because the framework offers a great deal of good information to use in your ITSM work. I definitely recommend downloading the new framework and checking it out further. On Tuesday, April 17, 2012, Debbie Lew of Ernst & Young and Robert Stroud of CA hosted an education session on COBIT 5 during the ISACA Los Angeles Chapter’s annual spring conference. Normally the presentation deck is available only to attendees of the conference. Ms. Lew has graciously given me permission to make the presentation deck available via this blog. Check out their deck for more information on COBIT 5, and feel free to post questions and comments.

DIY Process Assessment Wrap-up – Constructing the Report and Presenting the Results

This is the concluding post in the DIY Process Assessment series. In the previous posts, we went from lining up the approaches and resources, to planning various aspects of the assessment, to running the assessment and collecting the data, and finally to making sense of the data collected. The last major steps are to write up the report and present the results to the stakeholders.

Writing up the Report

The final report should summarize the assessment effort, provide solid findings on the current maturity level, and suggest both near-term and long-term actions for improvement. Generally, the assessment report will contain the following elements:

  • Executive Summary
    • Short summary of project background and problem definition
    • Brief description of the assessment methodology used
    • Summary of maturity scores for each process assessed
    • Discussion on the integration between processes and other comparative benchmark information
  • Project Scope – mention the processes and organization units covered under the assessment
  • Overall conclusion, recommendations, and next steps
    • Did the conclusions appear to be logically drawn from the data gathered?
    • Did the results confirm the perceived problem?
    • Are the recommendations aligned logically with the conclusions?
    • A roadmap showing the sequence of actions and the dependencies between them
  • Analysis of the Processes (for each process)
    • Scores or maturity levels by process
    • Process goals, intended outcomes, and perceived importance
    • Process-specific conclusions and recommendations
  • Organizational Considerations
    • Any noteworthy factors encountered during the assessment that could provide more insight or context on the conclusions
    • Any other organization-related factors that should be taken into account when implementing the recommendations or actions

Presenting the Results

When presenting the results, keep the following suggestions in mind.

  • Depending on your organization, you may use different types of meetings or communication vehicles to present the results. At a minimum, I feel the project sponsor should host one presentation with all assessment participants and the senior leadership team.
  • Hold additional meetings with the process owners to discuss the results and to identify quick-wins or other improvement opportunities.
  • Anticipate questions and how to address them, especially the ones that could be considered emotional or sensitive due to organization politics or other considerations.

It took seven posts in total to cover this process assessment topic, and I feel we have only covered the subject at a somewhat rudimentary level. There are more areas to drill into in depth, but everything we have covered so far makes a very good starting point. As you can see from the steps involved, an assessment is not a trivial effort. Before you go off and start planning your next assessment, some people might ask one important question: “why bother?” I can think of a few good reasons for taking the time to plan and do the assessment.

  1. Most organizations do not have their processes at the minimally effective level needed to support their business or operations. They want to fix or improve those processes, and a process assessment can help identify where things might be broken and need attention. The problem definition is a key area to spend some effort on.
  2. Many organizations undertake process improvement projects and need some way to measure progress. A process assessment helps not only to establish the initial benchmark but also to provide subsequent benchmarks that can be used to gauge progress. A lot of us measure by gut feel. Intuition and gut feel can sometimes be right about these things, but having a more concrete measurement is much better.
  3. Along the same line of reasoning, I cannot think of a better way to show evidence of process improvement or ROI to your management or project sponsor than with a formal assessment. Many people run process improvement initiatives as grass-roots or informal efforts with internal funding due to organizational realities. At some point, you may find yourself needing to go to management and ask for a real budget for time, people, and tools. Having a structured approach to show the potential contributions or ROI down the road can only help your cause.

In conclusion, process assessment can be an effective way to understand where your process pain points are, how to address those pain points, and how far your organization has come in terms of improvement. Most meaningful measurements take two or more data points to calculate a delta. Conducting process assessments periodically can provide the data points you need to measure your own effectiveness and to justify further improvement work.

Links to other posts in the series

Fresh Links Sundae – April 22, 2012 Edition

Fresh Links Sundae encapsulates some pieces of information I have come across during the past week. They may or may not be ITSM related. Often they are from people whose work I admire, and I hope you will find something of value.

Why a “rules based” approach to Change Management always fails Glen Taylor discussed why rule-based change management practices have limited effectiveness and why a risk-based approach is the better target. (ITSM Portal)

COBIT 5 Miscellany Geoff Harmer gave his initial impression of COBIT 5 and how it differs from the previous version of the framework. (ITSM Portal)

IT Metrics Planning: The Business Meeting Julie Montgomery suggested ways for IT and business to work together and come up with metrics that can help both organizations. (Plexent Blog)

At the Helm of the Data Refinery: Key Considerations for CIOs Perry Rotella argued that the “data refinery” is the new strategic operating model for companies and that the CIO is the executive best positioned to lead the enterprise forward in this new model. (Forbes)

5 Ways to Access the Power of the Hive for ITIL Initiatives Jeff Wayman discussed ways to leverage a diverse group of people for the benefit of ITSM initiatives. (ITSM Lens)

7 Benefits of Using a Known Error Database Simon Morris gave an in-depth discussion of the KEDB and suggested ways to extract value and benefits from it. (The ITSM Review)

The ABC of ITSM: Why Building The Right Process Matters Ben Cody discussed the human aspect of ITSM and why a positive dedication to “process” should be at the heart of how organizations solve complex IT services challenges. (The ITSM Review)

How to Make Your Company More Like Apple Daniel Burrus talked about how companies, large or small, can build their future by competing on things other than price. (Strategic Insights Blog)

An Asshole Infested Workplace — And How One Guy Survived It Surviving a toxic work environment is not a trivial undertaking – you do what you can and have to do without spreading the toxic atmosphere further. (Bob Sutton)

How to fix IBM in a week Robert Cringely wrote a long series of blog entries discussing what is going on within IBM, what is wrong, and how to fix it, maybe. (I, Cringely)

ISACA Los Angeles Chapter Spring Conference, Week of April 16, 2012

Apologies to the readers of the blog.

There will be no posting this week due to my volunteer work with the ISACA Los Angeles Chapter Spring Conference Committee. ISACA LA is celebrating the 40th anniversary of its annual three-day conference. This education event covers fundamental information systems auditing concepts and emerging technology risks. The conference also provides rich opportunities for attendees to network with other governance, assurance, and security professionals. The Spring Conference has become the leading IT governance, security, and assurance event in the Southern California area. The 2012 conference attracted over 300 participants.

I have been working with the Spring Conference organizing committee for the last nine years. The committee has always been made up of dedicated volunteers who give their time and superb effort to deliver a professional-quality event for the benefit of the Chapter’s members. I have nothing but great things to say about this group of people, whom I have come to know and respect.

I will be back next week. In the meantime, if you are curious about the ISACA LA Spring Conference, head over to the Spring Conference website and check it out.

Fresh Links Sundae – April 15, 2012 Edition

Fresh Links Sundae encapsulates some pieces of information I have come across during the past week. They may or may not be ITSM related. Often they are from people whose work I admire, and I hope you will find something of value.

5 Ways to Fix Your High Value Jerks Susan Cramm suggested strategies to deal with “talent jerks” who deliver results yet intimidate their colleagues and reports in the organization. (Valuedance)

Moving IT into the unknown with boldness, courage and strength to drive business value Robert Stroud discussed the importance of transforming IT from followers of the business to equal partners sharing in the common goals of the organization’s mission. (CA on Service Management)

Man Alive, It’s COBIT 5: How Are You Governing And Managing Enterprise IT? With the release of COBIT 5, Stephen Mann outlined his initial thoughts on the new framework from ISACA. (Forrester Blogs)

A Change Management Strategy for Clouds in Azure Skies Jeff Wayman discussed five Change Management strategies that can promote success in your cloud operations. (ITSM Lens)

Meet your iceberg. Now in 3D Roman Jouravlev explained why selling IT processes to business customers is, in most cases, pointless and doomed from the start. (ITSM Portal)

Leadership Encourages Hope Bret Simmons discussed what leaders can do to give their followers hope, in his words, the belief that one knows how to perform and is willing to direct and sustain consistent effort to accomplish goals that matter. (Positive Organizational Behavior)

10 Predictions from Experts on Big Data will Impact Business in 2012 Ten Big Data predictions from experts at Forrester, Gartner, Ovum, O’Reilly, and others discussed how the Big Data realm will develop and impact business. (Evolven Blog)

Too much information Barclay Rae talked about the ‘inconvenient truth’ that conventional IT reporting is, for the most part, of little business or IT management value. (BarclayRae Website)

Reducing Negativity in the Workplace Marshall Goldsmith discussed a simple, yet effective strategy to reduce “whining time.” (Marshall Goldsmith)

The Great Collision Umair Haque talked about a Great Collision in which the future we want is at odds with the present we choose, and what to do about it. (Harvard Business Review)

DIY Process Assessment Execution – Analyzing Results and Evaluating Maturity Levels

In the previous post, I gave an example of a process assessment survey. Using a one-to-five scale model, you can arrive at a weighted (or simple average) score for a given process after collecting the data from the assessment participants. The more data points (or survey results) you can collect, the more realistic (and hopefully accurate) the process maturity score will be. Before you take the process maturity scores and start making improvement plans, I think there are two other factors to consider when analyzing and evaluating the overall effectiveness of your processes. The two additional factors are:

  • Perceived importance of the process:

In addition to measuring the maturity level of a process, I think it is also important to measure how your customers and the business perceive the importance of the process. The information gained from measuring perceived importance helps when gauging and prioritizing the investments that should go into a process’s improvement plan. For example, a process with a low maturity level but perceived to be of high importance to the business may be a good candidate for serious, well-planned investment. On the other hand, a process that has a high maturity level in IT’s eyes but is perceived to be of lower importance to the business may signal that you should take a further look at the current investment level and see whether some scaling back or reallocation of funds could be an option. After all, we want to be in a position where the investment in any process yields the most value for the organization overall. We simply cannot make decisions on the improvement plans without understanding the perceived business value.

Measuring the perceived importance accurately requires asking the right questions and getting feedback from the right audience. People from the senior management team or IT customers who are considered power users are probably in a better position than others to provide this insight. Also, simply asking IT customers how important a process is to the organization may not be effective, because those customers are not likely to be as familiar with the nitty-gritty of IT processes as we are. We will need to extract the information by phrasing the questions in a way that our customers can understand and respond to, without getting into too much technical jargon.

As an example, the result of this analysis could be a bar chart showing the maturity level and the perceived importance level for the processes under assessment.
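
If you keep the scores in a small script rather than a spreadsheet, producing that chart takes only a few lines. Here is a minimal sketch using Python with matplotlib; the process names and scores are invented purely for illustration.

    import matplotlib.pyplot as plt
    import numpy as np

    # Hypothetical assessment results: maturity and perceived importance per process (1-5 scale).
    processes = ["Incident", "Problem", "Change", "Capacity"]
    maturity = [3.2, 2.1, 2.8, 1.9]
    importance = [4.5, 3.8, 4.2, 2.5]

    x = np.arange(len(processes))   # bar group positions
    width = 0.35                    # width of each bar

    fig, ax = plt.subplots()
    ax.bar(x - width / 2, maturity, width, label="Maturity level")
    ax.bar(x + width / 2, importance, width, label="Perceived importance")

    ax.set_xticks(x)
    ax.set_xticklabels(processes)
    ax.set_ylim(0, 5)
    ax.set_ylabel("Score (1-5)")
    ax.set_title("Process maturity vs. perceived business importance")
    ax.legend()

    plt.tight_layout()
    plt.show()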

  • Degree of Integration Between Processes

Another factor to consider before taking a process maturity score and making an improvement plan is how well processes integrate with one another. ITSM processes rarely act alone, and the effectiveness of an overall ITSM program also depends on the level of integration between processes. Assessing how well one process integrates with another generally involves looking at how well the output from one process is used by other processes. Some examples of process integration for problem management include:

    • Processes Providing Input Into Problem Management:
      • Capacity management process could provide historical usage and capacity trends information to aid the root cause analysis or formulation of permanent solutions.
      • Incident management process could provide incident details for the root cause analysis activities. Incident data can also enable proactive problem management through the use of trend analysis.
      • Configuration management process could provide relationship information between configuration items, which can help in determining the impact of problems and potential resolutions.
    • Processes Receiving Output from Problem Management:
      • Incident management process could receive known error records and details of temporary fixes in order to minimize the impact of incidents.
      • Change management process could receive requests for change triggered by problem management to implement permanent solutions to known errors.

What scale should you use to rate the integration between processes? I think a simple scale of one to five should work just fine. For example:

    • One could indicate the output from the originating process is used inconsistently by the target process
    • Two could indicate the output from the originating process is used consistently but only informally by the target process
    • Three could indicate the output from the originating process is used consistently by the target process in a documented manner
    • Four could indicate the output from the originating process is used consistently to support the target process in a managed way
    • Five could indicate the output from the originating process is used consistently to support the target process in an optimized way

You define what the scale really means for your environment in a way that is easily understood by your team. Also keep in mind that not all processes must integrate seamlessly with other processes on every possible front in order to have an effective ITSM program; however, good use of the integration scores can help us uncover opportunities to capitalize on our strengths or to improve on our weak spots. For example, a low integration score between the incident and problem management processes could signal an opportunity to improve how those two processes exchange and consume each other’s output. If we find the known error database is not being used as much as it should be during incident triage, we should dig in further and see what actions we can take to improve the information flow. If the problem management process is hampered by a lack of accurate incident information coming from the incident management process, the integration score should point us to the need to raise the quality of information exchanged between the two processes.
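
A simple way to work with these scores is to record them per source-target pair and flag any pair that falls below a threshold you choose. The sketch below is Python with invented process pairs and scores; the threshold of 3 is just an example of where you might draw the line.

    # Hypothetical integration scores (1-5, using the scale above) between
    # a source process (providing output) and a target process (consuming it).
    integration_scores = {
        ("Incident Management", "Problem Management"): 3,
        ("Problem Management", "Incident Management"): 1,  # known errors rarely reused in triage
        ("Problem Management", "Change Management"): 4,
        ("Capacity Management", "Problem Management"): 2,
    }

    THRESHOLD = 3  # pairs scoring below this are candidates for improvement

    for (source, target), score in sorted(integration_scores.items()):
        note = "  <-- improvement opportunity" if score < THRESHOLD else ""
        print(f"{source} -> {target}: {score}{note}")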

As an example, the result of the process integration analysis could be a two-by-two chart showing the integration scores between processes.

We have come a long way in this DIY process assessment journey, from gathering the potential resources, to planning for the assessment, to executing the assessment and analyzing the results. In the next and concluding post on the process assessment topic, we will discuss presenting the assessment results and suggest some quick-win items to consider as part of the follow-up activities.

DIY Process Assessment Execution – Process Survey Example

In the last DIY assessment post, we discussed the data gathering methods and instruments to use for the surveys, workshops, and interviews. No matter what method(s) you end up deploying for your assessment, you will need a list of good/effective/best practices for a process in order to formulate the assessment questions. In the first post of the series, we talked about what reference sources you can use to come up with a list of good practices for a given process. In this post, we will illustrate what the good practices and survey questions might look like for Problem Management.

Problem Management Process Assessment Questionnaire Example

As you look through the example document, I would like to point out the following:

  1. Each question in the questionnaire represents a good practice that is part of what a mature process would look like. To come up with the list of practices, I leveraged the information from ISO/IEC 20000 Part 2: Guidance on the Application of Service Management Systems. Helpful information sources like ITIL, ISO 20000, and COBIT provide a great starting point for us DIY’ers, and for the most part there is no reason to reinvent the wheel.
  2. To rank the responses and calculate the maturity level, I plan to use the 5-point scale of CMMI. The maturity levels used by CMMI include 1) Initial, 2) Repeatable, 3) Defined, 4) Managed, and 5) Optimized. However, those maturity levels are not likely to be something your survey audience knows very well, so we need other ways for them to rank their answers. As you can see from the example, I used either the scale of 1) Never, 2) Rarely, 3) Sometimes, 4) Often, 5) Always or 1) Not at All, 2) Minimally, 3) Partially, 4) Mostly, 5) Completely. You don’t have to use both scales – it all depends on how you ask the questions. I could have asked all questions using one scale or the other. In my example, I chose to mix things up a bit just to illustrate that both scales are viable for what we need to do.
  3. Some questions are better asked with closed-ended options like Yes or No instead of a scale. Those questions tend to deal with whether you have certain required artifacts or deliverables. For example, you either have a documented problem management process and procedures, or you don’t.
  4. As you can see, the scale questions translate nicely when calculating the maturity level. You may calculate the maturity level as a simple average of all responses to the scale questions, where every question carries an equal weight. Depending on your environment or organizational culture, you may also assign a different weight to each question, emphasizing certain practices over others. For the closed-ended questions, you will need to decide what the “yes” and “no” responses mean when you calculate the final maturity level. For example, you may decide that a “yes” for a group of questions earns a score of 3 out of 5 while a “no” equals 1; for some questions, a “yes” may even equal 5. (A small scoring sketch follows this list.)
  5. This is a simplistic model for assessing and calculating a maturity level with a DIY approach. You will need to construct a similar good practice model for each process you plan to assess. Coming up with a good practice model to assess against can turn into a significant time investment. However, the majority of the effort is up front, and you can re-use the model for subsequent assessments. If you contract out the assessment exercise to a consultant, the best practice model used to evaluate your processes is normally a deliverable from the consultant. Be sure to spend some time understanding your consultant’s model and make sure it is applicable to your organization. That is an important way to ensure the assessment results will be meaningful and easy for everyone to understand.
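
As promised in point 4, here is a minimal scoring sketch in Python. The questions, responses, weights, and the yes/no-to-score mapping are all invented for illustration; adjust them to match your own questionnaire and scoring rules.

    # Hypothetical questionnaire responses for one process (e.g. Problem Management).
    # Scale questions are answered 1-5; closed-ended questions are answered "yes"/"no".
    scale_responses = {
        "Problems are logged with an agreed categorisation": [4, 3, 4, 5],   # one score per respondent
        "Known errors are recorded and shared with the service desk": [2, 3, 2, 3],
    }
    yes_no_responses = {
        "A documented problem management process exists": "yes",
    }

    # Optional per-question weights (default 1.0 = equal weight).
    weights = {
        "Known errors are recorded and shared with the service desk": 1.5,
    }

    YES_SCORE, NO_SCORE = 3, 1   # how closed-ended answers translate to the 1-5 scale


    def question_score(question, answers):
        if isinstance(answers, str):          # closed-ended yes/no question
            return YES_SCORE if answers.lower() == "yes" else NO_SCORE
        return sum(answers) / len(answers)    # average across respondents for scale questions


    all_questions = {**scale_responses, **yes_no_responses}
    weighted_total = sum(weights.get(q, 1.0) * question_score(q, a) for q, a in all_questions.items())
    total_weight = sum(weights.get(q, 1.0) for q in all_questions)

    print(f"Weighted maturity score: {weighted_total / total_weight:.2f} (1-5 scale)")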

Please have a look at the example document and let me know what you would do to improve it. In the next post, we will continue the discussion of the assessment execution phase by examining how to analyze the results and evaluate the maturity levels. We will also discuss how inter-process integration, as well as organization and culture, could play a part in the maturity level assessment.