Mastering System Center 2012 Operations Manager
Ebook, 1,250 pages, 11 hours

About this ebook

An essential guide on the latest version of Microsoft's server management tool

Microsoft's powerful System Center 2012 Operations Manager introduces many exciting new and enhanced features that allow for large-scale management of mission-critical servers. This comprehensive guide provides invaluable coverage to help organizations monitor their environments across computer, network, and storage infrastructures while maintaining efficient and effective service levels across their applications.

  • Provides intermediate and advanced coverage of all aspects of System Center 2012 Operations Manager, including designing, planning, deploying, managing, maintaining, and scripting Operations Manager
  • Offers a hands-on approach by providing many real-world scenarios to show you how to use the tool in various contexts
  • Anchors conceptual explanations in practical application

Mastering System Center 2012 Operations Manager clearly shows you how this powerful server management tool can best be used to serve your organization's needs.

Language: English
Publisher: Wiley
Release date: Oct 18, 2012
ISBN: 9781118238424
Mastering System Center 2012 Operations Manager - Bob Cornelissen

Introduction

Welcome to this book on System Center 2012 Operations Manager. During the past 10 years and after multiple iterations of the product, Operations Manager has grown to be one of the main monitoring products for medium to large enterprises. The Operations Manager 2007 version, for instance, brought us from server-based monitoring to application-based monitoring, whereas the 2007 R2 version brought us cross-platform monitoring toward the Unix/Linux machines in our datacenter. Operations Manager 2012 is now bringing us expanded network monitoring, a much better dashboarding experience, application performance monitoring (which used to be known as a separate product called AVIcode), and an important infrastructural change by removing the root management server and using resource pools, just to name a few improvements. We are excited about this version of the product and keen to tell you more about it. This book will give you an overview of Operations Manager with deep dives into important pieces. The chapters contain real-world examples from the field, which also might give you ideas on how to use Operations Manager in different ways.

Who Should Read This Book?

The answer to this question is simple: everyone. Well, maybe anyone who wants to monitor their network using Operations Manager 2012. Included between the covers of this book is a comprehensive look at deploying, managing, troubleshooting, and working with Operations Manager and its new command set for PowerShell.

As complex as products are becoming, no one can be an expert on all of them. If you are like most administrators, you have time to learn only enough about a product so that you can manage it effectively. However, there is probably a lot more that you could be doing with any one product. This book is meant to get you up to speed quickly and then help you through some of the more arcane topics.

Not every administrator will have the same type of infrastructure to work with. What works well in a large corporation does not always work for small companies. What works well for small companies may not scale well for large organizations. Microsoft has attempted to address the differences among companies and deliver a product that can be implemented quickly for a small company, yet will still scale well for large organizations. No matter which scenario fits you, you will want to learn how this product will work for you.

But most of all, any administrator who wants to try to get to the proactive side of managing their infrastructure should consider looking through these pages to see how they can start monitoring their systems effectively. Being on the other side—the reactive management side—means that you are constantly having to respond to emergencies and continually putting out fires. If you are a reactive administrator, you probably already understand how hard it is to try to make your infrastructure more efficient when you don’t have enough time to work on anything else but emergencies.

The Contents at a Glance

There are two parts to this book. Part 1 covers implementation and administration. Part 2 covers maintenance and troubleshooting. As you read through each section, you will find that the material flows from one subject to another, building as you go. By the time you finish Part 1, you should have a good understanding of what goes into deploying and managing your management group. Throughout Part 2, you will learn some of the tips and tricks to keep your management group running smoothly.

Part 1: Implementation and Administration

Chapter 1: Overview of Operations Management This chapter explains what operations management is and how Microsoft has implemented it with Operations Manager 2012. The topics covered include Microsoft Operations Framework, Information Technology Infrastructure Library, and the role played by Operations Manager 2012.

Chapter 2: Installing Operations Manager 2012 This chapter identifies the prerequisites and examines the installation options in detail.

Chapter 3: Management Group Settings The settings that control the management servers within the management group are discussed in this chapter. You will see how to manage the global settings applied to all management servers, and then how to override the settings on a server-by-server basis.

Chapter 4: Installing and Configuring Agents Each monitored system needs an agent installed so that it can send detailed information to the management server and be monitored and managed. The options for installing and configuring the agent are discussed in this chapter.

Chapter 5: Managing Management Packs Management packs define what is monitored on each of the managed systems. They also contain the rules for creating reports and tasks within the Operations Console. This chapter covers importing management packs and working with all of the objects contained within them.

Chapter 6: Understanding Management Packs Management packs are dynamic in nature. After you create a management pack, it will need to be fine-tuned and additional object modifications will need to be made. In this chapter you will be introduced to life cycle management for your management packs.

Chapter 7: Working with Consoles After the management group has been created and the management packs and agents deployed, you will need to start monitoring. The Operations Console is the primary tool for working with all of the objects in the management group. Here you will find out how to work with the Operations Console effectively.

Chapter 8: Network Monitoring Operations Manager 2012 has implemented improvements in network monitoring. This chapter explains how to discover, manage, monitor, and report on network devices.

Chapter 9: Application Performance Monitoring Deep monitoring of applications has always been a problem in the past. This chapter discusses how to use Application Performance Monitoring features in Operations Manager to get to the bottom of certain application-related problems.

Chapter 10: Working with Complex Configurations You may find yourself having to work with multiple management groups or allow your management group to interface with other management products. This chapter explains how connections can be made and how you can create a hierarchy of management groups.

Part 2: Maintenance and Troubleshooting

Chapter 11: Optimizing Your Environment Not everyone can use the management packs right out of the box. You will need to test them in your organization to see how they work. This chapter will cover some of the options that you have to make your management group work more efficiently.

Chapter 12: Backup, Restore, and Disaster Recovery You do not want to start over from scratch, and you definitely don’t want to lose all of the data that you have collected. This chapter covers your options for making sure that you have your management group protected in case a disaster strikes.

Chapter 13: Troubleshooting Even though we would like to think that everything works perfectly all of the time, we know that is not the case. Some troubleshooting tips are detailed in this chapter.

Chapter 14: Scripting Operations Manager 2012 PowerShell is a very powerful tool that will help you manage your management group. This chapter is a primer on PowerShell as it is used in Operations Manager 2012.
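As a small, hypothetical taste of the material in the scripting chapter, the sketch below shows the Operations Manager 2012 shell in action on a management server. The server name MS01.contoso.local is a placeholder, and the snippet assumes the OperationsManager PowerShell module is installed (it ships with the Operations Manager console and server roles); it is illustrative, not a recipe from the book.

```powershell
# Load the Operations Manager 2012 module and connect to a management group.
# "MS01.contoso.local" is a placeholder management server name.
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName "MS01.contoso.local"

# List open (resolution state 0) critical alerts, newest first.
Get-SCOMAlert -ResolutionState 0 |
    Where-Object { $_.Severity -eq "Error" } |
    Sort-Object -Property TimeRaised -Descending |
    Select-Object -Property Name, MonitoringObjectDisplayName, TimeRaised
```

Because the 2012 cmdlets follow the standard verb-noun convention, they compose naturally with generic cmdlets such as Where-Object and Sort-Object, which is much of what Chapter 14 explores.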

The Mastering Series

The Mastering series from Sybex provides outstanding instruction for readers with intermediate and advanced skills, in the form of top-notch training and development for those already working in their field and clear, serious education for those aspiring to become pros. Every Mastering book includes:

Real-World Scenarios, ranging from case studies to interviews, that show how the tool, technique, or knowledge presented is applied in actual practice

Skill-based instruction, with chapters organized around real tasks rather than abstract concepts or subjects

Self-review test questions, so you can be certain you’re equipped to do the job right

Final Comments

Make sure you take the time to become familiar with Operations Manager 2012. The more comfortable you are with it, the more you will be able to do with it. At the very end of some chapters, you’ll find Master It self-tests: small labs that help reinforce the topics in the chapters. Instructions have been included that allow you to create a virtual environment. Building a virtual test environment can come in handy when you are trying to work through a new topic or troubleshoot a problem.

Most of all, have fun as you are going through the topics contained in this book. Once you find out how much power Operations Manager has in store for you, you will be amazed at some of the things you can do. Just looking at the surface, being able to monitor servers and services, may be impressive enough, but the additional features can be equally impressive, such as monitoring the health of an application from the time a user submits a request until the response is returned from your servers, reporting on the availability of servers and services, and creating scripts that will automatically alter the way the service functions when a problem arises.

How to Contact the Authors

We welcome feedback from you about this book or about books you’d like to see from us in the future. You can reach us by writing to feedback@masteringscom.com. For more information about the authors and additional information concerning the book’s content, please visit our website at www.masteringscom.com.

Sybex strives to keep you supplied with the latest tools and information you need for your work. Please check their website at www.sybex.com/go/mastsc2012opsmgr, where we’ll post additional content and updates that supplement this book if the need arises.

Part 1

Implementation and Administration

Chapter 1: Overview of Operations Management

Chapter 2: Installing Operations Manager 2012

Chapter 3: Management Group Settings

Chapter 4: Installing and Configuring Agents

Chapter 5: Managing Management Packs

Chapter 6: Understanding Management Packs

Chapter 7: Working with Consoles

Chapter 8: Network Monitoring

Chapter 9: Application Performance Monitoring

Chapter 10: Working with Complex Configurations

Part I of this book explains how to install, configure, and manage Microsoft System Center 2012 Operations Manager.

Chapter 1 is an overview of operations management: how it integrates with the Microsoft Operations Framework (MOF) and the Dynamic Systems Initiative (DSI), and how those frameworks complement the Information Technology Infrastructure Library (ITIL). These are the backbone of Operations Manager. Once we have introduced you to those concepts, we will discuss cloud computing and explain the IT as a Service (ITaaS) model. Operations Manager delivers on the ITaaS model and empowers businesses to create service maps based on their IT service catalog. In the chapters that follow, we move on to installing the management servers and creating the management group. After the management group is created, you will learn how to configure and manage your Operations Manager deployment. All of this is presented in an orderly fashion so you can follow along and build an environment that meets your needs.

Chapter 1

Overview of Operations Management

Before we delve into the System Center 2012 Operations Manager product, we must explain what operations management is, what it defines, and why you need it. As an IT manager, you are not responsible for all key business activities within the company. When those activities are being processed on your servers, however, you become a critical piece of the puzzle in overall IT systems management. You may control the database servers, but they house information that is critical to the day-to-day operation of the billing department, for example. Suddenly, you start to see how everything ties together. A missing or damaged link in the chain or an unplanned removal of the chain may cause much more damage than you originally thought.

This is just one of the many reasons Microsoft created the Microsoft Operations Framework (MOF), based on the Information Technology Infrastructure Library (ITIL). The idea behind MOF and ITIL is to create a complete team structure with the ultimate goal of service excellence. Numerous groups fall under the IT department tag, but we often see many of them acting as separate departments rather than as one cohesive unit. Desktop support, application developers, server support, storage administrators, and so forth are all members of IT, but they are not always as tight as they should be.

Operations Manager is much more than just a centralized console view of the events and processes in your network. It was built with ITIL and MOF in mind, and so we would like to start the book with a background of both these IT service management standards.

In this chapter, you will learn to:

Understand IT service management

Explore the IT Infrastructure Library (ITIL) and Microsoft Operations Framework (MOF)

Explore the Dynamic Systems Initiative

Define cloud computing

Understand the IT as a Service (ITaaS) model

Define the Microsoft System Center 2012 products

Define operations management

Understanding IT Service Management

ITIL and MOF were introduced as a way to deliver consistent IT service management (ITSM). Some of the key objectives of ITSM are:

To align IT services with current and future needs of the business and its customers

To improve the quality of IT services delivered

To reduce the long-term cost of service provisioning

Think of ITSM as a conduit between the business and the technology that helps run the business. Without a proper conduit in place, neither can function properly. ITSM is process focused as opposed to vendor specific and technology centered.

Exploring ITIL

In the early 1980s, computing evolved from a centralized IT organization model to distributed computing and geographically dispersed resources. The distributed computing model brought greater flexibility, but it also brought a downside: deteriorating, inconsistent processes for technology delivery and support. The UK Office of Government Commerce (OGC) identified the need for consistent practices across all aspects of a service life cycle to deliver organizational effectiveness and efficiency as well as predictable service levels. As a result, ITIL was born. ITIL is now the most widely adopted framework for ITSM in the world.

ITIL version 1 was published between 1989 and 1995. The original ITIL volumes, consisting of 31 books total, provided a cohesive set of best practices for ITSM. These books were created by industry leaders of the time, and their best practices gave direction and guidance for providing high-quality IT facilities and services to support IT.

In 2000 and 2001, the initial version was revised to become ITIL v2. This second version consolidated the original 31 publications into 7 more closely connected and consistent books within an overall framework.

In June 2007, ITIL v3 was published, an enhanced and further consolidated version consisting of five core books covering the service life cycle, as shown in Figure 1.1. These books were updated in July 2011 for consistency.

Figure 1.1 The service life cycle

The five core volumes that make up ITIL v3 each cover a stage of the service life cycle. Table 1.1 shows these volumes and their key areas.

Table 1.1: The ITIL v3 volumes

There is much more to ITIL than just the books, however. ITIL as a whole includes the books, certification, ITIL consultants and services, and ITIL-based training and user groups. The IT Service Management Forum (itSMF) is the primary contributor and promoter for ITIL and is made up of more than 6,000 member companies covering in excess of 40,000 individuals spread over 53 autonomous chapters worldwide.

The itSMF International Executive Board is the separate international entity that provides overall steering and support functions to existing and emerging chapters. You can find more information about them at their website: http://itsmfi.org.

You can find a lot more in-depth information about ITIL on the official website: www.itil-officialsite.com.

Exploring the MOF

IT organizations are continuously challenged to deliver better IT services at lower cost in a turbulent environment, and ITIL is the best-known management framework developed to deal with this challenge. The MOF is Microsoft’s structured approach to the same goal.

In 1999, Microsoft created the first version of MOF (MOF v1). The key focus of developing MOF was two-pronged:

To provide a framework specifically designed for managing Microsoft technologies

To give IT professionals the knowledge and processes required to manage Microsoft platforms cost-effectively and thus achieve high reliability and security

In early 2008, MOF v4 was released (based on ITIL v3), and it is the current version of the framework. MOF v4 was built to respond to new IT challenges, such as demonstrating IT's business value, responding to regulatory requirements, and improving organizational capability.

The MOF guidance includes all of the activities and processes involved in managing an IT service, from its conception through development, operation, and maintenance to, ultimately, its retirement. MOF organizes these activities and processes into service management functions (SMFs), which are grouped together in phases that mirror the IT service life cycle. Each SMF is anchored within a life cycle phase and contains a unique set of goals and outcomes supporting the objectives of that phase. An IT service's readiness to move from one phase to the next is confirmed by management reviews (MRs), which ensure that goals are achieved in an appropriate fashion and that the goals of IT are aligned with the goals of the organization.

MOF includes a great number of resources that are available to help you achieve mission-critical system reliability, manageability, supportability, and availability with Microsoft products and technologies. These resources are in the form of whitepapers, operations guides, assessment tools, best practices, case studies, templates, support tools, courseware, and services. All of these resources are available on the official MOF website at www.microsoft.com/mof.

How MOF Expands

While ITIL is based on IT operations as a whole, MOF has taken the route of providing a service solution as its core. MOF focuses on the release and life cycle of a service solution, such as an application or infrastructure deployment.

Since ITIL was based on a philosophy of "adopt and adapt," Microsoft decided to use it as the basis for MOF. Although Microsoft supports ITIL from a process perspective, the company decided to make some changes and add a few things when it built MOF. One of these changes was moving to a prescriptive process model. Microsoft describes the ITIL process model as descriptive: it takes more of a "what to do, when, and why" approach, whereas MOF takes a prescriptive "how to do it" approach.

The MOF v4 Model

Information in this section about the Microsoft Operations Framework 4.0 is provided with permission from Microsoft Corporation (© 2009 Microsoft Corporation).

Figure 1.2 The MOF v4 IT service life cycle

© 2009 Microsoft Corporation

The MOF v4 life cycle areas are:

Plan Phase The goal of this phase is to enable planning and optimization of IT service strategy so as to support business goals and objectives. This is where business and IT collaborate to ensure that the delivered services are valuable and compelling, predictable and reliable, compliant, cost-effective, and adaptable to the changing needs of the business.

Deliver Phase The primary goals here are to ensure that IT services, infrastructure projects, or packaged product deployments are envisioned, planned, built, stabilized, and deployed in line with business requirements and the customer’s specifications.

Operate Phase Goals at this phase are to ensure that deployed services are operated, monitored, and supported in line with the agreed-to service-level agreement (SLA) targets in a way that meets business needs and expectations.

Manage Layer The primary goal here is to establish an integrated approach to IT service management activities. This layer operates continuously through the Plan, Deliver, and Operate phases and promotes consistency and accountability in planning and delivering IT services.

Service Management Functions

MOF also introduced the concept of SMFs. Each life cycle area of the MOF model contains SMFs that define the processes, people, and activities needed to align IT services to the requirements of the business.

Even though each SMF can be interpreted as a stand-alone set of processes, it’s important to understand how the SMFs in all the phases work to ensure that service delivery is at the desired quality and risk level. In some phases (such as Deliver), the SMFs are performed consecutively, whereas in other phases (such as Operate), the SMFs may be performed simultaneously to create the outputs for the phase. Figure 1.3 shows the SMFs within the MOF v4 model and their placement within each of the phases.

Figure 1.3 The MOF v4 model with SMFs

© 2009 Microsoft Corporation

As Table 1.2 illustrates, there are currently 16 SMFs that describe the series of management functions performed in an IT environment. All of these SMFs map to an ITIL-based best practice for performing each function.

Table 1.2: MOF v4 SMF placement

Management Reviews

For each phase in the life cycle, MRs serve to bring together information and people to determine the status of IT services and to establish readiness to move forward in the life cycle. These reviews help ensure that business objectives are being met and that initiatives, projects, and services are on track to deliver expected value. The scope of MRs can be either project-specific or broad.

During an MR, the criteria that a service must meet to move through the life cycle are reviewed against actual progress. Figure 1.4 shows the MRs and their placement within the MOF v4 model.

Figure 1.4 The MOF v4 model with MRs

© 2009 Microsoft Corporation

Table 1.3 shows the six MOF v4 management reviews, their placement within the IT service life cycle, and their inputs and outputs.

Table 1.3: MOF v4 management reviews

Real World Scenario

Mixing It Up

So you say you don’t have enough coworkers to fill all of the positions for your project? According to MOF recommendations, you don’t need to have one person for every role. Looking at any of the MOF guides for services, you will find role clusters that define the responsibilities for each of the roles. The separation of responsibilities is meant to define who is allowed to perform each action and also grants a level of accountability. When one administrator needs to make a change, there are others who can validate that the change is required, and still others who can verify that the change was made appropriately.

Not every organization can have enough personnel to fulfill every one of the roles. If your company has a small staff, you will need to assign multiple roles to each of your administrators. The best option is to look at each of the role clusters and have one administrator take on all of the actions that are defined for a cluster. Not enough personnel to assign one person per role cluster? Look at each role cluster and determine which role clusters do not pose a possible conflict of interest. You want to maintain some accountability—assign responsibilities so that there are checks and balances in place.

Exploring the Dynamic Systems Initiative

As software becomes more and more complex—thus introducing new components and systems to the infrastructure—the IT department will in turn become increasingly diverse and heterogeneous. For example, an inventory application moves from being client server–based, to multitier, to a web service–based application. As the application grows and more users start using it, the decision is made to install a hardware load balancer in front of it. Then the data is moved to a storage area network (SAN) to give the IT department better control over backup and recovery options. The result is an IT environment in which the definition of a distributed application has evolved to include much more than just the software.

All of these changes result in various teams in the IT department being involved with this application. You quickly see how a change to the application can affect more than just the application developers. You now have to coordinate changes with the web server team, the database administrators, the networking team, and the storage team.

Whether these teams are made up of one person (you), or they consist of dozens of people on each team, you realize how complex the infrastructure can become, and why there is a need for management of these distributed systems. With so many teams working on a system over its lifetime, if the knowledge they each have could be captured in machine-readable form over the life of a system, then this knowledge could be harnessed to automate many of the well-defined management tasks that are handled manually today. Not only does the impact of this process drive down support costs, but it also reduces the risk of mistakes and omissions associated with humans carrying out all the steps of even a single best-practice management process. This concept of capturing and reusing knowledge over the life of the system and each manageable component is at the heart of the management aspects of the Dynamic Systems Initiative (DSI).

The DSI is an industry strategy led by Microsoft to effect these fundamental changes. It is a plan to build software that incorporates ITSM capabilities and MOF best practices to match IT capabilities and operations with business needs.

DSI solutions not only address the complexity of enterprise IT infrastructures but also deliver enterprise-like capabilities to small and medium-sized businesses in a simple and cost-effective way.

DSI helps IT organizations deliver end-to-end offerings that will:

Increase productivity and reduce costs across the entire IT organization

Reduce time and effort required to troubleshoot and maintain systems

Improve system compliance with business and IT policies

Increase responsiveness to changing business demands

DSI also delivers dynamic systems technology to businesses. There are three architectural elements of the dynamic systems technology strategy:

Virtualized Infrastructure Achieves a greater agility and leverages existing infrastructure by consolidating system resources into a virtual service pool. Virtualized infrastructure makes it easier for a system to quickly add, remove, move, or change the resources it draws on to do its work.

Knowledge-Driven Management Enables systems to capture desired states of configuration and health in models based on business priorities and demands, and uses this inherent knowledge to provide a level of self-management to systems.

Design for Operations Captures the diverse knowledge of people, such as business architects and managers, application developers, IT professionals, and industry partners, by embedding it within the IT infrastructure through the use of system models.

These three architectural elements are the building blocks for dynamic systems. Virtualized infrastructure mobilizes resources and brings elasticity to the infrastructure. Knowledge-driven management is the mechanism for putting those resources to work to meet dynamic business demands, and design for operations ensures that systems are built with operational excellence in mind.

SDM versus SML

Originally, Microsoft implemented the System Definition Model (SDM) as the standard schema within DSI. SDM was developed to capture a consistent model of IT resources.

SDM is an Extensible Markup Language (XML)-based language and modeling platform through which a schematic blueprint for effective management of distributed systems can be created. SDM models can be consumed by specific management systems, such as members of the System Center product family and third-party management products; they can also be hosted by each component of the system to enable local self-management by the components themselves. Just as a distributed system is a set of related software and hardware resources running on one or more computers that are working together to accomplish a common function, SDM models combine to form a common management definition of that distributed system, created out of the resultant sum of its component parts.

In July 2006, Microsoft, along with other major hardware and software corporations, published a draft of a new specification defining a consistent way to express how heterogeneous computer networks, applications, servers, and other IT resources are described—or modeled—in XML. Based extensively on SDM, this new specification was called the System Modeling Language (SML). SML provides a consistent method for hardware manufacturers and software developers to define how infrastructure, applications, and services are modeled.

Microsoft has now realigned all of their work on the SDM platform with SML and has renamed the platform the SML platform. This approach will give IT departments end-to-end solutions that are integrated across applications, operating systems, hardware, and management tools, and it will provide reduced costs, improved reliability, and increased responsiveness in the entire IT life cycle.

DSI in the Real World

The goals of DSI are accomplished through management packs in System Center 2012 Operations Manager, a topic we will discuss in greater detail in Chapter 5, Managing Management Packs. A real-world example of the SML platform applied to Operations Manager is the management pack itself: once unsealed, every management pack is an XML file, because XML is the language on which SML is based and in which all management packs are built.
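To make that concrete, here is a simplified, hypothetical fragment of an unsealed management pack. The element names follow the management pack XML schema, but the Contoso ID, version numbers, and name are placeholders invented for illustration, not taken from any shipping pack:

```xml
<ManagementPack ContentReadable="true" SchemaVersion="2.0">
  <Manifest>
    <Identity>
      <!-- Hypothetical identity; real packs use IDs such as Microsoft.Windows.Server.Library -->
      <ID>Contoso.Sample.Monitoring</ID>
      <Version>1.0.0.0</Version>
    </Identity>
    <Name>Contoso Sample Monitoring Pack</Name>
    <References>
      <!-- A pack builds on classes defined in the sealed library packs it references -->
      <Reference Alias="System">
        <ID>System.Library</ID>
        <Version>7.0.0.0</Version>
        <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
      </Reference>
    </References>
  </Manifest>
</ManagementPack>
```

Because the format is plain XML, an unsealed pack can be opened and inspected in any XML-aware editor, although day-to-day changes are normally made through the Operations Console or the authoring tools.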

Tying It All Together

Microsoft took the ITIL, MOF, and DSI standards and created a suite of products known as System Center 2012 that helps you in your quest to align with the best practices set forth in those frameworks. The System Center suite helps IT organizations capture and use information to design more manageable systems and automate IT operations. This is achieved by integrating systems-management tools and knowledge of the systems to help you with day-to-day operations of the environment, as well as to ease your time spent on troubleshooting and to improve planning capabilities.

With System Center 2012, you are empowered with the tools that can help you build, migrate to, and manage your cloud infrastructures—both private and public. To fully realize the potential of System Center as a whole suite, you need to understand the various cloud deployment models, the cloud service models, and the capabilities of each of the System Center products.

Cloud Computing Defined

The National Institute of Standards and Technology (NIST) is the globally recognized authority on developing standards and guidelines for the cloud computing model. In September 2011, they released their final version of the cloud computing definition. This definition states that:

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models.

Figure 1.5 shows a graphical representation of the NIST definition of cloud computing.

Figure 1.5 The NIST cloud computing model

Essential Cloud Characteristics

NIST has specified five essential cloud characteristics within its cloud computing definition, and these are described below:

On-Demand Self-Service A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.

Broad Network Access Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).

Resource Pooling The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, and network bandwidth.

Rapid Elasticity Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.

Measured Service Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

The source for this information is the website of the National Institute of Standards and Technology. See http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf.

Cloud Service Models

Along with the five essential cloud characteristics, NIST has defined three service models that exist within the cloud:

Software as a Service (SaaS) The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based e-mail), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Platform as a Service (PaaS) The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or -acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.

Infrastructure as a Service (IaaS) The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

The source for this information is the website of the National Institute of Standards and Technology. See http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf.

Cloud Deployment Models

With the three cloud service models and five essential characteristics explained, the NIST publication also defines four cloud deployment models:

Private Cloud The Private Cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.

Public Cloud The Public Cloud infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the cloud provider.

Hybrid Cloud The Hybrid Cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

Community Cloud The Community Cloud infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.

The source for this information is the website of the National Institute of Standards and Technology. See http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf.

For more information on all of the special publications that the NIST has released, download the whitepapers from csrc.nist.gov/publications/PubsSPs.html.

Understanding IT as a Service

Now that you understand the various characteristics, service, and deployment models of a cloud environment, it is time to learn about a new cloud service model that Microsoft and their partners have defined and modeled the System Center 2012 suite around. To Microsoft, cloud computing represents a transformation in the industry in which we work. Their goal was to deliver a strategy that will let you focus on your business and the IT services that run your business from a holistic management perspective instead of on an individual component-based design. This model is defined as IT as a Service (ITaaS). It offers organizations greater flexibility in leveraging the power of IT to meet their business needs than has been available before. In short, System Center 2012 helps deliver ITaaS through the various product offerings within the suite. It enables you to manage your environment in the same way your users consume the services. With System Center 2012 you can create service maps based on your service catalog.

As an example, within System Center 2012 Operations Manager, you can create a distributed application based on the Messaging Service environment. This distributed application can encompass all of the components that make up your Messaging Service environment—from the Microsoft Exchange mail servers (both physical and virtual) to the network hardware gateways and even the end-user client perspectives. The health state of each of these individual components will roll up to a single top-level icon that represents the health state of the entire Messaging Service. Built-in processes and health roll-up policies will determine whether or not a particular alert generated should change the entire Messaging Service from healthy to warning or even to critical. With this ITaaS model, you are moving away from dealing with the silo-based method of individual component alerts and notifications and instead seeing the whole service as a single entity, giving you the transparency that you need across the service catalog.
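The worst-state roll-up behavior described above can be sketched in a few lines of Python. This is purely a conceptual illustration, not the Operations Manager API; the state names and the choice of a worst-of policy are assumptions for the example.

```python
# Illustrative sketch of a "worst state wins" health roll-up policy,
# similar in spirit to how a distributed application's top-level health
# is derived from its components. Not the Operations Manager API.

# Order the states from best to worst so max() can pick the worst.
SEVERITY = {"healthy": 0, "warning": 1, "critical": 2}

def roll_up(component_states):
    """Return the worst health state among the given components."""
    return max(component_states, key=lambda state: SEVERITY[state])

# Hypothetical components of a Messaging Service distributed application.
messaging_service = {
    "exchange-mbx-01": "healthy",
    "exchange-hub-01": "warning",
    "smtp-gateway":    "healthy",
}

# One warning component makes the whole service show as "warning".
print(roll_up(messaging_service.values()))
```

In the real product, roll-up policies are configurable (for example, worst-of versus percentage-based), but the principle is the same: many component states combine into one service-level state.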

As you progress through this book, you will learn much more about distributed applications, alerting, health states, and roll-up policies within Operations Manager.

System Center 2012

Before we look at the System Center 2012 line of products, let’s first explore the System Center management disciplines that were introduced by Microsoft to help define IT service management:

Operations management

Change management

Configuration management

Data protection management

Service management

Virtual machine management

Orchestration management

Security management

Incident management

Problem management

Through internal products, close work with partners, and acquisitions of software from other companies, Microsoft has addressed each one of these disciplines with a product in the System Center 2012 suite. The suite includes the following eight core products:

System Center 2012 Operations Manager Operations Manager is an end-to-end monitoring solution that covers Microsoft and 19 different cross-platform environments. It enables you to centrally monitor services, devices, and operations for many computers from a central console. It takes a holistic view of application health from the infrastructure right up to the application itself, thus complementing the ITaaS model by design. SLAs can be tracked and reported on easily to determine the overall level of SLA compliance. Operations Manager is Microsoft’s systems management solution for operations management.

The key benefits are:

Monitors infrastructure, network, applications, transactions, and code

Monitors both public and private clouds

Monitors physical and virtual environments

Provides central monitoring and diagnostics of Microsoft and non-Microsoft platforms, including Unix, Linux, and VMware

System Center 2012 Configuration Manager Configuration Manager helps you to maintain corporate compliance and control by enabling management of servers, PCs, and mobile devices. It accomplishes this with a unified infrastructure that provides a central console from which to push out updates, deploy applications and operating system packages, and even manage your antivirus. Configuration Manager fits into Microsoft’s systems management solution for change and configuration management.

The key benefits are:

Empowers user productivity

Unifies management and security infrastructure

Simplifies IT administration

Manages a wide range of mobile devices

Operations Manager Family Tree

In October 2000, Microsoft purchased the rights to NetIQ’s Operations Manager (OM) and by the middle of 2001 subsequently released Microsoft Operations Manager 2000 (MOM) to provide real-time monitoring of enterprise Windows servers. Nearly 12 years since the original NetIQ Operations Manager purchase, the product has evolved from Microsoft Operations Manager (2000/2005) to Microsoft System Center Operations Manager (2007/2007 R2) and now to Microsoft System Center 2012 Operations Manager.

System Center 2012 Data Protection Manager Data Protection Manager is a backup and recovery product that delivers unified data protection for Windows servers and applications. It is a best-of-breed product for protecting SQL, Exchange, SharePoint, Virtualization, and file servers. It provides bare-metal recovery for Windows desktops and laptops too. With native site-to-site replication for disaster recovery, it can back up to disk, tape, and cloud-based repositories. Data Protection Manager is the systems management solution for data protection management. It is an excellent choice for Windows systems because it is a Microsoft backup product for Microsoft products.

The key benefits are:

Provides centralized management and integration with the Operations Manager console

Optimizes SharePoint and Hyper-V item-level recovery (ILR)

Offers certificate-based authentication for nondomain backups

System Center 2012 Virtual Machine Manager Virtual Machine Manager helps enable centralized management of your physical and virtual IT infrastructure. As well as Hyper-V, it can manage heterogeneous hypervisors such as Citrix and VMware. With Virtual Machine Manager, you can plan, deploy, manage, and optimize your virtual infrastructure. The 2012 release now also supports storage integration where you can manage your Storage Management Initiative – Specification (SMI-S) provider SANs. It promotes the ITaaS model by using service templates for quick and easy provisioning of resources across multiple devices. Virtual Machine Manager is Microsoft’s systems management solution for virtual machine management.

The key benefits are:

Provides heterogeneous hypervisor support to manage Microsoft, VMware, and Citrix environments

Offers Server App-V integration

Provides service templates

Enables private cloud creation and management

System Center 2012 Orchestrator Orchestrator brings automation to the datacenter. Through workflows, you can automate the creation, monitoring, and deployment of resources in your environment. Orchestrator can take over the type of manual tasks that you need to perform on a regular basis, such as new employee account creation, virtual machine provisioning, and alert remediation. Often the individual tasks and subtasks are automated but the process as a whole is not; Orchestrator can tie those tasks together to provide efficient end-to-end solutions in your infrastructure. Orchestrator supports the automation and orchestration systems management solutions.

The key benefits are:

Optimizes and extends your existing investments

Delivers flexible and reliable services

Lowers costs and improves predictability

Lets you connect systems from different vendors without having to know how to use scripting and programming languages

System Center 2012 Service Manager Service Manager delivers an integrated platform for automating and adapting IT service management best practices to your organization’s requirements.

Service Manager can provide increased productivity, reduced costs, improved resolution times, and tighter compliance of IT standards. Included in Service Manager are the core process management packs for incident and problem resolution, change control, and configuration and knowledge management. It uses a central configuration management database (CMDB) to automatically connect knowledge and information from Operations Manager, Configuration Manager, Orchestrator, Virtual Machine Manager, and Active Directory Domain Services. Service Manager fits into Microsoft’s systems management solutions for incident, problem, and change management.

The key benefits are:

Provides datacenter management and integration efficiency

Offers incident ticketing and problem management

Provides user-centric support through the Self-Service Portal

Manages IT governance, risk, and compliance (IT GRC)

System Center 2012 App Controller App Controller is the newest member of the System Center 2012 suite and provides a unified console that helps you manage public and private clouds, as well as cloud-based virtual machines and services. It enables application self-service across both the public and private cloud. App Controller supports the ITaaS model with its deep integration to Virtual Machine Manager and Windows Azure — Microsoft’s public cloud service offering. App Controller fits into Microsoft’s virtual machine systems management solution.

The key benefits are:

Connects both public and private clouds

Provides centralized management of multiple Virtual Machine Manager deployments and Azure subscriptions

Contains a central library shared across clouds

Allows for quick deployment of service templates to the cloud

System Center 2012 Endpoint Protection Endpoint Protection allows you to consolidate desktop security and management into a single solution.

Built on System Center 2012 Configuration Manager, Endpoint Protection provides a single, integrated platform that helps you reduce costs by using your existing client management infrastructure to deploy and manage your endpoint protection. The unified infrastructure also provides improved visibility into the security and compliance of your client systems. Endpoint Protection supports the Security Management systems management solution.

The key benefits are:

Implements role-based management across security and operations

Features industry-leading malware detection

Provides a single infrastructure for client management and security

Improves visibility for identifying and remediating vulnerabilities

Defining Operations Management

There is often some confusion when it comes to the definition of operations management. The Microsoft System Center 2012 suite covers a wide range of management territory, and the greatest confusion arises between systems management and operations management. We will look at the difference between the two.

Systems Management

Systems management is typically defined as software that is used to centrally manage large groups of computer systems. This software contains the tools to control and measure the configuration of both hardware and software in the environment.

System Center 2012 Configuration Manager and System Center 2012 Endpoint Protection are the Microsoft products that function in the systems management space. Configuration Manager provides remote control, patch management, software distribution, hardware and software inventory, user activity, and capacity monitoring. Endpoint Protection provides antivirus and vulnerability detection and protects against both known and unknown threats.

Operations Management

Now that you have an understanding of the System Center products that provide systems management, we will concentrate on operations management. Operations management is mainly focused on ensuring that business operations are efficient and effective through processes that are aimed at improving the reliability and availability of IT systems and services. You accomplish this by gathering information from your current systems, having the right people in place to decipher that data, and having proper procedures in place to carry out any tasks that may arise if there is a current or potential problem in your environment.

The Microsoft product that addresses this need is System Center 2012 Operations Manager, which is based on MOF, which in turn is based on ITIL. Operations Manager is a product that allows centralized monitoring of numerous computers and services on a network. Let’s break down the components of operations management now and apply them to the Operations Manager product.

Gathering Information

Operations Manager can gather information on all the components that come together to provide a service to your business. These components can consist of software such as Windows Server, SQL, Exchange, SharePoint, VMware, and Hyper-V. They can also include hardware components such as servers, SANs, routers, switches, and even the air-conditioning units that keep your datacenters at the right temperature.

When all of this data is collated into Operations Manager, it can provide transparency across your IT service catalog by bringing all the components that make up each service into individual distributed applications, which are essentially a top-level view of each service.

Having the Right People in Place

Once you have configured your ITaaS using Operations Manager, you need to ensure that the right people are in place to oversee its management and ensure that the data collected is deciphered and handled in the correct manner.

These people will most likely have roles that encompass the various security access levels within your company and will map to the Operations Manager user roles of administrators, advanced operators, operators, and read-only operators, to name a few.

This team will be the eyes of the business into what is going on each day within the company IT infrastructure and service catalog.

Having Proper Procedures in Place

When the information has been gathered and the team has been put together to decipher the data, proper procedures need to be put in place to enable the implementation of specific tasks in the event of a critical service failure.

When distributed applications have been deployed for each service, they define the service as a single object within Operations Manager. SLAs can then be configured for each of these single object services to provide reportable statistics on how that service is performing across the business. When you have procedures such as these in place, it will be much easier to define what services are operating at full efficiency and which ones need attention or additional support to bring them in line with the rest of the IT service catalog SLAs.
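As a back-of-the-envelope illustration of the kind of statistic an SLA report produces, the sketch below computes an availability percentage from recorded downtime and compares it against a target. The figures and the 99.9 percent target are assumptions for the example, not values taken from Operations Manager.

```python
# Hypothetical SLA compliance check: compute availability from downtime
# and compare it against an agreed target. All figures are illustrative.

def availability_pct(total_minutes, downtime_minutes):
    """Percentage of the period during which the service was available."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

minutes_in_month = 30 * 24 * 60   # a 30-day month: 43,200 minutes
downtime = 50                     # minutes of recorded outage
target = 99.9                     # agreed SLA target, in percent

actual = availability_pct(minutes_in_month, downtime)
print(f"{actual:.3f}% (target {target}%) ->",
      "compliant" if actual >= target else "breached")
```

Even 50 minutes of downtime in a month drops availability below a 99.9 percent target, which is exactly the kind of insight an SLA report surfaces so that underperforming services can be identified and addressed.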

The Result

Operations Manager provides you with the information you need to help reduce the time and effort spent managing your IT infrastructure. It gives you a proactive approach to identifying potential problems and is a powerful tool that enables you to get the most out of your environment. Operations Manager delivers top-class operations management.

The Bottom Line

Understand IT service management. You can think of ITSM as a conduit between the business and the technology that helps run the business. Along with the technical approach to IT, consider implementing a process-based approach to complement the technical side. This will help minimize downtime and help your business meet requirements set forth in SLAs.

Master It Which of the following is not a key objective of ITSM? (Choose one.)

a. To align IT services with current and future needs of the business and its customers

b. To reduce decision time so that options can be held open until very late in the process

c. To improve the quality of IT services delivered

d. To reduce the long-term cost of service provisioning

Explore the IT Infrastructure Library (ITIL) and Microsoft Operations Framework (MOF). ITIL was created by the United Kingdom’s Office of Government Commerce (OGC) to create a framework of best-practice procedures to help support business. This increases quality and value of both IT systems and services through the creation of a set of policies for service support and service delivery.

Master It When the ITIL was first released it contained 31 books. What is the most current version of the ITIL and how many books does it have now?

Understand how MOF expands ITIL. Using ITIL as its foundation, Microsoft set out to customize this set of best practices and tune them to fit the Microsoft philosophy. MOF takes the ITIL processes in service delivery and service support and breaks them out into SMFs. The SMFs are located in four quadrants: Plan, Deliver, Operate, and Manage.

Master It Which of the four MOF quadrants operates continuously as a layer throughout the other three phases?

Explore the Dynamic Systems Initiative. To help support MOF, Microsoft has built systems that will ease the administrative burden of managing the growing complexity of the IT infrastructure. This is known as the Dynamic Systems Initiative (DSI). DSI will help IT administrators tie together the MOF best practices with the software and systems that they manage.

Master It What is the standard XML-based schema that currently operates within the DSI?

Define cloud computing. The National Institute of Standards and Technology (NIST) has published a specific definition of cloud computing that is composed of five essential characteristics, three service models, and four deployment models. Microsoft has modeled System Center 2012 around this definition to bring full management and elasticity to the cloud.

Master It Name the four different cloud deployment models as defined by NIST.

Understand the IT as a Service (ITaaS) model. The IT as a Service (ITaaS) model delivers a strategy that will let you focus on your business and the IT services that run your business from a holistic management perspective instead of on an individual component-based design. It offers organizations greater flexibility in leveraging the power of IT to meet their business needs than has been available before. System Center 2012 helps deliver ITaaS through the various product offerings within the suite.

Master It Identify at least two products in the System Center 2012 suite that complement the IT as a Service model.

Define the Microsoft System Center 2012 products. Microsoft has aligned a suite of products under the title of System Center 2012 to help an IT organization meet the best practices set forth in MOF. This suite includes eight core products, and each operates within different System Center management disciplines that were introduced by Microsoft to help define IT service management.

Master It Choose two products from this list that are not products of the System Center 2012 suite:

a. Operations Manager

b. Virtual Machine Manager

c. App Controller

d. Capacity Planner

e. Orchestrator

f. Data Protection Manager

g. Reporting Manager

h. Service Manager

i. Configuration Manager

j. Endpoint Protection

Define operations management. Operations management is focused on ensuring that business operations are efficient and effective through processes that are aimed at improving the reliability and availability of IT systems and services.

Master It System Center 2012 Operations Manager is the product that addresses the operations management requirement within an organization, but how exactly do you accomplish this?

Chapter 2

Installing Operations Manager 2012

You are probably now eager to start working with Operations Manager 2012, and therefore you need to deploy the product. The installation itself is not the most difficult job in the world, but a properly planned design and delivery will lay the foundation for a successful and optimally performing monitoring solution. If you know you have installed the product correctly from the start, you have the right building blocks in place to proceed from. On the other hand, if you install the product poorly or incorrectly, then you will find that your deployment won’t live up to expectations and is destined to cause you problems. This is why it’s important to take some time to think about the decisions to be made instead of diving straight into the installation.

In this chapter, you will learn to:

Plan your Operations Manager environment

Understand the Operations Manager building blocks

Understand the prerequisites for installing Operations Manager

Install Operations Manager 2012

Planning the Operations Manager Environment

The first thing to do for any deployment is to define your project and then plan it properly before you implement the installation. It's tempting to skip the design plan and go straight to running the installation wizard, but Operations Manager has too many considerations and potential complexities for that to be wise.

Defining the project will help you to understand who the sponsors and stakeholders are in the environment that you will need to work with for the duration of the deployment. It will also go a long way in delivering an understanding of what your customers or internal IT teams will need from the monitoring solution that you are about to introduce or upgrade.

Figure 2.1 shows an example of a flowchart that can be used as a reference for the steps involved in designing an Operations Manager 2012 infrastructure.

Figure 2.1 Deployment decision flowchart

Infrastructure Planning and Design Guides

If you haven’t yet heard of or read through one of Microsoft’s Infrastructure Planning and Design (IPD) guides, you will definitely want to check them out before deploying their products. The IPD series of guides is a collection of Microsoft Word, Visio, and PowerPoint files that walks through a sequence of core decision points to design an infrastructure for specific Microsoft applications. They also serve as a means to validate design decisions within the business to ensure that the solution meets the requirements for both business and infrastructure stakeholders.

The IPD guides are designed to be used with the following IT staff in mind:

Infrastructure planners and architects who have a firm operational grasp of the specific technology

Partners and consultants who design infrastructure solutions

Business managers who want to understand the implications of how the technology decisions being made both support and affect the business

Individual IPD guides are available for specific Microsoft products including Active Directory, Exchange Server, SQL Server, and of course System Center 2012. We cannot stress enough the importance of reviewing the guides for System Center 2012 and SQL Server when planning your deployments and infrastructure requirements, as they contain all the information you need to complement your Operations Manager design plan.

You can choose to download each guide individually or simply download the entire series in a single compressed file from:

www.microsoft.com/en-us/download/details.aspx?id=732

Sizing Helper Tool

To assist with planning your Operations Manager database sizes, the Operations Manager Engineering Team has released a handy Sizing Helper Tool for System Center 2012 Operations Manager.

It’s an interactive Microsoft Excel document that allows you to choose from different deployment scenarios and contains recommendations around supported configurations and best practices.

It can be used to assist with planning the right infrastructure needed for a new Operations Manager deployment and helps to make decisions around optimizing cost on IT hardware purchases. The Sizing Helper Tool will return recommendations on such things as minimum hardware specification for each server role, topology diagrams, and storage requirements. In Figure 2.2 you can see an example of some of the options in the tool.

Figure 2.2 Sizing Helper Tool

As is the case with all sizing tools, it’s always recommended to use the tool as a guide and starting point for your Operations Manager designs; there may be some situations where you will need to use your own calculations to suit the deployment needs. You can download the tool directly from the Operations Manager Engineering Team blog here:

http://blogs.technet.com/b/momteam/archive/2012/04/02/operations-manager-2012-sizing-helper-tool.aspx

Operations Manager Building Blocks

You will need to decide on the Operations Manager architecture that is relevant to the environment you are deploying into. Operations Manager consists of a number of roles that form the building blocks of your deployment. Which roles to deploy depends on the requirements and infrastructure layout of what needs to be monitored. Common areas you have to cover in your architecture include, for example, monitoring untrusted-domain or workgroup servers located in a DMZ. You might also have a governance, risk, and compliance (GRC) requirement to audit security event logs on specific servers such as domain controllers.

Knowing which roles to implement and how to configure them is an important part of your Operations Manager skill set, and a full understanding of each role is essential before starting an installation.

Along with the different roles, you need to know about the other components that make up an Operations Manager deployment. These roles and components are as follows:

Management group

Management server

Resource pools

Operations Manager agent

Gateway server

Operations console

Web console

Reporting server

Operational database

Operations Manager data warehouse

Management packs

ACS collector

ACS forwarder

ACS database

AEM file share

The following sections will discuss these roles and components in detail to ensure you have the knowledge required to start installing the product.

Management Group

Before the various roles are discussed, you first need to know about another concept, the management group.

The management group is a logical administrative unit created during the installation of the first Operations Manager management server in the environment. The management group name must be unique within your Active Directory forest and is also case sensitive, which is important to remember when you input the name while connecting this management group to other management groups later.
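The case sensitivity of the management group name is easy to trip over. A minimal sketch of the point (the group names and the helper function below are made up for illustration, not part of the product):

```python
def names_match(entered, configured):
    # The management group name is case sensitive, so the comparison
    # must be exact and case preserving.
    return entered == configured

print(names_match("ContosoMG", "ContosoMG"))  # True
print(names_match("contosomg", "ContosoMG"))  # False: same letters, wrong case
```

When connecting management groups, a name that differs only in capitalization is treated as a different management group, so record the exact casing used at installation time.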

Being an administrative unit means that all the actions and configuration you do will be contained within the management group. At a bare minimum it consists of a single management server, the operational database, and the operational data warehouse—and
