Contents

  1. The History of Operations Manager
  2. Introducing System Center 2012 Operations Manager
  3. Terminology and Concepts
  4. Summary

Introducing System Center 2012 Operations Manager

System Center Operations Manager 2007 Unleashed (Sams, 2008) discussed how the nomenclature change from Microsoft Operations Manager 2005 to System Center Operations Manager 2007 suggested OpsMgr had a facelift with that release. The same is true with System Center 2012 Operations Manager. While the core architecture has not undergone the vast changes introduced with the OpsMgr 2007 product, and the changes are for the most part evolutionary, the name change emphasizes that System Center is the actual product: the latest release is System Center 2012, and the component of interest is Operations Manager.

Beginning with System Center 2012, Microsoft is releasing System Center as a single product, rather than in waves of individual components. Corresponding with this are significant licensing changes for System Center, discussed in Chapter 4, “Planning an Operations Manager Deployment.” System Center 2012 also includes a higher level of integration and cohesion between its components, along with a common installer, although the first version of the installer is far from the robust installer available in products such as Microsoft Office. This is not to say that System Center 2012 is the ultimate vision of integration between its components. Rather, it is the first release of System Center as a single product; the System Center brand was first announced at the Microsoft Management Summit (MMS) in 2003, when it consisted of two technically unrelated products: MOM 2000 and SMS 2003. System Center, you’ve come a long way, but the journey isn’t over yet!

The System Center 2012 Operations Manager component includes a number of new features and capabilities; key improvements include changes in the management server architecture to eliminate single points of failure, new network monitoring capabilities, and application performance monitoring. The next sections discuss the changes in this release, which Microsoft documents at http://technet.microsoft.com/en-US/library/jj656648.aspx. Additional information is available at http://www.windowsnetworking.com/articles_tutorials/Introduction-System-Center-Operations-Manager-2012-Part1.html, although that article was written during the beta timeframe.

Supported Configurations

System Center 2012 Operations Manager continues to build on the robustness of OpsMgr 2007 R2. Table 2.1 lists supported configurations for agents, applications, consoles, and network devices. “Supported” means Microsoft has tested these configurations and will support them; these are not necessarily the outer limits of what will run. In some cases, the load on individual agents may affect what will perform well. For additional information on monitored item capacity, see http://technet.microsoft.com/en-us/library/hh205990.aspx#BKMK_MonitoredItem.

Table 2.1. Supported configurations for agents, applications, consoles, and network devices (Monitored Item: Recommended Limit)

  Open Operations consoles: 50
  Agent-monitored computers reporting to a management server: 3,000
  Agent-monitored computers reporting to a gateway server: 2,000
  Agents in a single management group: 15,000
  Collective client monitored computers per management server: 2,500
  Agentless Exception Monitored (AEM) computers per dedicated management server: 25,000
  AEM computers per management group: 100,000
  Management servers per agent for multi-homing: 4
  Agentless-managed computers per management server: 10
  Agentless-managed computers per management group: 60
  Agent-managed and UNIX or Linux computers per management group: 6,000 (with 50 open consoles); 15,000 (with 25 open consoles)
  UNIX or Linux computers per dedicated management server: 500
  UNIX or Linux computers monitored per dedicated gateway server: 100
  Network devices managed by a resource pool with three or more management servers: 1,000
  Network devices managed by two resource pools: 2,000
  Application performance monitoring (APM) agents: 700
  APM applications: 400
  URLs monitored per dedicated management server: 3,000
  URLs monitored per dedicated management group: 12,000
  URLs monitored per agent: 50

Root Management Server and Root Management Server Emulator

An ongoing area of angst with the management server architecture introduced in OpsMgr 2007 was the root management server (RMS). Often referred to as the product’s “Achilles’ heel,” the RMS was a single point of failure, as it ran workflows and services that did not run on other management servers; when the RMS was unavailable, these functions did not run. The RMS was solely responsible for providing the following:

  • Console access
  • Role-based access control
  • Distribution of configurations to agents
  • Connectors to other management systems
  • Alert notifications
  • Health aggregation
  • Group calculations
  • Availability
  • Dependency monitoring
  • Database grooming
  • Enabling model-based management

In addition, two Windows services started only on the RMS and were disabled on other management servers: the SDK and Config services. A third service, the Health Service (renamed in OpsMgr 2007 R2 to System Center Management, but often referred to by its previous name), ran on all management servers and agent-monitored systems, but had a unique role on the RMS, running workloads for the entire management group.

With all these responsibilities, the RMS often was a performance and scalability bottleneck, in addition to being a single point of failure. Making the RMS highly available required clustering—not necessarily for the faint of heart—or having a secondary management server available to be promoted to the RMS role. If the RMS became unavailable, those functions the RMS was responsible for were also unavailable until another management server could take over that role.

System Center 2012 Operations Manager removes the RMS role. The SDK service (renamed to the System Center Data Access Service or DAS) now starts automatically on every management server, and the Config service (now the System Center Management Configuration service), rewritten and renamed from OMCG to CSHOST, is federated among management servers. Rather than using data stored in memory on the RMS as was the case in OpsMgr 2007, the configuration service uses a new set of tables in the Operations Manager database (known as the Configuration Store and identified as CS.*) to persist the instance space.

To distribute the RMS-specific workloads to all management servers, Microsoft developed the concept of resource pools, discussed in depth in the “Resource Pools” section. Three resource pools distribute the RMS-specific workloads:

  • All Management Servers Resource Pool: This pool has most RMS-specific instances and workflows. Exceptions are the AD Integration workflows and Alert Subscription Service.
  • Notifications Resource Pool: The Alert Subscription Service instance is targeted to this pool. Using a separate pool allows you to easily remove management servers from the pool that should not be participating in notifications.
  • AD Assignment Resource Pool: AD Integration workflows are targeted to this pool so you can more easily control the location where the AD assignment workflows are running.
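To make the idea concrete, here is a minimal Python sketch of how a resource pool behaves conceptually: managed objects assigned to the pool are spread across its available members, and when a member becomes unavailable, its share is redistributed among the survivors. The round-robin assignment and the server and device names are illustrative assumptions, not the product's actual distribution algorithm:

```python
def distribute(objects, members):
    """Assign each managed object to an available pool member, round-robin.
    Illustrative only; OpsMgr uses its own internal distribution logic."""
    available = sorted(name for name, up in members.items() if up)
    if not available:
        return {}
    assignment = {name: [] for name in available}
    for i, obj in enumerate(sorted(objects)):
        assignment[available[i % len(available)]].append(obj)
    return assignment

# Hypothetical pool members and network devices managed by the pool
members = {"Helios": True, "Hydra": True, "Pegasus": True}
objects = ["router1", "router2", "switch1", "switch2", "switch3", "switch4"]

before = distribute(objects, members)   # two objects per member
members["Hydra"] = False                # a pool member becomes unavailable
after = distribute(objects, members)    # its share is redistributed
```

Whichever member fails, every managed object still ends up with exactly one owner, which is the availability property resource pools exist to provide.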

The RMS emulator (RMSE) is a special role added to one of the management servers, by default the first management server installed. The RMS emulator provides backward compatibility with legacy management packs; it is not actually required for the management group to function correctly. To determine which management server holds the RMS emulator role, open the OpsMgr Operations console and navigate to Administration -> Management Servers. Figure 2.3 shows that Helios is currently acting as the RMS emulator in the Odyssey OMGRP management group. Should you need to move the RMS emulator role, Microsoft provides the Get-SCOMRMSEmulator, Set-SCOMRMSEmulator, and Remove-SCOMRMSEmulator PowerShell cmdlets to identify, move, and delete the RMSE; these cmdlets are discussed in Chapter 23, “PowerShell and Operations Manager.”

Figure 2.3. The Management Servers view identifies the current RMS emulator.

High Availability with Resource Pools

Resource pools are collections of management servers or gateway servers that can take over monitoring workflows if one of the pool’s members becomes unavailable. Several resource pools are installed with OpsMgr 2012, and you can create your own for specific purposes. As an example, you could create a resource pool of management servers located in the same geographic area to provide network device monitoring. Using resource pools helps make your management group highly available.

The management servers in any given resource pool are treated as having equal capacity; differences in processors and memory are not considered, and workloads are distributed among the available servers in the pool without regard to their relative size. The authors recommend you plan for all management servers to have similar configurations.

By default, all management servers are members of the resource pools created when OpsMgr is installed, and management servers added to the management group are automatically added to any resource pool with a membership type of Automatic. Removing a management server from a pool changes the membership type of that pool to Manual. Figure 2.4 shows the resource pools created by default. To see the current list of resource pools, navigate in the Operations console to Administration -> Resource Pools.

Figure 2.4. Resource pools created by OpsMgr management group installation.

Here are several caveats to keep in mind:

  • Windows agents do not use resource pools for failover; they continue to function as in OpsMgr 2007. You can implement Active Directory Integration (ADI) as discussed in Chapter 8, “Installing and Configuring Agents.”
  • Resource pools only cover health service functionality. The workflows targeted to the instances are loaded by the health service in the pool that is managing that instance. Should one of the health services in the resource pool fail, the other health services pick up the work the failed member was running. For failover of the SDK service, consider Network Load Balancing, covered in Chapter 9, “Complex Configurations.”

Viewing Resource Pool Health

You can view the health of resource pool members in the Administration pane of the Operations console. Navigate to the Resource Pool node, select the pool you want to work with and then select View Resource Pool Members in the Tasks pane to view the health of the members of the selected resource pool.

Resource Pool Events

A number of events provide feedback regarding the state of each member of the pool. These events, listed in Table 2.2, are not collected or stored by Operations Manager; you must connect to each pool member’s event log to view the events. If there is a problem with the health of a pool, start your investigation by looking at the event log. All resource pool events are logged under the Pool Manager category.

Table 2.2. Resource pool events

Event 15000: Pool Member Initialized Event (Informational)
  Explanation: Indicates a member of a pool has received configuration and started the failover process.
  Description: The pool member has initialized.

Event 15001: Operations Manager Resource Pool Member Initial Check Ack Quorum (Informational)
  Explanation: More than half the pool members must be able to communicate with each other for a pool to perform work. Once the initial check ack quorum is achieved, a lease request can be sent out.
  Description: More than half of the members of the pool have acknowledged the most recent initialization check request. The pool member will send a lease request to acquire ownership of managed objects assigned to the pool.

Event 15002: Operations Manager Resource Pool Member Quorum Failure (Error)
  Explanation: This is the only event that can occur repeatedly without intermediate events, so it is throttled to once every 10 minutes per pool if no other changes in the pool occur. It indicates a majority of pool members has not acknowledged the initial check request.
  Description: The pool member cannot send a lease request to acquire ownership of managed objects because half or fewer members of the pool acknowledged the most recent initialization check request. The pool member will continue to send an initialization check request.

Event 15003: Operations Manager Resource Pool Member Failover (Informational)
  Explanation: This event is the result of a check ack message that influenced the logic used to determine if a pool member is available.
  Description: Availability of one or more members of the pool has changed. Ownership for all managed objects assigned to the pool will be redistributed between pool members.

Event 15004: Operations Manager resource pool member not allowed to perform work (Error)
  Explanation: This event is logged if a majority of pool members do not respond to the last lease request before the previous lease expired.
  Description: The pool member no longer owns any managed objects assigned to the pool because half or fewer members of the pool acknowledged the most recent lease request. The pool member has unloaded the workflows for managed objects it previously owned.
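The quorum rule behind events 15001, 15002, and 15004 can be captured in a few lines. The following is an illustrative Python sketch of the "more than half" requirement described above, not code from the product; note that exactly half is not enough, which is why a two-member pool cannot tolerate the loss of either member:

```python
def has_quorum(acknowledged, pool_size):
    """A pool member may send a lease request only when MORE than half of the
    pool's members acknowledge the initialization check request (event 15001);
    half or fewer acknowledgments blocks work (events 15002/15004)."""
    return acknowledged > pool_size / 2

# A three-member pool tolerates one unavailable member...
print(has_quorum(2, 3))      # quorum achieved
# ...but a two-member pool cannot tolerate any loss,
print(has_quorum(1, 2))      # no quorum: exactly half is not a majority
# and two of four acknowledgments is likewise "half or fewer."
print(has_quorum(2, 4))      # no quorum
```

This is one reason the caveats above recommend at least two, and preferably three or more, management servers for pools that must remain highly available.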

Network Monitoring Enhancements

OpsMgr 2012 provides physical network monitoring of routers and switches, extending to their interfaces and ports. This and other functionality is made available by Microsoft’s licensing of EMC’s SMARTS technology, which provides root-cause analysis capability across an enterprise by looking at network connections and interpreting events by looking at the symptoms those events spawn. Here’s what is included in network monitoring:

  • Network device discovery, monitoring and reporting
  • SNMP v3 support; previous versions supported SNMP v1 and v2c
  • IPv4 and IPv6 support
  • Port/interface monitoring; this includes, among other monitors:
    • Up/down monitoring
    • Traffic volume
    • Utilization
    • Dropped packet rate
    • Broadcast traffic statistics
  • Virtual Local Area Network (VLAN) health monitoring
  • Overall connection health
  • Hot Standby Router Protocol (HSRP) group health
  • New visualizations/dashboards:
    • Overall network summary: Shows the health of the network.
    • Network node: Shows the health of a device on the network. Figure 2.5 is an example of this dashboard.

      Figure 2.5. The Network Node Dashboard.

    • Network interface: Displays interface-level statistics.
    • Vicinity: Shows a device, its neighbors, and connected Windows servers.

See Chapter 16, “Network Monitoring,” for a detailed discussion of network monitoring.

Monitoring .NET Applications

Also known as application performance monitoring, APM is Microsoft’s integration of AVIcode into OpsMgr. APM can drill from an application performance alert down to the code level to see exactly what is going on and where the issues are, giving you a single tool to monitor both your applications and your infrastructure. Because APM is configured using templates, it does not require authoring management packs or modifying code. Interfaces include the Application Advisor (shown in Figure 2.6) and Application Diagnostics Web consoles, with the Web Application Availability Monitoring Wizard added to the Monitoring space of the Operations console.

Figure 2.6. APM Application Advisor.

This feature, discussed in Chapter 15, “Monitoring .NET Applications,” is available for .NET web applications and web services running on IIS 7.x in OpsMgr 2012 RTM; SP 1 adds support for WCF, ASP.NET MVC, .NET Windows services, the Azure SDK, and IIS 8.

Microsoft is positioned in the Challengers quadrant of the Gartner 2012 Magic Quadrant for Application Performance Monitoring, as discussed at http://www.gartner.com/technology/reprints.do?ct=120820&id=1-1BRNFO0&st=sg and http://innetworktech.com/wp-content/uploads/2012/08/2012-Magic-Quadrant-for-Application-Performance-Monitoring.pdf.

Using PowerShell

System Center 2012 Operations Manager incorporates about 30 new PowerShell cmdlets, with the goal of supporting via cmdlet anything performed in the console. The OpsMgr 2007 cmdlets are renamed with a SCOM prefix and take new parameters; the original OpsMgr 2007 cmdlets, although deprecated, still work in this version by loading the OpsMgr 2007 snap-in. PowerShell capabilities are discussed in Chapter 23.

Saving Overrides

A welcome change is that the Default management pack is no longer the default location for saving overrides or creating new management packs. Figure 2.7 shows where the drop-down now prompts you to select a management pack.

Figure 2.7. Overrides are no longer saved in the Default management pack by default.

Using the Console

The Operations Manager “full” console remains largely unchanged in OpsMgr 2012. Unlike consoles in some of the other System Center 2012 components, the Operations console does not include the ribbon bar (some may consider this an advantage). One change is the Actions pane is renamed to the Tasks pane. For an in-depth discussion of the console, see Chapter 7, “Configuring and Using System Center 2012 Operations Manager.” Figure 2.8 shows the Operations console.

Figure 2.8. Initial view of the Operations console opens to the Monitoring Overview in the Monitoring space.

Web Console Updates

The OpsMgr 2012 Web console is completely redesigned and based on Silverlight. All monitoring capabilities in the full console are now available in the Web console, along with access to the My Workspace node. Figure 2.9 shows the Monitoring space in the Web console.

Figure 2.9. Active Alerts view in the Monitoring space of the Web console.

Maintenance Mode on Management Servers

Operations Manager does not block placing management servers into maintenance mode. However, the way management servers behave when placed into maintenance mode has changed:

  • When you place a management server in maintenance mode, the System Center Management Configuration Service changes the agent assignment to force the agents to fail over to another management server, ensuring data loss does not occur.
  • In OpsMgr 2007, a management server was unable to take itself out of maintenance mode. This was because maintenance mode unloaded all workflows, including the End Maintenance Mode workflow! This now changes with the assignment of the End Maintenance Mode workflow to the All Management Servers resource pool. By being in the resource pool, the workflow fails over to another management server and reloads to be able to exit maintenance on the management server that is placed in maintenance mode. (By definition, having a resource pool requires at least two management servers, so if you only have a single management server it will never come out of maintenance mode because there is no other management server available for workflow failover.)
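The single-server caveat above follows directly from the failover requirement, and can be expressed as a simple check. This is a hedged illustration in plain Python, not a product API: exiting maintenance mode requires at least one other pool member to run the End Maintenance Mode workflow.

```python
def can_exit_maintenance(pool_members, server):
    """The End Maintenance Mode workflow must fail over to another member of
    the All Management Servers resource pool while `server` is in maintenance;
    with no other member available, the workflow cannot run and the server
    never comes out of maintenance mode."""
    return any(m != server for m in pool_members)

# Hypothetical pools: two members can cover for each other,
print(can_exit_maintenance({"Helios", "Hydra"}, "Helios"))
# but a single-server management group has no failover target.
print(can_exit_maintenance({"Helios"}, "Helios"))
```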

Dashboards

OpsMgr contains significant enhancements in its capability to display data through use of dashboard technologies. Discussed in Chapter 11, “Dashboards, Trending, and Forecasting,” here are some of the highlights:

  • Dashboards are distributed in management packs.
  • Authoring dashboards does not require coding.
  • You can access dashboards via the Operations console, the Web console, and a SharePoint Web Part.

What’s New in Security

In OpsMgr 2012, the local Administrators group is used to populate the Operations Manager Administrator role. This differs from OpsMgr 2007, where you could configure which group held that role; if you specified an incorrect group, you ran the risk of being unable to start the Operations console to change it without editing MomAuth.xml.

Operations Manager 2007 R2 simplified the original presentation of Run As accounts and profiles by adding distribution and targeting features. When you associate a Run As account with a particular Run As profile, you can target the profile to any class available in Operations Manager and see the logical relationship between the two. Both Run As account distribution and Run As account targeting must be correctly configured for the Run As profile to work properly.
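As a simple illustrative model (not the product's implementation), a Run As profile is effective on a given computer only when both conditions hold: the Run As account has been distributed to that computer, and the profile's targeting covers that computer. The computer names below are hypothetical:

```python
def profile_works_on(computer, distributed_to, targeted_to):
    """Both Run As account distribution and Run As profile targeting must
    cover the computer for the profile to function there."""
    return computer in distributed_to and computer in targeted_to

# The account is distributed to SQL01 and SQL02, but the profile's
# targeting covers only SQL01:
print(profile_works_on("SQL01", {"SQL01", "SQL02"}, {"SQL01"}))  # works
print(profile_works_on("SQL02", {"SQL01", "SQL02"}, {"SQL01"}))  # does not
```

This is why a misconfiguration of either half, distribution or targeting, is enough to make monitoring workflows that depend on the profile fail.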

Connector Functionality Replaced by System Center 2012 Orchestrator

Previous versions of Operations Manager used connectors to connect to other systems. In System Center 2012 Operations Manager, this functionality is replaced by Orchestrator integration packs. For information on Orchestrator, see Chapter 21, “System Center Integration,” and System Center 2012 Orchestrator Unleashed (Sams, 2013).

Changes in Capacity

With the most recent version of Operations Manager, Microsoft has increased capacity in several areas to extend the product’s monitoring capabilities. Table 2.3 compares management features across the different versions.

Table 2.3. Comparison of Operations Manager capabilities across versions

  Managed computers per management group
    MOM 2000: 1,000; MOM 2000 SP 1: 2,000; MOM 2005: 3,500; MOM 2005 SP 1: 4,000;
    OpsMgr 2007: 5,000; OpsMgr 2007 SP 1: 6,000; OpsMgr 2007 R2: 6,000; OpsMgr 2012: 15,000

  Managed computers per management server
    MOM 2000: 700; MOM 2000 SP 1: 1,000; MOM 2005: 1,200; MOM 2005 SP 1: 2,000;
    OpsMgr 2007: 2,000; OpsMgr 2007 SP 1: 2,000; OpsMgr 2007 R2: 2,000 (UNIX/Linux: 200);
    OpsMgr 2012: 3,000 (UNIX/Linux: 500)

  Management servers per management group
    MOM 2000: 4; MOM 2000 SP 1: 10; MOM 2005: 10; MOM 2005 SP 1: 10; OpsMgr 2007: 10;
    OpsMgr 2007 SP 1, OpsMgr 2007 R2, and OpsMgr 2012: no defined limit

  Agentless managed computers per management group
    MOM 2000: n/a; MOM 2000 SP 1: n/a; MOM 2005 through OpsMgr 2012: 60

  Agentless managed computers per management server
    MOM 2000: n/a; MOM 2000 SP 1: n/a; MOM 2005 through OpsMgr 2012: 10

  Agents per gateway server
    MOM 2000 through MOM 2005 SP 1: n/a; OpsMgr 2007: 200; OpsMgr 2007 SP 1: 800;
    OpsMgr 2007 R2: 1,500; OpsMgr 2012: 2,000 (UNIX/Linux: 100)
