

Server Virtualization: Impacting More Than Just IT


By David Lynch
VP of Marketing
Embotics

In today’s economic times, most IT organizations are accelerating their adoption of server virtualization as a means to reduce TCO, slash energy costs and keep pace with competitors.  This increased use of server virtualization has a broader impact than IT operations alone: virtualization directly affects overall governance models and requires new controls, processes and systems.  While the amount of virtualization in the datacenter is small, this impact is manageable.  However, as the amount of virtualization grows, its impact on the business as a whole grows with it.

This article examines the impact of this new architecture on the datacenter and on business controls, and recommends the new policies, systems and controls that virtualization requires in order to be effective.

Impact on the Datacenter
At first, most datacenter managers believed that they could manage virtual machines (VMs) in much the same way as they were managing their physical servers, but experience has shown that this is simply not possible.  While there are many commonalities between the two environments, there are also some significant differences.

One of the biggest differences is in the potential impact of virtualization on existing control processes and procedures. For example, every datacenter has specific processes and procedures when it comes to provisioning new servers; these generally involve sign-offs and handovers between the various datacenter teams, eventually resulting in a new server being installed in the datacenter.   

But when a new VM can be created literally with the click of a mouse, either by copying an existing one or generating a new one from a template, these processes become easy to bypass.

When you can make 10 identical copies of a specific server and distribute them around the organization, it can be a challenge to keep track of them.  The management systems provided by the virtualization vendors will tell you where a specific VM is now, but not where it has been, or about its relationships with other VMs.  This is compounded by the fact that VMs are, by definition, mobile.  They can (and do) move around the environment after they have been deployed, which not only makes them more difficult to track and manage, but can also compromise data/application segmentation policies.

These differences, and others, also make it difficult for existing datacenter management tools to operate in the virtual space, leaving potential blind spots.  This is further compounded by the lack of virtualization-specific management tools, reporting and automation.  The end result is a very manual environment that does not integrate well into existing datacenter compliance and control models.

This means that traditional “red flag” or alert systems may not be working properly in this space: they are either being circumvented or simply not seeing day-to-day operations, leaving your datacenter exposed.

In the physical datacenter, with all its associated systems, processes, checks and balances, flags are raised if things move out of process.  Management by exception is the rule of the day, and it is an environment where “no news” truly is “good news”.

But in an environment where the controls are predominantly manual and reporting is scarce, the lack of control visibility can create issues that you will not see coming.   This may not be much of a problem while the VM environment is small, but as the environment grows, the impact of these issues grows with it.

Impact on business controls
As virtual environments grow, the lack of automated controls and the increase in manual activity start to create an imbalance between IT and audit operations, and that imbalance widens as the environment continues to grow.

You can see this effect on business control systems more clearly using an integrated GRC (Governance, Risk & Compliance) model (Figure 1).   The lack of automated controls, monitoring and continuous measurement in the virtual datacenter, combined with the corresponding increase in manual processes, produces an environment with less effective controls and more risk than its physical counterpart.

There is not a lot of impact when environments are small and well contained, but the situation changes rapidly as environments grow, eventually resulting in a significant imbalance within the whole system.

The more the GRC system moves out of balance, the greater the impact to the overall business.

The individual “tipping point” will be different for each organization, but sooner or later it will be reached, and when it is, costs and risks start to escalate.  Many organizations are already running into unplanned IT expenses caused by inefficient virtual environments; others are seeing a rise in visible and invisible datacenter incidents, all of which eventually affect overall business performance.

Additional policies WILL be needed
Many existing operational and risk policies and control objectives still apply in the virtual world.  They may have to be adjusted to fit the dynamics of this new architecture, but they still apply.  These include elements common to all servers (physical or virtual), such as configuration, patch management and security.

However, virtualization has unique characteristics that require new policies and controls.  These include:

  • Identity management:  Given the dynamic nature of VMs, some level of identity management that is not based on simple naming conventions will be needed, if only to ensure that policies are being applied correctly.
  • VM mobility control: Part of the value that virtualization brings is the flexibility it provides to IT groups.  VMs are designed to be mobile and are easily moved from host to host in response to automated imperatives (load balancing) or manual ones (clearing VMs off a physical host that needs maintenance).  But mobility can be a double-edged sword, as not all VMs should be mobile.  For example, you will want to control (and demonstrate that control for audit purposes) any application that has to conform to regulation or internal corporate standards.  Policies will be required around where specific VMs should and should not be allowed to run, as well as how long VMs should be allowed to stay offline (powered off).
  • Provisioning: Traditional server provisioning processes can be easily circumvented in the virtual space, so new processes need to be established that control both what gets provisioned and who has the authority to authorize new servers.
  • Data separation: Every datacenter has specific rules around application separation; usually driven either by security concerns or compliance issues.  As you virtualize applications that fall under these standards, it’s important to consider how you will enforce this on the virtual side, not just when the VM is provisioned, but also to protect against inappropriate movement throughout its lifecycle; either malicious or inadvertent.
  • Reclamation: Ensuring that redundant or unused VMs are removed in a timely fashion is another area requiring specific policy and objectives.  The security impacts of this new technology also need to be considered, including the effect of VMs on existing security systems (some do not work well in the virtual space) and the potential for new attack threats.
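To make these policy ideas concrete, here is a minimal sketch of how identity-based policy checks might look in code.  Everything in it is illustrative: the `VMPolicy` record, the UUID-keyed identity, the 30-day reclamation threshold and the host names are assumptions for the sketch, not part of any vendor’s API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class VMPolicy:
    # Key policy on the VM's immutable instance UUID, not its display
    # name -- names are mutable and trivially duplicated when VMs are copied.
    instance_uuid: str
    allowed_hosts: set = field(default_factory=set)   # mobility control
    max_offline: timedelta = timedelta(days=30)       # reclamation threshold

def check_vm(policy, current_host, last_powered_on, now):
    """Return a list of policy violations for a single VM."""
    violations = []
    if policy.allowed_hosts and current_host not in policy.allowed_hosts:
        violations.append(
            f"VM {policy.instance_uuid} running on disallowed host {current_host}")
    if now - last_powered_on > policy.max_offline:
        violations.append(
            f"VM {policy.instance_uuid} offline past reclamation threshold")
    return violations
```

Run against each VM on a regular sweep, a check like this turns the policy statements above into something enforceable, rather than a document on a shelf.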

Additional Systems WILL be necessary
Server virtualization, despite its rapid worldwide adoption, is still a relatively immature technology.  It entered the datacenter through the back door as an operations tool with a very good ROI, and has since evolved into a new datacenter architecture.  But entering this way, it was never subject to the normal vetting process that technologies usually go through before deployment into the datacenter, and lacks some much needed functionality.

There are a number of different virtualization platforms on the market today, including VMware (the market leader), Microsoft and Citrix, and a few others are positioning to “get in.”  Each of these platforms has a different set of strengths and weaknesses, and most organizations expect that they will end up running a mix of all three, raising the issue of managing across heterogeneous environments.

The management systems provided by virtualization vendors, not surprisingly, focus more on VM deployment than on management of the environment.  Additionally, most of the traditional datacenter management systems, built for the physical datacenter, do not work well in virtualized environments.  These vendors have begun to expand into virtual management, but they have a significant task ahead of them: as fast as they upgrade their existing tool sets, new capabilities emerge from the virtualization platforms, leaving many of them as far behind as when they started.

Additional systems will be needed for the “virtual datacenter,” both to make up for the shortcomings of the existing datacenter management and virtualization systems, and to implement and enforce the additional policies that this space requires.  These include:

  • Discovery and reporting: The reporting available from the virtualization platform is minimal and focused on current state.  This may be appropriate for provisioning VMs, but it is not very useful for ongoing management or audit.  Most datacenters use manually maintained spreadsheets to track and report on VMs, and like most manual systems, these are subject to error and easily drift out of sync with reality.   A true reporting system that can gather information across multiple VirtualCenters (the VM management system) and create a trusted, real-time “Inventory of Record” is an essential component of the new virtualization architecture.
  • Mobility Control: Policies that delineate where specific VMs are allowed to run (e.g. Sarbanes-Oxley (SOX) applicable VMs restricted to a specific SOX area of the infrastructure) need to be complemented with a system that can enforce those rules.
  • Reclamation: When VMs can be created at the click of a mouse and can have lifecycles measured in minutes, the potential for sprawl is much higher in the virtual world than in the physical one.  Reclamation, the removal of unused VMs from the environment to reclaim their resources (disk, licenses, CPU and support), is mostly a manual affair, and given the workloads of most IT organizations, it is performed infrequently.  Automating it so that it occurs on a regular basis can greatly improve datacenter optimization.
  • Monitoring: Unlike the physical side of the datacenter, virtual machines do not necessarily have to be created internally.  Virtual appliances (fully functioning trial applications that do not need the “traditional” installation process) can be downloaded from an external website and easily installed on a datacenter host.  VMs can be created in a number of different ways by anyone with access to a VirtualCenter or vCenter, or they can be “walked in” on a CD or memory stick.  Consequently, some form of automatic monitoring of the entire environment, looking for “out of process” or unauthorized VMs, is essential.  This is a task that may be feasible to do manually when environments are small and not very dynamic, but it quickly becomes impossible as environments grow.   This kind of monitoring system also becomes a “safety net” for inexperienced, junior or simply overworked administrators: should they inadvertently try to deploy an unauthorized VM or, in the case of mobility controls, accidentally move a VM out of its legitimate environment, the system can catch the error before it becomes an incident.
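The monitoring idea above, sweeping the environment and comparing what is actually running against a trusted inventory of record, can be sketched in a few lines.  This is an illustrative example only: the host names, UUIDs and the shape of the discovery snapshot are invented for the sketch.

```python
# Hypothetical snapshot of what a discovery sweep found, keyed by host.
discovered = {
    "esx-01": {"uuid-a1", "uuid-b2"},
    "esx-02": {"uuid-c3", "uuid-d4"},  # uuid-d4 never went through provisioning
}

# Trusted "Inventory of Record" built from the authorized provisioning workflow.
inventory_of_record = {"uuid-a1", "uuid-b2", "uuid-c3"}

def find_unauthorized(discovered, inventory):
    """Return (host, vm_uuid) pairs for VMs running outside the record."""
    return sorted(
        (host, vm)
        for host, vms in discovered.items()
        for vm in vms
        if vm not in inventory
    )
```

Anything this sweep flags is, by definition, “out of process”: either the inventory of record is stale, or a VM was walked in around the controls.  Either answer is worth knowing.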

Fortunately, a range of new companies have built tools and systems specifically for the virtualized environment, supporting multiple platforms and focusing on the discrete issues and challenges that the virtual world creates.  Some vendors focus on performance and capacity planning, others on provisioning and reporting, and others on overall (and integrated) management and control, specifically for the virtual datacenter.

The collection of monitoring, automation, mobility control and insight capabilities is generally described as “VM lifecycle management.”

Recommendations:
Virtualization is here to stay.  If, like most IT organizations, you are ramping up the adoption of server virtualization in the datacenter, then you must consider its impact on your business controls now, before you hit the tipping point.

Specifically, you should:

  • Overlay existing standards and processes into the virtual space wherever possible: They will probably require tweaking, but ensuring they are in place will start to provide needed visibility and control, as well as insight into what additional management systems will be needed.
  • Establish the new standards and processes that this technology needs, increasing your visibility and control even further.  At a minimum, these should include:
    • Monitoring and Reporting
    • Provisioning control
    • VM Identity management
    • Mobility controls to enforce data separation
    • Reclamation of end-of-life VMs
  • Automate as much as possible:  Manual processes are inconsistent and time consuming. Ultimately, the only way to get your GRC model back in balance is to increase the level of automation by implementing the additional systems that this space needs.  This reduces manual activity, increases your control, reduces risk, and saves a significant amount of administration time.
  • Review your security architecture: Many types of security appliances and monitoring tools need to know what they are protecting and where it is in order to be effective, and the mobility of VMs can undermine that.    Other security technologies such as IDS, IPS, data leak prevention and malware prevention can also be affected.  The constant change enabled by virtualization places dynamic demands on “static” security solutions, even in small virtualized infrastructures.  Bottom line: some of your security infrastructure will not work well in a virtualized environment, and a security product that does not work well, for all intents and purposes, does not work at all.
  • Implement specific audit standards (and reporting systems) for the virtual space: As discussed, the virtualization environment requires new control policies, processes and standards.  As these come into place, they will require corresponding new audit processes and procedures.  For example, SOX-relevant applications will need to be segmented from other systems and have tighter access controls applied.  The inherent mobility of virtual servers adds a further question to the audit process: has this VM moved to another host during the audited period?  If so, what other VMs were on that host?
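Those two audit questions (did this VM move during the audited period, and what else shared its hosts?) reduce to interval queries over a placement log.  The sketch below is hypothetical: the log format and the host and VM names are assumptions, and it deliberately simplifies by checking overlap against the whole audited period rather than intersecting the exact windows each pair of VMs shared a host.

```python
from datetime import date

# Hypothetical placement log recorded by a lifecycle-management system:
# (vm, host, interval_start, interval_end).
placements = [
    ("sox-vm",   "esx-01", date(2009, 1, 1),  date(2009, 3, 15)),
    ("sox-vm",   "esx-02", date(2009, 3, 15), date(2009, 6, 30)),
    ("other-vm", "esx-02", date(2009, 1, 1),  date(2009, 6, 30)),
]

def hosts_visited(vm, log, start, end):
    """Which hosts did this VM run on during the audited period?"""
    return {h for v, h, s, e in log if v == vm and s < end and e > start}

def cotenants(vm, log, start, end):
    """Which other VMs ran on any of those hosts during the period?"""
    hosts = hosts_visited(vm, log, start, end)
    return {v for v, h, s, e in log
            if v != vm and h in hosts and s < end and e > start}
```

The point of the sketch is that these answers are only obtainable if placement history is being recorded in the first place; the platform’s own tools report current state, not where a VM has been.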

Software licensing is another area that will need specific focus in this space.

Virtualization started in most datacenters as a tactical effort, led by IT and focused on the ROI associated with server consolidation.  But moving from tactical server consolidation to more production-oriented uses of virtualization, and increasing the percentage of your datacenter that is virtual, requires a more strategic view of this architecture’s impact on datacenter management and business controls.  The recommended actions in this article will help you optimize the overall value of this technology.





David Lynch
VP of Marketing
Embotics

Vice president of marketing for Embotics, David is a dynamic, successful and well-rounded 30-year veteran of the high-tech marketplace with international expertise. He has spent the past 11 years in the high-tech industry working on, writing about and commenting on technologies and trends and their impact on the industry. Subjects have included authentication, access control, public key infrastructure (PKI), deperimeterization, compliance and virtualization.

A long-time member of Mensa, Mr. Lynch holds degrees in Nautical Science (City of London), Computer Technology (Montreal) and an MBA from the University of Toronto.





