Cool Green IT Products from DNS-DIRECT


Monday, 17 May 2010

VIRTUALIZATION

COST MAY BE A FACTOR, BUT THE ARGUMENTS AGAINST VIRTUALIZATION ARE GETTING WEAKER.
VIRTUALIZATION is arguably one of the most important data center technologies to emerge in the last decade. By abstracting the application from the physical hardware that runs it, virtualized workloads are far more flexible, mobile and available than traditional applications running on non-virtualized hardware. In fact, virtualization technology has permeated IT so quickly and thoroughly that it's hard to imagine a business without it.
Still, deploying and managing virtualization presents a variety of challenges that some organizations just can't handle by themselves. For data centers that aren't on board yet, it's time to examine the principal concerns that are stalling the adoption and expansion of virtualization and look for ways to address those issues.
Perhaps the biggest sticking point for virtualization is cost. In a recent TechTarget survey that questioned more than 900 IT professionals about data center virtualization from July to September 2009, 28% of respondents said they avoided virtualization because it’s just too expensive. 
COST AS A DEAL-BREAKER

That's no big surprise. Virtualization often requires newer and more powerful server hardware that can adequately support the performance of numerous virtual workloads. Doing that may require a server with four, eight or even 16 CPU cores, along with 16 GB of memory or more. Nearly 20% of respondents indicated that their existing servers were not adequate but that buying new servers wasn't an option.
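To put rough numbers on that hardware question, here is a minimal sizing sketch in Python. The per-VM vCPU and memory figures and the 4:1 overcommit ratio are illustrative assumptions, not recommendations; substitute measured values from your own workloads.

# Back-of-the-envelope host sizing: how many VMs fit on one server?
# The per-VM requirements below are illustrative assumptions only;
# real workloads vary widely and should be measured, not guessed.

VM_VCPUS = 2          # assumed vCPUs per VM
VM_RAM_GB = 2         # assumed RAM per VM, in GB
CPU_OVERCOMMIT = 4    # assumed vCPU-to-core ratio (4:1 rule of thumb)

def vms_per_host(cores: int, ram_gb: int) -> int:
    """Return the smaller of the CPU-bound and RAM-bound VM counts."""
    by_cpu = (cores * CPU_OVERCOMMIT) // VM_VCPUS
    by_ram = ram_gb // VM_RAM_GB
    return min(by_cpu, by_ram)

for cores, ram_gb in [(4, 16), (8, 32), (16, 64)]:
    print(f"{cores:>2} cores / {ram_gb} GB RAM -> ~{vms_per_host(cores, ram_gb)} VMs")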
Network architectures may also need to be updated before adopting virtualization. For example, a single 1 Gbps network connection may be totally inadequate to support the traffic from 10 or 20 virtual machine (VM) instances at the same time. Storage is another area that is frequently overlooked. Virtualization deployments are best served with a fast Fibre Channel SAN for VM images and periodic snapshots.
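A similar back-of-the-envelope check works for the network. The sketch below assumes an average of 25 Mbps per VM and 30% headroom for bursts; both figures are placeholders for your own measurements.

# Rough check: can a single NIC carry the combined traffic of N VMs?
# The 25 Mbps average-per-VM figure is a placeholder assumption;
# substitute measured peaks from your own environment.

LINK_MBPS = 1000        # a single 1 Gbps uplink
AVG_VM_MBPS = 25        # assumed average traffic per VM
HEADROOM = 0.7          # keep ~30% headroom for bursts

for vm_count in (10, 20, 40):
    demand = vm_count * AVG_VM_MBPS
    ok = demand <= LINK_MBPS * HEADROOM
    print(f"{vm_count} VMs need ~{demand} Mbps -> {'OK' if ok else 'link saturated'}")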

That's it from Vince Bailey

Green IT Virtualization 2010

Virtualization in 2010
Many businesses still fail to take advantage of the significant benefits
offered by server virtualization. This expert E-Guide, brought to you by
SearchVirtualDataCentre.co.uk and EMC, highlights the top six predictions
for server virtualization in 2010. Discover how this technology will
improve the efficiency of disaster recovery, consolidation efforts, and
storage initiatives. Gain insight into the significant changes you can
expect from server virtualization in the coming year.

The growing adoption of server virtualization technologies in the enterprise has dramatically changed the traditional
data center for the better. As the technologies evolve, hypervisors become more sophisticated and the list of
available management tools continues to grow. But that's just the beginning of the changes you can expect. This
tip highlights a few predictions for various facets of server virtualization technologies throughout 2010.

Server virtualization technologies prediction No. 1: disaster recovery

As server virtualization technologies continue to evolve, so will virtual disaster recovery considerations and planning. Although virtualization provides a backup of sorts, it is not a foolproof method. If one virtualization host goes down, it can take dozens or even hundreds of virtual machines (VMs) with it -- bringing enterprise operations to a screeching halt. Having a solid DR plan in place and examining each aspect of it will make all the difference.
Concerns about compliance and business continuance are also driving the need for disaster recovery strategies. Fibre Channel over Ethernet (FCoE), network virtualization, growing computing power and attention to security will all influence the future of virtual DR.
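One simple way to examine that aspect of a DR plan is a blast-radius check: count how many VMs each host takes down with it. The inventory and the per-host tolerance in this Python sketch are made-up examples; in practice the data would come from your hypervisor's management tooling.

# Blast-radius check for a DR plan: if a given host fails, how many VMs
# go down with it? The inventory below is a fabricated example.

inventory = {
    "host-01": ["web-01", "web-02", "db-01"],
    "host-02": ["web-03", "app-01", "app-02", "db-02", "dc-01"],
    "host-03": ["test-01"],
}

MAX_VMS_PER_HOST = 4  # assumed tolerance set by the DR plan

for host, vms in inventory.items():
    flag = "REVIEW" if len(vms) > MAX_VMS_PER_HOST else "ok"
    print(f"{host}: {len(vms)} VMs lost on failure [{flag}]")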

Server virtualization technologies prediction No. 2: server consolidation

The future holds promise for server virtualization and consolidation. Server technologies continue to advance, offering more processor cores and memory for the same amount of capital investment. This means that each technology refresh can potentially host more VMs and further reduce the total number of physical servers.
Organizations must adjust their practices to evaluate new VMs in the same way that they evaluate new physical servers. This is critical to prevent uncontrollable virtual machine sprawl -- the possibility of an organization with 100 physical servers today spiraling to 1,000 virtual servers tomorrow.
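One hedge against sprawl is to gate VM requests behind the same paperwork a physical server would need. The required fields in this sketch are assumptions standing in for whatever your own change-control process demands.

# Treating a VM request like a physical server request: a minimal
# approval-gate sketch with assumed required fields.

REQUIRED_FIELDS = {"owner", "business_justification", "decommission_date"}

def review_vm_request(request: dict) -> bool:
    """Reject a VM request that lacks the paperwork a physical
    server would need; skipping this is how sprawl starts."""
    missing = REQUIRED_FIELDS - request.keys()
    if missing:
        print(f"Rejected: missing {sorted(missing)}")
        return False
    print(f"Approved: {request['owner']} until {request['decommission_date']}")
    return True

review_vm_request({"owner": "finance", "business_justification": "month-end batch"})
review_vm_request({"owner": "finance", "business_justification": "month-end batch",
                   "decommission_date": "2010-12-31"})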
Experts consider server consolidation to be one step closer to a fully virtualized data center that abstracts business
data from its infrastructure.
"It's not just for power savings, hardware reduction or DR anymore," said Pierre Dorion, a Denver-based data
center practice director at Long View Systems, an IT solutions and services company. "We're looking to completely
abstract the physical layer at more levels than just the server." Inevitably, that will mean integrating with other virtualization efforts throughout the network, storage and desktops/endpoints.

Server virtualization technologies prediction No. 3: test and development

If you've deployed server virtualization in the test and development lab and have worked with VMs, it may be time
to take that next step. Try adding more virtual machines to the lab environment. You may also want to investigate
using storage area networks (SANs) to centralize all files comprising your VMs in an effort to improve laboratory
performance.

SANs cost more than typical tape and disk storage, so don't move to this level until the new virtual labs have
proven records of performance and cost reductions. Keep in mind that you must still put proper backup and
recovery practices into place.
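A quick way to frame that decision is cost per usable gigabyte. The figures in this sketch are invented placeholders; plug in real quotes and your measured lab gains before committing to a SAN.

# Before moving lab VMs to a SAN, compare cost per usable GB. All
# prices here are invented placeholders, not market data.

def cost_per_gb(total_cost: float, usable_gb: float) -> float:
    return total_cost / usable_gb

local_disk = cost_per_gb(total_cost=2_000, usable_gb=2_000)   # assumed DAS figures
san = cost_per_gb(total_cost=50_000, usable_gb=10_000)        # assumed FC SAN figures

print(f"Local disk: ${local_disk:.2f}/GB, SAN: ${san:.2f}/GB")
print(f"SAN premium: {san / local_disk:.1f}x -> justify with measured lab gains")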

Server virtualization technologies prediction No. 4: storage

Storage virtualization has been around for some time in one form or another. While its use in storage pooling and
consolidation may have peaked, experts like Greg Schulz, founder and senior analyst at The Server and StorageIO
Group, insist that this is just the beginning for other aspects of the technology.
"Storage virtualization in terms of agility, transparency, data movement, migration, emulation such as virtual tape …
we're seeing the tip of the iceberg," said Schulz. Administrators should expect to see continued product maturity
that will lead to more features that create more stability, interoperability and scalability, he added.
Network performance is also shifting, as FCoE and 10 GbE slowly emerge to provide the bandwidth needed for
critical storage-intensive applications across Ethernet LANs. Deduplication also plays an indirect role by reducing the
size of the overall data set, which can improve backup times and dramatically boost data migration speeds to DR
sites.
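The arithmetic behind that speedup is simple: replication time scales with the post-deduplication data size. The 5:1 ratio and 100 Mbps WAN link below are assumed examples, not benchmarks.

# How deduplication shortens DR replication: transfer time scales with
# the post-dedup data size. Ratio and link speed are assumed examples.

DATA_TB = 10          # assumed primary data set
DEDUP_RATIO = 5       # assumed 5:1 reduction
WAN_MBPS = 100        # assumed replication link to the DR site

def transfer_hours(size_tb: float, link_mbps: float) -> float:
    bits = size_tb * 8 * 1e12          # TB -> bits (decimal units)
    return bits / (link_mbps * 1e6) / 3600

print(f"Raw:     {transfer_hours(DATA_TB, WAN_MBPS):6.1f} h")
print(f"Deduped: {transfer_hours(DATA_TB / DEDUP_RATIO, WAN_MBPS):6.1f} h")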



Server virtualization technologies prediction No. 5: security

Virtualization has its weak points, and security flaws can easily surface in server configuration and OS patching. It's
much easier to overlook a configuration setting or OS patch level when there are dozens of VMs on a physical
server.
Traditional security techniques generally monitor network traffic and its behavior, which works well with distinct physical servers and networking hardware. But when multiple servers are hosted on the same machine -- along with network virtualization technologies like soft switches -- virtual security must emphasize monitoring the interaction between VMs.
Keep an eye on your hypervisors as well, which can also suffer from security flaws and potentially expose the VMs that run under them. Virtualization users should add hypervisor patching and updating to their OS maintenance process.
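A minimal drift check along those lines compares each VM's and hypervisor's reported patch level against a baseline. The inventory here is fabricated; real data would come from your patch-management or hypervisor tooling.

# Config-drift sketch: flag machines whose patch level lags a baseline.
# The inventory is a made-up example using YYYY-MM patch levels.

BASELINE_PATCH = "2010-05"

inventory = {
    "hyp-01":    "2010-05",   # hypervisors need patching too
    "vm-web-01": "2010-05",
    "vm-web-02": "2010-03",
    "vm-db-01":  "2009-11",
}

stale = {name: lvl for name, lvl in inventory.items() if lvl < BASELINE_PATCH}
for name, lvl in sorted(stale.items()):
    print(f"{name}: patch level {lvl} behind baseline {BASELINE_PATCH}")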

Server virtualization technologies prediction No. 6: hypervisors

Today's major hypervisors are well-developed, so innovation may have slowed. While the industry is largely settling
into the "big three" products -- Citrix XenServer, VMware ESXi and Microsoft Hyper-V -- there is plenty of speculation
on where hypervisor technology is heading in 2010.
Improvements in hypervisors will facilitate inspection and enforcement tasks. "We have a lot of clients under
pressure to support more multitenant-type architectures where [implementers] start to mix some security zones on
the same physical infrastructure," said Chris Wolf, senior analyst for data center strategies at Burton Group, noting the
importance of security in public cloud situations.
Vendors will dramatically develop management tools, including tools that help provide data path visibility and troubleshoot applications in virtual environments, as well as planning tools for virtual desktop environments. Tool evolution will incorporate products that manage virtualization throughout the data center infrastructure, including servers, storage and the network. Eventually, virtualization tools that provide integration with the cloud will become available.