Cool Green IT Products from DNS-DIRECT


Save money & energy with Green IT


Monday, 17 May 2010

VIRTUALIZATION

COST MAY BE A FACTOR, BUT THE ARGUMENTS AGAINST VIRTUALIZATION ARE GETTING WEAKER.
VIRTUALIZATION is arguably one of the most important data center technologies to emerge in the last decade. Because it abstracts the application from the physical hardware that runs it, virtualised workloads are far more flexible, mobile and available than traditional applications running on non-virtualised hardware. In fact, virtualization technology has permeated IT so quickly and thoroughly that it’s hard to imagine a business without it.
Still, deploying and managing virtualization presents a variety of challenges that some organisations just can’t handle by themselves. For data centers that aren’t on board yet, it’s time to examine the principal concerns that are stalling the adoption and expansion of virtualization and look for ways to address those issues.
Perhaps the biggest sticking point for virtualization is cost. In a recent TechTarget survey that questioned more than 900 IT professionals about data center virtualization from July to September 2009, 28% of respondents said they avoided virtualization because it’s just too expensive. 
COST AS A DEAL-BREAKER
That’s no big surprise. Virtualization often requires newer and more powerful server hardware that can adequately support the performance of numerous virtual workloads. Doing that may require a server with four, eight or even 16 CPU cores, along with 16 GB of memory or more. Nearly 20% of respondents indicated that their existing servers were not adequate but that buying new servers wasn’t an option.
Network architectures may also need to be updated before adopting virtualization. For example, a single 1 Gbps network connection may be totally inadequate to support the traffic from 10 or 20 virtual machine (VM) instances at the same time. Storage is another area that is frequently overlooked: virtualization deployments are best served by a fast Fibre Channel SAN for VM images and periodic snapshots.
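To put the 1 Gbps example above into perspective, here is a rough back-of-the-envelope sketch; the usable-bandwidth factor and the per-VM demand figure are illustrative assumptions, not measurements.

```python
# Rough, illustrative estimate of per-VM bandwidth on a shared uplink.
# The 70% "usable" factor and per-VM demand figure are assumptions, not measurements.

LINK_GBPS = 1.0           # a single 1 GbE uplink
USABLE_FRACTION = 0.7     # assume ~70% of line rate is realistically usable
PER_VM_DEMAND_MBPS = 100  # assumed average demand per busy VM

def per_vm_share_mbps(vm_count: int) -> float:
    """Bandwidth each VM gets if the usable link is shared evenly."""
    return (LINK_GBPS * 1000 * USABLE_FRACTION) / vm_count

for vms in (5, 10, 20):
    share = per_vm_share_mbps(vms)
    verdict = "probably fine" if share >= PER_VM_DEMAND_MBPS else "undersized"
    print(f"{vms:>2} VMs -> ~{share:.0f} Mbps per VM ({verdict})")
```

At 10 to 20 VMs the per-VM share falls well below what a busy workload typically needs, which is why faster or additional uplinks usually enter the upgrade conversation.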

That's it from Vince Bailey

Green IT Virtualization 2010

Virtualization in 2010
Many businesses still fail to take advantage of the significant benefits
offered by server virtualization. This expert E-Guide, brought to you by
SearchVirtualDataCentre.co.uk and EMC, highlights the top six predictions
for server virtualization in 2010. Discover how this technology will
improve the efficiency of disaster recovery, consolidation efforts, and
storage initiatives. Gain insight into the significant changes you can
expect from server virtualization in the coming year.

The growing adoption of server virtualization technologies in the enterprise has dramatically changed the traditional
data center for the better. As the technologies evolve, hypervisors become more sophisticated and the list of
available management tools continues to grow. But that's just the beginning of the changes you can expect. This
tip highlights a few predictions for various facets of server virtualization technologies throughout 2010.

Server virtualization technologies prediction No. 1: disaster recovery

As server virtualization technologies continue to evolve, so will virtual disaster recovery considerations and
planning. Although virtualization provides a backup of sorts, it is not a foolproof method. If one virtualization host goes
down, it can take hundreds of virtual machines (VMs) with it -- bringing enterprise operations to a screeching halt.
Having a solid DR plan in place and examining each aspect will make all the difference.
Concerns about compliance and business continuance are also driving the need for disaster recovery strategies.
Fibre Channel over Ethernet (FCoE), network virtualization, growing computing power and attention to security will all
influence the future of virtual DR.

Server virtualization technologies prediction No. 2: server consolidation

The future holds promise for server virtualization and consolidation. Server technologies continue to advance,
offering more processor cores and memory for the same amount of capital investment. This means that with each
technology refresh, a server can potentially host more VMs, further reducing the total number of physical servers.
Organizations must adjust their practices to evaluate new VMs in the same way that they evaluate new physical
servers. This is critical to prevent uncontrollable virtual machine sprawl -- the possibility of an organization's
100 physical servers today spiraling into 1,000 virtual servers tomorrow.
Experts consider server consolidation to be one step closer to a fully virtualized data center that abstracts business
data from its infrastructure.
"It's not just for power savings, hardware reduction or DR anymore," said Pierre Dorion, a Denver-based data
center practice director at Long View Systems, an IT solutions and services company. "We're looking to completely
abstract the physical layer at more levels than just the server." Inevitably, that will involve integration with other
virtualization efforts across the network, storage and desktops/endpoints.
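As a rough illustration of the consolidation maths above, the sketch below estimates how many hosts a fixed VM population needs as per-host cores and memory grow. Every capacity figure here is an assumption for illustration only.

```python
# Illustrative consolidation estimate: hosts needed for a fixed VM population
# as per-host capacity grows. Every capacity figure here is an assumption.

import math

VM_VCPU, VM_RAM_GB = 2, 4   # assumed "average" VM footprint
VCPU_PER_CORE = 4           # assumed vCPU-to-core overcommit ratio

def hosts_needed(total_vms: int, cores: int, ram_gb: int) -> int:
    vms_by_cpu = (cores * VCPU_PER_CORE) // VM_VCPU
    vms_by_ram = ram_gb // VM_RAM_GB
    vms_per_host = min(vms_by_cpu, vms_by_ram)
    return math.ceil(total_vms / vms_per_host)

for cores, ram in ((8, 32), (16, 64), (24, 128)):
    print(f"{cores} cores / {ram} GB RAM -> {hosts_needed(200, cores, ram)} hosts for 200 VMs")
```
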
Server virtualization technologies prediction No. 3: test and development

If you've deployed server virtualization in the test and development lab and have worked with VMs, it may be time
to take that next step. Try adding more virtual machines to the lab environment. You may also want to investigate
using storage area networks (SANs) to centralize all files comprising your VMs in an effort to improve laboratory
performance.

SANs cost more than typical tape and disk storage, so don't move to this level until the new virtual labs have
proven records of performance and cost reductions. Keep in mind that you must still put proper backup and
recovery practices into place.

Server virtualization technologies prediction No. 4: storage

Storage virtualization has been around for some time in one form or another. While its use in storage pooling and
consolidation may have peaked, experts like Greg Schulz, founder and senior analyst at The Server and StorageIO
Group, insist that this is just the beginning for other aspects of the technology.
"Storage virtualization in terms of agility, transparency, data movement, migration, emulation such as virtual tape …
we're seeing the tip of the iceberg," said Schulz. Administrators should expect to see continued product maturity
that will lead to more features that create more stability, interoperability and scalability, he added.
Network performance is also shifting, as FCoE and 10 GbE slowly emerge to provide the bandwidth needed for
critical storage-intensive applications across Ethernet LANs. Deduplication also plays an indirect role by reducing the
size of the overall data set, which can improve backup times and dramatically boost data migration speeds to DR
sites.
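To see why deduplication matters for DR, the following sketch shows how the effective data set, and hence the replication window, shrinks with the dedupe ratio. The link speed, data set size and ratios are assumed figures for illustration.

```python
# Illustrative only: how a deduplication ratio shrinks the data set that must be
# replicated to a DR site. The link speed, data size and ratios are assumptions.

def transfer_hours(data_tb: float, dedupe_ratio: float, link_gbps: float) -> float:
    effective_tb = data_tb / dedupe_ratio       # data left after deduplication
    effective_bits = effective_tb * 8e12        # TB -> bits (decimal units)
    seconds = effective_bits / (link_gbps * 1e9)
    return seconds / 3600

for ratio in (1.0, 3.0, 10.0):
    hours = transfer_hours(10, ratio, 1.0)
    print(f"dedupe {ratio:4.1f}:1 -> ~{hours:5.1f} h to replicate 10 TB over 1 Gbps")
```
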



Server virtualization technologies prediction No. 5: security

Virtualization has its weak points, and security flaws can easily surface in server configuration and OS patching. It's
much easier to overlook a configuration setting or OS patch level when there are dozens of VMs on a physical
server.
Traditional security techniques generally monitor network traffic and its behavior, which is suitable for distinct
physical servers and networking hardware. But when multiple servers are hosted on the same machine -- along
with network virtualization technologies like soft switches -- virtual security must emphasize inter-process monitoring
of VM interaction.
Keep an eye on your hypervisors, which can also suffer from security flaws and potentially expose the VMs that run under them.
Virtualization users should add hypervisor patching/updating to their OS maintenance process.
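A minimal sketch of that idea, using a hypothetical inventory and baseline rather than any real hypervisor API, might look like this; a real deployment would pull the same data from the hypervisor's own management tooling.

```python
# A minimal sketch of adding hypervisor and guest patch levels to routine checks.
# The inventory and baseline below are hypothetical illustrations only.

BASELINE = {"hypervisor_build": "208167", "guest_patch_level": "2010-02"}

inventory = [
    {"name": "esx-host-01", "hypervisor_build": "208167",
     "vms": [{"name": "web01", "guest_patch_level": "2010-02"},
             {"name": "db01",  "guest_patch_level": "2009-11"}]},
]

for host in inventory:
    if host["hypervisor_build"] != BASELINE["hypervisor_build"]:
        print(f"{host['name']}: hypervisor build out of date")
    for vm in host["vms"]:
        if vm["guest_patch_level"] < BASELINE["guest_patch_level"]:
            print(f"{host['name']}/{vm['name']}: guest patch level behind baseline")
```
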
Server virtualization technologies prediction No. 6: hypervisors

Today's major hypervisors are well-developed, so innovation may have slowed. While the industry is largely settling
into the "big three" products -- Citrix XenServer, VMware ESXi and Microsoft Hyper-V -- there is plenty of speculation
on where hypervisor technology is heading in 2010.
Improvements in hypervisors will facilitate inspection and enforcement tasks. "We have a lot of clients under
pressure to support more multitenant-type architectures where [implementers] start to mix some security zones on
the same physical infrastructure," said Chris Wolf, senior analyst for Burton Group data center strategies, noting the
importance of security in public cloud situations.
Vendors will dramatically develop management tools, including tools that help provide data path visibility and
troubleshoot applications in virtual environments, as well as planning tools for virtual desktop environments. Tool
evolution will incorporate products that manage virtualization throughout the data center infrastructure, including
servers, storage and the network. Eventually, virtualization tools that provide integration with the cloud will become
available.

Thursday, 25 February 2010

Sign up and become a part of the movement for Christ. From a youth to the youths

We have to take God out of the four walls of the church and stop keeping him trapped in a box. If the people on the streets are not coming to church then we need to take the church to them...it's all about relationships...check out the link below and show your support.

www.choppinituptv.ning.com


Monday, 22 February 2010

DNS virtualization storage secrets

VMware vSphere v4
Best Practices
VMware vSphere v4 is extremely powerful virtualization software, designed to reduce costs and improve IT control.
Most storage systems provide a highly reliable, easy-to-use, high value storage platform for this deployment.
There are several factors for you to consider, however, to
ensure that the needs of your applications and data are met. This document provides valuable insight on determining the number and size of datastores, planning the storage environment, and tuning performance of the system. It is intended for storage administrators with an
understanding of VMware vSphere v4 and of their storage systems.



VMware vSphere v4 includes the ESX server virtualization layer, VMware VMotion for live virtual machine migration, VMware DRS for continuous load balancing, and more. VMware ESX abstracts processor, memory, storage, and networking resources into multiple virtual machines (VMs), enabling greater hardware utilization and flexibility.

However, in order to leverage the power of VMware vSphere v4, you need to do your homework when planning the deployment.
DNS recommends you do extensive performance monitoring of your VMware ESX server environment to determine the optimal number of servers and datastores. VMware publishes comprehensive material on monitoring ESX servers; please visit www.vmware.com to learn more.

Datastores
When designing your organization’s vSphere storage environment, it is important to determine the
number of datastores that best fit your virtualization needs. Factors that weigh into this decision include
the desired number of VMs, organizational design/departmental units, the use of VMware High Availability (HA) or VMware DRS, and backup and restore requirements/service level agreements (SLAs).
DNS recommends that each datastore exist on its own virtual disk (VDisk). For example, if your organization has a single ESX server with two datastores, you will need to create two VDisks, one for each datastore.

This provides much more granularity in how you can manage that datastore on the storage array. If you wish to expand a VDisk because the related datastore has grown near capacity, or you wish to move the datastore to a different storage tier, you can do this independently of the other datastores.
In most cases, striping each VDisk across all the available ISE in the Emprise 7000 system will meet the performance requirements of the ESX server.


Planning Storage for VMware vSphere v4.x
Prior to rolling out VMware ESX, you must first determine the I/O requirements of the VMs so you can determine the optimal number of physical spindles (disk drives) needed to support the environment.
Your applications will still have the same I/O requirements even though they are virtualized, so remember to add all I/O requirements together. Please review the appropriate technical references or white papers for the applications that you are planning to run on the ESX server to determine the I/O requirement for each.

Having an I/O pool that supports all the virtualized servers/applications is rule #1.
The second most important decision you can make when designing your VMware ESX environment is choosing between maximum performance and maximum data protection.

• ISE-RAID provides the maximum VDisk performance and usable storage capacity by having data protection handled in the ISE module itself.

• ISE-Mirror provides the maximum level of data protection by using both ISE-level and controller-level data protection, but the amount of usable space is decreased.

Planning assumptions for calculating IOPS:
• 10k rpm spindles = 100 IOPS
• 15k rpm spindles = 150 IOPS
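
Putting those planning assumptions to work, a quick sketch of the spindle arithmetic could look like this; the per-application IOPS figures are illustrative, not recommendations.

```python
# Spindle-count estimate based on the planning assumptions above
# (100 IOPS per 10k rpm spindle, 150 IOPS per 15k rpm spindle).
# The per-application IOPS figures are illustrative assumptions.

import math

IOPS_PER_SPINDLE = {"10k rpm": 100, "15k rpm": 150}

# Hypothetical aggregate workload: add every VM's I/O requirement together.
vm_iops = {"mail server": 1200, "database": 900, "file/print": 300, "web farm": 400}
total_iops = sum(vm_iops.values())

for speed, per_spindle in IOPS_PER_SPINDLE.items():
    spindles = math.ceil(total_iops / per_spindle)
    print(f"{total_iops} IOPS on {speed} drives -> at least {spindles} spindles")
```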

Sizing the Datastores
In addition to planning the necessary I/O pool to support your applications, you will need to determine storage capacity requirements. You will need to consider:

• VM disk files
• VM swap
• Configuration files
• Redo/Snapshot files
• Metadata

Basic Sizing Assumptions
VMware has given a number of presentations at VMworld that contain formulae for calculating the
recommended storage capacity.
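
As a sketch only, and not VMware's published formula, a typical capacity calculation covering the items listed above might look like the following; the overhead factors and example VM sizes are assumptions.

```python
# A sketch of a typical datastore-sizing calculation covering the items listed
# above (VM disk files, swap, configuration files, snapshots, metadata).
# The overhead factors are assumptions, not VMware's published formula.

def datastore_gb(vms: int, disk_gb: float, mem_gb: float,
                 snapshot_factor: float = 0.15, config_gb: float = 0.5) -> float:
    per_vm = (disk_gb                       # virtual disk (VMDK) files
              + mem_gb                      # VM swap file (~ configured memory)
              + disk_gb * snapshot_factor   # assumed snapshot/redo headroom
              + config_gb)                  # config files, logs and metadata
    return vms * per_vm

# e.g. 20 VMs, each with a 40 GB virtual disk and 4 GB of configured memory
print(f"Estimated datastore size: ~{datastore_gb(20, 40, 4):.0f} GB")
```

In practice you would add further headroom for growth on top of whatever figure such a formula produces.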


Sunday, 21 February 2010

I am a Technician, and I hate Windows 7!

Posted on February 19th, 2010 by John Allen

Following on from my “Devalued Skill Base” blog, which hoped to put all of us “Silver Technicians” back in the running, I would like to raise a blog regarding the ‘dumbing down’ of Windows and the scaling up of Windows graphics.

Since the start of Windows I have seen a gradual dumbing down of the OS into a pictographic hunt-and-click environment that often takes twice as long to find what you are looking for and twice as much space to load as the previous version.

No doubt Microsoft (probably) asked home and end users what they wanted, but did they actually ask any of us Technicians? Yes, of course it is great that we now get real problems to deal with, and not just the “Have you turned it off and on again?” ones. People’s knowledge of the self-fix solutions is getting better, but as they say, “A little knowledge is dangerous.” However, as a Techie, having to go through menu after menu to get to where you want to be is so frustrating.

After phoning a help desk regarding setting up a broadband connection, I had to speak to 5 different support guys before I found anyone who could navigate around Windows 7. In fact, the first four support people were working from XP and Vista prompts and couldn’t find what they were looking for, and so just gave up. So why does Microsoft have to complicate / change what was working well before? Is it to protect their own Certified qualifications that we all need to have? Do we really need this extra over-simplification of Windows and the hiding of the things you need to see?

Many Users out there are getting more knowledgeable and use the Self-Help Fixes and Wizards. So why does Microsoft hide the very thing you are looking for behind multi-layer screens that rely so heavily on random pictographics? Really, the Windows OS does need to be smaller and clearer, but that doesn’t seem to have happened. When Windows looks and works more like an application than an OS (as it does now), system resources suffer, giving us slower and slower PCs.

I wouldn’t mind so much if Windows did everything I wanted it to do out of the box, but even with the super-big Vista loading up at 12-16 GB, we still had to load codecs, Flash Player etc. to get it to run what we wanted. Wouldn’t a smaller Windows OS have been better? A version where you decided what to load onto it, depending on your usage or needs: a thin-end version, totally user customisable, without all the dross that Microsoft insists we have and hardly ever use, and no more silly graphics whizzing around the screen and bloody nag screens.

The more I use Windows now, the more I like Linux! It runs from a single DVD, it has all the codecs, media players, apps and bits I need… without the dross, and it runs so fast. I wish Windows did!

in reference to: IT blog: I am a Technician, and I hate Windows 7! « theitblogjobboard