
Friday 13 November 2009

Green IT and Green data centres


Server virtualisation
Server virtualisation is the traditional starting point for creating a green data centre. However, virtualisation has moved on considerably since its reincarnation from mainframe computing several years ago. Concerns over server performance have been tempered by VMware claiming up to 85% of native performance with its vSphere 4, and by Parallels' use of near-native performance technologies.
One initial aim of server virtualisation was to address physical server sprawl. IT managers now face a fresh challenge -- virtual server sprawl. With vendors like Microsoft, VMware and Citrix now offering free server virtualisation software, it is easier than ever to create and deploy servers for every conceivable need. These are often left running in the server farm, unnecessarily consuming physical resources. Vendor-agnostic assessment software like VReady from Lanamark will produce reports showing how your virtualised data centre could look using technologies from Microsoft, VMware, Citrix and Parallels. Reassessing a fully or partially virtualised data centre can be as revealing as assessing a data centre for the first time.
Decommission unused servers
The larger the data centre, the more likely it is that there will be servers running without providing services. Decommissioning unused servers saves power, releases software licences for use elsewhere, reduces maintenance charges and lowers the required cooling level. BT has saved 5.3 GWh per year by adding server decommissioning, with its associated cooling reductions, to its virtualisation plans.
Upgrade existing UPS systems
This is often overlooked, but a refresh of uninterruptible power supplies (UPS) could reduce both power consumption and heat output. Newer UPS systems can run at up to 97% efficiency, meaning only 3% of power is lost as heat. Older UPS models can lose as much as 30% of their power in the form of heat. Modular UPS systems like Symmetra from APC only use the power cells that are required.
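As a rough illustration of how that difference compounds over a year of continuous operation, the sketch below compares annual heat losses at the two efficiency levels; the 200 kW IT load and the tariff are illustrative assumptions, not figures from any particular site.
```python
# Back-of-envelope comparison of UPS heat loss at two efficiency levels.
# The 200 kW IT load and electricity price are illustrative assumptions.

IT_LOAD_KW = 200.0          # power actually delivered to the IT equipment
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10        # assumed tariff in GBP

def annual_waste_kwh(efficiency: float) -> float:
    """kWh per year lost as heat for a UPS at the given efficiency."""
    input_kw = IT_LOAD_KW / efficiency        # power drawn from the grid
    return (input_kw - IT_LOAD_KW) * HOURS_PER_YEAR

old = annual_waste_kwh(0.70)   # older UPS losing ~30% as heat
new = annual_waste_kwh(0.97)   # modern UPS at ~97% efficiency

print(f"Old UPS waste: {old:,.0f} kWh/yr")
print(f"New UPS waste: {new:,.0f} kWh/yr")
print(f"Saving: {(old - new) * PRICE_PER_KWH:,.0f} GBP/yr (before cooling savings)")
```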
Work more closely with facilities
The old adage is that IT uses the power but another department gets the bill. Working with facilities or finance to understand how power is purchased can ensure that any green benefits are highlighted. It is even possible to provision the data centre's power entirely from a renewable energy source. For example, Rackspace's hosting centre in Slough uses power supplied by Slough Heat & Power Limited from its biomass energy plant. Creating the green data centre extends beyond the remit of IT alone.
Reduce storage requirements
Single-instance storage (SIS), or data de-duplication, is now an accepted and widespread practice and has helped to reduce storage requirements in data centres. However, the need for the data centre to serve files to users across the WAN has led to newer, complementary technologies like Local Instance Networking (LIN). This provides a local cache of used files at locations on the WAN, removing the need to interrogate the data centre.
For companies with high file transfers across the WAN, introducing LIN technologies can both reduce the workload on the data centre and reduce the usage of the WAN. Lesser-known vendors like Silver Peak are helping enterprises like Ernst & Young achieve WAN-wide benefits.
Storage virtualisation
Addressing the ever-growing requirement for storage by using existing available storage will both reduce capital expenditure and lessen the associated power and cooling demands of introducing new equipment. Thin provisioning allocates storage space only as the data is written, thus removing the need for pre-allocated storage. Technologies from vendors like NetApp and 3Par offer opportunities to maximise storage efficiency and reduce storage hardware.
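To make the thin-provisioning idea concrete, here is a minimal sketch: the volume advertises a large logical size, but physical blocks are drawn from the shared pool only on first write. The class and method names are illustrative, not any vendor's actual interface.
```python
# Minimal sketch of thin provisioning: physical blocks are allocated from
# the shared pool only when data is first written, not when the volume is
# created. All names and sizes here are illustrative.

class ThinPool:
    def __init__(self, physical_blocks: int):
        self.free = physical_blocks

    def allocate(self) -> None:
        if self.free == 0:
            raise RuntimeError("pool exhausted -- add physical storage")
        self.free -= 1

class ThinVolume:
    def __init__(self, pool: ThinPool, logical_blocks: int):
        self.pool = pool
        self.logical_blocks = logical_blocks   # advertised size
        self.mapped = {}                       # logical block -> data

    def write(self, block: int, data: bytes) -> None:
        if block not in self.mapped:
            self.pool.allocate()               # physical space used only now
        self.mapped[block] = data

pool = ThinPool(physical_blocks=100)
vol = ThinVolume(pool, logical_blocks=1000)    # advertised at 10x the pool
vol.write(0, b"hello")
vol.write(0, b"world")                         # rewrite: no new allocation
print(f"Logical blocks: {vol.logical_blocks}, physical used: {100 - pool.free}")
```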
Using benchmarks
Establishing a baseline from which the introduction of a greener strategy can be measured helps bring the new approach into a more formal business environment. Indicators such as electricity usage and costs are simple for all disciplines of a business to understand, and organisations like the Carbon Trust can assist a company in identifying business-wide energy measurements.
From a data centre perspective, more IT-specific measurements, such as physical server host uptime in the virtual data centre, will prove useful indicators of how energy efficient you really are. The Standard Performance Evaluation Corporation (SPEC) was formed to create a standardised set of relevant benchmarks that can be applied to the latest generation of servers.
Increase the temperature
Data centres are historically cold, as evidence suggests servers run better when it is cooler. But making a data centre too cold creates a high energy bill. With newer data centre hardware able to withstand slightly higher temperature and humidity, a reassessment of the data centre temperature could be in order. Research from Hewlett-Packard concludes that for every one- to two-degree increase in temperature, a 2% to 4% saving in cooling can be achieved. With average data centre temperatures running between 65 and 70 degrees Fahrenheit, data centres running below this could see significant savings by adjusting the temperature.
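As a rough sketch of what that research implies, the snippet below projects annual cooling savings from a set-point increase. The baseline cooling consumption and the four-degree rise are assumptions for illustration, and HP's range is read conservatively as about 2% per degree.
```python
# Rough projection of cooling savings from raising the set point. HP's
# research suggests a 2-4% cooling saving per 1-2 degree increase; here we
# read that conservatively as about 2% per degree F. The baseline cooling
# consumption and the 4-degree rise are illustrative assumptions.

BASELINE_COOLING_KWH = 500_000      # assumed annual cooling consumption
PER_DEGREE_SAVING = 0.02            # conservative reading of HP's range
degrees_raised = 4                  # e.g. from 66F up to 70F

saving_kwh = BASELINE_COOLING_KWH * PER_DEGREE_SAVING * degrees_raised
print(f"Estimated cooling saving: ~{saving_kwh:,.0f} kWh per year")
```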
Design improvements
Simple design changes to existing data centres can help reduce cooling requirements and associated cost. Inefficient air cooling and management, resulting from poor server orientation, inappropriate use of rack cooling and badly positioned floor vents, compound the problem.
Check that all rackmounted equipment is cooling consistently, either "front to back" or "back to front," and that multiple rows of racks do not compound the problem by sending their hot exhaust air into the neighbouring servers' cool air intakes. Proper use of available floor vents to maximise and complement rack airflow is important, too. For example, rearranging floor tiles to create hot and cold aisles will bring airflow benefits.
Choose more manageable servers
When selecting new servers or blades for the data centre, attention should be given to the manageability of the components and their ability to switch off when not in use. Configured correctly, this can bring considerable reductions in power consumption by automatically switching off unused components like CPUs. When combined with virtualised server resource management software, like VMware's Distributed Resource Scheduler, power utilisation can be maximised. For example, a data centre serving a U.K.-based workforce will see power consumption fall overnight as server resources are powered down when they become unused.
Finally, it is worth noting that an increasing number of innovative green finance options are becoming available. For example, Salix gains funding from the Carbon Trust and works across all areas of the public sector, providing interest-free grants. Grants pay for the initial capital expenditure, with the financial savings made by the organisation returned to the fund until the original capital investment is repaid.
Properly planned and appropriately executed, a green approach to the data centre brings technical, operational and business benefits, and contributes to the wider effort to reduce our dependency on non-renewable energy resources.

Thursday 12 November 2009

Disaster recovery plan


Implementing an effective disaster recovery plan is predominantly about defining and introducing suitable processes and procedures, but technology plays a key role in enabling execution.
Here's a quick guide to the technologies that can help you establish DR provision – with links to in-depth SearchStorage.co.uk content on each.

Archiving
What it does: Archiving tools store data no longer used in day-to-day activity but which must be retained, in some cases for decades and often to meet compliance rules. Archiving tools monitor data, migrate it to appropriate media (cheaper disk or tape) and in some cases maintain checks and controls on the integrity of the data for compliance reasons. Email is a major application covered by archiving tools and services.


DR role: Archiving is a complement to backup rather than a replacement for it. Backups of the archive are necessary, but archiving tools will allow restoration of the items in the archive, in some cases quite rapidly. By first restoring shells of files that can be accessed by users, the archiving tool prioritises the restoration of the full files corresponding to those that users have requested.
Key vendors: Archiving software: Autonomy Zantaz, EMC, Mimosa Systems, Mimecast, Quest, Symantec. Hardware: EMC, Hewlett-Packard, Hitachi Data Systems

Backup
What it does: Backup tools copy data held in files or databases to secondary storage media. The aim is to ensure swift access to data to minimise downtime in case of an equipment failure or other disaster. Unlike with archiving software, it is often impossible to search easily for individual files and restore them from backups.
DR role: Copies of backups are recovered from storage media in the event of a disaster and can be restored to file and database systems running at a secondary DR site. The downside is that if disaster strikes, any data generated since the last backup took place will be lost. Backup activity is still the foundation of many DR plans.
Key vendors: Acronis, Asigra, BakBone, Commvault, Computer Associates, EMC, HP, IBM Tivoli, Symantec.

Continuous data protection
What it does: Continuous data protection (CDP) copies data from a source to a target system on an ongoing basis every time a change is made.
DR role: Employing CDP enables restoration of a system to the point where the last modification to data occurred, reducing the recovery point objective to almost zero. Because CDP keeps a record of every transaction that takes place, the most recent clean copy of a document can be recovered quickly if disaster strikes. The downside is that, because copies are not taken at regular timed intervals, it can be difficult to establish when the last one was taken without suitable management tools in place to help.
Key vendors: Atempo, Computer Associates, Double-Take, EMC, FalconStor, IBM Tivoli, InMage, Microsoft, Symantec
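The core idea can be sketched in a few lines: every detected change to a protected file produces a new timestamped copy, so recovery can roll back to the last write. Real CDP products intercept writes at the block or filesystem layer rather than polling; the file names below are illustrative.
```python
# Minimal sketch of the CDP idea: every detected change to a source file is
# copied to a timestamped version, so recovery can go back to the last write.
# Real CDP products intercept writes rather than polling; this loop is only
# an illustration, and the file names are placeholders.

import shutil, time
from pathlib import Path

SOURCE = Path("important.db")          # illustrative file to protect
TARGET_DIR = Path("cdp_store")
TARGET_DIR.mkdir(exist_ok=True)

last_mtime = None
while True:
    mtime = SOURCE.stat().st_mtime
    if mtime != last_mtime:            # a change was made since last check
        stamp = time.strftime("%Y%m%d-%H%M%S")
        shutil.copy2(SOURCE, TARGET_DIR / f"{SOURCE.name}.{stamp}")
        last_mtime = mtime             # every version is retained
    time.sleep(1)
```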

Snapshots
What it does: Snapshots are copies of data made from a source to a target system at pre-determined points, whether that is every 10 minutes, once an hour or three times a day. Depending on their RPOs, many organisations substitute frequent snapshots for CDP.
DR role: Snapshotting enables the restoration of a system to the point where the last copy took place. The downside is that, if disaster hits, any data generated since the last snapshot occurred will be lost, as it would be with other traditional forms of backup.
Key vendors: All major storage vendors
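The same protection can be sketched as a schedule-driven loop, in contrast with the continuous copying shown under CDP: copies are taken at a fixed interval and the oldest expire, so anything written between snapshots is exposed. The paths, interval and retention count are illustrative assumptions.
```python
# Sketch of interval-based snapshots with a fixed retention count. Copies
# are taken on a schedule, so data written between snapshots is at risk.

import shutil, time
from pathlib import Path

SOURCE = Path("data")                   # directory to protect (illustrative)
SNAP_DIR = Path("snapshots")
INTERVAL_SECONDS = 600                  # every 10 minutes
KEEP = 6                                # retain the last hour of snapshots

SNAP_DIR.mkdir(exist_ok=True)
while True:
    stamp = time.strftime("%Y%m%d-%H%M%S")
    shutil.copytree(SOURCE, SNAP_DIR / stamp)   # take one snapshot
    snaps = sorted(SNAP_DIR.iterdir())
    for old in snaps[:-KEEP]:           # expire the oldest snapshots
        shutil.rmtree(old)
    time.sleep(INTERVAL_SECONDS)
```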

Tape
What it does: For years, people have been backing up data by copying contents from their storage devices to a tape cartridge that can be stored offsite. If there is a disk failure or data gets deleted, missing data can be copied back from tape onto live storage. Tape backup has been losing ground to disk backup, which is faster and easier to restore from.
DR role: Although its role in backup has diminished in favour of disk, tape still has a firm presence in DR. Backup to tape enables storage professionals to periodically copy data from a primary storage device to a tape library so it can be recovered if a failure occurs. Backup to tape is quick if optimised, and can be undertaken at the primary data centre with the media moved to a secure secondary site.
Key vendors: HP, IBM, Overland Storage, Quantum, SpectraLogic, Sun Microsystems, Tandberg Data

Disk
What it does: Disk-to-disk backup involves backing up data on a computer's hard disk to another disk in a storage array.
DR role: Backing up to disk is usually faster than tape backups because it's difficult to optimise tape streaming. Where disk-based backup does win hands down, however, is in restoration of files. It is easier and quicker to restore specific files from disk than tape in the event of a disaster. If network-connected, disk also removes the need to move multiple backup tapes back and forth between primary and secondary sites for recovery purposes.

Virtual tape library
What it does: A virtual tape library (VTL) is a disk-based staging device that mimics a tape library. Data backed up to a VTL can be replicated to other VTLs at a remote site or moved off to real tape for DR.
DR role: Employing a VTL lets organisations maintain the same backup and restore procedures for disk that they would employ for tape. Because a VTL is disk-based, backup and recovery are quicker than with tape. One VTL can also replicate to other VTLs at a secondary location on a one-to-one or one-to-many basis, and users can decide which workloads require such site-to-site replication. This removes the need to move multiple backup tapes between primary and secondary sites for recovery purposes.
Key vendors: EMC, Data Domain, Hewlett-Packard, NetApp, Overland Storage, Quantum, Sepaton, Sun Microsystems, Tandberg Data

Data deduplication
What it does: Data deduplication reduces the amount of data backed up by eliminating duplicate files and blocks. This typically reduces the storage capacity required for backup by a ratio of between 2:1 and 40:1, depending on the type of data and the frequency of backups.
The process works by ensuring that only the first unique instance of a piece of data is retained. Subsequent iterations of the data are replaced with a pointer to the original.
DR role: The technology can, depending on whether it is the inline or post-process flavour of deduplication, reduce the amount of data that has to be sent over a WAN, optimising bandwidth use when undertaking remote backup, replication and disaster recovery. Careful attention must be paid to restoration processes, as data must be rebuilt from its deduplicated state.
Key vendors: Copan, EMC, Data Domain, NEC, FalconStor, NetApp, Overland Storage, Quantum, Symantec.
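A minimal sketch of the mechanism: each fixed-size chunk is stored once, keyed by a hash, and repeats become pointers to the first instance. Real products vary the chunking and hashing strategy; the chunk size here is an arbitrary choice.
```python
# Sketch of block-level deduplication: each fixed-size chunk is stored once,
# keyed by its hash; repeats are recorded as pointers to the first instance.

import hashlib

CHUNK_SIZE = 4096     # arbitrary chunk size for illustration
store = {}            # hash -> unique chunk data
recipes = {}          # filename -> ordered list of chunk hashes

def dedupe(name: str, data: bytes) -> None:
    hashes = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)     # store only the first instance
        hashes.append(digest)               # later copies become pointers
    recipes[name] = hashes

def restore(name: str) -> bytes:
    """Rebuild a file from its chunk pointers -- the step that makes
    restoration from a deduplicated store slower than a plain copy."""
    return b"".join(store[h] for h in recipes[name])

dedupe("a.bin", b"x" * 8192)
dedupe("b.bin", b"x" * 8192)                # identical data: no new chunks
print(f"Unique chunks stored: {len(store)} for 2 files of 2 chunks each")
```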


Failover
What it does: Failover refers to a way of configuring servers to enable redundant or standby secondary devices to take over from primary ones in the case of failure, abnormal termination of activity or scheduled downtime. This process can be undertaken manually or automatically depending on the setup.
DR role: A secondary site takes over critical operations after failover occurs, allowing the network to continue functioning following a disaster. When the primary server or site is ready to resume, operations fail back to the original location.
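A minimal sketch of the automatic case, assuming a simple TCP health check: after a run of failed probes, traffic is redirected to the standby. Hostnames, port and thresholds are placeholders, and production setups add quorum checks and fail-back logic.
```python
# Minimal sketch of automatic failover: a monitor polls the primary and
# redirects to the standby after repeated failed health checks.
# Hosts, port and thresholds are illustrative assumptions.

import socket, time

PRIMARY = ("primary.example.com", 443)
STANDBY = ("standby.example.com", 443)
FAILURE_THRESHOLD = 3                     # consecutive misses before failover

def is_alive(host_port, timeout=2.0) -> bool:
    try:
        with socket.create_connection(host_port, timeout=timeout):
            return True
    except OSError:
        return False

active, failures = PRIMARY, 0
while True:
    if is_alive(active):
        failures = 0
    else:
        failures += 1
        if active == PRIMARY and failures >= FAILURE_THRESHOLD:
            active = STANDBY              # fail over to the secondary site
            print("Failover: standby is now active")
    time.sleep(5)
```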

Wednesday 11 November 2009

Is cloud data storage right for your IT infrastructure?

Cloud data storage is becoming an increasingly appealing option for many IT infrastructures. However, because cloud storage is relatively new, most enterprise data storage managers are unsure of what questions to ask potential service providers to determine how storage clouds will impact their environment.

Let's examine five questions storage managers should ask when considering cloud data storage:

1. Is the application or end user able to tolerate low performance from the storage?

Because of the architecture of cloud storage, you can expect high latency on file or directory access requests. So if your next project is a SQL server database or a mail server for a large site, then cloud storage probably isn't the way to go.

But if you're implementing a file server for your remote sales force, then the access time of the cloud is probably consistent with the Wi-Fi or shared Internet connection they would be expecting to use. The cloud also provides access to the data from anywhere that has an Internet connection.

There's a user experience component to this as well: Are users expecting the performance of local storage? It's important to communicate upfront that moving data to the cloud may affect the user experience. If end users have local file servers or even centralized file servers in a data center accessible by a private WAN, accessing data from the cloud will most likely be slower. This can be mitigated by providing multiple Internet connections or a higher level of network service for outbound storage requests.
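Before committing, it is worth measuring the gap directly. The sketch below times small reads against local storage and against a candidate cloud endpoint; the test file and URL are placeholders for whatever test object the provider exposes.
```python
# Quick comparison of per-read latency: local disk versus a cloud endpoint.
# The local file and the URL are placeholders, not a real provider's API.

import time
import urllib.request
from pathlib import Path

def avg_seconds(fn, repeats: int = 5) -> float:
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

local = avg_seconds(lambda: Path("testfile.bin").read_bytes())
cloud = avg_seconds(lambda: urllib.request.urlopen(
    "https://storage.example.com/testfile.bin").read())
print(f"Local: {local * 1000:.1f} ms per read, cloud: {cloud * 1000:.1f} ms per read")
```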

2. Does the buck stop with the storage provider or hosting service provider?

Most cloud storage providers offer a 99.99% service-level agreement (SLA) at best. A few, most notably Nirvanix Inc., advertise a 100% SLA. However, when considering these claims, it's important to understand what they are actually guaranteeing.

Most cloud storage providers reside in co-locations or hosted data centers, and are dependent on yet another vendor's infrastructure. If the hosting provider has connectivity issues, will the potential loss of access to your data be covered under the cloud provider's SLA or will it fall back to the hosting provider?
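It also helps to translate SLA percentages into permitted downtime, which makes providers' availability claims easier to compare:
```python
# Translating SLA percentages into permitted downtime per year.

HOURS_PER_YEAR = 24 * 365

for sla in (0.999, 0.9999, 1.0):
    downtime_minutes = HOURS_PER_YEAR * 60 * (1 - sla)
    print(f"{sla:.2%} SLA allows {downtime_minutes:,.1f} minutes down per year")
```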

3. Does data at rest need encryption?

All cloud storage providers offer encryption for data in flight. But if the data for the cloud is sensitive, then the security of that data must be addressed once it lands on the service provider's infrastructure. Several providers now support encryption of data at rest, but you need to find out if the provider you're evaluating has this capability. Additionally, keys or logins for data encryption may become an issue if there are several pools of data that need to be segregated. Depending on the vendor, user accounts may need to be set up for each individual user in the environment.
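Where the provider cannot encrypt data at rest, one option is to encrypt client-side before upload, so only ciphertext ever reaches the provider's infrastructure. Here is a minimal sketch using the third-party Python cryptography package; key management, the hard part, is out of scope.
```python
# Sketch of client-side encryption before upload, so data at rest in the
# cloud is protected even if the provider offers no at-rest encryption.
# Requires the third-party 'cryptography' package.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store securely on-site, NOT in the cloud
cipher = Fernet(key)

plaintext = b"sensitive customer record"
ciphertext = cipher.encrypt(plaintext)     # upload this to the provider

# On retrieval, decrypt locally with the same key.
assert cipher.decrypt(ciphertext) == plaintext
```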

4. Will cloud data storage costs rise based on user activity?

Most cloud storage providers will charge customers based on storage access, in terms of both the type of access request and the space utilized. This becomes important when large numbers of searches are performed, whether for file names or for text within files. For example, if files must be indexed or scanned for viruses on a regular basis, the cost of cloud storage may increase significantly.

E-discovery data stores, application data caches and temporary file storage are other types of data storage that do not work well with the cloud cost model.
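A back-of-envelope model makes the effect visible; the unit prices below are placeholders to be replaced with the provider's actual tariff.
```python
# Rough monthly cost model for a usage-priced cloud store. All unit prices
# and request volumes here are illustrative placeholders.

GB_STORED = 500
PRICE_PER_GB_MONTH = 0.15              # illustrative storage tariff
PRICE_PER_10K_REQUESTS = 0.10          # illustrative request tariff

BASELINE_REQUESTS = 100_000            # normal user access
SCAN_REQUESTS = 5_000_000              # full index/antivirus pass over every file

storage = GB_STORED * PRICE_PER_GB_MONTH
for label, reqs in (("baseline", BASELINE_REQUESTS), ("with scanning", SCAN_REQUESTS)):
    requests = reqs / 10_000 * PRICE_PER_10K_REQUESTS
    print(f"{label}: storage ${storage:.2f} + requests ${requests:.2f} per month")
```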

5. Can the cloud storage provider put an appliance on site?

For businesses that require fast access to recent files and slower access to older files, the cloud may still be a viable storage alternative if a caching appliance can be housed in the local network. These machines store the most recent data sent to the cloud, on the assumption that this data has a higher probability of being accessed in the near future.
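In essence, such an appliance is a local cache with an eviction policy. The sketch below shows the idea with a simple least-recently-used cache; fetch_from_cloud is a hypothetical placeholder, not a real provider API.
```python
# Sketch of what an on-site caching appliance does: serve recently used files
# from local storage and fall back to the (slow) cloud only on a miss.

from collections import OrderedDict

def fetch_from_cloud(name: str) -> bytes:
    return b"..."                         # placeholder for the provider call

class FileCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()        # filename -> bytes, in LRU order

    def get(self, name: str) -> bytes:
        if name in self.cache:
            self.cache.move_to_end(name)  # fast path: local hit
            return self.cache[name]
        data = fetch_from_cloud(name)     # slow path: WAN round trip
        self.cache[name] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used file
        return data
```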
