Robust And Scalable Data Centre Solutions
Traditionally, the focus of the facilities manager has been, “Is it in place and functional?” However, the data centre represents a very high financial investment, with value residing in both functionality and aesthetics. Today’s data centres have become showcase areas that present customers with a visually appealing reflection of the company image. In this sense, facilities managers are expected to maintain an infrastructure that is highly professional in appearance.
Redundancy: Tier I, Tier II, Tier III and Tier IV
Redundancy comes from eliminating single points of failure. A facility without a UPS or a generator makes electrical power its single point of failure. For the infrastructure systems (telecommunications, electrical, HVAC and architectural/structural), TIA-942 outlines four levels of redundancy, called ‘tiers’. The higher the tier, the less susceptible an infrastructure is to interruption. A data centre is defined by its lowest-rated infrastructure system: a data centre with Tier III electrical but Tier II telecommunications access has Tier II redundancy overall. The multiple systems and multiple pathways that create higher redundancy also create higher expense.
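The lowest-tier rule above can be sketched in a few lines. This is an illustrative example only; the system names and the ratings assigned to them are hypothetical, not taken from any real facility assessment.

```python
def overall_tier(system_tiers):
    """Return the facility's tier: the lowest tier of any infrastructure system."""
    return min(system_tiers.values())

# Hypothetical facility: Tier III electrical and HVAC, but only
# Tier II telecommunications access.
ratings = {
    "telecommunications": 2,
    "electrical": 3,
    "hvac": 3,
    "architectural_structural": 3,
}

# The Tier III investment elsewhere does not raise the rating:
# the facility is Tier II overall.
print(f"Overall redundancy: Tier {overall_tier(ratings)}")  # Tier 2
```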
Tier I defines a ‘basic’ data centre with no redundancy. It has one telecommunications point of entry, one entry for electrical power, one HVAC system and meets minimal standards for floor loading – if it has a raised floor at all. Optional UPS or back-up power runs over the same wiring as the main power. A Tier I data centre is typically shut down annually for maintenance, and failure due to unforeseen incidents will cause further interruptions. A best-case scenario puts Tier I uptime at 99.67%, translating into about 29 hours of downtime per year.
Tier II introduces redundant components into the various infrastructures for slightly higher projected uptime than a Tier I data centre. These components include a second telecommunications entry point, UPS and diesel-generator electrical backup and a second HVAC system. The building structure supports higher floor loads and has thicker walls (and a requirement for peepholes in the doors to the data centre). Like a Tier I data centre, a Tier II data centre should be shut down once a year for maintenance. A best-case scenario puts Tier II uptime at 99.75%, translating into under 22 hours of downtime per year.
Tier III is a concurrently maintainable data centre – any of the main infrastructure components may be shut down without disrupting computer operation. This requires both redundant components and redundant paths for the telecommunications, electrical and HVAC systems. Floor loads are higher and building access is more stringently controlled, with CCTV, mantraps for one-person-at-a-time access, electromagnetic shielding in the walls and more, including 24-hour staffing. Downtime should be due only to errors in operation: a Tier III data centre should remain operational as long as the computer equipment is functioning. Best-case uptime jumps to 99.98%, translating into 105 minutes of downtime per year.
Tier IV is a fault-tolerant data centre with multiple pathways and components, so that it stays in operation during a planned shutdown of any of these infrastructures. It is also built to withstand at least one worst-case unplanned event. All equipment has redundant data and power cabling over separate routes. Separate distribution areas may serve mirrored processing facilities. Seismic protection is increased beyond minimum requirements, as is the ability to withstand hurricanes, flooding or even terrorist attack. A Tier IV data centre should expect an uptime of 99.995% or better – downtime, which should be due only to a planned test of the fire alarm or emergency power-off, amounts to roughly 26 minutes per year at that figure.

Redundant telecommunications entry builds up tier by tier. A Tier II facility has a second entrance (maintenance) hole at least 66 feet (20 metres) from the primary entrance hole. In a Tier III facility, this leads to a second entrance room, also 66 feet (20 metres) from the primary entrance room, and with separate power distribution, HVAC and fire protection. Cabled conduit may be used to interconnect the primary and secondary maintenance holes and entrance rooms for further flexibility.
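The downtime figures quoted for the four tiers follow directly from the uptime percentages. The short sketch below reproduces the arithmetic against an 8,760-hour year (leap years ignored); the uptime values are those stated above.

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours, ignoring leap years

def downtime_hours(uptime_percent):
    """Convert a best-case uptime percentage into hours of downtime per year."""
    return HOURS_PER_YEAR * (100.0 - uptime_percent) / 100.0

tiers = {"Tier I": 99.67, "Tier II": 99.75, "Tier III": 99.98, "Tier IV": 99.995}

for tier, uptime in tiers.items():
    hours = downtime_hours(uptime)
    print(f"{tier}: {uptime}% uptime -> {hours:.1f} h ({hours * 60:.0f} min) downtime/year")
```

Running this yields roughly 29 hours for Tier I, 22 hours for Tier II, 105 minutes for Tier III and 26 minutes for Tier IV, matching the figures in the text.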
Redundancy can be further enhanced by using a second telecommunications provider, as long as the back-up provider uses different routing and a different central office than the first. Within the computer room, a second distribution area makes sense as long as it and the equipment it serves are in a different room than the main distribution area. Redundant horizontal and backbone cabling provide another level of redundancy if they are placed along different routes. As a secondary route may be longer, take care that the maximum channel length is not exceeded.
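The channel-length check at the end of the paragraph can be made concrete. The sketch below assumes the 100-metre channel maximum for balanced twisted-pair cabling from ANSI/TIA-568; the route segment lengths are hypothetical examples, not values from the text.

```python
# ANSI/TIA-568 channel maximum for balanced twisted-pair cabling.
MAX_CHANNEL_M = 100

def route_within_limit(segments_m):
    """Sum a route's cable segments and check the total against the channel limit."""
    return sum(segments_m) <= MAX_CHANNEL_M

# Hypothetical routes: patch cord + permanent link + equipment cord, in metres.
primary = [5, 72, 5]     # 82 m total: within the limit
secondary = [5, 93, 5]   # 103 m total: the longer detour exceeds the limit

print(route_within_limit(primary), route_within_limit(secondary))  # True False
```

A redundant route that fails this check would need a different path, higher-grade media (such as fibre backbone), or intermediate distribution to stay within specification.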