A data center is a facility that houses IT (Information Technology) equipment used to process, communicate, and store data for all our digital activities. A data center is the physical building that you can go touch, point to, and say, “This is the internet”. The term ‘data center’ can cover anything from a small server closet in an office building to a one-million-square-foot multistory building. However, in recent years, a data center typically means a large standalone building.
It is important to keep in mind that data centers are built for the sole purpose of operating IT equipment. Just like office buildings are designed for humans to conduct their daily work, data centers are designed for continuous IT equipment operation. Today we rely on this IT equipment, and its underlying software, for almost all of our daily activities such as communication, entertainment, navigation, finances, and security. This is why data centers are classified as mission critical facilities. The major components that make up a data center are as follows:
This photo shows a data center in downtown San Francisco owned and operated by Digital Realty. You’ll notice that from the outside, it doesn’t look much different than a regular office building. Data centers are often deliberately hard to identify so that they remain inconspicuous.
The architectural and structural components of a data center are pretty basic, consisting of four walls and a roof. A data center is very similar to a warehouse where the middle of the building is empty (but will be filled up with IT equipment instead of boxes). You can almost think of a data center as a warehouse on steroids. Data centers are built using the same techniques and materials as typical office buildings; however, the two structures differ in robustness. A data center’s structural components will be bigger and stronger in order to withstand natural disasters or even explosions. The most critical data centers, such as those used by defense departments for national security, might even be located within a mountain or deep underground for protection.
IT equipment is the stuff that fills up the data center and is the reason the facility exists in the first place. The IT equipment is where all the magic happens that makes the internet, mobile communication, and apps what they are today. IT infrastructure is an industry in itself, and there are many other blogs, resources, and entire college programs that cover this subject.
At a high level, IT equipment can be categorized into three buckets: servers, communication gear, and storage equipment. It is not important that you understand how this equipment works or interacts to understand how data centers work. If you are interested in the IT infrastructure, then this is not the right blog for you. However, here is a quick overview of the basic functions of different IT equipment:
Servers run software applications (like Gmail, Facebook, or Angry Birds) and are almost exactly like your desktop or laptop computer…except much faster and more powerful. They are about the size of a pizza box, mounted in racks within the data center. Servers look like this:
Communication gear (or networking gear) manages how data is transferred in and out of the data center and between the IT equipment. There are many different kinds of communication gear, but an example of a simple networking switch looks like this:
Storage equipment is where all the data, like your Facebook photos, bank records, and old emails, is stored. Storage arrays look like this and are also mounted in racks:
IT equipment needs two things to operate: (1) electricity for power and (2) cooling to remove the heat that it generates. It is the data center’s job to provide both electricity and cooling 24/7/365…without ANY interruption. This is where the electrical and cooling infrastructure come into play.
The purpose of the electrical infrastructure is to take power from the utility grid and deliver it to the IT equipment without interruption…ever. If the IT equipment loses power, it will shut down, and this is very bad and costly. Imagine not being able to access your bank account, send an email, or make a cell phone call. That really puts the criticality of IT equipment into perspective. Therefore the design and operation of the electrical infrastructure is based on one thing…redundancy. The electrical distribution is designed and built so that if one system fails or a power connection is lost, there is another energy source to keep the power flowing and the IT equipment up and running. Data center staff spend a lot of time managing the health of the electrical infrastructure to prevent any failures.
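To get a feel for why redundancy is the guiding principle, here is a rough back-of-the-envelope sketch. All of the numbers are hypothetical (a real availability analysis is far more involved), but the math shows how quickly independent power sources drive expected downtime toward zero:

```python
# Rough illustration (hypothetical numbers): why redundant power paths help.
# If each independent power source is unavailable 0.1% of the time, the
# chance that N independent sources are ALL down at once is p ** N.

p_single_down = 0.001        # assumed 0.1% unavailability per source
hours_per_year = 8760

for n_sources in (1, 2, 3):
    p_all_down = p_single_down ** n_sources
    downtime_hours = p_all_down * hours_per_year
    print(f"{n_sources} source(s): expected downtime ~ {downtime_hours:.5f} h/yr")
```

With one source you would expect hours of downtime per year; with two independent sources it drops to seconds. This is the intuition behind stacking utility feeds, generators, and UPS systems on top of each other.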
Below is a general line diagram of the electrical infrastructure showing how power gets from the utility grid to the IT equipment. In practice this is more complex and may include further layers of redundancy, but the concept is the same.
The utility grid provides the ultimate source of power for the data center. Some facilities are connected to two separate utility grids for redundancy in case one goes down.
Backup generators are diesel-powered electrical generators that produce electricity in the event the utility grid goes offline. Generators can provide power for hours or even days (though they will need to be refueled for ongoing operation) until the utility grid can provide power again. Backup generators are usually deployed in sets, especially for larger data centers. They look like this:
Automatic Transfer Switches (ATS) switch the source of power from the utility grid to the backup generators without interruption. ATS are nondescript electrical panels, so a photo won’t do any good.
In the event of a sudden utility outage, Uninterruptible Power Supplies (UPS) provide power to the IT equipment for the minute or two it takes to start up the backup generators. UPS systems typically consist of batteries or flywheels that store just enough energy to bridge the power gap. UPS systems cannot provide power to the IT equipment for a lengthy period of time. They look like this:
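A quick calculation shows why a UPS only needs a modest amount of stored energy. The numbers below are hypothetical, but the relationship is simple: runtime is stored energy divided by the load it carries.

```python
# Back-of-the-envelope UPS bridge time (hypothetical numbers).
# The UPS only has to carry the IT load until the generators start,
# typically well under a minute, so the stored energy can be small.

battery_energy_kwh = 50.0   # assumed usable energy stored in the UPS
it_load_kw = 500.0          # assumed IT load the UPS must carry

runtime_minutes = battery_energy_kwh / it_load_kw * 60
print(f"Bridge time: {runtime_minutes:.1f} minutes")
```

In this example the UPS rides through about six minutes of outage, several times longer than a typical generator start, which is exactly the margin you want.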
Power Distribution Units (PDU) are the physical ‘outlets’ that the IT equipment plugs into at the rack level. PDUs are mounted in each rack and contain multiple outlets to be used by the IT equipment. They look like this:
The purpose of the cooling infrastructure is to remove the heat generated by the IT equipment. Just as your home desktop or laptop gets hot when you use it a lot, so do servers, communication gear, and storage equipment in a data center. If this heat is not removed, the IT equipment will get too hot and shut down…again this would be very bad and costly. If you’ve ever left your iPhone in direct sunlight on a hot summer day, you’ve experienced the same thing (and probably freaked out in just the same way). Just like the electrical infrastructure, redundancy in the cooling infrastructure is extremely important in order to provide uninterrupted cooling.
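To give a sense of scale, here is a common HVAC rule of thumb for how much air it takes to carry away IT heat. The rack load and temperature rise below are assumed values, and the 1.08 factor is the standard approximation for air at sea level:

```python
# Rule-of-thumb airflow needed to remove IT heat with air cooling.
# Standard HVAC approximation: CFM = BTU/hr / (1.08 * delta_T_F),
# where 1 kW of electrical load produces about 3412 BTU/hr of heat.

rack_load_kw = 5.0    # assumed heat produced by one rack
delta_t_f = 20.0      # assumed temperature rise of air across the rack

btu_per_hr = rack_load_kw * 3412
cfm = btu_per_hr / (1.08 * delta_t_f)
print(f"Airflow needed: {cfm:.0f} CFM per rack")
```

Nearly 800 cubic feet of air per minute for a single 5 kW rack, multiplied across hundreds of racks, is why fans and cooling units dominate so much of a data center’s design.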
The cooling infrastructure consists of air conditioning units called Computer Room Air Conditioners (CRAC) or Computer Room Air Handlers (CRAH). Learn the difference here: CRACs vs CRAHs. Newer data centers are using more advanced cooling technology such as evaporative cooling and free cooling, but that is not important for Data Center 101.
Data center cooling equipment is deployed in the same physical space as the IT equipment. This YouTube video by hosting company SoftLayer provides a nice explanation of how cooling units deliver cold air to the IT equipment.
Most cooling units consist of two components: a fan to move the air and a cooling coil to remove the heat. A cooling unit works just like your air conditioner at home, except it is much bigger. Learn more about how these cooling units work in this blog post: What causes hot spots? Here is a photo of a data center cooling unit:
Chiller plants are often used as a cooling source for larger data centers because they are more efficient at removing heat than direct expansion (DX) units. Chiller plants create cold water that is delivered to the cooling unit (pictured above) within the data center. If you’ve never heard of a chiller plant, don’t worry. Just know they exist for the sole purpose of removing heat from the data center. Here is a photo of a chiller plant:
Cooling systems can be complex because there are many interoperating mechanical and electrical components; however, their purpose is the same: to remove heat from the data center. DC Huddle contains many blog posts that discuss cooling components and optimization in more detail.