Data Center Handbook

About this ebook

Provides the fundamentals, technologies, and best practices in designing, constructing, and managing mission-critical, energy-efficient data centers

Organizations in need of high-speed connectivity and nonstop systems operations depend upon data centers for a range of deployment solutions. A data center is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes multiple power sources, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices.

With contributions from an international list of experts, The Data Center Handbook instructs readers how to:

  • Prepare a strategic plan that includes a location plan, site selection, a roadmap, and capacity planning
  • Design and build "green" data centers, with mission critical and energy-efficient infrastructure
  • Apply best practices to reduce energy consumption and carbon emissions
  • Apply IT technologies such as cloud and virtualization
  • Manage data centers in order to sustain operations with minimum costs
  • Prepare and practice a disaster recovery and business continuity plan

The book imparts essential knowledge needed to implement data center design and construction, apply IT technologies, and continually improve data center operations.

Language: English
Publisher: Wiley
Release date: Dec 1, 2014
ISBN: 9781118937570

    Book preview

    Data Center Handbook - Hwaiyu Geng

    Technical Advisory Board

    David Bonneville, S.E., Degenkolb Engineers San Francisco, California

    John Calhoon, Microsoft Corporation Redmond, Washington

    Yihlin Chan, Ph.D., OSHA (Retiree) Salt Lake City, Utah

    Sam Gelpi, Hewlett-Packard Company Palo Alto, California

    Hwaiyu Geng, P.E., Amica Association Palo Alto, California

    Magnus Herlin, Ph.D., ANCIS Incorporated San Francisco, California

    Madhu Iyengar, Ph.D., Facebook Inc. Menlo Park, California

    Jonathan Jew, J&M Consultants San Francisco, California

    Jacques Kimman, Ph.D., Zuyd University Heerlen, Netherlands

    Jonathan Koomey, Ph.D., Stanford University Stanford, California

    Veerendra Mulay, Ph.D., Facebook Inc. Menlo Park, California

    Dean Nelson, eBay Inc. San Jose, California

    Jay Park, P.E., Facebook Inc. Menlo Park, California

    Roger Schmidt, Ph.D., IBM Corporation Poughkeepsie, New York

    Jinghua Zhong, China Electronics Engineering Design Institute Beijing, China

    Chapter Organization

    This book is designed to cover the following five major parts:

    Part 1: Data Center Overview and Strategic Planning

    Part 2: Data Center Design and Construction

    Part 3: Data Center Technology

    Part 4: Data Center Operations and Management

    Part 5: Disaster Recovery and Business Continuity

    This organization allows readers to gain an overview of data centers, including strategic planning, design, and construction; the available technologies and best practices; and how to efficiently and effectively manage a data center, closing with disaster recovery and business continuity. Within the five parts, there are 36 chapters.

    Part 1: Data Center Overview and Strategic Planning

    Chapter 1—Data Centers—Strategic Planning, Design, Construction, and Operations: This chapter provides high-level discussion of some key elements in planning and designing data centers. It covers the definition of data centers; vision; principles in preparing a roadmap and strategic planning; global location planning; sustainable design relating to reliability, computational fluid dynamics, DCIM, and PUE; best practices; proven and emerging technologies; and operations management. It concludes with disaster recovery and business continuity. All of these subjects are described in more detail within the handbook.

    Chapter 2—Energy and Sustainability in Data Centers: This chapter gives an overview of best practices in designing and operating data centers that would reduce energy consumption and achieve sustainability.

    Chapter 3—Hosting or Colocation Data Centers: This chapter defines hosting, colocation, and data center. It explores "build vs. buy" with financial considerations. It also describes the elements to consider in evaluating and selecting hosting or colocation providers.

    Chapter 4—Modular Data Centers: Design, Deployment, and Other Considerations: An anatomy of the modular data center (MDC), using ISO container standards, is presented. The benefits and applications of MDCs, as well as site preparation, installation, and commissioning, are introduced.

    Chapter 5—Data Center Site Search and Selection: This chapter gives a roadmap for site search and selection; the process, team members, and critical elements that lead to a successful site selection are described.

    Chapter 6—Data Center Financial Analysis, ROI, and TCO: This chapter starts with the fundamentals of financial analysis (NPV, IRR), return on investment, and total cost of ownership. Case studies are used to illustrate NPV, breakeven, and sensitivity analysis in selecting different energy-savings retrofits. It also includes an analysis of choosing to build, reinvest, lease, or rent data centers, colocation, and cloud.
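    The NPV concept this chapter opens with can be sketched in a few lines; the cash flows and discount rate below are hypothetical illustrations, not figures from the book's case studies.

```python
def npv(rate: float, cashflows: list[float]) -> float:
    """Net present value; cashflows[0] is the initial (usually negative) outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# A hypothetical energy-savings retrofit: $100k upfront, $30k saved per year for 5 years.
flows = [-100_000] + [30_000] * 5
print(round(npv(0.08, flows)))  # positive NPV at an 8% discount rate: the retrofit pays off
```

    A positive NPV means the discounted savings exceed the upfront cost; comparing NPVs at the same rate is how competing retrofits are ranked.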

    Chapter 7—Overview of Data Centers in China: An overview of the policies, laws, regulations, and GB (standards) of China's data centers is presented. The development status, distribution, and energy efficiency of data centers, and cloud, are discussed.

    Chapter 8—Overview of Data Centers in Korea: Overview of policies, laws, regulations, codes and standards, and market of Korea’s data centers is presented. Design and construction practices of Korea’s data centers are discussed.

    Part 2: Data Center Design and Construction

    Chapter 9—Architecture Design: Data Center Rack Floor Plan and Facility Layout Design: An overview of server rack, cabinet, network, and large frame platform is introduced. Computer room design with coordination of HVAC system, power distribution, fire detection and protection system, lighting, raised floor vs. overhead system, and aisle containment is discussed. Modular design, CFD modeling, and space planning are also addressed.

    Chapter 10—Mechanical Design in Data Centers: Design criteria including reliability, security, safety, efficiency, and flexibility are introduced. Design process with roles and responsibilities from predesign, schematics design, design development, construction documents, and construction administration are well explained. Considerations in selecting key mechanical equipment and best practices on energy efficiency practices are also discussed.

    Chapter 11—Electrical Design in Data Centers: Electrical design requirements, uptime, redundancy, and availability are discussed.

    Chapter 12—Fire Protection and Life Safety in Data Centers: Fundamentals of fire protection, codes and standards, local authorities, and life safety are introduced. Passive fire protection, early detection, and alarm and signaling systems are discussed. Hot and cold aisle ventilations are reviewed.

    Chapter 13—Structural Design in Data Centers: Natural Disaster Resilience: Strengthening building structural and nonstructural components is introduced. Building design using code-based vs. performance-based approaches is discussed. New design considerations and mitigation strategies relating to natural disasters are proposed. This chapter concludes with comprehensive resiliency strategies with pre- and postdisaster planning.

    Chapter 14—Data Center Telecommunication Cabling: Telecommunication cabling organizations and standards are introduced. The spaces, cabling topology, cable type, cabinet/rack placement, pathways, and energy efficiency are discussed. It concludes with discussion on patch panel, cable management, and reliability tiers.

    Chapter 15—Dependability Engineering for Data Center Infrastructures: This chapter starts with definition of system dependability analysis. System dependability indexes including reliability, availability, and maintainability are introduced. Equipment dependability data including MTTF, MTBF, and failure rate are also introduced. System dependability, redundancy modeling, and system dysfunctional analysis are discussed.

    Chapter 16—Particulate and Gaseous Contamination in Data Centers: IT equipment failure rates when using outside air vs. recirculated air are discussed. ISO standards addressing particulate cleanliness, ANSI standards evaluating gaseous contamination, and the ASHRAE TC9.9 Committee on particulate and gaseous contamination are addressed.

    Chapter 17—Computational Fluid Dynamics Applications in Data Centers: Fundamentals and theory of CFD are introduced. Applying CFD in data centers, including design, troubleshooting, upgrades, and operations management, is discussed. Modeling of data centers, including CRAC/CRAH units, cooling infrastructure, control systems, time-dependent simulation, and failure scenarios, is performed. This chapter concludes with the benefits of CFD and the future virtual facility.

    Chapter 18—Environment Control of Data Centers: Thermal management of data centers including structural parameters, placement of CRAC units, cooling system design and control, and data center design are discussed. Energy management of data centers including airside or waterside economizer, CRAH, liquid cooling, and dynamic cooling are discussed.

    Chapter 19—Data Center Project Management and Commissioning: This chapter describes project management, which involves planning, scheduling, safety and security, tracking deliverables, testing and commissioning, and training and operations. Commissioning tasks, starting from the design stage all the way through test and commissioning to the final occupancy phases, are discussed. This chapter details how to select a commissioning team, what equipment and systems are to be tested and commissioned, and the roles and responsibilities of the commissioning team at different stages of the project life cycle.

    Part 3: Data Center Technology

    Chapter 20—Virtualization, Cloud, SDN, and SDDC: Fundamentals of virtualization, cloud, SDN, and SDDC are described, along with the benefits and challenges those technologies present to data center practitioners.

    Chapter 21—Green Microprocessor and Server Design: This chapter concerns microprocessor and server design: how to judge and select them as the best fit for sustainable data centers. It starts with guiding principles to aid the server selection process, followed in detail by the prime criteria for the microprocessor and server system, as well as considerations with respect to storage, software, and racks.

    Chapter 22—Energy Efficiency Requirements in Information Technology Equipment Design: This chapter addresses the energy efficiency of servers, storage systems, and uninterruptible power supplies (UPS) used in data centers. Each device is examined at the component level and under operating conditions with regard to how to improve energy efficiency, with useful benchmarks.

    Chapter 23—Raised Floors versus Overhead Cooling in Data Centers: This chapter discusses the benefits and challenges of raised-floor cooling vs. overhead cooling in the areas of air delivery methodology, airflow dynamics, and underfloor air distribution.

    Chapter 24—Hot Aisle versus Cold Aisle Containment: This chapter covers design basics of models for airflow architecture using internal and external cooling units. Fundamentals of hot/cold aisle containments and airflow management systems are presented. The effects of increased return air temperatures at cooling units from HAC are discussed. Concerns with passive ducted return air systems are discussed. HAC and CAC impacts on cooling fan power and redundancy with examples are provided. Consideration is given to peripheral equipment and economizer operations.

    Chapter 25—Free Cooling Technologies in Data Centers: This chapter describes how to use ambient outside air to cool a data center. The economizer thermodynamic process, with dry-bulb and wet-bulb temperatures, is discussed. An air-to-air heat exchanger vs. an integrated air-to-air heat exchanger and cooling tower is reviewed. Comparative energy savings and reduced mechanical refrigeration are discussed.

    Chapter 26—Rack-Level Cooling and Cold Plate Cooling: Fundamentals and principles of rack-level cooling are introduced. Energy consumption for conventional room cooling vs. rack-level cooling is discussed. Advantages and disadvantages of rack-level cooling, including enclosed, in-row, rear-door, and cold plate cooling, are discussed.

    Chapter 27—Uninterruptible Power Supply System: UPSs are an important part of the electrical infrastructure where high levels of power quality and reliability are required. In this chapter, we will discuss the basics of UPS designs, typical applications where UPS are used, considerations for energy efficiency UPS selection, and other components and options for purchasing and deploying a UPS system.

    Chapter 28—Using Direct Current Networks in Data Centers: This chapter addresses why AC power, rather than DC power, is currently used; why DC power should be used in data centers; and the trend toward using DC power.

    Chapter 29—Rack PDU for Green Data Centers: An overview of PDU fundamentals and principles is introduced. PDUs for data collection, including power, energy, temperature, humidity, and airflow, are discussed. Considerations in selecting smart PDUs are addressed.

    Chapter 30—Renewable and Clean Energy for Data Centers: This chapter discusses what is renewable energy, the differences between renewable and alternative energy, and how they are being used in data centers.

    Chapter 31—Smart Grid-Responsive Data Centers: This chapter examines data center characteristics, loads, control systems, and technologies ability to integrate with the modern electric grid (Smart Grid). The chapter also provides information on the Smart Grid architecture, its systems, and communication interfaces across different domains. Specific emphasis is to understand data center hardware and software technologies, sensing, and advanced control methods, and how they could be made responsive to identify demand response (DR) and automated DR (auto-DR) opportunities and challenges for Smart Grid participation.

    Part 4: Data Center Operations and Management

    Chapter 32—Data Center Benchmark Metrics: This chapter provides information on PUE, xUE, RCI, and RTI. This chapter also describes benchmark metrics being developed or used by SPEC, the Green 500, and EU Code of Conduct.

    Chapter 33—Data Center Infrastructure Management: This chapter covers what DCiM is, where it stands in the hype cycle, why it is important to deploy DCiM in data centers, what the modules of a DCiM are, what the future trends are, and how to select and implement a DCiM system successfully.

    Chapter 34—Computerized Maintenance Management System for Data Centers: This chapter covers the basics of CMMS, why it is important to deploy a CMMS, what modules a CMMS includes, the maintenance service process, management and reporting, and how to select, implement, and operate a CMMS in a data center successfully.

    Part 5: Disaster Recovery and Business Continuity

    Chapter 35—Data Center Disaster Recovery and High Availability: This chapter aims to give a sense of the key design elements and the planning and process approaches to maintain the required level of service and business continuity for the data center and the enterprise architectures residing within it, through disaster recovery and high availability.

    Chapter 36—Lessons Learned from Natural Disasters and Preparedness of Data Centers: This chapter covers lessons learned from two major natural disasters that will broaden data center stakeholders' awareness of natural disaster prevention and preparedness. Detailed lessons learned from the events are organized in the following categories: Business Continuity/Disaster Recovery Planning, Communications, Emergency Power, Logistics, Preventive Maintenance, Human Resources, and Information Technology. They can be easily reviewed and applied to enhance your BC/DR planning.

    About the Companion Website

    This book is accompanied by a companion website:

    http://www.wiley.com/go/datacenterhandbook

    The website includes:

    Color figures for:

    Chapter 1. Figure 1.5

    Chapter 4. Figures 4.10, 4.11

    Chapter 14. Figures 14.1, 14.2, 14.9

    Chapter 17. Figures 17.1, 17.6, 17.7, 17.8, 17.9, 17.10, 17.14, 17.15, 17.16

    Chapter 18. Figures 18.10, 18.11

    Chapter 24. Figures 24.8, 24.15, 24.16,

    Chapter 29. Figures 29.8, 29.9, 29.10, 29.11

    Chapter 35. Figure 35.3

    Chapter 36. Figure 36.1

    Editable Excel spreadsheet of figures and tables for Chapter 6:

    Figure 6.1

    Figure 6.2

    Figure 6.3

    Figure 6.5

    Figure 6.6

    Figure 6.10

    Figure 6.12

    Table 6.1

    Table 6.2

    Table 6.3

    Table 6.4

    Table 6.9 through Table 6.14

    Table 6.16

    Table 6.17 through Table 6.23

    Part I

    Data Center Overview and Strategic Planning

    1

    Data Centers—Strategic Planning, Design, Construction, and Operations

    Hwaiyu Geng

    Amica Association, Palo Alto, CA, USA

    1.1 Introduction

    In a typical data center, electrical energy is used to operate Information and Communication Technology (ICT) equipment and its supporting facilities. About 45% of electrical energy is consumed by ICT equipment, which includes servers, storage, and networks. The other 55% is consumed by facilities, which include the power distribution system, uninterruptible power supplies, chillers, computer room air conditioners, lights, and so on. Reducing the power consumed by ICT equipment and facilities is imperative for efficient use of energy. Many studies have shown that increasing greenhouse gases due to human activities are resulting in global warming.

    1.1.1 Data Centers and Global Warming

    A study in the journal Science estimates that, from 1992 to 2012, melting ice from Greenland and Antarctica raised the global sea level by 11.1 mm (0.43 in.). Rising sea levels have gained more attention since the flooding caused by Superstorm Sandy, which struck the heavily populated U.S. East Coast in 2012.

    A report titled Climate Change 2013: The Physical Science Basis [1], prepared by the Intergovernmental Panel on Climate Change (IPCC), set up by the World Meteorological Organization and the UN’s Environment Program, states as follows: Warming of the climate system is unequivocal. Since the 1950s, many of the observed changes are unprecedented over decades to millennia. The atmosphere and ocean have warmed, the amounts of snow and ice have diminished, sea level has risen, and the concentrations of greenhouse gases have increased. "The rate of sea level rise since the mid-nineteenth century has been larger than the mean rate during the previous two millennia (high confidence). Over the period 1901–2010, global mean sea level rose by 0.19 [0.17–0.21] m."

    The World Bank issued a report in November 2012, titled Turn Down the Heat: Why a 4°C Warmer World Must be Avoided [2]. The report describes what the world would be like if it warmed by 4°C (7.2°F). The 4°C world scenarios are devastating: the inundation of coastal cities; increasing risks for food production, potentially leading to higher malnutrition rates; many dry regions becoming drier and wet regions wetter; unprecedented heat waves in many regions, especially in the tropics; substantially exacerbated water scarcity in many regions; increased frequency of high-intensity tropical cyclones; and irreversible loss of biodiversity, including coral reef systems.

    The science is unequivocal that humans are the cause of global warming, and major changes are already being observed: global mean warming is 0.8°C above pre-industrial levels; oceans have warmed by 0.09°C since the 1950s and are acidifying; sea levels rose by about 20 cm since pre-industrial times and are now rising at 3.2 cm per decade; an exceptional number of extreme heat waves occurred in the last decade; major food crop growing areas are increasingly affected by drought.

    Human beings generate all kinds of heat from cooking food, manufacturing goods, building houses, passenger and freight transport, and ICT activities. ICT continues as a pervasive force in the global economy, encompassing Internet surfing, computing, online purchases, online banking, mobile phones, social networking, medical services, and exascale machines (supercomputers). All of these require energy in data centers and give out heat as a result: one watt of input to process data results in 1 W of heat output. We can't stop giving out heat, but we can reduce heat output by efficiently managing energy input.

    1.1.2 Data Center Definition

    The term data center means different things to different people. Some of the names used include data center, data hall, data farm, data warehouse, computer room, server room, R&D software lab, high-performance lab, hosting facility, colocation, and so on. The U.S. Environmental Protection Agency defines a data center as:

    Primarily electronic equipment used for data processing (servers), data storage (storage equipment), and communications (network equipment). Collectively, this equipment processes, stores, and transmits digital information.

    Specialized power conversion and backup equipment to maintain reliable, high-quality power, as well as environmental control equipment to maintain the proper temperature and humidity for the ICT equipment.

    Data centers are involved in every aspect of life running Amazon, AT&T, CIA, Citibank, Disneyworld, eBay, FAA, Facebook, FEMA, FBI, Harvard University, IBM, Mayo Clinic, NASA, NASDAQ, State Farm, U.S. Government, Twitter, Walmart, Yahoo, Zillow, etc. This A–Z list reflects the basic needs of food, clothing, shelter, transportation, health care, and social activities that cover the relationships among individuals within a society.

    A data center could consume electrical power from 1 to over 500 MW. Regardless of size and purpose (Table 1.1), all data centers serve one purpose: to process information. In this handbook, we use the term data center to refer to all of the names stated earlier.

    Table 1.1 Data center type, server volume, and typical size

    Sources: EPA, 2007; CHP in Data Centers, ICF International, Oak Ridge National Laboratory, 2009.

    1.1.3 Energy Consumption Trends

    Electricity used in global data centers during 2010 likely accounted for between 1.1 and 1.5% of total electricity use. For the U.S., that number was between 1.7 and 2.2% [3].

    IDC IVIEW, sponsored by EMC Corporation, stated [4] as follows: Over the next decade, the number of servers (virtual and physical) worldwide will grow by a factor of 10, the amount of information managed by enterprise data centers will grow by a factor of 50, and the number of files the data center will have to deal with will grow by a factor of 75, at least.

    Gartner estimated [5], In 2011, it is believed that 1.8 Zettabytes of data was created and replicated. By 2015, that number is expected to increase to 7.9 Zettabytes. That is equivalent to the content of 18 million Libraries of Congress. The majority of data generation originates in North America and Europe. As other global regions come online more fully, data generation is expected to increase exponentially.

    Evidently, as a result of increasing activities such as big data analytics, online services, mobile broadband, social activities, commercial business, manufacturing business, health care, education, medicine, science, and engineering, energy demand will continue to increase.

    1.1.4 Using Electricity Efficiently

    A data center houses ICT equipment and facilities that are used to cool ICT equipment. While air cooling is still the most economical way to cool servers in racks, water cooling is the most efficient way to remove heat generated by processors.

    Based on Power Usage Effectiveness, March 2012, prepared by LBNL, 33.4% of total energy is used for power and cooling a data center and 66.6% by the IT load (Fig. 1.1). For a typical server, 30% of power is consumed by the processor and 70% by peripheral equipment, which includes the power supply, memory, fans, drives, and so on. A server's utilization efficiency is estimated to be at a disappointing 20% [6].

    c1-fig-0001

    Figure 1.1 The DOE national average PUE for data centers is 1.75. The 50B-1275 data center has evolved from an average PUE of 1.65 (calculated in 2009) to today's 1.47. Getting there, staying there, and further improving the PUE is an ongoing effort.

    (Source: Nina Lucido, Data Center Utilization Report, March 2012, LBNL, U.S. Department of Energy. https://commons.lbl.gov/display/itdivision/2012/04)

    Opportunities for saving energy at the server level include the use of ENERGY STAR-rated equipment, water-cooled servers, solid-state drives, and variable-speed fans in servers. Virtualization can be applied to improve a server's utilization efficiency.
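    Power Usage Effectiveness itself is a one-line ratio; a minimal sketch, using hypothetical kWh figures that match the 66.6%/33.4% split cited above:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy; 1.0 is the ideal."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical figures: 66.6% of energy to the IT load, 33.4% to power and cooling.
total, it_load = 1000.0, 666.0
print(round(pue(total, it_load), 2))  # → 1.5
```

    A lower PUE means a larger share of every watt purchased actually reaches the IT equipment rather than the supporting infrastructure.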

    1.1.5 Virtualization, Cloud, Software-Defined Data Centers

    As illustrated in Figure 1.2, virtualization is a method of running multiple independent virtual operating systems on a single physical computer. It is a way of allowing the same amount of processing to occur on fewer servers by increasing server utilization. Instead of operating many servers at low CPU utilization, virtualization combines the processing power onto fewer servers that operate at higher utilization [7].

    c1-fig-0002

    Figure 1.2 Virtualization.

    (Source: https://www.energystar.gov/index.cfm?c=power_mgt.datacenter_efficiency_virtualization)
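    The consolidation arithmetic behind virtualization can be sketched as follows; the utilization figures are assumptions for illustration, not values from the text.

```python
import math

physical_hosts = 20
avg_utilization = 0.10                           # each host only 10% busy
total_load = physical_hosts * avg_utilization    # 2.0 server-equivalents of work
target_utilization = 0.50                        # run consolidated hosts at 50%

hosts_after = math.ceil(total_load / target_utilization)
print(hosts_after)  # → 4 hosts carry the load formerly spread across 20
```

    The same total work is done, but sixteen servers' worth of idle power draw disappears, which is where the energy savings come from.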

    Cloud computing is an evolving model [8]. It is characterized by easy access, on-demand availability, rapid adaptability, flexibility, cost-effectiveness, and self-service access to a shared pool of computing resources that includes servers, storage, networks, applications, and services. Cloud capacity can be rapidly provisioned, controlled, and measured.

    Cloud computing provides various service models, including Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS). HP's Everything as a Service describes the model as follows: Through the cloud, everything will be delivered as a service, from computing power to business processes to personal interactions. Cloud computing is being deployed in public, private, community, or hybrid cloud models. It benefits data center managers by offering resource pooling and optimizing resource use with lower costs. IDC estimates that by 2015, 20% of information will be touched by cloud computing.

    The Software-Defined Data Center (SDDC), pioneered by VMware, is an architectural approach in which all ICT infrastructure (server, storage, networking, and security) is virtualized through a hardware-independent management system. SDDC can be a building block of the cloud, or the cloud can be an extension of an SDDC [9]. Virtual machines can be deployed in minutes with little human involvement, and provisioned applications can be operational in minutes, shortening time to value. SDDC maximizes the utilization of physical infrastructure [10]. As a result, SDDC reduces capital spending, advances asset utilization, improves operational efficiency, and enhances ICT productivity. SDDC is likely to drive down data center hardware costs.

    1.2 Data Center Vision and Roadmap

    Table 1.2 provides a framework of vision, possible potential technology solutions, and key benefits. This table consolidates the ideas and solutions from 60 experts who attended the Vision and Roadmap Workshop on Routing Telecom and Data Centers Toward Efficient Energy Use. The table could be tailored to individual needs by enhancing with emerging technologies such as SDDC, fuel cell technology, etc.

    Table 1.2 ICT vision and roadmap summary [11]

    1.2.1 Strategic Planning and Roadmap

    Strategic planning for a holistic data center could encompass a global location plan, site selection, design, construction, and operations that support ICT and emerging technology. There is no one correct way to prepare a strategic plan. Depending on the data center acquisition strategy (i.e., host, colocation, expand, lease, buy, or build), the level of deployment could vary from minor modifications of a server room to a complete build-out of a greenfield project.

    Professor Michael E. Porter's How Competitive Forces Shape Strategy [12] described the famous Five Forces that lead to a state of competition in an industry: the threat of new entrants, the bargaining power of customers, the threat of substitute products or services, the bargaining power of suppliers, and jockeying for position among current industry competitors. The Chinese strategist Sun Tzu, in The Art of War, stated five factors: the Moral Law, Heaven, Earth, the Commander, and Methods and Discipline. Key ingredients in both approaches to strategic planning reflect the following [13]:

    What are the goals

    What are the knowns and unknowns

    What are the constraints

    What are the feasible solutions

    How the solutions are validated

    How to find an optimum solution

    In preparing a strategic plan for a data center, Figure 1.3 [14] shows four forces: business drivers, processes, technologies, and operations. Known business drivers and philosophies of a data center solution include the following:

    Agility: Ability to move quickly.

    Resiliency: Ability to recover quickly from an equipment failure or natural disaster.

    Modularity and Scalability: Step and repeat for fast and easy scaling of infrastructures.

    Reliability and Availability: Reliability is the ability of equipment to perform a given function. Availability is the ability of an item to be in a state to perform a required function.

    Sustainability: Apply best practices in green design, construction, and operations of data centers to reduce environmental impacts.

    Total cost of ownership: Total life cycle costs of CapEx (e.g., land, building, green design, and construction) and OpEx (e.g., energy costs) in a data center.
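    The total-cost-of-ownership item above reduces to one-time CapEx plus OpEx accumulated over the planning horizon; a toy sketch with assumed figures (none taken from the text):

```python
def tco(capex: float, annual_opex: float, years: int) -> float:
    """Life-cycle cost: one-time CapEx plus OpEx over the planning horizon."""
    return capex + annual_opex * years

# Assumed figures: $50M to build, $4M/year (mostly energy) for 10 years.
print(f"${tco(50e6, 4e6, 10) / 1e6:.0f}M")  # → $90M
```

    Even this crude model shows why energy efficiency matters strategically: over a decade, OpEx can approach the original construction cost.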

    c1-fig-0003

    Figure 1.3 Data center strategic planning forces.

    (Courtesy of Amica Association)

    Additional knowns for each force could be added to suit the needs of an individual data center project. It is clear that the known business drivers are complicated and sometimes conflicting. For example, increasing the resiliency, or flexibility, of a data center will inevitably increase the costs of design and construction as well as continuing operating costs. Another example is that the demand for sustainability will increase the total cost of ownership. One cannot have one's cake and eat it too, so it is essential to prioritize business drivers early in the strategic planning process.

    A strategic plan should also consider emerging technologies such as using direct current power, fuel cell as energy source, or impacts from SDDC.

    1.2.2 Capacity Planning

    Gartner's study indicated that data center facilities rarely meet the operational and capacity requirements of their initial design [15]. It is imperative to focus on capacity planning and resource utilization. Microsoft's top 10 business practices estimated [16] that if a 12 MW data center uses only 50% of its power capacity, then every year approximately US$4–8 million in unused capital is stranded in UPS, generators, chillers, and other invested capital equipment.
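    The stranded-capacity point is simple arithmetic; the per-MW cost below is an assumed figure chosen to make the example concrete, not a number from the source.

```python
capacity_mw = 12.0
utilization = 0.50
unused_mw = capacity_mw * (1 - utilization)   # 6 MW of idle capacity
cost_per_mw = 1_000_000                       # assumed annual $/MW tied up in UPS, generators, chillers
stranded = unused_mw * cost_per_mw
print(f"${stranded / 1e6:.0f}M stranded per year")  # → $6M, within the $4–8M range cited
```

    The lesson is that oversizing is not free: capacity built but never used carries its full capital cost every year it sits idle.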

    1.3 Strategic Location Plan

    In determining data center locations, the business drivers include market demand, market growth, emerging technologies, undersea fiber-optic cables, Internet exchange points, electrical power, capital investment, and other factors. It is essential to have an orchestrated roadmap for building data centers across global locations. Thus, it is important to develop a strategic location plan that consists of a long-term data center plan from a global perspective and a short-term data center implementation plan. This strategic location plan proceeds from continents and countries through states and cities down to the final data center site.

    Considerations for a macro long-term plan that is at continent and country levels include:

    Political and economic stability of the country

    Impacts from political economic pacts (e.g., EU, G8, OPEC, and APEC)

    Gross Domestic Product or relevant indicators

    Productivity and competitiveness

    Market demand and trend

    Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis

    Political, Economic, Social, and Technological (PEST) analysis

    Considerations for a midterm plan that is at province and city levels include:

    Natural hazards (e.g., earthquake, tsunami, hurricane, tornado, and volcano)

    Electricity sources with dual or multiple electrical grid services

    Electricity rate

    Fiber-optic infrastructure with multiple connectivity

    Public utilities (e.g., natural gas and water)

    Airport approach corridors

    Labor markets (e.g., educated workforce and unemployment rate)

    Considerations for a micro, short-term plan within a city, at the campus level, include:

    Site size, shape, accessibility, expandability, zoning, and code controls

    Tax incentives from city and state

    Topography, 100-year flood plain, and water table

    Quality of life (staff retention)

    Security and crime rate

    Proximity to airport and rail lines

    Proximity to chemical plant and refinery

    Proximity to electromagnetic field from high-voltage power lines

    Operational considerations

    Other tools that could be used to formulate location plans include:

    Operations research

    – Network design and optimization

    – Regression analysis on market forecasting

    Lease versus buy analysis or build lease back

    Net present value

    Break-even analysis

    Sensitivity analysis and decision tree
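Two of the financial tools listed above reduce to simple formulas. As a minimal sketch (the discount rate and cash flows below are hypothetical, not from any real project):

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows, with year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def breakeven_years(capex, annual_saving):
    """Simple break-even point, ignoring discounting."""
    return capex / annual_saving

# Build option: $10M up front, $3M/year net benefit for 5 years, 8% rate.
flows = [-10_000_000] + [3_000_000] * 5
print(npv(0.08, flows))   # positive, so building beats doing nothing
print(breakeven_years(10_000_000, 3_000_000))  # ~3.3 years
```

In a lease-versus-buy comparison, the same NPV function would be run on each option's cash flow stream and the results compared.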

    As a reference, you might consider comparing your global location plan against the data centers deployed by Google, Facebook, or Yahoo.

    1.4 Sustainable Design

    Every business needs data centers that support a changing environment: new markets demanding more capacity, new ICT products consuming more power and requiring rack-level cooling [17], and mergers and acquisitions. Sustainable design is essential because data centers can consume 40–100 times more electricity than similar-size office spaces on a square-foot basis. Data center design involves architectural, structural, mechanical, electrical, fire protection, security, and cabling systems.

    1.4.1 Design Guidelines

    Since a data center is heavily driven by electrical and mechanical equipment, which accounts for 70–80% of data center capital costs (Fig. 1.4), a data center is often considered an engineer-led project. Important factors for sustainable design encompass overall site planning, A/E design, energy efficiency best practices, redundancy, phased deployment, and so on. Building and site design could follow the requirements specified in the Leadership in Energy and Environmental Design (LEED) program, a voluntary certification program developed by the U.S. Green Building Council (USGBC). Early in the design process, it is essential to determine the rack floor plan and the elevation plan of the building. A floor plate with large column spacing best accommodates the data center’s ICT racks and cooling equipment. The building elevation plan must be evaluated carefully to cover the space needed for mechanical (HVAC), electrical, structural, lighting, fire protection, and cabling systems. Properly designed column spacing and building elevation ensure appropriate capital investment and minimize operational expenses. Effective space planning maximizes rack locations and achieves the target power density with efficient and effective power and cooling distribution [18].


    Figure 1.4 Focus on mechanical and electrical expenses to reduce cost significantly [16].

    (Courtesy of Microsoft Corporation)

    International technical societies have developed many useful design guidelines. To develop data center design requirements and specification, the following guidelines could be consulted:

    LEED Rating Systems¹

    ANSI/ASHRAE/IES 90.1-2010: Energy Standard for Buildings

    ASHRAE TC 9.9 2011: Thermal Guideline for Data Processing Environments—Expanded Data Center Classes and Usage Guidance

    ASHRAE 2011: Gaseous and Particulate Contamination Guidelines for Data Center

    ANSI/BICSI 002-2011: Data Center Design and Implementation Best Practices

    ANSI/TIA-942-A (August 2012): Telecommunications Infrastructure Standard for Data Center

    Data Centre Code of Conduct Introduction Guide (EU)

    2013 Best Practices Guidelines² (EU)

    Outline of Data Center Facility Standard³ by Japan Data Center Council (JDCC)⁴

    Code for Design of Information Technology and Communication Room (GB50174-2008)

    1.4.2 Reliability and Redundancy

    Redundancy ensures higher reliability, but it has profound impacts on initial investment and ongoing operating costs.

    Uptime Institute® pioneered a tier certification program that structures data center redundancy and fault tolerance on a four-tiered scale. The different redundancy levels can be defined as follows:

    N: base requirement

    N+1 redundancy: provides one additional unit, module, path, or system to the minimum requirement

    N+2 redundancy: provides two additional units, modules, paths, or systems in addition to the minimum requirement

    2N redundancy: provides two complete units, modules, paths, or systems for every one required for a base system

    2(N+1) redundancy: provides two complete (N+1) units, modules, paths, or systems
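The definitions above can be expressed as a small sketch. This is a simplified model for illustration only; real designs also distinguish distribution paths from units and differ by subsystem.

```python
def units_required(n, scheme):
    """Units (e.g., UPS modules) needed for a base requirement of n."""
    formulas = {
        "N": n,            # base requirement only
        "N+1": n + 1,      # one extra unit
        "N+2": n + 2,      # two extra units
        "2N": 2 * n,       # two complete systems
        "2(N+1)": 2 * (n + 1),  # two complete (N+1) systems
    }
    return formulas[scheme]

# A load needing 4 UPS modules at base (N):
for scheme in ("N", "N+1", "2N", "2(N+1)"):
    print(scheme, units_required(4, scheme))  # 4, 5, 8, 10
```

The jump from 4 to 10 modules between N and 2(N+1) is a concrete reminder of why redundancy dominates capital cost.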

    Based on the aforementioned definitions, a matrix table could be established relating the following tier levels to component redundancy, categorized by telecommunication, architectural and structural, electrical, and mechanical systems:

    Tier I Data Center: basic system

    Tier II Data Center: redundant components

    Tier III Data Center: concurrently maintainable

    Tier IV Data Center: fault-tolerant

    The Telecommunications Industry Association’s TIA-942-A [19] contains tables that describe building and infrastructure redundancy in four levels. JDCC’s Outline of Data Center Facility Standard is a well-organized matrix illustrating Building, Security, Electric Equipment, Air Conditioning Equipment, Communication Equipment, and Equipment Management in relation to redundancy Tiers 1, 2, 3, and 4. It is worth highlighting that the matrix also includes seismic design considerations, with a Probable Maximum Loss (PML) that relates to design redundancy.

    The Chinese national standard GB 50174-2008, Code for Design of Information Technology and Communication Rooms, defines tier levels A, B, and C, with A being the most stringent.

    Data center owners should work with A/E consultants to strike a balance between desired reliability, redundancy, and total cost of ownership.

    1.4.3 Computational Fluid Dynamics

    Whereas data centers can be designed by applying best practices, the locations of systems (e.g., racks, air paths, and CRAC units) might not collectively be in their optimum arrangement. Computational Fluid Dynamics (CFD) technology has been used in semiconductor cleanroom projects for decades to ensure uniform airflow inside a cleanroom. CFD offers a scientific analysis and solution to validate cooling capacity, rack layout, and the location of cooling units. One can visualize airflow in hot and cold aisles to optimize room design. During the operating stage, CFD can be used to emulate and manage airflow to ensure that the air path does not recirculate, bypass, or create negative-pressure flow. CFD can also be used to identify hot spots in the rack space.

    1.4.4 DCiM and PUE™

    In conjunction with CFD technology, Data Center Infrastructure Management (DCiM) is used to manage assets and capacity, control change processes, and measure and control power consumption, energy, and environmental conditions.⁵ An energy management system integrates information from sources such as the Building Management System (BMS), utility meters, and UPS into actionable reports, such as accurate asset inventories, space/power/cooling capacities, and bill-back reports. A real-time dashboard allows continuous monitoring of energy consumption so that corrective actions can be taken.

    Professors Robert Kaplan and David Norton once said: "If you can’t measure it, you can’t manage it." Power Usage Effectiveness (PUE™), among other accepted metrics developed by the Green Grid, is a recognized metric for monitoring, and thus controlling, your data center’s energy efficiency.
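PUE is simply the ratio of total facility energy to the energy delivered to the IT equipment; a value of 1.0 would mean zero power and cooling overhead. A minimal sketch, with illustrative numbers:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total energy / IT energy (ideal = 1.0)."""
    return total_facility_kwh / it_equipment_kwh

# A month in which IT gear drew 1.0 GWh while the whole facility drew 1.5 GWh:
print(pue(1_500_000, 1_000_000))  # 1.5
```

The inputs would typically come from DCiM or BMS meter data, measured over the same interval for both quantities.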

    Incorporating both CFD and DCiM early in the design stage is imperative for a successful design and for ongoing data center operations. It will be extremely costly to install monitoring devices after a data center has been constructed.

    1.5 Best Practices and Emerging Technologies

    Although the design of energy-efficient data centers is still evolving, many best practices can be applied whether you are designing a small server room or a large data center. The European Commission published the comprehensive 2013 Best Practices for the EU Code of Conduct on Data Centres. The U.S. Department of Energy’s Federal Energy Management Program published the Best Practices Guide for Energy-Efficient Data Center Design. These, and many other publications, can be consulted when preparing a data center design specification. Here is a short list of best practices and emerging technologies:

    Increase server inlet temperature (Fig. 1.5) and humidity adjustments [20]

    Hot- and cold-aisle configuration

    Hot and cold air containments

    Air management (to avoid bypass, hot and cold air mixing, and recirculation)

    Free cooling using air-side economizer or water-side economizer

    High-efficiency UPS

    Variable speed drives

    Rack-level direct liquid cooling

    Combined heat and power (CHP) in data centers (Fig. 1.6) [21]

    Fuel cell technology [22]

    Direct current power distribution


    Figure 1.5 Adjust environmental conditions (FEMP First Thursday Seminars, U.S. Department of Energy).


    Figure 1.6 CHP System Layout for Data Center.

    1.6 Operations Management and Disaster Management

    Some of the best practices in operations management include applying ISO standards, air management, cable management, preventive and predictive maintenance, 5S, disaster management, and training.

    1.6.1 ISO Standards

    To better manage your data centers, operations management should adhere to international standards: practice what you preach. Applicable ISO standards include the following:

    ISO 9000: Quality management

    ISO 14000: Environmental management

    OHSAS 18001: Occupational Health and Safety Management Standards

    ISO 26000: Social responsibility

    ISO 27001: Information security management

    ISO 50001: Energy management

    ISO 20121: Sustainable events

    1.6.2 Computerized Maintenance Management Systems

    Redundancy alone will not prevent failure and preserve reliability. A computerized maintenance management system (CMMS) is a proven tool, enhanced with mobile, QR/barcoding, or voice recognition capabilities, used mainly for managing and maintaining data center facility equipment, scheduling maintenance work orders, controlling inventory, and purchasing service parts. ICT assets can be managed by DCiM as well as by an Enterprise Asset Management system. A CMMS can be expanded and interfaced with DCiM, BMS, or Supervisory Control and Data Acquisition (SCADA) systems to monitor and improve Mean Time Between Failures and Mean Time To Failure, both closely related to the dependability, or reliability, of a data center. Generally, a CMMS encompasses the following modules:

    Asset management (Mechanical, Electrical, and Plumbing equipment)

    Equipment life cycle and cost management

    Spare parts inventory management

    Work order scheduling (man, machine, materials, method, and tools):

    – Preventive Maintenance (e.g., based on historical data and meter reading)

    – Predictive Maintenance (based on noise, vibration, temperature, particle count, pressure, and airflow)

    – Unplanned or emergency services

    Depository for Operations and Maintenance manual and maintenance/repair history

    A CMMS can also earn points toward LEED certification through preventive maintenance that oversees the HVAC system more closely.
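The reliability metrics a CMMS tracks feed directly into availability estimates. As a minimal sketch of the standard steady-state relationship (Mean Time To Repair is introduced here for illustration, and the numbers are hypothetical):

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability from Mean Time Between Failures (MTBF)
    and Mean Time To Repair (MTTR): A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A component failing on average every 9,999 h and repaired in 1 h:
print(availability(9_999, 1))  # 0.9999, i.e., "four nines"
```

Improving either metric raises availability, which is why CMMS work-order data on repair durations is as valuable as the failure history itself.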

    1.6.3 Cable Management

    A cabling system may seem to be of little importance, but it makes a big impact and is long lasting, costly, and difficult to replace [23]. It should be planned, structured, and installed per the network topology and cable distribution requirements specified in the TIA-942-A and ANSI/TIA/EIA-568 standards. Cables should be organized so that connections are traceable for code compliance and other regulatory requirements. Poor cable management [24] can create electromagnetic interference through induction between data cables and equipment power cables. To improve maintenance and serviceability, cabling should be placed so that it can be disconnected to reach a piece of equipment for adjustments or changes. Pulling, stretching, or bending cables beyond their specified bend radii should be avoided. Enforce cable management discipline to prevent cabling from getting out of control and descending into chaos [24].

    1.6.4 The 5S Pillars [25]

    5S is a lean method that organizations implement to optimize productivity through maintaining an orderly workplace.⁶ 5S is a cyclical methodology including the following:

    Sort: eliminate unnecessary items from the workplace.

    Set in order: create a workplace so that items are easy to find and put away.

    Shine: thoroughly clean the work area.

    Standardize: create a consistent approach with which tasks and procedures are done.

    Sustain: make a habit to maintain the procedure.

    1.6.5 Training and Certification

    Planning and training play a vital role in energy-efficient design and the effective operation of data centers. The U.S. Department of Energy offers many useful training programs and tools.

    The Federal Energy Management Program offers free, interactive online "First Thursday Semin@rs" and "eTraining."⁷ Data center owners can use the Data Center Energy Profiler (DC Pro) software⁸ to profile, evaluate, and identify potential areas for energy efficiency improvements. The Data Center Energy Practitioner (DCEP) Program [26] offers data center practitioners various certification programs.

    1.7 Business Continuity and Disaster Recovery

    In addition to natural disasters, the Internet’s physical infrastructure is vulnerable to terrorist attack, which could be devastating. Statistics also show that over 70% of all data center outages are caused by human error, such as improperly executed procedures or maintenance. It is imperative to have detailed business continuity (BC) and disaster recovery (DR) plans well prepared and practiced. BC at data centers should consider design beyond the requirements of building codes and standards. The International Building Code (IBC) and other codes are generally concerned with the life safety of occupants, with little regard for property or functional losses. To sustain data center operations after a natural disaster, the design of the data center’s structural and nonstructural components (mechanical equipment [27], electrical equipment [28], ducts and pipes [29]) must be hardened with BC in mind.

    Many lessons were learned on DR from two natural disasters: the Great East Japan Tsunami (March 2011) [30] and the eastern U.S. Superstorm Sandy (October 2012). Many of Japan’s data centers—apart from the rolling brownouts—were well prepared for the Japanese tsunami and earthquake. Being constructed in a zone known for high levels of seismic activity, most already had strong measures in place [31].

    Key lessons learned from the aforementioned natural disasters are highlighted as follows:

    Prepare detailed crisis management procedures and a communication chain of command.

    Conduct drills regularly by emergency response team using established procedures.

    Regularly maintain and test run standby generators and critical infrastructure in a data center.

    Have contracts with multiple diesel oil suppliers to ensure diesel fuel deliveries.

    Fly in staff from nonaffected offices. Stock up food, drinking water, sleeping bags, etc.

    Have different communication mechanisms such as social networking, web, and satellite phones.

    Keep required equipment on-site and readily accessible (e.g., flashlights, portable generators, fuel and containers, hoses, and extension cords).

    Brace for the worst: preplan with your customers how to communicate during a disaster, along with a controlled shutdown and DR plan.

    Other lessons learned include using combined diesel and natural gas generators, fuel cell technology, and submersed fuel pumps, and that a cloud computing-like environment can be very useful [32]. Too many risk response manuals serve only as a "tranquilizer" for the organization. Instead, implement a risk management framework that will serve you well in preparing for and responding to a disaster.

    1.8 Conclusion

    This chapter has described how energy use accelerates global warming and results in climate change, floods, droughts, and food shortages. Strategic planning of data centers, applying best practices in design and operations, was introduced. Rapidly increasing electricity demand by data centers for information processing and mobile communications outpaces improvements in energy efficiency. Lessons learned from natural disasters were addressed. Training plays a vital role in successful energy-efficient design and safe operations. Through collective effort, we can apply best practices to radically accelerate the speed of innovation (Fig. 1.7) and to plan, design, build, and operate data centers efficiently and sustainably.


    Figure 1.7 ESIF’s high-performance computing data center—innovative cooling design with PUE at 1.06 [33].

    References

    [1] IPCC. Summary for policymakers. In: Stocker TF, Qin D, Plattner G-K, Tignor M, Allen SK, Boschung J, Nauels A, Xia Y, Bex V, Midgley PM (eds.) Climate Change 2013: The Physical Science Basis. Contribution of Working Group l to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge/New York: Cambridge University Press; 2013.

    [2] Turn down the heat: why a 4°C warmer world must be avoided. Washington, DC: The World Bank; November 18, 2012.

    [3] Koomey J. Growth in data center electricity use 2005 to 2010. Analytics Press; August 2011.

    [4] EMC/IDC (International Data Corporation). Extracting value from chaos; June 2011.

    [5] Gartner. The top 10 strategic technology trends for 2012; November 2011.

    [6] Ebbers M, Archibald M, Fonseca C, Griffel M, Para V, Searcy M. Smart Data Centers, Achieving Greater Efficiency. 2nd ed. IBM Redpaper; 2011.

    [7] Best Practices Guide for Energy-Efficient Data Center Design. Federal Energy Management Program, U.S. Department of Energy; March 2011.

    [8] Mell P, Grance T. The NIST definition of cloud computing. NIST, U.S. Department of Commerce; September 2011.

    [9] Sigmon D. What is the difference between SDDC and cloud?. InFocus; August 2013.

    [10] VMware. Delivering on the promise of the software-defined data center; 2013.

    [11] Vision and roadmap: routing telecom and data centers toward efficient energy use. Sponsored by: Emerson Network Power, Silicon Valley Leadership Group, TIA, Yahoo Inc., U.S. Department of Energy; May 13, 2009.

    [12] Porter ME. How competitive forces shape strategy. Harvard Bus Rev 1980;57(2):137–145.

    [13] Geng H. Strategic planning process. Amica Association; 2012.

    [14] Geng H. Data centers plan, design, construction and operations. Datacenter Dynamics Conference, Shanghai; September 2013.

    [15] Bell MA. Use best practices to design data center facilities. Gartner Publication; April 22, 2005.

    [16] Top 10 business practices for environmentally sustainable data centers. Microsoft; August 2012.

    [17] Dunlap K, Rasmussen N. Choosing between room, row, and rack-based cooling for data centers. White Paper 130, Rev. 2, Schneider Electric Corporation.

    [18] Rasmussen N, Torell W. Data center projects: establishing a floor plan. APC White Paper #144; 2007.

    [19] Telecommunications Infrastructure Standard for Data Centers. Telecommunications Industry Association; August 2012.

    [20] Server Inlet Temperature and Humidity Adjustments. Available at http://www.energystar.gov/index.cfm?c=power_mgt.datacenter_efficiency_inlet_temp. Accessed on May 6, 2014.

    [21] Darrow K, Hedman B. Opportunities for combined heat and power in data centers. Arlington: ICF International, Oak Ridge National Laboratory; March 2009.

    [22] 2010 hydrogen and fuel cell global commercialization & development update. International Partnership for Hydrogen and Fuel Cells in the Economy; November 2010.

    [23] Best Practices Guides: Cabling the Data Center. Brocade; 2007.

    [24] Apply proper cable management in IT racks—a guide for planning, deployment and growth. Emerson Network Power; 2012.

    [25] Productivity Press Development Team. 5S for Operators: 5 Pillars of the Visual Workplace. Portland: Productivity Press; 1996.

    [26] DCEP program energy training-assessment process manual. LBNL and ANCIS Inc.; 2010.

    [27] Installing seismic restraints for mechanical equipment. FEMA; December 2002.

    [28] Installing seismic restraints for electrical equipment. FEMA; January 2004.

    [29] Installing seismic restraints for duct and pipe. FEMA; January 2004.

    [30] Yamanaka A, Kishimoto Z. The realities of disaster recovery: how the Japan Data Center Council is successfully operating in the aftermath of the earthquake. JDCC, Alta Terra Research; June 2011.

    [31] Jones P. The after effect of the Japanese earthquake. Tokyo: Datacenter Dynamics; December 2012.

    [32] Kajimoto M. One year later: lessons learned from the Japanese tsunami. ISACA; March 2012.

    [33] High performance computing data center. National Renewable Energy Laboratory, U.S. Department of Energy; August 2012.

    Further reading

    2011 Thermal Guidelines for Data Processing Environments. ASHRAE TC 9.9; 2011.

    Al Gillen, et al., The software-defined datacenter: what it means to the CIO. IDC; July 2012.

    A New Approach to Industrialized IT. HP Flexible Data Center; November 2012.

    Annual Energy Outlook 2013 with Projections to 2040. U.S. Energy Information Administration; April 2013.

    Avelar V, Azevedo D, French A. PUE™: a comprehensive examination of the metric. The Green Grid; 2012.

    Brey T, Lembke P, et al. Case study: the ROI of cooling system energy efficiency upgrades. White Paper #39, the Green Grid; 2011.

    Create a Project Plan-Cops, U.S. Dept. of Justice. Available at http://www.itl.nist.gov/div898/handbook/index.htm. Accessed on May 6, 2014.

    Coles H, Han T, Price P, Gadgil A, Tschudi W. Assessing corrosion risk from outside-air cooling: are coupons a useful indicator? LBNL; March 2011.

    eBay Data Center Retrofits: The Costs and Benefits of Ultrasonic Humidification and Variable Speed Drive. Energy Star Program, the U.S. EPA and DOE; March 2012. Available at http://www.energystar.gov/ia/products/power_mgt/downloads/Energy_Star_fact_sheet.pdf?0efd-83df. Accessed on May 6, 2014.

    EU energy trends to 2030. European Commission; 2009.

    European Code of Conduct on Data Centre Energy Efficiency, introductory guide for applicants 2013. European Commission; 2013.

    Gartner IT Glossary. 2012. Available at http://www.gartner.com/itglossary/data-center/. Accessed on May 6, 2014.

    Geary J. Who protects the internet? Popular Science; March 2009.

    Gens F. Top 10 predictions, IDC predictions 2013: competing on the 3rd platform. IDC; November 2012.

    Govindan S, Wang D, Chen L, Sivasubramaniam A, Urgaonkar B. Modeling and Analysis of Availability of Datacenter Power Infrastructure. Department of Computer Science and Engineering, The Pennsylvania State University, IBM Research Zurich. Technical Report. CSE 10-006.

    Green Google. Available at http://www.google.com/green/. Accessed on May 6, 2014.

    Hickins M. Companies test backup plans, and learn some lessons. Wall Street Journal, October 2012.

    Iyengar M, Schmidt RR. Energy Consumption of Information Technology Data Centers. Electronics Cooling Magazine, Publisher: ITEM Media, Plymouth Meeting, PA; December 2010.

    Joshi Y, Kumar P. Energy Efficient Thermal Management of Data Centers. New York: Springer; 2012.

    LaFontaine WR. Global technology outlook 2013. IBM Research; April 2013.

    Newcombe L, et al. 2013 Best Practices for the EU Code of Conduct on Data Centers. European Commission; 2013.

    Pitt Turner IV W, Brill K. Cost model: dollars per kW plus dollars per square foot of computer floor. White Paper, Uptime Institute; 2008.

    Planning guide: getting started with big data. Intel; 2013.

    Polar ice melt is accelerating. The Wall Street Journal, November 30, 2012.

    Porter M. Competitive Strategy: Techniques for Analyzing Industries and Competitors. New York: Free Press, Harvard University; 1980.

    Prescription for server room growth: Design a Scalable Modular Data Center. IBM Global Services; August 2009.

    Report to Congress on server and data center energy efficiency. U.S. Environmental Protection, Agency Energy Star Program; August 2007.

    Rodgers TL. Critical facilities: reliability and availability. Facility Net; August 2013.

    Salim M, Tozer R. Data Center Air Management Metrics-Practical Approach. Hewlett-Packard, EYP MCF.

    Sawyer R. Calculating total power requirements for data centers. APC; 2004.

    Trick MA. Network Optimization. Carnegie Mellon University; 1996.

    U.S. Energy Information Administration. Available at http://www.eia.gov/todayinenergy/. Accessed on May 6, 2014.

    VanDenBerg S. Cable pathways: a data center design guide and best practices. Data Center Knowledge, Industry Perspectives; October 2013.

    Wenning T, MacDonald M. High performance computing data center metering protocol. Federal Energy Management Program, U.S. Department of Energy; 2010.

    Notes

    1 http://www.usgbc.org/leed/rating-systems

    2 European Commission, Directorate-General, Joint Research Centre, Institute for Energy and Transport, Renewable Energy Unit.

    3 http://www.jdcc.or.jp/english/index.html, see Outline of Data Center Facility Standard (PDF)

    4 http://www.jdcc.or.jp/english/index.html, see Japan Data Center Council (PDF)

    5 http://www.raritandcim.com/

    6 Lean Thinking and Methods, the U.S. Environmental Protection Agency.

    7 http://apps1.eere.energy.gov/femp/training/first_thursday_seminars.cfm

    8 http://www1.eere.energy.gov/manufacturing/datacenters/software.html

    2

    Energy and Sustainability in Data Centers

    William J. Kosik

    Hewlett-Packard Company, Chicago, IL, USA

    2.1 Introduction

    Flashback to 1999: Forbes published a seminal article coauthored by Peter Huber and Mark Mills. It had a wonderful tongue-in-cheek title: Dig More Coal—the PCs Are Coming. The premise of the article was to challenge the idea that the Internet would actually reduce overall energy use in the United States, especially in sectors such as transportation, banking, and health care where electronic data storage, retrieval, and transaction processing were becoming integral to business operations. The opening paragraph, somewhat prophetic, reads as follows:

    Southern California Edison, meet Amazon.com. Somewhere in America, a lump of coal is burned every time a book is ordered on-line. The current fuel-economy rating: about 1 pound of coal to create, package, store and move 2 megabytes of data. The digital age, it turns out, is very energy-intensive. The Internet may someday save us bricks, mortar and catalog paper, but it is burning up an awful lot of fossil fuel in the process.

    What Mills was trying to demonstrate is that even if you never have to drive to your bank to deposit a paycheck, or require delivery trucks to bring CDs to your house to acquire new music, a great deal of electricity is still being used by the server that processed your transaction or the storage and networking gear that is delivering your streaming media. While I am not going to detail a life-cycle assessment counting kWh, carbon, or water to compare the old way to the new way, one thing is for sure: the Internet has created new services that do not replace anything at all, but are completely new paradigms. The energy use we are talking about here is completely additive.

    Flash forward to now: one of these new paradigms that comes to mind is social networking. If Mills and Huber wrote the article today, it would have to address how much coal is used to tweet, friend someone on Facebook, or network with a professional group on LinkedIn. The good news is that the data center industry has had concerted efforts underway for some time to minimize the electricity required to power servers, storage, and networking gear, as well as to reduce the overhead energy used by cooling processes and power distribution systems. For example, data center owners and end users are demanding better server efficiency and airflow optimization, and are using detailed building performance simulation techniques that compare before-and-after energy usage to justify higher initial spending in exchange for reduced ongoing operational costs.

    The primary purpose of this chapter is to provide information and guidance on the drivers of energy use in data centers. It is a complex topic: the variables and skillsets involved in optimizing energy use and minimizing environmental impacts are cross-disciplinary and include IT professionals, power and cooling engineers, builders, architects, finance and accounting professionals, and energy procurement teams. While these types of multidisciplinary teams are not unusual when tackling large, complex business challenges, planning, designing, and operating a new data center building is very intricate and requires a great deal of care and attention. In addition, a data center has to run 8,760 hours a year nonstop, through all scheduled maintenance and unscheduled breakdowns, while ensuring that ultracritical business outcomes are delivered on time as promised. In summary, the planning, design, implementation, and operation of a data center take a considerable amount of effort and attention to detail. And after the data center is built and operating, the energy cost of running the facility, if not optimized during the planning and design phases, will leave a legacy of inefficient operation and high electricity costs.

    So to keep it simple, this chapter will provide some good information, tips, and resources for further reading that will help obviate (metaphorically) having to replace the engine on your car simply to reduce energy expenditures when the proper engine could have been installed in the first place. The good news is the industry as a whole is far more knowledgeable and interested in developing highly energy-efficient data centers (at least compared to a decade ago). With this said, how many more new paradigms that we haven’t even thought of are going to surface in the next decade that could potentially eclipse all of the energy savings that we have achieved in the current decade? Only time will tell, but it is clear to me that we need to continue to push hard for nonstop innovation, or as another one of my favorite authors, Tom Peters, puts it, Unless you walk out into the unknown, the odds of making a profound difference…are pretty low. So as the need for data centers continues to grow, each consuming as much electricity and water as a small town, it is imperative that we make this profound difference.

    2.1.1 How Green Is Green?

    I frequently get questions like, "Is going green the right thing to do, environmentally speaking? Or is it just an expensive trend? Or is there a business case for doing so (immediate energy savings, future energy savings, increased productivity, better disaster preparation, etc.)?" First, it is certainly the right thing to do. However, each individual, small business, or corporation will have a different tolerance for the amount of collateral goodness they want to spread around. CIOs have shareholders and a board of directors to answer to, so there must be a compelling business case for any green initiative. This is where the term sustainable can really be applied—sustainable from an environmental perspective but also from a business perspective. And the business perspective could include tactical upgrades to optimize energy use, or it could include increasing market share by taking an aggressive stance on minimizing environmental impact—and letting the world know about it.

    Certainly there are different shades of green here that need to be considered. When looking at specific greening activities for a data center, for example, there is typically low-hanging fruit related to the power and cooling systems that will have paybacks (construction costs compared to reduced operational costs attributable to energy use) of 1 or 2 years. Some have very short paybacks because little or no capital cost is involved. Examples include adjusting set points for temperature and humidity, minimizing raised-floor leakage, optimizing control and sequencing of cooling equipment, and optimizing air management on the raised floor to eliminate hot spots (which reduces the need to subcool the air). Other upgrades, which are much more substantial in first cost, historically have shown paybacks closer to 5 years. These are upgrades undertaken not only to increase energy efficiency but also to lengthen the life of the facility and increase reliability, so the first cost is attributable to things other than pure energy efficiency. These types of upgrades typically include replacement of central cooling plant components (chillers, pumps, cooling towers) as well as electrical distribution (UPS, power distribution units). They are typically more invasive and will require shutdowns unless the facility has been designed for concurrent operation during maintenance and upgrades. A thorough analysis, including first cost, energy cost, operational costs, and greenhouse gas emissions, is the only way to really judge the viability of different projects.
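The two classes of upgrade described above differ mainly in the ratio of first cost to annual savings. A simple (undiscounted) payback comparison makes the distinction concrete; the project figures here are illustrative assumptions, not data from this chapter:

```python
# Simple payback: first cost divided by annual energy savings.
# Ignores discounting, maintenance deltas, and non-energy benefits
# (reliability, facility life), which favor the larger projects.
# All dollar figures are hypothetical.


def simple_payback_years(first_cost: float, annual_savings: float) -> float:
    """Years for cumulative energy savings to recover the first cost."""
    return first_cost / annual_savings


# Low-hanging fruit: set-point changes, airflow tuning (low first cost)
quick_win = simple_payback_years(first_cost=20_000, annual_savings=15_000)

# Major upgrade: chiller plant replacement (high first cost, other benefits too)
plant_upgrade = simple_payback_years(first_cost=2_000_000, annual_savings=400_000)

print(f"quick win: {quick_win:.1f} years; plant upgrade: {plant_upgrade:.1f} years")
```

As the chapter notes, simple payback understates the case for the larger projects, since part of their first cost buys reliability and facility life rather than efficiency; a full analysis should allocate costs accordingly.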

    So when you’re ready to go green in your data center, it is critical to take a holistic approach. As an example, when judging the environmental impact of a project, it is important to look at the entire life cycle, all the way from the extraction of the raw materials through assembly, construction, shipping, use, and recycling/disposal. This can be a very complex analysis, but even a cursory look will better inform the decision-making process. The same is true for understanding water and land use, and how the people who are part of the final product are affected. Similarly, the IT gear should also be included in this analysis. Certainly it is not likely that servers will be replaced simply to reduce energy costs, but it is possible to combine IT equipment retirement with energy efficiency programs. The newer equipment will likely have more efficient power supplies and more robust power management, resulting in lower overall energy consumption. The newer IT gear will also reduce the cooling load and, depending on the data center layout, will improve airflow and reduce air management headaches. Working together, the facilities and IT organizations can make an impact in reducing data center energy use that could not be achieved by either group working alone (Fig. 2.1).


    Figure 2.1 Data center planning timeline (HP image).
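The compounding effect of an IT refresh is worth quantifying: every kilowatt removed from the IT load also removes the power-delivery and cooling overhead needed to support it. A common way to express that overhead is the facility-to-IT power ratio (often reported as PUE, power usage effectiveness); the ratio and load figures below are assumed values for illustration:

```python
# Facility-level savings from an IT load reduction.
# Assumes a fixed facility-to-IT power ratio (PUE-style multiplier);
# the 1.8 ratio and 100 kW reduction are hypothetical.


def facility_savings_kw(it_load_reduction_kw: float, facility_to_it_ratio: float) -> float:
    """Total facility power avoided when the IT load drops by the given amount."""
    return it_load_reduction_kw * facility_to_it_ratio


# Example: retiring old servers frees 100 kW of IT load at a ratio of 1.8
total = facility_savings_kw(it_load_reduction_kw=100, facility_to_it_ratio=1.8)
overhead = total - 100  # the cooling/power-chain share of the saving
print(f"{total:.0f} kW avoided in total, {overhead:.0f} kW of it in overhead")
```

This is one reason the facilities/IT collaboration described above pays off: the IT team's refresh decisions show up on the facilities side of the meter as well.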

    2.1.2 Environmental Impact

    Bear in mind that a typical enterprise data center consumes 40 times or more as much energy as a similarly sized office building. This can have a major impact on a company’s overall energy use, operational costs, and carbon footprint. As a further complication, not all IT
