Optical Interconnects for Data Centers
428 pages · English · ePub
About This Book

Current data center networks, based on electronic packet switches, are experiencing an exponential increase in network traffic due to developments such as cloud computing. Optical interconnects have emerged as a promising alternative offering high throughput and reduced power consumption. Optical Interconnects for Data Centers reviews key developments in the use of optical interconnects in data centers and the current state of the art in transforming this technology into a reality. The book discusses developments in optical materials and components (such as single- and multi-mode waveguides), circuit boards, and ways the technology can be deployed in data centers.

Optical Interconnects for Data Centers is a key reference text for electronics designers, optical engineers, communications engineers and R&D managers working in the communications and electronics industries as well as postgraduate researchers.

  • Summarizes the state-of-the-art in this emerging field
  • Presents a comprehensive review of all the key aspects of deploying optical interconnects in data centers, from materials and components, to circuit boards and methods for integration
  • Contains contributions that are drawn from leading international experts on the topic

Optical Interconnects for Data Centers, by Tolga Tekin, Nikos Pleros, Richard Pitwon, and Andreas Hakansson, is available in PDF and ePub formats (Computer Science & Computer Networking).


Part I: Introduction

Chapter 1

Data center architectures

C. DeCusatis, Marist College, Poughkeepsie, NY, United States

Abstract

Data centers house the computational power, storage, networking, and software applications that form the basis of most modern business, academic, and government institutions. This chapter provides an overview of data center fundamentals, with particular emphasis on the role of optical data networking. The chapter begins with an introduction and brief history of data center design, followed by a discussion of environmental considerations including raised floors, fire suppression, energy consumption, and the role of containerized data centers. Industry standard data center classification tiers are presented. Application architectures are discussed, including client-server, peer-to-peer, high performance computing, and cloud computing infrastructure-as-a-service architectures. The physical architectures of Ethernet networked data centers are presented, including Layer 2 and Layer 3 design considerations (alternatives including Fibre Channel storage networks and InfiniBand are briefly mentioned). Network protocols are discussed, including equal-cost multipath (ECMP), Spanning Tree Protocol (STP), Transparent Interconnection of Lots of Links (TRILL), shortest path bridging (SPB), and others, in the context of top-of-rack, middle-of-row, leaf-spine, spline, and other topologies. Design considerations are discussed, including agility, flattened converged networks, virtualization, latency, oversubscription, energy efficiency, scalability, security, availability, and reliability (including database availability constructs). Finally, future design considerations for next-generation data centers are presented, including software-defined data centers and the role of optical interconnects, with reference to industry roadmaps.

Keywords

Data center; architecture; enterprise; client-server; peer-to-peer; cloud; Ethernet; SDN

1.1 Introduction

Data centers house the computational power, storage, networking, and software applications that form the basis of most modern business, academic, and government institutions. It is difficult to overstate the importance of information technology (IT) in our daily lives. Data centers support the hardware and software infrastructure which is critical to both Fortune 500 companies and startups. It is difficult to find a market related to business, finance, transportation, health care, education, entertainment, or government which has not been disrupted by IT [1,2]. The widespread availability of data on demand anytime, anywhere, has caused new applications to emerge, including cloud computing, big data analytics, real-time stock trading, and more. Workloads have evolved from a predominantly static environment into one which changes over time in response to user demands, often as part of a highly virtualized, multitenant data center. In response to these new requirements, data center architectures are also undergoing significant changes. This chapter provides an overview of data center fundamentals, with particular emphasis on the role of optical data networking.
It can be difficult to describe a “typical” data center, since there are many types of servers, storage devices, and networking equipment which can be architected in different ways. Large centralized mainframe computers were first developed in the 1950s and 1960s, at a time when computer networking was extremely limited (peripheral devices such as printers could be placed no more than 400 feet away from the mainframe, connected by inch-thick copper cables [2]). This led to most of the computer equipment being kept in a large room or “glass house,” the forerunner of contemporary data centers. In this environment, computer security was equivalent to physical security, since few users had access to the expensive computer hardware required. Over time, mainframes evolved to become significantly more cost-efficient and powerful, setting the standard for so-called “enterprise class computing” (analogous to “carrier class” networking in the central office of large telecom providers). An enterprise class data center provided very high levels of reliability, availability, and scalability (RAS), as well as large amounts of centralized processing power. Over time, the description for enterprise computing came to be associated with platforms other than the mainframe, as x86-based servers gradually became capable of offering similar performance and reliability.
Mainframes continue to be used today by Fortune 500 companies worldwide due to their extremely high performance and RAS characteristics,1 although other platforms comprise most of the server volumes in data centers and cloud computing environments. While mainframe architectures derive their increased processing power by scaling up the capacity of their processors in each successive technology generation, other platforms achieve high performance by scaling out (adding more processors to form a cluster or server farm). The predominant server platform is currently based on the Intel x86 microprocessor, packaged into servers which mount into standardized 19-inch-wide equipment racks (see Fig. 1.1). Servers with other processors are available (including AMD and IBM Power microprocessors) in different form factors, including cabinets and towers. To improve the density of servers per rack, as well as potentially reduce power consumption and heat generation, other form factors such as blade servers can also be used (see Fig. 1.2). Enterprise data centers can become quite large; e.g., a typical data center used by Microsoft could easily be the size of a warehouse (25,000–30,000 m²), consume 20–30 MW of electricity, and cost over $500M to construct [3].
Figure 1.1 Rack mounted server.
Figure 1.2 Blade server.
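To put the warehouse-scale figures above in perspective, a short back-of-the-envelope calculation (using only the ranges quoted in the text, not measured data) gives the average power density of such a facility:

```python
# Rough power-density estimate for a warehouse-scale data center,
# using the ranges cited above (25,000-30,000 m^2, 20-30 MW).
def power_density_w_per_m2(power_mw: float, area_m2: float) -> float:
    """Average electrical power per square meter of floor space."""
    return power_mw * 1_000_000 / area_m2

low = power_density_w_per_m2(20, 30_000)   # conservative end of both ranges
high = power_density_w_per_m2(30, 25_000)  # aggressive end of both ranges
print(f"{low:.0f}-{high:.0f} W/m^2")  # roughly 667-1200 W/m^2
```

Even the low end is an order of magnitude above typical office-building power density, which is why electricity cost and cooling dominate facility design.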
Enterprise class servers were the first to take advantage of server virtualization as early as the 1960s, using software to create multiple virtual machines (VMs) hosted on a single physical machine. Virtualization technology was not widely adopted on x86 servers until much later, but has since become important for modern data centers. Virtualization software helps improve utilization of physical servers, and allows multiple VMs running different operating systems to be installed on a single processor. However, there are still many data centers consisting of lightly utilized servers running a “bare metal” operating system, or a hypervisor with a small number of VMs.
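The consolidation benefit described above can be sketched numerically. The utilization figures and the 80% packing limit below are hypothetical illustrations, not data from the text; the sketch simply shows how many virtualized hosts could absorb a set of lightly utilized bare-metal servers:

```python
# Illustrative consolidation estimate (hypothetical numbers): pack the
# average CPU load of each lightly utilized physical server, as a VM,
# onto as few hosts as possible using first-fit decreasing.
def hosts_needed(server_utils, host_capacity=0.80):
    """Minimum hosts if each VM inherits its server's average load and
    each host is filled to at most `host_capacity` of one host's CPU."""
    hosts = []  # remaining capacity per host
    for load in sorted(server_utils, reverse=True):
        for i, free in enumerate(hosts):
            if free >= load:
                hosts[i] -= load  # place VM on an existing host
                break
        else:
            hosts.append(host_capacity - load)  # open a new host
    return len(hosts)

# Ten servers averaging 10-15% CPU fit comfortably on two hosts.
utils = [0.10, 0.12, 0.15, 0.10, 0.11, 0.13, 0.10, 0.14, 0.12, 0.10]
print(hosts_needed(utils))  # 2
```

Real capacity planners must also account for memory, I/O, and failover headroom; this sketch only captures the CPU-utilization argument made above.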

1.2 Data center environment considerations

Designing a data center is far more complicated than simply assembling enough servers, storage, and networking in a large building. Considerations ranging from geographic location and proximity to inexpensive electrical power, to disaster recovery planning and much more, can require large investments of time and money, even for a moderately sized facility. In this section, we provide a brief overview of some important data center design features.
The location of a data center and its backup facilities is typically influenced by factors such as safety (proximity to earthquake fault lines, flood plains, etc.), access to high-speed wide area networking, and the cost of construction and real estate. As with any large facility, data centers require physical security such as video monitoring systems, restricted access using badge locks, and similar precautions. Because many facilities are designed to operate continuously, some employ three or four shifts of staff to manage and service equipment.
Data centers often have unique architectural requirements, the most prominent being raised flooring which provides space for cabling and air cooling under the servers (see Fig. 1.3). A typical raised floor may be 80–100 cm deep, and covered with a grid of removable 60 cm square floor tiles. Some tiles may be perforated to facilitate air flow, since the underfloor area provides a plenum for air circulation that helps cool the equipment racks as part of the air conditioning system. There are several general types of raised floors, which are documented in detail as part of various industry standards, originally developed for telecom environments compliant with the Network Equipment Building Standard (NEBS) [4,5] and other environments [6]. For example, one type consists of vertical steel pedestals called stringers, which are mechanically fastened to a concrete floor. The pedestal height can be adjusted to allow for different raised floor heights. Stringered raised floors provide good structural integrity, including support for lateral loads, through mechanical attachments between the pedestal heads. Equipment racks are generally attached to the floor using toggle bars or some other form of bracing. This approach is commonly used in areas where the floor will bear large amounts of equipment or be exposed to earthquakes. Structured platforms can also be constructed of welded or bolted steel frames, sturdy enough to permit equipment to be fastened directly to the platform. For areas which require less mechanical integrity, stringerless raised floors may be used. This method is significantly weaker, since it only provides pedestals to support floor panels at their corners, but it also provides fewer obstructions to access the space under the raised floor.
Figure 1.3 Data center raised floor.
The data center must also be protected by a fire suppression system, preferably using a chemical agent which will not damage the servers (unlike plain water). Safety regulations affect the different types of optical and copper cables which can be installed in a data center. In particular, there are several classifications for optical fiber cables, as established by the National Fire Code [7] and National Electrical Code [8], which are enforced by groups such as the Underwriters Laboratories [9]. These include riser, plenum, low halogen, and data processing environments. Riser rated cable (UL 1666, type OFNR) is intended for use in vertical building cable plants, but provides only nominal fire protection. An alternative cable type is plenum rated (UL 910, type OFNP) which is designed not to burn unless extremely high temperatures are reached. Plenum rated cable is required for installation in air ducts by the 1993 National Fire Code 770-53, although there is an exception for raised floor and data processing environments which may be interpreted to include subfloor cables. There is also an exception in the National Electrical Code that allows for some cables installed within a data center to be rated “DP” (data processing) rather than plenum (see the “Information technology equipment” section of the code, Article 645-5 (d), exception 3). Some types of plenum cable are also qualified as “limited combustibility” by the National Electrical Code. There are two basic types of plenum cable, manufactured with either a Teflon- or PVC-based jacket. Although they are functionally equivalent, the Teflon-based jackets tend to be stiffer and less flexible, which can affect installation. Outside North America, another standard known as low halogen cable is widely used; this burns at a lower temperature than plenum, but does not give off toxic fumes. 
Yet another variant is low smoke/zero halogen, in which the cable jacket is free from toxic chemicals including chlorine, fluorine, and bromides. It remains challenging to find a single ca...
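The cable ratings discussed above can be summarized as a simple lookup table. This sketch is informational only: the descriptions paraphrase the text, and the NEC/NFPA codes themselves, not any such table, govern what may actually be installed.

```python
# Summary lookup of the optical-cable flame ratings discussed above.
# Informational sketch only -- consult the NEC/NFPA codes for
# authoritative installation rules.
CABLE_RATINGS = {
    "OFNR": ("riser (UL 1666)",
             "vertical building cable plants; nominal fire protection"),
    "OFNP": ("plenum (UL 910)",
             "air ducts and plenum spaces; burns only at extremely high temperatures"),
    "DP":   ("data processing (NEC Article 645)",
             "within a data center, per the Article 645-5(d) exception"),
    "LSZH": ("low smoke / zero halogen",
             "jacket free of chlorine, fluorine, and bromides"),
}

def describe(cable_type: str) -> str:
    """Return a one-line summary for a cable marking such as 'OFNP'."""
    rating, use = CABLE_RATINGS[cable_type.upper()]
    return f"{cable_type.upper()}: {rating} -- {use}"

print(describe("ofnp"))
```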

Table of contents

  1. Cover image
  2. Title page
  3. Table of Contents
  4. Copyright
  5. List of contributors
  6. Biography
  7. Preface
  8. Woodhead Publishing Series in Electronic and Optical Materials
  9. Part I: Introduction
  10. Part II: Materials and Components
  11. Part III: Circuit Boards
  12. Part IV: Using Optical Interconnects to Improve Network Architectures in Data Centers
  13. Index