Data centers have evolved into physical and virtual infrastructures. Many companies run hybrid clouds, a combination of both types of data centers.
ABSTRACT
At the most basic level, a data center centralizes an organization’s IT operations and equipment. Data is stored, managed and disseminated across a variety of devices. It houses computer systems and associated components such as telecommunications and storage systems. Often included are redundant power systems, data communications connections, environmental controls and security devices. Yet today’s data centers are far from basic. Many large data centers use as much electricity as a small town and are a critical component of a company’s business strategy.
Data is to this century what oil was to the last one: a driver of growth and change.
- The Economist 5/6/17
PROBLEM STATEMENT
Data centers have evolved into physical and virtual infrastructures. Many companies run hybrid clouds, a combination of both types of data centers. The role and composition of the data center has changed significantly. Building a data center was once a 25+ year commitment, with inefficient power and cooling, inflexible cabling and no mobility within or between data centers. Now it’s about speed, performance and efficiency. One size does not fit all, and data centers span a variety of architectures and configurations.
The Evolution of Hyperscale Data Centers
The word “hyperscale” is relatively new. A hyperscale data center, or hyperscale computing, refers to the facilities and provisioning required in distributed computing environments. The driver of this change is the ability to scale and grow systems quickly, from a few servers to thousands. This type of computing is usually implemented in companies with a large focus on big data and cloud computing. Hyperscale data center companies strike a balance between standardization and flexibility. They look to buy less expensive infrastructure, have faster refresh cycles and allow the application or automation software to manage performance and failures so that administrators can focus on innovation.
Software plays a big role in driving operational efficiencies. Companies such as Facebook, Amazon, Google and Microsoft have been leaders driving the definition and evolution of the hyperscale movement. At Facebook, there is one administrator per 24,000 servers. Compare that to a traditional enterprise with one administrator per 300-700 servers (Source: Wikibon.org Hyperscale invades the enterprise datacenter 5/14).
However, hyperscale computing is not just an American phenomenon; it is global, with leading players around the world. In China, the well-known BAT group comprises Baidu, Alibaba and Tencent. While these brands may not be household names in the Americas or Europe, they are driving significant innovation across Asia. Baidu, a Chinese internet search engine, controls the majority of the country’s search market. Alibaba, an e-commerce platform, serves as a middleman between online buyers and sellers, facilitating the sale of goods between the two parties through its extensive network of websites. Tencent, one of the largest and best-performing internet companies in the world, offers services ranging from a social networking application to multiplayer online games. Companies across the globe are influencing and driving innovation in hyperscale computing.
From a technology view, the major forces driving hyperscale data centers are increasing requirements for application performance, a growing need to reduce operational and capital expenditures, and ever-increasing technology investments. Many companies, not just hyperscale operators, are implementing the cloud as a partial or complete solution to help them rapidly scale their infrastructure as business requirements skyrocket. Yet it’s not just about hyperscale. We see a variety of company types embracing these new technologies, from enterprises and service providers to governments.
QUANTIFYING GROWTH
One of the leading sources on data center growth, the Cisco Global Cloud Index, projects 3.3x growth in global data center traffic by 2020: 15.3 zettabytes (ZB) per year, up from 4.7 ZB per year in 2015. Of that traffic, 92% will come from the cloud by 2020. The number of hyperscale data centers will grow from 259 at the end of 2015 to 485 by 2020, and by then they will house 47 percent of all installed data center servers. Understanding the architectural shifts around the hyperscale movement is critical, both as a supplier and as an enterprise customer, to better address the increasing demand for greater performance.
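As a rough sanity check on these figures, the implied compound annual growth rate can be derived from the 2015 and 2020 traffic projections (a back-of-the-envelope sketch; the Cisco index itself does not publish this intermediate number):

```python
# Back-of-the-envelope check of the Cisco Global Cloud Index figures cited above.
# Traffic is projected to grow from 4.7 ZB/year (2015) to 15.3 ZB/year (2020).

traffic_2015 = 4.7   # zettabytes per year
traffic_2020 = 15.3  # zettabytes per year
years = 2020 - 2015

growth_multiple = traffic_2020 / traffic_2015  # ~3.26x, consistent with the cited 3.3x
cagr = growth_multiple ** (1 / years) - 1      # implied compound annual growth rate

print(f"growth multiple: {growth_multiple:.2f}x")
print(f"implied CAGR: {cagr:.1%}")             # roughly 27% per year
```

In other words, the cited 3.3x five-year multiple corresponds to traffic growing by roughly a quarter every year.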
ARCHITECTURAL SHIFTS
Data centers continue to evolve due to architectural shifts in the key building blocks of network/switching, compute/server, and storage. Let us look at these elements individually.
1) Server technology is rapidly changing. The hyperscale companies have increased in importance and influence. Speed is a critical driver as performance requirements increase. While the industry is at 8 Gbps, companies are now looking for 16, 25 and 32 Gbps. The industry has to improve existing connectors for speed, and now is the time to develop a universal, flexible interconnect. In fact, Intel recently introduced a new processor platform, Purley, which supports cabled high-speed signals for the first time. FPGA (field-programmable gate array) technology is being used in many servers for security and performance, and external I/O to the server is evolving from 10G and 25G to 50G. Lastly, thermal constraints are also driving rack architecture changes.
2) Storage is also evolving. Protocols are converging, and PCIe is increasingly used for performance. PCIe switching enables access to a larger drive count, generally requiring more internal and external cables. Flash is becoming more prevalent, driving high-density packaging and performance. We see this rising demand for increased flexibility and performance, which we meet by providing innovative interconnect solutions. External cable connections need plug connectors with a lower profile than mini-SAS HD. And finally, hot-plug serviceability requires constantly connected power for drawer systems.
3) Switch trends are evolving as the industry moves from 25G (NRZ) to 112G (PAM-4). Switch chips are also increasing in size and will soon need correspondingly larger sockets. Hyperscale demands are driving 200G/400G uplinks. The trend toward larger-scale switches that need higher-performance, lower-loss connections is being met by orthogonal and cabled backplane architectures. Another performance-enhancing approach is to use internal cables that connect directly to the front-panel I/O. Within-rack communication can still be accomplished using copper direct attach cables (DACs) at 25G and 56G, and probably at 112G as well.
4) Silicon architectures and integrated circuits: Many cloud companies are utilizing different silicon architectures such as the GPU (graphics processing unit). GPUs can accelerate computing for SIMD (single instruction, multiple data) processing. NVIDIA, a leading provider, claims that its solution delivers a fivefold speedup while reducing costs by 60%.
For highly computational applications, such as financial trading and specialized voice and data analytics, FPGAs may be a good solution. FPGAs can be programmed after they are manufactured to meet very specific workload requirements; the array of gates that makes up an FPGA can be programmed to run a specific algorithm.
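The SIMD idea described above can be illustrated in miniature with NumPy, whose vectorized operations apply a single instruction across many data elements at once (a sketch only, using made-up trading numbers; GPU frameworks apply the same principle across thousands of hardware lanes):

```python
import numpy as np

# SIMD (single instruction, multiple data): one operation is applied
# across an entire array of values rather than element by element.
prices = np.array([101.5, 99.2, 103.8, 97.4])   # hypothetical trading data
weights = np.array([0.4, 0.3, 0.2, 0.1])        # hypothetical portfolio weights

# A single vectorized multiply-and-sum replaces an explicit Python loop;
# on a GPU, many such lanes would execute in parallel.
weighted_value = float(np.dot(prices, weights))

# Equivalent scalar loop, shown for comparison:
scalar_value = sum(p * w for p, w in zip(prices, weights))

assert abs(weighted_value - scalar_value) < 1e-9
```

The vectorized and scalar versions compute the same result; the difference is that the former expresses the work as one data-parallel operation, which is exactly the shape of workload GPUs accelerate.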
How Connectivity Solutions Enable A More Effective Data Center Solution
TE has a deep portfolio of products to support increasing speeds and efficiencies. In addition, we have worked with market leaders and developed industry-leading innovations for several critical components:
Sliver 2.0 Connectors (SFF-TA-1002)
Our Sliver 2.0 internal I/O connectors have been named the industry standard for flash storage connectors – SFF-TA-1002 – thanks to their high performance, density, flexibility and robustness. This modular design and system standardization offer cost-saving opportunities. These multi-lane, super-slim connectors are protocol-agnostic and rated up to 112G PAM-4 (56G NRZ). They meet all current protocol performance requirements for PCIe Gen-3/-4 (8G & 16G), SAS-3/-4 (6G, 12G, & 24G), Ethernet protocols (10G & 25G per lane), InfiniBand (28G), and are expected to meet performance for IEEE & OIF 56 Gbps, PCIe Gen-5, and SAS-5. They have been selected for multiple industry standards including Gen-Z, OCP, COBO and EDSFF.
QSFP-DD & OSFP
TE is actively engaged in developing standards and supports connectors, cages and DACs for two new interfaces driving an increase in uplink performance: OSFP and QSFP-DD. OSFP is a thermally enhanced, 8-lane 400G interface whose thermal performance suits a wide range of use cases, from TOR switches all the way to coherent DCI links. QSFP-DD is innovative because it reduces the risk of data rate upgrades within a data center by offering backwards compatibility with QSFP+ and QSFP28 modules.
STRADA Whisper DPO & Cabled
As the data center industry moves away from traditional backplanes, TE has expanded our STRADA Whisper connector portfolio to include direct plug orthogonal (DPO) and cabled solutions that handle speeds up to 112G. The robust mechanical design of our STRADA Whisper DPO connectors helps mitigate the mechanical challenges of DPO, so the cost and airflow benefits of removing a midplane can be realized while keeping the same mating interface and electrical performance as our proven STRADA Whisper connector designs. Our cabled STRADA Whisper connectors increase performance and enable low-loss communication across long channels.
Socket P / LGA 3647
Designed to meet the next-generation designs of Intel’s latest CPUs, our award-winning LGA 3647 socket is the first to feature an innovative two-piece design that reduces warpage and offers better coplanarity and connection reliability for large processors.
OCP Power Cable Assemblies
Our Open Compute Project (OCP) bus bar connectors and cable assemblies offer a plug-and-play power solution designed to meet the OCP distribution architecture, providing a standardized platform for simple system designs. They are fully compatible with OCP Open Rack V1 specifications and forward compatible with the Open Rack V2 specifications.
WHY TE?
These product offerings are the result of years of research and expertise in high-performance requirements. We have broad knowledge of the industry and participate on leading standards boards to contribute to their development. Our history in design engineering, global manufacturing prowess, materials science expertise and signal integrity analysis all contribute to the value of partnering with us. At TE, we view our role as that of a trusted advisor who brings value to our customers through innovative and customized solutions.
UNDERSTANDING DATA CENTER TRANSITIONS
At the end of the day, the world is rapidly changing. Enterprises and suppliers need to be agile, rethinking business models and architectural shifts. It’s about reimagining the business. The world of data and the need for speed and performance are fundamentally changing everything from delivery channels and operations to service and customer care. Meeting the needs of a hyperscale world and partnering with the best innovators and technology suppliers is an important step in redefining the future of your business. TE is here to partner with you.