The evolution of the data center has introduced a growing number of technologies, solutions and deployment methods to the enterprise. Navigating this complexity can be challenging, which is why CommScope is eager to share our extensive experience around data center best practices through this eBook. The guidance and information presented in the following chapters are built upon decades of practical application in the field and innovation in the lab. They include tips, answers and insights to help demystify the technology, untangle the complexity and accelerate time to market so you can identify the challenges — and opportunities — in your own data center.
This is why CommScope developed this eBook to provide a holistic overview of the data center and share guidance on how to navigate through the ever-increasing challenges faced by data center managers today.
With stored data predicted to continue growing at 40 percent per year, and billions of Internet of Things devices generating data to be processed, analyzed and stored, the data center infrastructure must be able to support this ever-expanding workload. While connectivity that can meet business objectives today and in the future is absolutely essential and must be done properly, it is only the first step. Driving the most value out of the data center is critical. As such, efficiency must be maximized — not just power efficiency but the overall efficiency of the facility, including whether assets are being used to their fullest potential.
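The 40 percent annual growth figure above compounds quickly, which is worth sanity-checking. A minimal sketch (the time horizons chosen are illustrative, not from the text):

```python
# Sketch: compound data growth at 40% per year (rate from the text;
# the year horizons below are illustrative assumptions).
def capacity_multiplier(annual_growth: float, years: int) -> float:
    """Return how many times larger the stored-data volume becomes."""
    return (1 + annual_growth) ** years

for years in (1, 2, 5):
    print(f"After {years} year(s): {capacity_multiplier(0.40, years):.1f}x")
```

At 40 percent per year, stored data roughly doubles every two years and grows more than fivefold in five — which is why infrastructure planned only for today's workload falls behind so quickly.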
For those with responsibility for planning and designing data centers, there have never been as many options as there are today. From fundamental choices such as whether to build, lease or move to the Cloud, to detailed technical choices such as what types of optical fiber to deploy, there are countless decisions to be made. This is perhaps why we see the rich variation in how data centers are being implemented today. This eBook is intended to help advise those charged with building an enterprise data center on how to navigate through all of these decisions — with an in-depth emphasis on choices related to the physical layer infrastructure. Since all of the upper layers in the “stack,” from the network to the applications themselves, rely on the physical layer, it is essential that it is designed to meet the needs of the data center both now and in the future.
A holistic view of the data center and its infrastructure challenges
CommScope Quarterly Standards Advisor
Data Center Cabling Design Fundamentals: Telecommunication Cabling Infrastructure Requirements
Data Center Application Standards Reference Guide - Networking and Storage
InfiniBand (InfiniBand Trade Association) is used primarily for high-performance computing applications, with a roadmap that includes options for up to 600 Gb/s.
Fibre Channel covers storage area networks (SANs), with published standards for up to 128 Gb/s and a roadmap out to 1 Tb/s.
IEEE 802.3 (Ethernet) standards have been particularly active, with draft standards currently underway for applications up to 400 Gb/s.
Applications standards also define the distance that an application can operate over a given media type. For example, under IEEE 802.3an, 10GBASE-T can operate at up to 100 meters over Category 6A cabling.
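The distance limits described above amount to a lookup of (application, media type) pairs. A minimal sketch of that idea — the 10GBASE-T/Category 6A entry at 100 meters is from IEEE 802.3an as cited in the text; the other entry is an illustrative placeholder, not a quoted limit:

```python
# Sketch of how application standards bound reach per media type.
# Only the 10GBASE-T / Cat 6A limit is taken from the text (IEEE 802.3an);
# the 10GBASE-SR / OM4 entry is an illustrative assumption.
MAX_REACH_M = {
    ("10GBASE-T", "Category 6A"): 100,   # IEEE 802.3an
    ("10GBASE-SR", "OM4"): 400,          # illustrative entry
}

def supported(application: str, media: str, link_length_m: float) -> bool:
    """True if the planned link length is within the standard's reach."""
    limit = MAX_REACH_M.get((application, media))
    return limit is not None and link_length_m <= limit

print(supported("10GBASE-T", "Category 6A", 90))   # True
print(supported("10GBASE-T", "Category 6A", 120))  # False
```

In practice, designers consult the application standards tables directly, but the decision has exactly this shape: each application/media combination carries its own maximum channel length.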
Applications standards define the application that will run on the cabling infrastructure. Three applications standards are the most commonly deployed in data centers:
In addition to EN 50173-5, CENELEC has also developed the EN 50600-2-4 standard, “Telecommunications cabling infrastructure”. It focuses primarily on design requirements for the different data center availability classes, with a strong emphasis on migration and growth.
Each of these groups has a general standard which defines structured cabling, as well as a standard specifically for data center applications to reflect the need for higher speeds, increased density and an array of architectures. While there are differences between these standards, there is agreement around the minimum recommended cabling categories and connector types.
Cabling standards provide more detail around the physical media and define the channel that supports the applications. There are three main cabling standard bodies.
Data Center Standards
Keep up with the standards to future-proof the data center
Since data center cabling infrastructure will likely need to support multiple generations of equipment and speeds in the future, keeping up with the latest standards developments is critical. For new builds, it is important to deploy the highest bandwidth cabling as per the data center cabling standards.
There are two main types of standards relevant to data center cabling infrastructure: applications standards and cabling standards.
Data centers and their contents must adhere to a wide range of standards, ranging from local building codes to guidelines from ASHRAE on cooling to a number of requirements placed on the IT equipment. There are also a number of standards related to the structured cabling infrastructure that serves as the platform for IT equipment in the data center. Given the relentless growth in data traffic and the need to provide high-bandwidth, low-latency connections, there has been a tremendous amount of activity within the standards bodies to define higher speeds. It is important to keep up with the latest developments to ensure the cabling infrastructure can support these higher speeds with minimal disruption.
for higher speeds
How the standards define data center cabling
Setting the standards
Multitenant Data Centers
By outsourcing data center services instead of building, hosting, maintaining and upgrading them themselves, MTDC tenants can realize significant OpEx and CapEx savings. Most companies are not in the business of building and operating data centers. The expertise and efficiencies gained over multiple builds and design iterations have enabled MTDC operators to optimize their designs and operations; they can not only build a data center more cost-effectively but also operate it more cost-effectively. Building a traditional data center is a significant capital expense for enterprises; by leasing the data center to the client, MTDC operators offer conversion from CapEx to OpEx, along with savings from tenant improvements or asset amortization. Enabling direct connection between enterprises, vendors, content providers and cloud providers in the same facility eliminates the need for metro/WAN connections that carry backhaul and bandwidth charges. MTDCs offer clients the ability to scale as they grow and to deploy assets on a just-in-time basis. Most leases run from three to 15 years, which gives customers the ability to dynamically manage their business rather than trying to over-plan and build a traditional data center that is an up-to-30-year depreciating asset.
MTDCs usually provide their own technicians to maintain the infrastructure and ensure that hosted functions operate at peak efficiency at all times. MTDC operators provide SLAs to tenant clients to ensure commitment to uptime and operational parameters. The MTDC operators typically offer 2N, N+1, N and hybrid mesh solutions for power redundancy with multiple POPs (points of presence)/POEs (points of entrance), as well as multiple metro/WAN connectivity providers to provide redundancies that increase reliability. This enables clients to balance their redundancy/reliability needs against their cost options. Some clients may require lower levels of reliability for certain applications, such as deploying a lab environment; MTDCs can match the reliability requirements to specific user requirements.
An MTDC offers multiple levels of security against external threats plus faster, more thorough recovery from disaster situations. The initial layer of security is at the entry points of the facility or campus, which are usually surrounded by high steel fences, gates and bollards, and equipped with badge or biometric readers and security personnel. The facilities themselves are designed to restrict accessibility while maintaining a discreet appearance. Inside, there are security guards, restricted access and mantraps that are designed to slow and restrict entry. Only authorized personnel are allowed entry to designated areas via badge or biometric access. In addition, the entire campus is under continuous monitoring via security cameras, and may often be subject to random security patrols.
Direct connectivity to service providers, content providers, cloud providers, high-frequency traders, financial transaction and peering partners co-located at the same MTDC can significantly reduce latency.
MTDC infrastructure makes advanced technology such as cloud computing and virtualized data centers available to small- and mid-sized businesses while also allowing easy expandability as the business grows.
Pick a key advantage to learn more
service for tenants working onsite, since data centers are often constructed with reinforced materials that make IBW coverage difficult. Supporting all mobile operators ensures that every tenant has coverage, maintaining productivity and efficiency while onsite.
Effective in-building wireless (IBW)
that supports the tenant’s differentiated services and allows visibility into the enterprise.
Physical layer management
to readily expand capacity and functionality under the same roof to meet increasing data center demand as the tenant’s business grows. This includes space, power and bandwidth scalability, and also the ability to scale down should there be a shift in public cloud utilization.
to cloud and content providers. Today’s and tomorrow’s data centers are and will continue to be connected with content and cloud providers in an effort to support internal and external customers. The ability to have direct access to these providers improves latency and cost objectives.
that performs optimally and deploys quickly, allowing for fast, simple changes.
High-quality, robust mechanical, electrical, and data transport infrastructure
When designing the cabling infrastructure for deployment in an MTDC, it is critical that the infrastructure be able to scale up and down, and adapt to technology changes. It must be able to integrate peer, cloud and hybrid strategies, and must also ensure that adequate space is allocated for horizontal and vertical cable management so that moves, adds and changes can be carried out easily. There is often the temptation to fully load server and switch cabinets without consideration of cable management. While this may reduce the amount of leased rack space, it also greatly increases the risk of downtime due to manual error. Proper cabling design, along with the use of an Automated Infrastructure Management (AIM) system, can reduce this risk.
The best MTDC providers help clients through all stages of the project, from needs assessment through migration to operation. They have the core competencies to understand, meet and exceed clients' requirements.
With over 130 million square feet of white space, multitenant data centers (MTDCs) are one of the fastest-growing segments in the data center industry, expanding at a rate of 16 percent annually. MTDCs enable enterprises and service providers to outsource their data center facilities. By leasing third-party data center white space, enterprises can remain focused on their core business while enjoying optimal data center availability, reliability and cost control.
Factors to consider when choosing an MTDC
Freeing Enterprises to focus on their core business
A cross-connect joins cabling runs, subsystems and equipment using patch cords or jumpers that attach to connecting hardware at each end. The advantage of a cross-connect is that there is no need to disturb the electronic ports or backbone cabling in order to make a connection. Additionally, growth requirements can be designed in more easily thanks to the cross-connect's superior cable management; flexibility increases significantly through the “any-to-any” concept (every piece of equipment in the data center can be connected to any other, independent of location); and operations benefit because moves, adds and changes can all be performed in one location (the cross-connect). Although cross-connects offer many advantages, the concept is more expensive to implement because it requires more cabling.
An interconnect brings a patch cord directly from the electronics port to the backbone cabling. This solution requires fewer components and is therefore less expensive; however, it reduces flexibility and adds risk, as users must directly access the electronics ports in order to make the connection. CommScope generally recommends a cross-connect for maximum flexibility and operational efficiency in the data center.
Although cabling is utilized more efficiently in a top-of-rack design, switch costs typically increase, as does the cost of under-utilized ports. ToR switching can be difficult to manage in large deployments, and there is also the potential for overheating of LAN switch gear in server racks. As a result, some designers deploy ToR switches in an MoR or EoR arrangement to better utilize switch ports and reduce the overall number of switches used.
Top-of-rack (ToR) switching typically consists of two or more switches in each server cabinet, as shown below. This architecture can be a good choice for dense one-rack unit server environments. The switches are placed at the top of the rack. All servers in the rack are cabled to both switches for redundancy. The ToR switches have uplinks to the next layer of switching. ToR significantly simplifies cable management and minimizes cable containment requirements. This approach also provides fast port-to-port switching for servers within the rack and predictable oversubscription of the uplink.
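The "predictable oversubscription of the uplink" noted above is a simple ratio of downstream server bandwidth to upstream uplink bandwidth. A minimal sketch — the port counts and speeds below are assumed example values, not figures from the text:

```python
# Sketch: uplink oversubscription for a ToR switch. The 48x25G server
# ports and 4x100G uplinks are assumed example values.
def oversubscription(server_ports: int, server_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    """Ratio of total server-facing bandwidth to total uplink bandwidth."""
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)

# 48 x 25G server ports fed by 4 x 100G uplinks -> 3:1
ratio = oversubscription(48, 25, 4, 100)
print(f"{ratio:.0f}:1")
```

Because every server in the rack homes to the same pair of switches, this ratio is fixed at design time — which is exactly what makes ToR oversubscription predictable.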
In certain scenarios, EoR switching can provide performance advantages when the LAN ports of two servers that exchange large volumes of information are placed on the same EoR switch to take advantage of the low latency of port-to-port switching. A potential disadvantage of EoR switching is the need to run cable back to the EoR switch. Assuming every server is connected to redundant switches, this cabling can exceed what is required in top-of-rack architecture.
Zoned architecture consists of distributed switching resources. As shown below, the switches can be distributed in an end-of-row (EoR) or middle-of-row (MoR) location. In these cases, chassis-based switches are typically used to support multiple server cabinets. This solution is recommended by the ANSI/TIA-942 Data Center Standards and is very scalable, repeatable and predictable. Zoned architecture is usually the most cost-effective, providing the highest level of switch and port utilization while minimizing cabling costs.
This provides very efficient utilization of switch ports and makes it easier to manage and add components. The centralized architecture works well for smaller data centers but does not scale up well, which makes it difficult to support expansions. In larger data centers, the high number of extended-length cable runs required causes congestion in the pathways and cabinets, and increases cost. While some larger data centers may use zoned or top-of-rack topologies for LAN traffic, they may also utilize a centralized architecture for the SAN environments, where the cost of SAN switch ports is high and port utilization is important.
The centralized model is an appropriate architecture for smaller data centers (under 5,000 square feet). As shown, there are separate LAN/SAN environments and each one has home run cabling that goes to each of the server cabinets and zones. Each server is effectively cabled back to the core switches, which are centralized in the main distribution area.
TOP OF RACK
Data Center Architectures
Whether it's a new installation or an upgrade of a legacy system
Anticipated growth of the data center
The size of the current data center
Key decision factors in selecting a data center architecture include:
Pick an architecture type to learn more
Here we discuss the three main data center architectures designers can choose from. There is no one right choice for all situations — each design has its advantages and trade-offs. In fact, some larger data centers will often deploy two or even all three of these architectures in the same facility.
With newer technologies such as 25/40/100GbE, 40G FCoE and 16G and 32G Fibre Channel, the bandwidth, distance and connection limitations are more stringent than with lower-speed legacy systems. In planning the data center’s LAN/SAN environment, designers must understand the limitations of each application being deployed and select an architecture that will not only support current applications but also have the ability to migrate to higher-speed future applications.
There are two methods typically used to connect data center electronics via structured cabling: cross-connect and interconnect.
Data center networking architecture — the layout of the cabling infrastructure and the way servers are connected to switches — must strike a balance between reliability, performance, agility, scalability and cost. To optimize the data center investment, the architecture must also offer the ability to support both current and future applications and speeds.
Architecture for a higher-speed future
Data center equipment connection methods
Creating a blueprint for better data center performance
Fabric Networks: Designing your network for the future — from 10G through 400G and beyond
Image Courtesy of the Ethernet Alliance
Tomorrow's possible interfaces
The design of high-capacity links is becoming more complex as both the number of links and the link speeds increase. Providing more data center capacity means pushing the limits of existing media and communication channel technologies. As shown below, the Ethernet Alliance roadmap shows existing application standards as well as predicted application rates beyond 1 terabit per second. Complexity will grow further as applications move from duplex transmission to parallel transmission. With the advent of new technologies such as shortwave WDM (SWDM), wideband multimode fiber (WBMMF), BiDi transmission, CWDM and more efficient line coding, it is anticipated that the transition to parallel optics will be delayed.
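The pull toward parallel transmission falls straight out of the arithmetic: the number of lanes a link needs is the application rate divided by the per-lane signaling rate. A minimal sketch, using the lane rates that appear in the MSA descriptions elsewhere in this chapter:

```python
import math

# Sketch: why higher application rates push toward parallel lanes.
# Lane rates (25G NRZ, 50G and 100G PAM4) are those discussed in this
# chapter's MSA examples; 400G is the target rate under discussion.
def lanes_needed(app_rate_gbps: int, lane_rate_gbps: int) -> int:
    """Minimum number of electrical/optical lanes for a given rate."""
    return math.ceil(app_rate_gbps / lane_rate_gbps)

for lane in (25, 50, 100):
    print(f"400G over {lane}G lanes: {lanes_needed(400, lane)} lanes")
```

The results (16, 8 and 4 lanes) map directly onto 400GBASE-SR16, the FR8/LR8 variants and 400GBASE-DR4 — and show why faster lane signaling (or more wavelengths per fiber) keeps fiber counts, and hence cabling, manageable.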
Possible Future Speed
Speed in Development
High Speed Migration
The combination of SWDM and WBMMF provides the opportunity to extend the use of multimode technology, which continues to be the most prevalent fiber technology deployed in data centers. Engineered solutions make complex fabric network designs easy to design, implement and manage. Pre-terminated high-performance systems support next-generation network media and duplex and multifiber modular applications, while reducing deployment management time and expense.
The fabric has inherent redundancy, with multiple switching resources interconnected across the data center to help ensure better application availability. These meshed network designs can be much more cost-effective to deploy and scale when compared to very large, traditional switching platforms.
These fabric networks can take many forms, from fabric extensions in a top-of-rack deployment, to fabric at the horizontal or intermediate distribution area, to a centralized architecture. In all cases, consideration must be given to how the physical layer infrastructure is designed and implemented to ensure the switch fabric can scale easily and efficiently.
In response to the demand for lower costs and higher capacities, new fabric network-based systems that support cloud compute and storage systems are becoming the architecture of choice for today’s data centers. The performance of the network fabric is well suited to establishing universal cloud services, enabling any-to-any connectivity with predictable capacity and lower latency.
Future-ready network fabric technology
Redesigning data center connectivity for a higher-speed future
To accommodate the rapid growth of cloud-based storage and compute services, traditional enterprise data centers are evolving, adapting their current architectures to accommodate new, agile, cloud-based designs. These new enterprise architectures are different from the traditional three-layer switching topologies, resembling "warehouse scale" facilities and designed to support many different enterprise applications. Data center designers are using leaf-spine architecture to achieve an optimized path for server-to-server communication that can accommodate additional nodes as well as higher line rates as the network grows. The meshed connections between leaf and spine switches — often referred to as the network “fabric” — allow applications on any compute and storage device to work together in a predictable, scalable way regardless of their physical location within the data center.
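Sizing a leaf-spine fabric is largely an exercise in port accounting: each leaf reserves some ports as spine uplinks, and the rest face servers. A minimal sketch — all port counts below are assumed example values, not figures from the text:

```python
# Sketch: port accounting for a two-tier leaf-spine fabric.
# All port counts are assumed example values.
def fabric_server_ports(leaves: int, ports_per_leaf: int,
                        uplinks_per_leaf: int) -> int:
    """Server-facing ports left after each leaf reserves spine uplinks."""
    return leaves * (ports_per_leaf - uplinks_per_leaf)

# 16 leaves of 32 ports, each reserving 8 uplinks -> 384 server ports
print(fabric_server_ports(16, 32, 8))
```

Adding capacity means adding leaves (more server ports) or spines (more cross-sectional bandwidth) without redesigning the topology — the scalability property the leaf-spine "fabric" is chosen for.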
On-board optics (OBO), promoted by the Consortium for On-Board Optics (COBO): Moves the electrical/optical conversion traditionally performed by faceplate-pluggable transceivers onto the host board, meaning that the bandwidth density at the faceplate can be dramatically increased. The data applications to be supported by OBO have yet to be defined, but this technology is primarily targeted at data rates of 400Gb/s, 800Gb/s and beyond.
400GBASE-SR16 parallel MMF (16x25G NRZ)
400GBASE-FR8/LR8 duplex SMF (8x50G PAM4 WDM)
400GBASE-DR4 parallel SMF (4x100G PAM4)
CFP8 – “C Form-Factor Pluggable”. Primarily aimed at supporting 400Gb/s, with a claimed path to 800Gb/s in the future. Designed to support:
400GBASE-DR4 parallel SMF (4x100G PAM4)
400GBASE-SR8 parallel MMF (8x50G PAM4)
400GBASE-FR4 duplex SMF (4x100G PAM4 WDM)
400GBASE-FR8/LR8 duplex SMF (8x50G PAM4 WDM)
2x200GBASE-FR4 parallel SMF
2x100GBASE-CWDM4 parallel SMF
OSFP – “Octal Small Form Factor Pluggable”. Targeting data rates of 400Gb/s, the OSFP footprint is not backward compatible with existing modules. Designed to support:
QSFP-DD – “Quad Small Form Factor Pluggable – Double Density”. The smallest 400Gb/s module, it provides backward compatibility with 40GbE and 100GbE QSFP modules and will support Ethernet, Fibre Channel or InfiniBand protocols. Designed to support 200Gb/s or an aggregate of 400Gb/s, using 25Gb/s NRZ or 50Gb/s PAM4 modulation per lane.
Image Courtesy of Finisar
Example of an optical transceiver
Images Courtesy of the Ethernet Alliance
Fortunately, each of the data center cabling standards (TIA-942, ISO/IEC 11801-5 and CENELEC EN 50173-5) has standardized on two optical connectors for use in the data center: the LC for simplex or duplex applications and the MPO for applications requiring more than two fibers. This has simplified fiber connectivity, as the MSAs relevant in the data center environment also make use of the LC and MPO connectors. And while the standardization of connectors has helped simplify cabling, it has also become very important to provide flexible, agile connectivity that can accommodate ever-increasing speeds and the higher densities being driven by equipment faceplates.
The clear trend in the development of new MSAs has been toward both higher speeds and increased densities. Higher speeds are the result of new applications standards that specify higher line rates. Higher densities have largely been driven by technology advances that enable transceivers to consume less power, which allows for smaller packaging. As shown above, the physically larger MSAs are designed to accommodate higher-power transceivers, while reduced-power transceivers can use smaller MSAs, allowing more ports and higher-density communication hardware.
A networking technology may come to market with multiple choices or generations of optical transceivers. The market will eventually identify the winning solution based on cost, size, power consumption, vendor support and other factors.
The optical transceiver MSA environment is very dynamic, with numerous MSAs — too many to list in this publication. These MSAs cover everything from form factor, application (standard, prestandard or proprietary), maximum power consumption, fiber connector type, strand count, wavelength and cable reach. Examples of current and future MSAs are shown below:
The phenomenal growth in data, voice and video has triggered the need for higher and higher speeds in the data center and across data centers. This has driven the standards bodies to develop higher application speeds, which in turn have driven the need for new MSAs. Per the latest version of the Ethernet Roadmap, there are currently seven new applications in progress, most of which involve fiber optics. There are now many different MSAs which reflect the variety of applications we see in the data center:
The solution is a multisource agreement (MSA) — an agreement among multiple manufacturers to make equipment consistent and interchangeable by defining a common physical form for various devices and components. In the case of data center connectivity, there are MSAs that cover both the specification and implementation of the optical transceivers made by various manufacturers.
The data center is a complex environment, comprising a wide range of equipment and technology manufactured by many different companies. Ever-increasing bandwidth and line rates have led to optical fiber being the preferred technology to enable higher speeds. To ensure proper operation and maximum efficiency of the data center networks, optical transceivers of the same type must be interchangeable and interoperable so that replacements and upgrades can be performed quickly and easily, without the need to replace or modify other network equipment.
Implications for fiber cabling infrastructure design
Examples of optical MSAs
Bringing options to a changing environment
• Retain legacy application support of OM4
• Increase capacity to > 100 Gb/s per fiber
• Reduce fiber count by 4x
• Boost array cabling capacity for parallel applications
• Enable Ethernet: 40G-SR, 100G-SR, 200G-SR, 400G-SR4
• Enable Fibre Channel: 128G-SWDM, 256GFC-SWDM
• Extend MMF utility as a universal data comm. medium
Fabric networks: Designing your network for the future From 10G through 400G and beyond
Wideband Multimode Fiber - What is it and why does it make sense?
Goals and benefits
Wave Division Multiplexing
Fiber connectors pull it all together
Fiber connectors have evolved along with fiber-optic cabling, driven by increasing fiber density. The duplex LC connector emerged during the early 2000s as the predominant two-fiber type and remains so today. While the evolution of the duplex connector was underway, array connectors (parallel optics) were also emerging. First deployed in public networks, the multifiber push-on (MPO) connector has become a preferred choice for rapidly deploying cabling into data centers. The compact form of the MPO allows 12 or more fibers to be terminated in a compact plug, occupying the same space as a duplex LC. The MPO’s high density enables installation of preterminated, high-strand-count cables, while eliminating the time-consuming process of field installing connectors on site.
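The density advantage described above is straightforward to quantify: a 12-fiber MPO occupies the same footprint as a duplex (2-fiber) LC. A minimal sketch of that ratio:

```python
# Sketch: connector density gain of MPO over duplex LC, per the text's
# note that a 12-fiber MPO occupies the footprint of one duplex LC.
def density_gain(mpo_fibers: int = 12, lc_fibers: int = 2) -> float:
    """Fibers terminated per unit of faceplate space, MPO vs duplex LC."""
    return mpo_fibers / lc_fibers

print(f"{density_gain():.0f}x more fibers in the same footprint")
```

With higher-count MPO variants (24 fibers and up), the gain grows accordingly — which is why preterminated MPO trunks dominate high-strand-count data center deployments.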
Singlemode fiber also enables duplex transmission at higher speeds because it is able to transport multiple wavelengths, thus reducing fiber counts. It is anticipated that some of the 200GbE and 400GbE applications will utilize four-pair parallel optics over SMF, taking advantage of the lower overall system cost that parallel optics can offer. The PSM4 multisource agreement (MSA) also defines a four-pair transceiver for 100G applications.
Very large data centers as well as hyperscale data centers typically deploy SMF to connect multiple halls and extended equipment zones using a centralized cross-connects architecture at the MDA. They typically use a dedicated optical distribution frame (ODF). Deploying an ODF can help to ensure that cables are kept to an optimum length for transmission, while equipment zones and other data halls can be quickly and efficiently patched to one another with the minimum disruption to service and networking equipment.
Designed with a much narrower core, singlemode fiber (SMF) is the technology of choice for longer reach applications in the data center, such as extended runs in the fabric between leaf and spine switches, spine and routers, and into the transport network to connect data centers in different locations. SMF provides higher bandwidth and does not have the modal dispersion limitations inherent in MMF. For this reason, SMF is used in applications where support for higher and next-generation bandwidths can be absolutely guaranteed to be supported. This makes it a perfect media of choice for hyperscale and service provider data center owners to deploy.
OM3 and OM4 provide very high modal bandwidth at 850 nm, the predominant wavelength that can be efficiently supported by VCSEL transmitters. To support an increase in performance over a single pair of multimode fibers, additional wavelengths need to be transmitted alongside 850 nm, achieved via a new technology: shortwave wavelength division multiplexing (SWDM). Because the modal bandwidth of OM3 and OM4 fibers was specified for laser operation at 850 nm only, a new optical fiber specification was required. Many data center managers are now considering wideband multimode fiber (WBMMF), which optimizes the reach of SWDM transmission and delivers four times more information over the same number of fiber strands at practical distances. Optimized to support the additional wavelengths required for SWDM operation (in the 850 nm to 950 nm range), WBMMF ensures not only more efficient support for future applications across the data center fabric, but also full compatibility with legacy applications, because it remains fully compliant with OM4 specifications.
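The "four times more information" claim above follows directly from the wavelength count. A minimal sketch — the four-wavelength SWDM grid is commonly cited for SWDM4, and the 25G-per-wavelength lane rate is an assumption matching 100G operation:

```python
# Sketch: aggregate capacity of SWDM over WBMMF/OM5.
# The 850/880/910/940 nm grid is the commonly cited SWDM4 grid;
# 25G per wavelength is an assumed lane rate (100G operation).
wavelengths_nm = [850, 880, 910, 940]
lane_gbps = 25

duplex_pair_gbps = len(wavelengths_nm) * lane_gbps
print(f"{duplex_pair_gbps}G over a single duplex fiber pair")
```

Four wavelengths at 25G each carry 100G over one duplex pair: the same capacity that would otherwise require an eight-fiber parallel link at 850 nm only, hence the 4x reduction in fiber count.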
By the middle of 2017, the journey to standardization of WBMMF cabling was complete, with recognition by the ISO/IEC and TIA standards bodies. The OM5 designation was adopted for this new cabled optical fiber category in the 3rd edition of the ISO/IEC 11801 standard. Once again, CommScope led the market in next-generation standards development as well as product availability, and was one of the first manufacturers to deliver a commercially available OM5 end-to-end solution, with the distinctive lime green color that is also recognized by standards bodies. Well ahead of standards ratification, CommScope introduced the LazrSPEED OM5 Wideband solution in 2016, knowing that support for higher data throughput using low-cost optics is exactly what data center managers require to enable next-generation networks today and in the future. And the future of OM5 is very bright indeed: at the end of 2017, the IEEE agreed to initiate a project to define next-generation multimode transmission using shortwave wavelength division multiplexing, the transmission technology that OM5 was designed to support.
Today, MMF is the workhorse media for data centers because it is the lowest-cost way to transport high data rates over the relatively short distances in these environments. MMF has evolved from being optimized for multimegabit-per-second transmission using light-emitting diode (LED) light sources to being optimized to support multigigabit transmission using 850 nm vertical cavity surface emitting laser (VCSEL) sources, which tend to be less expensive than their singlemode counterparts. This leap in performance is reflected in the classifications given by the standards bodies. OM1 and OM2 represented the earlier MMF types with low modal bandwidth and very limited support for higher speed optics. OM3 and OM4 represent the newer, laser-optimized MMFs that are typically installed in data centers today. The following table provides examples of some of the current data center applications and the maximum channel lengths over different fiber types.
Multimode fiber (MMF) continues to be the predominant fiber type deployed in Enterprise Data Centers today. It was initially deployed in telecom networks in the early 1980s. With a light-carrying core diameter about six times larger than singlemode fiber, MMF offered a practical solution to the alignment challenges of efficiently getting light into and out of the cabling.
Singlemode fiber: Supporting longer distances
Introducing wideband multimode fiber (WBMMF)
Multimode fiber - the low-cost platform
Vital connections for today's data centers
The data center is at the core of today’s business, and fiber-optic connectivity is the fabric, carrying vital data to drive critical business processes and providing connectivity to link servers, switches and storage systems. Data center designers have two high-level choices when it comes to fiber types: multimode fiber and singlemode fiber. In this chapter, we’ll discuss the development, deployment and advantages of each fiber type, as well as the connectors that pull it all together.
Optical Distribution Frames
Requires no direct patching at the switch/SAN director
Equipment can be connected regardless of its location
Cabling can be added or changed without disrupting running systems
A proven telecom solution comes to the data center
Pre-cabled ODFs allow fast moves, adds and changes
Take control of data center cabling for optimal performance
Having the right cross-connect solution.
Choosing a different cabling architecture.
Since cross-connect ODFs are optimized for cabling, not for equipment, they are able to solve the two largest data center cable management problems: those caused by application migration towards parallel fiber-optic applications, and those caused by the expected growth of the data center itself. Both of these trends require deploying much more fiber in the data center, resulting in massive patch cord changes in both number and size. ODFs can easily deal with these challenges because they are optimized for cable management, offering bend radius protection for fiber patch cords and over-length storage for efficient use of the ODF even with thousands of patch cords in it. Correctly designed, cross-connect ODFs function very effectively as the single point of distribution for all LAN, SAN and telecommunication services in the data center, delivering best-in-class cable management and reduced operations costs, with these advantages:
ODFs have been available for years, used primarily in telecommunication providers’ central offices where tens of thousands of optical fibers converge at a single location. With similar challenges now facing data center operators, the use of ODFs to manage data center cabling has become an effective option.
In order to meet these challenges today and equip their facilities for future growth, data centers must be designed with optical distribution frames (ODFs) functioning as cross connects in the MDA.
Equipment cabinets designed to accommodate servers may also be equipped with fiber patch panels, but they often provide limited cable management for the patch cords connecting to the active equipment. This scenario may be adequate for equipment connectivity, but it does not provide the cable management needed for cross-connects. A best-in-class cross-connect solution consists of frames or cabinets designed around the fiber patch panels, with patch cord management that accommodates the quantities and types used today and those that will be used in the future. Only when all of this is considered can the data center designer plan the cabling infrastructure properly.
A centralized cross-connect configuration in the main distribution area (MDA) eliminates patching from the core equipment cabinets. All active core ports from LAN/SAN are mirrored in central cross-connect cabinets, resulting in safer operation and simplified design for future growth.
The attempt to address these issues by using high-density patch panels can make the problem worse, if not done correctly. Trying to fit high-density cabling into cabinets that are designed to house active equipment can result in a tangled “spaghetti bowl” of cabling — especially in configurations where cable management is essentially non-existent. The solution to the problem has two parts:
With more and more optical connections to contend with, the challenge becomes how to add optical density to the fiber frame while still maintaining proper accessibility, flexibility and manageability at the lowest possible cost. As data center operators add more fiber-optic cabling, they often face an out-of-control situation in terms of fiber count, density and space, resulting in potentially reduced availability and higher cost of operation.
The ever-growing demand for more bandwidth to accommodate a wide array of new applications in the data center is driving higher fiber counts, as more and more data centers are being designed and built to run high-speed applications for LAN/SAN. These high-speed applications are based on fiber-optic transmission, making fiber-optic cabling the predominant transmission media in the data center now and into the future.
ISO/IEC AIM Document (18598/DIS draft)
imVision Automated Infrastructure Management
Managing Critical Data Center Fiber Connectivity with imVision
imVision. Infrastructure Management. Made Easy.
Automated Infrastructure Management
Generate electronic work orders and automatically monitor the accuracy of implementation of work order tasks
Generate graphic representation of end-to-end connectivity (circuit trace)
Identify and track the physical location of end devices connected to the network
Document connectivity between non-AIM enabled ports and other equipment
Automatically update records when any monitored connections are modified
Automatically detect, document and monitor the presence of network devices
Reduces time-intensive manual processes by generating electronic work orders and enabling guided administration of connectivity changes. This helps to minimize human errors and unplanned network downtime.
AIM can raise alerts when a port is disconnected or connected in an unauthorized location; for example, if someone has moved a server without following approved change management processes.
The precise location of a connectivity problem is documented so the technician doesn’t have to spend time verifying manual documentation or hunting for the location of a problem.
Provides an accurate view of available panel, switch and server ports, which helps address network capacity challenges by eliminating dormant ports, resulting in more efficient planning and reduced CapEx.
AIM systems also improve other aspects of data center operations, including:
Choosing an AIM system that meets the standard
Taking AIM at data center downtime
Speed, density, complexity, quantity — as all of these factors converge, the necessity increases for an Automated Infrastructure Management solution for the data center.
The data center connectivity challenge
Automatically detect and monitor connectivity
The emergence of AIM standards is making intelligent connectivity a mission-critical technology for data centers. Since cabling infrastructure migration often requires replacement of fiber-optic modules, now is the time to upgrade to an AIM-driven intelligent connectivity system.
In early 2017, the ISO/IEC SC25 WG3 group was expected to publish the ISO/IEC 18598 Standard for Automated Infrastructure Management Systems — Requirements, Data Exchange and Application. To meet the key requirements of ISO/IEC 18598, an AIM solution must:
The IT industry has recognized the important role intelligent infrastructure solutions can play in data center management and has established standards for AIM capabilities and functions.
By capturing information about every physical connection in the network and relaying it to higher-level network management systems, the AIM system provides an accurate, real-time view of the physical network connectivity and can issue alarms when an unplanned or unauthorized change occurs. AIM streamlines the provisioning and monitoring of data center connectivity; produces up-to-date reports on the status and capacity of the network infrastructure — and ultimately can reduce data center downtime and mean-time-to-repair through real-time, precision notification of connectivity outages.
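The monitoring behavior described above can be sketched conceptually. The Python below is a hypothetical model of AIM-style connectivity tracking, not CommScope's imVision implementation or API; every name in it is invented for illustration. It shows the core idea: compare each detected connection against the documented state and raise an alarm event on any unplanned change.

```python
from dataclasses import dataclass, field

@dataclass
class PatchField:
    """Hypothetical model of an AIM-monitored patch field."""
    expected: dict = field(default_factory=dict)   # port -> authorized far-end port
    events: list = field(default_factory=list)     # alarm log

    def observe(self, port: str, far_end: str) -> bool:
        """Compare a detected connection against the documented state."""
        planned = self.expected.get(port)
        if far_end != planned:
            self.events.append(f"ALARM: {port} -> {far_end} (expected {planned})")
        return far_end == planned

pf = PatchField(expected={"MDA-01:07": "LEAF-3:24"})
pf.observe("MDA-01:07", "LEAF-3:24")   # authorized connection, no alarm
pf.observe("MDA-01:07", "LEAF-5:02")   # unplanned change logs an alarm event
print(pf.events)
```

A real AIM system does this continuously in hardware at every monitored port, which is what makes real-time notification of connectivity outages possible.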
In its simplest terms, Automated Infrastructure Management (AIM) is an integrated hardware and software platform that manages the entire physical layer. It fully documents the cabling infrastructure, including connected equipment, to provide a complete view of where devices are located and how they are connected.
In today’s fast-evolving data centers, expanding the fiber-optic infrastructure is vital for providing the bandwidth and speed needed to transmit large amounts of data to and from multiple sources. As switches with 40G and 100G ports become commonplace, data center infrastructure becomes more complex — and it is becoming increasingly clear that traditional, manual methods for managing fiber connectivity may not be sufficient. Demand for fast data transmission and efficient network performance has been fueled by requirements to support virtualization, convergence and cloud computing, as well as high-bandwidth applications like streaming video. But while supporting more bandwidth is important, there are additional trends impacting data center fiber infrastructure management:
Point-to-multipoint connections have become commonplace with the advent of 40G and 100G technology, making manual record-keeping nearly impossible.
Increased complexity of cabling topology
The move to heavily meshed leaf-spine network architectures has greatly increased the number of connections, along with the need for consistent and accurate deployment of the connectivity pattern/mesh to support orderly network expansion in the data center.
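The growth in connections from a full-mesh leaf-spine fabric is easy to quantify: every leaf switch connects to every spine switch. A minimal sketch, using hypothetical fabric sizes for illustration:

```python
def leaf_spine_links(leaves: int, spines: int, fibers_per_link: int = 2) -> int:
    """Full-mesh leaf-spine: every leaf connects to every spine.
    Returns the fiber-strand count, assuming duplex (2-fiber) links by default."""
    return leaves * spines * fibers_per_link

# Hypothetical fabric: 16 leaves x 4 spines over duplex links
print(leaf_spine_links(16, 4))  # 128 fiber strands just for the mesh
```

Doubling either tier multiplies the strand count, which is why manual record-keeping breaks down as these fabrics scale.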
Increased complexity of network architecture
Space is at a premium in the data center, which has led to higher densities of fiber ports on equipment and fiber shelves. With increased density comes increased risk of making or removing the wrong connection, potentially causing widespread disruption in network services.
Higher port density
SYSTIMAX® InstaPATCH® 360 Traffic Access Point (TAP) Solution Design Guide (TP-108221-EN)
Fiber performance (Link loss) calculator
Designing for Fiber TAPs
Monitoring equipment capability
Intended application (for example, 8G Fibre Channel or 10G Ethernet)
Length and number of connections within the main and two monitor channels
Loss created by the selected TAP splitter
Using TAPs in high-speed fiber links can be complicated, especially in a do-it-yourself retrofit application. Instead of trial and error, today's best practice is to design and deploy an engineered solution in the data center. Designing TAPs into the data center from the start allows monitoring capability to be added when it is needed in the future, while ensuring the operational links are reliable from day one.
When designing a TAP solution for a particular application, many factors need to be taken into consideration, including:
By diverting network traffic for monitoring, TAPs can introduce additional insertion loss into the network. While industry standards for Ethernet and Fibre Channel are not expressly designed to support the added loss of TAPs, with pre-engineering and the use of high-performance cabling systems it is possible to deploy TAPs and retain useful channel topologies. As shown below, the evolution of higher speed applications includes reduced loss budgets, underscoring the need for low-loss components and engineering guidelines.
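Pre-engineering a tapped link amounts to a loss-budget check: sum the connector, fiber and TAP losses and compare the total against the application's budget. The sketch below uses assumed component values (0.35 dB low-loss MPO connections, 3.0 dB/km multimode attenuation at 850 nm, and an illustrative 1.5 dB budget in the spirit of 40GBASE-SR4); real designs should use the vendor's link-loss calculator and datasheet figures.

```python
def channel_loss_db(connector_losses_db, fiber_km, fiber_atten_db_per_km,
                    tap_loss_db=0.0):
    """Total channel insertion loss: connectors + fiber attenuation + TAP
    pass-through loss. All inputs are assumed illustrative values."""
    return sum(connector_losses_db) + fiber_km * fiber_atten_db_per_km + tap_loss_db

loss = channel_loss_db(
    connector_losses_db=[0.35, 0.35],  # two MPO connections (assumed 0.35 dB each)
    fiber_km=0.10,                     # 100 m of multimode fiber
    fiber_atten_db_per_km=3.0,         # assumed 850 nm attenuation
    tap_loss_db=1.55,                  # 70/30 TAP main-channel loss
)
budget = 1.5  # illustrative application budget (dB)
print(f"total {loss:.2f} dB vs budget {budget} dB -> "
      f"{'OK' if loss <= budget else 'over budget'}")
```

With these assumed values the channel lands at 2.55 dB, well over the 1.5 dB budget, which illustrates why tapped links demand low-loss components, shorter spans, or both.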
Designing a TAP Solution to mitigate insertion loss
Fiber TAP with 70/30 split
TAP modules help managers understand how applications perform, measure that performance, and ensure it meets the required standard. They are also used to meet compliance or legal requirements that oblige a business to deploy reasonable tools to secure the data center network.
Because TAPs continuously pass all traffic running between the endpoint network devices with zero latency — while duplicating that exact same traffic to the monitor ports simultaneously — TAPs are one of the most efficient ways to monitor traffic and network link quality in data center networks.
The need for real-time network traffic monitoring in today's data center has become compelling. Data center network administrators need to gain better visibility of their networks to optimize the performance of mission-critical applications and keep their networks secure. In fiber-optic data center networks, a traffic access point (TAP) is a critical tool for data center monitoring and management. A TAP module can be integrated into the fiber cabling infrastructure to enable network traffic monitoring from the physical layer (layer 1) and above in real time — without interrupting network service.

A TAP module is a compact package of fiber-optic couplers or splitters that passively diverts a fixed percentage of light energy away from the main transport channels to monitor the traffic status or content without disrupting the main channel traffic. The optical couplers or splitters inside a TAP module split the light energy from the input port into two output ports according to a designed split percentage, usually diverting from 10 to 50 percent to the TAP.
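The split percentage maps directly to insertion loss at each output port: a port receiving fraction p of the input power sees an ideal loss of 10·log10(1/p) dB, before any excess loss is added. A minimal sketch for the common 70/30 split:

```python
import math

def split_loss_db(fraction: float) -> float:
    """Ideal insertion loss (dB) at a port receiving `fraction` of the
    input optical power; excess loss of a real splitter is not included."""
    return -10 * math.log10(fraction)

# A 70/30 TAP: 70% continues on the live channel, 30% feeds the monitor port.
live_loss = split_loss_db(0.70)     # ~1.55 dB on the main channel
monitor_loss = split_loss_db(0.30)  # ~5.23 dB on the monitor leg
print(f"live: {live_loss:.2f} dB, monitor: {monitor_loss:.2f} dB")
```

This is why the split ratio is a design trade-off: tapping more light gives the monitoring gear a stronger signal but eats further into the live channel's loss budget.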
with no service interruptions
Real-Time Network Monitoring
Because of the number of solutions required and the agility with which they must be deployed, data center networks all over the world run on CommScope. As an industry leader with decades of expertise and ongoing innovation, CommScope designs and builds the solutions that power the data center in all its forms — always with an eye on collaborative development, competitive cost structures and the relentless growth in demand for capacity. We invite you to contact a CommScope representative to see how we can help you design and build a more connected and efficient data center for your enterprise.
Contact a CommScope expert today
A road map and resources for building an efficient data center
In this eBook, we have explored many of the best practices that are the foundation of the data center model. Beyond the sheer diversity of applications, systems and technologies involved, the data center is also in a constant state of change, bringing new benefits, savings and potential to life.
Even as the data center continues to be the lifeblood of an enterprise, it is undergoing tremendous change as cloud and multitenant options challenge the traditional model. To continue to effectively enable business applications and deliver a wide range of services to an enterprise’s customers and employees, the data center must evolve and adapt to today’s dynamic business and technology environment.