Expectation for Fiber Connectivity: Layer 0

The footprints of cloud data centers continue to increase substantially to accommodate massive numbers of servers and switches. To support sustainable business growth, many Web 2.0 companies, such as Google, Facebook and Microsoft, have decided to deploy 100G Ethernet using single mode optics-based infrastructure in their new data centers.

According to LightCounting and Dell’Oro, 100G transceiver module and switch port shipments this year will outpace last year’s shipments, with 10 times as many being shipped in 2017 vs. 2016. Shipments of 200G/400G switch ports will begin in 2018.

Data Center Architecture and Interconnects

Most intra-rack connectivity has been implemented with DACs (direct-attach cables). As we discussed in our fiber infrastructure deployment blog series, system interconnects with a reach longer than 5 m must use fiber connectivity to achieve the desired bandwidth.

100G, 200G and 400G transceivers for data center applications have already been showcased by various vendors; mass deployment is expected to start in 2018. Based on reach requirements, different multimode and singlemode optical transceivers are being developed with an optimized balance between performance and cost. Examples include the following (a simple selection sketch follows the list):

  • In-room or in-row interconnects with multimode optics or active optical cables (AOCs), with a reach of up to 100 m. (New multimode transceivers, such as 100G-eSR4, paired with OM4/OM5 multimode fiber, can support a maximum reach of up to 300 m for 100G connectivity, which covers most in-room and in-row interconnects.)
  • On-campus interconnects (inside the data center facility), with transceiver types such as PSM4 (parallel singlemode four-channel fiber) or CWDM4/CLR4 (coarse wavelength division multiplexing over duplex singlemode fiber pair) for 500 m reach.
  • On-campus interconnects (between data center buildings), with transceiver types such as PSM4 and CWDM4/CLR4 for a reach of 2 km.
  • Regional data center cluster interconnects, also referred to as data center interconnects (DCIs), using coherent optics (CFP2-ACO and CFP2-DCO) for a reach of over 100 km, or direct modulation modules, such as QSFP28 DWDM ColorZ, for a reach of up to 80 km.
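For illustration, the following Python sketch encodes the reach bands above as a simple selection helper. The thresholds and option names are simplifications drawn from this list, not a formal selection rule, and the function name is hypothetical.

  # Hypothetical helper that maps a required reach to the 100G options listed above.
  def suggest_100g_optics(reach_m):
      if reach_m <= 5:
          return "DAC (direct-attach cable)"
      if reach_m <= 100:
          return "Multimode optics or AOC"
      if reach_m <= 500:
          return "PSM4 or CWDM4/CLR4 (500 m class)"
      if reach_m <= 2_000:
          return "PSM4 or CWDM4/CLR4 (2 km class)"
      if reach_m <= 80_000:
          return "QSFP28 DWDM (e.g. ColorZ, up to 80 km)"
      return "Coherent optics (CFP2-ACO/CFP2-DCO, 100 km and beyond)"

  print(suggest_100g_optics(350))  # -> "PSM4 or CWDM4/CLR4 (500 m class)"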

Multimode Fiber Roadmap to 400G and Beyond

Multimode optics use low-cost VCSELs as the light source. Compared to singlemode transceivers, which utilize technologies such as silicon photonics, VCSELs have some inherent performance disadvantages:

  • Fewer available wavelengths for wavelength division multiplexing
  • Lower per-lane speed than singlemode lasers
  • Fewer advanced modulation options
  • High fiber counts needed to deliver the required bandwidth (see the arithmetic sketch after this list)
  • Shorter reach in multimode fiber (limited by fiber loss and dispersion) compared to singlemode fiber
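To make the fiber-count point concrete, here is a back-of-the-envelope comparison in Python. The lane rates and wavelength counts are illustrative assumptions, not figures from any particular transceiver specification.

  # Illustrative fiber-count arithmetic: parallel lanes vs. WDM over a duplex pair.
  def fibers_needed(total_gbps, lane_gbps, wavelengths_per_fiber=1):
      lanes = -(-total_gbps // lane_gbps)               # ceiling division
      fiber_pairs = -(-lanes // wavelengths_per_fiber)  # lanes multiplexed per fiber pair
      return 2 * fiber_pairs                            # one fiber per direction

  # 400G built from 25G VCSEL lanes, one wavelength per fiber (parallel multimode):
  print(fibers_needed(400, 25))       # 32 fibers
  # 400G built from 50G lanes with 8 wavelengths on a duplex singlemode pair:
  print(fibers_needed(400, 50, 8))    # 2 fibers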

Read full article

Data Centre Audits: What’s the Difference?

There are numerous things to “audit” inside a data center in order to keep it operating at peak performance. When your team starts talking about a data center audit, make sure you know your options.

Depending on your goals, there are several varieties of data center audits that can be conducted. Here is a summary of the most common, and the types of information they can uncover.

Security Audit

A data center audit focusing on physical security will document and ensure that the appropriate procedures and technology are in place to avoid downtime, disasters, unauthorized access and breaches.

In addition to analysing current security processes, a security audit can also provide you with improvement recommendations.

Energy Efficiency/Power Audit

A data center energy efficiency audit helps you pinpoint potential ways to reduce energy usage and utility bills. By taking a close look at power use, the thermal environment and lighting levels, an energy audit can uncover things such as malfunctioning equipment, incorrect HVAC settings and lights being left on in unused/unoccupied spaces.

During a data center audit that focuses on energy efficiency, power usage effectiveness (PUE) can also be calculated by dividing total facility power by IT equipment power. By tracking this number, you can establish benchmarks and determine whether data center performance is improving or declining over time.
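As a quick illustration of the PUE calculation described above, a minimal sketch (the power figures are made-up examples):

  # PUE = total facility power / IT equipment power (per the definition above).
  def pue(total_facility_kw, it_equipment_kw):
      return total_facility_kw / it_equipment_kw

  # Example figures for illustration only:
  print(round(pue(total_facility_kw=1500, it_equipment_kw=1000), 2))  # 1.5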

 

Read full article

DCIM/AIM Webinar – 24th Jan 11AM SAST

DCIM/AIM Software & Hardware Solution – Belden PatchPro®

24th January @ 11am SAST (South African Standard Time)

Join Wolfgang Schröder and Christos Birbilis from Belden to learn more about PatchPro®, Belden’s Data Centre Infrastructure Management (DCIM) software and Automated Infrastructure Management (AIM) hardware solutions.

The PatchPro® Infrastructure DCIM/AIM solution works with PatchPro® hardware to achieve transparency and accuracy in complex, demanding environments. Adapting to any environment – small businesses, large enterprises or sophisticated data centers – PatchPro®’s modular architecture allows you to license only the components you need.

Licensing is based on concurrent users, allowing you to install the software on as many workstations as you like and to grow in the future. It can also be adapted to your specific requirements without programming or expensive consulting.

With PatchPro®, you can:

  • Monitor vital systems for early recognition of potential bottlenecks, such as hotspots, excessive power usage and other critical conditions that could impact business continuity
  • Minimize the effort required to prepare for assessments, delivering data and documents to auditors for a fast signoff
  • Extract business-critical, real-time data from your network and display it in tables, charts or combined dashboards
  • Make sure existing data center capacity is utilized before investing in an expansion
  • Support any topology (star, ring, bus or meshed network structures) or any voltage (low-, medium- or high-voltage [230 V, 400 V, 500 V])
  • Integrate air conditioning, alerting, fire and intrusion detection, facility management and IT server monitoring systems

More than 2150 Clients Worldwide

Core Business:

  • Development, Distribution and Servicing of Technical Software (PatchPro® DCIM-AIM)
  • Building Information Systems (“BIS”)
  • Including Cable Management and IT-Network Planning & Documentation (DCIM/AIM)

Better, Faster, Cheaper Ethernet: The Road From 100G to 800G

Worldwide IP traffic continues to increase immensely in both the enterprise and consumer segments, driven by growing numbers of Internet users and connected devices, faster wireless and fixed broadband access, high-quality video streaming and social networking.

Data centers are expanding globally to support computing, storage and content delivery services for enterprise and consumer users. With higher operational efficiency (CPU utilization), higher scalability, lower costs and lower power consumption per workload, cloud data centers will process 92% of overall data center workloads by 2020; the remaining 8% will be processed by traditional data centers.

According to the Cisco Global Cloud Index 2015-2020, hyperscale data centers will grow from 259 in 2015 to 485 by 2020, representing 47% of all installed data center servers.

Cisco Global Cloud Index (Source: Cisco)

Global annual data center traffic will grow from 6.5 ZB (zettabytes) in 2016 to 15.3 ZB by 2020. The majority of traffic will be generated in cloud data centers; most traffic will occur within the data center.
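For context, those endpoints imply a compound annual growth rate of roughly 24% over the four-year span; a one-line check (the traffic figures are from the text above, the formula is the standard CAGR calculation):

  # Compound annual growth rate implied by the 2016 -> 2020 traffic figures above.
  cagr = (15.3 / 6.5) ** (1 / 4) - 1
  print(f"{cagr:.1%}")  # ~23.9%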

When it comes to supporting cloud business growth, higher performance and more competitive services for the enterprise (computing and collaboration) and consumers (video streaming and social networking), common cloud data center challenges include:

  • Cost efficiency
  • Port density
  • Power density
  • Product availability
  • Reach limit
  • Resilience (disaster recovery)
  • Sustainability
  • System scalability

This is the first in a series of seven blogs that will appear throughout the rest of 2017; in this series, we’ll walk you down the road to 800G Ethernet. Here, we take a close look at Ethernet generations and when they have come – or will come – into play.

Read full article

Achieving Solid Link Performance and Desired Link Distances with Singlemode Fiber

With so many new technologies and products available in the data center market, it is beneficial to plan ahead for potential changes and upgrades. No matter which option you choose, low-loss, high-bandwidth fiber cable used in conjunction with low-loss fiber connectors will provide solid link performance and the desired link distances with the number of connections you require.

As we’ve mentioned in earlier blogs, it is imperative to understand the power budget of new data center architecture, as well as the desired number of connections in each link. The power budget indicates the amount of loss that a link (from the transmitter to the receiver) can tolerate while maintaining an acceptable level of operation.

This blog equips you with singlemode fiber (SMF) link specifications so your fiber connections will have sufficient power margin and achieve the desired link distances. Unlike multimode fiber (MMF), SMF has virtually unlimited modal bandwidth, especially when operating in the zero-dispersion 1300 nm wavelength range, where material dispersion and waveguide dispersion cancel each other out.

Typically, a singlemode laser has a much narrower spectral width, and the actual reach limit isn’t bound by differential modal dispersion (DMD) as it is in multimode fiber.
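A minimal sketch of the power-budget check described above, in Python. The transceiver budget, fiber attenuation and per-connection loss values are illustrative assumptions, not figures from any specific standard or product:

  # Compare channel insertion loss against the transceiver power budget (illustrative values).
  def channel_loss_db(length_km, fiber_db_per_km, connections, loss_per_connection_db):
      return length_km * fiber_db_per_km + connections * loss_per_connection_db

  power_budget_db = 4.0  # assumed transceiver budget (Tx power minus Rx sensitivity)
  loss = channel_loss_db(length_km=2.0, fiber_db_per_km=0.4,
                         connections=4, loss_per_connection_db=0.35)
  print(loss, "dB; within budget:", loss <= power_budget_db)  # 2.2 dB; within budget: True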

Read full article

The Right DC Supply Chain Can Improve Speed to Market

Bringing capacity online faster, without sacrificing reliability or performance, is crucial for hyperscale and colocation data center projects, as providers and tenants continue to require additional equipment to support their growing infrastructure.

We recently reflected on a panel discussion at last year’s CAPRE San Francisco Data Center Summit, which covered the top three things on the minds of data center industry executives today. In order of importance, their concerns were:

  1. Security
  2. Mean time to deploy
  3. Customer satisfaction

While all of these things are significant, No. 2 struck a chord. The ability to deploy data center capacity rapidly and efficiently can mean the difference between going live and going broke! Mean time to deploy is not a concern that just popped up at a conference – rapid, on-time deployment has been a priority in the data center industry from Day One!

How can you reduce the amount of time it takes to “go live” for a tenant (or for your enterprise)? You could try to achieve better speed to market by working harder and faster, hiring more people and putting in longer hours. But there are only so many hours in the day – and only so much money in the budget.

Read full article

Fiber Infrastructure Deployment: Validate Link Budget

Prior to deploying a new fiber cabling infrastructure, or reusing the installed infrastructure, it’s vital to understand the link budget of the selected speed and transceivers in the new architecture, as well as the desired number of connections in each link.

In a new fiber infrastructure deployment, more stringent link budget specifications will require higher-quality passive optical components with reduced channel insertion loss in the link. Typically, low-loss connectors not only allow more connections, but also support longer links with solid performance.

As you get ready for new fiber infrastructure deployment, there are four essential checkpoints that you should keep in mind:

  1. Determine the active equipment I/O interface based on application types
  2. Choose optical link media based on reach and speed
  3. Verify optical fiber standards developed by standards bodies
  4. Validate optical link budget based on link distance and number of connection points

In a series of blogs, we have discussed these checkpoints. This blog covers the final checkpoint (No. 4): validating the optical link budget based on link distances and number of connection points.

 

Validating the Multimode Link Budget

The currently available ultra-low-loss adaptors offer 0.2 dB per connection for MPO-8/12 and 0.35 dB per connection for MPO-24. These enhancements have been achieved through a combination of new materials and polishing methods.
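Using the connector figures quoted above, a simple validation sketch (the fiber attenuation and the overall loss budget are assumptions for illustration, not standard values):

  # Multimode link loss using the ultra-low-loss MPO figures quoted above.
  MPO_LOSS_DB = {"MPO-8/12": 0.2, "MPO-24": 0.35}

  def link_loss_db(length_m, connections, fiber_db_per_km=3.0):
      # fiber_db_per_km is an assumed multimode attenuation value at 850 nm
      fiber_loss = (length_m / 1000) * fiber_db_per_km
      connector_loss = sum(MPO_LOSS_DB[c] for c in connections)
      return fiber_loss + connector_loss

  # 100 m link with two MPO-8/12 connections, checked against an assumed 1.5 dB budget:
  loss = link_loss_db(100, ["MPO-8/12", "MPO-8/12"])
  print(round(loss, 2), "dB; within budget:", loss <= 1.5)  # 0.7 dB; within budget: True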

Read full article

Checkpoint 3: Optical Fiber Standards for Fiber Infrastructure Deployment

To support the expanding cloud ecosystem, optical active component vendors have designed and commercialized new transceiver types under multi-source agreements (MSAs) for different data center types; standards bodies are incorporating these new variants into new standards development.

For example, IEEE 802.3 taskforces are working on 50 Gbps- and 100 Gbps-per-lane technologies for next-generation Ethernet speeds from 50 Gbps to 400 Gbps. Moving from 10 Gbps to 25 Gbps, and then to 50 Gbps and 100 Gbps per lane, creates new challenges in semiconductor integrated circuit design and manufacturing processes, as well as in high-speed data transmission.
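To illustrate the lane arithmetic, the sketch below shows how per-lane rates combine into the aggregate Ethernet speeds mentioned above (a simplified view; actual interface names, encodings and overheads vary):

  # Aggregate Ethernet speed = number of lanes x per-lane rate (simplified).
  def lanes_required(aggregate_gbps, lane_gbps):
      return -(-aggregate_gbps // lane_gbps)  # ceiling division

  for speed in (50, 100, 200, 400):
      print(f"{speed}GbE: {lanes_required(speed, 50)} x 50G lanes "
            f"or {lanes_required(speed, 100)} x 100G lanes")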

As you get ready for a new fiber infrastructure deployment to accommodate these upcoming changes, there are four essential checkpoints that we think you should keep in mind:

  1. Determine the active equipment I/O interface based on application types
  2. Choose optical link media based on reach and speed
  3. Verify optical fiber standards developed by standards bodies
  4. Validate optical link budget based on link distance and number of connection points

In this blog series, which began on March 23, 2017, we are discussing these checkpoints, describing current technology trends and explaining the latest industry standards for data center applications. This blog covers checkpoint No. 3: verifying optical fiber standards developed by standards bodies.

Read full article

Rack Scale Design: “Data-Center-in-a-Box”

The “data-center-in-a-box” concept is becoming a reality as data center operators look for solutions that are easily replicated, scaled and deployed following a just-in-time methodology.

Rack scale design is a modular, efficient design approach that supports this desire for easier-to-manage compute and storage solutions.

What is Rack Scale Design?

Rack scale design solutions serve as the building blocks of a new data center methodology that incorporates a software-defined, hyper-converged management system within a consolidated, single-rack solution. In essence, rack scale design is a design approach that supports hyper-convergence.

Rack scale design is changing the data center environment. Read on to discover how the progression to a hyper-converged, software-defined environment came about; its pros and cons; the effects on data center infrastructure; and where rack scale design solutions are headed.

What is Hyper-Convergence?

Two years ago, the term “hyper-convergence” meant nothing in our industry. By 2019, however, hyper-convergence is expected to be a $5 billion market.

Offering a centralized approach to organizing data center infrastructure, hyper-convergence can collapse compute, storage, virtualization and networking into one SKU, adding a software-defined layer to manage data, software and physical infrastructure. Based on software and/or appliances, or supplied with commodity-based servers, hyper-convergence places compute, storage and networking into one package or “physical container” to create a virtualized data center.

Read full article

Is DC Power Heading for Your Data Center?

Could DC power be an energy-saving game changer in the data center industry?

As power densities expand, colocation and hyperscale data center operators need to take advantage of every opportunity to decrease power consumption. Is it possible that 380V direct current (DC) might be the solution?

To answer that question, it’s important to understand the history behind AC (alternating current) and DC power, the pros and cons of using DC power in data centers, and the potential future of DC power.

Some History: AC vs. DC

The world might be a very different place today if Thomas Edison had won the power war back in the 1800s. In addition to inventing the lightbulb, Edison was the inventor and patent holder of an electrical distribution system based on direct current. He established the first electric utility company in New York in 1882 to supply electricity to 59 customers. By the late 1890s, he had constructed and was operating 100+ direct-current power plants in the Northeast.

His push to deploy DC power plants ended after one of his employees (Nikola Tesla) joined George Westinghouse; together, they developed an AC power distribution system. The AC power plant was significantly more efficient than Edison’s DC plant; AC power plants could distribute power to customers over hundreds of miles, whereas DC power plants needed to be placed within a few miles of homes and offices.

Read full article
