JAYCOR Awarded Tender to Design & Build SANSA’s Micro-Data Centre

JAYCOR is very proud to be an integral part of the construction and deployment of the South African National Space Agency’s (SANSA) new Space Weather Centre in the Western Cape, due to be completed and launched later this year.

SANSA was formed in 2010; however, South Africa’s involvement with space research and activities began much earlier, with stations in the southern parts of Africa supporting early international efforts to observe Earth’s magnetic field.

The research and work carried out at SANSA focuses on space science, engineering and technology that can promote development, build human capital, and provide important national services. Much of this work involves monitoring the Sun, the Earth, and our surrounding environment, and utilizes the collected data to ensure that navigation, communication technology and weather forecasting and warning services function as intended.

Earlier this year, JAYCOR was awarded the tender to design, supply, construct, and commission the Space Weather Centre’s micro-data centre – a central and key component of the hybrid cloud/on-premises solution SANSA requires to deploy the centre’s mission-critical services.

The scope of work for the 36 m² micro-data centre included all racks and related infrastructure, external and rack-mounted UPSs, PDUs, access control, cooling, environmental monitoring, and the fire-suppression systems, with all components managed through Data Centre Infrastructure Management (DCIM) asset management software. JAYCOR’s expertise in connected infrastructure, together with the flexibility and agility to deliver a turnkey solution to SANSA using best-in-class OEM brands and solutions that meet the scope of work, were key factors in its selection as a partner on the project.

As we celebrate Space Exploration Day this week and edge closer to the completion of the project, we take this opportunity to commend and celebrate all the people, past and present, committed to the advancement of space science, and we look forward to playing a small role in the future of South Africa’s space agency and its sciences.

Greg Pokroy

CEO
JAYCOR International

Map, Track and Create Connections in Granular Detail with PatchPro®

DCIM – Data Center Infrastructure Management

A Great Solution with a Strong IT Focus Versus Traditional DCIM

PatchPro® I – Infrastructure Connection Manager

Administer your facility’s assets and power usage effectiveness, coupled with unprecedented visualization of and access to your network architecture, connectivity, and components:

  • Visualize and access racks, inventory and free rack units
    • Front, rear and side views

  • Visualize and work on multiple rows, racks, and pods

  • Add/subtract components – drag and drop servers, switches, PDUs, SFPs, patch panels and more from your component library
  • New components are saved to the database in real time with the object’s unique set of attributes

  • Visualize and connect/disconnect free and used ports on devices and patch panels
    • Green – free port
    • Red – connected port

  • Visualize Connections (GUI visually maps connections)
    • Patches between devices within the rack
    • Cross-connects between devices and cabinets
    • Front and Rear Connections
      • ‘View Connections’ quickly visually maps connections
        (Side A, B or A/B)

  • Mouse-hover over a port to view its connection and the patch object’s unique attributes

  • Select multiple objects and click ConnectView to map the selection’s connections
  • Easily export to Microsoft Visio or Excel (see the sketch below)
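As a rough illustration of what such an export could look like outside the tool – a minimal sketch using the openpyxl library and an invented connection list, not PatchPro’s own export mechanism:

    from openpyxl import Workbook

    # Hypothetical connection records; in a real deployment these would come
    # from the DCIM database rather than being hard-coded.
    connections = [
        ("Server-01 eth0", "Patch Panel A, port 07", "Cat6A, 2 m"),
        ("Server-02 eth0", "Patch Panel A, port 08", "Cat6A, 2 m"),
    ]

    wb = Workbook()
    ws = wb.active
    ws.title = "Connections"
    ws.append(["Side A", "Side B", "Cable"])   # header row
    for row in connections:
        ws.append(row)

    wb.save("connections_export.xlsx")         # open the result in Excel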

  • End-to-end connectivity literally ‘down to the wire’
    • ConnectView maps any object’s (PC, Server, Switch, cable, port etc.) path from start-to-finish in unprecedented detail

  • Create and visualize patches and cross-connects by clicking on and connecting free ports

  • MultiPatch allows users to create multiple patches between objects, manually or through a .csv (bulk) upload – an illustrative file layout is sketched below
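The exact upload format is product-specific; purely as an illustration, a bulk patch list could be expressed in a CSV like the hypothetical one below and parsed with Python’s standard csv module (the column names are invented):

    import csv, io

    # Hypothetical bulk-patch CSV; column names are illustrative only.
    bulk_csv = (
        "side_a_device,side_a_port,side_b_device,side_b_port\n"
        "PatchPanel-A01,7,Switch-01,1\n"
        "PatchPanel-A01,8,Switch-01,2\n"
    )

    for row in csv.DictReader(io.StringIO(bulk_csv)):
        print(f"Patch {row['side_a_device']}:{row['side_a_port']} "
              f"-> {row['side_b_device']}:{row['side_b_port']}")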

  • Individual racks indicate totals for:
    • Energy consumption (W) – PDUs connect via SNMP (see the roll-up sketch below)
    • Weight
    • Free rack units
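Conceptually, the per-rack roll-up is a simple aggregation over asset records; the minimal sketch below uses invented figures, and in practice the energy readings would be polled from the PDUs over SNMP rather than hard-coded:

    # Minimal sketch of per-rack totals from hypothetical asset records.
    rack_assets = [
        {"rack": "R01", "name": "Server-01", "watts": 350, "weight_kg": 22, "ru": 2},
        {"rack": "R01", "name": "Switch-01", "watts": 90,  "weight_kg": 5,  "ru": 1},
        {"rack": "R02", "name": "Server-02", "watts": 400, "weight_kg": 25, "ru": 2},
    ]
    RACK_HEIGHT_RU = 42

    totals = {}
    for asset in rack_assets:
        t = totals.setdefault(asset["rack"], {"watts": 0, "weight_kg": 0, "used_ru": 0})
        t["watts"] += asset["watts"]
        t["weight_kg"] += asset["weight_kg"]
        t["used_ru"] += asset["ru"]

    for rack, t in totals.items():
        print(rack, t["watts"], "W,", t["weight_kg"], "kg,",
              RACK_HEIGHT_RU - t["used_ru"], "free RU")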

RackView Object Search

  • Search all rack assets using any criteria within the entire facility
    • View the full tree of connectivity from the building down to the device
    • Quickly navigate to the object in RackView by right click or drag & drop into the window
  • RackView – planning mode
    • Design and build your DC and assign work orders to technicians; execute changes once confirmed

  • Colour indicators on the sides of objects within the rack show planned work and current status
 

How PatchPro® Works

The database is the key component

  • The database sends all information and changes to the graphical user interface (GUI) through CADVANCE and/or AutoCAD
  • The advantage of PatchPro is that you can manipulate the database through the GUI, which is quicker and easier
  • The database and the GUI are connected; objects changed on the system update in real time (a generic illustration of the pattern follows below)
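The pattern behind this is the familiar publish/subscribe idea: views register with the data store and are notified whenever an object changes. The sketch below is a generic illustration of that idea in Python, not PatchPro’s actual code:

    # Generic database-drives-GUI sketch: views subscribe to the data store
    # and are notified whenever an object changes.
    class AssetStore:
        def __init__(self):
            self._assets = {}
            self._subscribers = []

        def subscribe(self, callback):
            self._subscribers.append(callback)

        def update(self, asset_id, **attributes):
            self._assets.setdefault(asset_id, {}).update(attributes)
            for notify in self._subscribers:      # push the change to every view
                notify(asset_id, self._assets[asset_id])

    def gui_view(asset_id, attributes):
        print(f"GUI refresh: {asset_id} -> {attributes}")

    store = AssetStore()
    store.subscribe(gui_view)
    store.update("Switch-01", rack="R01", port_count=48)   # the 'GUI' updates immediately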

Key attributes of PatchPro® Software Solutions

Comprehensive technical functionality – in both facilities and data centres.

  • Graphical user interfaces (GUIs) display:
    • Entire facility and multiple sites
    • Data centre(s)
    • Rack view(s)
  • Open system (API) and database architecture

Empower your Data Centre Colocation Customers with PatchPro® Web

Real-time Online Access to Hosted Infrastructure

The PatchPro® Web application gives a colocation data centre’s clients online access to their hosted infrastructure through a user-friendly web interface. It is a powerful tool for empowering DC customers to access and view their network infrastructure, servers and other devices: view free ports and rack units, create patches or cross-connects between devices, and send work orders directly to the NOC.

The results:

  • Provide Visibility
  • Improve Efficiency
  • Empower Customers

Web Features

Front and rear views provide full visibility of all hosted infrastructure within the rack.

– User-level access restricts colocation customers from accessing or viewing other customers’ infrastructure.

Side rack views help ensure there are no conflicting space requirements when adding additional hardware components.

Visualize connections in granular detail:

– Connected/open ports (front and back)

– All connected devices

– Export to Excel/Visio

Customers manage their infrastructure and connectivity

– Components (servers, switches, SFPs)

– Create Connections (Patches & Cross-Connects)

Access unique attributes for all connected devices


Additional Benefits of PatchPro® SaaS

  • SaaS (Software as a service)
    • No capital investment required in licensing, hardware, staff or training
    • Contract based on your scope of work and customized for your requirements and budget
  • Open API

Other Modules (Included)

  • PatchPro® F – Facilities Manager
    • Infrastructure Physical Layer Management (iPLM)
  • PatchPro® I – Infrastructure Connection Manager
    • Data Centre Infrastructure Management (DCIM)
    • Automated Infrastructure Management (AIM)
  • PatchPro® SPM Web
    • Service Plan Manager/Asset Management

 

Greg Pokroy

CEO – JAYCOR International

Dual-Power Feeds in Data Centers

Things like always-on technology, streaming content and cloud adoption are creating high demand for efficient, resilient and fast data centers that never let us down.

To meet these needs, dual-power feeds – two independent electrical feeds coming into a data center from the utility company – are becoming more common to reduce the chance of a complete outage (or not having enough power). This type of power set-up is often seen in Tier 4 data centers. If one of the two power sources suffers from an interruption, the other source will still supply power.

Generally labeled “A” and “B” feeds, each power source has not only its own utility feed, but also:

  • A backup generator
  • A switch that alternates between A and B feeds
  • Electrical and distribution switchboards
  • An uninterruptible power supply (UPS)
  • A power distribution unit (PDU)
  • Rack-level PDUs

At any one of these points along the chain, failure can occur. A true dual-power feed means that there are two separate sets of these components operating independently, reducing the likelihood of downtime due to failure.

Today, most mission-critical IT equipment, such as servers and switches, are also designed with at least dual power supplies. When everything is running normally, the equipment pulls power equally from both power feeds. In the event of an outage, however, the IT equipment can automatically switch all power to one feed or the other.
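The sizing logic behind that failover behaviour is easy to illustrate. The sketch below uses hypothetical loads and feed ratings and an assumed 80% continuous-load derating; it is an illustration only, not a substitute for an electrical design:

    # Hypothetical sketch: can each feed of an A/B pair carry the full IT load
    # if the other feed fails? All figures are illustrative.
    def feed_can_carry(feed_capacity_kw, total_it_load_kw, derating=0.8):
        """True if a single feed, derated for continuous load, covers the full load."""
        return total_it_load_kw <= feed_capacity_kw * derating

    total_it_load_kw = 120.0          # combined draw of the dual-corded IT equipment
    feed_a_kw = feed_b_kw = 160.0     # rating of each independent power path

    per_feed_normal = total_it_load_kw / 2   # normal operation: load shared equally

    for name, capacity in (("A", feed_a_kw), ("B", feed_b_kw)):
        ok = feed_can_carry(capacity, total_it_load_kw)
        print(f"Feed {name}: ~{per_feed_normal:.0f} kW in normal operation, "
              f"covers the full load on failover: {ok}")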

Read full article

Network Upgrades: Utilizing Parallel Fiber Cabling

It comes as no surprise that enterprise and consumer demands are impacting data centers and networks. As speed requirements go up, layer 0 (the physical media for data transmission) becomes increasingly critical to ensuring link quality.

Numerous organizations are looking for an economical, futureproof migration path toward 100G (and beyond). Multimode fiber (MMF) cabling systems continue to be the most popular, futureproof cabling and connectivity solution.

Both duplex and parallel cabling are options for network upgrades. A few weeks ago, we discussed duplex MMF cabling; in this post, we’ll discuss parallel MMF cabling.

 

Parallel Fiber Cabling

When transceiver technology can’t keep up with Ethernet speed requirements, the most obvious solution is to move from duplex to parallel fiber cabling.

Although using BiDi (bi-directional) and SWDM (shortwave wavelength division multiplexing) transceivers can reduce direct point-to-point cabling costs, they do not support breakout configurations (e.g. 40G switch ports to four 10G server ports), which are a very common use case in data centers.

According to research firm LightCounting, approximately 50% of 40GBASE-SR4 QSFP+ form factors are deployed for breakout configuration; the other 50% are deployed for direct switch-to-switch links.

As a matter of fact, 40G QSFP+ and 100G QSFP28 are the most popular form factors used for Ethernet switches in data centers. QSFP (quad small form-factor pluggable) is a bi-directional, hot-pluggable module mainly designed for datacom applications. QSFP+/QSFP28 has 2.5x the data density of SFP+/SFP28, using four parallel electrical lanes. The optical interface is a receptacle for MPO female connectors. Four fibers (1, 2, 3 and 4) transmit the signal; the other four fibers (9, 10, 11 and 12) receive the optical signal.

QSFP transceivers, paired with parallel fiber connectivity with a one-row MPO-12 (Base-8 or Base-12) interface, can support flexible breakout or direct connection.

  • 40G/100G direct links are typically used in switch-to-switch links, which can be supported by duplex or parallel fiber cabling.
  • 40G/100G Ethernet ports can be configured as 4x 10G or 4x 25G ports to support 10G/25G server uplinks.
  • 40G/100GBASE-SR4 transceivers only use eight fibers in an MPO-12 connector; therefore, Base-8 is a cost-optimized cabling solution that allows 100% fiber utilization (see the sketch below)
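To make the fiber-count arithmetic concrete, the short sketch below models the SR4 lane positions in a 12-fiber MPO connector and compares fiber utilization for Base-12 versus Base-8 trunks; it is a simplified illustration of the arithmetic only:

    # SR4 lane usage in a 12-fiber MPO connector: positions 1-4 transmit,
    # 9-12 receive, 5-8 are unused.
    TX_FIBERS = {1, 2, 3, 4}
    RX_FIBERS = {9, 10, 11, 12}
    USED_FIBERS = TX_FIBERS | RX_FIBERS            # 8 fibers per SR4 link

    for base, fibers_per_trunk in (("Base-12", 12), ("Base-8", 8)):
        utilization = len(USED_FIBERS) / fibers_per_trunk * 100
        print(f"{base}: {len(USED_FIBERS)} of {fibers_per_trunk} fibers lit "
              f"({utilization:.0f}% utilization)")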

Read full article

Analyzing Data Center Energy Consumption By Using Business Metrics

About five years ago, the industry first heard about Digital Service Efficiency (DSE) – a method that was designed by eBay to help the company capture a holistic picture of their data center energy consumption and performance.

The initiative was then made public in an effort to help other organizations establish their own data center energy consumption benchmarks and goals, and compare live system performance against those benchmarks and goals to determine actual efficiency levels.

While they were tracking their data center’s power usage effectiveness (PUE), which illustrates how efficient a data center’s electrical and mechanical systems are, they felt something was missing. Calculating PUE didn’t offer them insight into how efficiently their data center equipment (such as servers) was being used. The DSE initiative was formed to fill this gap.

Earlier this year, the team of eBay engineers who created the DSE initiative received a patent for it. With this news, we thought it would be a good time to revisit the data center productivity metric they introduced a few years ago. Even though it was created based on eBay’s core competency – e-commerce – there are still some lessons to be learned.

In eBay’s case, to measure performance and data center energy consumption, they chose to specifically measure how many online business transactions are completed per kilowatt-hour consumed. They calculated this by analyzing four metrics:

  1. The type of performance they wanted to measure (transactions, or the number of online purchases and sales)
  2. Cost per transaction (they measured cost per megawatt-hour, per user and per server)
  3. Environmental impact (amount of carbon dioxide produced per transaction)
  4. Revenue per transaction (they measured revenue per transaction, per megawatt-hour and per user)

They then based their data center improvement goals on those metrics – goals like reducing cost per transaction by a certain percentage, for example, or increasing transactions per kilowatt-hour by a certain percentage.
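As a toy example of the arithmetic behind such metrics (all figures below are invented, not eBay’s):

    # Hypothetical monthly figures for illustrating DSE-style business metrics.
    transactions = 5_000_000      # completed online transactions
    energy_kwh   = 250_000        # data center energy consumed (kWh)
    total_cost   = 40_000.0       # data center cost for the period
    revenue      = 900_000.0      # revenue attributed to those transactions
    co2_kg       = 100_000.0      # estimated CO2 for the energy consumed (kg)

    print("Transactions per kWh:    ", transactions / energy_kwh)
    print("Cost per transaction:    ", total_cost / transactions)
    print("CO2 per transaction (kg):", co2_kg / transactions)
    print("Revenue per transaction: ", revenue / transactions)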

The organization believes that, by substituting your own unique business metric in place of the metric they used – online business transactions – you’ll be able to create your own, unique way of measuring data center productivity and efficiency, too.

What performance metric could you use to measure and benchmark data center energy consumption? Here are a few ideas:

  • Healthcare: number of patients seen or number of appointments set
  • Hospitality: number of guests who stay onsite or number of reservations
  • Manufacturing: number of widgets produced
  • Financial: number of transactions

Read full article

Public vs Private Clouds: How Do You Choose?

An Intel Security survey of more than 2,000 IT professionals last year revealed some fascinating findings about public and private cloud adoption. For starters, within the next 15 months, 80% of all IT budgets will include some spend dedicated to cloud solutions.

Many enterprises are starting to rely on public and private clouds for a few simple reasons:

  • Most good public and private cloud providers regularly and automatically back up data they store so it is recoverable if an incident occurs.
  • Tasks like software upgrades and server equipment maintenance become the responsibility of the cloud provider.
  • Scalability is virtually unlimited; you can grow rapidly to meet business needs, and then scale back just as quickly if that need no longer exists.
  • Upfront costs are lower, since cloud computing eliminates the capital expenses associated with investing in your own space, hardware and software.

But before you decide to move to the cloud, you should know the differences between public and private clouds. Choosing between them often depends on the type of data you’re creating, storing and working with.

 

Public Clouds Defined

The public cloud got its kick-start by hosting applications online; today, however, it has evolved to include infrastructure, data storage and more. Most people do not realise that they have been benefitting from the public cloud for years (before most of us even referred to “public and private clouds”). For example, any time you access your online banking tool or log in to your Gmail account, you’re using the public cloud.

In a public cloud, data center infrastructure and physical resources are shared by many different enterprises, but owned and operated by a third-party services provider (the cloud provider). Your company’s data is hosted on the same hardware as the data from other companies. The services and infrastructure are accessible online. This allows you to quickly scale resources up and down to meet demand. As opposed to a private cloud, public cloud infrastructure costs are based on usage. When dealing with the public cloud, the user/customer typically has no control (and very limited visibility) regarding where and how services are hosted.

 

Private Clouds Defined

In a private cloud, infrastructure is either hosted at your own onsite data center or in an environment that can guarantee 100% privacy (through a multi-tenant data center or a private cloud provider). In these third-party environments, the components of a private cloud (computing, storage and networking hardware, for example) are all dedicated solely to your organization so you can customize them for what you need. In some cases, you’ll even have choices about what type of hardware is used. No other organization’s data will be hosted using the equipment you use.

With an internal private cloud (one hosted at your own data center), your enterprise incurs the capital and operating costs associated with establishing and maintaining it. Many of the benefits listed earlier about choosing cloud services don’t apply to internal private clouds, especially since you serve as your own private cloud provider.

In organizations and industries that require strict security and data privacy, private clouds usually fit the bill because applications can be hosted in an environment where resources aren’t shared with others; this allows higher levels of data security and control as compared to the public cloud.

 

What’s a Hybrid Cloud?

Enterprises also have the opportunity to take advantage of both the public and private cloud by implementing a hybrid cloud, which combines the two.

For example, the public cloud can be used for things like web-based email and calendaring, while the private cloud can be used for sensitive data.

Read full article

10 Factors to consider when Choosing a Rack PDU

At their simplest, rack power distribution units (PDUs) are designed to provide electrical protection and distribute power to networking equipment within racks/cabinets. As the needs and requirements of data centers change, so do the options for rack PDU performance.

There are several questions to consider before selecting rack PDUs that will work well for your data center application. The list below will point you in the right direction, ensuring that the PDUs you choose will fit the design of your data center today and in the future.

1. Type of Mount

Depending on where you want to station it, a rack PDU can be mounted horizontally or vertically. Installed horizontally inside the rack (taking up RU space) is one option; another option is to vertically mount a PDU on the back or side of the enclosure (not taking up any RU space). You will often see one vertically mounted PDU on the left side and one on the right side of a data center cabinet (although rack PDUs can be mounted on either side, based on preferences).

PDUs can be mounted so that power cords exit either at the bottom or top of the enclosure. (If your data center is on a slab, for example, the power cord needs to exit at the top of the enclosure because there is no raised floor for it to pass through.)

2. Amperage

Your power rating – the amount of sustained power draw a PDU can handle – determines the amperage level you’ll need. Why is this important? Because, for example, a PDU with a 30A fuse will blow if the circuit carries more than 30A of current for an extended period of time.

Per the National Electrical Code, 30A PDUs or higher are required to be equipped with a 20A breaker to prevent injury in the event of a short circuit.

3. Voltage

In addition to different amperages, there are different input voltage options for rack PDUs as well; 208/240V is the most common voltage output to computing gear, with a new trend moving toward 400V input. Confirm your infrastructure voltage, and you’ll know what type of voltage you need in your PDU.

4. Single- or 3-Phase Power

What type of input power do you have access to: single-phase power or 3-phase power? The type of power distribution in your data center will determine whether you need a single- or 3-phase PDU.

The difference involves where in the distribution system the phase is broken down. When it’s broken down at the distribution panel, power to the rack will be single-phase service (requiring single-phase rack PDUs). When all three phases are brought to each rack, then a 3-phase PDU is needed. In most data centers, the input power is 3-phase service.
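To tie amperage, voltage and phase together, here is a rough capacity sketch that assumes the common 80% continuous-load derating; it is illustrative only and not an electrical design guide:

    # Rough usable-capacity estimate for a rack PDU (illustrative figures only).
    import math

    def usable_power_kw(voltage, amperage, three_phase=False, derating=0.8):
        """Approximate usable capacity in kW for a single- or 3-phase rack PDU."""
        watts = voltage * amperage
        if three_phase:
            watts *= math.sqrt(3)      # line-to-line 3-phase power
        return watts * derating / 1000.0

    print(f"Single-phase 208V/30A: {usable_power_kw(208, 30):.1f} kW usable")
    print(f"3-phase 208V/30A:      {usable_power_kw(208, 30, three_phase=True):.1f} kW usable")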

Read full article

Which is Right for You: 40G vs 100G Ethernet?

Companies such as Google, Amazon, Microsoft and Facebook started their migration toward 100G in 2015 – and smaller enterprise data centers are now following suit. Many of these new 100G deployments adopt a singlemode fiber solution for the longer reach that best suits their hyperscale data center architectures.

Comparing the 40G and 100G optical transceivers currently available in the market, both have been developed and cost-optimized for their designated reach and applications.

While weighing 40G vs. 100G Ethernet, and deciding which migration path makes more sense for your organization, here are some facts you should know:

  • Switches with 10G SFP+ ports, or 40G (4x 10G) QSFP+ ports, can support 10G server uplinks
  • Switches with 25G SFP28 ports, or 100G (4x 25G) QSFP28 ports, can support 25G server uplinks
  • 100G switches have already been massively deployed in cloud data centers; the cost difference between 40G vs. 100G is small
  • Most new 100G transceivers can easily support 40G operation
  • Some non-standard 100G singlemode transceivers are designed and optimized for cloud data center deployment; product availability for other environments is limited for the short term
  • Traditional Ethernet networking equipment giants Cisco and Arista have already started selling switch software on a standalone basis that goes into networking devices (such as a “white box” solution with merchant switch ASICs); this move accelerates hardware and software disaggregation and lowers overall ownership costs for end-users
  • According to Dell’Oro, 100G switch port shipments will surpass 40G switch port shipments in 2018.

When considering system upgrades from 10G, it’s essential to understand that 40G will also be needed to support the legacy installed base with 10G ports; 40G/100G switch port configurability will certainly accelerate 100G adoption in the enterprise market.

In 2017, 100G Ethernet is already moving into the mainstream – not just in hyperscale cloud data centers. Next-wave 200G/400G Ethernet will soon hit the market, and standards bodies have already initiated a study group for 800G and 1.6T Ethernet to support bandwidth requirements beyond 2020.

Wrapping Up the Road to 800G

We’re almost finished with our blog series covering the road to 800G Ethernet. Subscribe to our blog to follow this series, as well as receive our other content each week. As part of this blog series, we’ve covered the following topics:

 

Read full article

Copyright © 2024 Jaycor International
Engineered by: NJIN Agency