A Genius Tool for Effective Facilities Management – PatchPro®

PatchPro® F – iPLM Infrastructure Physical Layer Management
Design, Build & Manage your Enterprise and Data Centre in Granular Detail

iPLM View

The iPLM View enables the user to access and visualize all objects, their attributes and cost centres within the entire facility:

  • All infrastructure (shown per floor)
    • Power distribution
      • DB Boards
      • Cabling, routes, and ducts
    • Ventilation ducts and CRAC Units
    • Cost Centre and PUE (real-time)
    • All other assets – offices/free space, PCs, furniture

  • All network infrastructure and connectivity
    • Cabling, patch cords, wall-jacks, cable routes, and ducts
    • All connections from start-device through the network (point-to-point) to end-device

  • Cabinet/rack – real-time visualization of:
    • Dimensions (e.g. 800x1000x2000 42RU)
    • Free Rack Units
    • Total and Max BTUs
    • Actual and Max Wattage
    • Customisable views

  • Search the object manager by keyword, in granular detail, for any criteria/objects within the entire facility
  • Quickly and easily navigate/zoom to the asset, view its connections and attributes

  • DC Managers create real-time design changes in planning mode and deploy work orders for execution.
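The rack and cost-centre figures listed above come down to simple arithmetic. As a minimal sketch (the watts-to-BTU conversion factor is standard; the function names and sample values are illustrative, not taken from PatchPro):

```python
# 1 W of electrical load dissipates roughly 3.412 BTU/hr of heat.
WATTS_TO_BTU_HR = 3.412

def rack_heat_btu_hr(actual_watts: float) -> float:
    """Heat load of a cabinet in BTU/hr from its actual wattage."""
    return actual_watts * WATTS_TO_BTU_HR

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# A 42RU cabinet drawing 4.2 kW dissipates about 14,330 BTU/hr.
print(round(rack_heat_btu_hr(4200)))   # -> 14330
# A facility drawing 500 kW to run a 320 kW IT load has a PUE of ~1.56.
print(round(pue(500, 320), 2))         # -> 1.56
```

Tracking these per cabinet is what lets actual wattage and BTUs be compared against each rack's maximums in real time.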

IoT is Here to Stay: The Evolution of Converged Networks

Lately, I’ve been reminded of a quote that’s often attributed to Charles Darwin: “… It is not the most intellectual of the species that survives; it is not the strongest that survives, but the species that survives is the one that is able best to adapt and adjust to the changing environment in which it finds itself.”

The idea behind this quote remains true and applies well beyond the field of evolutionary biology.

Convergence (version 2.0) is here, and, to survive, we need to adjust, change and adapt to our changing environment. We cannot build networks for today (and for the future) like we have built them in the past, lest we go the way of the dodo bird.

Let’s look at the changes and improvements made since the first converged network (Convergence 1.0).


We established networks in the voice world that operated on high-availability systems we could count on without question. When you picked up the handset, you had a dial tone. With the rapid growth of data networks starting in the ’70s, it was inevitable that the industry would find synergies to allow voice and data telecommunications to exist on the same converged network.


In hindsight, I would argue that the technological and engineering issues were actually the easiest to overcome. The most difficult were the people issues: resistance to or fear of change, ego, protectionism, organizational boundaries, and risk avoidance, to name just a few. As the technologies grew, evolved and improved, so did our understanding. This helped us break down and overcome the people issues. A converged network bringing voice and data together is now the norm.

I’ve had a number of recent discussions with user groups in regard to the Internet of Things (IoT) and the opportunity that new technologies, applications and devices bring to an organization, as well as the challenges that can arise in adapting to this changing environment.

Traditionally, machine-to-machine (M2M) or device-based networks sat outside our converged networks, whether they be for digital building technologies, like video and security; smart cars; industrial networks; or many others.


In an IoT world, those networks still exist, as they always have. They may work on the same physical and/or logical networks with the same cables, boxes, and software, or they may use “like” networks to better interact.

The IoT world is here, and the level and rate of convergence are increasing in volume and velocity. IoT is a nebulous concept – hence all the cloud analogies. It will continue to morph as technologies evolve along with those that use it. Your corporate IoT cloud will look different from mine, and that’s okay.


Will we ever get to a true hyper-converged network where anything can talk to anything at any time? I don’t know – but that’s a people issue, not an engineering one. My lack of understanding or foresight doesn’t mean I don’t need to adjust and prepare for that eventuality. Converged networks will grow as they have; I will grow and adapt, or else I risk the potential of not being able to function in my changing environment.

Which brings me to adapting and adjusting to a changing environment from a network infrastructure frame of mind. Our TIA TR-42 (Telecommunications Cabling Systems ANSI/TIA-568 family), BICSI (TDMM and others) and proprietary or third-party documents must adapt and adjust. Whether they be specifications, standards or best-practice resources, they must evolve or face irrelevance (extinction, to extend the metaphor).

Our converged networks have evolved with higher speeds, higher power and more portability and mobility than ever before. Like many a pundit, I remember prognosticating in the ’90s, when people were amazed at shared-megabit network capabilities and the ability to talk on the phone untethered. Simply creating faster networks, with higher grades of cabling, is not the answer.

Improvements in speed, noise immunity, power, portability, and mobility are all important, but they alone won’t get us where we need to go. We need to think differently, challenge the status quo and create new solutions. We need to adjust and adapt.

Traditional network guidance has usually centered on human telecommunications, whether directly, through things like voice and video, or indirectly through human-controlled devices, like our computers and tablets. Devices have been communicating through artificial means at least as long as we have, either through mechanical wires, pneumatics, hydraulics, electronic signals or other means. But now those machines are joining us in the digital world; rather than relying on proprietary protocols, they can now run on the same networks that our human-controlled devices do.

The bias toward human-controlled telecommunications is natural given the nature of standards development. Almost every standard defines “user” as a primary consideration when designing networks. Devices, despite having the ability to communicate on the same networks, have noticeably different requirements and, therefore, need different considerations. A one-size-fits-all approach to network design has arguably never worked well; it certainly won’t for our digital buildings and IoT environment of the future.

Using the smart building example, a “user” is a transient device on the network. The user goes home at the end of the day and on holidays, and user groups or customers change over with leases and occupancy changes. The lights, door controls, surveillance, security, mechanical and other digital building systems are effectively permanent fixtures. Our laptops, phones, and tablets are typically refreshed every few years. A building’s systems and technologies are expected to last much longer than that.

Furthermore, the operational risks, concerns, needs, and security requirements are different from “users” to “devices.” A person can get sick or take a vacation; a building cannot. The lights must always turn on, the HVAC systems must always work, the doors must always open, close and secure – without question. Even though a door control, lighting or HVAC system may not require the same bandwidth as a user, it does not mean that their network has lesser requirements. If anything, they may have higher requirements in some areas. If my laptop doesn’t function, I can still connect with my tablet or my phone. If a building doesn’t function, it impacts all the users – not just one.

I know that industry standards and best practices are adapting and adjusting to a new environment. Make sure your practices, specifications, assumptions, and procedures do as well. Otherwise, we risk new technology becoming an impediment to our goals – not through any fault of its own, but rather through how it was implemented. Make sure your team members, both external and internal, remember the lessons of Convergence 1.0 so they can be ready for 2.0, which is happening now. “We’ve always done it this way” might as well have been the mantra of the dodo bird.

Full article

Cost-Effective Industrial Wi-Fi

Now available in stock – JAYCOR offers a complete end-to-end solution for cost-effective industrial/outdoor ruggedized Wi-Fi. Purchase all components for a turnkey solution:

  • Wireless AP (Access Point)
  • Omni or directional antennas
  • Antenna (N-Type) & Ethernet (RJ45) patch cords
  • Antenna & Ethernet lightning surge protection
  • DIN rail/wall-mount PoE switches, media converters and SFP modules
  • DIN rail power supplies & cabtyre
  • Outdoor enclosure


How PatchPro® Works

The key point is the Database

  • The Database sends all information and changes to the graphical user interface (GUI) through CADVANCE and/or AutoCAD
  • The advantage of PatchPro is that you can manipulate the database through the GUI, which is quicker and easier
  • The Database and the GUI are connected: objects changed on the system update in real time
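The database-drives-GUI relationship described above is essentially a publish/subscribe pattern. A minimal sketch of the idea – all class and method names here are illustrative and are not PatchPro's actual API:

```python
from typing import Any, Callable

class ObjectDatabase:
    """Single source of truth: every change is pushed to subscribed views."""

    def __init__(self) -> None:
        self._objects: dict[str, dict[str, Any]] = {}
        self._subscribers: list[Callable[[str, dict[str, Any]], None]] = []

    def subscribe(self, on_change: Callable[[str, dict[str, Any]], None]) -> None:
        self._subscribers.append(on_change)

    def update(self, object_id: str, **attributes: Any) -> None:
        # The database changes first; every connected view is then notified,
        # which is what keeps GUI edits and database edits in step.
        self._objects.setdefault(object_id, {}).update(attributes)
        for notify in self._subscribers:
            notify(object_id, self._objects[object_id])


class RackView:
    """Stand-in for a GUI pane that re-renders whenever an object changes."""

    def __init__(self) -> None:
        self.rendered: dict[str, dict[str, Any]] = {}

    def on_change(self, object_id: str, attributes: dict[str, Any]) -> None:
        self.rendered[object_id] = dict(attributes)


db = ObjectDatabase()
view = RackView()
db.subscribe(view.on_change)
db.update("rack-07", free_ru=12, actual_watts=3400)
print(view.rendered["rack-07"])  # -> {'free_ru': 12, 'actual_watts': 3400}
```

Because every update flows through the database before any view is redrawn, the database stays authoritative whether a change originates in the GUI or elsewhere.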

Key attributes of PatchPro® Software Solutions

Comprehensive technical functionality – both in the Facilities as well as in Data Centres.

  • Graphical User Interface (GUI) displays:
    • Entire Facility & Multiple Sites
    • Data Centre(s)
    • Rack View(s)
  • Open system (API) and database architecture

The Evolution of Wireless Standards

In the late 1990s, one of the first wireless standards was ratified. You may remember IEEE 802.11b – the first wireless LAN standard to be widely adopted and incorporated into computers and laptops. A few years later came IEEE 802.11g, which offered signal transmission over relatively short distances at speeds of up to 54 Mbps. Both standards operated in the unlicensed 2.4 GHz frequency range. In 2009, IEEE 802.11n (which operated in the 2.4 GHz and 5 GHz frequency ranges) was a big step up; it provided reliable wireless access and became the de facto standard for mobile users.

Understanding wireless technology and standards like these is key to making sure you are investing in technology and equipment that can support your organisation’s short-term and long-term network-connection requirements. Wireless standards lay out specifications that hardware and software designed to those standards must follow.

Now that we have covered the major wireless standards of the past, let’s look ahead at current standards – and what is yet to come.



General-Purpose Applications

Today’s wireless standards, like IEEE 802.11ac (Wave 1 and Wave 2), operate in the 5 GHz frequency range. This standard is used for many general-purpose, short-range, multi-user applications, like connecting end devices to networks.

As we have mentioned in previous blogs, IEEE 802.11ax is the “next big thing” in terms of wireless standards. As the successor to 802.11ac, 802.11ax operates in both the 2.4 GHz and 5 GHz frequency bands. It will offer near-10G speeds and the ability for multiple people to use one network simultaneously with fewer connectivity problems (while still maintaining fast connection speeds). It will improve average throughput per user by a factor of at least four compared to 802.11ac Wave 1.

High-Performance Applications

Operating at an unlicensed frequency of 60 GHz are IEEE 802.11ad and IEEE 802.11ay, which are used primarily for short-range, point-to-point applications rather than point-to-multipoint applications. 802.11ay is an update to 802.11ad: it can offer speeds between 20 Gbps and 40 Gbps, as well as improved range.

IoT Applications

Operating at lower frequencies are standards like 802.11af (UHF/VHF) and 802.11ah (915 MHz). These standards are designed for extended-range applications, like connecting hundreds of remote Internet of Things (IoT) sensors and devices. They’re also used in rural areas.

Because they operate in lower-frequency ranges, they’re able to offer extended operational ranges. They can carry signals for miles, but with relatively low throughput (up to roughly 350 Mbps).
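The range advantage of these sub-GHz standards follows directly from free-space path loss, which grows with frequency. A quick comparison using the standard free-space formula (idealised numbers that ignore antennas, obstructions and regulatory power limits):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# At 1 km, a 915 MHz (802.11ah) signal suffers ~15 dB less path loss
# than a 5 GHz signal - which is why sub-GHz standards reach farther.
print(round(fspl_db(1000, 915e6), 1))  # -> 91.7
print(round(fspl_db(1000, 5e9), 1))    # -> 106.4
```

That ~15 dB difference is what a lower-frequency link can spend on distance instead of losing to the air.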

Read full article

The Impact of Patch Cord Types on the Network

Data Centers and the networks they support have expanded to be an integral part of every business. The software applications that keep mission-critical operations up and running in highly redundant, 24/7 environments rely on highly engineered structured cabling systems to connect the cloud to every user. Structured cabling is the foundation that supports data centers.

Although structured cabling is not as sexy as diesel-driven UPS systems or adiabatic cooling systems, it plays a huge role in supporting the cloud. One important component of structured cabling is often overlooked: patch cords.

Oftentimes, patch cords are purchased haphazardly and installed at the last minute. But the right patch cord type can improve the performance of your network. The proper design, specification, manufacturing, installation and ongoing maintenance of patch cord systems can help ensure that your network experiences as much uptime as possible.

A patch cord problem can wreak havoc on an enterprise, from preventing an airline customer from making a necessary reservation change to keeping a hotel guest from getting work done while on business travel.

What Drives Data Growth?

Explosive data growth due to social media, video streaming, IoT, big data analytics and changes in the data center environment (virtualization, consolidation and high-performance computing) means one thing: Data traffic is not only growing in bandwidth, but also in speed.

Another essential point is network design. Today’s network designs, such as leaf-spine fabrics, make the network flatter, which lowers latency – and makes the choice of Ethernet media and corresponding patch cord types incredibly important.

The Definition of a Patch Cord

A patch cord is a cable with a connector on both ends (the type of connector is a function of use). A fiber patch cord is sometimes referred to as a “jumper.”

Patch cords are part of the local area network (LAN), and are used to connect network switches to servers, storage and monitoring portals (traffic access points). They are considered to be an integral part of the structured cabling system.

Copper patch cords are made with either solid or stranded copper; due to potential signal loss, lengths are typically kept short.

A fiber patch cord is a fiber optic cable that is capped at both ends with connectors. The caps allow the cord to be rapidly connected to an optical switch or other telecommunications/computer device. The fiber cord is also used to connect the optical transmitter, receiver and terminal box.

Read full article

Network Cables: How Cable Temperature Impacts Cable Reach

There is nothing more disheartening than making a big investment in something that promises to deliver what you require – only to find out once it is too late that it is not performing according to expectations. What happened? Is the product not adequate? Or is it not being utilised correctly?

Cable Performance Expectations

This scenario holds true with category cable investments as well. A cable that cannot fulfil its 100 m channel reach (even though it is marketed as a 100 m cable) can derail network projects, increase costs, cause unplanned downtime and call for lots of troubleshooting (especially if the problem is not obvious right away).

High cable temperatures are sometimes to blame for cables that don’t perform up to the promised 100 m. Cables are rated to transmit data over a certain distance up to a certain temperature. When the cable heats up beyond that point, resistance and insertion loss increase; as a result, the channel reach of the cable often needs to be de-rated in order to perform as needed to transmit data.

Many factors cause cable temperatures to rise:

  • Cables installed above operational network equipment
  • Power being transmitted through bundled cabling
  • Uncontrolled ambient temperatures
  • Using the wrong category cabling for the job
  • Routing of cables near sources of heat

In Power over Ethernet (PoE) cables – which are becoming increasingly popular to support digital buildings and IoT – as power levels increase, so does the current running through the cable, and the amount of heat generated within the cable increases as well. Bundling makes temperatures rise even more, because the heat generated by the current passing through the inner cables can’t escape. As temperatures rise, so does cable insertion loss.

Testing the Impacts of Cable Temperature on Reach

To assess this theory, I created a model to test temperature characteristics of different cables. Each cable was placed in an environmental chamber to measure insertion loss with cable temperature change. Data was generated for each cable; changes in insertion loss were recorded as the temperature changed.

The information gathered from these tests was combined with connector and patch cord insertion loss levels in the model below to determine the maximum length that a typical channel could reach while maintaining compliance with channel insertion loss.

This model represents a full 100 m channel with 10 m of patch cords and an initial permanent link length of 90 m. I assumed that the connectors and patch cords were in a controlled environment (at room temperature, and insertion loss is always the same). Permanent links were assumed to be at a higher temperature of 60 degrees C (the same assumption used in ANSI/TIA TSB-184-A, where the ambient temperature is 45 degrees C and temperature rise due to PoE current and cable bundling is 15 degrees C).

Using the data from these tests, I was able to reach the full 100 m length with Belden’s 10GXS, a Category 6A cable. I then modeled Category 6 and Category 5e cables from Belden at that temperature, and wasn’t able to reach the full 100 m. Why? Because the insertion loss of the cable at this temperature exceeded the insertion loss performance requirement.
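The length calculation behind this kind of model is a simple insertion-loss budget. The sketch below uses the generic de-rating coefficients commonly cited for unscreened category cable (about 0.4 %/°C from 20–40 °C and 0.6 %/°C from 40–60 °C); the cable figures are illustrative, not Belden's measured data:

```python
def il_temperature_factor(temp_c: float) -> float:
    """Multiplier on 20 C insertion loss, using generic de-rating
    coefficients commonly cited for unscreened category cable:
    +0.4 %/C from 20-40 C and +0.6 %/C from 40-60 C."""
    factor = 1.0
    if temp_c > 20:
        factor += 0.004 * (min(temp_c, 40) - 20)
    if temp_c > 40:
        factor += 0.006 * (min(temp_c, 60) - 40)
    return factor

def max_link_length_m(channel_il_budget_db: float, cord_il_db: float,
                      link_il_per_m_db: float, temp_c: float) -> float:
    """Longest permanent link (m) that keeps channel insertion loss within
    budget, with patch cords held at room temperature as in the model above."""
    hot_il_per_m = link_il_per_m_db * il_temperature_factor(temp_c)
    return (channel_il_budget_db - cord_il_db) / hot_il_per_m

# Illustrative numbers only (not Belden's measurements): a 30 dB channel
# budget with 4 dB consumed by cords/connectors and a cable losing
# 0.28 dB/m at 20 C supports ~93 m at 20 C but only ~77 m at 60 C.
print(round(max_link_length_m(30, 4, 0.28, 20), 1))  # -> 92.9
print(round(max_link_length_m(30, 4, 0.28, 60), 1))  # -> 77.4
```

The same budget logic explains the result above: a cable whose 20 °C insertion loss sits close to the channel limit has no headroom left once the 60 °C de-rating is applied.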

Read full article

Cabling Demands for Digital Buildings

2017 was predicted to be the year of the digital building, and there has certainly been progress in this direction. In fact, according to Deloitte, sensor deployment in commercial buildings could potentially grow by 79% between 2015 and 2020.

Support for Internet of Things (IoT) is growing, bringing standalone building systems onto one platform. As all these systems and devices are being connected on a single IP network, they can be integrated to gather data, make automatic adjustments and provide intelligence and analytics for informed decision-making to reduce operating costs and energy use, increase occupant satisfaction, improve safety and reduce time spent on troubleshooting and maintenance.

In some cases, existing infrastructure is already being put to the test due to cloud adoption. As augmented and virtual reality move into the workplace – whether office settings, hospitals, hospitality environments or educational institutions – and more devices join the network, the demands placed on infrastructure will become more intense. (And even though this newer technology is not widely deployed yet – check back in a few years.)

What demands do these digital buildings place on cabling infrastructure? A well-designed, high-performance cabling infrastructure is what brings IoT and digital buildings to life. All of the data (and power, in most cases) required for these devices and applications travels via the network’s category cabling. Without it, devices wouldn’t be able to communicate with each other, gather and relay important information or be controlled and adjusted remotely.

As digital buildings take over, it’s important to keep in mind the demands they place on a structured cabling system.

Demand No. 1: More Power Needs

Digital building cabling will need to support Power over Ethernet (PoE). This cabling technology safely transmits power and data over a single standard network cable, allowing devices – cameras, lighting systems, wireless access points, etc. – to be deployed anywhere. This allows remote control and data collection on one infrastructure. As device complexity continues to increase, the amount of power these devices need also increases (up to 100W in some cases). Outdated cabling systems won’t be able to safely and successfully carry this power level.
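The heat side of that power demand comes down to I²R loss in the conductors. A rough DC sketch – the loop-resistance figure is a typical Category 5e value, and the numbers are illustrative rather than taken from any specification:

```python
def cable_dissipation_w(pd_power_w: float, supply_v: float,
                        loop_ohm_per_100m: float, length_m: float,
                        pairs_per_direction: int = 2) -> float:
    """Approximate DC power dissipated as heat inside the cable.
    Four-pair PoE puts two pairs in parallel each way, halving resistance."""
    i_total = pd_power_w / supply_v                  # total loop current (A)
    r_loop = loop_ohm_per_100m * length_m / 100.0    # one pair's loop resistance
    r_eff = r_loop / pairs_per_direction             # parallel pairs share current
    return i_total ** 2 * r_eff

# Illustrative: delivering ~71 W to a device at 52 V over 100 m of cable
# with an 18.8-ohm pair loop (a typical Category 5e figure) turns roughly
# 17.5 W into heat inside the cable itself - heat that bundling traps.
print(round(cable_dissipation_w(71, 52, 18.8, 100), 1))  # -> 17.5
```

Because the loss grows with the square of the current, doubling delivered power roughly quadruples the heat – which is why high-power PoE pushes older cabling past its temperature ratings.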


Demand No. 2: Increased Temperatures

Running more power inside a network cable can increase the cable’s internal temperature. When cables get hotter, insertion loss increases. This can cause unplanned downtime and may ultimately damage the cable, hurting its long-term performance.

If cables are tightly packed in trays and pathways, temperatures could rise even more because they can’t dissipate. When a cable’s temperature exceeds the recommended level, it may need to be de-rated – which means it won’t reach the full length promised.

Read full article

Single-Pair Ethernet Cabling: Four New Applications

Four New Types of Single-Pair Ethernet Cabling

For years, Ethernet cabling has used four twisted pairs to carry data without worrying about noise in data lines. Recent developments in IEEE 802.3 (Ethernet Working Group) and TIA TR-42 (Telecommunications Cabling Systems Engineering Committee) have unveiled four standards projects that may change that; instead of cabling with four balanced twisted pairs, these standards feature a single balanced twisted pair.

Of these four, one will impact enterprise networks the most. We will cover this standard first, and then explain the three other types of single-pair Ethernet cables below.

IoT 1 Gbps Applications: 100 m Reach

The 2017 Ericsson Mobility Report says that there will be nearly 28 billion connected devices in place globally by 2021 – and more than half of these will be related to the Internet of Things (IoT).

With the ability to deliver data at speeds of up to 1 Gbps along with power delivery, this standard is intended specifically for IoT applications. Known as ANSI/TIA-568.5, it will provide cable, connector, cord, link and channel specifications for single-pair connectivity in enterprise networks.

This single-pair Ethernet cable may help network professionals connect more devices to their networks as the industry moves toward digital buildings – where all types of systems and devices integrate directly with the enterprise network to capture and communicate data.

Most of the devices used in digital buildings – such as sensors – have minimal power and bandwidth requirements (in applications like building automation and alarm systems). In these cases, single-pair Ethernet cable can provide a cost-effective cabling solution. The cable is smaller and lighter than a standard four-pair Ethernet cable, so it can also reduce pathway congestion.

The three other single-pair Ethernet cable types don’t apply directly to data centers or enterprise networks, but they’re still important to understand.

Read full article

Ethernet Switch Evolution: High Speed Interfaces

Technology development has always been driven by emerging applications: big data, Internet of Things, machine learning, public and private clouds, augmented reality, 800G Ethernet, etc.

Merchant Silicon switch ASIC chip development is an excellent example of that golden rule.


OIF’s Common Electrical Interface Development

The Optical Internetworking Forum (OIF) is the standards body – a nonprofit industry organization – that develops common electrical interfaces (CEIs) for next-generation technology to ensure component and system interoperability.

The organization develops and promotes implementation agreements (IAs), offering principal design and deployment guidance for a SerDes (serializer-deserializer), including:

  • CEI-6G (which specifies the transmitter, receiver and interconnect channel associated with 6+ Gbps interfaces)
  • CEI-11G (which specifies the transmitter, receiver and interconnect channel associated with 11+ Gbps interfaces)
  • CEI-28G (which specifies the transmitter, receiver and interconnect channel associated with 28+ Gbps interfaces)
  • CEI-56G (which specifies the transmitter, receiver and interconnect channel associated with 56+ Gbps interfaces)

OIF’s CEI specifications are developed for different electrical interconnect reaches and applications to ensure system service and connectivity interoperability at the physical level:

  • USR: Ultra-short reach, for < 10 mm die to optical engine within a multi-chip module (MCM) package.
  • XSR: Extremely short reach, for < 50 mm chip to nearby optical engine (mid-board optics); or CPU to CPU/DSP arrays/memory stack with high-speed SerDes.
  • VSR: Very short reach, < 30 cm chip (e.g. switch chip) to module (edge pluggable cage, such as SFP+, QSFP+, QSFP-DD, OSFP, etc.).

Read full article