Cost-Effective Industrial Wi-Fi

Now available in stock – JAYCOR offers a complete end-to-end solution for cost-effective industrial/outdoor ruggedized Wi-Fi. Purchase all components for a turnkey solution:

  • Wireless AP (Access Point)
  • Omni or directional antennas
  • Antenna (N-Type) & Ethernet (RJ45) patch cords
  • Antenna & Ethernet lightning surge protection
  • DIN Rail/Wall Mount PoE Switches, Media Converters and SFP Modules
  • DIN Rail Power Supplies & Cabtyre
  • Outdoor enclosure

 

Empower your Data Centre Colocation Customers with PatchPro® Web

Real-time Online Access to Hosted Infrastructure

The PatchPro® Web application gives a colocation data centre’s clients online access to their hosted infrastructure through a user-friendly web interface. It is an excellent tool for empowering DC customers to access and view their network infrastructure, servers and other devices: view free ports and rack units, create patches or cross-connects between devices and send work orders directly to the NOC.

The results:

  • Provide Visibility
  • Improve Efficiency
  • Empower Customers

Web Features

Front and rear views provide full visibility of all hosted infrastructure within the rack.

– User-level access restricts colocation customers from accessing and viewing other customers’ infrastructure.

Side rack views help verify that no conflicting space requirements apply when adding additional hardware components.

Visualize connections in granular detail:

– Connected/open ports (front and back) visually

– All connected devices

– Export to Excel/Visio

Customers manage their infrastructure and connectivity

– Components (Servers, switches, SFPs)

– Create Connections (Patches & Cross-Connects)

Access unique attributes for all connected devices


Additional Benefits of PatchPro® SaaS

  • SaaS (Software as a service)
    • No capital investment in licensing, hardware, staff and training required to execute
    • Contract based on your scope of work and customized for your requirements and budget
  • Open API

Other Modules (Included)

  • PatchPro® F – Facilities Manager
    • Infrastructure Physical Layer Management (iPLM)
  • PatchPro® I – Infrastructure Connection Manager
    • Data Centre Infrastructure Management (DCIM)
    • Automated Infrastructure Management (AIM)
  • PatchPro® SPM Web
    • Service Plan Manager/Asset Management

 

Greg Pokroy

CEO – JAYCOR International

Network Upgrades: Utilizing Parallel Fiber Cabling

It comes as no surprise that enterprise and consumer demands are impacting data centers and networks. As speed requirements go up, layer 0 (the physical media for data transmission) becomes increasingly critical to ensuring link quality.

Numerous organizations are looking for an economical, futureproof migration path toward 100G (and beyond). Multimode fiber (MMF) cabling systems continue to be the most popular, futureproof cabling and connectivity solution.

Both duplex and parallel cabling are options for network upgrades. A few weeks ago, we discussed duplex MMF cabling. In this post, we’ll discuss parallel MMF cabling.

 

Parallel Fiber Cabling

When transceiver technology can’t keep up with Ethernet speed requirements, the most obvious solution is to move from duplex to parallel fiber cabling.

Although using BiDi (bi-directional) and SWDM (shortwave wavelength division multiplexing) transceivers can reduce direct point-to-point cabling costs, they do not support breakout configuration (e.g. 40G switch ports to four 10G server ports), which is a very common use in data centers.

According to research firm LightCounting, approximately 50% of 40GBASE-SR4 QSFP+ form factors are deployed for breakout configuration; the other 50% are deployed for direct switch-to-switch links.

As a matter of fact, 40G QSFP+ and 100G QSFP28 are the most popular form factors used for Ethernet switches in data centers. QSFP (quad small form-factor pluggable) is a bi-directional, hot-pluggable module designed mainly for datacom applications. QSFP+/QSFP28 offers 2.5x the data density of SFP+/SFP28 by using four parallel electrical lanes. The optical interface is a receptacle for MPO female connectors: four fibers (1, 2, 3 and 4) transmit the optical signal, and the other four (9, 10, 11 and 12) receive it.

QSFP transceivers, paired with parallel fiber connectivity with a one-row MPO-12 (Base-8 or Base-12) interface, can support flexible breakout or direct connection.

  • 40G/100G direct links are typically used in switch-to-switch links, which can be supported by duplex or parallel fiber cabling.
  • 40G/100G Ethernet ports can be configured as 4x 10G or 4x 25G ports to support 10G/25G server uplinks.
  • 40G/100GBASE-SR4 transceivers only use eight of the twelve fibers in an MPO-12 connector; therefore, Base-8 is a cost-optimized cabling solution that allows 100% fiber utilization.
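The fiber math above can be sketched in a few lines of Python. This is an illustrative sketch (not from the article); the lane positions follow the SR4 description above, and the trunk fiber counts simply model Base-12 vs. Base-8 cabling:

```python
# Sketch of MPO-12 fiber usage for a 40G/100GBASE-SR4 link, based on the
# lane assignments described above: fibers 1-4 transmit, fibers 9-12
# receive, and the middle four fibers (5-8) are unused by SR4.
SR4_TX = {1, 2, 3, 4}
SR4_RX = {9, 10, 11, 12}

def utilization(trunk_fibers: int) -> float:
    """Fraction of trunk fibers an SR4 link actually lights up."""
    used = len(SR4_TX | SR4_RX)  # 8 fibers per SR4 link
    return used / trunk_fibers

print(f"Base-12 utilization: {utilization(12):.0%}")  # 67%
print(f"Base-8 utilization:  {utilization(8):.0%}")   # 100%
```

This is why a Base-8 trunk reaches 100% fiber utilization with SR4 optics, while a classic Base-12 trunk strands four fibers per link.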

Read full article

Supporting Your Future of Network Technology: 6 Ways to Design Layer 0

The year 2014 was a key moment for the structured cabling industry. That is when the number of devices on the Internet officially surpassed the number of people on the Internet. In other words, we’re carrying and using more connected devices than ever before. Since then, Internet of Things (IoT) has begun to take over conversations about technology. Digital buildings – which feature a connected infrastructure to bring building systems together via the enterprise network – are moving to the forefront.

With these changes, how can you design your cabling infrastructure – your layer 0 – to support network technology changes? Every structured cabling system is unique, designed to fit a company’s specific needs. Taking the future into account during cabling projects helps maximize your investment while decreasing long-term costs. With correct planning and design, you’ll be ready for future hardware and software upgrades, be able to support increasing numbers of devices joining your network and will be set to accommodate higher-speed Ethernet migrations, such as 40G/100G.

We have gathered our best pieces of advice on how to design your layer 0 to support the future of network technology.

1. Abide by Cabling Standards

Following standards for structured cabling systems provides guidance and best practices for the lifetime of your layer 0, allows products from different vendors to be mixed, and helps with future moves, adds and changes:

  • TIA, North American standards for things like telecommunications cabling (copper and fiber), bonding and grounding, and intelligent building cabling systems
  • ISO/IEC, global standard harmonized with TIA networking standards
  • IEEE, which creates Ethernet-based standards for networks and relies on TIA and ISO/IEC layer 0 standards

2. Invest in High-Performance Cables

When your cabling system is designed to be used across multiple generations of hardware, it can remain in place longer while supporting fast and easy hardware upgrades.

Analyze how your business is currently run, as well as any expected business or technology shifts in the years to come. Then match these requirements with the performance characteristics of the cabling systems you’re considering.

Make sure that the category cabling can:

  • Support the full 100m distance per channel
  • Accommodate a tight bend radius inside wall cavities and other tight spaces
  • Support the highest operating temperature rating possible with low DC resistance
  • Maintain excellent transmission performance
  • Be bundled or tightly packed into trays and pathways without performance issues

Most Category 6A cables offer all of the benefits mentioned above, making Category 6A a solid decision that will support the future of network technology.

3. Find a Reputable Warranty

One of the best ways to ensure that your cabling and connectivity solutions will last is to find products that are backed by extensive and impressive warranties (such as a 25-year warranty).

When layer 0 is properly designed and installed, the structured cabling system will support your short-term and long-term needs. A reliable warranty ensures that this happens. For example, with a 25-year warranty, the installed system should meet or exceed industry standards for 25 years, as well as support future standards and protocols. If this isn’t the case, the manufacturer should address the issue.

Read full article

The Evolution of Wireless Standards

In the late 1990s, one of the first wireless standards was introduced. You may remember IEEE 802.11b – the first wireless LAN standard to be widely adopted and incorporated into computers and laptops. A few years later came IEEE 802.11g, which offered signal transmission over relatively short distances at speeds of up to 54 Mbps. Both standards operated in the unlicensed 2.4 GHz frequency range. In 2009, IEEE 802.11n (which operated in the 2.4 GHz and 5 GHz frequency ranges) was a big step up. It provided anytime wireless access and became the de facto standard for mobile users.

Understanding wireless technology and standards like these is key to making sure you are investing in technology and equipment that can support your organisation’s short-term and long-term network-connection requirements. Wireless standards lay out specifications that must be followed when hardware or software related to those standards is designed.

Now that we have covered the major wireless standards of the past, let’s look ahead at current standards – and what is yet to come.

 

 

General-Purpose Applications

Today’s wireless standards, like IEEE 802.11ac (Wave 1 and Wave 2), operate in the 5 GHz frequency range. This standard is used for many general-purpose, short-range, multi-user applications, like connecting end devices to networks.

As we have mentioned in previous blogs, IEEE 802.11ax is the “next big thing” in terms of wireless standards. As the successor to 802.11ac, 802.11ax operates in both the 2.4 GHz and 5 GHz frequency spectrums. It will offer 10G speeds, and the ability for multiple people to use one network simultaneously with fewer connectivity problems (and while still maintaining fast connection speeds). It will improve average throughput per user by a factor of at least four as compared to 802.11ac Wave 1.

High-Performance Applications

Operating at an unlicensed frequency of 60 GHz are IEEE 802.11ad and IEEE 802.11ay, which are used primarily for short-range, point-to-point applications vs. point-to-multipoint applications. 802.11ay is an update to 802.11ad, improving throughput and range. As compared to 802.11ad, 802.11ay can offer speeds between 20Gbps and 40Gbps, as well as an improved range.

IoT Applications

Operating at lower frequencies are standards like 802.11af (UHF/VHF) and 802.11ah (915 MHz). These standards are designed for extended-range applications, like connecting hundreds of remote Internet of Things (IoT) sensors and devices. They’re also used in rural areas.

Because they operate in lower-frequency ranges, they’re able to offer extended operational ranges. They can carry signals for miles, but have a low throughput of 350 Mbps.
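The range advantage of lower frequencies follows directly from free-space path loss, which grows with frequency. The sketch below is an illustration, not from the article: it uses the standard free-space path loss formula, and the 1 km link distance is an arbitrary assumption:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Same 1 km link at the bands mentioned above (purely illustrative):
for label, f in [("~900 MHz (802.11ah band)", 900e6),
                 ("5 GHz (802.11ac band)", 5e9),
                 ("60 GHz (802.11ad/ay band)", 60e9)]:
    print(f"{label}: {fspl_db(1000, f):.1f} dB over 1 km")
```

A 5 GHz signal loses roughly 15 dB more than a 900 MHz signal over the same free-space distance, and 60 GHz loses far more still – which is why the low-frequency IoT standards reach miles while 60 GHz stays short-range.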

Read full article

Public vs Private Clouds: How Do You Choose?

An Intel Security survey of 2,000+ IT professionals last year revealed several fascinating findings about public and private cloud adoption. For starters, within the next 15 months, 80% of all IT budgets will have some portion dedicated to cloud solutions.

Many enterprises are starting to rely on public and private clouds for a few simple reasons:

  • Most good public and private cloud providers regularly and automatically back up data they store so it is recoverable if an incident occurs.
  • Tasks like software upgrades and server equipment maintenance become the responsibility of the cloud provider.
  • Scalability is virtually unlimited; you can grow rapidly to meet business needs, and then scale back just as quickly if that need no longer exists.
  • Upfront costs are lower, since cloud computing eliminates the capital expenses associated with investing in your own space, hardware and software.

But before you decide you are moving to the cloud, you should know the differences between public and private clouds. Making a choice between public and private clouds often depends on the type of data you’re creating, storing and working with.

Public Clouds Defined

The public cloud got its kick start by hosting applications online – today, however, it has evolved to include infrastructure, data storage, etc. Most people do not realise that they have been benefitting from the public cloud for years (before most of us even referred to “public and private clouds”). For example, any time you access your online banking tool or log in to your Gmail account, you’re using the public cloud.

In a public cloud, data center infrastructure and physical resources are shared by many different enterprises, but owned and operated by a third-party services provider (the cloud provider). Your company’s data is hosted on the same hardware as the data from other companies. The services and infrastructure are accessible online. This allows you to quickly scale resources up and down to meet demand. As opposed to a private cloud, public cloud infrastructure costs are based on usage. When dealing with the public cloud, the user/customer typically has no control (and very limited visibility) regarding where and how services are hosted.

Private Clouds Defined

In a private cloud, infrastructure is either hosted at your own onsite data center or in an environment that can guarantee 100% privacy (through a multi-tenant data center or a private cloud provider). In these third-party environments, the components of a private cloud (computing, storage and networking hardware, for example) are all dedicated solely to your organization so you can customize them for what you need. In some cases, you’ll even have choices about what type of hardware is used. No other organization’s data will be hosted using the equipment you use.

With an internal private cloud (one hosted at your own data center), your enterprise incurs the capital and operating costs associated with establishing and maintaining it. Many of the benefits listed earlier about choosing cloud services don’t apply to internal private clouds, especially since you serve as your own private cloud provider.

In organizations and industries that require strict security and data privacy, private clouds usually fit the bill because applications can be hosted in an environment where resources aren’t shared with others; this allows higher levels of data security and control as compared to the public cloud.

What’s a Hybrid Cloud?

Enterprises also have the opportunity to take advantage of both the public and private cloud by implementing a hybrid cloud, which combines the two.

For example, the public cloud can be used for things like web-based email and calendaring, while the private cloud can be used for sensitive data.

Read full article

The Impact of Patch Cord Types on the Network

Data Centers and the networks they support have expanded to be an integral part of every business. The software applications that keep mission-critical operations up and running in highly redundant, 24/7 environments rely on highly engineered structured cabling systems to connect the cloud to every user. Structured cabling is the foundation that supports data centers.

Although structured cabling is not as sexy as diesel-driven UPS systems or adiabatic cooling systems, it plays a huge role in supporting the cloud. One important component of structured cabling is often overlooked: patch cords.

Oftentimes, patch cords are purchased haphazardly and installed at the last minute. But the right patch cord type can improve the performance of your network. The proper design, specification, manufacturing, installation and ongoing maintenance of patch cord systems can help ensure that your network experiences as much uptime as possible.

A patch cord problem can wreak havoc on an enterprise, from preventing an airline customer from making a necessary reservation change to keeping a hotel guest from getting work done while on business travel.

What Drives Data Growth?

Explosive data growth due to social media, video streaming, IoT, big data analytics and changes in the data center environment (virtualization, consolidation and high-performance computing) means one thing: Data traffic is not only growing in bandwidth, but also in speed.

Another essential point is network design. Today’s network design, such as a leaf-spine fabric, makes the network flatter, which lowers latency – this makes the Ethernet and corresponding patch cord types incredibly important.

The Definition of a Patch Cord

A patch cord is a cable with a connector on both ends (the type of connector is a function of use). A fiber patch cord is sometimes referred to as a “jumper.”

Patch cords are part of the local area network (LAN), and are used to connect network switches to servers, storage and monitoring portals (traffic access points). They are considered to be an integral part of the structured cabling system.

Copper patch cords are made with either solid or stranded copper; due to potential signal loss, their lengths are typically kept short.

A fiber patch cord is a fiber optic cable that is capped at both ends with connectors. The caps allow the cord to be rapidly connected to an optical switch or other telecommunications/computer device. The fiber cord is also used to connect the optical transmitter, receiver and terminal box.

Read full article

Network Cables: How Cable Temperature Impacts Cable Reach

There is nothing more disheartening than making a big investment in something that promises to deliver what you require – only to find out once it is too late that it is not performing according to expectations. What happened? Is the product not adequate? Or is it not being utilised correctly?

Cable Performance Expectations

This scenario holds true with category cable investments as well. A cable that cannot fulfil its 100 m channel reach (even though it is marketed as a 100 m cable) can derail network projects, increase costs, cause unplanned downtime and call for lots of troubleshooting (especially if the problem is not obvious right away).

High cable temperatures are sometimes to blame for cables that don’t perform up to the promised 100 m. Cables are rated to transmit data over a certain distance up to a certain temperature. When the cable heats up beyond that point, resistance and insertion loss increase; as a result, the channel reach of the cable often needs to be de-rated in order to perform as needed to transmit data.

Many factors cause cable temperatures to rise:

  • Cables installed above operational network equipment
  • Power being transmitted through bundled cabling
  • Uncontrolled ambient temperatures
  • Using the wrong category cabling for the job
  • Routing of cables near sources of heat

In Power over Ethernet (PoE) cables – which are becoming increasingly popular to support digital buildings and IoT – as power levels increase, so does the current level running through the cable. The amount of heat generated within the cable increases as well. Bundling makes temperatures rise even more; the heat generated by the current passing through the inner cables can’t escape. As temperatures rise, so does cable insertion loss.

Testing the Impacts of Cable Temperature on Reach

To assess this theory, I created a model to test temperature characteristics of different cables. Each cable was placed in an environmental chamber to measure insertion loss with cable temperature change. Data was generated for each cable; changes in insertion loss were recorded as the temperature changed.

The information gathered from these tests was combined with connector and patch cord insertion loss levels in the model below to determine the maximum length that a typical channel could reach while maintaining compliance with channel insertion loss.

This model represents a full 100 m channel with 10 m of patch cords and an initial permanent link length of 90 m. I assumed that the connectors and patch cords were in a controlled environment (at room temperature, and insertion loss is always the same). Permanent links were assumed to be at a higher temperature of 60 degrees C (the same assumption used in ANSI/TIA TSB-184-A, where the ambient temperature is 45 degrees C and temperature rise due to PoE current and cable bundling is 15 degrees C).

Using the data from these tests, I was able to reach the full 100 m length with Belden’s 10GXS, a Category 6A cable. I then modeled Category 6 and Category 5e cables from Belden at that temperature, and wasn’t able to reach the full 100 m. Why? Because the insertion loss of the cable at this temperature exceeded the insertion loss performance requirement.
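The de-rating arithmetic described above can be sketched as follows. This is a simplified illustration under assumed numbers – the insertion-loss budget, per-metre loss and temperature coefficient here are placeholders for illustration, not Belden’s measured data:

```python
# Illustrative sketch of the channel-reach model described above: a fixed
# channel insertion-loss (IL) budget, connector/patch-cord loss held at
# room temperature, and cable loss that grows with temperature.
# All numeric values below are assumptions, not measured data.

CHANNEL_IL_BUDGET_DB = 26.0   # assumed channel IL limit at the test frequency
FIXED_IL_DB = 3.0             # assumed connectors + 10 m patch cords, room temp
CABLE_IL_DB_PER_M_20C = 0.22  # assumed cable IL per metre at 20 deg C
DERATE_PER_DEGC = 0.004       # assumed +0.4% IL per deg C above 20

def max_link_length_m(cable_temp_c: float) -> float:
    """Longest permanent link that still fits within the channel IL budget."""
    il_per_m = CABLE_IL_DB_PER_M_20C * (1 + DERATE_PER_DEGC * (cable_temp_c - 20))
    return (CHANNEL_IL_BUDGET_DB - FIXED_IL_DB) / il_per_m

print(f"Max permanent link at 20 C: {max_link_length_m(20):.1f} m")
print(f"Max permanent link at 60 C: {max_link_length_m(60):.1f} m")
```

With these placeholder numbers, the link comfortably exceeds 90 m at room temperature but falls to roughly 90 m at 60 °C – the same effect that kept the modeled Category 6 and 5e cables from reaching the full 100 m channel.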

Read full article

Which is Right for You: 40G vs 100G Ethernet?

Companies like Google, Amazon, Microsoft and Facebook started their migration toward 100G in 2015 – and smaller enterprise data centers are now following suit. Plenty of these new 100G deployments adopt a singlemode fiber solution for longer reach that best suits their hyperscale data center architectures.

Comparing 40G vs. 100G optical transceivers currently available in the market, both have been developed and cost optimized for their designated reach and applications.

While weighing 40G vs. 100G Ethernet, and deciding which migration path makes more sense for your organization, here are some facts you should know:

  • Switches with 10G SFP+ ports, or 40G (4x 10G) QSFP+ ports, can support 10G server uplinks
  • Switches with 25G SFP28 ports, or 100G (4x 25G) QSFP28 ports, can support 25G server uplinks
  • 100G switches have already been massively deployed in cloud data centers; the cost difference between 40G and 100G is small
  • Most new 100G transceivers can easily support 40G operation
  • Some non-standard 100G singlemode transceivers are designed and optimized for cloud data center deployment; product availability for other environments is limited for the short term
  • Traditional Ethernet networking equipment giants Cisco and Arista have already started selling switch software on a standalone basis that goes into networking devices (such as a “white box” solution with merchant switch ASICs); this move accelerates hardware and software disaggregation and lowers overall ownership costs for end-users
  • According to Dell’Oro, 100G switch port shipments will surpass 40G switch port shipments in 2018.

When considering system upgrades from 10G, it’s essential to understand that 40G will also be needed to support the legacy installed base with 10G ports; 40G/100G switch port configurability will certainly accelerate 100G adoption in the enterprise market.

In 2017, 100G Ethernet is already going mainstream – not just in hyperscale cloud data centers. Next-wave 200G/400G Ethernet will soon hit the market; standards bodies have already initiated a study group for 800G and 1.6T Ethernet to support bandwidth requirements beyond 2020.

Wrapping Up the Road to 800G

We’re almost finished with our blog series covering the road to 800G Ethernet. Subscribe to our blog to follow this series, as well as receive our other content each week. As part of this blog series, we’ve covered the following topics:

 

Read full article

Cabling Demands for Digital Buildings

2017 was predicted to be the year of the digital building, and there has certainly been progress in this direction. In fact, according to Deloitte, sensor deployment in commercial buildings could potentially grow by 79% between 2015 and 2020.

Support for Internet of Things (IoT) is growing, bringing standalone building systems onto one platform. As all these systems and devices are being connected on a single IP network, they can be integrated to gather data, make automatic adjustments and provide intelligence and analytics for informed decision-making to reduce operating costs and energy use, increase occupant satisfaction, improve safety and reduce time spent on troubleshooting and maintenance.

In some cases, existing infrastructure is already being put to the test due to cloud adoption. As augmented and virtual reality move into the workplace – whether office settings, hospitals, hospitality environments or educational institutions – and more devices join the network, demands placed on infrastructure will become more intense. (And even though this newer technology is not widely deployed yet – check back in a few years.)

What demands do these digital buildings place on cabling infrastructure? A well-designed, high-performance cabling infrastructure is what brings IoT and digital buildings to life. All of the data (and power, in most cases) required for these devices and applications travels via the network’s category cabling. Without it, devices wouldn’t be able to communicate with each other, gather and relay important information or be controlled and adjusted remotely.

As digital buildings take over, it’s important to keep in mind the demands they place on a structured cabling system.

Demand No. 1: More Power Needs

Digital building cabling will need to support Power over Ethernet (PoE). This cabling technology safely transmits power and data over a single standard network cable, allowing devices – cameras, lighting systems, wireless access points, etc. – to be deployed anywhere. This allows remote control and data collection on one infrastructure. As device complexity continues to increase, the amount of power these devices need also increases (up to 100W in some cases). Outdated cabling systems won’t be able to safely and successfully carry this power level.
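The heating effect behind this concern is essentially I²R loss in the conductors. Below is a rough, illustrative Python sketch – the loop resistance, supply voltage and four-pair power delivery are placeholder assumptions, not values from this article:

```python
# Rough I^2*R sketch of why higher PoE power levels heat the cable more.
# Assumed, illustrative values: 100 m of category cable with a nominal DC
# loop resistance per pair, power delivered over all four pairs.
LOOP_RESISTANCE_OHM = 12.5  # assumed DC loop resistance per pair, 100 m
VOLTAGE_V = 52.0            # assumed PSE output voltage
PAIRS = 4                   # assumed four-pair power delivery

def cable_heat_w(delivered_power_w: float) -> float:
    """Approximate power dissipated as heat inside the cable itself."""
    current_per_pair = delivered_power_w / (VOLTAGE_V * PAIRS)
    return PAIRS * current_per_pair**2 * LOOP_RESISTANCE_OHM

for p in (15.4, 30.0, 60.0, 90.0):  # nominal PoE power levels, in watts
    print(f"{p:5.1f} W delivered -> {cable_heat_w(p):.2f} W lost as cable heat")
```

Because heat scales with the square of the current, tripling the delivered power produces roughly nine times the heat in the cable – which is why outdated cabling that coped with early PoE can struggle at today’s higher power levels.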

 

Demand No. 2: Increased Temperatures

Running more power inside a network cable can increase the cable’s internal temperature. When cables get hotter, insertion loss increases. This can cause unplanned downtime and may ultimately damage the cable, hurting its long-term performance.

If cables are tightly packed in trays and pathways, temperatures could rise even more because the heat can’t dissipate. When a cable’s temperature exceeds the recommended level, it may need to be de-rated – which means it won’t reach the full length promised.

Read full article