Understanding network redundancy


Within a network, primary paths may become unavailable due to unforeseen outages or for other reasons such as malware attacks, poor cabling, imperfect design, malfunctioning equipment or faulty software.

These issues can occur at any point within the network, and even a single point of failure can have catastrophic outcomes.

If and when this occurs, a network redundancy system provides an alternative path that can be activated to minimise downtime and keep the network functioning at an acceptable level.


Two types of network redundancy

Fault tolerance is a form of hardware redundancy. In simple terms, two or more hardware systems operate side by side, performing identical tasks, with one system mirroring the applications of the other. When a hardware failure occurs in the main or primary system, the secondary system, which has been running an identical application in step with the primary, takes over with no loss of service or downtime. Fault tolerant redundancy systems are employed within data centres to ensure hardware is backed up by identical systems.
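To make the idea concrete, here is a minimal sketch in Python (illustrative only, not any vendor’s actual implementation): every operation is applied to both nodes in lockstep, so the secondary already holds the in-memory state and can serve requests the moment the primary fails.

```python
# Minimal sketch of an active-passive fault tolerant pair (illustrative only).
# Every write is applied to the primary and mirrored on the secondary in
# lockstep, so a failover loses no in-memory state.

class Node:
    def __init__(self, name):
        self.name = name
        self.state = {}        # in-memory application state
        self.healthy = True

    def apply(self, key, value):
        self.state[key] = value


class FaultTolerantPair:
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def write(self, key, value):
        # Mirror the operation on both nodes before acknowledging it.
        self.secondary.apply(key, value)
        self.primary.apply(key, value)

    def read(self, key):
        # Fail over instantly: the secondary already holds identical state.
        node = self.primary if self.primary.healthy else self.secondary
        return node.state.get(key)


pair = FaultTolerantPair(Node("primary"), Node("secondary"))
pair.write("order-1042", "confirmed")
pair.primary.healthy = False            # simulate a hardware failure
print(pair.read("order-1042"))          # still "confirmed", with no downtime
```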

A fault tolerant redundancy system requires hardware that can detect component faults in real time and switch to the secondary system immediately, without any lag. The key benefit of this form of network redundancy is that the in-memory application state is preserved and access to other applications is retained without downtime. Fault tolerant redundancy is not 100% foolproof, however: the secondary system is vulnerable to the same failure as the primary system, which may still result in server downtime. This form of network redundancy is employed in most high-stakes environments, such as hospitals and manufacturing facilities, where network failure may result in loss of life. It also makes sense in large, IT-reliant environments such as data centres, mitigating the loss that may occur if a server suffers a power outage or human error compromises the system. The primary drawbacks of fault tolerant redundancy are that it offers no protection against software failure, as it is a hardware failure contingency system, and that its cost needs to be factored into any decision.

High availability is an alternative approach to network redundancy. It is a software-based system that employs a cluster of servers monitoring each other, with failure protocols coded into the software. When a server fails, a backup server takes over and restarts the applications that were running on the failed server. A high availability redundancy system is less complex and less expensive than a fault tolerant system because it requires less hardware infrastructure. The trade-off is downtime: critical data and applications can be offline and unavailable while the system restarts. The systems are usually robust enough to recover the data that was in use on the compromised server at the time of the failure. High availability redundancy systems also offer some protection against software failure and hacking intrusions, because the backup servers run independently of each other.
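Again as a rough sketch rather than a production cluster manager, the high availability pattern can be illustrated in a few lines of Python with hypothetical node and application names: each node is health-checked, and when one stops responding its applications are restarted on a surviving node, which is where the brief downtime comes from.

```python
# Illustrative high availability loop (not a production cluster manager).
# Nodes are health-checked; when one fails, its applications are restarted
# on a surviving node, which is the source of the brief downtime window.

cluster = {
    "node-a": {"healthy": True, "apps": ["web", "billing"]},
    "node-b": {"healthy": True, "apps": ["reports"]},
}

def health_check(node):
    # In practice this would be a network probe (ping, HTTP check, heartbeat).
    return cluster[node]["healthy"]

def failover(failed_node):
    survivors = [n for n in cluster if n != failed_node and health_check(n)]
    if not survivors:
        raise RuntimeError("no healthy nodes left to take over")
    target = survivors[0]
    moved = cluster[failed_node]["apps"]
    cluster[failed_node]["apps"] = []
    for app in moved:
        # Restarting the application elsewhere takes time.
        print(f"restarting {app} on {target}")
        cluster[target]["apps"].append(app)

# Simulate a failure of node-a and let the monitor react.
cluster["node-a"]["healthy"] = False
for node in list(cluster):
    if not health_check(node) and cluster[node]["apps"]:
        failover(node)
print(cluster)
```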

Which approach is right for you is best answered by the nature of your business or service. Keep in mind that fault tolerant systems are excellent at safeguarding against equipment failure, but the downside is the cost. The first step when considering a network redundancy plan is to review existing infrastructure and formulate a network strategy. It may sound rudimentary; however, even the most sophisticated software redundancy system is of little use if the power source is compromised and the electricity supply fails. A quality data centre must have backup power systems that engage instantly in the case of a power outage. Properly maintained systems will ensure servers can switch from mains electricity to backup generator power without compromising data or losing access to the servers’ applications.

When the business is a software-driven service, high availability redundancy is more than likely a better fit. The required architecture is less expensive to implement and maintain and doesn’t require every piece of IT hardware to be replicated. The system will encounter downtime that may affect the bottom line; however, where that downtime does not create a life or death situation, as it could in a hospital or operating theatre, it should prove the more profitable system over the long term.

Optimising network redundancy

When a business or organisation has made the investment in a network redundancy strategy, that does not mean it is a set-and-forget option. All valuable data should be backed up and stored in an alternative location, so that in the event of a failure the data can be easily accessed. Larger companies may employ multiple data storage facilities to mitigate disruption to their business and customers in the event of a major disaster. Frequently testing backup systems and ensuring they are regularly maintained is essential. Well-run data storage centres will run tests and have a set of contingency plans in place to ensure the system is sufficiently robust and can pivot when necessary.
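One simple way to make that testing routine is an automated restore drill. The sketch below assumes hypothetical backup locations: it restores the newest backup file to a scratch directory and verifies it against a checksum, so a corrupt or incomplete backup is discovered long before a real outage.

```python
# Minimal backup verification drill (illustrative, with hypothetical paths).
# Restores the newest backup to a scratch directory and compares checksums,
# so a corrupt or incomplete backup is noticed before it is actually needed.
import hashlib
import pathlib
import shutil

BACKUP_DIR = pathlib.Path("/backups/critical-db")      # hypothetical location
RESTORE_DIR = pathlib.Path("/tmp/restore-drill")

def checksum(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

def latest_backup():
    backups = sorted(BACKUP_DIR.glob("*.dump"))
    if not backups:
        raise RuntimeError("no backups found -- the redundancy plan has a gap")
    return backups[-1]

def restore_drill():
    src = latest_backup()
    RESTORE_DIR.mkdir(parents=True, exist_ok=True)
    dst = RESTORE_DIR / src.name
    shutil.copy2(src, dst)
    if checksum(src) != checksum(dst):
        raise RuntimeError(f"restore of {src.name} failed verification")
    print(f"{src.name} restored and verified successfully")

if __name__ == "__main__":
    restore_drill()
```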

Cyber attacks are an increasing risk to most systems, and having response plans in place is critical. Being able to identify and respond to an attack quickly makes a real difference to network integrity and the ability to remain resilient.

The COVID-19 pandemic has exposed the fragility of some networks, and not just their redundancy aspects. Most businesses are heavily reliant on internet connections, and the abruptness of the pandemic moved many workers under stay-at-home health orders into home-based working environments. For many organisations, the IT infrastructure and cyber security controls were simply inadequate for the shift in working environments. The issues were magnified for those in the manufacturing sector, with unforeseen supply chain complications, and for those in health care, where increased patient intake placed an enormous strain on existing digital infrastructure. The increased use of telehealth, and of medical professionals using cloud-based infrastructure to share research and outcomes, meant the demands on the security, reliability and uptime of IT systems became far more intense than generally experienced. Combined with a spike in online retail activity, the need for robust network redundancy systems is now a focus for everybody from large corporate entities and hospitals to mum and dad traders carving out new online markets.

OSA’s R&D laboratory not only provides a secure customer service response facility; its unique ability to build, test, configure and supply a network of any size for a plug-and-play outcome affords our clients a low-risk solution, designed and engineered for the maximum capability of their specific requirements. We are the only network distribution company in Australia with purpose-built research and development laboratories.

Here at OSA, we can help determine exactly what your business networking requirements are, so your network works at its optimum level while mitigating risk. When you are ready to upgrade your system or simply need some advice, contact us today for a no-obligation chat.
