For many, the internet seems simple enough without having to understand the specifics of how it works; we can reach into our pockets and have access to a vast wealth of information within seconds. But how did we get here? The internet can be traced back to the 1960s, when the first prototype came to fruition through ARPANET, a network developed by ARPA (the Advanced Research Projects Agency), a division of the United States Department of Defense. ARPANET was one of the first networks to adopt the TCP/IP protocol suite, which became a cornerstone of the internet as we know it. However, it would take over 20 years before the network was seen as anything more than a means of government and research communication. It wasn’t until the late 1980s that the first ISP (Internet Service Provider) came to be, and commercial use wasn’t approved until the early 1990s.

By the mid-90s, the remaining restrictions on commercial traffic had ended, and this is where things really began to take off. By this point, the public telephone network had had decades to flourish and most homes in North America had access to a landline, so it became the method of internet access for the next few years. Dial-up internet access used audio communication to relay traffic, similar to how you’d pick up the phone and talk to a family member. A modem would take the digital data from your computer, modulate that information into an audio signal, and send it down the phone line to a receiving modem. The receiving modem would demodulate the signal back into digital data, and the computer at the other end would process the request. As you can imagine, this wasn’t exactly a fast process; in fact, dial-up had a maximum theoretical transfer speed of 56 Kbps. Of course, back then, our internet needs were so simple that this wasn’t as painful as it may sound.
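To make "modulation" concrete, here is a minimal sketch of the idea behind frequency-shift keying (FSK), one of the simplest schemes early modems used: each bit becomes a short burst of one of two audio tones. The tone frequencies and bit rate below are illustrative assumptions, not the parameters of any actual modem standard.

```python
import numpy as np

SAMPLE_RATE = 8000   # samples per second (illustrative)
BIT_DURATION = 0.01  # seconds per bit (illustrative, ~100 bps)
FREQ_0 = 1070        # Hz tone representing a 0 bit
FREQ_1 = 1270        # Hz tone representing a 1 bit

def modulate(bits):
    """Turn a bit sequence into an audio waveform (one tone per bit)."""
    t = np.arange(0, BIT_DURATION, 1 / SAMPLE_RATE)
    tones = [np.sin(2 * np.pi * (FREQ_1 if b else FREQ_0) * t) for b in bits]
    return np.concatenate(tones)

def demodulate(signal):
    """Recover bits by checking which tone dominates each bit-long chunk."""
    samples_per_bit = int(SAMPLE_RATE * BIT_DURATION)
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    bits = []
    for i in range(0, len(signal), samples_per_bit):
        chunk = signal[i:i + samples_per_bit]
        # Correlate the chunk against each reference tone.
        power_0 = abs(np.dot(chunk, np.sin(2 * np.pi * FREQ_0 * t[:len(chunk)])))
        power_1 = abs(np.dot(chunk, np.sin(2 * np.pi * FREQ_1 * t[:len(chunk)])))
        bits.append(1 if power_1 > power_0 else 0)
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
audio = modulate(data)
print(demodulate(audio))  # -> [1, 0, 1, 1, 0, 0, 1]
```

Real modem standards layered far more sophistication on top of this (and 56 Kbps modems used very different encoding), but the modulate-then-demodulate round trip above is the core of what happened on every dial-up call.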

Some years later, in the 2000s, broadband internet began to replace dial-up. These technological upgrades allowed a higher volume of data to be transferred more quickly over a variety of connection types, the most popular being DSL (Digital Subscriber Line), cable internet access, and satellite internet access. As is the case with all things technology, each option had its limitations:

  • DSL required you to live within approximately 3 miles of a DSLAM (Digital Subscriber Line Access Multiplexer), making it tough to service those in rural areas.
  • Cable could relay data further than DSL, but because the infrastructure was entirely new, priority was given to metropolitan and suburban areas. Even today, many rural households still do not have access to cable.
  • Satellite is dependent on, you guessed it, satellites, geostationary ones to be exact. This means that latency is typically poor and speed is limited, making it tough for those who like to game or want to depend on VoIP (Voice over Internet Protocol); the quick calculation below shows why. On top of this, poor weather conditions can cause interrupted, or entirely disconnected, service.
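That latency problem isn't arbitrary: a geostationary satellite sits roughly 35,786 km above the equator, and every request must travel up and down that distance twice (once for the request, once for the response). A back-of-the-envelope calculation of the physics alone, before any equipment or routing overhead, looks like this:

```python
# Minimum round-trip latency to a geostationary satellite, physics only.
ALTITUDE_KM = 35_786           # geostationary orbit altitude above the equator
SPEED_OF_LIGHT_KM_S = 299_792  # km per second

# Request: ground -> satellite -> ground; response: same path back.
round_trip_km = 4 * ALTITUDE_KM
latency_ms = round_trip_km / SPEED_OF_LIGHT_KM_S * 1000
print(f"{latency_ms:.0f} ms")  # ~477 ms before any equipment delay
```

Once real-world network overhead is added, geostationary connections typically land well above half a second of latency, compared to the tens of milliseconds most wired connections see, which is why real-time applications like gaming and VoIP suffer.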

By now, you may have noticed a common trend: rural customers had it tough. A solution was born with fixed wireless networks. The core of a WISP (Wireless Internet Service Provider) is connected to an internet exchange point via a fiber circuit, but this is where the physical connection ends. From the core, internet access is transmitted out across the WISP’s network through directional radio antennas. These antennas are installed on elevated platforms such as radio towers, tall buildings, and water towers, and then finely tuned to each other to ensure the wireless link operates with the highest possible availability and efficiency. The radio frequency used depends on the technology chosen and the availability of either licensed or unlicensed spectrum. This means that with the right planning, even WISPs can compete with your average cable-only ISP.
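Tuning a point-to-point link is ultimately a link-budget exercise: the signal the far receiver hears is the transmit power plus the antenna gains at both ends, minus the free-space path loss over the distance. The sketch below uses the standard free-space path loss formula; the transmit power, antenna gains, and distance are illustrative assumptions rather than the specifications of any particular radio.

```python
import math

def free_space_path_loss_db(distance_km, frequency_mhz):
    """Standard FSPL formula for distance in km and frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(frequency_mhz) + 32.44

# Illustrative 10 km link in the unlicensed 5.8 GHz band.
tx_power_dbm = 25      # radio transmit power (assumed)
antenna_gain_dbi = 30  # dish gain, applied at both ends (assumed)
distance_km = 10
frequency_mhz = 5800

fspl = free_space_path_loss_db(distance_km, frequency_mhz)
rx_signal_dbm = tx_power_dbm + 2 * antenna_gain_dbi - fspl

print(f"Path loss: {fspl:.1f} dB")                  # ~127.7 dB
print(f"Received signal: {rx_signal_dbm:.1f} dBm")  # ~-42.7 dBm
```

Whether that received signal is actually usable depends on the noise floor at the site, which is where the spectrum planning discussed later comes into play.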

The availability doesn’t end once a single wireless connection is made between two points, though. With this method, you can connect multiple tower sites together using wireless backhauls. Both sides of the connection attach to a local switch or router, which can in turn lead out to further wireless connections, such as subscriber-facing APs (Access Points) or another half of a wireless backhaul. Effectively, you can daisy-chain an entire service area together with wireless technology. And because routers are so configurable, you can also set up back-up links so that if one goes down (whether from equipment failure or wireless noise), traffic can automatically be routed through an alternate wireless path.
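In practice, that failover behavior comes from dynamic routing protocols such as OSPF, but the underlying idea is just shortest-path routing over a graph of tower sites. Here's a minimal sketch of the concept using the networkx library and a made-up topology; it isn't a router configuration, just an illustration of how an alternate path takes over when a backhaul fails.

```python
import networkx as nx

# Hypothetical WISP topology: edges are wireless backhauls,
# weighted by link cost (lower is preferred).
net = nx.Graph()
net.add_edge("core", "tower_a", weight=1)
net.add_edge("tower_a", "tower_b", weight=1)
net.add_edge("core", "tower_c", weight=2)   # back-up route
net.add_edge("tower_c", "tower_b", weight=2)

# Normal operation: traffic to tower_b goes via tower_a.
print(nx.shortest_path(net, "core", "tower_b", weight="weight"))
# -> ['core', 'tower_a', 'tower_b']

# The core-to-tower_a backhaul fails (equipment failure or noise).
net.remove_edge("core", "tower_a")

# Traffic automatically re-routes through the back-up path.
print(nx.shortest_path(net, "core", "tower_b", weight="weight"))
# -> ['core', 'tower_c', 'tower_b']
```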

With the acceleration of fiber technology, WISPs are beginning to take things one step further by launching hybrid network models that combine the benefits of fiber and wireless. When using only wireless backhauls, there are scalability limitations, as the wireless spectrum in an area may not provide the bandwidth needed to support a growing customer base. With a hybrid model in place, fiber can be run to a specific network site and traffic routed wirelessly from there. With fiber run strategically to key sites, you can eliminate potential bottlenecks and improve reliability at the same time, since fiber isn’t subject to the interference and spectrum constraints that wireless links are.

Deciding whether to operate as a classic WISP or as a hybrid model depends on a few key factors. The first, and arguably the most important, is cost. Simply put, fiber is expensive. The initial deployment cost to run fiber to a network site can set you back hundreds of thousands of dollars. By comparison, the deployment of a wireless link is closer to tens of thousands, though this amount can vary depending on the technology used, licensing (if applicable), and whether the supporting infrastructure (tower, back-up generator, etc.) already exists. However, when you factor in scalability and reliability, the choice may not be so clear-cut. As we’ve already touched on, wireless links are limited by spectrum availability, and this applies to both licensed and unlicensed deployments. Operating a fixed wireless network isn’t as simple as purchasing the biggest and best backhaul you can find; even the nodes with the highest EIRP (Effective Isotropic Radiated Power) capability are subject to interference without the right spectrum planning. Having fiber run to a network site can truly be a means of future-proofing your network.
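To see why raw EIRP isn't enough, consider that what a link can actually carry is driven by the signal-to-noise ratio at the receiver, and the noise floor depends on how congested the chosen channel is. The numbers below are illustrative assumptions, but they show how the same radio can perform very differently on a clean versus a crowded channel:

```python
# EIRP is transmit power plus antenna gain, minus feed-line loss.
tx_power_dbm = 27
antenna_gain_dbi = 23
cable_loss_db = 1
eirp_dbm = tx_power_dbm + antenna_gain_dbi - cable_loss_db  # 49 dBm

# Assume the link delivers a -60 dBm signal to the far receiver.
rx_signal_dbm = -60

# Same signal, two very different channels (illustrative noise floors).
for channel, noise_floor_dbm in [("clean", -95), ("congested", -72)]:
    snr_db = rx_signal_dbm - noise_floor_dbm
    print(f"{channel} channel: SNR = {snr_db} dB")
# clean channel: SNR = 35 dB     -> supports dense modulation (high throughput)
# congested channel: SNR = 12 dB -> forced down to robust, slower modulation
```

In other words, a high-EIRP radio on a noisy channel can still underperform a modest radio on a clean one, which is exactly why spectrum planning matters.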

From the customer’s perspective, they want the fastest and the best. Most don’t know or care about the deployment cost of fiber versus wireless; all they know is whether their ISP can provide the service they want. With an ever-increasing population, and a growing number of people working remotely, a lack of reliability is not something they can accept. And reliability isn’t simply a question of whether there’s an outage: if their speed drops below a certain threshold and they’re unable to do what they want, customers will consider their internet an unreliable service. Internet use is continually increasing, and it’s not as simple as someone watching more hours of Netflix than they did two years ago; the actual required speed for many of these services has increased too. For example, SD (standard definition) used to be the only streaming quality available, and with a recommended speed of a consistent 1 Mbps, it was doable for most households. Today, the likes of Netflix offer quality up to 4K/Ultra HD (UHD), which comes with a recommended speed of 15 Mbps. If you live in the city, this may not sound resource-heavy, but for many rural communities, it is. With a hybrid fiber and wireless solution, rural customers can enjoy faster, and more reliable, experiences.
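Those per-stream numbers add up quickly once you model a whole household. A quick tally, using the streaming figures above plus assumed rates for a couple of other common activities, shows how an ordinary evening can exhaust a connection that would have felt generous a few years ago:

```python
# Concurrent evening usage for a hypothetical remote-working household.
# Streaming rates follow the recommendations cited above; the video-call
# and gaming figures are rough assumptions for illustration.
household_mbps = {
    "4K stream (living room)": 15,
    "HD stream (bedroom)": 5,
    "video conference": 4,
    "online gaming + updates": 3,
}

total = sum(household_mbps.values())
print(f"Concurrent demand: {total} Mbps")  # 27 Mbps

# On a plan delivering ~10 Mbps of real-world throughput, several of
# these activities simply cannot run at the same time.
```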

We’ve discussed a lot about hybrid fiber-wireless approaches for WISPs, but how does Sonar play into this solution? Sonar is intended to act as a one-stop solution for all your needs, ranging from billing through to network monitoring. Whether you’re implementing DHCP services, RADIUS accounts, or inline devices, you can configure multiple external devices to manage your network, all from within the Sonar interface. Taking this further, you can also monitor the devices on your network with our Poller. The Sonar Poller is an open-source application installed on a virtual machine within your network (it should be hosted locally rather than outside your network, e.g., on DigitalOcean, AWS, or Azure). With this feature configured, you can track both SNMP and ICMP data, allowing your support staff to view device statistics on the same platform where they receive trouble tickets. To read more on the specifics of the Sonar Poller, click here.
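To give a feel for what SNMP polling involves under the hood, here's a minimal sketch using the pysnmp library's classic synchronous API to read a device's uptime. This is a generic illustration of SNMP, not how the Sonar Poller is implemented; the device address and community string are placeholders.

```python
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

# Poll sysUpTime (a standard MIB-II OID) from a hypothetical device.
error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public"),                          # placeholder community
        UdpTransportTarget(("192.0.2.10", 161)),          # placeholder device IP
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")),  # sysUpTime.0
    )
)

if error_indication:
    print(f"Poll failed: {error_indication}")
else:
    for oid, value in var_binds:
        print(f"{oid} = {value}")
```

A monitoring system runs queries like this on a schedule across every device in the network, turning the responses (and any ICMP reachability failures) into the statistics and alerts your support staff see.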

We also recognize that many Internet Service Providers may already have existing means of monitoring their network, such as Preseem. By connecting Preseem to your Sonar instance, they’re able to query the data in your system, such as accounts, plans, and network sites. In doing this, they can measure QoE (Quality of Experience) metrics for each site and customer. The setup is brief, as outlined in our Knowledge Base article, and once configured, the integration can be left to pull data between the services automatically on your behalf.

By taking a hybrid approach to your network, you can benefit from both the speed fiber offers and the reach wireless provides. To learn more about how Sonar can benefit your network operations, contact us or schedule a demo!