Today there is a renewed focus on “The Edge”, as more companies begin to realize the benefits of decreasing the latency between their users (mobile devices, IoT, or web browsers) and their software (websites, AI platforms, video streams, etc.). But where exactly is the edge?
First, the multi-datacenter deployment was The Edge. Then the Cloud was the Edge. Then the Content Delivery Network was the Edge. Now it’s the mobile towers? The end user device? Where is the EDGE!?
“The Edge… there is no honest way to explain it because the only people who really know where it is are the ones who have gone over.”
- Hunter S. Thompson, Hell’s Angels (1966)
The Edge … basics
So, what is the edge, and what is edge computing? The word edge in this context means literal geographic distribution. Edge computing is computing that’s done at or near the source of the data, instead of relying on the cloud at one of a dozen data centers to do all the work. It doesn’t mean the cloud will disappear. It means the cloud is coming to you.
Edge 1.0: The Edge … of the Cloud
In the beginning, there was The Cloud™ and infinitely scalable multi-datacenter deployment at the click of a button. In the old days, applications were deployed to one or maybe two data centers (for disaster recovery). This meant your users had to travel whatever ‘internet distance’ was required to reach your server. Then, with the advent of cloud computing, deployments could spread across multiple data centers. At the time of writing, the three biggest contenders run 18 datacenter regions (AWS), 50 datacenter regions (Azure), and 17 datacenter regions (Google Cloud) respectively, each with its own subdivision into isolated networks and availability zones. These datacenters sit on the major backbone networks, the main connection points of the internet (called IXs, or Internet Exchanges; see https://blog.edgemesh.com/what-happens-when-the-internet-grows-100x-f1d2b0874bd6).
With this multi-region deployment model, your users are now perhaps 3–10 ‘internet hops’ away from your server. For example, from my house here in Los Angeles, CA to AWS.AMAZON.COM I get the following:
LA (internal network) -> LA (Time Warner Network) -> LA (Time Warner Upstream) -> Dallas (Time Warner Transit backbone) -> Unknown Transit Provider -> Ashburn VA (Amazon) [~ 100 milliseconds of latency ]
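You can reproduce a hop count like this yourself with traceroute. Below is a minimal Python sketch that parses traceroute’s default text output into a hop count and per-hop latencies; the sample trace here is hypothetical (illustrating the path above), not a real measurement.

```python
import re

def parse_traceroute(output: str):
    """Parse default traceroute output into (hop_count, best_latencies_ms).

    Assumes the standard 'N  host (ip)  t1 ms  t2 ms  t3 ms' line format;
    hops that don't respond ('* * *') come back as None.
    """
    latencies = []
    for line in output.splitlines():
        m = re.match(r"\s*(\d+)\s+(.*)", line)
        if not m:
            continue
        # Collect the probe timings on this hop line (e.g. '9.8 ms').
        times = [float(t) for t in re.findall(r"([\d.]+)\s*ms", m.group(2))]
        latencies.append(min(times) if times else None)
    return len(latencies), latencies

# Hypothetical sample trace (not real measurements):
sample = """\
 1  router.local (192.168.1.1)  1.2 ms  1.1 ms  1.3 ms
 2  lax.twc.example.net (68.0.0.1)  9.8 ms  10.1 ms  9.9 ms
 3  * * *
 4  iad.amazon.example.com (205.251.0.1)  98.7 ms  99.2 ms  98.9 ms
"""
count, hops = parse_traceroute(sample)
print(count, hops[-1])  # 4 98.7
```

Feeding it the output of `traceroute aws.amazon.com` on your own machine will show your own hop count and where the latency accumulates.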
Edge 2.0: The Edge … of the Content Delivery Network
At the dawn of the Web Age (1990s), it became clear that the web was going to be a global phenomenon. With this knowledge, Akamai developed a system for ‘caching’, or keeping copies of, web content spread across a global network of servers. They called this model a Content Delivery Network, or CDN. Over the last 20 years, CDNs have increased both in distribution (Points of Presence, or datacenters) and interconnectivity. Akamai, the original CDN company, connects to more than 1700 networks and operates more than 130 Points of Presence. Cloudflare by contrast operates approximately 150 PoPs, each with its own peering and deep network connectivity. Here we are likely just 1–5 hops from most users.
Looking at an Akamai example, I can see that when I visit Pinterest.com they are pulling some of their images from an Akamai cache at the IP address 23.57.41.248.
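You can find which edge IP a CDN hands you with a plain DNS lookup. Here is a small Python sketch; a CDN’s DNS is location-aware, so the addresses you get back will depend on where you ask from (the `localhost` call at the end is just a self-contained demonstration, not a CDN hostname).

```python
import socket

def resolve(host: str):
    """Return the IPv4 addresses DNS gives us for a hostname.

    For a CDN-fronted hostname, these are the cache nodes the CDN
    has picked for your network location.
    """
    _canonical_name, _aliases, addresses = socket.gethostbyname_ex(host)
    return addresses

# Demonstration against localhost; point this at a CDN-served
# hostname (e.g. an image host from your browser's network tab)
# to see which cache IP you are routed to.
print(resolve("localhost"))
```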
And mapping this via traceroute we can see we are indeed closer to the Edge (me):
LA (internal network) -> LA (Time Warner Network) -> LA (Time Warner Upstream) -> Dallas (Time Warner Transit backbone) -> Akamai [~ 30 milliseconds of latency ]
Edge 3.0: The Edge … of the network
The last edge is the actual last-mile network. Until now, this has been limited by the sheer economics of the problem: how does one deploy not hundreds of micro PoPs, but thousands?
Here we have only a few players. Mobile operators (like Deutsche Telekom’s MobiledgeX group, which Edgemesh runs on) operate inside the last-mile network. Meaning: they are 0 network hops away. The core idea of Edge 3.0 is to never leave the last-mile network (Time Warner, or your home network). For 5G this will become critical, as 5G bandwidth is nearly 20x that of 4G and the backbone will begin to crack under the strain.
With Edgemesh, the cached content is running on the edge device itself (the browser or a network-local Supernode)… so -1 network hops away :).
Edgemesh allows content to be moved onto the device from any Edgemesh compatible peer — be it a hosted Supernode or a nearby active browser. When the user requests the content, the Edgemesh client skips the network altogether and serves the asset from cache!
Below is an example. We can see the browser requests an asset (images), but the Edgemesh client was able to serve it from its local cache. The resulting request path looks like this:
My Browser (Laptop) -> cache [~ 1–4 milliseconds of latency ]
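Conceptually, the client above is following a cache-first policy: check the local cache first, and only fall back to the network on a miss. A minimal Python sketch of that policy (the names here are illustrative only, not Edgemesh’s actual API):

```python
class CacheFirstClient:
    """Illustrative cache-first fetch policy (not Edgemesh's real API).

    fetch_fn stands in for the network: it is only called on a cache miss.
    """

    def __init__(self, fetch_fn):
        self._fetch = fetch_fn
        self._cache = {}          # url -> asset bytes

    def preload(self, url, asset):
        """A peer (a Supernode or a nearby browser) pushed us this asset."""
        self._cache[url] = asset

    def get(self, url):
        if url in self._cache:    # hit: served locally, zero network hops
            return self._cache[url], "cache"
        asset = self._fetch(url)  # miss: go out to the network as usual
        self._cache[url] = asset
        return asset, "network"

# Usage: the logo was preloaded by a peer, so requesting it never
# touches the network; the un-preloaded asset falls back to fetch_fn.
client = CacheFirstClient(fetch_fn=lambda url: b"<fetched>")
client.preload("/img/logo.png", b"<logo bytes>")
print(client.get("/img/logo.png"))   # (b'<logo bytes>', 'cache')
print(client.get("/img/hero.png"))   # (b'<fetched>', 'network')
```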