Computing on the Fringes: How Edge Computing and Fog Computing Are Changing How We Use the IoT
The IoT is generating increasing amounts of data in need of processing. Are edge computing and fog computing the path forward?
Working in the tech world, it may be hard to keep up with the newest trends and rising domains in the industry. Take, for example, types of computing. How and where we process data is constantly evolving as we encounter limitations in hardware and connectivity.
The term "cloud computing" has established itself firmly in the vocabulary of most consumers. "Edge computing" can be seen as an extension of now-ubiquitous cloud computing and the Internet of Things (IoT). Meanwhile, "fog computing" sits between edge computing and cloud computing, though the differences between it and edge computing can be a little... foggy.
As we get closer to 2018, it's worth taking a closer look at what edge computing is, why it's gaining momentum, and whether it's something to keep an eye on for the year ahead.
Edge Computing: Moving Computation Away from the Core
In the most fundamental terms, edge computing moves intelligence and computation from centralized data servers in a cloud network to hardware on the fringes of that network. Instead of sensors collecting data at a location and sending it back to a centralized server for processing, local hardware computes on that data and sends only the results to the cloud, where the information is immediately available and actionable without further processing.
Odds are that you're more familiar with this concept than you realize. For example, most of us have encountered the term "edge server" before. This term usually describes a piece of local hardware available for computing. You may have seen edge servers in action in an industrial setting, processing information for factories, or in more general distribution and enterprise scenarios where a corporate head office utilizes "edge" processing at localized facilities.
Nokia's visualization of their Multi-access Edge Computing system. Image courtesy of Nokia.
There are a few benefits that make on-the-spot computation attractive:
- Information is processed closer to real time
- Processed data can be collected from various edge nodes in parallel
- Raw data no longer has to be sent over a network with limited bandwidth
- Data centers are relieved of the pressure of computing large quantities of raw data
- There is less reliance on the cloud network to extract meaningful information from data, since it's processed on the spot
- Sensitive data can be processed locally instead of being shared
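To make the bandwidth and latency benefits above concrete, here is a minimal Python sketch of what an edge node might do. The function name and sample values are hypothetical; a real deployment would read from an actual sensor and push the summary through a network client.

```python
import statistics

def summarize_readings(readings):
    """Reduce raw sensor samples to a compact summary on the edge node,
    so only the summary (not the raw stream) crosses the network."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "min": min(readings),
        "max": max(readings),
    }

# A minute of hypothetical temperature samples collected locally.
raw_samples = [21.0, 21.2, 20.9, 21.4, 35.0, 21.1]

summary = summarize_readings(raw_samples)
# The edge node uploads a handful of numbers instead of the raw stream.
print(summary)
```

However many samples the node collects, the payload sent upstream stays the same size, which is the whole point of pushing computation to the edge.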
The emergence of edge computing owes much to the availability and ubiquity of cloud computing, as well as to the increasingly accessible and economical IoT solutions now on the market. A number of affordable, easy-to-customize single-board computers, such as the Raspberry Pi, make edge computing all the more feasible.
There are predictions that edge computing will be a $6.72 billion industry by 2022, growing at 35.4% annually.
Fog Computing: Changing the Definition of "Edge"
The terms "fog computing" and "edge computing" are often used interchangeably, but there have been attempts to differentiate them as separate concepts. The key idea is that, in edge computing, data processing occurs on the same hardware that collects the data. In fog computing, on the other hand, a subset of nodes sends its data to a more localized central point for processing, and that point is in turn connected to a larger, overall central network.
Image courtesy of Cisco.
Both fog computing and edge computing have their benefits. Fog computing still removes much of the latency and bandwidth cost of sending large streams of raw data to a central network, but it doesn't require every set of sensors collecting data to be capable of processing its own data.
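As a rough illustration of that trade-off, the sketch below shows a fog node combining summaries already produced by nearby edge nodes, so the cloud receives one compact record rather than many streams. The node names, counts, and means are entirely hypothetical.

```python
def fog_aggregate(node_summaries):
    """Combine summaries from several nearby edge nodes at a fog node,
    forwarding one aggregate record to the cloud."""
    total = sum(s["count"] for s in node_summaries)
    # Weighted mean of the per-node means, weighted by sample count.
    mean = sum(s["mean"] * s["count"] for s in node_summaries) / total
    return {"count": total, "mean": mean}

# Hypothetical summaries already computed on two edge nodes.
node_a = {"count": 100, "mean": 21.5}
node_b = {"count": 300, "mean": 22.5}

aggregate = fog_aggregate([node_a, node_b])
print(aggregate)  # one compact record forwarded to the cloud
```

Note that the fog node never sees the raw samples, only each edge node's summary, which is exactly the division of labor the fog model describes.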
Alright, so now that we know what edge and fog computing are, how are they useful in the real world?
The future of automated driving relies on data about traffic, obstacles, and dangers being computed in real time for quick decision making. In the event of a collision, a delay of even a second is more than enough to change the outcome.
Even though self-driving cars will likely still be connected to a cloud network to send, share, and receive information, processing information locally will be essential to real-time decision making. It is estimated that self-driving vehicles will collect and produce over 3 terabytes of data per hour. If we are hoping to have fleets of self-driving vehicles on the roads, this would put significant strain and risk on cloud computing networks.
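Taking the 3 terabytes per hour figure at face value (and assuming decimal units), a quick back-of-envelope calculation shows the sustained uplink a single vehicle would need in order to stream all of its raw data to the cloud:

```python
# Rough estimate only; assumes decimal terabytes and the 3 TB/hour
# figure quoted above.
TB = 10**12                      # bytes in a decimal terabyte
data_per_hour = 3 * TB           # bytes a single vehicle produces hourly
seconds_per_hour = 3600

bytes_per_second = data_per_hour / seconds_per_hour
megabits_per_second = bytes_per_second * 8 / 10**6
print(round(megabits_per_second))  # about 6,667 Mbit/s, i.e. ~6.7 Gbit/s
```

Nearly 7 gigabits per second, per vehicle, sustained, is far beyond what mobile networks can promise, which is precisely why most of that data has to be processed in the car itself.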
Uber's test automated cars are among those already on our streets. Image courtesy of Uber.
Fog computing can also be used to analyze and compute data about local traffic by collecting information from vehicles, processing it, and then sending it on to the overall cloud to share.
Bill Gates recently invested $80 million to develop a smart city from the ground up in Arizona. In a city where real-time data about traffic, pedestrians, lighting, and building health is collected and streamed for residents and visitors, edge and fog computing will surely be essential services. Edge computing nodes can simultaneously compute high-resolution information on weather, visibility, traffic congestion, and infrastructure health, and still share it quickly and efficiently over the cloud.
In many applications, some form of edge computing already exists. Its emergence as its own domain, however, will help refine it so that we can develop more complex solutions that are easier to integrate. In the meantime, watch for both edge computing and fog computing as emerging domains in the mainstream tech industry in 2018.