The office security camera, the wristband that tracks our exercise, the car's navigation system, and dozens of applications installed on smartphones. Also traffic cameras, sensors that control wind farm operations, and devices that measure rainfall at a given time and place. Whether consumer products or infrastructure installations, these are innumerable intelligent devices connected to the Internet, and by extension to the cloud. And there will be more and more: some estimates put the number of endpoint devices encompassed by the Internet of Things (IoT) at between 20 and 30 billion as early as 2020.
Each of these devices sends the raw data it has collected to the cloud for processing; the cloud then frequently responds with instructions based on its analysis. This is an unfathomable amount of data, and it consumes a lot of bandwidth. Moreover, the round trips the data makes introduce latency, which can affect the proper functioning of the device. Another forecast: by 2025 the volume of this type of data will be 10,000 times greater than it is today.
The intelligent devices can do even more
This is why “edge computing” – computing at the edge or periphery of the cloud – has been in the works for some time. The idea behind edge computing is that intelligent devices can do even more; they don't have to be limited to collecting data and sending it to the cloud, rather they can analyze it themselves, at least to a certain degree. Furthermore, if it's not the artifacts themselves analyzing their data, it can be small nodes at the edge of the cloud, intermediaries between the end point artifacts and the giant data centers sitting at the heart of the cloud.
All of this would represent a shift from the current centralized system to a distributed one. The cloud's giant data centers will still be needed for their greater computing power, but edge computing will reduce their load: filtering and processing tasks will be assumed by the endpoint gadgets connected to the IoT. (It is interesting to note that only an estimated third of the data collected is really useful; the rest is redundant or has a short lifespan.)
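The filtering idea can be made concrete with a minimal sketch. The function below is a hypothetical illustration (not a real device API): an edge node applies simple delta filtering, forwarding a sensor reading to the cloud only when it differs meaningfully from the last value sent, so redundant samples are dropped before they ever consume bandwidth.

```python
def edge_filter(readings, threshold):
    """Forward a reading to the cloud only when it differs from the
    last forwarded value by more than `threshold` (delta filtering).
    Near-duplicate samples are dropped at the edge."""
    sent = []
    last = None
    for r in readings:
        if last is None or abs(r - last) > threshold:
            sent.append(r)
            last = r
    return sent

# Temperature samples, most of them redundant
readings = [20.0, 20.1, 20.05, 23.0, 23.1, 23.0, 19.5, 19.6]
uplink = edge_filter(readings, threshold=1.0)
print(uplink)                        # [20.0, 23.0, 19.5]
print(len(uplink) / len(readings))   # 0.375 -- only ~a third sent upstream
```

Even this toy rule cuts the upstream traffic to roughly a third of the raw samples, in line with the estimate above that only about a third of collected data is genuinely useful.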
Virtually all sectors involved in the IoT will benefit from advances in edge computing, but it will be particularly important to some. We can take the autonomous car as an example. A vehicle of this type generates 4 TB of data a day; its cameras alone send the cloud between 20 and 40 Mb per second. Reduced latency is essential for these cars: when a decision is being made, a millisecond can be fatal. A self-driving car cannot wait for the cloud to process information and return the results.
Or commercial aircraft. Each airplane generates 70 TB of data per flight hour, which is analyzed upon landing to determine what maintenance work is necessary. If that data could be processed by the aircraft itself, the time the plane spends grounded could be reduced.
And infrastructure facilities like electricity networks. From wind farm sensors to the smart meters installed in homes, if these devices could process their own data, the network's response to demand would be quicker and more efficient.
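As a purely illustrative sketch of that local responsiveness (the rule and its thresholds are invented for this example, not drawn from any real grid standard), a smart meter could react to grid conditions on its own, without waiting for a cloud round trip:

```python
def local_demand_response(frequency_hz, nominal=50.0, tolerance=0.2):
    """Hypothetical edge rule for a smart meter: decide locally how to
    react when measured grid frequency drifts outside tolerance,
    instead of waiting for an instruction from the cloud."""
    if frequency_hz < nominal - tolerance:
        return "shed_load"    # demand exceeds supply: switch off non-essential loads
    if frequency_hz > nominal + tolerance:
        return "absorb_load"  # supply exceeds demand: e.g. start charging a battery
    return "normal"

print(local_demand_response(49.7))  # shed_load
print(local_demand_response(50.0))  # normal
```

The decision happens on the device in microseconds; only a summary of what was done need travel to the data center afterwards.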
Edge computing is not an entirely rosy picture. For a full roll-out, further advances in 5G technology are needed to deliver greater speed and lower latency. Security is another significant concern: on the one hand, there will be less data traveling from one point to another; on the other, data sitting in connected devices is considered more vulnerable than data in well-guarded data centers. This is why security improvements are also expected, with technologies such as quantum cryptography protecting device data.
All these improvements are coming, and with them an “intelligence upgrade” for millions of devices and a small but important paradigm shift: until now, data traveled to where it was processed; from now on, processing will travel to the data.