Edge Computing

Edge computing is defined by Wikipedia as: “A distributed computing paradigm that brings computation and data storage closer to the location where it is needed, to improve response times and save bandwidth.” The first few lines of the article can be summarised as: “The origins of edge computing lie in content delivery networks that were created in the late 1990s to serve web and video content from edge servers that were deployed close to users. In the early 2000s, these networks evolved to host applications and application components at the edge servers, resulting in the first commercial edge computing services that hosted applications such as dealer locators, shopping carts, real-time data aggregators, and ad insertion engines. Modern edge computing significantly extends this approach through virtualization technology that makes it easier to deploy and run a wider range of applications on the edge servers.”

Interpretation of the definition

Since the Wikipedia page goes on to describe reduced latency and uses traditional data centers as a contrast to what edge computing isn’t, my interpretation is that edge computing sits somewhere between the (often single) data center serving the application and the client computing device. The ultimate degree of distribution would be pushing computation all the way to the client, but applications that run on the client device (or in the client’s browser) are typically just called client applications.

It seems that some definitions consider edge computing to include all devices, while others tie it specifically to IoT. In some models, IoT devices connect to the edge, and the edge in turn connects to cloud data centers, sometimes with an intermediate layer called “fog”. Where to draw the line seems blurry at best…
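To make that tiered model a bit more concrete, here is a minimal sketch (all names are hypothetical and not taken from any particular framework) of an edge node that answers latency-sensitive reads from a local cache and only forwards aggregated data upstream to the cloud tier:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CloudDataCenter:
    """Central tier: receives only aggregated batches, not every reading."""
    batches: List[Dict[str, float]] = field(default_factory=list)

    def ingest(self, batch: Dict[str, float]) -> None:
        self.batches.append(batch)


@dataclass
class EdgeNode:
    """Edge tier: sits between IoT devices and the cloud, close to the devices."""
    cloud: CloudDataCenter
    cache: Dict[str, float] = field(default_factory=dict)

    def receive(self, device_id: str, reading: float) -> None:
        # Keep the latest reading locally for low-latency access by nearby clients.
        self.cache[device_id] = reading

    def read(self, device_id: str) -> float:
        # Served from the edge cache; no round trip to the data center.
        return self.cache[device_id]

    def flush(self) -> None:
        # Periodically push an aggregate upstream to save bandwidth.
        self.cloud.ingest(dict(self.cache))
        self.cache.clear()


# IoT device -> edge -> cloud, as in the tiered model described above.
cloud = CloudDataCenter()
edge = EdgeNode(cloud)
edge.receive("sensor-1", 21.5)   # device reports to its nearby edge node
print(edge.read("sensor-1"))     # low-latency read answered at the edge
edge.flush()                     # only the aggregate reaches the cloud
print(cloud.batches)
```

The point of the sketch is only the split in responsibilities: latency-sensitive work stays at the edge, while the cloud tier sees a reduced, aggregated stream; a “fog” layer would simply be another hop of the same pattern between the two.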
