Everything You Ever Wanted To Know About Edge Computing

Edge computing is a distributed information technology (IT) architecture in which client data is processed at the network's periphery, as close to the originating source as possible.

Data is critical to modern business because it provides valuable business insight and enables real-time control over critical business processes and operations. Businesses today are awash in a sea of data, and massive amounts of data can be collected routinely from sensors and IoT devices operating in real time from remote locations and inhospitable operating environments almost anywhere on Earth.

However, this virtual flood of data is altering how businesses approach computing. The traditional computing paradigm, built on a centralized data center and the public internet, is not well suited to moving rivers of real-world data that never stop growing. Bandwidth constraints, latency issues, and unpredictable network disruptions can all conspire to thwart such efforts. Businesses are addressing these data challenges by implementing edge computing architectures.

Edge computing, in its simplest form, relocates a portion of storage and compute resources away from the central data center and closer to the source of the data. Rather than transmit raw data to a central data center for processing and analysis, this work is performed on-site at the source of the data – whether that is a retail store, a factory floor, a sprawling utility, or throughout a smart city. Only the output of that edge computing work, such as real-time business insights, equipment maintenance predictions, or other actionable responses, is sent back to the main data center for review and other human interactions.
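
To make the pattern concrete, here is a minimal sketch of an edge node that reduces a raw sensor batch to a compact summary on site and forwards only that summary upstream. The sensor source, the summary fields, and the central endpoint URL are all assumptions made for illustration, not part of any particular product or platform.

```python
import json
import random
import statistics
import urllib.request

# Hypothetical central collection endpoint; a real deployment would use its own API.
CENTRAL_ENDPOINT = "https://central.example.com/api/summaries"

def read_sensor_batch(n=1000):
    """Stand-in for reading raw measurements from local sensors."""
    return [20.0 + random.gauss(0, 0.5) for _ in range(n)]

def summarize(readings):
    """Reduce a raw batch to a few numbers worth sending upstream."""
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 3),
        "stdev": round(statistics.stdev(readings), 3),
        "max": round(max(readings), 3),
    }

def send_to_central(summary):
    """POST a few hundred bytes of results instead of the full raw batch."""
    req = urllib.request.Request(
        CENTRAL_ENDPOINT,
        data=json.dumps(summary).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    batch = read_sensor_batch()    # raw readings stay on the local LAN
    summary = summarize(batch)
    print(summary)
    # send_to_central(summary)     # only this small payload would cross the WAN
```

The essential design choice is that the thousand raw readings never leave the site; only the handful of derived values does.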

As a result, edge computing is reshaping information technology and business computing. This article examines edge computing in detail: what it is, how it works, the cloud's influence, edge use cases, tradeoffs, and implementation considerations.

What is edge computing and how does it work?

Edge computing is fundamentally a matter of location. Traditionally, data is generated at a client endpoint, such as a user's computer. That data is transferred across a WAN, such as the internet, and then stored and processed by an enterprise application on the corporate LAN. The outcome of that work is then communicated back to the client endpoint. This tried-and-true client-server model serves the majority of common business applications well.

However, the number of devices connected to the internet, as well as the volume of data produced and consumed by businesses, is growing at a rate that traditional data center infrastructures cannot keep up with. According to Gartner, 75% of enterprise-generated data will be created outside of centralized data centers by 2025. Moving so much data, often under time- or disruption-sensitive conditions, places enormous strain on the global internet, which is itself frequently congested and disrupted.

As a result, IT architects have shifted their focus away from the central data center and toward the logical edge of the infrastructure, relocating storage and computing resources from the data center to the point of data generation. The logic is simple: If you can't get the data closer to the data center, move the data center closer to the data. Edge computing is not a new concept; it is rooted in decades-old concepts of remote computing – such as remote offices and branch offices – in which it was more reliable and efficient to place computing resources at the desired location rather than relying on a single central location.

Edge computing locates storage and servers close to the data, frequently requiring only a single rack of equipment to collect and process data on the remote LAN. Often, computing equipment is deployed in shielded or hardened enclosures to protect it from temperature, moisture, and other environmental extremes. Processing frequently entails normalizing and analyzing the data stream in order to extract business intelligence, with only the analysis results being returned to the primary data center.
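
One common shape of that on-site processing is normalization followed by lightweight analysis over a rolling window. The sketch below is an assumed example: it converts raw millivolt readings to degrees Celsius, discards readings outside the sensor's plausible range, and keeps only a small rolling summary that would be forwarded to the primary data center.

```python
from collections import deque

class EdgeStreamProcessor:
    """Normalize a raw sensor stream and retain only a rolling summary."""

    def __init__(self, window=100):
        self.window = deque(maxlen=window)

    def normalize(self, raw_millivolts):
        # Assumed sensor characteristic: 10 mV per degree Celsius.
        return raw_millivolts / 10.0

    def ingest(self, raw_millivolts):
        value = self.normalize(raw_millivolts)
        if -40.0 <= value <= 125.0:      # drop readings outside the plausible range
            self.window.append(value)

    def summary(self):
        """The only data that needs to leave the site."""
        if not self.window:
            return None
        vals = list(self.window)
        return {"mean_c": sum(vals) / len(vals), "min_c": min(vals), "max_c": max(vals)}

proc = EdgeStreamProcessor(window=50)
for mv in (215, 218, 9999, 221, 219):    # 9999 is a glitch that gets filtered out
    proc.ingest(mv)
print(proc.summary())
```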

Business intelligence can take a wide variety of forms. Examples include retail environments in which video surveillance of the showroom floor is combined with actual sales data to determine the most desirable product configuration or to gauge consumer demand. Predictive analytics can also guide equipment maintenance and repair before defects or failures actually occur. Other examples are frequently associated with utilities, such as water treatment or electricity generation, to ensure proper equipment operation and output quality.
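
To illustrate the predictive-maintenance case in the simplest possible terms, the sketch below flags a machine for service when its vibration readings stay above a threshold for several consecutive samples. The threshold and the trending rule are assumptions for illustration; a production system would use a model fitted to the actual equipment.

```python
def needs_maintenance(vibration_mm_s, threshold=7.1, consecutive=3):
    """Flag a machine when vibration velocity (mm/s RMS) stays above a threshold
    for several consecutive readings. The 7.1 mm/s figure is an illustrative
    value in the spirit of common severity guidance, not a recommendation."""
    streak = 0
    for reading in vibration_mm_s:
        streak = streak + 1 if reading > threshold else 0
        if streak >= consecutive:
            return True
    return False

# A pump whose vibration creeps upward is flagged before it fails outright.
print(needs_maintenance([3.2, 4.0, 7.4, 7.9, 8.3]))  # True
print(needs_maintenance([3.2, 4.0, 7.4, 5.1, 6.0]))  # False
```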

Why is edge computing important?

Computing tasks necessitate the use of appropriate architectures, and an architecture that is appropriate for one type of computing task is not necessarily appropriate for all types of computing tasks. Edge computing has established itself as a viable and critical architecture for distributed computing, allowing for the deployment of compute and storage resources closer to — and ideally in the same physical location as — the data source. In general, distributed computing models are not new, and concepts such as remote offices, branch offices, colocation of data centers, and cloud computing all have a long and proven history.

However, decentralization can be difficult, requiring a high level of monitoring and control that is easily overlooked when departing from a centralized computing model. Edge computing has gained traction as a viable solution to emerging network problems associated with the massive volumes of data produced and consumed by today's organizations. It is not simply a matter of quantity; it is also a matter of time, as applications increasingly depend on processing and responses that are time-sensitive.

Consider the growing popularity of self-driving cars. They will be reliant on intelligent traffic signaling. Automobiles and traffic management systems will need to generate, analyze, and exchange data in real time. When this requirement is multiplied by a large number of autonomous vehicles, the magnitude of the potential problems becomes clear. This necessitates a highly responsive network. Edge – and fog – computing address the three primary network constraints of bandwidth, latency, and congestion or reliability.

  • Bandwidth: Bandwidth is the maximum amount of data that a network can transmit in a given amount of time, typically expressed in bits per second. All networks have a finite amount of bandwidth, and these constraints are exacerbated in wireless communication. This means that the amount of data – or the number of devices – that can communicate across the network is finite. While it is possible to increase network bandwidth to accommodate more devices and data, the cost can be significant, the limits remain finite (if higher), and the added capacity does not resolve other issues. A rough calculation after this list illustrates how quickly raw data can saturate a site's uplink.
  • Latency: The term "latency" refers to the time required to transmit data between two points on a network. While communication should ideally occur at the speed of light, large physical distances combined with network congestion or outages can cause data to be delayed in transit. This slows down any analytics or decision-making processes and impairs a system's ability to respond in real time. In the autonomous vehicle example, such delays could even cost lives.
  • Congestion: Essentially, the internet is a global "network of networks." Although the internet has evolved to provide adequate general-purpose data exchange for the majority of everyday computing tasks – such as file exchanges or basic streaming – the volume of data generated by tens of billions of devices can overwhelm it, causing high levels of congestion and forcing time-consuming data retransmissions. In other cases, network outages can exacerbate congestion and even cut off communication to some internet users entirely, leaving internet of things devices inoperable for the duration of the outage.
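
To put rough numbers on the bandwidth point, the back-of-the-envelope calculation below compares moving a day of raw data over a modest uplink with moving only an edge-computed summary. The data volumes and link speed are assumed figures chosen for illustration, not measurements.

```python
def transfer_time_seconds(data_bytes, link_mbps):
    """Idealized transfer time: payload size divided by link throughput."""
    return (data_bytes * 8) / (link_mbps * 1_000_000)

RAW_DAY_BYTES     = 500 * 1024**3   # assume ~500 GB of raw video/sensor data per day
SUMMARY_DAY_BYTES = 50 * 1024**2    # assume ~50 MB of edge-computed results per day
UPLINK_MBPS       = 100             # assume a 100 Mbit/s site uplink

print(f"raw:     {transfer_time_seconds(RAW_DAY_BYTES, UPLINK_MBPS) / 3600:.1f} hours")
print(f"summary: {transfer_time_seconds(SUMMARY_DAY_BYTES, UPLINK_MBPS):.1f} seconds")
```

Under these assumptions the raw stream alone would occupy the uplink for roughly twelve hours a day, while the summarized output moves in a few seconds.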

By locating servers and storage close to where data is generated, edge computing enables the operation of a large number of devices over a much smaller and more efficient LAN with ample bandwidth reserved exclusively for local data-generating devices, effectively eliminating latency and congestion. Local storage collects and protects raw data, while local servers can perform critical edge analytics – or at the very least pre-process and reduce the data – in real time, before sending the results, or just the essential data, to the cloud or central data center.
