
Challenges in Edge Computing

Is It Too Hard to Realize the Promise of Edge Computing?

By SunkuR · Published 2 years ago · 6 min read
Figure: Distributed Edge Computing

As 5G and IoT infrastructure transform toward a unified set of architectures, they deliver a brand new set of experiences and use cases, applying unprecedented compute power and distributing intelligence across various devices (Figure).

Traditionally, a limited set of hardware vendors and service providers controlled end-to-end deployment of communication infrastructure. With the transformation toward distributed architectures and advances in 5G, multiple new business and service delivery models are being enabled across different types of edges.

Products and services from multiple infrastructure and software vendors may need to work together, providing cohesive interoperability between network functions and seamless scale-up and scale-out across edge-to-cloud infrastructure. This brings forth various challenges in deploying, scaling and managing the edge computing paradigm.

Some of the challenges in utilizing edge computing are described below.

1. Software Infrastructure

Infrastructure Components in an Edge environment (source)

The transformation toward virtualized and cloud-native models has essentially converted edge infrastructure to an as-a-service deployment model using industry-standard cloud orchestrators such as Kubernetes. Proprietary network functions now need to evolve toward microservice-based architectures for service-oriented deployment models.
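As a minimal illustration of what "cloud native" asks of a network function, the sketch below exposes the liveness and readiness endpoints that an orchestrator such as Kubernetes probes. The paths and payloads are illustrative assumptions, not a specific vendor's API:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread

class ProbeHandler(BaseHTTPRequestHandler):
    """Minimal liveness/readiness endpoints, as an orchestrator expects of a
    cloud-native network function (hypothetical paths)."""
    ready = True  # flipped to False while the function is warming up or draining

    def do_GET(self):
        if self.path == "/healthz":                       # liveness: process is alive
            self._reply(200, {"status": "alive"})
        elif self.path == "/readyz":                      # readiness: able to take traffic
            code = 200 if ProbeHandler.ready else 503
            self._reply(code, {"ready": ProbeHandler.ready})
        else:
            self._reply(404, {"error": "not found"})

    def _reply(self, code, payload):
        body = json.dumps(payload).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence default per-request logging
        pass

def serve(port=0):
    """Start the probe server on a background thread; returns (server, bound_port)."""
    server = HTTPServer(("127.0.0.1", port), ProbeHandler)
    Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

An orchestrator would restart the pod when /healthz fails and withhold traffic while /readyz returns 503, which is what makes rolling operations on edge network functions possible.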

2. Unified Manageability Across Edges

Distributed Control Plane (source)

Edge computing can be divided into multiple segments based on traffic type, applications being serviced, device connectivity and point of presence, leading to an explosion in the number of edge computing zones spanning geographical areas. Each edge computing zone, which addresses a specific geographical region or set of network bandwidth needs, requires an interoperable mechanism with other zones to provide seamless connectivity. This raises the need for unified orchestration and life cycle management across these multiple clusters and cloud regions.
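The unified orchestration described above can be sketched as a single placement API over many zones. Everything here (the zone names, the bandwidth-based capacity model, the region-first preference) is a hypothetical simplification:

```python
from dataclasses import dataclass, field

@dataclass
class EdgeZone:
    """One edge computing zone: a region plus the bandwidth it can still commit."""
    name: str
    region: str
    free_bandwidth_mbps: int
    workloads: list = field(default_factory=list)

class UnifiedOrchestrator:
    """Toy unified control plane: one placement API spanning many zones."""
    def __init__(self, zones):
        self.zones = list(zones)

    def place(self, workload, region, needed_mbps):
        # Prefer a zone in the requested region; fall back to any zone with
        # capacity, i.e. the seamless cross-zone connectivity the text calls for.
        candidates = sorted(
            (z for z in self.zones if z.free_bandwidth_mbps >= needed_mbps),
            key=lambda z: (z.region != region, -z.free_bandwidth_mbps),
        )
        if not candidates:
            raise RuntimeError(f"no zone can host {workload}")
        zone = candidates[0]
        zone.free_bandwidth_mbps -= needed_mbps
        zone.workloads.append(workload)
        return zone.name
```

The point of the fallback is the interoperability requirement: when the local zone is saturated, the workload spills into a neighboring zone through the same API rather than failing.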

3. Public and Private Cloud

Figure: Scalability of Edge Computing (source)

The use of cloud-native technologies across multiple edge scenarios essentially forces the service provider to operate private cloud clusters across these edges. However, hyperscale cloud providers such as Microsoft Azure and Amazon Web Services offer hyperscale economics through public cloud constructs. Infrastructure can now be scaled in an intelligent and cost-efficient manner while leveraging the unified Application Programming Interfaces (APIs) provided by the public cloud provider.
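A toy model of that cost trade-off: fill the cheaper private edge capacity first, then burst the remainder to the public cloud. The instance counts and prices are invented for illustration:

```python
def plan_capacity(demand, private_capacity, private_cost, public_cost):
    """Split instance demand between a private edge cluster and public-cloud burst.

    Illustrative sketch: assumes the private cluster is the cheaper option and
    the public cloud is elastic. Returns (private_instances, public_instances,
    total_cost_per_hour).
    """
    private = min(demand, private_capacity)   # use owned capacity first
    public = demand - private                 # burst the rest to the hyperscaler
    return private, public, private * private_cost + public * public_cost
```

Behind a unified API, the same deployment request can land on either side of this split without the application noticing, which is the hybrid-cloud promise the section describes.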

4. Security and Privacy

Security Required from Device to Edge Cloud (source)

Due to the diverse nature of edge deployments, the needs of a secure edge have evolved into a multi-faceted set of approaches that must be customized per deployment. There is no one-size-fits-all policy that can satisfy requirements across the edge types. Providing Authentication, Authorization and Accounting (AAA) across the distributed edge requires multiple levels of policy to ensure every end user is accounted for. Privacy of the end user or end device is another critical aspect to maintain as traffic flows across the edge. Zero-trust security architecture is one of the latest paradigms, built on the belief that no aspect of data communication is inherently secure and there are no trusted personas. Implementing these measures under real-time, low-latency requirements continues to prove a major challenge.
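A minimal sketch of per-request AAA under a zero-trust posture: every request is authenticated (HMAC token), authorized (per-device ACL) and accounted (audit log), with nothing trusted by default. The key-provisioning scheme and field names are assumptions, not a real protocol:

```python
import hashlib
import hmac
import time

SECRET = b"per-edge-provisioned-key"  # hypothetical: provisioned per device at onboarding
audit_log = []                        # accounting: every decision is recorded

def sign(device_id, ts):
    """Authentication token: HMAC over identity + timestamp (sketch only)."""
    msg = f"{device_id}|{ts}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def authorize(device_id, resource, ts, token, acl, max_skew=30, now=None):
    """Zero-trust check: every request is verified; none is implicitly trusted."""
    now = int(time.time()) if now is None else now
    # Authentication: valid token, and timestamp within the allowed skew
    authn = hmac.compare_digest(token, sign(device_id, ts)) and abs(now - ts) <= max_skew
    # Authorization: this device may touch this resource
    authz = resource in acl.get(device_id, set())
    allowed = authn and authz
    audit_log.append((device_id, resource, allowed))  # accounting
    return allowed
```

Note that the check runs on every request; the latency cost of doing this at line rate is exactly the implementation challenge the section points to.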

5. Hardware Abstraction and Utilization

Abstracting out all the hardware features, accelerators and other enhancements available to virtualized network functions or cloud-native microservices is the value proposition of COTS hardware. With the advent of as-a-service hardware models, such as Graphics Processing Unit (GPU) as a service, Infrastructure Processing Unit (IPU) as a service, or in general x Processing Unit (xPU) as a service, additional intelligence needs to be built in so that latency-sensitive network functions can fully utilize the available hardware features while scaling across edge deployments.
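One way to picture the scheduling intelligence involved: nodes advertise xPU capabilities, and the scheduler places a network function where its required accelerators exist and the most desired ones can be exploited. The capability labels are hypothetical:

```python
def schedule(nf, nodes):
    """Place a latency-sensitive network function on the best-matching node.

    Sketch only: `nodes` maps node name -> advertised capability set
    (e.g. "gpu", "ipu"); `nf` lists hard requirements and optional wants.
    Returns the chosen node name, or None if no node satisfies the requirements.
    """
    required, desired = set(nf["required"]), set(nf["desired"])
    best, best_score = None, -1
    for name, caps in nodes.items():
        caps = set(caps)
        if not required <= caps:
            continue                     # cannot run here at all
        score = len(desired & caps)      # optional accelerators we can exploit
        if score > best_score:
            best, best_score = name, score
    return best
```

The abstraction question is precisely what goes into those capability sets: too coarse and the NF cannot exploit the hardware; too fine and portability across edge deployments is lost.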

6. Value of Data

Data across Edge Cloud (source)

The distributed and disaggregated edge computing paradigm generates huge amounts of data across the end-to-end edge infrastructure. The challenge, however, is that the value of data decreases as latency increases farther from the origin point. Thus, efficient, low-latency data processing and analytics mechanisms are needed at the distributed edges closest to the end users.
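The decay of data value with latency can be modeled, purely for illustration, as exponential with a half-life; choosing a processing site then becomes a value-minus-cost comparison. The half-life and site costs below are invented numbers:

```python
def data_value(base_value, latency_ms, half_life_ms=100.0):
    """Assumed exponential decay of data value with processing latency."""
    return base_value * 0.5 ** (latency_ms / half_life_ms)

def best_site(base_value, sites):
    """Pick the processing site maximizing decayed value minus processing cost.

    `sites` is a list of (name, latency_ms, cost) tuples; names are hypothetical.
    """
    return max(sites, key=lambda s: data_value(base_value, s[1]) - s[2])[0]
```

Under this model the optimum is usually neither the device itself (expensive compute) nor the central cloud (value already decayed), but an edge in between, which is the section's argument.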

7. AI & ML Models for Edge

Example AI/ML Workflow (source)

In order to derive value from the data generated across the various edges, huge numbers of data points associated with a single end user or a single device need to be processed and analyzed at a constant interval (for example, every second in an industrial automation use case). This calls for customized ML and AI models tailored to each edge type, and a management system that can apply the appropriate model to each use case. There is huge scope for innovation and development in this space across the edges.
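A sketch of that model-per-edge-type management: a registry maps each edge type to its tailored model, and each interval's batch is run through the matching one. The threshold "model" and the registry entries are stand-ins for real trained models:

```python
class ThresholdModel:
    """Stand-in for a tailored ML model: flags readings outside a fixed band."""
    def __init__(self, low, high):
        self.low, self.high = low, high

    def predict(self, readings):
        """Return one anomaly flag per reading."""
        return [not (self.low <= r <= self.high) for r in readings]

# Hypothetical mapping of edge type to its tailored model
MODEL_REGISTRY = {
    "industrial": ThresholdModel(low=10.0, high=80.0),   # e.g. vibration limits
    "retail":     ThresholdModel(low=0.0, high=500.0),   # e.g. footfall counts
}

def analyze(edge_type, readings):
    """Run one interval's batch through the model tailored to this edge type."""
    model = MODEL_REGISTRY[edge_type]
    return sum(model.predict(readings))  # anomalous points this interval
```

In a real deployment the registry entries would be trained models pushed out by the management system, but the dispatch shape stays the same.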

8. Life cycle management

Components Involved in Edge Life Cycle Management in a Factory Floor (source)

Software-based network functions deployed across edge infrastructure require the operator to handle packaging, onboarding and deploying the network functions; storing, updating and testing them for interoperability; ensuring high availability with zero downtime; upgrading on the fly without errors; chaos testing; demand-based scaling; supplying adequate infrastructure resources; and detecting anomalies at run time. All of these constitute aspects of life cycle management. Given the complexity at each edge type, moving to microservice-based containerized deployments offers significant benefits over managing virtual machines. DevOps and DevSecOps need to be efficiently customized and implemented for the edge.
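One aspect of that life cycle, zero-downtime upgrading with a clean rollback path, can be sketched as follows; `health_check` stands in for real readiness probing:

```python
def rolling_upgrade(replicas, new_version, health_check):
    """Upgrade replicas one at a time; roll everything back on the first failure.

    Sketch of one life-cycle-management aspect. `replicas` is a mutable list of
    version strings; `health_check(version)` stands in for a readiness probe.
    Returns True on success, False after a rollback.
    """
    old = list(replicas)                  # snapshot for rollback
    for i in range(len(replicas)):
        replicas[i] = new_version         # upgrade one replica at a time,
        if not health_check(replicas[i]): # so capacity never drops to zero
            replicas[:] = old             # error-free upgrade needs a rollback path
            return False
    return True
```

Chaos testing, in this frame, amounts to deliberately making `health_check` fail and verifying the rollback leaves the fleet in its pre-upgrade state.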

9. Operational Telemetry

Importance of Correlating Machine & Application Telemetry (source)

Telemetry consists of sets of metrics from applications and infrastructure that can be exported to a database to be analyzed for meaningful insights. With the distributed nature of edge computing, traditional telemetry collection and analysis models do not apply, as they add latency by collecting and processing data in a centralized location. Metrics generation, telemetry storage and processing need to be distributed across the individual edge types before the data loses value to latency. Analytics models need to be developed that can be customized and scaled across different types of edge deployments.
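The distribution argued for here can be sketched as local aggregation: each edge keeps raw samples to itself and forwards only a compact per-interval summary upstream. The summary fields are an illustrative choice:

```python
class EdgeAggregator:
    """Aggregate raw metrics locally; only compact summaries travel upstream.

    Illustrative sketch: shipping count/min/max/mean instead of every raw
    sample is how distributed telemetry avoids centralizing data collection.
    """
    def __init__(self):
        self.count = 0
        self.total = 0.0
        self.min = float("inf")
        self.max = float("-inf")

    def record(self, value):
        """Ingest one raw sample at the edge; raw samples never leave the site."""
        self.count += 1
        self.total += value
        self.min = min(self.min, value)
        self.max = max(self.max, value)

    def summary(self):
        """The only payload forwarded to the central store per interval."""
        return {"count": self.count, "min": self.min,
                "max": self.max, "mean": self.total / self.count}
```

Correlating these per-edge summaries centrally is then cheap, while latency-sensitive anomaly detection can still run against the raw stream locally.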

10. Policy Management

Entities across Policy Management in an Edge Cloud Environment (source)

Policies at the edge are sets of rules and constraints that control edge service deployment by different personas, such as administrators, service providers, application developers, service owners and operations personnel. These policies help cloud-based orchestrators understand the constraints of each application type, such as hardware constraints, latency tolerance, application priority, run-to-completion models, security requirements and scale requirements. Policies differ heavily based on the type of edge. Ultimately, a centralized policy manager is required that can interact with individual policy managers and enforcers at each edge type.
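The core check of such a centralized policy manager might look like the filter below, which keeps only the edge sites whose capabilities satisfy an application's policy. The constraint names are illustrative assumptions:

```python
def admissible_edges(app_policy, edges):
    """Filter edge sites whose capabilities satisfy an application's policy.

    Sketch of a centralized policy manager's placement check; the field names
    (latency_ms, hardware, secure_enclave) are hypothetical.
    """
    ok = []
    for name, caps in edges.items():
        if caps["latency_ms"] > app_policy["max_latency_ms"]:
            continue  # violates latency tolerance
        if not set(app_policy["hardware"]) <= set(caps["hardware"]):
            continue  # violates hardware constraints
        if app_policy["secure_enclave"] and not caps["secure_enclave"]:
            continue  # violates security requirements
        ok.append(name)
    return ok
```

Per-edge enforcers would then apply local refinements on top of this global filter, which is the two-level structure the section ends on.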

11. Network Automation

Automation Interfaces Required Across Edge to Cloud (source)

Managing and maintaining the distributed compute capacity and the varying network function requirements in edge environments takes hundreds of operations at any given instant. This becomes far more complex in a 5G-based architecture, where intelligence is widely distributed across the network. Automating network operations is a huge differentiating factor in owning, maintaining and operating the network at scale. Zero-touch automation is an emerging area that aims to remediate network issues without human interaction, leveraging AI to take preemptive actions against a predetermined set of objectives.
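A single iteration of such a zero-touch loop can be sketched as follows: compare live metrics against predetermined objectives and fire a remediation for each breach, with no human in the loop. Metric names and actions are hypothetical:

```python
def closed_loop(metrics, objectives, actions):
    """One iteration of a zero-touch loop: objectives in, remediations out.

    Sketch only: `objectives` maps metric name -> upper bound, and `actions`
    maps metric name -> remediation callable. Returns the metrics that fired.
    """
    fired = []
    for metric, value in metrics.items():
        limit = objectives.get(metric)
        if limit is not None and value > limit:
            actions[metric]()    # preemptive action against the stated objective
            fired.append(metric)
    return fired
```

In a real system the remediations would be orchestrator calls (scale out, reroute, restart) and the loop would run continuously, with an AI model predicting breaches before they occur rather than reacting to them.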

Some of these universal challenges across the edge continuum necessitate a simplified architecture that seamlessly interconnects the IoT edge, wireless access edge, fixed access edge, on-premises edge and network edge, so that cloud-native application functions can operate across the infrastructure and, in turn, move data from these networks to the core of the network. One such architectural concept is Multi-Access Edge Computing.


In summary, there is incredible potential in working on the challenges described above. This is a truly interesting time in history to enable edge computing, which touches multiple lives and multiple industries.
