Edge Computing’s Unique Challenges

By George Crump

IT teams need to understand Edge Computing’s unique challenges so they can make the right infrastructure design decisions. Treating Edge Computing as a smaller version of the data center puts data at risk, increases complexity, and raises costs. There are critical differences between Edge Computing, Remote Office/Branch Office (ROBO), and Core Data Center use cases:

Data Center        Edge                    ROBO           Core
Serviceability     Limited Accessibility   Accessible     Local
Management         Remote                  Remote         Local
Data Protection    Replication             On-Premises    On-Premises
Footprint          Shelf                   Closet         Data Center
Power              Constrained             Available      Plentiful

Comparing Edge, ROBO and Core Data Centers

Edge Computing vs. “The Edge”

Edge Computing is different from what is commonly called “the Edge.” “The Edge” refers to a data collector, like a sensor or a Wi-Fi camera, even though these devices have a small processor of their own. Edge Computing is the consolidation of processing power that gathers data from a variety of these sensors and processes it. The goal is either to make real-time decisions, as an autonomous vehicle does, or to consolidate the collected data and send a subset back to a larger data center.

While collecting sensor data and acting on it covers a wide swath of Edge Computing use cases, there are others. It might be a Point of Sale (POS) system for an organization with dozens or hundreds of retail locations. Other Edge Computing use cases include content delivery systems, video surveillance processing and storage, and dynamically adapting retail advertising.

In addition to requiring real-time decision-making, Edge locations may also be bandwidth constrained, even with today’s network capabilities. A decision made locally is nearly instant, compared with the seconds required to send data to another location and wait for a response. In these cases, instant versus seconds makes a critical difference. It may also be that the bandwidth to the Edge Computing location isn’t reliable enough, or that transmitting a large amount of data isn’t worth the expense.
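
To put the bandwidth problem in perspective, here is a quick back-of-the-envelope sketch. The daily data volume, uplink speed, and “useful subset” percentage are illustrative assumptions, not measurements from any particular deployment.

```python
# Back-of-the-envelope math: how long does it take to move a day's worth of
# sensor data over a constrained uplink? All figures are illustrative
# assumptions, not measurements from a real Edge deployment.

DAILY_SENSOR_DATA_GB = 250     # assumed raw data collected per day
UPLINK_MBPS = 25               # assumed usable upstream bandwidth
USEFUL_SUBSET_PCT = 0.05       # assumed share worth sending back to the core

def transfer_hours(gigabytes: float, mbps: float) -> float:
    """Hours needed to push `gigabytes` over an `mbps` link at full utilization."""
    bits = gigabytes * 8 * 1000**3          # decimal GB -> bits
    return bits / (mbps * 1000**2) / 3600   # bits / (bits per second) -> hours

raw = transfer_hours(DAILY_SENSOR_DATA_GB, UPLINK_MBPS)
subset = transfer_hours(DAILY_SENSOR_DATA_GB * USEFUL_SUBSET_PCT, UPLINK_MBPS)

print(f"Shipping all raw data:       {raw:5.1f} hours per day")
print(f"Shipping a processed subset: {subset:5.1f} hours per day")
```

With assumptions like these, shipping everything back consumes most of the day’s bandwidth, while processing at the Edge and sending only a summary fits comfortably within it.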

Register for this week’s Virtual CxO Roundtable to get answers to all your Edge Computing and Private Cloud questions.

What Makes Edge Computing Unique?

Edge Computing differs from the core data center and the remote office/branch office in three key areas:

  • Available Space
  • Serviceability
  • Data Protection

Edge Computing is Space Constrained

A Complete Data Center in a Shoebox

The first of Edge Computing’s unique challenges is the physical space available to host the infrastructure. As we indicate in our table, the available data center floor space shrinks from a full-scale facility at the core, to a closet in ROBO, to, at best, a shelf in Edge Computing use cases. In some situations, the “data center” is the space underneath the cash register.

The constraints placed on Edge Computing mean that whatever infrastructure you deploy at the Edge needs to run efficiently in that small footprint. The good news is the hardware to accomplish the feat is available. Mini-servers, like Intel NUCs (Next Unit of Computing), can provide plenty of processing capability while consuming a few dozen watts of power. The problem is finding an efficient software operating environment for those servers.

Edge Computing is Hard to Service

The second of Edge Computing’s unique challenges is that it is hard to reach, physically and sometimes even remotely. The lack of accessibility makes Edge Computing hardware difficult to service if something goes wrong. Most locations are not in major cities. Sometimes they are “in the middle of nowhere” on purpose, because that is where the sensors perform best. Other times they are in small towns, hours away from major airports. The lack of accessibility and serviceability makes redundancy and remote operations critical.

Edge Computing Needs Redundant Availability

Redundant Edge Computing is something that IT planners may overlook, but because of the lack of accessibility, continuous access becomes critical. If the Edge location goes down, sensor data and remote transactions can’t be processed. That can mean the loss of critical information that can’t be recreated, lost revenue, and unhappy customers.

What to look for:

Given the space efficiency of mini-servers, it makes sense to deploy two or three units, even if one has all the processing power the location needs. Redundancy at the Edge means that the software platform responsible for running operations needs to fail over seamlessly to the surviving servers without complex changes to networking. It also means that a replacement server must be easily preconfigured to join the surviving servers automatically when it arrives at the location.
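
As an illustration of what that behavior looks like, here is a minimal Python sketch of heartbeat-driven failover and preconfigured auto-join. The cluster model, node names, timeout, and join token are all hypothetical; they are not how VergeOS or any specific product implements this.

```python
"""Minimal simulation of heartbeat-based failover among a few edge nodes.
The cluster model, join token, and timings are hypothetical illustrations."""

import time
from dataclasses import dataclass, field

HEARTBEAT_TIMEOUT = 5.0                # seconds of silence before a node is "down"
CLUSTER_JOIN_TOKEN = "edge-site-042"   # hypothetical preshared token for auto-join

@dataclass
class Node:
    name: str
    workloads: list = field(default_factory=list)
    last_heartbeat: float = field(default_factory=time.monotonic)

    def alive(self) -> bool:
        return (time.monotonic() - self.last_heartbeat) < HEARTBEAT_TIMEOUT

class EdgeCluster:
    def __init__(self, nodes):
        self.nodes = {n.name: n for n in nodes}

    def heartbeat(self, name: str):
        self.nodes[name].last_heartbeat = time.monotonic()

    def fail_over(self):
        """Move workloads off any node that has stopped heartbeating."""
        survivors = [n for n in self.nodes.values() if n.alive()]
        if not survivors:
            print("no surviving nodes; nothing to fail over to")
            return
        for node in self.nodes.values():
            if not node.alive() and node.workloads:
                target = min(survivors, key=lambda n: len(n.workloads))
                print(f"{node.name} is down; moving {node.workloads} to {target.name}")
                target.workloads.extend(node.workloads)
                node.workloads.clear()

    def auto_join(self, replacement: Node, token: str):
        """A preconfigured replacement joins without manual cluster changes."""
        if token == CLUSTER_JOIN_TOKEN:
            self.nodes[replacement.name] = replacement
            print(f"{replacement.name} joined the cluster automatically")

if __name__ == "__main__":
    cluster = EdgeCluster([Node("nuc-a", ["pos-app"]), Node("nuc-b", ["camera-feed"])])
    cluster.nodes["nuc-a"].last_heartbeat -= 10   # simulate nuc-a going silent
    cluster.fail_over()
    cluster.auto_join(Node("nuc-c"), token=CLUSTER_JOIN_TOKEN)
```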

Edge Computing Needs Redundant Operations

The Edge Computing solution should also be easy to manage and operate remotely. While most solutions provide some form of monitoring, these are often “after-the-fact” add-on products. An add-on creates a single point of management failure, and the Edge location has no way to know whether anything is still “listening.” Instead, IT planners should look for solutions where reporting is the responsibility of the Edge Computing solution itself. The Edge software platform should send its telemetry data to multiple points, which eliminates the single point of failure.
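
The sketch below shows the general pattern: the Edge node itself pushes each telemetry sample to several collection points and tolerates any one of them being unreachable. The collector URLs and payload fields are placeholders, not a real product’s API.

```python
"""Sketch of pushing telemetry to multiple collectors so that no single
monitoring endpoint becomes a point of failure. URLs are placeholders."""

import json
import urllib.request
from urllib.error import URLError

# Hypothetical collection points: core data center, regional site, cloud target.
COLLECTORS = [
    "https://core.example.com/telemetry",
    "https://region1.example.com/telemetry",
    "https://cloud.example.com/telemetry",
]

def push_telemetry(sample: dict, timeout: float = 3.0) -> int:
    """Send the same sample to every collector; return how many accepted it."""
    body = json.dumps(sample).encode("utf-8")
    delivered = 0
    for url in COLLECTORS:
        request = urllib.request.Request(
            url, data=body, headers={"Content-Type": "application/json"}
        )
        try:
            with urllib.request.urlopen(request, timeout=timeout):
                delivered += 1
        except (URLError, OSError) as exc:
            # One unreachable collector is tolerable; the others still hear from us.
            print(f"collector {url} unreachable: {exc}")
    return delivered

if __name__ == "__main__":
    ok = push_telemetry({"site": "store-17", "cpu_pct": 42, "disk_free_gb": 118})
    print(f"{ok}/{len(COLLECTORS)} collectors received the sample")
```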

Moreover, the remote capabilities should include more than remote monitoring. It is not uncommon for Edge Computing locations to number in the dozens, if not hundreds. Having to log in to each location to perform an update or change to a security setting is incredibly time-consuming and increases the chances of human error.

What to look for:

IT planners need to look for a solution that can perform operations like updates or setting changes globally. Executing a task once, instead of logging in to each server individually, increases the efficiency of the IT staff and lowers the overall cost of Edge Computing initiatives.
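
A simple illustration of what “execute once, apply everywhere” looks like in practice: the sketch below fans one setting change out to a fleet of sites with bounded parallelism and collects failures for retry. The site list and the apply_setting routine are hypothetical stand-ins for whatever management API the chosen platform exposes.

```python
"""Sketch of applying one setting change across many edge sites in a single pass.
The site names and apply_setting() are hypothetical stand-ins, not a real API."""

from concurrent.futures import ThreadPoolExecutor, as_completed

SITES = [f"store-{n:03d}" for n in range(1, 101)]   # e.g. 100 retail locations

def apply_setting(site: str, key: str, value: str) -> str:
    # Placeholder: a real implementation would call the platform's management API.
    return f"{site}: set {key}={value}"

def rollout(key: str, value: str, max_parallel: int = 20) -> list[str]:
    """Push the same change everywhere, a bounded number of sites at a time."""
    results = []
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        futures = {pool.submit(apply_setting, s, key, value): s for s in SITES}
        for future in as_completed(futures):
            site = futures[future]
            try:
                results.append(future.result())
            except Exception as exc:
                results.append(f"{site}: FAILED ({exc})")   # flag for retry, don't stop
    return results

if __name__ == "__main__":
    outcome = rollout("ssh.password_auth", "disabled")
    failures = [r for r in outcome if "FAILED" in r]
    print(f"{len(outcome) - len(failures)} sites updated, {len(failures)} need retry")
```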

Edge Computing is Hard to Protect

The third of Edge Computing’s unique challenges is its data protection needs. In many instances, the Edge creates unique data that can’t be recreated if it is lost to hardware failure or a site disaster. The challenge stems from the lack of available space and from operational constraints: there is no room, and no administrative staff, to support on-premises backup infrastructure.

The Problems with Protecting the Edge with the Public Cloud

Many organizations will consider protecting this data in the public cloud, but end up ruling it out because:

  1. The recurring costs to store dormant data are too expensive
  2. The data is needed at the core data center for further processing
  3. There is too much Edge data and not enough bandwidth
  4. Disaster recovery from the Public Cloud to the edge is difficult

What to look for:

IT planners need to look for a solution that can leverage the extra redundancy within their Edge Computing design to facilitate a reasonable on-premises data protection strategy. While protecting data within the same infrastructure does not technically meet the 3-2-1 data protection rule, it gets close. If the Edge solution can also replicate data efficiently, then it does meet the requirements of the 3-2-1 rule. Global Inline Deduplication is a critical requirement so that redundant data is sent only once and replication jobs complete in record time.
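
To make the deduplication point concrete, here is a simplified sketch of deduplicated replication: data is chunked, each chunk is hashed, and only chunks the target has never stored are transmitted. The fixed chunk size and in-memory stores are illustrative only; they are not how any particular product implements Global Inline Deduplication.

```python
"""Simplified illustration of deduplicated replication: chunk the data, hash each
chunk, and transmit only chunks the replication target has not already stored."""

import hashlib
import os

CHUNK_SIZE = 4096   # bytes; real systems tune chunk size for their own trade-offs

def chunks(data: bytes):
    for offset in range(0, len(data), CHUNK_SIZE):
        yield data[offset:offset + CHUNK_SIZE]

def replicate(data: bytes, target_store: dict) -> tuple[int, int]:
    """Copy `data` to the target store; return (chunks examined, chunks sent)."""
    examined = sent = 0
    for chunk in chunks(data):
        examined += 1
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in target_store:      # the target has never seen this chunk
            target_store[digest] = chunk    # only then does it cross the wire
            sent += 1
    return examined, sent

if __name__ == "__main__":
    target = {}                                   # stands in for the remote site
    day1 = os.urandom(CHUNK_SIZE * 40)            # 40 unique chunks of new data
    day2 = day1 + os.urandom(CHUNK_SIZE * 2)      # mostly yesterday's data again

    examined, sent = replicate(day1, target)
    print(f"day 1: sent {sent} of {examined} chunks")
    examined, sent = replicate(day2, target)
    print(f"day 2: sent {sent} of {examined} chunks")
```

Because the target already holds the first day’s chunks, only the genuinely new data crosses the wire on day two, which is what keeps replication jobs short over constrained Edge links.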

Edge Computing is NOT Remote Office Branch Office

Remote Office and Branch Office (ROBO) IT infrastructures are not the same as Edge Computing infrastructures. First, in most cases, they are significantly easier to get to. Second, there is available space, even if it is a server closet, for a more robust infrastructure that includes data protection.

ROBO infrastructures also tend to support a wider variety of workloads, including file sharing, multiple business applications, and core infrastructure utilities. They do, however, share the need for remote operations and can certainly benefit from many of the capabilities that infrastructure at the Edge requires.

Most IT vendors can’t span all three use cases with a single software solution. They may address the specific needs of each use case, but they do so with separate solutions, each requiring its own training, its own patch monitoring and implementation, and its own data protection.

What to look for:

IT planners should look for an infrastructure solution that can span all three location types, plus the public cloud. Imagine the efficiency of running the same networking, storage, and hypervisor software throughout your sprawling infrastructure.

VergeOS, One and Done

VergeIO is an ultraconverged infrastructure (UCI) company. UCI differs from Hyperconverged Infrastructure (HCI) in that it rotates the traditional three-tier IT stack (networking, storage, and compute) onto a linear plane through a single piece of software that we call VergeOS. The result is an efficient data center operating system (DCOS) that can deliver more performance and run a greater variety of workloads on less physical hardware. If you are being asked to do “more with less,” VergeOS is your solution.

In one bootable operating system you eliminate the need for separate storage software, proprietary networking hardware, independent hypervisors, separate “cloud” functionality, data protection software, disaster recovery software, and multiple management interfaces. All of these functions are included in VergeOS’s single piece of software.

VergeOS addresses all of Edge Computing’s unique challenges. It provides:

  • Downward scale to one or two nodes
  • Seamless redundancy, data protection and ransomware resiliency
  • A mesh-like management framework for monitoring and operations
  • Upward scale for branch offices, core data centers, and the cloud

With VergeOS, you don’t have to “go to” the cloud. You can “be the cloud.”

Next Steps

  • This week we are holding a Virtual CxO Roundtable on “Edge Computing and Private Cloud Infrastructures.” We will answer the questions about these two topics that we’ve been collecting over the last few weeks, and we’ll take questions live from our audience. If you have a question, you can submit it in the comments section below. Register.
  • We also have a complete tutorial on developing an Edge Computing strategy. Subscribe to our Digital Learning Guide, “Creating an Edge Computing Strategy.”

Further Reading

The ROI of High-Performance HCI

Learn why Ultraconverged Infrastructure (UCI) is the evolution of high-performance HCI, improving the ROI of legacy solutions. UCI delivers better performance than dedicated storage arrays and superior ROI compared to traditional HCI by integrating virtualization, storage, and networking into a unified platform. Discover how UCI transforms modern data centers efficiently and affordably.

A High-Performance vSAN

Discover how VergeOS leverages ultraconverged infrastructure (UCI) to overcome the performance limitations of traditional HCI. Learn why high-performance vSAN is critical for modern workloads and how VergeOS delivers scalable, cost-effective solutions with industry-leading IOPS and low latency, eliminating the need for costly dedicated storage arrays.

Overcoming the High Cost of DR

Disaster recovery is essential, but high costs and fragmented tools make it challenging for many businesses. VergeIO’s ioReplicate simplifies DR with integrated features like efficient replication, virtual data centers, and automated failover. Achieve rapid, reliable recovery in just three clicks—all at a cost lower than traditional backup solutions.