How three wireless technologies will soon ignite the edge computing revolution
Nicole Raimundo envisions a day when streaming video will be as routine a part of emergency response as stretchers and defibrillators.
Raimundo, who is the chief information officer for the town of Cary, North Carolina, believes high-quality live streams will redefine the way responders, caregivers and law enforcement authorities manage crises. Police departments across a region will be able to assess an emergency and dispatch officers accordingly. Doctors and nurses will diagnose injuries via high-definition video before patients reach the emergency room. Traffic control systems will automatically adjust stoplights and re-route traffic to speed emergency vehicles on their way.
“Imagine if you could string applications together so response happens all at once,” said Raimundo, who is leading efforts to implement such capabilities for Cary’s “smart cities” project.
The technology to enable these scenarios is theoretically in place today – but you wouldn’t want to depend on it. As anyone who has ever made a video call over a cellular network knows, fuzz-outs, freezes and dropped connections are par for the course. It’s the price we pay for sharing bandwidth with others, which is how wireless networks have always operated.
All that may be about to change, however — and 2020 could be the turning point. A trio of new technologies is set to redefine wireless networking. That, in turn, could change the way enterprises think about how they build applications, manage data, distribute computing resources and even deploy robots and factory floor machinery.
Computing on the edge
It’s called “edge computing,” and the infrastructure to make it real is about to hit the market. The timing couldn’t be better. International Data Corp. projects that the world will create 163 zettabytes of data annually by 2025, about 10 times as much as in 2016. The research firm also expects that a quarter of that data will be created in real time, with “internet of things” devices contributing 95% of that volume.
Processing and storing such large volumes of data in a centralized cloud or data center is impractical. That means more decisions need to move closer to the sensors, video cameras, cash registers, hygrometers and hundreds of other devices that create data. The emerging wireless protocols to make that distributed intelligence feasible – 5G, Wi-Fi 6 and Citizens Broadband Radio Service — are finally here.
The big question: Will complexity, security and management issues stall new edge models before they get off the ground?
There’s a lot at stake for a wide swath of the technology industry, from cloud computing providers to traditional networking companies and telecommunications giants. Edge computing’s distributed nature is at odds with the centralized processing model that cloud giants such as Amazon Web Services Inc. and Microsoft Corp. have ridden to success, and they’re wasting no time in responding.
At the same time, networking providers such as Cisco Systems Inc. and Juniper Networks Inc. see the opportunity to embed themselves more firmly into their customers’ information technology infrastructure. Wireless carriers and colocation providers are hunting new opportunities to help enterprises construct sprawling new edge “fabrics.” Virtualization and systems management vendors such as VMware Inc. and IBM Corp. will compete to tie it all together, as will a host of startups.
These issues will be top of mind for many people at industry conferences in coming weeks, from Cisco Live in Barcelona this week to Mobile World Congress next month, also in Barcelona, and Internet of Things World in April in San Jose, California. But they’re also likely to dominate conversations throughout the tech industry all year.
That’s because many people expect edge computing will prompt a massive rethinking of enterprise IT infrastructure. “I believe we’re right smack in the middle of a big rip with forces pulling for both centralization and decentralization,” said Vince Kellen, CIO at the University of California at San Diego. Cloud will continue to be the preferred destination for aggregation and analysis, but what the networks that feed data to the cloud will look like is anybody’s guess.
Wireless at wire speed
By now just about everyone has heard of 5G, the new international standard for communication over digital cellular networks, but most people don’t understand why it’s such a big deal. There’s good reason for confusion. There are three different versions of the protocol – low-band, mid-band and high-band – and they’re all incompatible with one another. Low-band networks offer only marginal improvements over current 4G speeds and will be the first to reach consumer handsets.
The real excitement is over high-band 5G, which operates in the spectrum between 20 gigahertz and 100 gigahertz, a range that has never been used for consumer applications. Not only do high-frequency bands enable blistering speed, but 5G also brings the ability to slice signals into multiple virtualized and independent logical networks that run on the same physical network infrastructure.
Each slice can have different characteristics and can be reserved for specific uses or organizations, providing guaranteed performance and extremely low latency. Network slicing does away with bandwidth sharing, which is the principal cause of unpredictable performance on today’s cellular networks.
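To make the slicing idea concrete, here is a minimal sketch, in Python, of how a slice and its hard reservations might be modeled. The class, its fields and the capacity check are illustrative assumptions for this article, not constructs from the 5G specification:

```python
# A minimal, hypothetical model of network slicing: several logical
# networks with independent guarantees carved out of one physical
# radio network. All names and figures are illustrative only.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class NetworkSlice:
    tenant: str              # the organization or use the slice is reserved for
    guaranteed_mbps: float   # bandwidth carved out exclusively for this slice
    max_latency_ms: float    # latency ceiling the slice must honor

def can_host(capacity_mbps: float, slices: list[NetworkSlice]) -> bool:
    """Each guarantee is a hard reservation, so a physical network can
    host a set of slices only if their reservations fit its capacity."""
    return sum(s.guaranteed_mbps for s in slices) <= capacity_mbps

slices = [
    NetworkSlice("emergency-response", guaranteed_mbps=500, max_latency_ms=5),
    NetworkSlice("fleet-telemetry", guaranteed_mbps=100, max_latency_ms=20),
    NetworkSlice("general-public", guaranteed_mbps=2000, max_latency_ms=50),
]
print(can_host(10_000, slices))  # True: 2,600 Mbps reserved of 10,000 available
```

Because the reservations are exclusive, each tenant’s performance no longer depends on what anyone else on the network is doing.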
“If you’re FedEx and you need cellular bandwidth today, you have to share it with people streaming videos,” said Zac Smith, chief executive of Packet Host Inc., developer of a bare-metal cloud platform. With 5G, in contrast, “whenever you’re near a radio you have guaranteed spectrum that isn’t shared,” he said. “Now you can run applications with zero latency and nearly unlimited bandwidth.”
And the connection can be very fast, up to 10 gigabits per second. That makes 5G a potential wireless alternative to Ethernet cabling, a fact that could liberate millions of machines from their hard-wired tethers.
The challenge with high-band 5G is that signals at those frequencies don’t travel very far and are easily blocked by walls and trees. To get the full benefit of speed and low latency, micro-cell stations need to be positioned as little as 500 feet from each other, making network buildouts both time-consuming and expensive. It’s expected that for the foreseeable future public cellular data networks will use a combination of all three types of 5G bands, along with their 4G and 3G predecessors, depending on the usage scenario.
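A back-of-envelope calculation shows why that spacing makes dense buildouts so expensive. Assuming a simple square grid at 500-foot intervals, a simplification of real radio planning:

```python
# Rough arithmetic on high-band 5G micro-cell density. The square-grid
# assumption is a simplification; real layouts follow streets and terrain.
FEET_PER_MILE = 5280
SPACING_FT = 500

cells_per_mile = FEET_PER_MILE / SPACING_FT   # ~10.6 cells along each mile
cells_per_sq_mile = cells_per_mile ** 2       # ~112 cells on a square grid
print(f"~{cells_per_sq_mile:.0f} micro-cells per square mile")
```

Roughly 112 micro-cells per square mile, versus a handful of conventional towers for the same area.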
Commercial deployments of 5G networks began last year and about 50 live networks are in place, according to Chris Pearson, president of 5G Americas, a trade association of telecommunications service providers and manufacturers. “But you’ll see a steep ramp up in 2020,” said Pearson, who’s leading two sessions on 5G futures at the Internet of Things World conference in April. “We’re expecting more than 1 billion connections worldwide by the end of the year.”
Wi-Fi that works
The second technology that’s set to make an impact this year is Wi-Fi 6, the latest version of the wireless protocol typically used within buildings or geographically bounded shared spaces. Although the standard has yet to be formally ratified, devices that are compatible with the draft standard have been available for months and are expected to fall sharply in price over the next year.
Wi-Fi 6 delivers a modest performance boost over the current standard but has a couple of new features that enterprises will appreciate. Like 5G, Wi-Fi 6 implements its own form of network slicing, enabling signals to be dedicated to certain endpoints. Bandwidth sharing is the bugaboo that has frustrated network architects for years, limiting Wi-Fi’s use in settings where reliability and speed are critical. “Wi-Fi is currently completely unusable for any application requiring guaranteed access and performance,” said Packet’s Smith.
The new standard also raises the threshold on the number of devices that can be addressed simultaneously by a Wi-Fi access point. When combined with bandwidth slicing, that greatly improves reliability. “It is much more effective at handling densely populated areas” than its predecessors, said Bob Laliberte, an analyst at Enterprise Strategy Group Inc.
Finally, a scheduling feature called “Target Wake Time” permits devices on the network to sleep and wake on a negotiated schedule rather than staying connected all the time, which can greatly improve the battery life of sensors and other endpoint devices. Currently, endpoints must constantly broadcast their status over the network, even if updates are needed only occasionally. “Wi-Fi 6 is a game changer from a battery life perspective,” said Kilton Hopkins, CEO of Edgeworx Inc., an edge computing software company.
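The arithmetic behind Hopkins’ point is easy to sketch. Assuming illustrative rather than measured power figures, a radio that sleeps between brief scheduled wake-ups draws a tiny fraction of the power of one that stays connected around the clock:

```python
# Back-of-envelope sketch of why scheduled wake-ups extend battery life.
# All power figures are illustrative assumptions, not measured values.
ALWAYS_ON_MW = 50.0        # radio kept associated and listening continuously
SLEEP_MW = 0.05            # deep sleep between scheduled wake-ups
AWAKE_MW = 50.0            # radio active during a brief wake window
WAKE_SECONDS_PER_HOUR = 2  # device wakes briefly each hour to report status

awake_fraction = WAKE_SECONDS_PER_HOUR / 3600
scheduled_mw = AWAKE_MW * awake_fraction + SLEEP_MW * (1 - awake_fraction)

print(f"always-on average draw: {ALWAYS_ON_MW:.2f} mW")
print(f"scheduled average draw: {scheduled_mw:.2f} mW")
# Roughly 0.08 mW versus 50 mW: a sensor that lasted days could last years.
```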
The third significant new wireless standard, Citizens Broadband Radio Service, is also the least well-known. CBRS, which is unrelated to the lightly regulated Citizens Band voice service that was popular with truckers during the 1970s and 1980s, covers a swath of shared spectrum originally reserved for the U.S. Department of Defense and cleared for commercial use last fall. Like Wi-Fi, CBRS is intended for geographically bounded areas such as university campuses and sports arenas.
It has greater capacity, speed and range than Wi-Fi. But perhaps more important, CBRS can be used as a carrier medium for 5G, meaning enterprises can run their own 5G networks without licensing commercial spectrum. That has never been possible before. Commercial CBRS deployments have been slowed by regulatory red tape but are expected to begin in earnest this year.
Taken together, the new standards should enable organizations to build micro-networks without having to deal with the bottleneck of cell towers and shared signals. They can essentially become mini-cloud providers, delivering most services locally and only sending data selectively to a central cloud.
That has cloud providers scrambling to respond. Although public cloud’s rapid growth has so far provided plenty of riches to go around, edge computing could be a potent challenger to the centralized computing model ushered back in by the cloud.
1 + 1 = 3
The potential is particularly compelling when technologies are combined. For example, Hopkins said Wi-Fi 6’s Target Wake Time feature can lead to dramatically lower power consumption, which enables more intelligence to be housed in sensors and remote devices. That, in turn, allows more application logic to be delegated to those devices.
Similarly, the speed and bandwidth of 5G networks will enable many devices that currently require hard-wired Ethernet connections to unplug. Companies that are embracing these technologies “are not looking to replace Wi-Fi; they’re looking to replace wiring,” said Ozer Dondurmacioglu, vice president of marketing at Celona Inc., a company that builds enterprise applications based upon 5G.
Features such as bandwidth slicing and reserved bandwidth are also challenging decades-old assumptions about network architecture. “Years ago, everything was built on a hub-and-spoke model,” said Joe Wojtal, chief technology officer of the global service provider division at World Wide Technology Inc., a provider of technology and supply chain services for enterprises. “With public cloud and edge computing becoming more prevalent, there are really no more borders to these networks.”
“With 3G and 4G the network was a big pie, and the quality of service between the app and mobile device was determined on a best-effort basis,” said Ken Zhang, CEO of Global Elmeast, an IT consulting firm focused on intelligent networks. “It’s now possible for enterprises to build their own dedicated 5G networks within a given geographic region with as few as 10-20 base stations.”
Sitting atop this new infrastructure is software-defined networking, a catch-all term for the management layer that orchestrates devices and services. SDN enables services to be provisioned and managed flexibly from a central administration point. Networks can be set up, changed, segmented and shut down without touching physical devices.
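In practice, that central administration point is often a controller with a REST API: an operator describes the segment it wants, and the controller pushes the configuration to the underlying devices. The controller URL, endpoint and payload schema in this sketch are hypothetical, chosen only to show the shape of the interaction:

```python
# A sketch of provisioning a network segment through an SDN controller's
# REST API. The URL, endpoint and payload schema are hypothetical.
import json
import urllib.request

CONTROLLER = "https://sdn-controller.example.com/api/v1"

def provision_segment(name: str, vlan_id: int, max_latency_ms: float) -> None:
    """Ask the controller to create a segment; no device is touched directly."""
    payload = json.dumps({
        "name": name,
        "vlan": vlan_id,
        "qos": {"max_latency_ms": max_latency_ms},
    }).encode()
    req = urllib.request.Request(
        f"{CONTROLLER}/segments",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print("controller response:", resp.status)

# provision_segment("emergency-response", vlan_id=120, max_latency_ms=5)
```

The point of the pattern is that tearing the segment down, or reshaping it, is another API call rather than a truck roll.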
Arizona State University is going all in on SDN, 5G, Wi-Fi 6 and CBRS in a complete overhaul of its campus network that it expects will deliver high speed and guaranteed performance across thousands of devices.
“We’re looking at virtual networks, software-defined orchestration and automation across multiple domains with the ability to provision performance, latency, privacy and security for each customer,” said James McCabe, an ASU enterprise network architect who previously designed networks for the National Aeronautics and Space Administration.
ASU’s new network will enable the university to disperse processing now done on central hubs to local processors and even to intelligent devices themselves. “We have the ability to sensor the environment as well as the transmission,” McCabe said. For example, intelligent cameras will be able to interpret video signals instead of just capturing them, empowering campus police to detect a crowd while it’s forming or identify people who are overheating on a hot summer day.
The new network will enable the university to become, in effect, both a cloud service provider and a telecommunications carrier. ASU has already deployed branded versions of applications such as WhatsApp and Zoom videoconferencing on a campus-wide basis rather than from a public cloud. It can mine usage data to better understand how those applications are being used.
It can also apportion dedicated slices of its network for such uses as emergency response, fleet management and scientific experimentation. “For the first time we’re virtualizing the network and taking advantage of a sliceable software platform,” said CIO Lev Gonick. “It’s simplified operational overhead dramatically.”
ASU is also extending its network for use by nearby innovation zones where businesses set up shop and collaborate with the university. “Leveraging our network rather than commercial networks can add up to significant cost savings for those towns,” Gonick said.
SDN also permits operators to expose network services through application programming interfaces, a capability that particularly piques the interest of people in the public sector like the Town of Cary’s Raimundo. “Current IoT solutions tend to be very singular,” she said, “but we’re talking about new ecosystems that pull all this data into one place and look at it holistically.”
That could be used to furnish a consistent user experience to visitors driving through the town as well as aid in region-wide emergency response. Cary is working with the nearby town of Apex as well as the North Carolina capital city of Raleigh on a coordinated approach to storm water sensors so that towns that are experiencing flooding can alert their downstream neighbors to prepare.
The goal is for video cameras to sense an oncoming flood and trigger alerts that automatically shut down affected roads and redirect traffic. Such a network can be built by federating local networks through APIs. “A lot of the benefit will be thinking in a regional approach across a larger community,” Raimundo said.
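A minimal sketch of what that kind of federation might look like, with hypothetical endpoints standing in for each town’s alert API:

```python
# A sketch of federating flood alerts across municipal networks through
# webhooks. Endpoints and payload fields are hypothetical illustrations
# of stitching local networks together with APIs.
import json
import urllib.request

DOWNSTREAM_NEIGHBORS = [
    "https://apex.example.gov/api/alerts",
    "https://raleigh.example.gov/api/alerts",
]

def broadcast_flood_alert(sensor_id: str, water_level_cm: float) -> None:
    """Notify every downstream town so it can close roads and reroute traffic."""
    payload = json.dumps({
        "type": "flood",
        "sensor": sensor_id,
        "water_level_cm": water_level_cm,
    }).encode()
    for url in DOWNSTREAM_NEIGHBORS:
        req = urllib.request.Request(
            url, data=payload,
            headers={"Content-Type": "application/json"}, method="POST",
        )
        urllib.request.urlopen(req)  # fire the webhook to one neighbor
```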
Campus networks that combine technologies like CBRS, Wi-Fi 6 and 5G can empower enterprises to become, in effect, providers of their own small clouds. That would permit the owner of a football stadium to set up a private 5G network exclusively for fans in the stands and provide value-added content such as virtual reality replays and interactive gaming for a fee or as a ticket sales incentive. Essentially, the stadium owner becomes a content delivery network and a carrier without paying any of the overhead to external services.
“Right now people in the stands are using an LTE service, but there’s no reason the stadium can’t become the network provider,” said Packet’s Smith.
Given the complexity of highly distributed networks, enterprises are likely to lean heavily on technology providers to make edge computing real.
“We have multifaceted complexity and the management overhead is enormous,” said Russell Kaurloto, CIO at Clemson University, which is investigating a network overhaul using CBRS, 5G and Wi-Fi 6. He’s leaning on technology providers to make the network as simple for the university to operate as the cable modem in a consumer’s home. “I can control it but I don’t own it,” he said. “Do I really want to be in the carrier business?”
Client/server 2.0?
Despite the promise of next-generation networks, many CIOs remain skeptical. In an informal year-end survey of IT leaders by The Wall Street Journal, they labeled 5G as one of the most overhyped technologies of the previous year.
Their attitude no doubt reflects a certain battle weariness but may also be shaped by the belief that these technologies are simply faster versions of what they already have, said Edgeworx’s Hopkins. “If they think 5G is just an upgrade in bandwidth, then of course it’s not a revolution,” he said. “They need to understand that this is about entirely new models of connectivity and consumption.”
Some experts have drawn an analogy to the electrification of manufacturing plants in the 1920s. Prior to that, most factories used a system of serpentine leather belts connected to a central power source, such as a steam engine or water wheel, to drive equipment on the factory floor. Decisions about where to locate machinery were influenced as much by the need for proximity to the power source as by operational efficiency. Electrification enabled factory owners to overhaul their assembly lines, putting machinery where it made the most sense and enabling huge gains in manufacturing productivity.
Next-generation networks have the same disruptive potential, many experts believe. Initial use cases are likely to include real-time couponing in retail settings, fleets of autonomous vehicles and modular robots in manufacturing and warehouse environments, said ESG’s Laliberte. “The IT pendulum is clearly swinging back to distributed environments and technologies that enable organizations to more effectively connect those environments,” he said.
But complexity could throw a wrench into the works. IT veterans have been down the distributed intelligence path before. In the early 1990s, a concept called client/server captured the industry’s imagination with promises of breaking centralized mainframe computing into networks of servers and PCs.
Client/server was supposed to be cheaper and more flexible than centralized computing, but it never lived up to expectations, in large part because of administrative complexity and security concerns. Could edge computing end up being client/server 2.0? Some people say it’s a possibility.
“The size and scale of this stuff runs the risk of collapsing the value proposition,” said Rob High, an IBM fellow and chief technology officer for IBM Watson. “If we don’t solve the problem of making administration ubiquitous, easy and cheap, then the weight of the scale could supersede the value it delivers.”
World Wide Technology’s Wojtal agreed. “Nobody is going to pay more and deal with more complexity if they’re not seeing the payback,” he said.
Others say the scenarios are too different to compare. For one thing, client/server was conceived in the days before the commercial internet provided a consistent transport layer. It also relied heavily on Windows personal computers, which could barely even multitask at the time. In contrast, new edge architectures are intended to distribute processing and logic across a diverse and fluid network of computers and other devices.
Client/server was also framed as an alternative to corporate mainframes. Thanks to the rise of cloud computing, no one is casting aspersions on centralized processing anymore. Rather, the cloud is seen as an essential destination in the value chain with intelligent devices taking on more of the tasks of sifting data at the edge.
“It is all about deploying the minimal amount of infrastructure to perform the analysis required, ensuring that all data is then sent to a centralized data lake, ocean or cloud of your choice,” said ESG’s Laliberte. “Enterprises shouldn’t look at this as a case of one versus the other but rather how these technologies can work together seamlessly.”
That vision of a flat, peer-to-peer network rekindles memories of grid computing, another 1990s phenomenon that never quite lived up to the hype. Grid is a distributed processing architecture in which many computers tackle a complex problem in parallel. It’s still popular in scientific and high-performance computing environments today, and edge computing networks could rekindle interest in other areas.
Last fall UCSD partnered with the University of Wisconsin to apply more than 51,000 rented graphics processing units bound together in a grid to the task of analyzing ice crystals deep below the Antarctic surface. The experiment demonstrated how grid architectures could amass computing power on a temporary basis to attack big problems. “We’re now seeing forms of grid computing becoming very feasible,” said CIO Kellen.
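The pattern is straightforward to illustrate, even though the real experiment used purpose-built scientific software across rented cloud GPUs rather than local CPU cores. A toy sketch of the grid idea:

```python
# A toy illustration of grid computing: many workers chew through
# independent chunks of one large analysis in parallel. This stands in
# for the real GPU grid, not the actual UCSD/Wisconsin software.
from concurrent.futures import ProcessPoolExecutor

def analyze_chunk(chunk_id: int) -> float:
    """Placeholder for one independent unit of analysis."""
    return sum(i * i for i in range(chunk_id * 1000, (chunk_id + 1) * 1000))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:  # one worker per available CPU core
        results = list(pool.map(analyze_chunk, range(64)))
    print(f"aggregated result from {len(results)} chunks: {sum(results):.3g}")
```

Swap the local process pool for thousands of machines rented by the hour and the same divide-and-aggregate structure holds.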
Competitive scrum
The vendor land grab has already begun, led by the cloud platform providers. The on-premises cloud operating stacks that the top three providers are pushing – AWS’ Outposts, Microsoft’s Azure Stack and Google’s Anthos – all serve as base stations that can be used to orchestrate customers’ edge deployments in the future.
Last December AWS announced Wavelength, a service that makes its cloud compute and storage resources available at the edge of 5G networks, along with a partnership with Verizon Communications Inc. that it said is the first of many telecom alliances. AWS has also started rolling out Local Zones, which are small satellite data centers intended for latency-sensitive applications. “People are going to want to eliminate several or all of those hops and find a way to have the compute and the storage much more local to where the 5G is and then an experience that wraps that together,” CEO Andy Jassy told SiliconANGLE.
Microsoft teamed up with AT&T Inc. on an alliance to deliver Azure cloud services from edge locations on AT&T’s 5G network. The company announced a broad series of edge-focused initiatives at its Build developer conference last spring and has a version of its Azure cloud designed specifically for deployment on IoT devices.
Google has so far staked its strategy on the Kubernetes container orchestrator, which it developed internally. Kubernetes is considered a key technology for making cloud-native applications run on edge devices.
Some people say cloud providers have the most to lose to edge networking, although few executives are losing sleep at the moment given the industry’s rapid growth. Nevertheless, a core principle of edge computing is to reduce reliance on central services over the long term.
Cloud providers “absolutely are threatened” by the edge, said Hopkins. “Microsoft and AWS don’t make money by helping people keep data inside their buildings. The more they keep it there, the less incentive there is to use the cloud.”
Telecommunications providers, on the other hand, could benefit from helping customers set up their own micro networks and even delivering cloudlike computing and data services at the local level. “The service providers have a bit of a leg up on the public cloud providers because they have facilities that connect directly to enterprise customers,” said World Wide Technology’s Wojtal.
Edge computing could also spark a resurgence of interest in colocation, a decades-old business that has a knack for reinventing itself. Originally providers of leased data center space, colocation companies now serve an important role as low-cost networking hubs for businesses that want to exchange data with one another. They could find new relevance as nexus points for geographically confined 5G networks.
Not least, traditional networking companies such as Cisco hope that their strength in data centers and corporate networks, along with newly developed and acquired 5G and other edge technologies, will help them remain a mainstay as enterprises rethink their networks.
Building the next generation of networks will take time and CIOs who don’t want to be on the bleeding edge are best advised to wait, experts say. “Get educated on the technologies and ensure key architects and team members are educated as well,” advised ESG’s Laliberte. Then proceed incrementally. “For example, it probably makes sense for organizations to migrate to Wi-Fi 6 during a normal technology refresh, even though there may not be many compatible devices available today,” he said.
By moving methodically, upgrading incrementally and relying on technology providers to do the heavy lifting, enterprises will eventually arrive at a network of unprecedented speed, adaptability and dependability. Even for video phone calls.