On a recent visit to the Mercedes-Benz factory in Stuttgart, Germany, I was introduced to the highly connected machinery being used to build cars today. Each machine—from the robot welders to the Wi-Fi-connected screwdrivers—kept track of each step of manufacture for every part of every car. As each car was made, every part was tracked, as was each employee who worked on it.
That makes for an impressive factory tour, but behind the scenes, the result is a vast amount of data being stored for every car at every stage of its life, even after it leaves the factory. While Daimler-Benz wasn't willing to share the details of its computing environment with me, it was clear that the volume of data and the low latency required for manufacturing couldn't work in an architecture that relies on a single core data system, meaning a network designed to bounce all of that data between endpoints and a single central server resource, especially given the global nature of the company's manufacturing. The distances that data would have to travel are prohibitive, and the response time between query and answer (your basic definition of latency) would be unacceptable.
And manufacturing isn't the only industry pushing up against the limits of traditional network design. The Internet of Things (IoT) and mobile computing are also moving fast enough that they'll soon fundamentally change the networks you're used to seeing. These trends are demanding massive and still rising bandwidth from today's networks—bandwidth that our traditional network infrastructure increasingly can't handle and certainly won't be able to support over the long term.
IoT is growing so fast that it's already bumping up against networking's physical limits. Sensors in industrial equipment are providing not only gobs of data but also a need to analyze that data in real time, which not only imposes bandwidth needs all its own but also requires a serious upgrade in acceptable latency. And that's only one aspect of IoT. The consumer retail market is growing IoT even faster than the industrial sector, with trends like smart home devices and the services that monitor and respond to them, on-demand entertainment and streaming services, and, of course, the huge and ever-growing sector of mobile websites, apps, and services.
And right around the corner are new trends such as virtual reality (VR) and augmented reality (AR) for work and infotainment, as well as autonomous transports, all of which promise to add huge amounts of real-time data streams to an internet that's already straining at the seams. Worse, these new applications not only want to fit more data down constrained pipes, they want it all analyzed much more quickly; think real time.
5G Wireless Adds Complications
While the perception is that 5G wireless communications is a silver bullet for these problems, in actuality it means more complexity. Among other things, 5G will provide dramatically faster speeds and thus greater overall bandwidth, which sounds great for wireless devices. But mobile networks don't exist by themselves. The new 5G network and the devices that use it will need a network supporting them on the back end so that the data and computing services they require can be available with as little latency as possible. That low-latency requirement will be more insistent than ever as services such as self-driving transports need to transfer data almost instantly in order to do their jobs.
Latency can be thought of as network delay, but it's really caused by several factors, the most basic of which is the speed of light in glass fiber. The longer the distance a data packet has to travel on a network, the longer it takes to reach its destination. While that's still measured in tiny fractions of a second, those fractions add up as other factors join in. For example, the operating speed of network equipment, such as routers and switches, adds to overall latency, and it varies not only by vendor but also by routing or switching chipset. So does the time it takes a server, and whatever application or database it's running, to find the information you need and send it back to you. As the network gets busier and the infrastructure becomes less able to cope with the traffic, latency increases. This is especially true of servers as they become overloaded.
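To get a feel for why distance alone sets a floor on latency, here's a back-of-the-envelope sketch. It assumes the commonly cited figure that light in glass fiber travels at roughly two-thirds of its vacuum speed, about 200,000 km per second; the distances used are illustrative, not measurements from any real network.

```python
# Propagation delay over fiber: the hard lower bound on latency that no
# router upgrade can remove. Roughly 200,000 km/s is an assumption based
# on the typical refractive index of glass fiber.

SPEED_IN_FIBER_KM_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time, in milliseconds, for a one-way fiber distance."""
    return (2 * distance_km / SPEED_IN_FIBER_KM_S) * 1000

# A query to a data center 2,000 km away can never complete in less than
# about 20 ms, before any router, switch, or server time is added.
print(f"{round_trip_ms(2000):.0f} ms")  # 20 ms
# An edge node 50 km away cuts that floor to half a millisecond.
print(f"{round_trip_ms(50):.1f} ms")    # 0.5 ms
```

Everything else the paragraph above mentions, such as switching time and server load, only adds to this floor, which is why shrinking the distance is the one lever that always helps.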
Because communicating with a centralized computing and data repository takes time, the only way to save time (i.e., decrease latency) is to avoid that centralized repository—which means moving big chunks of your network's computing power to the edge of the network. The result is called "edge computing," with architectures referred to as "edge cloud computing," which, in turn, relies on small, distributed data centers called "cloudlets" (an approach sometimes called "fog computing"). A key driver is mobile computing, which necessarily uses data at the edge.
The edge of the network is the part that's closest to the ultimate user. By moving data to the edge, you cut down on delays in two ways: First, you reduce the distance between the user of the data and the place where it's stored (the repository), which reduces the time it takes data to move back and forth. Second, by keeping just the required data near the user, you reduce the amount of data the server has to handle, which also speeds things up.
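Those two savings can be sketched numerically. The model below is a deliberate simplification, response time as round-trip propagation plus serialization time, and the distances, payload size, and bandwidth are illustrative assumptions, not figures from any real deployment.

```python
# Toy model of the two edge savings: shorter distance cuts propagation
# delay; serving only the needed data cuts transfer time.

FIBER_KM_PER_MS = 200  # light in fiber covers roughly 200 km per millisecond

def response_ms(distance_km: float, payload_mb: float, bandwidth_mbps: float) -> float:
    """Round-trip propagation plus time to serialize the payload, in ms."""
    propagation = 2 * distance_km / FIBER_KM_PER_MS
    transfer = (payload_mb * 8 / bandwidth_mbps) * 1000
    return propagation + transfer

# Same 5 MB payload over a 1 Gbps link: a core data center 2,000 km away
# versus an edge node 20 km away.
core = response_ms(distance_km=2000, payload_mb=5, bandwidth_mbps=1000)
edge = response_ms(distance_km=20, payload_mb=5, bandwidth_mbps=1000)
print(f"core: {core:.0f} ms, edge: {edge:.1f} ms")  # core: 60 ms, edge: 40.2 ms
```

Note that in this example the transfer term dominates, which is the point of the second saving: trimming the payload an edge node must serve helps at least as much as moving it closer.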
What Is Mobile Edge Computing?
Mobile edge computing (MEC) is edge computing that supports mobile devices, usually through wireless communications. While it's just starting to become an important part of the enterprise, that will change dramatically in the near future as mobile bandwidth and the demand for data both grow. MEC uses wireless infrastructure plus data repositories positioned close to that infrastructure to keep latency low. For enterprise IT professionals, or even those administering midsized business networks, that can mean lots of changes in the near future, not only to their wireless infrastructure but also to back-end network equipment, hybrid cloud services, in-house application design, and certainly data protection and security.
MEC is already an important component of networks supporting industries as diverse as healthcare and manufacturing and will certainly be critical to new trends like autonomous vehicles. The mobile network edge needs to support extremely low latency because of the decision times required for mobile devices; an autonomous car can't afford to wait long for data while it's moving. Other applications such as augmented reality (AR) are extremely sensitive to latency because a delay in providing data to the app can make it useless if the user has already moved on.
The lack of widespread edge computing resources is already a limiting factor in the development of autonomous vehicles because each car is required to carry what is effectively a supercomputer, complete with a supercomputer's power and cooling demands, in its trunk. This may work while there are relatively few such vehicles, all of which are under development, but it won't work for widespread deployment.
But traditional networks also won't work for autonomous vehicles because the latency of such a network is too high for the vehicles to operate effectively. The same thing is true with widespread AR on mobile devices or widespread artificial intelligence (AI). All need to be close to their data for it to be useful.
Industrial users have a similar problem. As everything from manufacturing machinery to inventory equipment becomes more automated, the demands on the network become greater. Like autonomous cars, they need to have instant access to their data.
Edge Clouds, 5G, and IT
For all these reasons, edge computing is going to have a significant effect on many IT departments, though not yet on all of them. In situations where latency is a problem, meaning where fast access to data is necessary, you may find that your cloud provider isn't able to meet your needs. And this need for ever-faster data access is quickly becoming more prevalent, even in processes that previously didn't worry much about latency. Fortunately, you may also find cloud providers that can handle clouds at the edge (the cloudlets or fog mentioned earlier) and that will meet your needs. You may also find that your wireless provider, the one that will be providing your 5G communications, will also host your data repository so that it stays as close to the edge of the network as possible.
This may not be a major change in one sense, because you're still dealing with a cloud provider. But it may also mean that you will have to be more involved in performance monitoring to know if you're meeting your goals for performance at the edge. And since many applications will effectively stop functioning if latency goals aren't met, this means you'll probably need to change your SLA and remediation tactics when trouble inevitably happens. To help, here are some things to think about:
- Are any of your applications latency sensitive? That is, do you use a data repository where some kind of real-time response is required? And if it's not latency sensitive now, do IT trends in your industry indicate that this will change in the near future? If the answer to any of these is "yes," then start investigating how to improve latency on your in-house networking infrastructure as well as through all of your cloud service providers.
- Do you anticipate an increase in mobile or remote operations as 5G brings you better connectivity? Will 5G bring with it changes in devices or apps used by your business? If so, then planning for a lengthy testing and remediation cycle will be critical.
- Will your existing enterprise network meet the performance demands as computing moves to the edge? The short answer here is almost always "no," but the devil is in the details. By staying on top of the new application trends in your business sector, you should be able to pinpoint not only whether edge computing and 5G will impact your business, but probably how. Once you know that, start investigating what will be required to get your network to this new optimal state and adjust your plans accordingly.
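Investigating any of the questions above starts with actually measuring latency rather than guessing. The sketch below times a TCP handshake and compares it against a latency budget; the 50 ms threshold is a hypothetical figure, and the throwaway loopback listener merely stands in for a real endpoint, so you'd point it at your own edge or cloud services in practice.

```python
# A minimal latency spot-check: a sketch, not a monitoring product.

import socket
import threading
import time

def tcp_connect_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Time a TCP handshake to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

SLA_THRESHOLD_MS = 50.0  # hypothetical latency budget for a latency-sensitive app

# Self-contained demo: a throwaway loopback listener stands in for a real
# service; substitute your service's host and port in practice.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(5)
threading.Thread(target=listener.accept, daemon=True).start()

latency = tcp_connect_ms("127.0.0.1", listener.getsockname()[1])
status = "within SLA" if latency <= SLA_THRESHOLD_MS else "SLA breached"
print(f"{latency:.2f} ms ({status})")
```

Run periodically from the locations where your users actually sit, a probe like this gives you the baseline numbers you need before renegotiating an SLA or deciding whether an edge cloud is worth the move.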
You may find that in addition to preparing for your data living at the edge of the network, your network between the edge and the core will have to be upgraded. Eventually, all that data needs to move to a location where analysis is possible and that will require some very heavy lifting in many cases.
As you might suspect, this process isn't new and, for that matter, neither is edge computing. What's new is that it's quickly becoming much more widespread, as is the concept of edge clouds and mobile edge computing coupled with the demands of 5G wireless networks and mobile devices. Again, this will affect everything from your infrastructure to your application and security stack. The edge is no longer on the horizon; it's right in front of you. Plan accordingly.