
Why You Should Be Experimenting with Edge Computing Right Now

By Destiny Bertucci and Patrick Hubbard  /  February 9, 2018

Edge is a point of transition, a demarcation point where one thing ends and another begins. It’s also a negotiated point of risk/reward where competing interests often strike a balance for mutual benefit. In the realm of application delivery, Edge Computing seeks to delight users with low latency without pushing business logic and compute all the way to the endpoint. Fortunately, real-world experimentation to determine if edge is a good fit for your business is becoming less complex and more affordable. That means that now is a great time to try edge for yourself.

Trusting Edge with Your Life

Although Amazon® AWS®, Azure®, Google® and others tend to talk about Edge Computing in terms of supporting IoT, there’s much more opportunity than that. Brick-and-mortar retailers with shrinking foot traffic will survive by creating interesting shopping experiences that online sellers can’t replicate. For them, hyper-personalization—where shoppers instantly “charm” their environments—will seem almost magical if latency is low.

Autonomous vehicles will continue to sport more and more capable Machine Learning-optimized processors like the NVIDIA DRIVE PX, but 20ms responses won’t cut it when the vehicle is going 80 mph. Edge will be the only way to conquer space and time, or at least the speed of light in fiber. Even with 5G’s gigabit bandwidth, it simply won’t be possible to transport every fact from every grain of sand on the beach back to the Cloud. Edge nodes, some operated by wireless providers in containers at the base of cell towers, will enable much richer services, at least until the backhaul catches up.
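To put those numbers in perspective, a quick back-of-the-envelope calculation (a sketch using the 20 ms and 80 mph figures above; the 2 ms edge figure is an illustrative assumption) shows how far a vehicle travels while waiting on a network round trip:

```python
# Distance a vehicle covers while waiting on a network round trip.
MPH_TO_MPS = 0.44704  # miles per hour -> meters per second

def distance_during_latency(speed_mph: float, latency_ms: float) -> float:
    """Meters traveled during one round-trip latency window."""
    return speed_mph * MPH_TO_MPS * (latency_ms / 1000.0)

# At 80 mph, a 20 ms cloud round trip means the car has already moved:
print(round(distance_during_latency(80, 20), 2), "m")  # ~0.72 m
# A hypothetical 2 ms edge response shrinks that to:
print(round(distance_during_latency(80, 2), 2), "m")   # ~0.07 m
```

Nearly a meter of blind travel per round trip is exactly the kind of gap that moving compute closer to the vehicle is meant to close.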

Edge Computing does come with one inescapable pain point that must be mitigated: distributing infrastructure is the opposite of what’s driving Cloud adoption. Increased complexity, more real estate, more management overhead, and a bigger attack surface are all unwelcome compromises.

Fortunately, a new crop of tools, including Amazon Greengrass and Azure Stack FaaS, is not only removing that pain, but making it possible to do real work at the edge. Some even create real knowledge at the edge. What makes these tools different is that they’re easily deployed everywhere, not just in the main data center. Data democratization is at work, and it comes with plenty of tooling and support. With it, most dev teams can get up and running in an afternoon.
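To sketch the pattern these tools enable (the function below is a hypothetical handler for illustration, not the Greengrass or Azure Stack API itself), an edge function typically filters and aggregates raw readings locally, forwarding only a compact summary upstream instead of every data point:

```python
# Hypothetical edge-side handler: aggregate a batch of raw sensor readings
# locally and forward only a small summary record to the Cloud.
from statistics import mean

def summarize_readings(readings: list, alert_threshold: float) -> dict:
    """Reduce a batch of raw readings to the compact record worth backhauling."""
    peak = max(readings)
    return {
        "count": len(readings),        # how many raw points we absorbed locally
        "mean": mean(readings),
        "max": peak,
        "alert": peak > alert_threshold,  # only alerts need a low-latency path
    }

# A local batch of temperature readings, summarized before leaving the edge:
batch = [21.4, 21.9, 22.1, 35.0, 21.7]
summary = summarize_readings(batch, alert_threshold=30.0)
print(summary)  # one small record upstream instead of five raw points
```

The design point is the ratio: the edge absorbs the raw firehose, and only the summary (and any alert) consumes backhaul bandwidth.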

Moving to the Edge

Moving to the edge starts with identifying the business case. Each organization will have unique opportunities and requirements. The surrounding infrastructure is often a consideration: in a big city, it’s typically reliable and robust, while in rural areas, even with fewer users, the infrastructure often wasn’t built for the demands of today’s technology.

Hybrid-edge is likely to be the approach of choice for meeting these different location scenarios. For example, workloads connected at the neighborhood level might be delivered close to the mobile network, running in what is essentially a distributed co-location facility operated by the mobile carrier, yet managed through the Cloud provider.

Out in the suburbs, a regional edge would most appropriately serve the purpose. When you look at an AWS edge location, for example, the full AWS stack isn’t utilized; instead, a subset of its most broadly used capabilities is pushed to edge locations at the metro level.

In the countryside, where the population is low, the density of that metro edge compute wouldn’t be justifiable. Instead, the Cloud would suffice because increased latency in the application would be an acceptable tradeoff.

Edge Security

In every case, an Edge Computing strategy must include compliance and security, just as in a centralized facility. Proper documentation provides cohesiveness and should ensure that distributed data centers are identical. Standardization begets protection, so wherever possible, create a uniform set of hardware and software: the fewer differences to manage, the easier management will be, and commonality will be a big advantage.

Ask questions like: Are we all on the same platform? Are users having the same experience? How do we push updates back and forth? What’s the recovery process across the distributed area? Will our business be interrupted by a failure we can’t recover from? Network, systems, and Cloud Management and Monitoring tools, along with a robust SIEM capability, will help answer these questions while optimizing and protecting the distributed environment.

Remember that a single point of failure at the edge could lock out millions of IoT devices or applications. And while distribution means more resiliency in the face of attacks on individual points, it also presents a larger attack surface. Make sure the whole team, from the director on down, takes part in testing and recovery exercises. Even the best security posture can fail without testing, especially at the edge.

Getting Edgy in your Lab

If you’ve been watching vendor announcements related to Edge Computing with curiosity, or have been wondering if IoT might be a valuable tool to transform your business, make sure you learn about it, ideally hands-on in your lab.

It’s easy to hesitate at first, thinking edge won’t be beneficial, but ask your friends and peers in other organizations whether they have started deploying systems outside their data centers but not in the public Cloud. If they have, buy them a beverage and pull up a chair for a chat about the details.

Once considered too complex, theoretical, or just not enterprise-ready, edge is set for prime time. Businesses should be experimenting because they are likely to discover value and services that outshine the competition. Even better, it’s now possible to do so without building regional data centers.

Sure, your on-premises data center or Cloud systems will do plenty of Machine Learning and computation, but edge just might allow you to create new and better customer experiences. Hybrid-IT isn’t all bad. There’s a chance it might actually let you put compute and knowledge right where you need them: close enough to take physics out of the equation, but concentrated enough that you don’t need compute at every endpoint. Embrace and experiment; it’s a great time to be in IT.

About the author

Destiny Bertucci is a Head Geek at SolarWinds® and a Cisco Certified Network Associate (CCNA), Master CIW Designer, and INFOSEC, MCITP SQL, and SolarWinds Certified Professional®. In her 15 years as a network manager, she has worked in healthcare and application engineering, and was a SolarWinds Senior Application Engineer for over nine years.  Patrick Hubbard is a Head Geek and Technical Product Marketing Director at SolarWinds®. With over 20 years of IT experience spanning network management, datacenter, storage networks, VoIP, virtualization, and more, Hubbard's broad knowledge and hands-on expertise affirm his IT generalist authority. Since joining SolarWinds in 2007, Hubbard has combined his technical expertise with his IT customer perspective to develop SolarWinds’ online demo platform, launch the Head Geek program and create helpful content that speaks to fellow networking and systems professionals.   Follow Destiny, Patrick, and SolarWinds at: SolarWinds Twitter, Patrick Twitter
