
GUIDE TO IMPLEMENTING EDGE AI

www.modzy.com
info@modzy.com
Edge computing is one of the hottest trends in IT today. By the end of 2023, there will be 43 billion connected devices on the market, and IDC research predicts more than half of new enterprise IT infrastructure will be at the edge. Additionally, Gartner research predicts that by 2025, more than 75% of enterprise data will be created and processed at the edge. It should come as no surprise, then, that this hot topic coincides with another trend: the explosion of artificial intelligence (AI) and machine learning (ML). To take advantage of these trends and the actionable insights hidden within the troves of data collected, teams need a way to centrally train and manage locally run AI and ML models at the edge to accelerate results and outcomes.
In this white paper, we’ll explore edge AI concepts, trends, and use
cases, as well as provide practical guidance for organizations
looking to take advantage of the data and opportunities that exist at
the edge.
Why Edge AI and why now?
First, edge computing is nothing new. It refers to distributing data storage and workloads close to where the data is being generated and where actions are being taken, with the goal of improving scalability, performance, and security. As shown in the spectrum below, edge computing can take many forms; however, to keep things simple, we will limit the scope to compute capabilities, device size, and location.
The Cloud to Edge Spectrum: Cloud Edge – Compute Edge – Device Edge – Sensor Edge – Far Edge
Cloud Edge – offers computing capabilities that you would find in a cloud service provider, e.g., MEC.
Compute Edge – functions as a localized, micro-data center that includes a limited range of resources and services you would find in the cloud, e.g., an edge server racked or placed near other devices or sensors.
Device Edge – much smaller compute and processing capabilities, e.g., NVIDIA Jetson modules, Raspberry Pi, Intel NUC.
Sensor Edge – comprises IoT sensors and devices that gather data, e.g., cameras, and interact directly with the cloud, compute, or device edge.
Far Edge – e.g., a microprocessor on board a robotic arm.
Use Cases: Automotive
Vehicles today are becoming more like computers, and there's seemingly no limit to the types of analysis that can be done on all the data being collected in an individual vehicle. Multi-access edge computing (MEC) provides many options for auto companies to deploy AI software inside mobile networks to analyze data coming off vehicles.
Now that you have a high-level understanding of what edge computing is, let's jump into the primary factors driving AI and ML to the edge using an example. Consider the shop floor of an industrial factory: it likely has a lot of machinery and possibly other wired or wireless endpoints and actuators such as cameras, sensors, and more. Without edge AI/ML solutions, all the data collected from those machines and endpoints must be sent to a centralized data center or cloud computing infrastructure for AI/ML processing and analysis. In most circumstances, the results are sent back to on-site operational technology (OT) systems to perform post-processing for optimization, on-site alerting, and other applications.
Use Cases: Manufacturing
Edge AI is transforming manufacturing facilities, powering visual inspection and defect detection systems, predictive maintenance, worker safety monitoring, inventory and supply chain monitoring, and more. Industrial facilities are prime locations for edge AI-enabled insights because of the convergence of large volumes of data being generated and the need for that data to be processed in an environment with bandwidth and networking constraints.
Factors Driving ML to the Edge
Bandwidth & Network Access
Real-time, Low Latency Results
Cost Considerations
Privacy & Security Concerns
By moving and processing the AI/ML workloads locally on or near the factory floor, you can immediately imagine reduced strain on network bandwidth from not having to transfer significant volumes of data back and forth. Multiply this scenario by the number of other factory locations in your supply chain and you can begin to see significant cost savings from reduced inbound and outbound traffic transfer fees. Other obvious benefits of moving AI/ML workloads to the edge include increased privacy and security, because your organization's proprietary data is not exposed to network vulnerabilities and attacks, and may not even have to leave the factory. Similarly, many OT systems on the factory floor execute critical functions in near real-time, necessitating the low latency that only real-time edge AI/ML results can provide.
Ultimately, moving AI analysis to the point of data collection can ensure that all the machines and factory floor assets remain fault tolerant, all while enabling faster, more secure AI results, and safer and more efficient operations.
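
To make the bandwidth and cost reasoning concrete, here is a minimal back-of-envelope sketch in Python. Every constant (camera count, frame size, egress fee) is an illustrative assumption, not a figure from this paper; substitute your own numbers.

# Back-of-envelope comparison: streaming raw camera frames to the cloud
# for inference versus inferring at the edge and sending only results.
# All constants are illustrative assumptions.
CAMERAS = 20                    # cameras on one factory floor
FRAME_KB = 200                  # average compressed frame size (KB)
FPS = 15                        # frames per second per camera
RESULT_BYTES = 512              # size of one inference result (JSON)
EGRESS_PER_GB = 0.09            # assumed cloud egress fee ($/GB)
SECONDS_PER_MONTH = 60 * 60 * 24 * 30

raw_gb = CAMERAS * FPS * FRAME_KB * 1024 * SECONDS_PER_MONTH / 1e9
result_gb = CAMERAS * FPS * RESULT_BYTES * SECONDS_PER_MONTH / 1e9

print(f"Cloud inference traffic: {raw_gb:,.0f} GB/month")
print(f"Edge inference traffic:  {result_gb:,.1f} GB/month")
print(f"Egress cost avoided:     ${(raw_gb - result_gb) * EGRESS_PER_GB:,.0f}/month")

With these assumed numbers, the raw stream is roughly 160,000 GB per month against about 400 GB of results, which is where the traffic transfer savings described above come from.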
Getting Started with Edge AI
Edge AI presents endless possibilities. Getting started doesn’t have to
be hard. The following steps outline a high-level implementation
approach that organizations can follow to start reaping the benefits of
moving AI/ML workloads to the edge.
Step 1: Pick an Edge AI Use Case
There are likely many processes ripe for AI/ML across your organization. Getting this step right is critical; get it wrong and you may end up with a few disenfranchised stakeholders who will make any future attempts an uphill battle. That said, it is important to get the right mix of stakeholders in the room at the outset of this kind of initiative. The more hats the merrier, but at a minimum be sure to have representation from business/operational, IT and security, data, and engineering groups. Equally important is not to assume everyone has the same background and understanding of AI/ML concepts and techniques.
Use Cases: Healthcare
In healthcare, many of the smart diagnostic tools and health assistants that medical care providers rely on are powered by edge AI, improving not only the quality of care with faster results as data is processed locally, but also expanding access to care as health assistants become more sophisticated.
An assessment rubric for prioritizing and picking the right edge AI/ML use case can be a useful tool; a simple scoring sketch follows the Risk questions below. At the very least, you should consider the following criteria when picking an edge AI use case.
Outcomes & Results:
Does the edge AI use case solve a real business problem or
introduce opportunities to generate revenue or create savings?
Does the edge AI use case provide real-time/near-real-time
processing and low latency?
Does the edge AI use case produce analytical insights that are of
strategic importance and value to the organization, its
stakeholders, or its customers?
Are the edge AI use case’s benefits measurable?
Can the edge AI use case be scaled to similar processes or
locations?
Use Cases: Retail
The possibilities for edge AI in retail are seemingly endless, including location-specific demand forecasting, supply chain resiliency, and immersive shopping experiences. Retailers can also now gain real-time visibility into inventory with automated scanning at millisecond intervals to ensure restocking when needed. OCR can be used to track and trace product labels as they move through the supply chain, improving insight into the status of supply chain operations.
Resources & Readiness:
Do we have the human and technical resources and capacity to
develop/train an AI/ML model internally or the budget resources
to hire outside support?
Can we get access to the right data to develop/train a performant
AI/ML model?
Do we have existing brownfield technologies (e.g., sensors,
actuators, devices) and infrastructure that can be used to support
this use case or the budget resources to acquire new technical
capabilities?
How quickly can we get the edge AI use case up and running for
testing?
Use Cases: Finance
Edge computing offers great promise for
retail banking to improve customer
service and offer more personalized
customer experiences, detect fraudulent
activity in real-time, power the low
latency results needed for high-frequency
algorithmic trading (HFT), and improve
compliance. The ability to analyze and
process sensitive data locally can ensure
better security and privacy in an industry
defined by stringent compliance standards.
Risk:
What are the implications if the edge AI use case does not
perform as expected?
How can we ensure transparency in edge AI use case results or
decisions?
Does the edge AI solution create potential risks for data or
personnel privacy violations?
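
As mentioned above, a lightweight weighted-scoring rubric can turn these questions into a comparable ranking across candidate use cases. The sketch below is one possible approach, not a prescribed method; the three criterion groups mirror the lists above, and the weights and example scores are illustrative assumptions to tune with your stakeholders.

# A minimal weighted-scoring rubric for ranking candidate edge AI use
# cases against the Outcomes, Resources & Readiness, and Risk criteria.
# Weights and example scores are illustrative assumptions.
WEIGHTS = {
    "outcomes": 0.5,   # business value, latency benefit, scalability
    "readiness": 0.3,  # data access, skills, existing infrastructure
    "risk": 0.2,       # scored so that a higher number means lower risk
}

def score(name, scores):
    """Each criterion group is scored 1-5 by the stakeholder group."""
    return name, round(sum(WEIGHTS[c] * scores[c] for c in WEIGHTS), 2)

candidates = [
    score("Visual defect detection",  {"outcomes": 5, "readiness": 4, "risk": 4}),
    score("Predictive maintenance",   {"outcomes": 4, "readiness": 3, "risk": 5}),
    score("Worker safety monitoring", {"outcomes": 4, "readiness": 2, "risk": 2}),
]

for name, total in sorted(candidates, key=lambda c: c[1], reverse=True):
    print(f"{total:.2f}  {name}")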
Step 2: Decide Your Edge AI Solution Architecture
As discussed, there are a lot of different things to consider when
designing your edge AI architecture–compute capabilities, device
size, and location. A recently published article on edge AI
architectures goes into more detail on four different architecture
design patterns.
Ultimately, the resulting design should be driven by your use case and the outcomes you are looking to achieve. That said, it is important to determine upfront where the AI workloads need to be performed and processed so that your resulting use case is fast, scalable, and secure. If fast response is important for your use case, your edge AI solution should support low-latency inference, which can facilitate real-time results when you need them.
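
If low latency matters, it is worth measuring it rather than assuming it. The harness below times repeated inference calls and reports percentiles; run_inference is a stand-in you would replace with your actual local model call (or a remote request, for comparison).

# Generic latency harness: time repeated inference calls and report
# median and p95 latency. run_inference is a placeholder to replace.
import statistics
import time

def run_inference(payload):
    time.sleep(0.004)  # simulate ~4 ms of local model compute
    return b"result"

def measure(n_calls=200):
    latencies_ms = []
    payload = b"\x00" * 1024
    for _ in range(n_calls):
        start = time.perf_counter()
        run_inference(payload)
        latencies_ms.append((time.perf_counter() - start) * 1000)
    latencies_ms.sort()
    print(f"median: {statistics.median(latencies_ms):.1f} ms")
    print(f"p95:    {latencies_ms[int(0.95 * n_calls)]:.1f} ms")

measure()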
Four Elements for Successful Edge AI
Central Management Hub
Operates while disconnected
Rapid inferencing
Device agnostic
Also, consider a central management hub for your ML/AI models. Managing your models in a central location allows you to deploy new versions of models easily and automatically to your edge devices, enables collaboration amongst teams, and gives you the ability to reuse or update models as needed. A central management hub also enables transparency by presenting model training frameworks and data, version history, and expected performance, and it ensures reproducibility of results by providing an audit trail of past predictions and the ability to set governance controls.
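
In practice, each edge device typically polls the hub for the latest approved model version and downloads it only when it changes. The sketch below illustrates that pattern against a hypothetical registry API; the URL and the version/download_url response fields are invented for illustration and do not refer to any specific product's endpoints.

# Poll a central model registry and update the local model only when a
# new version is published. Registry URL and response fields are
# hypothetical, for illustration only.
import json
import pathlib
import requests

REGISTRY = "https://hub.example.com/api/models"  # hypothetical hub URL
MODEL_NAME = "defect-detector"
MODEL_DIR = pathlib.Path("models")

def current_version():
    meta = MODEL_DIR / f"{MODEL_NAME}.json"
    return json.loads(meta.read_text())["version"] if meta.exists() else None

def sync_model():
    resp = requests.get(f"{REGISTRY}/{MODEL_NAME}/latest", timeout=10)
    resp.raise_for_status()
    latest = resp.json()  # e.g. {"version": "1.3.0", "download_url": "..."}
    if latest["version"] == current_version():
        return  # already up to date
    artifact = requests.get(latest["download_url"], timeout=60)
    artifact.raise_for_status()
    MODEL_DIR.mkdir(parents=True, exist_ok=True)
    (MODEL_DIR / f"{MODEL_NAME}.model").write_bytes(artifact.content)
    (MODEL_DIR / f"{MODEL_NAME}.json").write_text(json.dumps(latest))

sync_model()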
Next, your solution architecture should be device agnostic, able to support a wide range of chips and data transfer protocols. While many IoT devices come built with ARM chips, you'll want to make sure your solution also works on AMD chips. The same goes for data transfer protocols: while MQTT might be the standard in manufacturing, you'll also want to make sure your architecture works with gRPC, REST, etc. This can save a lot of time as you build future solutions and applications that will interact with the AI/ML inference results. Planning for spotty or disparate network connectivity can also ensure that your models run as expected, always, even if or when your devices go offline. Your architecture should support the ability to operate in a disconnected environment, as sketched below.
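
A common way to satisfy the disconnected requirement is store-and-forward: persist inference results durably on the device and flush them whenever the network returns. The sketch below uses SQLite as the local buffer; publish is a stand-in for whatever transport (MQTT, gRPC, REST) your architecture uses.

# Store-and-forward buffer for inference results: persist locally
# first, flush upstream when connectivity returns. publish() is a
# stand-in for your real transport (MQTT, gRPC, REST, ...).
import sqlite3
import time

db = sqlite3.connect("results_buffer.db")
db.execute("CREATE TABLE IF NOT EXISTS results (id INTEGER PRIMARY KEY, ts REAL, payload TEXT)")

def publish(payload):
    """Attempt to send one result upstream; return False when offline."""
    return False  # replace with a real send; False simulates no network

def record_result(payload):
    # Always write locally first so nothing is lost while offline.
    db.execute("INSERT INTO results (ts, payload) VALUES (?, ?)", (time.time(), payload))
    db.commit()

def flush():
    # Send buffered results oldest-first; stop at the first failure.
    rows = db.execute("SELECT id, payload FROM results ORDER BY id").fetchall()
    for row_id, payload in rows:
        if not publish(payload):
            return
        db.execute("DELETE FROM results WHERE id = ?", (row_id,))
        db.commit()

record_result('{"defect": true, "confidence": 0.97}')
flush()  # call periodically, e.g. on a timer or a network-up event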
Step 3. Decide and Execute the Plan
Once you have the use case identified, the solution architecture
designed, it is time to execute an implementation plan that will deliver
the projected outcomes and results. The implementation plan is a
necessary asset that can be iterated on in parallel to ensure
coordination, collaboration, and accountability across involved
stakeholders. Look to structure the plan with milestones at the 30-6090-day points. For example, strive to have an AI/ML model prototype
developed by day 30, your edge devices, central management hub,
and networking infrastructure in place by day 60, and the edge AI/ML
results integrated into business or operational workflows and
applications by day 90. At this point, you are ready to begin educating
the end users and collecting feedback and lessons learned that will
help you scale adoption and use across the organization successfully.
In Conclusion
Building AI solutions for the edge might be the future, but using some of the recommendations in this white paper and adopting a location-centric approach will help put the foundation in place to build these solutions today. By abstracting away some of the complexities of an infrastructure-centric approach to AI deployment, you can use the ideas from this white paper to support the efficient development of scaled edge AI solutions that run in any location: in the cloud, on-prem, and of course, at the edge.
About Modzy
Modzy enables organizations to deploy, integrate, run, and monitor ML/AI models anywhere: in the cloud, on-premises, or at the edge. With 15x faster model deployment and up to 80% cloud cost savings, teams use Modzy to build AI-powered solutions faster. The Modzy platform speeds up AI solution development times by 20x with powerful APIs, manages the chaos of deploying models to the cloud, on-prem, edge, and disconnected locations, and helps optimize and save infrastructure costs associated with running models in the cloud. For more information, visit modzy.com or request a custom demo to discuss your use case.