Intelligent Resource Prediction for OpenFlow-based Datacenter Testbed
Wei fan Hong 0056028
Department of Computer Science
National Chiao Tung University
dekger11@gmail.com
Abstract—In cloud computing, load prediction is a very important issue. A resource prediction mechanism is a key technology for efficient resource allocation, power saving, and immediate service provisioning. We propose using an artificial neural network to predict the resource demand of each application, with each application predicted independently. The advantage is that a different SLA standard can be set for each application, and the appropriate virtual machines can be assigned immediately to meet application requirements. Because time information can affect user traffic, we also train the artificial neural network on time information, which allows more accurate prediction.
[Figure: the provisioning manager contains the load balancing service, VM allocation service, SLA analyzer, performance monitoring, and resource prediction components; application requests are dispatched to application services running on VMs.]
Fig. 1. System architecture
In the following sections, we present the artificial neural network method for resource prediction. In Section II, we introduce the relevant resource prediction literature and compare it with our proposal. Section III explains the details of our method. In Section IV, we compare our method against another method in simulation and discuss the differences. Finally, we describe future expectations in Section V.
I. INTRODUCTION
In recent decades, network bandwidth and hardware technology have risen steadily, and the cloud needs stable network bandwidth and hardware technology to support it. Accordingly, the cloud system has become a feasible idea in modern computing. Through the cloud, machines solve large computation tasks for users. Cloud computing brings many advantages in user convenience: it consists of several applications that users can access quickly over the Internet, and the cloud system can handle the service requests from each user. Cloud applications, such as YouTube, increase user convenience. A cloud application can provide a user interface for solving problems, and the cloud system can allocate resources and apply load balancing algorithms to increase performance.
II. RELATED WORK
As shown in Fig. 2, load prediction can be done in a variety of ways, and it is applied in many cloud computing environments. For example, it can be applied to massively multiplayer online game environments to predict game load, and to cloud web servers to predict server load. Related papers propose many methods.
Figure 1 illustrates the autonomic system architecture. The load balancing service distributes the incoming requests among the leased resources while maintaining user sessions. The VM allocation service decides how much VM capacity to provide to support the services. Performance monitoring detects the amount of resources each application is using and passes this to the resource prediction component, which uses a neural network to estimate the resources the application will require in the future and checks for SLA violations.
[Figure: tree of load prediction methods — artificial neural network, with Max(CPU, memory, BW) [1] and Timeline [2] as variants; last value; sliding window median.]
Fig. 2. Comparison Tree
Nae, V. et al. [1] focus on resource provisioning in massively multiplayer online games. They compared many methods (neural network, average, moving average, last value, and exponential smoothing) to show that the neural network is the best load predictor. Finally, they applied the neural network to massively multiplayer online games and obtained better results in simulations.
                     | Predict server load              | Our design
SLA                  | Only a single, highest SLA level | A different SLA level for each
                     | for the whole server             | application
Resource utilization | Fixed virtual machine capacity   | Virtual machine capacity
                     | causes internal fragmentation    | allocated per service
Historical record    | Effectiveness is lost when the   | History serves as training data;
                     | server application changes       | virtual machines can be pre-assigned
Fig. 4. Method comparison table
As shown in Fig. 4, we propose predicting each application's future resource usage. The benefits are that a different SLA level can be set for each application, virtual machine capacity can be allocated according to each service, and, by using the historical record effectively as training data, virtual machines can be pre-assigned.
Reig, G. et al. [3] propose a fast analytical predictor and an adaptive machine-learning-based predictor, and also show how a deadline scheduler could use these predictions to help providers make the most of their resources.
III. THE LOAD PREDICTION ARCHITECTURE
We propose a load prediction method using an artificial neural network. The cloud environment needs to maintain the services on each operating node.
The neural model receives inputs x(n) from an external information source, applied to the input layer of sensory nodes. Each neuron is composed of two units: the first adds the products of the weight coefficients and the input signals, and the second realizes a nonlinear function called the neuron activation function. The desired response vector d(n) is obtained at the output layer of computation nodes. When the network is run, each layer performs its calculation on the input and transfers the result Oc to the next layer [2].
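The two neuron units described above can be sketched as follows. This is a minimal illustration in our own notation; the sigmoid activation and the concrete weights are illustrative assumptions, not values from the paper.

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: a weighted-sum unit followed by a nonlinear
    activation unit (a sigmoid here, as an illustrative choice)."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias  # first unit
    return 1.0 / (1.0 + math.exp(-s))                       # second unit

def layer(inputs, weight_rows, biases):
    """One layer: each neuron's output Oc is transferred to the next layer."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Three external inputs x(n) flowing through one hidden layer to one output.
x = [0.5, 0.2, 0.8]
hidden = layer(x, weight_rows=[[0.1, 0.4, -0.2], [0.3, -0.1, 0.2]],
               biases=[1.0, 1.0])
output = neuron(hidden, weights=[0.6, -0.3], bias=1.0)
```

A real implementation would learn the weights by backpropagation rather than fixing them by hand.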
                        | Prediction target     | Prediction method         | Application environment
MMOG [1]                | Server load           | Artificial neural network | MMOG server
Timeline [2]            | Server load           | Artificial neural network | General cloud & data center
Deadline schedulers [3] | Server load           | Euler                     | Cloud data center
Our design              | Each application load | Artificial neural network | General cloud
Fig. 3. Relative work comparison table
John J. Prevost et al. [2] focus on a load prediction architecture. They feed a timeline of load values into a neural network: the four most recent data points are used to estimate the next load value and to train the system. In their simulations, forecast accuracy is high.
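The timeline input of [2] can be sketched as a sliding window over the load history; `timeline_samples` is our hypothetical helper, not code from [2].

```python
def timeline_samples(load_history, window=4):
    """Pair each window of the most recent load values with the value
    that follows it, producing training samples for the predictor."""
    samples = []
    for i in range(len(load_history) - window):
        past = load_history[i:i + window]   # four old data points
        nxt = load_history[i + window]      # next value to estimate
        samples.append((past, nxt))
    return samples

history = [10, 12, 11, 13, 15, 14]
print(timeline_samples(history))
# → [([10, 12, 11, 13], 15), ([12, 11, 13, 15], 14)]
```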
As shown in Fig. 3, we compare the methods of different papers, each aimed at a different prediction problem in the cloud environment. In [1], an artificial neural network is applied to load prediction in massively multiplayer online games, with quite good performance. In [2], a timeline mechanism is proposed: past timeline values are fed into an artificial neural network trained on open-source data (EPA and NASA web server traces). In [3], the authors show how a deadline scheduler could use these predictions to help providers make the most of their resources.
In Fig. 5, in order to take more system information into consideration, we feed CPU usage, memory size, GPU usage, hard disk I/O, and time information into the input layer and train the artificial neural network. When training is complete, the artificial neural network produces a prediction output from each input; by summing this information over every virtual machine, we obtain the resources required to run the application and can allocate the most appropriate resources to it.
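A sketch of how the per-VM inputs could be gathered and summed into an application-level demand; `predict_vm_demand` is a hypothetical stand-in for the trained network's output, not the actual implementation.

```python
def predict_vm_demand(cpu, memory, gpu, disk_io, hour):
    """Stand-in for the trained ANN: in the real system, this
    five-feature vector would be the network's input layer."""
    features = [cpu, memory, gpu, disk_io, hour / 24.0]  # normalize time info
    return sum(features) / len(features)  # placeholder for the ANN output

def application_demand(vms):
    """Sum the predicted demand over every VM serving the application."""
    return sum(predict_vm_demand(*vm) for vm in vms)

# Two VMs, each described by (cpu, memory, gpu, disk_io, hour of day).
vms = [(0.6, 0.5, 0.1, 0.3, 14), (0.4, 0.7, 0.0, 0.2, 14)]
demand = application_demand(vms)
```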
[Figure: feedforward network for one application. Input layer: CPU usage, memory, GPU usage, hard disk I/O, and time information; each hidden neuron (with biases b1 = 1 through b10 = 1) computes a weighted sum ∑ followed by an activation function ƒ, producing the load prediction.]
Fig. 5. The Artificial Neural Network prediction method
Fig. 6. The Load prediction using Artificial Neural Network in Server
Figures 9 and 10 show the difference between the prediction and the actual values at the virtual machine level. We monitor request bytes and train the artificial neural network, using the first 70 percent of the data for training and estimating over the remaining 30 percent.
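The 70/30 split can be sketched as follows; the helper name and the synthetic series are ours, standing in for the monitored request bytes.

```python
def split_train_test(series, train_fraction=0.7):
    """Keep the first 70 percent of the monitored data for training
    and the remaining 30 percent for evaluating the prediction."""
    cut = int(len(series) * train_fraction)
    return series[:cut], series[cut:]

request_bytes = list(range(100))   # synthetic stand-in for monitored data
train, test = split_train_test(request_bytes)
print(len(train), len(test))  # → 70 30
```

Splitting chronologically (rather than shuffling) matters here, because the network must be evaluated on data that comes after its training window.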
When the estimate of future resources changes, the resource supply also needs to change with it. In order to complete the resource allocation, we define:
RPt = resource provision level at time t
RDt = resource demand at time t
RPt+1 = the predicted RDt+1
If RPt+1 > RPt, the system will be overloaded at the next time step, so we need to start new virtual machines to support the application. If RPt+1 < RPt, the load will be low at the next time step, so we can close or suspend virtual machines to reduce power consumption.
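The scaling rule above can be expressed directly; the function name and action strings are illustrative, not part of the paper's system.

```python
def scaling_decision(rp_next, rp_now):
    """Apply the rule: grow when RP(t+1) > RP(t), shrink when
    RP(t+1) < RP(t), otherwise keep the current allocation."""
    if rp_next > rp_now:
        return "start new VM"       # overload expected at the next step
    if rp_next < rp_now:
        return "suspend VM"         # low load expected: save power
    return "keep current VMs"

print(scaling_decision(8, 5))  # → start new VM
print(scaling_decision(3, 5))  # → suspend VM
```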
Fig. 7. The Load prediction using Artificial Neural Network from EPA web data
By monitoring resources and using the artificial neural network to predict and immediately allocate the necessary resources, we can guarantee that the SLA is not violated.
IV. EVALUATION AND DISCUSSION
Fig. 6, Fig. 7 and Fig. 8 show the difference between the actual and predicted values; the prediction line closely approaches the actual one. The artificial neural network achieves good prediction results.
Fig. 8. The Load prediction using Artificial Neural Network
from NASA web data
In Fig. 8, we show the difference between prediction and actual values at the server level. We monitor CPU, memory, and bandwidth, and train the artificial neural network using the first 70 percent of the data for training and estimating over the remaining 30 percent.
The mean squared error (MSE) is calculated as MSE = (1/n) Σ (actual_t − predicted_t)². The MSE expresses the difference between the actual and predicted values: the more accurate the prediction, the smaller the MSE.
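A direct implementation of this calculation (our sketch, with illustrative values):

```python
def mse(actual, predicted):
    """Mean squared error between actual and predicted loads;
    a smaller value means a more accurate prediction."""
    n = len(actual)
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n

actual = [10.0, 12.0, 11.0]
predicted = [9.0, 12.0, 13.0]
print(mse(actual, predicted))  # (1 + 0 + 4) / 3
```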
In Fig. 11, we compare feeding average(CPULoad, memoryLoad, bandwidthLoad) (avg) versus max(CPULoad, memoryLoad, bandwidthLoad) (max) into the artificial neural network. Because the maximum changes more dramatically, its MSE is relatively high. In other words, the average changes more gently, so its MSE is relatively low.
[2] Prevost, J.J.; Nagothu, K.; Kelley, B.; Jamshidi, M., "Prediction of Cloud Data Center Networks Loads Using Stochastic and Neural Models," in Proc. 6th International Conference on System of Systems Engineering (SoSE), June 2011, pp. 276-281.
[3] Reig, G.; Alonso, J.; Guitart, J., "Prediction of job resource requirements for deadline schedulers to manage high-level SLAs on the cloud," in Proc. 9th IEEE International Symposium on Network Computing and Applications (NCA), July 2010, pp. 162-167.
Fig. 11. Comparison of MSE in Server Level
In Fig. 12, we compare feeding a timeline (t to t-2) versus only the last value (t) into the artificial neural network. The timeline gives the artificial neural network more reference parameters, so its MSE is relatively high. In other words, the last value (t) is only a single parameter, so its MSE is relatively low.
Fig. 12. Comparison of MSE in Virtual Machine Level
V. CONCLUSION
In this paper, we introduce some recent research in the load balancing field. Better load prediction can increase stability and performance, and the artificial neural network method we propose achieves good results. These results provide a cloud framework that can be utilized, and the framework can manage the state changes required for optimal operation and control of the cloud system.
VI. REFERENCES
[1] Nae, V.; Iosup, A.; Prodan, R., "Dynamic Resource Provisioning in Massively Multiplayer Online Games," IEEE Transactions on Parallel and Distributed Systems, vol. 22, no. 3, pp. 380-395, March 2011.