MidYear Progress Report - University of Maryland

System Identification of Nonlinear State-Space
Battery Model
December 2012 Progress Report
Author:
Wei He
weihe@calce.umd.edu
Center for Advanced Life Cycle Engineering
Department of Mechanical Engineering
University of Maryland, College Park
Advisor:
Dr. Chaochao Chen
chaochao@umd.edu
Center for Advanced Life Cycle Engineering
Department of Mechanical Engineering
University of Maryland, College Park
Abstract
The goal of this project is to solve the parameter estimation problem for a nonlinear state-space model used in battery state of charge estimation. An Expectation Maximization (EM) algorithm is employed to solve this problem. The Expectation (E) step involves solving a nonlinear state estimation problem, which is addressed in this project using particle filter and smoother algorithms. This semester, the particle filter and smoother were implemented and validated in Matlab.
1. Introduction
Electric vehicles (EVs) powered by lithium-ion batteries are expected to penetrate the automobile market within the next few years, due to increasing concerns about global warming and fossil fuel depletion. However, challenges remain to be solved for EVs. The most notable one is state of charge (SOC) estimation, which can be used for remaining-range prediction and optimal battery control. SOC is defined as the remaining charge in the battery expressed as a percentage of its maximum capacity: when the battery is full, the SOC is 100%; when it is empty, the SOC is 0%. To estimate the SOC, many equivalent circuit models (ECMs) have been developed to model the dynamics of the battery system [1-3]. Fig. 1 shows a commonly used ECM [1]. In this figure, Vt is the terminal voltage of the battery, which can be measured by a voltage sensor. OCV is the open circuit voltage of the battery, a monotonic nonlinear function of SOC; the relationship between OCV and SOC can be established by battery tests. Rp and Cp are the double-layer resistance and capacitance, respectively, and R0 is the series resistance of the battery. It should be noted that the ECM in Fig. 1 is only an approximation of the real system; if more capacitors and resistors are added to the model, the modeling accuracy may increase. Eq. (1) gives the equations of the ECM in Fig. 1 in continuous form.
Figure 1 An equivalent circuit model of batteries
$$\dot{V}_p = -\frac{V_p}{C_p R_p} + \frac{I_L}{C_p}, \qquad V_t = OCV(SOC) - V_p - I_L R_0 \tag{1}$$

We can reformulate Eq. (1) into a discrete state-space representation as follows:

$$V_{p,k+1} = \exp\left(-\frac{T}{C_p R_p}\right) V_{p,k} + R_p \left(1 - \exp\left(-\frac{T}{C_p R_p}\right)\right) I_{L,k} \tag{2}$$

$$V_{t,k} = OCV(SOC_k) - R_0 I_{L,k} - V_{p,k} \tag{3}$$

where k is the sampling index and T is the sampling interval.
To model the dynamic evolution of the hidden state SOCk, the Coulomb counting principle can be used,
which is defined by:
SOCk 1  SOCk 
I L , k T
(4)
Qmax
where Qmax is the maximum capacity of a battery, and IL is the current flowing through the battery.
The combination of Eq. (2), Eq. (3), and Eq. (4) forms a state-space model for battery SOC estimation. Since SOC and Vp are not directly observable, they are set as the state variables; as a result, Eq. (2) and Eq. (4) are the process functions of the model. Vt can be directly measured by a sensor, so Eq. (3) is the measurement function. The input of the model is IL,k and the output is Vt,k. We assume that the states SOC and Vp and the output Vt are corrupted with independent zero-mean Gaussian noise. The state-space representation of the battery system is summarized as follows:
Process function:

$$\begin{bmatrix} SOC_{k+1} \\ V_{p,k+1} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & \exp\left(-\frac{\Delta t}{C_p R_p}\right) \end{bmatrix} \begin{bmatrix} SOC_k \\ V_{p,k} \end{bmatrix} + \begin{bmatrix} -\Delta t / Q_{max} \\ R_p \left(1 - \exp\left(-\frac{\Delta t}{C_p R_p}\right)\right) \end{bmatrix} I_{L,k} + \begin{bmatrix} \omega_{SOC,k} \\ \omega_{V_p,k} \end{bmatrix} \tag{5}$$

Measurement function:

$$V_{t,k} = OCV(SOC_k) - R_0 I_{L,k} - V_{p,k} + \nu_k$$

where $\omega_{SOC} \sim N(0, \sigma_{SOC})$, $\omega_{V_p} \sim N(0, \sigma_{V_p})$, and $\nu \sim N(0, \sigma_{\nu})$.

Therefore, the parameters of this model are $\theta = \{R_p, C_p, R_0, Q_{max}, \sigma_{SOC}, \sigma_{V_p}, \sigma_{\nu}\}$. The model parameters change with the loading conditions of the battery; for example, R0 may decrease as temperature increases. Thus, we need to update these parameters to reflect the true system response in order to obtain accurate SOC estimates.
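The discrete model above can be sketched in Python with NumPy (rather than the project's Matlab) as a noise-free simulation step. The parameter values and the linear OCV curve are the ones assumed later in the validation (Section 3.1); converting Qmax from Ah to coulombs so that T can be given in seconds is our own convention, not something stated in the report.

```python
import numpy as np

# Assumed parameter values, taken from the Section 3.1 validation settings
Q_MAX = 1.07 * 3600.0       # maximum capacity, converted from Ah to coulombs
R0, RP, CP = 0.1, 0.1, 1e3  # series resistance (ohm), RC pair (ohm, F)
DT = 1.0                    # sampling interval T (s)

def ocv(soc):
    """Linear OCV(SOC) curve assumed in the simulated validation."""
    return 1.6 * soc + 2.0

def step(soc, vp, i_l, dt=DT):
    """One noise-free step of the process model, Eq. (2) and Eq. (4)."""
    a = np.exp(-dt / (CP * RP))
    soc_next = soc - i_l * dt / Q_MAX        # Coulomb counting, Eq. (4)
    vp_next = a * vp + RP * (1.0 - a) * i_l  # RC relaxation, Eq. (2)
    return soc_next, vp_next

def measure(soc, vp, i_l):
    """Measurement model, Eq. (3): noise-free terminal voltage."""
    return ocv(soc) - R0 * i_l - vp
```

A simulated discharge trace is then just repeated calls to `step` and `measure` with Gaussian noise added to each output.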
2. Approach
2.1 Expectation Maximization
This project considers estimating the unknown parameters  in state-space models
xk 1  f k  xk , uk ,   k   ,
k  
pk 
yk  hk  xk , uk ,   ek   ,
ek  
pek 
(6)
based on the information in the measured input-output responses
$$U_{1:N} = \{I_{L,1}, I_{L,2}, \ldots, I_{L,N}\}, \qquad Y_{1:N} = \{V_{t,1}, V_{t,2}, \ldots, V_{t,N}\} \tag{7}$$
using a maximum likelihood (ML) framework
$$\hat{\theta} = \arg\max_{\theta}\, p_{\theta}(Y_{1:N}) \tag{8}$$
Eq. (8) is equivalent to maximizing the log-likelihood function of $Y_{1:N}$:
$$\hat{\theta} = \arg\max_{\theta}\, L_{\theta}(Y_{1:N}) \tag{9}$$
where L Y1:N  is:
N
L Y1:N   log p Y1:N   log p Y1    log p  yk | Y1:k 1 
(10)
k 2
Given a set Y1:N of observed data, a set of unobserved latent data or missing values X 1:N , and a vector of
unknown parameters  , the EM algorithm seeks to find the MLE of the marginal likelihood by iteratively
applying the following two steps [4]:
1. Expectation step (E step): calculate the expected value of the log-likelihood function with respect to the conditional distribution of $X_{1:N}$ given $Y_{1:N}$ under the current parameter estimate $\theta_i$:
$$Q(\theta, \theta_i) = E_{\theta_i}\left[ L_{\theta}(X_{1:N}, Y_{1:N}) \mid Y_{1:N} \right] = \int L_{\theta}(X_{1:N}, Y_{1:N})\, p_{\theta_i}(X_{1:N} \mid Y_{1:N})\, dX_{1:N} \tag{11}$$
$$L_{\theta}(X_{1:N}, Y_{1:N}) = \log p_{\theta}(Y_{1:N} \mid X_{1:N}) + \log p_{\theta}(X_{1:N}) = \log p_{\theta}(x_1) + \sum_{k=1}^{N-1} \log p_{\theta}(x_{k+1} \mid x_k) + \sum_{k=1}^{N} \log p_{\theta}(y_k \mid x_k) \tag{12}$$
2. Maximization step (M step): find the parameter that maximizes this quantity:
i 1  arg max Q  ,i 
(13)

If not converged, update i→i+1 and return to step 2
It has been proved in Ref. [4] that $L_{\theta_{i+1}}(Y_{1:N}) - L_{\theta_i}(Y_{1:N}) \geq Q(\theta_{i+1}, \theta_i) - Q(\theta_i, \theta_i)$, which implies that increasing $Q(\theta_{i+1}, \theta_i)$ ensures an increase of the log-likelihood $L_{\theta_{i+1}}(Y_{1:N})$.
When the model (6) is linear and the process and measurement noises $w_k$ and $e_k$ are Gaussian, Eq. (11) can be computed with standard Kalman filtering and smoothing recursions. However, in the nonlinear and/or non-Gaussian case, other approaches must be employed. In this study the particle filter and smoother are used to compute Eq. (11). Applying the conditional expectation operator $E_{\theta_i}[\,\cdot \mid Y_{1:N}]$ to both sides of Eq. (12), we have [4]:
$$Q(\theta, \theta_i) = I_1 + I_2 + I_3 \tag{14}$$
where
$$I_1 = \int \log p_{\theta}(x_1)\, p_{\theta_i}(x_1 \mid Y_{1:N})\, dx_1 \tag{15}$$
$$I_2 = \sum_{k=1}^{N-1} \iint \log p_{\theta}(x_{k+1} \mid x_k)\, p_{\theta_i}(x_{k+1}, x_k \mid Y_{1:N})\, dx_k\, dx_{k+1} \tag{16}$$
$$I_3 = \sum_{k=1}^{N} \int \log p_{\theta}(y_k \mid x_k)\, p_{\theta_i}(x_k \mid Y_{1:N})\, dx_k \tag{17}$$
The densities $p_{\theta_i}(x_1 \mid Y_{1:N})$ in Eq. (15) and $p_{\theta_i}(x_k \mid Y_{1:N})$ in Eq. (17) are smoothing densities and can be computed using a particle smoother [4-6]. In Eq. (16), $p_{\theta_i}(x_{k+1}, x_k \mid Y_{1:N})$ can be rewritten as
$$p_{\theta_i}(x_{k+1}, x_k \mid Y_{1:N}) = p_{\theta_i}(x_k \mid x_{k+1}, Y_{1:N})\, p_{\theta_i}(x_{k+1} \mid Y_{1:N}) = p_{\theta_i}(x_k \mid x_{k+1}, Y_{1:k})\, p_{\theta_i}(x_{k+1} \mid Y_{1:N}) = \frac{p_{\theta_i}(x_{k+1} \mid x_k)\, p_{\theta_i}(x_k \mid Y_{1:k})}{p_{\theta_i}(x_{k+1} \mid Y_{1:k})}\, p_{\theta_i}(x_{k+1} \mid Y_{1:N}) \tag{18}$$
Therefore, the particle filter and smoother representations can be used to deliver an importance sampling approximation to $I_2$.
If we substitute the particle smoother representation $p_{\theta_i}(x_k \mid Y_{1:N}) \approx \sum_{j=1}^{M} w_{k|N}^j\, \delta(x_k - x_k^j)$ and the particle filter representation $p_{\theta_i}(x_k \mid Y_{1:k}) \approx \sum_{j=1}^{M} w_k^j\, \delta(x_k - x_k^j)$ into Eq. (15), Eq. (16), and Eq. (17), we obtain the approximations
$$\hat{I}_1 = \sum_{j=1}^{M} w_{1|N}^j \log p_{\theta}(x_1^j), \quad \hat{I}_2 = \sum_{k=1}^{N-1} \sum_{i=1}^{M} \sum_{j=1}^{M} w_{k,k+1|N}^{ij} \log p_{\theta}(x_{k+1}^j \mid x_k^i), \quad \hat{I}_3 = \sum_{k=1}^{N} \sum_{j=1}^{M} w_{k|N}^j \log p_{\theta}(y_k \mid x_k^j) \tag{19}$$
where the pairwise weights $w_{k,k+1|N}^{ij}$ follow from the factorization in Eq. (18). Eq. (19) provides a way to calculate $Q(\theta, \theta_i)$ for any nonlinear state-space model. The EM method with this particle approximation is called particle EM in the literature. A summary of the particle EM algorithm is given below [4].
1. Set i = 0 and initialize $\theta_i$.
2. Expectation (E) step:
   a) Run the particle filter and particle smoother.
   b) Calculate $\hat{Q}_M(\theta, \theta_i) = \hat{I}_1 + \hat{I}_2 + \hat{I}_3$.
3. Maximization (M) step: compute $\theta_{i+1} = \arg\max_{\theta} \hat{Q}_M(\theta, \theta_i)$.
4. Check the termination condition: if $Q(\theta_{i+1}, \theta_i) - Q(\theta_i, \theta_i) > \epsilon$, update $i \to i+1$ and return to step 2; otherwise terminate.
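The iteration above can be sketched as a generic loop in Python. The `e_step` and `m_step` callables are placeholders standing in for the particle filter/smoother computation of Q and its maximizer; the convergence test here compares successive Q values, a minor simplification of the termination check in step 4.

```python
def particle_em(theta0, e_step, m_step, eps=1e-6, max_iter=100):
    """Generic EM loop in the shape of steps 1-4 above.

    e_step(theta) -> (Q_value, stats)  # E step placeholder (PF + smoother)
    m_step(stats) -> theta_new         # M step placeholder (maximize Q)
    """
    theta, q_old = theta0, None
    for _ in range(max_iter):
        q_new, stats = e_step(theta)
        # Terminate when the Q improvement falls below the threshold
        if q_old is not None and q_new - q_old <= eps:
            break
        theta, q_old = m_step(stats), q_new
    return theta
```

With the battery model, `e_step` would run the particle filter and smoother of Sections 2.2-2.3 and `m_step` would maximize $\hat{Q}_M$ over the parameter vector $\theta$.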
2.2 The particle filter algorithm
This section introduces the principles of the particle filter, which follows a Bayesian filtering framework comprising two steps: prediction and update. The prediction step propagates the distribution to the next time point based on the process model, producing a prior for the update step:
$$p(x_{k+1} \mid Y_{1:k}) = \int p(x_{k+1} \mid x_k, Y_{1:k})\, p(x_k \mid Y_{1:k})\, dx_k = \int p(x_{k+1} \mid x_k)\, p(x_k \mid Y_{1:k})\, dx_k \tag{20}$$
In the update step, the posterior distribution is updated with the new measurement using Bayes' rule:
$$p(x_{k+1} \mid Y_{1:k+1}) = p(x_{k+1} \mid y_{k+1}, Y_{1:k}) = \frac{p(y_{k+1} \mid x_{k+1}, Y_{1:k})\, p(x_{k+1} \mid Y_{1:k})}{p(y_{k+1} \mid Y_{1:k})} = \frac{p(y_{k+1} \mid x_{k+1})\, p(x_{k+1} \mid Y_{1:k})}{p(y_{k+1} \mid Y_{1:k})} \tag{21}$$
where $p(y_{k+1} \mid Y_{1:k}) = \int p(y_{k+1} \mid x_{k+1})\, p(x_{k+1} \mid Y_{1:k})\, dx_{k+1}$.
Eqs. (20) and (21) form a recursive Bayesian solution to the filtering problem. In general, however, this solution is only conceptual, because the propagation of the distributions is rarely tractable analytically. An approximate numerical solution can be obtained using Monte Carlo sampling: the idea is to represent the distribution by a set of random samples with associated weights:
$$p(x_k \mid Y_{1:k}) \approx \sum_{i=1}^{M} w_k^i\, \delta(x_k - x_k^i) \tag{22}$$
where $x_k^i$, $i = 1, 2, \ldots, M$, is a set of independent random samples drawn from a proposal distribution $\pi(x_k^i \mid Y_{1:k})$, and $w_k^i$ is the Bayesian importance weight associated with each sample $x_k^i$. The weight can be obtained by:
$$w_k^i \propto \frac{p(x_k^i \mid Y_{1:k})}{\pi(x_k^i \mid Y_{1:k})} \tag{23}$$
We sample $x_k^i$ from $\pi(x_k^i \mid Y_{1:k})$ instead of $p(x_k^i \mid Y_{1:k})$ because the target distribution $p(x_k^i \mid Y_{1:k})$ is usually not available for direct sampling. If we choose the proposal distribution $\pi(x_k^i \mid Y_{1:k})$ to be the transition density $p(x_k \mid x_{k-1})$, the weights become
$$w_k^i \propto \frac{p(x_k^i \mid Y_{1:k})}{p(x_k^i \mid x_{k-1}^i)} \propto \frac{p(y_k \mid x_k^i)}{p(y_k \mid Y_{1:k-1})} \tag{24}$$
Since the denominator is independent of $x_k^i$, we have:
$$w_k^i \propto p(y_k \mid x_k^i), \qquad w_k^i = \frac{p(y_k \mid x_k^i)}{\sum_{j=1}^{M} p(y_k \mid x_k^j)} \tag{25}$$
The main steps of the particle filter are summarized as follows [4]:
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1. Initialize the particles $\{x_0^i\}_{i=1}^{M} \sim p(x_0)$ and set k = 1.
2. Predict the particles by drawing M i.i.d. samples according to $x_k^i \sim p(x_k \mid x_{k-1}^i)$, $i = 1, \ldots, M$.
3. Compute the importance weights $w_k^i = \dfrac{p(y_k \mid x_k^i)}{\sum_{j=1}^{M} p(y_k \mid x_k^j)}$, $i = 1, \ldots, M$.
4. For each j = 1, ..., M, draw a new particle $\bar{x}_k^j$ with replacement (resample) according to $P(\bar{x}_k^j = x_k^i) = w_k^i$, $i = 1, \ldots, M$.
5. If k < N, increment $k \to k+1$ and return to step 2; otherwise terminate.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
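A minimal bootstrap particle filter following steps 1-5 above might look like the sketch below (Python/NumPy rather than the project's Matlab). For brevity it uses multinomial resampling at every step; the systematic scheme discussed next could be substituted. The `propagate`, `loglik`, and `x0_sampler` callables are placeholders for a concrete model's densities, not names from the report.

```python
import numpy as np

def bootstrap_pf(y, propagate, loglik, x0_sampler, M=500, rng=None):
    """Bootstrap particle filter following steps 1-5 above.

    propagate(x, rng) -> samples from p(x_k | x_{k-1}) for each particle
    loglik(yk, x)     -> log p(y_k | x_k) evaluated per particle
    Returns the weighted posterior mean at each time step.
    """
    rng = rng or np.random.default_rng(0)
    x = x0_sampler(M, rng)               # step 1: initialize particles
    means = []
    for yk in y:
        x = propagate(x, rng)            # step 2: predict
        logw = loglik(yk, x)             # step 3: weight ...
        w = np.exp(logw - logw.max())
        w /= w.sum()                     # ... and normalize, Eq. (25)
        means.append(np.sum(w * x))
        idx = rng.choice(M, size=M, p=w) # step 4: resample with replacement
        x = x[idx]
    return np.array(means)
```

A quick sanity check on a toy scalar random-walk model (constant true state observed in Gaussian noise) shows the filtered mean tracking the truth.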
Step 4 of the algorithm is a resampling step, which addresses the degeneracy problem of particle filters. Degeneracy means that after several iterations the weights of most particles are close to zero, so that a large computational effort is devoted to updating particles whose contribution to $p(x_k \mid Y_{1:k})$ is almost zero. The basic idea of resampling is to eliminate particles with small weights and concentrate on particles with high weights. In this study, we adopted the systematic resampling algorithm presented in Ref. [5]; its steps are shown below:
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Initialize the CDF: $c_1 = w_k^1$
For i = 2:M
    Construct the CDF: $c_i = c_{i-1} + w_k^i$
End For
Start at the bottom of the CDF: i = 1
Draw a starting point: $u_1 \sim U[0, M^{-1}]$
For j = 1:M
    Move along the CDF: $u_j = u_1 + M^{-1}(j-1)$
    While $u_j > c_i$
        i = i + 1
    End While
    Record index: $ind_j = i$
End For
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
In this algorithm, the indices of the particles after resampling are recorded, and the particles kept after resampling are used as the parent particles for state prediction at the next time point.
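The systematic resampling steps above can be sketched as a vectorized Python function; `np.searchsorted` replaces the explicit while-loop walk along the CDF.

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Systematic resampling: return parent indices for the next particle set.

    Builds the weight CDF once, then walks it with a single uniform draw
    u1 ~ U[0, 1/M] shifted by (j-1)/M, matching the pseudocode above.
    """
    rng = rng or np.random.default_rng(0)
    w = np.asarray(weights, dtype=float)
    M = w.size
    c = np.cumsum(w)                  # CDF: c_i = c_{i-1} + w_k^i
    c[-1] = 1.0                       # guard against floating-point round-off
    u = rng.uniform(0.0, 1.0 / M) + np.arange(M) / M  # u_j = u_1 + (j-1)/M
    return np.searchsorted(c, u)      # smallest i with u_j <= c_i
```

Because only one uniform number is drawn, the method has O(M) cost and lower resampling variance than multinomial resampling.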
2.3 Particle smoother
Similar to the particle filter, the particle smoother approximates the posterior distribution of $x_k$ given the entire measurement sequence $Y_{1:N}$ by a set of particles $x_k^i$, $i = 1, 2, \ldots, M$, with associated weights:
$$p(x_k \mid Y_{1:N}) \approx \sum_{i=1}^{M} w_{k|N}^i\, \delta(x_k - x_k^i) \tag{26}$$
To compute the $w_{k|N}^i$, we note the following fact based on the law of total probability and Bayes' rule:
$$p(x_k \mid Y_{1:N}) = \int p(x_k \mid x_{k+1}, Y_{1:N})\, p(x_{k+1} \mid Y_{1:N})\, dx_{k+1} = \int p(x_k \mid x_{k+1}, Y_{1:k})\, p(x_{k+1} \mid Y_{1:N})\, dx_{k+1} = p(x_k \mid Y_{1:k}) \int \frac{p(x_{k+1} \mid x_k)}{p(x_{k+1} \mid Y_{1:k})}\, p(x_{k+1} \mid Y_{1:N})\, dx_{k+1} \tag{27}$$
It is easy to see that when k = N the smoothing density and the filtering density coincide, so the weights $w_{N|N}^i = w_N^i$ in Eq. (26) and the particles $x_N^i$ are identical. As a result, we can work backwards: assuming the particle smoothing approximation is available at time k+1, we use it to evaluate the integral in Eq. (27) as:
$$\int \frac{p(x_{k+1} \mid x_k)}{p(x_{k+1} \mid Y_{1:k})}\, p(x_{k+1} \mid Y_{1:N})\, dx_{k+1} \approx \sum_{j=1}^{M} \frac{w_{k+1|N}^j\, p(x_{k+1}^j \mid x_k)}{\sum_{i=1}^{M} w_k^i\, p(x_{k+1}^j \mid x_k^i)} \tag{28}$$
Therefore, based on Eq. (27) and Eq. (28), we have
$$p(x_k \mid Y_{1:N}) \approx \sum_{i=1}^{M} w_{k|N}^i\, \delta(x_k - x_k^i), \qquad w_{k|N}^i = w_k^i \sum_{j=1}^{M} \frac{w_{k+1|N}^j\, p(x_{k+1}^j \mid x_k^i)}{\sum_{l=1}^{M} w_k^l\, p(x_{k+1}^j \mid x_k^l)}$$
The steps of the particle smoother are listed as follows [4]:
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1. Run the particle filter and store the particles $\{x_k^i\}_{i=1}^{M}$ and their weights $\{w_k^i\}_{i=1}^{M}$, for k = 1, ..., N.
2. Initialize the smoothed weights at time k = N to the terminal filtered weights, $w_{N|N}^i = w_N^i$, $i = 1, \ldots, M$, and set k = N-1.
3. Compute the smoothed weights $w_{k|N}^i$ from the filtered weights $\{w_k^i\}$ and particles $\{x_k^i, x_{k+1}^i\}_{i=1}^{M}$ via:
$$w_{k|N}^i = w_k^i \sum_{j=1}^{M} \frac{w_{k+1|N}^j\, p(x_{k+1}^j \mid x_k^i)}{v_k^j}, \qquad v_k^j = \sum_{i=1}^{M} w_k^i\, p(x_{k+1}^j \mid x_k^i)$$
4. Update $k \to k-1$. If k > 0 return to step 3; otherwise terminate.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
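The backward recursion above can be sketched as follows (Python/NumPy rather than the project's Matlab). Here `trans_pdf` is a placeholder for the model's transition density $p(x_{k+1} \mid x_k)$, evaluated elementwise on arrays.

```python
import numpy as np

def backward_smooth(xf, wf, trans_pdf):
    """Backward recursion for the smoothed weights (steps 1-4 above).

    xf, wf         : (N, M) arrays of filtered particles and weights
    trans_pdf(a, b): transition density p(x_{k+1}=a | x_k=b), elementwise
    Returns an (N, M) array of smoothed weights w_{k|N}^i.
    """
    N, M = xf.shape
    ws = np.empty_like(wf)
    ws[-1] = wf[-1]                            # step 2: w_{N|N} = w_N
    for k in range(N - 2, -1, -1):             # steps 3-4: k = N-1, ..., 1
        # f[i, j] = p(x_{k+1}^j | x_k^i)
        f = trans_pdf(xf[k + 1][None, :], xf[k][:, None])
        v = wf[k] @ f                          # v_k^j = sum_i w_k^i f[i, j]
        ws[k] = wf[k] * (f @ (ws[k + 1] / v))  # smoothed-weight update
    return ws
```

Because each normalizing term $v_k^j$ sums over all particles, the smoothed weights remain normalized at every time step; the recursion costs O(N M^2), which is the known price of this smoother.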
3. Validation
3.1 Simulated signal generation
In this project, the particle filter and smoother are validated using simulated data. Here, we assume that the model parameters in Eq. (5) are known, and simulated data were generated from Eq. (5). The parameter settings are as follows:
$Q_{max} = 1.07\ \mathrm{Ah}$, $R_0 = 0.1\ \Omega$, $R_p = 0.1\ \Omega$, $C_p = 10^3\ \mathrm{F}$, $\sigma_{SOC} = 10^{-6}$, $\sigma_{V_p} = 10^{-6}$, $\sigma_{\nu} = 10^{-3}$, $OCV(SOC) = 1.6\, SOC + 2$
Fig. 2 shows the voltage and current signal and Fig. 3 shows the evolution of the states.
[Figure: current (A) and terminal voltage Vt (V) vs. time (s), 0-8000 s]
Fig.2 Simulated input current and output voltage
[Figure: hidden states SOC and Vp (V) vs. time (s), 0-8000 s]
Fig. 3 Simulated SOC and Vp
3.2 Particle Filtering and Smoothing Results
The particle filter and particle smoother were implemented to estimate the states and compared with the
real states to evaluate the algorithm performance and correctness of the implementation. Fig.4 and Fig. 5
shows the particle filtering and smoothing results respectively. The red line is the real state and the blue
line is the estimation. It can be seen from the figures that the particle filter and smoother can provide
accurate state estimations. The root mean square (RMS) error of particle filter is 0.0063 and that of
particle smoother is 0.0050. For comparison, the Kalman filter was also implemented. Fig. 6 shows the
Kalman filtering result. Since we assume the OCV(SOC) here is a linear function, the Kalman filter
provides a optimal solution here, and RMS error is only 0.0045, which outperforms particle filter. Based
on the principle of particle filter, the accuracy of the particle filter will be increased if we use more
particles to approximate the distribution. Fig. 7 shows the RMS error as a function of the particle number.
It can be seen that as the particle number increases, the RMS error is approaching 0.0045. In the next step,
the OCV(SOC) will be changed to a nonlinear function, then it can be expected that the particle filter will
outperform the Kalman filter.
[Figure: true vs. estimated SOC over 0-8000 s]
Fig. 4 Particle filter-based SOC estimation
[Figure: true vs. estimated SOC over 0-8000 s]
Fig. 5 Particle smoother-based SOC estimation
[Figure: true vs. estimated SOC over 0-8000 s]
Fig. 6 Kalman filter-based SOC estimation
[Figure data: RMS SOC estimation error vs. number of particles - 50: 0.0063, 100: 0.0054, 200: 0.0049, 400: 0.0047, 800: 0.0045, 1600: 0.0045]
Fig. 7 The RMS estimation error of the SOC as a function of particle filter size
4. Future Work and Deliverables
The remaining work is to implement the full expectation maximization algorithm based on the particle smoother and to validate the full algorithm. The timeline is as follows:
- Implementation of the full algorithm: February 1
- Validation: March 15
- Testing: April 15
- Final Report: May 1
The algorithm is being written in Matlab 2010 on a Dell laptop. Once the project is finished, the deliverables will include the code for the particle filter, particle smoother, and particle EM; the datasets of the simulated battery discharge process; and the end-of-year progress report.
5. References
1. H. He, R. Xiong, and H. Guo, Online estimation of model parameters and state-of-charge of LiFePO4
batteries in electric vehicles. Applied Energy, 2012. 89(1): p. 413-420.
2. C. Hu, B.D. Youn, and J. Chung, A Multiscale Framework with Extended Kalman Filter for Lithium-Ion Battery SOC and Capacity Estimation. Applied Energy, 2012. 92: p. 694-704.
3. H.W. He, R. Xiong, and J.X. Fan, Evaluation of Lithium-Ion Battery Equivalent Circuit Models for
State of Charge Estimation by an Experimental Approach. Energies, 2011. 4(4): p. 582-598.
4. T.B. Schön, A. Wills, and B. Ninness, System identification of nonlinear state-space models.
Automatica, 2011. 47(1): p. 39-49.
5. M.S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, A tutorial on particle filters for online
nonlinear/non-Gaussian Bayesian tracking. Signal Processing, IEEE Transactions on, 2002. 50(2): p.
174-188.
6. A. Doucet and A.M. Johansen, A tutorial on particle filtering and smoothing: fifteen years later.
Handbook of Nonlinear Filtering, 2009: p. 656-704.