$y(t) + a_1 y(t-1) + \dots + a_{n_a} y(t-n_a) = b_1 u(t-1) + \dots + b_{n_b} u(t-n_b) + e(t)$   (1)
with $y(t)$ the output signal, $u(t)$ the input signal of the model, and $a_1, a_2, \dots, a_{n_a}, b_1, b_2, \dots, b_{n_b}$ the unknown parameters. The use of these kinds of models in estimation and identification problems is essentially motivated by the fact that the least-squares identification criterion then becomes an optimization problem that is analytically solvable.
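The difference equation (1) is straightforward to simulate directly; a minimal NumPy sketch, assuming a second-order model ($n_a = n_b = 2$) with illustrative coefficient values that are not taken from the paper:

```python
import numpy as np

# Hypothetical second-order ARX model; the coefficient values below
# are illustrative only, not estimates from the paper.
a = np.array([-1.5, 0.7])   # a_1, a_2
b = np.array([1.0, 0.5])    # b_1, b_2

rng = np.random.default_rng(0)
N = 800
u = rng.standard_normal(N)        # input signal u(t)
e = 0.1 * rng.standard_normal(N)  # white-noise equation error e(t)
y = np.zeros(N)

# y(t) = -a_1 y(t-1) - a_2 y(t-2) + b_1 u(t-1) + b_2 u(t-2) + e(t)
for t in range(2, N):
    y[t] = -a @ y[t-2:t][::-1] + b @ u[t-2:t][::-1] + e[t]
```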
Since the white noise term e(t) here enters as a direct error in the difference equation, the model is often called an equation error model. The adjustable parameters in this case are collected in the vector

$\theta = [a_1 \dots a_{n_a} \;\; b_1 \dots b_{n_b}]^T$

[Fig. 2. ARX modeled data (- - -) v/s actual data (—); y1 (sim) against Time (sec)]

[Fig. 3. ARMAX modeled data (- - -) v/s actual data (—); y1 (sim) against Time (sec)]
If we introduce

$A(q) = 1 + a_1 q^{-1} + \dots + a_{n_a} q^{-n_a}$
$B(q) = b_1 q^{-1} + \dots + b_{n_b} q^{-n_b}$

we see that the model corresponds to

$G(q, \theta) = \frac{B(q)}{A(q)}; \quad H(q, \theta) = \frac{1}{A(q)}$

Computing the predictor for the system above we get

$\hat{y}(t|\theta) = B(q)u(t) + [1 - A(q)]y(t)$   (2)

Now we introduce the vector

$\varphi(t) = [-y(t-1) \dots -y(t-n_a) \;\; u(t-1) \dots u(t-n_b)]^T$

Then we can write the above equation in the following form

$\hat{y}(t|\theta) = \theta^T \varphi(t) = \varphi^T(t)\, \theta$   (3)

The predictor is thus a scalar product between a known data vector $\varphi(t)$ and the parameter vector $\theta$.

3.2 ARMAX model

If the equation error in (1) is instead described as a moving average of white noise, we obtain the ARMAX model

$y(t) + a_1 y(t-1) + \dots + a_{n_a} y(t-n_a) = b_1 u(t-1) + \dots + b_{n_b} u(t-n_b) + e(t) + c_1 e(t-1) + \dots + c_{n_c} e(t-n_c)$   (4)

With $C(q) = 1 + c_1 q^{-1} + \dots + c_{n_c} q^{-n_c}$, this model corresponds to

$G(q, \theta) = \frac{B(q)}{A(q)}; \quad H(q, \theta) = \frac{C(q)}{A(q)}$

The predictor for the ARMAX model can be obtained as

$\hat{y}(t|\theta) = B(q)u(t) + [1 - A(q)]y(t) + [C(q) - 1]\varepsilon(t, \theta)$   (5)

where

$\varepsilon(t, \theta) = y(t) - \hat{y}(t|\theta)$

In this case our regression vector would be

$\varphi(t) = [-y(t-1) \dots -y(t-n_a) \;\; u(t-1) \dots u(t-n_b) \;\; \varepsilon(t-1, \theta) \dots \varepsilon(t-n_c, \theta)]^T$
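Because the prediction errors ε(t, θ) themselves depend on θ, this ARMAX regression is pseudo-linear rather than linear. For a given candidate θ, however, the errors can still be computed recursively from the predictor (5); a minimal sketch, where the function name and interface are illustrative rather than taken from the paper:

```python
import numpy as np

def armax_prediction_errors(y, u, a, b, c):
    """Recursively compute eps(t, theta) = y(t) - yhat(t|theta) for the
    ARMAX predictor; a, b, c hold the coefficients of A(q), B(q), C(q)
    (excluding the leading 1 of A and C). Illustrative helper only."""
    na, nb, nc = len(a), len(b), len(c)
    eps = np.zeros(len(y))
    for t in range(len(y)):
        yhat = 0.0
        # [1 - A(q)] y(t) = -a_1 y(t-1) - ... - a_na y(t-na)
        for k in range(1, na + 1):
            if t - k >= 0:
                yhat -= a[k - 1] * y[t - k]
        # B(q) u(t) = b_1 u(t-1) + ... + b_nb u(t-nb)
        for k in range(1, nb + 1):
            if t - k >= 0:
                yhat += b[k - 1] * u[t - k]
        # [C(q) - 1] eps(t, theta) = c_1 eps(t-1) + ... + c_nc eps(t-nc)
        for k in range(1, nc + 1):
            if t - k >= 0:
                yhat += c[k - 1] * eps[t - k]
        eps[t] = y[t] - yhat
    return eps
```

Minimizing the sum of squared ε(t, θ) over θ is then a nonlinear optimization problem, in contrast with the ARX case.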
Fig. 3 displays the ARMAX modeled data versus the actual data.
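Returning to the ARX case, the analytic solvability claimed earlier follows because (3) is linear in θ: stacking the regressors ϕ(t) row-wise into a matrix reduces the least-squares criterion to a single linear solve. A minimal sketch, assuming a second-order model with illustrative coefficients (none of the numbers come from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 800
theta_true = np.array([-1.5, 0.7, 1.0, 0.5])  # [a_1, a_2, b_1, b_2], illustrative
u = rng.standard_normal(N)
e = 0.1 * rng.standard_normal(N)
y = np.zeros(N)
for t in range(2, N):
    # phi(t) = [-y(t-1), -y(t-2), u(t-1), u(t-2)], as in (3)
    phi = np.array([-y[t - 1], -y[t - 2], u[t - 1], u[t - 2]])
    y[t] = phi @ theta_true + e[t]

# Stack the regressors and solve the least-squares problem in one shot.
Phi = np.array([[-y[t - 1], -y[t - 2], u[t - 1], u[t - 2]] for t in range(2, N)])
Y = y[2:]
theta_hat, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
print(theta_hat)  # close to theta_true
```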
3.3 Box-Jenkins model