Title: Assignment

1 Assignment
http://www.personal.reading.ac.uk/sis01xh/teaching/CS3assignment.htm
The assignment has three parts, all relating to least squares system identification. Part A develops software to do recursive least squares identification. Part B provides some real data and allows you to identify a mechanical "master" robot of a master/slave tele-manipulator. Part C (worth 20% of the marks) extends the RLS algorithm to include instrumental variables, a PLS noise model based on the moving average model, or both. You can treat Part C as optional. It is recommended that you use Matlab for the assignment, although you will not be penalised for using other programming languages.
2 Files (WWW/central university computer)
Matlab and data files are available at
http://www.personal.rdg.ac.uk/sis01xh/teaching/Sysid/datafiles/Datasets.htm
Here you will also find the files loadass.m (which will load the data into a suitable vector), crib.m (available as a skeleton for your program) and dord2.m (a way of generating a test system). Copy this program into an appropriate directory and run it under Matlab. Enter a file name and you will then have a variable y in the Matlab workspace containing the corresponding response data.
3 Skeleton code for recursive least squares estimate (see crib.m)
A skeleton for a recursive identification algorithm. Since the assignment assumes an ARX model this code does likewise. You need to set Na, Nb (the number of a and b coefficients) and LN (a large number). Nc is not needed if this is an ARX model. You also need to supply the algorithm! The code assumes that the output data is in a vector y and the input is in a vector u.

theta_nminus1 = zeros(Na+Nb,1);  % Initialise the estimate of theta to zero
P_nminus1 = LN*eye(Na+Nb);       % Initialise P, where LN is a large number
Theta = [];                      % History of theta starts here
% Step through the data and for each new point generate a new estimate
for n = 1:10                     % Change 10 to length(y) once you have the code working
    % Set py to the previous Na y values
    py = zeros(1,Na);
    for i = n-1:-1:n-Na
        if i > 0
            py(n-i) = y(i);
        end
    end

4 Skeleton code (continued)
    % Set pu to the previous Nb u values
    pu = zeros(1,Nb);
    for i = n-1:-1:n-Nb
        if i > 0
            pu(n-i) = u(i);
        end
    end
    % Construct varphi from py' and pu'
    varphi = ...
    % Use varphi(n), y(n), theta(n-1) and P(n-1) to iterate the next estimate
    epsilon = ...
    P = P_nminus1 - ...
    K = ...
    theta = theta_nminus1 + ...
    % Get ready for the next iteration
    theta_nminus1 = theta;
    P_nminus1 = P;
    % To get a history of theta
    Theta = [Theta; theta'];
end                              % and so it ends
If you have recorded the parameter evolution you can plot it with plot(Theta).
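If you prefer to work in Python (the assignment permits languages other than Matlab), the regressor construction in the skeleton above can be sketched as follows. This is a direct translation of the py/pu loops, with 1-based time index n as in the Matlab crib; the data vectors y and u here are synthetic stand-ins.

```python
import numpy as np

def regressor(y, u, n, Na, Nb):
    """Build varphi at time step n (1-indexed, as in the Matlab crib):
    the previous Na output values and the previous Nb input values,
    zero-padded when the history is shorter than the model order."""
    py = np.zeros(Na)
    for i in range(n - 1, n - Na - 1, -1):   # i = n-1, n-2, ..., n-Na
        if i > 0:
            py[n - i - 1] = y[i - 1]         # shift to 0-based storage
    pu = np.zeros(Nb)
    for i in range(n - 1, n - Nb - 1, -1):
        if i > 0:
            pu[n - i - 1] = u[i - 1]
    return np.concatenate([py, pu])

# Tiny synthetic check: with y = [1,2,3] and u = [4,5,6],
# at n = 3 the regressor holds the two previous samples of each signal.
y = np.array([1.0, 2.0, 3.0])
u = np.array([4.0, 5.0, 6.0])
print(regressor(y, u, 3, 2, 2))   # [y(2) y(1) u(2) u(1)] = [2. 1. 5. 4.]
```

Note the zero padding for i <= 0, which mirrors the `if i > 0` guard in the Matlab loops and handles the start-up transient where fewer than Na (or Nb) past samples exist.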
5 General form of recursive algorithms

$$\hat\theta(n) = \hat\theta(n-1) + K(n)\,\varepsilon(n)$$

where
$\hat\theta(n)$ is a vector of model parameters,
$\varepsilon(n) = y(n) - \varphi^T(n)\hat\theta(n-1)$ is the difference between the measured output and the estimated output at time n, and
$K(n)$ is the scaling - sometimes known as the Kalman gain.

Suppose we have all the data collected up to time n. Then consider the formation of the matrix $\Phi(n)$ at time n as

$$\Phi(n) = \begin{bmatrix} \varphi^T(1) \\ \vdots \\ \varphi^T(n) \end{bmatrix}$$
6 Derivation of recursive least squares

Given that $Y(n) = [y(1), \ldots, y(n)]^T$ is the collection of outputs up to time n, the least squares solution is

$$\hat\theta(n) = \left[\Phi^T(n)\Phi(n)\right]^{-1}\Phi^T(n)Y(n)$$

Now what happens when we increase n by 1? When a new data point comes in we need to re-estimate $\hat\theta$; this requires repeating the calculations and recalculating the inverse (expensive in computer time and storage).
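As a numerical illustration of the batch solution (and of why recomputing it for every new point is wasteful), here is a short Python sketch; the regressor matrix Phi and parameter vector theta_true are made-up stand-ins, not assignment data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical regressor matrix Phi(n) and true parameters (illustration only)
Phi = rng.standard_normal((50, 3))
theta_true = np.array([0.5, -1.0, 2.0])
Y = Phi @ theta_true                      # noise-free outputs

# Batch least squares: theta_hat = [Phi' Phi]^{-1} Phi' Y
theta_hat = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
print(theta_hat)                          # recovers theta_true exactly (no noise)

# When one new data point arrives, the batch route rebuilds everything:
phi_new = rng.standard_normal(3)
Phi2 = np.vstack([Phi, phi_new])          # whole matrix grows
Y2 = np.append(Y, phi_new @ theta_true)
theta_hat2 = np.linalg.solve(Phi2.T @ Phi2, Phi2.T @ Y2)  # full re-solve
```

Every new sample forces the products $\Phi^T\Phi$ and $\Phi^T Y$ and the linear solve to be redone from scratch; the recursion derived next avoids exactly this.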
7 Let us look at the expressions $\Phi^T(n)\Phi(n)$ and $\Phi^T(n)Y(n)$, and define

$$P(n) = \left[\Phi^T(n)\Phi(n)\right]^{-1}$$

Since $\Phi(n)$ is $\Phi(n-1)$ with the extra row $\varphi^T(n)$ appended,

$$\Phi^T(n)\Phi(n) = \Phi^T(n-1)\Phi(n-1) + \varphi(n)\varphi^T(n)$$
$$\Phi^T(n)Y(n) = \Phi^T(n-1)Y(n-1) + \varphi(n)y(n)$$
8
(1) $P^{-1}(n) = P^{-1}(n-1) + \varphi(n)\varphi^T(n)$
(2) $\Phi^T(n)Y(n) = \Phi^T(n-1)Y(n-1) + \varphi(n)y(n)$

The least squares estimate at data n:
(3) $\hat\theta(n) = P(n)\left[\Phi^T(n-1)Y(n-1) + \varphi(n)y(n)\right]$
(4) $\Phi^T(n-1)Y(n-1) = P^{-1}(n-1)\,\hat\theta(n-1)$

(Substitute (4) into (3))
$$\hat\theta(n) = P(n)\left[P^{-1}(n-1)\hat\theta(n-1) + \varphi(n)y(n)\right]$$
(Applying (1), i.e. $P^{-1}(n-1) = P^{-1}(n) - \varphi(n)\varphi^T(n)$)
$$\hat\theta(n) = \hat\theta(n-1) + P(n)\varphi(n)\left[y(n) - \varphi^T(n)\hat\theta(n-1)\right]$$
9 The RLS equations are

(5) $\varepsilon(n) = y(n) - \varphi^T(n)\hat\theta(n-1)$
(6) $\hat\theta(n) = \hat\theta(n-1) + K(n)\,\varepsilon(n)$
(7) $K(n) = P(n)\varphi(n)$
(8) $P(n) = \left[P^{-1}(n-1) + \varphi(n)\varphi^T(n)\right]^{-1}$

But we still require a matrix inverse to be calculated in (8).
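A minimal Python sketch of these update equations, on a synthetic data set (all names and values here are illustrative assumptions), confirms that the recursion reproduces the batch least squares estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([0.8, -0.4])        # made-up true parameters
N = 200
Phi = rng.standard_normal((N, 2))         # rows are varphi'(n)
Y = Phi @ theta_true + 0.01 * rng.standard_normal(N)

# RLS exactly as in the slide: epsilon, theta update, K = P*varphi,
# and the P update of (8), which still needs a matrix inverse.
LN = 1e6
theta = np.zeros(2)                       # theta(0) = 0
P = LN * np.eye(2)                        # P(0) = LN*I, LN a large number
for n in range(N):
    varphi = Phi[n]
    eps = Y[n] - varphi @ theta                              # (5)
    P = np.linalg.inv(np.linalg.inv(P) + np.outer(varphi, varphi))  # (8)
    K = P @ varphi                                           # (7)
    theta = theta + K * eps                                  # (6)

# Compare with the batch solution from slide 6
theta_batch = np.linalg.solve(Phi.T @ Phi, Phi.T @ Y)
print(np.allclose(theta, theta_batch, atol=1e-4))
```

The large initial P encodes "no confidence" in theta(0) = 0, so the recursion converges to the batch answer; the explicit inverse in (8) is what the matrix inversion lemma removes in the usual next step of the derivation.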