Fuzzy-Neural Control with Application to Biomedicine




Copyright © IFAC Modelling and Control in Biomedical Systems, Galveston, Texas, USA, 1994

Derek A. Linkens* and Junhong Nie**

* Department of Automatic Control and Systems Engineering, University of Sheffield, S1 4DU, UK. ** Department of Electrical Engineering, National University of Singapore, Singapore 0511

1. Introduction

This paper addresses three fundamental and important issues concerning the implementation of a knowledge-based system, in particular a fuzzy rule-based system, for use in intelligent control. These issues are knowledge acquisition, computational representation, and reasoning. Our primary objective is to develop systems which are capable of performing self-organising and self-learning functions in a real-time manner under multi-variable system environments by utilising fuzzy logic, neural networks, and a combination of both paradigms, with emphasis on novel system architectures, algorithms, and applications to problems found in biomedical systems. Considerable effort has been devoted to making the proposed approaches as generic, simple and systematic as possible. The research consists of two independent but closely correlated parts. The first part involves mainly the utilisation of fuzzy algorithm-based schemes, whereas the second part deals primarily with fuzzy-neural network-based approaches. More specifically, in the first part a unified approximate reasoning model has been established, suitable for various definitions of linguistic connectives and for handling possibilistic and probabilistic uncertainties. The second (and major) part of the work has been devoted to the development of hybrid fuzzy-neural control systems. A key idea is first to map an algorithm-based fuzzy system onto a neural network, which carries out the tasks of representation and reasoning, and then to develop the associated self-organising and self-learning schemes which perform the task of knowledge (rule) acquisition.
Two mapping strategies, namely functional and structural mappings, are distinguished, leading to distributed and localised network representations respectively. In the former case, by adopting a typical back-propagation neural network (BNN) as a reasoner, a multi-stage approach to constructing such a BNN-based fuzzy controller has been investigated. Further, a hybrid fuzzy control system with the ability of self-organising and self-learning has been developed by adding a variable-structure Kohonen layer to a BNN network. In the latter case, we present a simple but efficient scheme which structurally maps a simplified fuzzy control algorithm onto a counterpropagation network (CPN). The system is characterised by explicit representation, self-construction of rule-bases, and fast learning speed. More recently, this structural approach has been utilised to incorporate fuzzy concepts into neural networks based on Radial Basis Functions (RBF) and the Cerebellar Model Articulation Controller (CMAC). Throughout this research, it has been assumed that the controlled processes are multi-input multi-output systems with pure time delays in controls, and that no mathematical model of the process exists, but some knowledge about the process is available. As a demonstrator application, a problem of dual-input/dual-output linear multivariable control for blood pressure management has been studied extensively with respect to each proposed structure and algorithm.

2. Fuzzy Logic-based Reasoning

Assume that the system to be controlled has n inputs and m outputs denoted by X_1, X_2, …, X_n and Y_1, Y_2, …, Y_m. Furthermore, it is assumed that the given L "IF situation THEN action" rules are connected by ALSO, each of which has the form of

IF X_1 is A_1^j AND X_2 is A_2^j AND … AND X_n is A_n^j THEN Y_1 is B_1^j AND Y_2 is B_2^j AND … AND Y_m is B_m^j

(1)


where A_i^j and B_k^j are fuzzy subsets which are defined on the corresponding universes, and represent fuzzy concepts such as big, medium, small etc. Then the problems of concern are how to represent the knowledge described by the rules numerically and how to infer an approximate action in response to a novel situation. One of the solutions is to follow the inferencing procedures used by traditional data-driven reasoning systems and take the fuzzy variables into account [1]. The basic idea is first to treat the L rules one-by-one and then to combine the individual results to give a global output. By adopting possibility theory as a representational tool, the obtained reasoning algorithm is given by

C(v) = ⊕_{k=1}^{L} [ (⊗_{i=1}^{n} p_k^i) → B_k(v) ]

(2)

where ⊕, ⊗, and → stand for the linguistic connectives ALSO, AND, and IF-THEN respectively, and p_k^i is the possibility measure defined by

p_k^i = Poss(C_i | A_i^k) = sup_{u_i ∈ U_i} min[C_i(u_i), A_i^k(u_i)]

(3)
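With discretised universes, this reasoning scheme can be sketched in Python as follows. Taking min for AND and IF-THEN and max for ALSO is one illustrative operator choice (the paper itself compares many such combinations), and all function names here are hypothetical:

```python
import numpy as np

def possibility(c, a):
    # Eq. (3): p = sup_u min[C(u), A(u)] over a shared discretised universe
    return float(np.max(np.minimum(c, a)))

def infer(rules, facts, n_points):
    """Sketch of (2): `rules` is a list of (antecedents, consequent) pairs,
    where antecedents is a list of membership vectors A_i^k and consequent
    is a membership vector B_k(v); `facts` holds the current inputs C_i."""
    fired = []
    for antecedents, consequent in rules:
        # AND over the per-input possibility measures, taken here as min
        p = min(possibility(c, a) for c, a in zip(facts, antecedents))
        if p > 0.0:
            # IF-THEN taken here as min-truncation of the consequent
            fired.append(np.minimum(p, consequent))
    if not fired:
        return np.zeros(n_points)
    # ALSO taken here as max-union over the fired rules
    return np.max(np.stack(fired), axis=0)
```

Swapping in product for min, or bounded sum for max, reproduces other operator combinations from the comparison study.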

The reasoning algorithm (2) can be interpreted as follows. First, the possibility measure p_k between the present input and each rule is calculated. If the rule is considered to be fired, the THEN part of that rule is modified to p_k → B_k(v). The K results from the K fired rules are combined according to the definition of ALSO to produce the final output. The numerous possible fuzzy operator combinations have been investigated on the blood pressure problem and their relative merits are presented in [2].

3. Functional Fuzzy-Neural Control

The first type of fuzzy-neural control considers a functional mapping utilising standard back-propagation neural networks (BNN). Within this context, a crucial aim is the automatic knowledge acquisition of the fuzzy rules. Aiming at establishing an identical environment in both training and application modes without involving any converting procedures, here the linguistic labels are represented by fuzzy numbers, each of which is typically characterised by a central value and an interval around the centre. The central value is most representative of the fuzzy number, and the width of the associated interval accounts for the degree of fuzziness. Thus the control rule (1) can be rewritten as

IF X is (u^j, δ^j) THEN Y is (v^j, γ^j)

(4)

where u^j, v^j, δ^j, and γ^j are the input and output central value vectors and width vectors in the jth rule. Now it is possible to train the BNN with only the central value vectors, i.e. (u^j, v^j) pairs, while leaving the width vectors untreated explicitly. This can be justified by the fact that the BNN network inherently possesses some fuzziness, which is exhibited in the form of interpolation over new situations [3]. This interpretation provides a basis for learning and extracting training rules directly from the controlled environment.

3.1 Neural-network based fuzzy control

The principal objective in this section is to develop a BNN-based fuzzy control system capable of constructing training rules automatically. The basic idea is first to derive the required control signals and then to extract a set of training samples [3]. Thus the whole process consists of four stages: on-line learning, off-line extracting, off-line training, and on-line application. This cycle may be repeated if necessary.

3.1.1. Learning

Fig. 1 shows a block diagram of the learning system consisting of a reference model, a learning mechanism, a controller input

formation block, a short term memory (STM), and the controlled process. By operating the learning mechanism iteratively, the desired control performances specified by the reference model subject to the command signals are gradually achieved, and the desired control action denoted by u_d is derived. At the same time, the controller input u together with the learned control u_d are stored in the STM as ordered pairs. We notice that there are two types of errors in Fig. 1, namely the learning error e_L and the control error e_C. The former is defined as the difference between the output of the reference model y_d and the output of the process y_p, whereas the latter is the discrepancy between the command signal r and the output of the process y_p. Supposing that the desired response vector y_d is designated by a diagonal transfer matrix, the goal of the learning system is to force the learning error e_L(t) asymptotically to zero, or to a predefined tolerance region ε, within a time interval T of interest by repeatedly operating the system. More specifically, we require that ||e_{L,k}(t)|| → 0 or ||e_{L,k}(t)|| < ε uniformly in t ∈ [0, T] as k → ∞, where k denotes the iteration number. Whenever convergence has occurred, the corresponding control action at that iteration is regarded as the learned control u_d. According to the learning goal described above, the corresponding learning law is given by [4]

u_{k+1}(t) = u_k(t) + P · e_{L,k}(t + λ) + Q · ė_{L,k}(t + λ)

(5)

where u_{k+1}, u_k ∈ R^m are the learning control vector-valued functions at the kth and (k+1)th iterations respectively, e_{L,k}, ė_{L,k} ∈ R^m are the learning error and its derivative vector-valued functions, λ is an estimated time advance corresponding to the time delay of the process, and P, Q ∈ R^{m×m} are constant learning gain matrices.

3.1.2. Extracting

While the effects of δ^j/γ^j can be automatically and implicitly accommodated by the BNN, the primary effort in extracting training rules will be devoted to deriving the central value vectors u^j and v^j. Suppose that, at the k_d-th learning iteration, the desired control vector sequence u_d(l) is obtained with which the response of the process is satisfactory, with l ∈ [0, l_d] and l_d the maximum sampling number. Meanwhile, the control error e_C(l) is measured, expanded, and denoted by u(l). Combining u(l) with u_d(l), a paired and ordered data set Γ_t : {u(l), u_d(l)} is created in the STM. Basically, obtaining a set of rules in the form of (4) based on the recorded data set is a matter of converting the time-based Γ_t into a value-based set Γ_v. Each data association in Γ_v is representative of several elements in Γ_t. Accordingly, the extracting process can be thought of as a grouping process during which l_d + 1 data pairs are appropriately classified into l_v groups, each of which is represented by one and only one data pair. Thus two functions, grouping and extracting, are involved. First, each component u_i(l) is scaled and quantised to the closest element of the corresponding universe, thereby forming a new data set {ū(l), u_d(l)}. Next, the data pairs in this set are divided into l_v groups by putting those pairs with the same value of ū(l) into one group Γ_p, that is,

Γ_p : (u^p, v_{d1}), (u^p, v_{d2}), …, (u^p, v_{dq})

(6)

Note that, due to the pure time delay possessed by the controlled process and the clustering effects, q different actions may be associated with the same u^p, referred to as a conflict group. Therefore the third step is to deal with conflict resolution. Here we employ a simple but reasonable scheme which takes the average of v_{d1}, v_{d2}, …, v_{dq} as their representative, denoted as v^p. After conflict resolution, we derive l_v distinct data pairs in a new data set Γ_v : {u^p, v^p}. By considering that u^p and v^p are the central values corresponding to some fuzzy numbers, the above data pairs can be expressed in the form "IF u^p THEN v^p" and can be used to train the BNN-based controller.
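The grouping and conflict-resolution steps above can be sketched as follows; the quantisation function and the use of a dictionary keyed by quantised input patterns are illustrative assumptions of this sketch:

```python
from collections import defaultdict
import numpy as np

def extract_rules(u_seq, ud_seq, quantise):
    """Group recorded (u(l), u_d(l)) pairs by their quantised input pattern
    and resolve each conflict group by averaging its actions, leaving one
    representative "IF u^p THEN v^p" pair per group."""
    groups = defaultdict(list)
    for u, ud in zip(u_seq, ud_seq):
        groups[tuple(quantise(u))].append(np.asarray(ud, dtype=float))
    # conflict resolution: average the q actions recorded for the same u^p
    return {key: np.mean(np.stack(acts), axis=0) for key, acts in groups.items()}
```

Averaging is only one reasonable conflict-resolution policy; taking the action recorded last, or the median, would fit the same skeleton.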


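The iterative learning law (5) of Section 3.1.1 can be sketched as below. Storing each signal as a (samples × m) array and realising the time advance λ as an integer index shift are assumptions of this sketch:

```python
import numpy as np

def learn_iteration(u_k, e_L, de_L, P, Q, advance):
    """One application of (5): u_{k+1}(t) = u_k(t) + P e_L,k(t+lam) + Q de_L,k(t+lam).
    Signals are (samples, m) arrays; `advance` shifts them forward in time to
    compensate for the pure time delay of the process."""
    e_adv = np.roll(e_L, -advance, axis=0)    # e_L,k(t + lam)
    de_adv = np.roll(de_L, -advance, axis=0)  # its derivative, advanced
    return u_k + e_adv @ P.T + de_adv @ Q.T
```

Each call corresponds to one iteration k; the loop terminates when the learning error falls inside the tolerance region ε.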

The disadvantage of this structure is that it requires a 4-stage process for knowledge acquisition and implementation. The next section describes an alternative approach which is a single-stage paradigm.

3.2 Hybrid Neural Network Fuzzy Controller

Fig. 2 shows a schematic of this system structure [5]. It consists of a basic feedback control loop containing a hybrid network-based fuzzy controller and a controlled process, and a performance loop composed of reference models and learning laws. Assuming that neither a control expert nor a mathematical model of the process is available, the objectives of the overall system are (a) to minimise the tracking error between the desired output specified by the reference model and the actual output of the process over the whole time interval of interest by adjusting the connection weights of the networks, and meanwhile (b) to construct control rule-bases dynamically by observing, recording, and processing the input and output data associated with the net-controller. The whole system performs the two functions of control and learning simultaneously. Within each sampling period, a feedforward pass produces the suitable control input to the process and the backward step learns both control rules and connection weights. The function of each component in Fig. 2 is described below.

Input formation. Corresponding to traditional PID controllers, the inputs to the network-based controller are chosen to be various combinations of control error ec, change-in-error ce, and sum of error se. Three possible controller modes denoted by the vector u can be constructed by combining ec with ce, ec with se, and ec with ce and se, termed EC, ES, and ECS type controllers respectively.

Reference model. The desired transient and steady-state performances are signified by the reference model, which provides a prototype for use by the learning mechanism.
For the sake of simplicity we use a diagonal transfer matrix as the reference model, with the elements being second-order linear transfer functions.

Hybrid neural network. As a key component of the system, the hybrid neural network, as shown in Fig. 3, is designed to stabilise the dynamical input vector u, compute the present control value, and provide the necessary information for the rule-base formation module. To be more specific, the VSC network converts a time-based on-line incoming sequence into value-based pattern vectors, which are then broadcast to the BNN network, where the present control is calculated by the BNN using feedforward algorithms. Competitive or unsupervised learning can be used as a mechanism for adaptive (learning) vector quantisation (AVQ or LVQ), in which the system adaptively quantises the pattern space by discovering a set of representative prototypes. A basic idea of AVQ is to categorise vectorial stochastic data into different groups by employing some metric measure with a winner selection criterion. An AVQ network is usually composed of two fully connected layers, an input layer with n units and a competitive layer with p units. The input layer simply receives the incoming input vector u(t) ∈ R^n and forwards it to the competitive layer through the connecting weight vectors w_j(t) ∈ R^n. In view of the above discussion, the operating procedures of the VSC competitive algorithm can be described as follows. At each iteration and each sampling time, the processed control error vector u is received. Then a competitive process takes place among the existing units with regard to the current u by using the winner selection criteria. If one of the existing units wins the competition, the corresponding w_j is modified; otherwise, a new unit is created with an appropriate initialisation scheme. Finally, the modified or initialised weight vector is broadcast to the input layer of the BNN net.
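The create-or-update competition just described can be sketched as follows; the scalar neighbourhood radius and fixed learning rate are illustrative simplifications of the per-unit width vectors and the decaying rate used in the paper:

```python
import numpy as np

class VSCLayer:
    """Variable-structure competitive layer: starts empty, creates a unit when
    no existing prototype is close enough, otherwise updates the winner."""
    def __init__(self, radius, rate=0.5):
        self.w = []            # prototype weight vectors, grown on demand
        self.radius = radius   # neighbourhood radius (simplified to a scalar)
        self.rate = rate       # learning rate (fixed here for simplicity)

    def step(self, u):
        if self.w:
            d = [np.linalg.norm(u - w) for w in self.w]
            i = int(np.argmin(d))
            if d[i] <= self.radius:
                # an existing unit wins: move its prototype toward u
                self.w[i] = self.w[i] + self.rate * (u - self.w[i])
                return i
        # no unit is adequate: create a new one initialised at u
        self.w.append(np.array(u, dtype=float))
        return len(self.w) - 1
```

The index returned by `step` identifies the winner-take-all pattern that would be broadcast to the BNN input layer.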
Thus the VSC can be regarded as a preprocessor or a pattern supplier for the BNN. From the knowledge processing point of view, the networks not only perform a self-organising function by which the required knowledge bases are constructed automatically, but also behave as a real-time fuzzy reasoner in the sense

described previously.

Rule-base formation. By appropriately recording the input and output of the hybrid neural networks, the formed rule-base, each rule having the format IF situation THEN action, offers an explicit explanation of what the net-based controller has done. This can be understood in the following way. Because the information used to guide the adjustment of the connection weights of the BNN is similar to that for constructing the IF-THEN rules, the mapping property of the resultant BNN must exhibit similar behaviour to that of the IF-THEN conditional statements. Again, from the knowledge representation viewpoint, both paradigms in effect represent the same body of knowledge using different forms. The BNN represents it distributedly and implicitly, while IF-THEN models describe it concisely and explicitly.

Adaptive mechanism. While the VSC network can be self-organised by merely relying on the incoming information, the weights of the BNN network must be adjusted by back-propagating the predefined learning errors. Due to the unavailability of teacher signals, a modified and simple adaptive law has been developed.

4. Structural Fuzzy-Neural Control

In this section a number of approaches to structural mapping between fuzzy and neural approaches are given. In all of these, the initial basis is a simplified form of fuzzy control, which is now described.

4.1 Simplified Fuzzy Control Algorithm

The operation of a fuzzy controller typically involves three main stages within one sampling instant. However, by taking into account the nonfuzzy property of the input/output of the fuzzy controller, we have derived a very simple but efficient fuzzy control algorithm (SFCA), which consists of only two main steps, pattern matching and weighted average, and is described below. Assume that A_i^j and B_k^j in the rule equations (1) are normalised fuzzy subsets whose membership functions are defined uniquely as triangles, each of which is characterised by only two parameters, M_{x,i}^j and δ_{x,i}^j, or M_{y,k}^j and δ_{y,k}^j, with the understanding that M_{x,i}^j

(M_{y,k}^j) is the centre element of the support set of A_i^j (B_k^j), and δ_{x,i}^j (δ_{y,k}^j) is the half width of the support set. Hence, the jth rule can be written as

IF (M_{x,1}^j, δ_{x,1}^j) AND … AND (M_{x,n}^j, δ_{x,n}^j) THEN (M_{y,1}^j, δ_{y,1}^j) AND … AND (M_{y,m}^j, δ_{y,m}^j)

(7)

Let M_x^j = (M_{x,1}^j, M_{x,2}^j, …, M_{x,n}^j) and Δ_x^j = (δ_{x,1}^j, δ_{x,2}^j, …, δ_{x,n}^j) be two n-dimensional vectors. Then the condition part of the jth rule may be viewed as creating a subspace or a rule pattern whose centre and radius are M_x^j and Δ_x^j respectively. Thus the condition part of the jth rule can be simplified further to "IF M_{xδ}(j)", where M_{xδ}(j) = (M_x^j, Δ_x^j). Similarly, the n current inputs u_{0i} ∈ U_i (i = 1, 2, …, n), with u_{0i} being a singleton, can also be represented as an n-dimensional vector u_0, or an input pattern. The fuzzy control algorithm can be considered to be a process in which an appropriate control action is deduced from a current input and P rules according to some prespecified reasoning algorithms. We split the whole reasoning procedure into two phases: pattern matching and weighted average. The first operation deals with the IF part of all rules, whereas the second involves an operation on the THEN part of the rules. From the pattern concept introduced above, we need to compute the matching degrees between the current input pattern and each rule pattern. Denote the current input by u_0 = (u_{01}, u_{02}, …, u_{0n}). Then the matching degree S_j ∈ [0,1] between u_0 and the jth rule pattern M_{xδ}(j) can be measured by the complement of the corresponding relative distance, given by

S_j = 1 − D'(u_0, M_{xδ}(j))

(8)


where D'(u_0, M_{xδ}(j)) ∈ [0,1] denotes the relative distance from u_0 to M_{xδ}(j). D' can be specified in many ways. With the assumption of an identical width δ for all fuzzy sets A_i^j, the computational definition of D' is given by

D'_j = { ||M_x^j − u_0|| / δ    if ||M_x^j − u_0|| ≤ δ
       { 1                      otherwise

(9)

For a specific input u_0 and P rules, after the matching process is completed, the kth component of the deduced control action u_k is given by

u_k = Σ_{j=1}^{P} S_j · M_{y,k}^j / Σ_{j=1}^{P} S_j

(10)

where it is assumed that all membership functions B_k^j(v) are symmetrical about their respective centres and have an identical width. We notice that, because only the centres of the THEN parts of the fired rules are utilised, and they are the only elements having the maximum membership grade 1 on the corresponding support sets, the algorithm can be understood as a modified maximum membership decision scheme in which the global centre is calculated by the Centre of Gravity algorithm. Thus the rule form can be further simplified to

IF M_{xδ}(j) THEN M_y^j

(11)

where M_y^j = [M_{y,1}^j, M_{y,2}^j, …, M_{y,m}^j] is the centre value vector of the THEN part.
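Under the identical-width assumption, the two SFCA steps, relative-distance matching (8)-(9) and centre-of-gravity averaging (10), can be sketched as:

```python
import numpy as np

def sfca(u0, M_x, M_y, delta):
    """Simplified fuzzy control algorithm: M_x is a (P, n) array of rule-pattern
    centres, M_y a (P, m) array of THEN-part centres, and delta the common
    width. Returns the m-dimensional control action (zeros if no rule fires)."""
    d = np.linalg.norm(M_x - u0, axis=1)              # distance to each rule pattern
    S = np.where(d <= delta, 1.0 - d / delta, 0.0)    # S_j = 1 - D'_j, per (8)-(9)
    if S.sum() == 0.0:
        return np.zeros(M_y.shape[1])                 # no rule fired
    return (S[:, None] * M_y).sum(axis=0) / S.sum()   # centre of gravity, per (10)
```

Only the rule centres enter the computation, which is what makes the algorithm so cheap compared with full max-min inference plus defuzzification.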

4.2 CounterPropagation Network (CPN) Fuzzy Controller

By combining a portion of the self-organising map of Kohonen and the outstar structure of Grossberg, Hecht-Nielsen developed a new type of neural network named counterpropagation (CPN) [6]. The CPN network consists of an input layer, a hidden Kohonen layer, and a Grossberg output layer with n, N, and m units respectively. The aim of the CPN is to perform an approximate mapping φ: R^n → R^m based on a set of examples (x', y'), with the x' vectors being randomly drawn from R^n in accordance with a fixed probability density function p. The forward algorithms of the CPN with regard to a particular input x at time instant t are outlined as follows.

a) Determine the winner unit I in the Kohonen layer competitively according to the distances of the weight vectors w_i(t) with respect to x:

D(w_I(t), x) = min_{i=1,…,N} D(w_i(t), x)

(12)

b) Calculate the outputs z_i(t) ∈ {0,1} of the Kohonen layer by a winner-take-all rule

z_i(t) = { 1    if i = I
         { 0    otherwise

(13)

c) Compute the outputs of the Grossberg layer by

y_j = Σ_{i=1}^{N} z_i(t) · u_{ji}(t)

(14)
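Steps a)-c) amount to a nearest-prototype lookup; a minimal sketch, storing the Kohonen weights as an (N, n) array and the Grossberg weights as an (m, N) array:

```python
import numpy as np

def cpn_forward(x, w_kohonen, w_grossberg):
    """CPN forward pass (12)-(14): Euclidean winner selection in the Kohonen
    layer, winner-take-all coding z, then a linear read-out through the
    Grossberg weights u_ji."""
    d = np.linalg.norm(w_kohonen - x, axis=1)   # distances to prototypes, per (12)
    winner = int(np.argmin(d))
    z = np.zeros(w_kohonen.shape[0])
    z[winner] = 1.0                             # winner-take-all, per (13)
    return w_grossberg @ z, winner              # y_j = sum_i z_i u_ji, per (14)
```

Because z is one-hot, the output is simply the column of Grossberg weights attached to the winning unit, which is what makes the rule reading (15) possible.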

There exist some striking similarities between the fuzzy control algorithms (9), (10) and the CPN algorithms (12), (13), and (14). By viewing the CPN as a hard version of the fuzzy or soft counterpart, the stable weight vectors connecting to and emanating from the ith Kohonen unit may be expressed as specifying a rule:

IF w_i THEN u_i

(15)

In this perspective, ignoring the effects of the width vectors Δ for the time being, each rule in the form of (11) can be structurally mapped into a Kohonen unit, with the IF and THEN parts being represented by the connection weights.

4.2.1. SELF-CONSTRUCTION OF RULE-BASE

So far the knowledge representation problem has been solved by structural mapping, provided that the rule-base is available. However, our primary concern lies in building a rule-base automatically from the controlled environment. In terms of the CPN, this means that the number of units in the Kohonen layer must be self-organised and the associated weights must be self-learned.

Self-organising: IF part. Comparing the IF part of (11) with (15), there is a width vector Δ_x^j associated with the former. Δ_x^j can be roughly visualised as defining a neighbourhood for the jth rule centred at M_x^j. Q rules partition the input space into Q overlapping subspaces. This viewpoint, together with the concept of relative distance, provides some insight into finding a solution to the problem by using a modified CPN algorithm. By assigning each Kohonen unit a predefined width vector Δ_i, the winner I not only has a minimum distance among all the existing units with regard to the current input x, as determined by (12), but must also satisfy the condition of x falling into the winner's neighbourhood as designated by Δ_I. Thus if D(w_I(t), x) ≤ Δ_I, then unit I is considered to be the winner and the weight vectors in the Kohonen layer are adjusted by

w_i(t + 1) = w_i(t) + a_i(t) · [x − w_i(t)] · z_i

(16)

where i = 1, …, N and a_i(t) ∈ [0,1] is a learning rate which decays monotonically with increasing time. On the other hand, if D(w_I(t), x) > Δ_I, this indicates that no existing unit is adequate to claim x as its member, and therefore a new unit should be created. It is clear that, starting from an empty state, the Kohonen layer can be dynamically self-organised in terms of the number of units and the weight vector associated with each unit, thereby establishing the IF part of the rule-base.

Self-learning: THEN part. It is always the case that the lack of a teacher for any kind of neurocontroller, unless explicitly constructed, represents a formidable problem. Here a simple but efficient scheme [4] is introduced which is capable of providing teacher signals in a real-time manner. By including a reference model specifying the desired response vector, the error vector, denoted by e ∈ R^m and defined as the difference between the desired and actual output of the process, is used as a basis for teacher learning. More specifically, a teacher vector η^k ∈ R^m at the kth iteration and at time instant t is given by

η^k(t) = η^{k−1}(t) + G · e^{k−1}(t)

(17)

where G is a learning gain matrix which can be diagonal in its simplest form. Once η^k is available, the weights u_{ji} at the Grossberg layer are adjusted by

u_{ji}(t) = u_{ji}(t − 1) + β · [η_j(t) − u_{ji}(t − 1)] · z_i(t)

(18)

where i = 1 to N, j = 1 to m, and β is a constant learning rate within the range [0,1]. Denoting u_i = [u_{1i}, u_{2i}, …, u_{mi}], we notice that u_i is a weight vector connecting the ith unit at the Kohonen layer to every unit at the output layer. Thus, by the representational convention described in the last section, the THEN part of the rule will be learned gradually. Further details can be found in [7].

4.3 Radial Basis Function (RBF) Fuzzy Control

Simply stated, an RBF network is intended to approximate a continuous mapping f: R^n → R^m by performing a nonlinear transformation at the hidden layer and subsequently a linear combination at the output layer. More specifically, this mapping is described by

f_k(u) = Σ_{j=1}^{N} λ_{kj} · φ(||u − w^j||)

(19)

where N is the number of hidden units, u ∈ R^n is an input vector, w^j ∈ R^n is the centre of the jth hidden unit and can be regarded as a weight vector from the input layer to the jth hidden unit,