
Fuzzy-Neural Control with Application to Biomedicine

Derek A. Linkens* and Junhong Nie**

* Department of Automatic Control & Systems Engineering, University of Sheffield, S1 4DU, UK
** Department of Electrical Engineering, National University of Singapore, Singapore 0511

1. Introduction

This paper addresses three fundamental and important issues concerning the implementation of a knowledge-based system, in particular a fuzzy rule-based system, for use in intelligent control. These issues are knowledge acquisition, computational representation, and reasoning. Our primary objective is to develop systems which are capable of performing self-organising and self-learning functions in a real-time manner under multi-variable system environments by utilising fuzzy logic, neural networks, and a combination of both paradigms, with emphasis on novel system architectures, algorithms, and applications to problems found in biomedical systems. Considerable effort has been devoted to making the proposed approaches as generic, simple and systematic as possible. The research consists of two independent but closely correlated parts. The first part involves mainly the utilisation of fuzzy algorithm-based schemes, whereas the second part deals primarily with fuzzy-neural network-based approaches. More specifically, in the first part a unified approximate reasoning model has been established, suitable for various definitions of linguistic connectives and for handling possibilistic and probabilistic uncertainties. The second (and major) part of the work has been devoted to the development of hybrid fuzzy-neural control systems. A key idea is first to map an algorithm-based fuzzy system onto a neural network, which carries out the tasks of representation and reasoning, and then to develop the associated self-organising and self-learning schemes which perform the task of knowledge (rule) acquisition.
Two mapping strategies, namely functional and structural mappings, are distinguished, leading to distributed and localised network representations respectively. In the former case, by adopting a typical back-propagation neural network (BNN) as a reasoner, a multi-stage approach to constructing such a BNN-based fuzzy controller has been investigated. Further, a hybrid fuzzy control system with the ability of self-organising and self-learning has been developed by adding a variable-structure Kohonen layer to a BNN network. In the latter case, we present a simple but efficient scheme which structurally maps a simplified fuzzy control algorithm onto a counterpropagation network (CPN). The system is characterised by explicit representation, self-construction of rule-bases, and fast learning speed. More recently, this structural approach has been utilised to incorporate fuzzy concepts into neural networks based on Radial Basis Functions (RBF) and the Cerebellar Model Articulation Controller (CMAC). Throughout this research, it has been assumed that the controlled processes are multi-input multi-output systems with pure time delays in their controls, and that no mathematical model of the process exists but some knowledge about the process is available. As a demonstrator application, a problem of dual-input/dual-output linear multivariable control for blood pressure management has been studied extensively with respect to each proposed structure and algorithm.

2. Fuzzy Logic-based Reasoning

Assume that the system to be controlled has n inputs and m outputs denoted by X_1, X_2, ..., X_n and Y_1, Y_2, ..., Y_m. Furthermore, it is assumed that the given L "IF situation THEN action" rules are connected by ALSO, each of which has the form

IF X_1 is A_1^j AND X_2 is A_2^j AND ... AND X_n is A_n^j THEN Y_1 is B_1^j AND Y_2 is B_2^j AND ... AND Y_m is B_m^j

(1)


where A_i^j and B_k^j are fuzzy subsets which are defined on the corresponding universes, and represent fuzzy concepts such as big, medium, small, etc. Then the problems of concern are how to represent the knowledge described by the rules numerically, and how to infer an approximate action in response to a novel situation. One solution is to follow the inferencing procedures used by traditional data-driven reasoning systems while taking the fuzzy variables into account [1]. The basic idea is first to treat the L rules one by one and then to combine the individual results to give a global output. By adopting possibility theory as a representational tool, the obtained reasoning algorithm is given by

B'_j(v_j) = ⊕_{k=1}^{L} { [ ⊗_{i=1}^{n} p_i^k ] → B_j^k(v_j) }   (2)

where ⊕, ⊗, and → stand for the linguistic connectives ALSO, AND, and IF-THEN respectively, and p_i^k is the possibility measure defined by

p_i^k = Poss(C_i | A_i^k) = sup_{u_i ∈ U_i} Min[ C_i(u_i), A_i^k(u_i) ]

(3)
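As a minimal illustration, one common reading of (2) and (3) can be sketched as follows, with memberships sampled on a shared discretised universe. The function names are ours, and Min/Max here is only one of the many operator combinations the paper investigates:

```python
# Sketch of the possibilistic reasoning of eqs (2)-(3), assuming Min for AND
# and the implication, and Max for ALSO, over a discretised universe.

def possibility(c, a):
    """Eq (3): sup_u Min[C(u), A(u)] for memberships sampled on a grid."""
    return max(min(ci, ai) for ci, ai in zip(c, a))

def infer(rules, inputs):
    """Eq (2) for one output variable.

    rules  : list of (antecedents, consequent) pairs; antecedents is a list
             of membership lists A_i^k, consequent the membership list B^k.
    inputs : list of membership lists C_i describing the observed situation.
    """
    out = [0.0] * len(rules[0][1])
    for antecedents, consequent in rules:
        # firing strength p_k: AND (Min) over the n input dimensions
        p = min(possibility(c, a) for c, a in zip(inputs, antecedents))
        # implication (Min), then combination by ALSO (Max)
        out = [max(o, min(p, b)) for o, b in zip(out, consequent)]
    return out
```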

The reasoning algorithm (2) can be interpreted as follows. First, the possibility measure p_k between the present input and each rule is calculated. If a rule is considered to be fired, the THEN part of that rule is modified to p_k → B_k(v). The results from the K fired rules are then combined according to the definition of ALSO to produce the final output. The numerous possible fuzzy operator combinations have been investigated on the blood pressure problem, and their relative merits are presented in [2].

3. Functional Fuzzy-Neural Control

The first type of fuzzy-neural control considers a functional mapping utilising standard back-propagation neural networks (BNN). Within this context, a crucial aim is the automatic knowledge acquisition of the fuzzy rules. Aiming at establishing an identical environment in both training and application modes without involving any converting procedures, the linguistic labels are here represented by fuzzy numbers, each of which is typically characterised by a central value and an interval around the centre. The central value is most representative of the fuzzy number, and the width of the associated interval accounts for the degree of fuzziness. Thus the control rule (1) can be rewritten as

IF X is (u^j, δ^j) THEN Y is (v^j, γ^j)

(4)

where u^j, v^j and δ^j, γ^j are the input and output central value vectors and width vectors of the jth rule. Now it is possible to train the BNN with only the central value vectors, i.e. the (u^j, v^j) pairs, while leaving the width vectors untreated explicitly. This can be justified by the fact that the BNN inherently possesses some fuzziness, which is exhibited in the form of interpolation over new situations [3]. This interpretation provides a basis for learning and extracting training rules directly from the controlled environment.

3.1 Neural Network-based Fuzzy Control

The principal objective in this section is to develop a BNN-based fuzzy control system capable of constructing training rules automatically. The basic idea is first to derive the required control signals and then to extract a set of training samples [3]. Thus the whole process consists of four stages: on-line learning, off-line extracting, off-line training, and on-line application. This cycle may be repeated if necessary.

3.1.1. Learning

Fig.1 shows a block diagram of the learning system, consisting of a reference model, a learning mechanism, a controller input formation block, a short-term memory (STM), and the controlled process. By operating the learning mechanism iteratively, the desired control performances specified by the reference model subject to the command signals are gradually achieved, and the desired control action denoted by u_d is derived. At the same time, the controller input u together with the learned control u_d are stored in the STM as ordered pairs. We notice that there are two types of errors in Fig.1, namely the learning error e_L and the control error e_c. The former is defined as the difference between the output of the reference model y_d and the output of the process y_p, whereas the latter is the discrepancy between the command signal r and the output of the process y_p. Supposing that the desired response y_d is designated by a diagonal transfer matrix, the goal of the learning system is to force the learning error e_L(t) asymptotically to zero, or into a predefined tolerance region ε, within a time interval T of interest by repeatedly operating the system. More specifically, we require that ||e_{Lk}(t)|| → 0 or ||e_{Lk}(t)|| < ε uniformly in t ∈ [0,T] as k → ∞, where k denotes the iteration number. Whenever convergence has occurred, the corresponding control action at that iteration is regarded as the learned control u_d. According to the learning goal described above, the corresponding learning law is given by [4]

u_{k+1}(t) = u_k(t) + P·e_{Lk}(t + λ) + Q·ė_{Lk}(t + λ)

(5)

where u_{k+1}, u_k ∈ R^m are the learning control vector-valued functions at the (k+1)th and kth iterations respectively, e_{Lk}, ė_{Lk} ∈ R^m are the learning error and its derivative vector-valued functions, λ is an estimated time advance corresponding to the time delay of the process, and P, Q ∈ R^{m×m} are constant learning gain matrices.

3.1.2. Extracting

While the effects of δ^j/γ^j can be automatically and implicitly accommodated by the BNN, the primary effort in extracting training rules is devoted to deriving the central value vectors u^j and v^j. Suppose that, at the k_d-th learning iteration, the desired control vector sequence u_d(l) is obtained, with which the response of the process is satisfactory, where l ∈ [0, l_t] and l_t is the maximum sampling number. Meanwhile, the control error e_c(l) is measured, expanded, and denoted by u(l). Combining u(l) with u_d(l), a paired and ordered data set Γ_t : {u(l), u_d(l)} is created in the STM. Basically, obtaining a set of rules in the form of (4) from the recorded data set is a matter of converting the time-based Γ_t into a value-based set Γ_v. Each data association in Γ_v is representative of several elements in Γ_t. Accordingly, the extracting process can be thought of as a grouping process during which l_t + 1 data pairs are appropriately classified into l_v groups, each of which is represented by one and only one data pair. Thus two functions, grouping and extracting, are involved. First, each component u_i(l) is scaled and quantised to the closest element of the corresponding universe, thereby forming a new data set {ū(l), u_d(l)}. Next, the data pairs are divided into l_v groups by putting those pairs with the same value of ū(l) into one group Γ_v^p, that is,

Γ_v^p : (u^p, v_{d1}^p), (u^p, v_{d2}^p), ..., (u^p, v_{dq}^p)   (6)

Note that, due to the pure time delay possessed by the controlled process and to the clustering effects, the q different v_{di}^p may be associated with the same u^p, in which case Γ_v^p is referred to as a conflict group. Therefore the third step is to deal with conflict resolution. Here we employ a simple but reasonable scheme which takes the average of v_{d1}^p, v_{d2}^p, ..., v_{dq}^p as their representative, denoted v^p. After conflict resolution, we derive l_v distinct data pairs in a new data set Γ_v : {u^p, v^p}. By considering that u^p and v^p are the central values corresponding to some fuzzy numbers, the above data pairs can be expressed in the form "IF u^p THEN v^p" and can be used to train the BNN-based controller.
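The grouping and conflict-averaging steps just described can be sketched as follows; the names and the stand-in quantisation function are our own illustrative assumptions:

```python
def extract_rules(u_seq, ud_seq, quantise):
    """Convert the time-based set {u(l), u_d(l)} into value-based rules.

    u_seq, ud_seq : recorded controller inputs and learned controls
    quantise      : maps an input vector onto the closest universe elements
    Conflicting consequents sharing one quantised antecedent are averaged.
    """
    groups = {}
    for u, ud in zip(u_seq, ud_seq):
        # grouping: pairs with the same quantised u fall into one group
        groups.setdefault(tuple(quantise(u)), []).append(ud)
    # conflict resolution: one representative (u^p, v^p) pair per group
    return {up: sum(vs) / len(vs) for up, vs in groups.items()}
```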


The disadvantage of this structure is that it requires a four-stage process for knowledge acquisition and implementation. The next section describes an alternative approach which is a single-stage paradigm.

3.2 Hybrid Neural Network Fuzzy Controller

Fig. 2 shows a schematic of this system structure [5]. It consists of a basic feedback control loop containing a hybrid network-based fuzzy controller and a controlled process, and a performance loop composed of reference models and learning laws. Assuming that neither a control expert nor a mathematical model of the process is available, the objectives of the overall system are (a) to minimise the tracking error between the desired output specified by the reference model and the actual output of the process over the whole time interval of interest by adjusting the connection weights of the networks, and meanwhile (b) to construct control rule-bases dynamically by observing, recording, and processing the input and output data associated with the net-controller. The whole system performs the two functions of control and learning simultaneously. Within each sampling period, a feedforward pass produces the control input to the process and the backward step learns both control rules and connection weights. The function of each component in Fig.2 is described below.

Input formation. Corresponding to traditional PID controllers, the inputs to the network-based controller are chosen to be various combinations of the control error e_c, change-in-error c_e, and sum of error s_e. Three possible controller modes denoted by the vector u can be constructed by combining e_c with c_e, e_c with s_e, and e_c with c_e and s_e, termed EC, ES, and ECS type controllers respectively.

Reference model. The desired transient and steady-state performances are signified by the reference model, which provides a prototype for use by the learning mechanism.
For the sake of simplicity we use a diagonal transfer matrix as the reference model, with the elements being second-order linear transfer functions.

Hybrid neural network. As the key component of the system, the hybrid neural network, shown in Fig. 3, is designed to stabilise the dynamical input vector u, compute the present control value, and provide the necessary information for the rule-base formation module. To be more specific, the variable-structure competitive (VSC) network converts a time-based on-line incoming sequence into value-based pattern vectors, which are then broadcast to the BNN network, where the present control is calculated by the BNN using feedforward algorithms. Competitive or unsupervised learning can be used as a mechanism for adaptive (learning) vector quantisation (AVQ or LVQ), in which the system adaptively quantises the pattern space by discovering a set of representative prototypes. A basic idea regarding the AVQ is to categorise vectorial stochastic data into different groups by employing some metric measure with a winner selection criterion. An AVQ network is usually composed of two fully connected layers, an input layer with n units and a competitive layer with p units. The input layer simply receives the incoming input vector u(t) ∈ R^n and forwards it to the competitive layer through the connecting weight vectors w_j(t) ∈ R^n. In view of the above discussion, the operating procedures of the VSC competitive algorithm can be described as follows. At each iteration and each sampling time, the processed control error vector u is received. Then a competitive process takes place among the existing units in regard to the current u by using the winner selection criterion. If one of the existing units wins the competition, the corresponding w_j is modified; otherwise, a new unit is created with an appropriate initialisation scheme. Finally, the modified or initialised weight vector is broadcast to the input layer of the BNN net.
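One step of such a variable-structure competitive layer might be sketched as follows; the Chebyshev distance, the scalar acceptance radius, and all names are our own illustrative assumptions rather than the paper's exact scheme:

```python
def vsc_step(units, u, radius, rate):
    """One sampling step of a variable-structure competitive (VSC) layer.

    units  : list of prototype weight vectors, possibly empty
    u      : current processed control-error vector
    radius : neighbourhood width for accepting a winner (assumed scalar)
    rate   : learning rate in [0, 1]
    Returns the index of the winning (or newly created) unit.
    """
    dist = lambda w: max(abs(wi - ui) for wi, ui in zip(w, u))
    if units:
        i = min(range(len(units)), key=lambda j: dist(units[j]))
        if dist(units[i]) <= radius:
            # winner found: move its prototype towards the current input
            units[i] = [wi + rate * (ui - wi) for wi, ui in zip(units[i], u)]
            return i
    units.append(list(u))  # no adequate unit exists: create a new one
    return len(units) - 1
```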
Thus the VSC can be regarded as a preprocessor or a pattern supplier for the BNN. From the knowledge processing point of view, the networks not only perform a self-organising function by which the required knowledge bases are constructed automatically, but also behave as a real-time fuzzy reasoner in the sense described previously.

Rule-base formation. By appropriately recording the input and output of the hybrid neural networks, the formed rule-base, each rule having the format IF situation THEN action, offers an explicit explanation of what the net-based controller has done. This can be understood in the following way. Because the information used to guide the adjustment of the connection weights of the BNN is similar to that used for constructing the IF-THEN rules, the mapping property of the resultant BNN must exhibit similar behaviour to that of the IF-THEN conditional statements. Again, from the knowledge representation viewpoint, both paradigms in effect represent the same body of knowledge in different forms. The BNN represents it distributedly and implicitly, while the IF-THEN models describe it concisely and explicitly.

Adaptive mechanism. While the VSC network can be self-organised by merely relying on the incoming information, the weights of the BNN network must be adjusted by back-propagating the predefined learning errors. Due to the unavailability of teacher signals, a modified and simple adaptive law has been developed.

4. Structural Fuzzy-Neural Control

In this section a number of approaches to structural mapping between fuzzy and neural approaches are given. In all of these, the initial basis is a simplified form of fuzzy control which is now described.

4.1 Simplified Fuzzy Control Algorithm

The operation of a fuzzy controller typically involves three main stages within one sampling instant. However, by taking into account the non-fuzzy property of the input/output of the fuzzy controller, we have derived a very simple but efficient fuzzy control algorithm (SFCA) which consists of only two main steps, pattern matching and weighted average, as described below. Assume that A_i^j and B_k^j in the rule equations (1) are normalised fuzzy subsets whose membership functions are defined uniquely as triangles, each of which is characterised by only two parameters, M_{a,i}^j and δ_{a,i}^j, or M_{b,k}^j and δ_{b,k}^j, with the understanding that M_{a,i}^j (M_{b,k}^j) is the centre element of the support set of A_i^j (B_k^j), and δ_{a,i}^j (δ_{b,k}^j) is the half width of the support set. Hence, the jth rule can be written as

IF (M_{a,1}^j, δ_{a,1}^j) AND ... AND (M_{a,n}^j, δ_{a,n}^j) THEN (M_{b,1}^j, δ_{b,1}^j) AND ... AND (M_{b,m}^j, δ_{b,m}^j)   (7)

Let M_a^j = (M_{a,1}^j, M_{a,2}^j, ..., M_{a,n}^j) and Δ_a^j = (δ_{a,1}^j, δ_{a,2}^j, ..., δ_{a,n}^j) be two n-dimensional vectors. Then the condition part of the jth rule may be viewed as creating a subspace or a rule pattern whose centre and radius are M_a^j and Δ_a^j respectively. Thus the condition part of the jth rule can be simplified further to "IF M_{ap}(j)", where M_{ap}(j) = (M_a^j, Δ_a^j). Similarly, the n current inputs u_{0i} ∈ U_i (i = 1, 2, ..., n), with each u_{0i} being a singleton, can also be represented as an n-dimensional vector u_0, or an input pattern. The fuzzy control algorithm can be considered to be a process in which an appropriate control action is deduced from a current input and P rules according to some prespecified reasoning algorithms. We split the whole reasoning procedure into two phases: pattern matching and weighted average. The first operation deals with the IF part of all rules, whereas the second involves an operation on the THEN part of the rules. From the pattern concept introduced above, we need to compute the matching degrees between the current input pattern and each rule pattern. Denote the current input by u_0 = (u_{01}, u_{02}, ..., u_{0n}). Then the matching degree S_j ∈ [0,1] between u_0 and the jth rule pattern M_{ap}(j) can be measured by the complement of the corresponding relative distance, given by

S_j = 1 − D'(u_0, M_{ap}(j))

(8)


where D'(u_0, M_{ap}(j)) ∈ [0,1] denotes the relative distance from u_0 to M_{ap}(j). D' can be specified in many ways. With the assumption of an identical width δ for all fuzzy sets A_i^j, the computational definition of D' is given by

D'_j = ||M_a^j − u_0|| / δ   if ||M_a^j − u_0|| ≤ δ
D'_j = 1                     otherwise              (9)

For a specific input u_0 and P rules, after the matching process is completed, the kth component of the deduced control action u_k is given by

u_k = [ Σ_{j=1}^{P} S_j · M_{b,k}^j ] / [ Σ_{j=1}^{P} S_j ]   (10)

where it is assumed that all membership functions B_k^j(v) are symmetrical about their respective centres and have an identical width. We notice that, because only the centres of the THEN parts of the fired rules are utilised, and these are the only elements having the maximum membership grade 1 on the corresponding support sets, the algorithm can be understood as a modified maximum membership decision scheme in which the global centre is calculated by the Centre of Gravity algorithm. Thus the rule form can be further simplified to

IF M_{ap}(j) THEN M_b^j

(11)

where M_b^j = [M_{b,1}^j, M_{b,2}^j, ..., M_{b,m}^j] is the centre value vector of the THEN part.
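A minimal sketch of the two-step SFCA, following (8)-(10) under the stated identical-width assumption; the names are ours and the Euclidean norm is one possible choice for the distance in (9):

```python
import math

def sfca(u0, rules, delta):
    """Simplified fuzzy control algorithm, eqs (8)-(10).

    u0    : current input pattern
    rules : list of (Ma, Mb) pairs: the IF and THEN centre vectors of a rule
    delta : common half-width assumed identical for all fuzzy sets
    Returns the deduced control vector, or None if no rule fires.
    """
    m = len(rules[0][1])
    num, den = [0.0] * m, 0.0
    for Ma, Mb in rules:
        # eq (9): relative distance, saturated to 1 outside the rule pattern
        d = math.sqrt(sum((a - x) ** 2 for a, x in zip(Ma, u0)))
        s = 1.0 - (d / delta if d <= delta else 1.0)  # eq (8): matching degree
        # eq (10): weighted average of the fired THEN centres
        num = [n + s * b for n, b in zip(num, Mb)]
        den += s
    return [n / den for n in num] if den else None
```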

4.2 CounterPropagation Network (CPN) Fuzzy Controller

By combining a portion of the self-organising map of Kohonen and the outstar structure of Grossberg, Hecht-Nielsen developed a new type of neural network named counterpropagation (CPN) [6]. The CPN consists of an input layer, a hidden Kohonen layer, and a Grossberg output layer with n, N, and m units respectively. The aim of the CPN is to perform an approximate mapping φ: R^n → R^m based on a set of examples (x^i, y^i), with the x^i vectors being randomly drawn from R^n in accordance with a fixed probability density function p. The forward algorithms of the CPN in regard to a particular input x at time instant t are outlined as follows.

a) Determine the winner unit I in the Kohonen layer competitively according to the distances of the weight vectors w_i(t) with respect to x:

D(w_I(t), x) = min_{i=1,...,N} D(w_i(t), x)

(12)

b) Calculate the outputs z_i(t) ∈ {0,1} of the Kohonen layer by a winner-take-all rule:

z_i(t) = 1 if i = I,   z_i(t) = 0 otherwise

(13)

c) Compute the outputs of the Grossberg layer by

y_j = Σ_{i=1}^{N} z_i(t) · u_{ji}(t)

(14)
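The forward pass (12)-(14) can be sketched as follows; the names are ours, and the squared Euclidean distance is assumed for D:

```python
def cpn_forward(x, W, U):
    """CPN forward pass, eqs (12)-(14).

    W : list of N Kohonen weight vectors w_i
    U : Grossberg weights, U[j][i] connecting Kohonen unit i to output j
    """
    dist = lambda w: sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    winner = min(range(len(W)), key=lambda i: dist(W[i]))        # eq (12)
    z = [1.0 if i == winner else 0.0 for i in range(len(W))]     # eq (13)
    return [sum(zi * uji for zi, uji in zip(z, Uj)) for Uj in U]  # eq (14)
```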

There exist some striking similarities between the fuzzy control algorithms (9), (10) and the CPN algorithms (12), (13), and (14). By viewing the CPN as a hard version of its fuzzy or soft counterpart, the stable weight vectors connecting to and emanating from the ith Kohonen unit may be expressed as specifying a rule:

IF w_i THEN u_i

(15)

In this perspective, ignoring the effects of Δ_a^j for the time being, each rule in the form of (11) can be structurally mapped into a Kohonen unit, with the IF and THEN parts being represented by the connection weights.

4.2.1 Self-Construction of the Rule-Base

So far the knowledge representation problem has been solved by structural mapping, provided that the rule-base is available. However, our primary concern lies in building a rule-base automatically from the controlled environment. In terms of the CPN, this means that the number of units in the Kohonen layer must be self-organised and the associated weights must be self-learned.

Self-organising: IF part. Comparing the IF part of (11) with (15), there is a width vector Δ_a^j associated with the former. Δ_a^j can be roughly visualised as defining a neighbourhood for the jth rule centred at M_a^j; Q rules partition the input space into Q overlapping subspaces. This viewpoint, together with the concept of relative distance, provides some insight into finding a solution to the problem by using a modified CPN algorithm. By assigning each Kohonen unit a predefined width vector Δ_i, the winner I not only has a minimum distance among all the existing units with regard to the current input x, as determined by (12), but must also satisfy the condition of x falling into the winner's neighbourhood as designated by Δ_I. Thus, if D(w_I(t), x) ≤ Δ_I then unit I is considered to be the winner and the weight vectors in the Kohonen layer are adjusted by

w_i(t + 1) = w_i(t) + a_i(t) · [x − w_i(t)] · z_i

(16)

where i = 1, ..., N and a_i(t) ∈ [0,1] is a learning rate which decays monotonically with increasing time. On the other hand, if D(w_I(t), x) > Δ_I, no existing unit is adequate to accept x as its member and therefore a new unit should be created. It is clear that, starting from an empty state, the Kohonen layer can be dynamically self-organised in terms of the number of units and the weight associated with each unit, thereby establishing the IF part of the rule-base.

Self-learning: THEN part. It is always the case that the lack of a teacher for any kind of neurocontroller, unless one is explicitly constructed, represents a formidable problem. Here a simple but efficient scheme [4] is introduced which is capable of providing teacher signals in a real-time manner. By including a reference model specifying the desired response vector, the error vector, denoted by e ∈ R^m and defined as the difference between the desired and actual output of the process, is used as a basis for teacher learning. More specifically, the teacher vector η^k ∈ R^m at the kth iteration and time instant t is given by

η^k(t) = η^{k−1}(t) + G · e^{k−1}(t)

(17)

where G is a learning gain matrix, which can be diagonal in its simplest form. Once η^k is available, the weights u_{ji} of the Grossberg layer are adjusted by

u_{ji}(t) = u_{ji}(t − 1) + β · [η_j(t) − u_{ji}(t − 1)] · z_i(t)

(18)

where i = 1 to N, j = 1 to m, and β is a constant learning rate within the range [0,1]. Denoting u_i = [u_{1i}, u_{2i}, ..., u_{mi}], we notice that u_i is a weight vector connecting the ith unit of the Kohonen layer to every unit of the output layer. Thus, by the representational convention described in the last section, the THEN part of the rule will be learned gradually. Further details can be found in [7].

4.3 Radial Basis Function (RBF) Fuzzy Control

Simply stated, an RBF network is intended to approximate a continuous mapping f: R^n → R^m by performing a nonlinear transformation at the hidden layer and subsequently a linear combination at the output layer. More specifically, this mapping is described by

f_k(u) = Σ_{j=1}^{N} λ_{jk} · φ(||u − w^j||)   (19)

where N is the number of hidden units, u ∈ R^n is an input vector, w^j ∈ R^n is the centre of the jth hidden unit and can be regarded as a weight vector from the input layer to the jth hidden unit, and λ_{jk} is the weight connecting the jth hidden unit to the kth output. Gaussian functions of the form

φ(||u − w^j||) = exp[ −||u − w^j||² / (σ^j)² ]   (20)

offer a desirable property of making the hidden units locally tuned [8], where the locality of the jth unit is controlled by its centre w^j and width σ^j. The incorporation of fuzzy concepts and self-organisation into this network follows the structural approach described above; further details can be found in [9].
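A sketch of the mapping (19)-(20); the names are ours, and λ is laid out per output row for convenience:

```python
import math

def rbf_forward(u, centres, widths, lam):
    """RBF network forward pass, eqs (19)-(20).

    centres : hidden-unit centres w^j
    widths  : sigma^j for each hidden unit
    lam     : lam[k][j], linear output weights for output k, hidden unit j
    """
    # eq (20): Gaussian activation of each locally-tuned hidden unit
    phi = [math.exp(-sum((ui - wi) ** 2 for ui, wi in zip(u, w)) / s ** 2)
           for w, s in zip(centres, widths)]
    # eq (19): linear combination at the output layer
    return [sum(l * p for l, p in zip(row, phi)) for row in lam]
```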

4.4 Cerebellar Model Articulation Controller (CMAC) Fuzzy Control

The CMAC is designed to represent approximately a multidimensional function by associating an input vector μ ∈ U ⊂ R^n with a corresponding function vector v ∈ V ⊂ R^m. The CMAC has a similar structure to a three-layered network, with association cells playing the role of hidden layer units. Mathematically, the CMAC may be described as consisting of a series of mappings: U → A → V, where A is an N-dimensional cell space. A fixed mapping U → A transforms each μ ∈ U into an N-dimensional binary associate vector a(μ) in which only N_L elements have the value 1, where N_L < N is referred to as the generalisation width. In other words, each μ activates precisely N_L association cells, or, geometrically, each μ is associated with a neighbourhood in which N_L association cells are included. An important property of the CMAC is local generalisation, derived from the fact that nearby input vectors μ_i and μ_j have some overlapping neighbourhood and therefore share some common association cells. The degree to which the neighbourhoods of μ_i and μ_j overlap depends on the Hamming distance H_ij of μ_i and μ_j. If H_ij is small, the intersection of μ_i and μ_j should be large, and vice versa. At some value of H_ij greater than N_L, the intersection becomes null, indicating that no generalisation occurs. According to the above principle, Albus [10] developed a mapping algorithm consisting of two sequential mappings: U → M → A. The n components of μ are first mapped into n N_L-dimensional vectors, and these vectors are then concatenated into a binary association vector a with only N_L elements being 1. Albus also suggested an approach to further reducing the N association cells into a much smaller set by a hashing code. The A → V mapping is simply a procedure of summing the weights of the association cells excited by the input vector μ to produce the output. More specifically, each component v_k is given by

v_k = Σ_{j=1}^{N} a_j(μ) · w_{jk}   (21)

where w_{jk} denotes the weight connecting the jth association cell to the kth output. Notice that only N_L association cells contribute to the output. By carefully inspecting the SFCA and the CMAC, we have concluded that there exist some striking similarities between these two systems. Functionally, both of them perform a function approximation in an interpolative look-up table manner with an underlying principle of generalisation and dichotomy: to produce similar outputs in response to similar input patterns, and independent outputs in response to dissimilar input patterns. From the computational point of view, the mapping U → A corresponds to the calculation of the matching degree in the SFCA, and the mapping A → V corresponds to the weighted averaging procedure given by (10). While the latter similarity is apparent by comparing (10) with (21), where a_j, w_{jk}, and N correspond to S_j, M_{b,k}^j, and P, the former equivalence can be made clearer as follows. Instead of saying that each input μ is associated with a neighbourhood specified by N_L, we can equally consider that each association cell W^j is associated with a neighbourhood centred at, say, w^j ∈ U, referred to as the reference vector, with the width controlled by N_L. If a current input μ is within the neighbourhood of W^j, that cell is regarded as being active. In this view, the associate vector a(μ) can be derived by operating N neighbourhood functions ν^j(w^j, μ) with respect to μ, where

ν^j(w^j, μ) = 1 if μ ∈ W^j,   ν^j(w^j, μ) = 0 otherwise   (22)

that is, a(μ) = (ν^1, ν^2, ..., ν^N). By appropriately selecting the w^j, a(μ) can be made to contain only N_L 1's. Now it becomes evident that the associate vector a is similar to the matching degree vector S = (S_1, S_2, ..., S_P), except that the former uses a crisp neighbourhood function whereas the latter adopts a graded one. In fact, by letting S̄_j = 1 for S_j > 0 and S̄_j = 0 for all other cases, the vector a will be precisely equal to the vector S̄, indicating that the former is a special case of the latter. We notice that a natural measurement of whether μ belongs to W^j in (22) is some distance metric relevant to the generalisation width N_L. In fact, Albus himself [10] discussed the question of how the overlapping of μ_i and μ_j is related to their Hamming distance, although he did not formulate this concept explicitly in the mapping process U → A. Now we are in a position to implement an FCMAC by replacing eqn (21) with (10). Several advantages can be identified from this replacement. The concept of the graded matching degree not only provides a clear interpretation for the U → A mapping, but also offers a much simpler and more systematic computing mechanism than that proposed by Albus, where some very complicated addressing techniques are utilised and a further hashing code may be needed to reduce the storage. In addition, as noted by Moody [11] and Lane et al. [12], graded neighbourhood functions overcome the problem of discontinuous responses over neighbourhood boundaries caused by crisp neighbourhood functions. However, in order to use the FCMAC, the reference vectors w^j must be specified in advance. Fortunately, this requirement can be met by introducing a fuzzified Kohonen self-organising scheme as described in sections 4.2 and 4.3. For further details see [13].

5. Application to Multivariable Blood Pressure Control

It is often required to regulate simultaneously the cardiac output (CO) and the mean arterial pressure (MAP) of a patient in hospital intensive care using various drugs. Two typical drugs used are dopamine (DOP), an inotropic drug, and sodium nitroprusside (SNP), a vasoactive drug. For the purpose of these simulation studies, we have adopted the same model [2] throughout, which is given by

[ΔCO ]   [ K11·e^{−τ1·s}/(T1·s + 1)   K12·e^{−τ2·s}/(T2·s + 1) ] [I1]
[ΔMAP] = [ K21·e^{−τ1·s}/(T1·s + 1)   K22·e^{−τ2·s}/(T2·s + 1) ] [I2]   (23)

where ΔCO (ml/s) is the change in cardiac output due to I1, and ΔMAP (mmHg) is the change in mean arterial pressure due to I1 and I2; I1 (μg/kg/min) is the infusion rate of dopamine; I2 (ml/h) is the infusion rate of nitroprusside; K11, K12, K21 and K22 are steady-state gains with typical values of 8.44, 5.275, −0.09 and −0.15 respectively; τ1 and τ2 represent two time delays with typical values of τ1 = 60 s and τ2 = 30 s; and T1 and T2 are time constants typified by the values 84.1 s and 58.75 s respectively. It is evident that the model is characterised by strong interactions between the variables and large time delays in control. All of the fuzzy-neural algorithms described in this paper have been applied to the above model, and in all cases provided satisfactory performance. Some typical results will be presented in the plenary lecture.
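As a rough illustration only, the model (23) can be simulated for constant infusions by a simple Euler scheme; the discretisation, the assignment of each delay and time constant to its input column, and all names are our own assumptions rather than the paper's simulation set-up:

```python
def simulate(I1, I2, dt=1.0, t_end=600.0):
    """Euler simulation of model (23) for constant (step) infusions.

    Four first-order lags with gains K[i][j]; input column j is delayed by
    tau[j] seconds and filtered with time constant T[j].
    Returns the final (dCO, dMAP) pair.
    """
    K = [[8.44, 5.275], [-0.09, -0.15]]
    tau, T = [60.0, 30.0], [84.1, 58.75]
    x = [[0.0, 0.0], [0.0, 0.0]]  # states of the four first-order lags
    t = 0.0
    while t < t_end:
        for i in range(2):        # output index: dCO, dMAP
            for j in range(2):    # input index: I1, I2
                # step input switches on after its transport delay
                u = (I1 if j == 0 else I2) if t >= tau[j] else 0.0
                x[i][j] += dt / T[j] * (K[i][j] * u - x[i][j])
        t += dt
    return x[0][0] + x[0][1], x[1][0] + x[1][1]
```

For a unit dopamine step alone, the responses settle towards the steady-state gains K11 and K21.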

6. Conclusions

Starting from a unified approach to approximate reasoning based on fuzzy logic, a range of architectures for fuzzy-neural control have been described. All of these algorithms have been validated on a multi-variable blood pressure control problem with significant time delays and interaction between variables. The overall concept has been to retain the nonlinear mapping merits of neural networks together with the explanation merits of fuzzy logic. Through a series of research studies, it has been shown that a self-organising structure can be obtained in which IF ... THEN rules can be elicited automatically within a model reference control framework. Apart from the references cited in the text, full details can be found in a forthcoming book [14]. The later fuzzy-neural algorithms are currently being evaluated on nonlinear multivariable anaesthetic models.

References

[1] Linkens, D.A. and Nie, J. (1992) "A unified real time approximate reasoning approach, Part 1", Int. J. Contr., 56, pp 347-393.


[2] Linkens, D.A. and Nie, J. (1992) "A unified real time approximate reasoning approach, Part 2", Int. J. Contr., 56, pp 365-393.
[3] Nie, J. and Linkens, D.A. (1992) "Neural network-based approximate reasoning", Int. J. Control, 56, pp 394-413.
[4] Linkens, D.A. and Nie, J. (1993) "Constructing rule-bases for multivariable fuzzy control by self-learning", Int. J. Syst. Sci., 24, pp 111-128.


Fig.l Fuzzy-neural learning system

[5] Nie, J. and Linkens, D.A. (1992) "A hybrid neural network-based self-organising fuzzy controller", Int. J. Contr., in press.
[6] Hecht-Nielsen, R. (1987) "Counterpropagation networks", Applied Optics, 26, pp 4979-4984.


[7] Nie, J. and Linkens, D.A. (1994) "Fast self-learning multivariable fuzzy controller constructed from a modified CPN network", Int. J. Contr., in press.
[8] Moody, J. and Darken, C. (1989) "Fast-learning in networks of locally-tuned processing units", Neural Comput., 1, pp 281-294.
[9] Nie, J. and Linkens, D.A. (1994) "Learning control using fuzzified self-organising Radial Basis Function network", IEEE Trans. Fuzzy Syst., in press.
[10] Albus, J.S. (1975) "A new approach to manipulator control: the Cerebellar Model Articulation Controller (CMAC)", J. Dyn. Sys. Meas. Contr., 97, pp 220-227.
[11] Moody, J. (1989) "Fast-learning in multi-resolution hierarchies", Advances in Neural Information Processing Systems, Vol. 1 (D. Touretzky, ed), Morgan Kaufmann.


Fig.2 Hybrid fuzzy-neural learning system

[12] Lane, S.H., Handelman, D.A. and Gelfand, J.J. (1992) "Theory and development of high-order CMAC neural networks", IEEE Contr. Syst. Mag., 12, pp 23-30.
[13] Nie, J. and Linkens, D.A. (1994) "FCMAC: a fuzzified Cerebellar Model Articulation Controller with self-organising capacity", Automatica, in press.
[14] Nie, J. and Linkens, D.A. "Fuzzy-Neural Control: principles, algorithms and applications", Prentice-Hall, to appear.


Fig.3 Hybrid neural network structure
