Journal of Materials Processing Technology 71 (1997) 288–298
Modelling of submerged arc weld beads using self-adaptive offset neural networks

Ping Li, M.T.C. Fang *, J. Lucas

Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool L69 3BX, UK
Received 1 July 1996
Abstract
The nonlinear relationship between the five geometric descriptors (height, width, penetration, fused and deposited areas) of a bead and the welding parameters (current, voltage and welding speed) of submerged arc welding (SAW) has been modelled using neural networks. A comparative study between multi-output networks and single-output networks, each modelling one geometric descriptor, has shown the advantages of single-output networks. The structure of a conventional feedforward multilayer perceptron network with a single output is modified to accommodate an offset layer which offsets the inputs. This network, known as the self-adaptive offset network (SAON), has definite advantages over conventional multilayer perceptron networks. Altogether, 21 single-output neural networks have been trained for the four types of SAW welds investigated. These networks have achieved good agreement with the training data and have satisfactory generalisation. Overtraining is avoided by using the online checking facilities of the software. © 1997 Elsevier Science S.A.

Keywords: Submerged arc welding; Neural network; Welding; Self-adaptive offset network
1. Introduction
Arc welding is one of the most widely-used processes in metal joining, the process involving the melting and solidification of the joined materials. The geometry of the resulting weld bead is a good indicator of the weld quality. However, the geometric parameters, such as the bead width, bead height, penetration, etc., are closely coupled. Thus, independent adjustment of any single parameter is difficult. Because of the unknown nonlinear relationships between the desired bead geometry and the welding parameters (electric current, voltage, welding speed, etc.), it is difficult to know the correct machine setting that will ensure satisfactory weld quality for a given task. Automatic submerged arc welding (SAW) is an important arc welding process and is often used for joining two thick workpieces, an arc 'submerged' (thus giving the process its name) by the molten slag being used as the heat source. A blanket of granular flux, fused by the arc, forms the slag which

* Corresponding author. Fax: +44 151 794 4519; e-mail: [email protected]
isolates the welding zone from the air. A wire which serves as the electrode is fed continuously because it is consumed by the weld bead. For a given type of joint and for a given workpiece material and flux, the dimensions of the bead geometry are controlled by the welding process parameters. The principal control variables are the arc current, arc voltage and welding speed, the auxiliary control variables being the feed rate, extension, diameter and angle of the electrode and the current type (AC or DC, and the electrode polarity in the case of DC). Multiple linear-regression techniques were used to establish mathematical models for the SAW bead geometry [1-3]. Although some relationships between the weld geometry parameters and the welding parameters are approximately linear (e.g. the fused area varies linearly with the welding current), the welding process as a whole is still nonlinear. For example, the relationship between the SAW bead width or height and the welding current is often highly nonlinear. Therefore, the linear-regression technique cannot adequately describe the welding process as a whole. The inadequacy of linear regression prompts the present investigation,
Table 1
Distribution of the test results amongst the four types of joints

Type of joint    Number of sets of test results
BOP              35
SECB             21
BI               28
DI               20
which is based on nonlinear mapping between the bead geometry and the welding process parameters. Artificial neural networks are inherently nonlinear. They can approximate a majority of continuous nonlinear functions, so that they are more suitable for modelling the SAW bead geometry. Recently, neural networks have been applied to model the penetration of SAW bead geometry [4,5] and to a square edge butt joint [6]. A feedforward neural network is a network in which the training data is fed forward from its input to its output layer with no feedback from the output layer. The present investigation is concerned with using feedforward networks to model five bead geometry parameters (the bead height, width, penetration, deposited area and fused area) for four types of butt joints (i.e. Bead-On-Plate (BOP), Square Edge Close Butt (SECB), BI preparation and DI preparation). Since the invention of SAW in the mid-1930s, numerous experiments have been conducted with the aim of establishing a data base for welding engineers. However, because of the complexity of SAW, some test results are not in agreement [3]. In the work described here, the neural-network SAW models are established using the results based on the latest experiments of Weimann [2]. The organisation of the rest of this paper is as follows. Section 2 describes Weimann's data. The neural networks for the modelling of the SAW beads are then introduced and two improved structures of the conventional feedforward neural network, known as the simple buffer network (Section 3) and the self-adaptive offset network (SAON) (Section 4), are proposed. In Section 5, the problem associated with overtraining and its avoidance are discussed. The results of the neural-network models are compared in Section 6 with the results of tests and with those given by statistical regression. Finally, conclusions are drawn.
2. Weimann's data

The SAW experiments carried out by Weimann [2] consist of the four types of butt joints mentioned above. There are one hundred and four sets of experimental results available for training neural networks. The distribution of the experimental results among the four types is given in Table 1. Weimann used an ESAB LAE 1250 power source, which supplies a constant voltage. Because the electrode is consumed in SAW, the electrode wire is fed continuously at a speed known as the wire feed rate (WFR). For a SAW process with a constant-voltage type of power source, the WFR is approximately a linear function of the welding current and is therefore not chosen as an independent variable [2]. The power source supplies a current in the range of 250 A at 24 V to a maximum of 1250 A at 44 V. Although the desired current and voltage must be chosen before welding, these two variables need to be adjusted to achieve their optimum values after the arc is initiated. The current and the voltage can only be adjusted in steps. The welding speed of an ESAB A6S welding head mounted on an ESAB A6BUP welding machine is set and controlled by an ESAB PTF control box, the range of the speed being from 60 to 2500 mm min⁻¹. In addition to the welding equipment, the welding consumables, i.e. the electrode wire and flux, should be compatible with the equipment. SOUDOR B wires combined with TITAN B fluxes produce satisfactory results when used in a single-wire SAW process with the following machine settings: welding current 300–900 A; arc voltage 24–38 V; and welding speed 200–1500 mm min⁻¹. The machine settings of the SAW experiments performed by Weimann fall within a similar range: welding current 250–820 A; arc voltage 25–44 V; and welding speed 250–1100 mm min⁻¹. The reproducibility of the tests has been investigated by choosing the results for which the welding parameters are set nominally at 550 A, 30 V and 500 mm min⁻¹, respectively.
For all of the experiments in [2], the diameter of the feed wire is 3.25 mm and it is at 90° to the workpiece. The electrode is positive under DC operation, and its extension is 30 mm. The workpiece is Lloyd's Grade A type carbon-manganese normal-strength shipbuilding steel.
Table 2
Statistical indicators of the welding process parameters for BOP with nominal values set at 550 A, 30 V and 500 mm min⁻¹

Measurement        Mean    Standard deviation   Standard deviation as a percentage of the mean value
Current (A)        554.7   4.47                 0.81%
Voltage (V)        30.3    0.09                 0.30%
Speed (mm min⁻¹)   499.4   1.99                 0.40%
Table 3
Statistical behaviour of the weld bead dimensions derived from six tests for BOP

Measurement            Mean    Standard deviation   Standard deviation as a percentage of the mean value
Penetration (mm)       6.55    0.21                 3.25%
Width (mm)             17.04   0.36                 2.13%
Height (mm)            2.22    0.07                 3.32%
Deposited area (mm²)   26.6    0.90                 3.37%
Fused area (mm²)       56.6    1.54                 2.72%
The typical reproducibility of the test results is illustrated by considering six BOP weld runs. These tests were carried out on workpieces with dimensions of 500 × 200 × 12.5 mm, where the last number refers to the thickness of the workpiece. Statistical analysis was performed on these welding parameters and the test results, the resulting mean, standard deviation and the ratio of these two quantities being shown in Tables 2 and 3, which show that the reproducibility of the experiments is adequate for the establishment of SAW models. The measurement accuracy for the current is ±1% and for the voltage is ±1.5 V. However, the accuracy for the welding speed is not known [2]. For the other types of joints, the reproducibility is similar to that of BOP.
3. Treatment of input and output data and network structure

The present objective is to establish neural-network models for SAW based on conventional feedforward multilayer perceptrons with error back-propagation as the learning algorithm (hereafter referred to as MLP). A commercial package, Professional II/Plus [11], has been used for this purpose. It is well-known that for an MLP the learning and generalisation ability depends on the training data. Preprocessing of the input data can greatly improve the performance of a network both during training and generalisation [13]. For an MLP the transfer function of the output neurons is of sigmoid type. Thus, the range of the output variables is restricted to (0, 1). Scaling of the output variables is therefore necessary. For the present investigation, scaling of the output variables is avoided by introducing an appropriate linear transfer function for the output neurons. For an MLP the input data is usually scaled to a range of [−1, 1] [9] or [−2, 2] [7], which ensures a reasonable gradient of the sigmoid, the transfer function of the hidden and output neurons. The determination of the optimum network structure of an MLP is based on trial-and-error. There are five quantities characterising the SAW bead geometry to be modelled. The training of a multi-output network usually suffers from crosstalk [12] and it is extremely time-consuming to train such a network and to find its optimum structure. Here, a multi-output network for the SAW bead geometry is decomposed into five single-output networks, each taking care of one aspect of the bead geometry. The results of a comparative study of multi-output and single-output networks are given.

3.1. Preprocessing and scaling of input training data
The input training data can be scaled into an appropriate range by using a linear transformation:

S_i,p = x_i,p a_i + b_i    (1)

where 1 ≤ i ≤ 3 is the index which denotes the component of the input vector, the second subscript denotes the pth input vector, and x_i,p is the original input. The two parameters, the scaling factor a_i and the offset factor b_i, are given by:

a_i = [(S_i)max − (S_i)min]/[(X_i)max − (X_i)min],

and:

b_i = [(X_i)max (S_i)min − (X_i)min (S_i)max]/[(X_i)max − (X_i)min],

Fig. 1. Dimensional descriptors of the bead geometry: W = width; H = height; P = penetration; A_d = deposited area; and A_f = fused area.

Table 4
Influence of the voltage offset on the network performance

Offset of the voltage input    RMS
0                              3.644004
20                             3.158426
−20                            2.422281

The RMS is calculated using Eq. (3) with k = 1 and p = 28.
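As a minimal sketch (the function names and the example range are assumptions, not the paper's implementation), the scaling of Eq. (1) maps each input component from its experimental range onto a target range:

```python
# Sketch of the linear input scaling of Eq. (1): S = x * a + b.
# The target range [-1, 1] and the current range [250, 820] A are
# illustrative values taken from the machine-setting ranges in Section 2.

def scaling_params(x_min, x_max, s_min, s_max):
    """Return (a, b) such that x_min maps to s_min and x_max to s_max."""
    a = (s_max - s_min) / (x_max - x_min)
    b = (x_max * s_min - x_min * s_max) / (x_max - x_min)
    return a, b

def scale(x, a, b):
    return x * a + b

# Example: map the welding current in [250, 820] A onto [-1, 1].
a, b = scaling_params(250.0, 820.0, -1.0, 1.0)
print(round(scale(250.0, a, b), 6))  # -1.0
print(round(scale(820.0, a, b), 6))  # 1.0
```

The offset factor b plays the same role as the buffer neurons mentioned below: it is simply absorbed into the linear transfer function of Eq. (1).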
Fig. 2. A typical structure of the self-adaptive offset neural networks. A square-shaped neuron denotes that it has a linear transfer function, whilst a circular-shaped neuron has a nonlinear transfer function.
where (S_i)max and (S_i)min are respectively the maximum and minimum of the scaled value for the ith component of the input, and (X_i)max and (X_i)min are the maximum and minimum values of the ith component of the input amongst the available training sets. The implementation of the scaling may be through the adoption of buffer neurons between the input and hidden layers. The transfer function of the buffer neuron is identical with that given by Eq. (1). However, scaled input data can be input directly into the network, thus reducing the number of neurons in the networks.

3.2. The modified output layer of the SAW neural network models
The neurons in the output layer of an MLP are modified to enable the outputs of the SAW neural networks to take arbitrary values. This modification will not affect the ability of an MLP to model a nonlinear relationship such as that between the bead geometry inputs (i.e. voltage, current and welding speed) and outputs, since an MLP with one hidden layer can approximate any nonlinear function. Instead of using a sigmoid as the transfer function for the output neuron, the following is used:

y = ax + b    (2)
Table 5
Optimisation process based on the error back-propagation learning algorithm for the voltage offset

Learning iteration   Offset of the voltage input   RMS (fused area)
0                    −0.0976                       –
600                  −15.0943                      3.121722
1200                 −18.4813                      2.733054
1800                 −18.8532                      2.653465
2400                 −19.1782                      2.587693
3000                 −19.4545                      2.535049
3600                 −19.6864                      2.493320
4200                 −19.8808                      2.459944
4800                 −20.0468                      2.432631
5400                 −20.1942                      2.409586

The RMS is calculated using Eq. (3) with k = 1 and p = 28. Welding type: BI preparation.
which ensures that the output can take any real value. For the SAW networks it has been found that a = 2 and b = 0 are satisfactory. In practice the network performance is not sensitive to the values of these two parameters.

3.3. A comparative study of single- and multi-output networks
Three bead geometry quantities (height, penetration and width), which are the outputs of the neural networks, suffice for a comparative study. The output neurons have a linear transfer function but the input data are not scaled. The network inputs are voltage, current and welding speed. The transfer function of the neurons in the hidden layer is a conventional sigmoid defined by:

f(z) = 1/(1 + exp(−z)).

The relative merits of the networks are judged by comparing the RMS error after a specified number of learning cycles as well as the ability of generalisation. The RMS error is defined by:

RMS err. = √{Σ_k Σ_p (experimental result_k,p − network output_k,p)² / (kp)}    (3)
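For concreteness, the sigmoid and the RMS error of Eq. (3) can be sketched as follows; the sample target and output values are made up for illustration and are not the paper's data:

```python
import math

def sigmoid(z):
    """Conventional sigmoid transfer function of the hidden neurons."""
    return 1.0 / (1.0 + math.exp(-z))

def rms_error(targets, outputs):
    """RMS error of Eq. (3): targets/outputs are p vectors of k components each."""
    p = len(targets)
    k = len(targets[0])
    sq = sum((t - o) ** 2
             for tv, ov in zip(targets, outputs)
             for t, o in zip(tv, ov))
    return math.sqrt(sq / (k * p))

# Single-output example (k = 1, p = 3):
targets = [[6.55], [17.04], [2.22]]
outputs = [[6.40], [17.30], [2.00]]
print(round(rms_error(targets, outputs), 4))  # 0.2149
print(sigmoid(0.0))                           # 0.5
```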
where k and p refer respectively to the kth component of the output vector and the pth set of training data. The generalisation ability of a trained network can be checked by evaluating its outputs and their trends whilst varying the network inputs. For both multi-output and single-output networks the transfer function of the output neurons is the linear function of Eq. (2) with a = 2 and b = 0. Extensive investigation has been carried out for SECB welds. During the search for the optimum network structure for single- and multi-output networks, the number of hidden neurons and the parameters of the learning algorithm (i.e. the learning rate and momentum) have been adjusted. Altogether 29 multi-output networks and 97 single-output networks have been established and their RMS errors compared [8]. It has
Table 6
Comparison between SAON and standard (conventional MLP) networks

Model                 Network type                Hidden neurons   Offset (current)   Offset (voltage)   Offset (speed)   RMS (mm²)   Iteration
BOP fused area        SAON                        4                2.7267             81.9169            77.7814          2.17296     2400
                      Standard                    8                0                  0                  0                3.48303     3000
SECB fused area       SAON                        2                3.3913             10.9782            6.9841           2.15361     2302
                      Standard                    4                0                  0                  0                2.52023     600
BI fused area         SAON                        4                7.4496             18.6275            5.2031           2.15256     12000
                      SAON (voltage only)         4                0                  −20.1942           0                2.40959     5400
DI fused area         SAON                        3                22.4212            85.8886            6.8851           2.08533     7200
                      Standard (voltage only)     10               0                  120                0                2.32601     3600
SECB deposited area   SAON                        3                43.5427            18.6094            41.2931          1.54733     2400
                      Standard                    7                0                  0                  0                2.93942     3000
BI deposited area     SAON                        4                22.5175            6.7411             29.7640          1.52506     3000
                      Standard                    6                0                  0                  0                2.27626     3000
BOP deposited area    SAON                        4                70.0786            56.6051            56.3810          1.92684     4200
                      Standard                    5                0                  0                  0                2.22584     3000

The RMS for a given network was calculated from the available test results for training as given in Table 1.
been found that, of the three bead geometry quantities, the bead width is the most difficult to model. The single-output network for the bead width has the greatest number of hidden neurons. For multi-output networks the number of hidden neurons is greater than those of the single-output networks for height and penetration, but is slightly less than that of the single-output network for the bead width. Compared with the multi-output networks, the training and the search for the optimum structure for single-output networks are easier and quicker. It is for this reason that multi-output networks will not be adopted for the modelling of SAW. The single-output network structure will also be adopted for the fused and deposited areas for the other types of SAW processes (i.e. BOP, BI preparation and DI preparation).
4. The self-adaptive offset network (SAON)

During the development of the neural network models for SAW, it was found that the prediction of the networks could be improved greatly by offsetting the scaled input data. This offset shifts the scaled range of the ith input from [(S_i)min, (S_i)max] to [(S_i)min + c_i, (S_i)max + c_i]. Three networks with a structure of 3-4-1 have been constructed to investigate the effects of offsetting the voltage, the values of which were assumed. The results of the two networks with the offset for the input voltage set at 20 and −20, respectively, are compared with the network without offset in Table 4: the improvement in the network accuracy is apparent.
A traditional MLP can offset the net input to the hidden layer by introducing a bias:

z_j^hid = Σ_i w_ji^hid x_i + w_jb^hid

where z_j^hid denotes the net input to the jth hidden neuron, w_ji^hid is the hidden-layer weight on the connection from the ith input neuron to the jth hidden neuron, and w_jb^hid is the weight of the connection from the bias. However, the structure of the conventional network cannot offset the input data before they are summed. A new network structure, known as the self-adaptive offset network (SAON), is introduced to automatically compute the offset values using the error back-propagation learning algorithm. The SAON consists of input neurons, an offset (sub)layer (which can be considered as a part of the input layer), a hidden layer and an output layer (Fig. 2). Usually, the offset layer has the same number of neurons as the input layer. Unlike the simple buffer network, each offset neuron is connected not only to a corresponding input neuron but also to the bias. The weights of the connections from the input neurons are not varied, whilst the weights between the bias and the offset neurons and those between the offset and hidden neurons are varied using the error back-propagation learning algorithm during training. The transfer function of the offset neuron is the same as Eq. (2) with a = 1 and b = 0. Thus, the output of the offset neuron is:

o_i^off = o_i^input + w_ib^off    (4)
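The offset mechanism of Eq. (4) can be illustrated with a small numerical sketch in which the input offsets are trained by error back-propagation alongside the ordinary weights. The layer sizes, learning rate and toy target function below are assumptions for illustration only, not the paper's SAW models:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SAON:
    """3-input, single-output MLP with a trainable offset layer (Eq. (4), a = 1)."""

    def __init__(self, n_in=3, n_hid=4):
        self.c = np.zeros(n_in)                   # self-adaptive input offsets
        self.W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
        self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0.0, 0.5, n_hid)
        self.b2 = 0.0

    def forward(self, x):
        self.o_off = x + self.c                   # offset layer output, Eq. (4)
        self.h = sigmoid(self.W1 @ self.o_off + self.b1)
        return float(self.W2 @ self.h + self.b2)  # linear output neuron, Eq. (2)

    def train_step(self, x, y, lr=0.1):
        out = self.forward(x)
        d_out = out - y                           # gradient of 0.5 * err^2
        d_h = d_out * self.W2 * self.h * (1.0 - self.h)
        grad_c = self.W1.T @ d_h                  # offsets learn by back-propagation
        self.W2 -= lr * d_out * self.h
        self.b2 -= lr * d_out
        self.W1 -= lr * np.outer(d_h, self.o_off)
        self.b1 -= lr * d_h
        self.c -= lr * grad_c

net = SAON()
X = rng.uniform(-1.0, 1.0, (40, 3))
Y = 0.5 * X[:, 0] - X[:, 1] + 0.3                 # toy target, not SAW data
for _ in range(500):
    for x, y in zip(X, Y):
        net.train_step(x, y)
mse = float(np.mean([(net.forward(x) - y) ** 2 for x, y in zip(X, Y)]))
```

With a = 1 the offset neuron simply adds a trainable value to each input before the weighted sum, which is exactly the operation the conventional structure cannot perform.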
Table 7
Effect of the SAON that learns the scale and offset parameters for the input vectors of BI bead height, compared with the standard feedforward network

Network type   Hidden neurons   Scale (current, voltage, speed)   Offset (current, voltage, speed)   RMS (mm)   Iteration
SAON           7                0.826, 1.772, 2.101               3.…, 2.623, 2.057                  0.164      1200
Standard       6                Not applicable                    Not applicable                     0.212      5400

where the superscript denotes the offset or input layer, subscript b denotes the bias, and o and w are respectively the output and the weight. With a = 1 the offset can be implemented by directly introducing a bias to the input neuron without introducing an offset layer. However, Professional II/Plus does not allow such a bias of the input data. The introduction of an offset layer gives the flexibility of optimising the values of a and b using the learning algorithm, should the introduction of these parameters be beneficial. The offset layer is designed to overcome the drawback of the conventional feedforward neural network, which cannot automatically offset the input data before they form the weighted sum to the hidden layer. A (3-1)-4-1 SAON is constructed to investigate the effect of the voltage input offset; (3-1) indicates a modified input layer, which consists of three input neurons and one offset neuron connected to the voltage input. Table 5 shows the effect on the RMS when the voltage input offset is optimised by the learning algorithm, the initial offset value of −0.0976 being chosen randomly. For the SAON the learning algorithm automatically optimises the voltage offset towards −20, as shown in Table 5. Altogether, seven SAONs with the same structure as that given in Fig. 2 have been trained successfully, their performance being compared with the best-trained conventional MLP (denoted as standard in Table 6). All the networks in Table 6 have three inputs and one output. The number of iterations given in the table refers to the RMS attained at a given number of iterations. This iteration number is chosen so that the variation in RMS does not exceed 1% for a further 200 iterations. Two networks (an SAON and a standard MLP) have only a voltage offset. It is shown that, compared with the partially-offset networks, the fully-offset SAON acquires a higher accuracy. However, this achievement is sometimes at the expense of longer training.

A large number of SAONs and MLPs have been trained during the present investigation. The networks given in Table 6 for the fused and deposited areas are chosen according to: (i) no overtraining has been observed for these networks (see the next section); (ii) their
accuracy is matched to their respective maximum output (Table 8); and (iii) the smallest RMS. There are two exceptions, one being the SAON for the BOP deposited area and the other being the standard network for the SECB deposited area. In general, a SAON requires fewer hidden neurons. Finally, Table 7 gives an example of an SAON with the coefficients (a and b of Eq. (2)) of the linear transfer function of the offset neurons adjusted automatically by the error back-propagation learning algorithm. For such a network, the coefficient a plays the role of scaling.
5. Overtraining and online checking

Overtraining, which is characterised by a good fit of the training data but very poor generalisation between the training data, should be avoided. Although there are altogether 104 sets of training data (Table 1) for SAW, it has been found that all of the test data need to be used for training. The 104 sets of test data cover 12 different test conditions for the four types of joints. For each type of joint the three welding input parameters are adjusted in turn whilst keeping the other two fixed during tests. Because of the cost of obtaining the test results, for each test condition (i.e. varying one input parameter) there are only up to nine sets of test results. Since the performance of a network depends on the test data, reserving some test results for checking the generalisation ability of a trained network has been found to result in very poor network performance. It is for this reason that all of the 104 sets of test results have been used for the training. The generalisation ability of the network has been assured by using the online checking facility of Professional II/Plus, i.e. the behaviour of the network is checked constantly by visual inspection [10] of the output to see whether the network fits the training data well, as well as whether the network prediction for the range of input variables for which there are no test results agrees with the general trend. Visual inspection is carried out after every 500 or 600 iterations, with typical outputs on the video screen as those given in Fig. 3.
Fig. 3. On-screen checking of the trend between two test points (×) by changing one of the three welding parameters (increasing current, increasing speed, increasing voltage). This conventional network is for SECB width.
Extrapolation of the range of applicability of a trained network to a region which is not covered by the test data is always a difficult problem. The range was extended by observing the trend of the extended curve (e.g. Fig. 4) and sometimes checks were also carried out in a three-dimensional space (e.g. penetration as a function of current and voltage whilst keeping the welding speed constant) to see whether the extension follows a smooth extended surface. The present extension appears to be successful. However, in order to be able to fully validate the trained networks, more test results will be needed.

Table 8
The structures of the 21 neural networks for the modelling of SAW

                 BOP         SECB        BI             DI
Fused area       (3-3)-4-1   (3-3)-2-1   (3-3)-4-1      (3-3)-3-1
Deposited area   (3-3)-4-1   (3-3)-3-1   (3-3)-4-1      3-14-1
Bead width       3-15-1      3-21-1      3-11-1         3-8-1
Penetration      3-7-1       3-2-1       3-3-1 (Root)   3-5-1 (Root), 3-10-1 (Effective)
Bead height      3-7-1       3-3-1       (3-3)-7-1      3-3-1
6. The established SAW neural network models and results
Altogether 21 neural network models have been established, all of which have three inputs and one output (Table 8). Some of these networks have an offset layer as defined previously. For the offset and output neurons the coefficient a in Eq. (2) is set respectively to 1 and 2. The structure of a network is denoted by, for example, (3-3)-7-1, the numbers in the brackets indicating the number of input neurons and that of the offset neurons. The accuracy of these networks in terms of RMS is given in Table 9, where the range of the experimental results for each output is also given. The table is organised in the numerical order of the maxima of the test ranges. The RMS is matched to this order, since the RMS should be bigger for a larger value of the maximum. However, there are some exceptions, these networks being listed separately in Table 10. Typical results are shown in Fig. 4 for the five geometrical factors of the SECB welds. The points in this diagram are the training data and the solid lines are those predicted by the corresponding neural networks. The results show that the trained networks have sufficient accuracy and good generalisation ability between the training data.
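The structure notation can be read mechanically. The following helper is hypothetical (its name and interface are assumptions, not part of the paper); it simply parses a structure string such as (3-3)-7-1 into its layer sizes:

```python
# Hypothetical reader for the structure notation of Table 8:
# "(3-3)-7-1" = 3 input neurons, 3 offset neurons, 7 hidden, 1 output;
# "3-7-1" has no offset layer.

def parse_structure(s):
    """Return (n_input, n_offset, hidden_sizes, n_output)."""
    n_offset = 0
    if s.startswith("("):
        head, rest = s[1:].split(")", 1)
        n_input, n_offset = (int(v) for v in head.split("-"))
        parts = [int(v) for v in rest.strip("-").split("-")]
    else:
        parts = [int(v) for v in s.split("-")]
        n_input = parts.pop(0)
    *hidden, n_output = parts
    return n_input, n_offset, hidden, n_output

print(parse_structure("(3-3)-7-1"))  # (3, 3, [7], 1)
print(parse_structure("3-7-1"))      # (3, 0, [7], 1)
```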
Fig. 4. The prediction of the five single-output networks (Table 8) (solid curves) compared with linear regression (dashed curves), as functions of welding current (A), welding speed (mm/min) and arc voltage (V). The points are test results for SECB welding.
Fig. 5. The prediction of the five single-output networks (Table 8) (solid curves) compared with linear regression (dashed curves). The points are test results for BOP welding.
Table 9
The trained neural networks listed in the order of the maximum value of the required network output

Network model         Type   RMS       Experimental results
                                       Max     Min      Range
BOP fused area        SAON   2.17296   117.0   13.3     103.7
SECB fused area       SAON   2.15361   98.3    27.7     70.6
BI fused area         SAON   2.15256   97.1    28.8     68.3
DI fused area         SAON   2.08533   87.7    33.1     54.6
SECB deposited area   SAON   1.54733   68.0    15.3     52.7
BI deposited area     SAON   1.52506   66.7    19.8     46.9
SECB bead width              0.32348   24.55   12.26    12.29
BOP bead width               0.30234   23.68   9.93     13.75
BI bead width                0.29843   22.84   8.69     14.15
DI bead width                0.28485   14.96   7.31     7.65
BOP penetration              0.27778   12.79   2.05     10.74
SECB penetration             0.26947   10.20   3.51     6.69
BI root penetration          0.26472   9.23    0.76     8.47
BOP bead height              0.17083   7.39    2.01     5.38
BI bead height        SAON   0.16415   6.51    −0.63    7.14

The RMS of the networks is matched to these maximum values.
Also plotted in Fig. 4 are the results of linear regression [2]: the latter gives poor results when the voltage is varied. There is also a discontinuous variation at a current of around 600 A. Similar results for BOP are given in Fig. 5. In general, the neural networks presented in Tables 8 and 9 give a good overall fit of the training data and generalisation.
7. Conclusions

It has been shown that a SAW process can be modelled by neural networks with a single output based on the error backpropagation learning algorithm. Scaling or offsetting of the input data can improve the performance of a network. Since the quantities characterising the bead geometry can in principle take on any numerical values, a linear transfer function has been adopted for the output neuron, thus avoiding the scaling of the output data. A detailed comparative study between conventional MLP networks and the MLP with offset of the inputs (SAON) shows that SAON usually achieves better performance. Overtraining is avoided by using online checking and by choosing a proper network structure.

The networks were trained using 104 sets of test results. All of the available test data were used for training, as it was found that reserving some test data for checking the generalisation ability of the trained network resulted in poor performance; instead, generalisation was ensured by online checking. Full verification of the present trained networks, especially outside the range covered by the present test data, requires further test data.
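The conclusions describe the SAON as a standard single-output MLP whose inputs first pass through an offset layer, with a linear output neuron so that the unscaled bead descriptor can be produced directly. A minimal forward-pass sketch under those constraints is given below; the hidden-layer size, sigmoid activation, and weight initialisation are assumptions, and in the paper the offsets are learned by backpropagation along with the weights rather than fixed as here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SAON:
    """Single-output MLP whose inputs pass through an offset layer
    (x - c) before the hidden layer; the output neuron is linear, so
    the target (a bead descriptor) needs no scaling."""

    def __init__(self, n_in, n_hidden, c0=None):
        # Offsets c are trainable in the paper; here they are simply
        # initialised (e.g. near the operating point of the inputs).
        self.c = np.zeros(n_in) if c0 is None else np.asarray(c0, float)
        self.W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.5, n_hidden)
        self.b2 = 0.0

    def forward(self, x):
        h = sigmoid(self.W1 @ (x - self.c) + self.b1)
        return self.w2 @ h + self.b2          # linear output neuron

# Hypothetical usage: inputs are (current [A], voltage [V], speed).
net = SAON(n_in=3, n_hidden=5, c0=[450.0, 30.0, 5.0])
y = net.forward(np.array([500.0, 30.0, 6.0]))
```

Offsetting the inputs toward their operating point keeps the hidden-unit activations away from the saturated tails of the sigmoid, which is one plausible reason the offset layer eases training on raw welding parameters whose magnitudes differ by two orders.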
Acknowledgements

The authors would like to thank Dr. D.H. Weimann of the Queen's University of Belfast for making the SAW data available.
Table 10
The RMS errors of the networks listed below do not match their corresponding maximum values

Network model              RMS error   Exp. max   Exp. min   Exp. range
D1 deposited area            2.76921     75.3      19.3        56.0
BOP deposited area           1.92684     62.7      11.0        51.7
D1 effective penetration     0.56558     14.88      6.19        8.69
D1 root penetration          0.38171      5.74     -0.33        6.07
SECB bead height             0.19680      4.95      1.77        3.18
D1 bead height               0.34037      7.35      2.37        4.98

These networks cannot be listed in the order adopted for Table 9.
References

[1] L.J. Yang, M.J. Bibby, R.S. Chandel, Linear regression equations for modeling the submerged-arc welding process, J. Mater. Process. Technol. 39 (1-2) (1993) 33-42.
[2] D.H. Weimann, A study of welding procedure generation for the submerged-arc welding process, Ph.D. Thesis, The Queen's University of Belfast, UK, 1991.
[3] W.A. Taylor, D.H. Weimann, P.J. Martin, Knowledge acquisition and synthesis in a multiple source multiple domain process context, Exp. Syst. Appl. 8 (2) (1995) 295-302.
[4] R.R. Stroud, T.J. Harris, K.L. Taylor-Burge, Neural networks in automated weld control, in: W. Lucas (Ed.), Automated Welding Systems in Manufacturing, Paper 22, Gateshead, UK, 10-17 Nov. 1991. Published by TWI, Abington Hall, Abington, Cambridge CB1 6AL, UK.
[5] T.J. Harris, R.R. Stroud, The control of submerged arc welding using neural network interpretation of ultrasound, Proc. 2nd Int. Conf. Artificial Neural Networks, ICANN-92, Brighton, UK, 1992.
[6] X.S. Xu, J.E. Jones, Sutter, X.L. Xu, A neural network model of weld bead geometry control system, in: T.A. Siewert (Ed.), Proc. Int. Conf. Computerization of Welding Information IV, Orlando, Florida, American Welding Society, Miami, Florida, 1993, pp. 267-294.
[7] A.R. Gallant, H. White, On learning the derivatives of an unknown mapping with multilayer feedforward networks, Neur. Net. 5 (1) (1992) 129-138.
[8] P. Li, Neural networks for automatic arc welding, Ph.D. Thesis, The University of Liverpool, UK, 1995.
[9] Y.H. Pao, Adaptive Pattern Recognition and Neural Networks, Addison-Wesley, Reading, MA, 1989.
[10] S. Dutta, S. Shekhar, W.H. Wong, Decision support in non-conservative domains: generalization with neural networks, Dec. Sup. Syst. 11 (5) (1994) 527-544.
[11] P.G. Raeth, Applying technology with NeuralWorks Professional II, Computer 24 (1) (1991) 111-112.
[12] T. Hong, M.T.C. Fang, D. Hilden, PD classification by a modular neural network based on task decomposition, IEEE Trans. Dielectr. Electr. Insul. 3 (2) (1996) 207-212.
[13] J.A. Anderson, E. Rosenfeld, Neurocomputing: Foundations of Research, The MIT Press, Cambridge, MA, 1988, p. 508.