

Improved Marquardt Algorithm for Training Neural Networks for Chemical Process Modeling
- Journal: Tsinghua Science and Technology (清华大学学报)
- File size: 706 KB
- Authors: WU Jianyu (吴建昱), HE Xiaorong (何小荣)
- Affiliation: Department of Chemical Engineering
- Updated: 2020-11-11
TSINGHUA SCIENCE AND TECHNOLOGY ISSN 1007-0214 04/22 pp454-457, Volume 7, Number 5, October 2002

Improved Marquardt Algorithm for Training Neural Networks for Chemical Process Modeling

WU Jianyu (吴建昱), HE Xiaorong (何小荣)**
Department of Chemical Engineering, Tsinghua University, Beijing 100084, China

Abstract: Back-propagation (BP) artificial neural networks have been widely used to model chemical processes. BP networks are often trained with the generalized delta rule (GDR) algorithm, but the application of such networks is limited by the low convergence speed of that algorithm. This paper presents a new algorithm that incorporates the Marquardt algorithm into the BP algorithm for training feedforward BP neural networks. The new algorithm was tested with several case studies and used to model the Reid vapor pressure (RVP) of stabilizer gasoline. The new algorithm has faster convergence and is much more efficient than the GDR algorithm.

Key words: neural network; Marquardt algorithm; training

Introduction

Research on artificial neural networks (ANN) has made great progress during the past few years, and neural networks have been widely used in chemical processes. Among all kinds of networks, the back-propagation (BP) network is the most common choice because of its high capability for nonlinear mapping, learning, and classification. By adjusting the network weights according to samples, the BP network can simulate systems with complex nonlinear mapping relationships, such as chemical processes.

The most common method for the BP network training process is the generalized delta rule (GDR) algorithm. It is one of the algorithms that decrease function values based on function gradient changes. Consider the following problem:

    min F(x)

The iterative equation of the GDR algorithm is given by

    x^(k+1) = x^(k) - δ ∇F(x^(k))    (1)

The algorithm is simple and easy to program. However, it has some important drawbacks that limit the application of the network. First, the convergence speed of a first-order GDR is commonly very slow. Secondly, the training process is prone to being trapped at a local minimum, and oscillation often appears, which makes the convergence process fail.

Many improved methods have been presented to accelerate and ameliorate the convergence. The research falls roughly into two categories.
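The GDR update in Eq. (1) is plain gradient descent. A minimal sketch on a toy quadratic objective (our own example, not from the paper; the names `gdr_step` and `grad_f` are ours):

```python
import numpy as np

def gdr_step(x, grad_f, delta):
    """One generalized-delta-rule iteration: x^(k+1) = x^(k) - delta * grad F(x^(k))."""
    return x - delta * grad_f(x)

# Toy objective F(x) = x . x, whose gradient is 2x and whose minimum is x = 0.
grad_f = lambda x: 2.0 * x

x = np.array([3.0, -2.0])
for _ in range(100):
    x = gdr_step(x, grad_f, delta=0.1)
# Each step shrinks x by the factor (1 - 2*delta) = 0.8, so x is now near 0.
```

Because δ is fixed, too large a value makes the iterates oscillate or diverge and too small a value makes convergence crawl, which is exactly the behavior the improved methods below try to repair.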
The first category prevents the oscillation by dynamically adjusting δ (as it appears in Eq. (1)) and the gradient direction[2]. Adjusting schemes of this kind must be designed for particular conditions; therefore, a scheme sometimes works well but tends to fail under other conditions. The research of the second category focuses on combining the GDR algorithm with other algorithms, such as the conjugate gradient method[3]. Switching between algorithms can speed up the training process observably, but it is hard to control the switching between different algorithms efficiently.

In the research on this problem, some algorithms showed good application prospects by using information about the second partial derivatives[4]. However, directly calculating the second partial derivatives precisely is so difficult for complex problems like ANN that a common approach is to approximate them by other methods, such as the Marquardt (abbreviated M in this paper) algorithm.

(Received: 2001-04-02; revised: 2001-12-03. ** To whom correspondence should be addressed. Tel: 86-10-62784572; E-mail: hexr@tsinghua.edu.cn)

In fact, the training process of a BP-ANN is a typical nonlinear least-squares problem, and the M algorithm is one of the important methods for solving this kind of problem.
It had not been applied to the training of ANN, however, because a large-scale interim matrix arises during the convergence process of the M algorithm. It was difficult to apply the M algorithm to big networks with large numbers of weights because of the limitations of computer hardware in earlier times. In addition, though much faster convergence is obtained, the algorithm is still derivative-based and tends to converge to a local minimum. Research on these methods has been inactive because, in the past ten years, great progress has been made on the genetic algorithm (GA) and simulated annealing (SA), both of which can escape local minima to some extent and converge in a direction close to the global optimum.

Recently, with the development of computers, bottlenecks such as the physical size of EMS memory and the large program size have been overcome. The application of the M algorithm is now feasible, and this paper presents a new method for training BP-ANN based on the M algorithm.

1  M Algorithm and Calculation of the Jacobian Matrix

Consider the following common nonlinear least-squares problem:

    min E(x) = F^T(x) F(x),    x = (x_1, x_2, ..., x_n)^T ∈ R^n

The iterative equation of the M algorithm is given by[5,6]

    x^(k+1) = x^(k) + Δx^(k)    (2)

where Δx^(k) is determined by

    ((J^(k))^T J^(k) + λ^(k) I) Δx^(k) = -(J^(k))^T F(x^(k))    (3)

The output y' can be calculated with the following equations, which represent the forward transmitting process in the network:

    n_h(k) = Σ_j [w_h(k,j) · x'(j)] + b_h(k),
    o_h(k) = 1 / (1 + exp[-n_h(k)]),
    n_o = Σ_k [w_o(k) · o_h(k)] + b_o,
    y' = 1 / (1 + exp(-n_o))

The elements of the Jacobian matrix for each iteration are given by:

    ∂f_i/∂w_o(k) = -y'(1 - y') · o_h(k),
    ∂f_i/∂b_o = -y'(1 - y'),
    ∂f_i/∂w_h(k,j) = -y'(1 - y') · o_h(k) · [1 - o_h(k)] · w_o(k) · x'(j),
    ∂f_i/∂b_h(k) = -y'(1 - y') · o_h(k) · [1 - o_h(k)] · w_o(k)

The network can then be trained with the M algorithm through Eqs. (2) and (3) once initial weights are given.

[Fig. 1 Neural network model: a feedforward network with inputs x'(j), hidden biases b_h(1), ..., b_h(H), and output bias b_o]

2  Case Studies and Result Analyses

2.1  Case 1

200 training samples and 70 checking samples are generated from the function y = 100 + 50 sin x. Both the M and GDR algorithms are used to train the network with the same initial weights. The topological structure of …
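Putting Eqs. (2) and (3) together with the forward-pass and Jacobian formulas, the training scheme can be sketched as follows. This is a minimal sketch under our own assumptions: a single-input, single-output network with H sigmoid hidden neurons, residuals f_i = y_i - y'_i, and a simple accept/reject schedule for λ (a common Marquardt heuristic; the paper does not specify one). The names `train_m`, `jacobian_row`, etc. are ours, and the Case-1 data are rescaled into (0, 1) to match the sigmoid output.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, wh, bh, wo, bo):
    """Forward transmitting process: o_h(k) = sigmoid(n_h(k)), y' = sigmoid(n_o)."""
    oh = sigmoid(wh * x + bh)        # hidden activations o_h(k), shape (H,)
    y = sigmoid(wo @ oh + bo)        # network output y'
    return oh, y

def jacobian_row(x, oh, y, wo):
    """Partials of the residual f_i = y_i - y'_i for one sample (paper's formulas)."""
    g = y * (1.0 - y)
    d_wh = -g * oh * (1.0 - oh) * wo * x
    d_bh = -g * oh * (1.0 - oh) * wo
    d_wo = -g * oh
    d_bo = np.array([-g])
    return np.concatenate([d_wh, d_bh, d_wo, d_bo])

def unpack(p, H):
    return p[:H], p[H:2 * H], p[2 * H:3 * H], p[3 * H]

def sse(p, xs, ys, H):
    wh, bh, wo, bo = unpack(p, H)
    return sum((t - forward(x, wh, bh, wo, bo)[1]) ** 2 for x, t in zip(xs, ys))

def train_m(xs, ys, H=5, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    p = rng.normal(scale=0.5, size=3 * H + 1)   # weights packed as [w_h, b_h, w_o, b_o]
    lam = 0.01
    for _ in range(iters):
        wh, bh, wo, bo = unpack(p, H)
        rows, res = [], []
        for x, t in zip(xs, ys):
            oh, y = forward(x, wh, bh, wo, bo)
            rows.append(jacobian_row(x, oh, y, wo))
            res.append(t - y)
        J, F = np.array(rows), np.array(res)
        err = float(F @ F)
        for _ in range(30):   # raise lambda until the step of Eq. (3) reduces E(x)
            dx = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ F)
            if sse(p + dx, xs, ys, H) < err:
                p, lam = p + dx, max(lam / 10.0, 1e-12)
                break
            lam *= 10.0
    return p

# Case-1-style data: y = 100 + 50 sin x, rescaled from [50, 150] into (0, 1)
xs = np.linspace(-np.pi, np.pi, 200)
ys = (100.0 + 50.0 * np.sin(xs) - 50.0) / 100.0
p = train_m(xs, ys)
wh, bh, wo, bo = unpack(p, 5)
pred = np.array([forward(x, wh, bh, wo, bo)[1] for x in xs])
mse = float(np.mean((pred - ys) ** 2))
```

Note how λ blends the two regimes: a large λ makes Eq. (3) behave like a small GDR gradient step, while a small λ approaches the fast Gauss-Newton step, which is why the method converges so much faster than plain GDR near a minimum.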