

Projection type neural network and its convergence analysis
- Journal: Journal of Control Theory and Applications (English edition)
- File size: 397 KB
- Authors: Youmei LI, Feilong CAO
- Affiliation: Department of Computer Science, China Jiliang University
- Updated: 2020-12-06
Journal of Control Theory and Applications 3 (2006) 286–290

Projection type neural network and its convergence analysis

Youmei LI (1), Feilong CAO (2)

(1. Department of Computer Science, Shaoxing College of Arts and Sciences, Shaoxing, Zhejiang 312000, China; 2. China Jiliang University, Hangzhou, Zhejiang 310018, China)

Abstract: The projection type neural network for optimization problems has advantages over other networks: fewer parameters, a lower-dimensional search space, and a simple structure. In this paper, by properly constructing a Lyapunov energy function, we prove the global convergence of this network when it is used to optimize a continuously differentiable convex function defined on a closed convex set. The result establishes the wide applicability of the network. Several numerical examples are given to verify the efficiency of the network.

Keywords: Neural network; Convex programming; Global convergence; Equilibrium points

1 Introduction

Recurrent neural networks are preferred optimization solvers because of their swiftness and validity, and they have been investigated extensively in past decades. A number of neural networks exist for convex programming or its special cases [1~3]. However, their applications are limited by some drawbacks, such as the difficulty of parameter setting and the stringent conditions under which convergence of the neural network can be guaranteed.

Recently, in [4], a projection-type neural network was proposed to solve nonlinear optimization problems with a continuously differentiable objective function and bound constraints. This model has some advantages over others in network computation and implementation: the number of variable parameters is smaller, and under suitable conditions the equilibrium points of the network correspond to the solutions of the optimization problem.

Regrettably, however, the model has limitations when applied to convex optimization problems, because its global convergence requires the objective function to be a strictly convex quadratic function and the feasible region to be bounded on both sides. In this paper, we generalize the projection-type neural network of [4] in two respects: 1) the feasible region may be any nonempty closed convex set, not necessarily a bound-constrained region; 2) the objective function is only assumed to be convex. On this basis, the model is globally convergent for convex programming; in particular, it covers special cases such as linear programming, convex quadratic programming, and strictly convex programming.

2 Projection-type neural network model

Consider the following convex programming problem:

min{E(x), x ∈ Ω ⊂ Rⁿ},  (1)

where Ω is a closed, nonempty convex subset of Rⁿ, and E(x) is a continuously differentiable convex function of x ∈ Ω. Let ∇E(x) denote the gradient vector of E(x) at the point x. Throughout this paper, we assume that the minimizer set Ω* of problem (1) is nonempty.

In [4], a projection-type neural network is presented, which can be described by the following differential equation in terms of the state vector x:

τ dx/dt = −x + P_Ω(x − α∇E(x)),  (2)

where α, τ are positive constants, and the activation function P_Ω is defined by

P_Ω(x) = argmin_{v ∈ Ω} ‖x − v‖,

where ‖·‖ is a vector norm; that is, P_Ω(x) is the nearest-point projection of Rⁿ onto Ω. Let F(x) = −x + P_Ω(x − α∇E(x)), and denote the set of all equilibrium points of model (2) by Ωᵉ.

For the convex optimization problem (1), it is well known [5] that x* is an optimizer if and only if

(x − x*)ᵀ∇E(x*) ≥ 0, ∀x ∈ Ω.

By the theory of monotone variational inequalities, this can be stated equivalently as: x* is a fixed point of the projection operator P_Ω(x − α∇E(x)), that is, x* = P_Ω(x* − α∇E(x*)).
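As an illustration (not taken from the paper), model (2) can be integrated with a simple forward Euler scheme. The problem instance, parameter values α, τ, the step size, and all function names below are assumptions for this sketch; for a box feasible set, P_Ω reduces to componentwise clipping.

```python
import numpy as np

# Sketch: forward-Euler simulation of the projection network (2),
#     tau * dx/dt = -x + P_Omega(x - alpha * grad E(x)),
# for E(x) = 0.5 * ||x - c||^2 on the box Omega = {x : a <= x <= b}.
# The instance (c, a, b) and the values of alpha, tau, h are illustrative.

def grad_E(x, c):
    # gradient of E(x) = 0.5 * ||x - c||^2
    return x - c

def P_box(x, a, b):
    # nearest-point projection onto the box {x : a <= x <= b}
    return np.clip(x, a, b)

def simulate(x0, c, a, b, alpha=0.5, tau=1.0, h=0.01, steps=5000):
    x = x0.astype(float).copy()
    for _ in range(steps):
        F = -x + P_box(x - alpha * grad_E(x, c), a, b)  # right-hand side of (2)
        x = x + (h / tau) * F
    return x

c = np.array([2.0, -3.0, 0.25])    # unconstrained minimizer of E
a, b = np.zeros(3), np.ones(3)     # box constraints Omega = [0, 1]^3
x_star = simulate(np.array([0.5, 0.5, 0.5]), c, a, b)
# The trajectory settles at the constrained minimizer P_box(c, a, b),
# which is also a fixed point of x -> P_box(x - alpha * grad_E(x, c)).
```

The final state satisfies the fixed-point characterization x* = P_Ω(x* − α∇E(x*)) up to integration tolerance, consistent with the equivalence stated above.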
This means that the minimizer set of problem (1) coincides exactly with the equilibrium set of model (2).

Furthermore, because Ω is a closed convex set, the projection operator satisfies

(u − P_Ω(u))ᵀ(P_Ω(u) − v) ≥ 0, ∀u ∈ Rⁿ, v ∈ Ω,

‖P_Ω(x) − P_Ω(y)‖² ≤ (P_Ω(x) − P_Ω(y))ᵀ(x − y) ≤ ‖x − y‖².  (3)

Letting y = x − α∇E(x) in (3) and noting that P_Ω(x) = x for any x ∈ Ω, the middle term of (3) gives (x − P_Ω(y))ᵀ(α∇E(x)) ≥ 0, which immediately implies F(x)ᵀ∇E(x) ≤ 0. By the definition of a feasible direction, F(x) = −x + P_Ω(x − α∇E(x)) is therefore a feasible descent direction. It should also be pointed out that P_Ω(x) is Lipschitz continuous (in particular, with Lipschitz constant 1).

From a mathematical point of view, the characteristics of an optimization neural network depend on the optimization techniques employed. Prevalent and conventional methods are gradient methods, which construct an appropriate computational energy function for the optimization problem and then design a neural network model that performs the gradient descent process of that energy function.

Compared with [4], the strict convexity condition on the objective function is dropped, and the feasible region is extended to an arbitrary closed convex set.

Remark 1  We can see from the theorem that the trajectory of model (2) always converges to an optimal solution, whether Ω* is finite or not. Starting from different initial points, the trajectory may converge to different optimal solutions; simulation examples 1 and 2 illustrate this clearly.

Remark 2  Under certain simple constraints, the projection operator P_Ω can be described explicitly, which makes model (2) readily implementable in hardware. For instance:

Case 1  Ω = {x ∈ Rⁿ | a ≤ x ≤ b}. Here P_Ω(x) = (g₁(x₁), g₂(x₂), …, gₙ(xₙ))ᵀ, where

gᵢ(xᵢ) = aᵢ, if xᵢ < aᵢ;  xᵢ, if aᵢ ≤ xᵢ ≤ bᵢ;  bᵢ, if bᵢ < xᵢ.

Received 25 January 2005; revised 14 April 2005. This work was supported by the National Natural Science Foundation of China (No. 60473034).
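The projection inequalities (3) and the descent property F(x)ᵀ∇E(x) ≤ 0 can be checked numerically. The box instance, objective, sample counts, and tolerances below are illustrative assumptions for the sketch, using the Case 1 clipping form of P_Ω.

```python
import numpy as np

# Sketch (illustrative instance): random sampling check of the projection
# inequalities (3) and the descent property F(x)^T grad E(x) <= 0
# on the box Omega = [-1, 1]^4 with E(x) = 0.5 * ||x - c||^2.
rng = np.random.default_rng(0)
n = 4
a, b = -np.ones(n), np.ones(n)
c = np.array([0.3, 2.0, -1.7, 0.0])   # assumed problem data
alpha = 0.8                            # assumed positive constant

def P(z):
    # Case 1: projection onto a box is componentwise clipping
    return np.clip(z, a, b)

def grad_E(x):
    return x - c

ok = True
for _ in range(1000):
    u = 3 * rng.normal(size=n)
    v = rng.uniform(a, b)                              # arbitrary point of Omega
    x, y = 3 * rng.normal(size=n), 3 * rng.normal(size=n)
    ok &= (u - P(u)) @ (P(u) - v) >= -1e-9             # obtuse-angle property
    d = P(x) - P(y)
    ok &= d @ d <= d @ (x - y) + 1e-9                  # firm nonexpansiveness
    ok &= d @ (x - y) <= (x - y) @ (x - y) + 1e-9      # Lipschitz constant 1
    xf = rng.uniform(a, b)                             # feasible point xf in Omega
    F = -xf + P(xf - alpha * grad_E(xf))
    ok &= F @ grad_E(xf) <= 1e-9                       # F is a descent direction
```

All sampled inequalities hold, matching the derivation above from (3) with y = x − α∇E(x).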