
极小问题 (minimization problem)

"极小问题"的翻译和解释

Example sentences and usage

  • The perturbation method is then applied to an equivalent nonlinear programming formulation of the finite-dimensional min-max problem; the resulting smooth functions are exactly the aggregate functions obtained from the maximum entropy principle and the minimum cross-entropy principle, respectively (see the aggregate-function sketch after this list).
  • Finally, a model conversion is given for a class of max-min problems: a max-min problem with equality and inequality constraints is converted into a convex programming problem with linear constraints, which provides a theoretical basis for designing more effective algorithms (a standard reformulation of this kind is sketched after this list).
  • In the study of neural-network classifiers: 1) this paper presents an algorithm for choosing the initial weights of a BP neural-network classifier; the algorithm not only speeds up the convergence of the BP network and reduces the classification error, but also keeps training from converging to a local minimum.
  • The maximum entropy method is an effective smoothing method for the finite min-max problem: by adding Shannon's information entropy as a regularizing term to the Lagrangian function of the min-max problem, it yields a smooth function that uniformly approximates the non-smooth max-value function (a numerical check of this uniform approximation follows the list).
  • Chapter 1 is an introduction that briefly surveys the state of research on entropy regularization and penalty function methods. Chapter 2 reveals the mathematical essence of the entropy regularization method for the finite min-max problem by exploring its relationship with the exponential (multiplier) penalty function method. Chapter 3 extends the maximum entropy method to general inequality-constrained optimization problems and establishes the Lagrangian regularization approach. Chapter 4 uses this approach to give a unified framework for constructing penalty functions, illustrated with specific penalty and barrier function examples.
  • Chapter 3 is devoted to the convergence theory of a dual algorithm for unconstrained minimax problems. A dual algorithm based on the penalty function of Bertsekas (1982) is presented, and we prove that the penalty parameter has a threshold below which the sequences generated by the dual algorithm converge locally to a Kuhn-Tucker point of the unconstrained minimax problem. Error estimates for the parametric solutions are also established, and the condition number of the Hessian of the penalty function, which likewise depends on the penalty parameter, is estimated.
  • 2. A detailed analysis of Fourier neural networks shows that they can transform a nonlinear mapping into a linear one. Exploiting this property, we improve the original learning algorithm, which was based on nonlinear optimization, and propose a new learning algorithm based on linear optimization (this dissertation adopts the least squares method). The new algorithm greatly speeds up convergence and avoids the local-minimum problem; moreover, because it uses least squares, it suppresses noise well when the training output samples are corrupted by white noise (a minimal least-squares sketch follows the list).
  • Comparative experiments on the improved BP models show that the modified BP algorithm shortens training time and raises learning efficiency; it not only meets the error goal but also improves the network's generalization ability, avoiding the local-minimum problem during learning to some degree and achieving global optimization.
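For reference, the aggregate function that the first and fourth examples mention has a standard closed form: the log-sum-exp smoothing of the pointwise maximum. Writing $f_1,\dots,f_m$ for the component functions and $p>0$ for the smoothing (entropy) parameter (this notation is assumed here, not taken from the cited papers), it approximates the max uniformly with error at most $(\ln m)/p$:

$$
F_p(x) = \frac{1}{p}\ln\sum_{i=1}^{m} e^{p f_i(x)},
\qquad
\max_{1\le i\le m} f_i(x) \;\le\; F_p(x) \;\le\; \max_{1\le i\le m} f_i(x) + \frac{\ln m}{p}.
$$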
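The second example does not spell out its model conversion, so the standard epigraph reformulation below is only an illustration of the general idea: trading a min-max objective for a linear objective plus extra constraints. When every $f_i$ is convex, the reformulated problem is a convex program:

$$
\min_{x}\,\max_{1\le i\le m} f_i(x)
\quad\Longleftrightarrow\quad
\min_{x,\,t}\; t
\quad\text{s.t.}\quad f_i(x) - t \le 0,\quad i = 1,\dots,m.
$$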
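A minimal numerical check of the uniform approximation claimed in the fourth example; the two component functions are assumptions chosen only for illustration:

```python
import numpy as np

def aggregate(fs, p):
    """Log-sum-exp (aggregate) smoothing of max(f_1, ..., f_m).

    Subtracting the max before exponentiating avoids overflow.
    """
    m = np.max(fs, axis=0)
    return m + np.log(np.sum(np.exp(p * (fs - m)), axis=0)) / p

# Two assumed component functions evaluated on a grid.
x = np.linspace(-2.0, 2.0, 401)
fs = np.vstack([x**2, 1.0 - x])
true_max = np.max(fs, axis=0)

# The error shrinks uniformly like (ln m)/p as p grows.
for p in (1.0, 10.0, 100.0):
    err = np.max(aggregate(fs, p) - true_max)
    print(f"p = {p:6.1f}   max error = {err:.2e}   bound = {np.log(2.0) / p:.2e}")
```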
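The seventh example's key point, that a model which is linear in its Fourier coefficients can be trained by a single linear least-squares solve instead of iterative backpropagation, can be sketched as follows. The basis size, frequencies, and noisy target below are assumptions for illustration, not the dissertation's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed training data: a 1-D target corrupted by white noise.
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + 0.5 * np.cos(3.0 * x) + 0.1 * rng.standard_normal(x.size)

# Design matrix of Fourier features: the model
#   c0 + sum_k (a_k cos(kx) + b_k sin(kx))
# is linear in its coefficients.
K = 5
Phi = np.column_stack(
    [np.ones_like(x)]
    + [np.cos(k * x) for k in range(1, K + 1)]
    + [np.sin(k * x) for k in range(1, K + 1)]
)

# One least-squares solve replaces gradient descent; the problem is
# convex, so there is a global optimum and no local-minimum issue.
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("fit RMSE:", np.sqrt(np.mean((Phi @ coef - y) ** 2)))
```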