
# What does "minimum problem" mean in Chinese?

• 极小问题 (minimum problem)
• 极小值问题 (minimum-value problem)

### Example sentences and usage

1. The statistical learning theory and the support vector machine have been introduced, and the problems of model selection, over-learning, nonlinearity, the curse of dimensionality, and local minima in machine learning methods have been researched.
介绍了统计学习理论和支持向量机方法，研究分析了机器学习方法中存在的模型选择、过学习、非线性、维数灾难和局部极小点等问题。
2. In view of some drawbacks of current software compensation, including low accuracy and troublesome expressions, a new genetic neural network is first put forward to conduct the non-rigid modeling; the current difficult problems, including the local minimum problem and the selection of the number of neural network hidden units, are also addressed.
针对现有误差补偿过程中精度较低等问题，提出了一种改进的遗传神经网络算法建立三坐标测量机非刚性误差模型，这种算法克服了现有bp算法神经网络建模存在的局部极小点、网络隐单元数选择等问题。
3. We propose an improved simulated annealing algorithm with a neighbor function based on self-optimization of the scale parameter. Furthermore, by incorporating a disaster operator and the improved annealing into the genetic algorithm, an improved genetic-annealing algorithm is proposed; to solve the deceptive minimum problem, an improved evolutionary strategy combining similarity detection with an improved mutation operator is put forward.
提出了邻域尺度函数自寻优的模拟褪火算法，结合遗传算法，引入灾变算子，提出了改进遗传模拟褪火算法；为了解决寻优过程中的最小欺骗问题，我们提出了相似性检测，结合改进的适应值无关变异算子，提出了基于相似性检测和适应值无关变异算子的进化策略算法。
4. 2. On the basis of a detailed analysis of Fourier neural networks, we find that these networks have the property of transforming a nonlinear mapping into a linear one. Accordingly, we improve the original learning algorithm based on nonlinear optimization and propose a novel learning algorithm based on linear optimization (this dissertation adopts the least squares method). The novel learning algorithm greatly improves the convergence speed and avoids the local minima problem. Because the least squares method is adopted, the algorithm also suppresses noise well when the training output samples are affected by white noise.
在详细分析已有的傅立叶神经网络的基础上，发现傅立叶神经网络具有将非线性映射转化成线性映射的特点，基于这个特点，对该神经网络原有的基于非线性优化的学习算法进行了改进，提出了基于线性优化方法(本文采用最小二乘法)的学习算法，大大提高了神经网络的收敛速度并避免了局部极小问题；由于采用了最小二乘方法，当用来训练傅立叶神经网络的训练输出样本受白噪声影响时，本学习算法具有良好的降低噪声影响的功能。
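The "local minimum problem" (局部极小问题) recurring in these example sentences can be illustrated with a short sketch (illustrative only, not taken from any of the cited works): plain gradient descent on a non-convex function converges to whichever minimum lies downhill from its starting point, so a bad initialization traps it in a local rather than global minimum.

```python
def f(x):
    # A non-convex function with two minima:
    # global minimum near x ≈ -1.30, local minimum near x ≈ 1.13
    return x**4 - 3 * x**2 + x

def grad(x):
    # Analytic derivative of f
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x, lr=0.01, steps=2000):
    # Vanilla gradient descent from a given starting point
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left = gradient_descent(-2.0)   # starts left of the barrier
right = gradient_descent(2.0)   # starts right of the barrier

# left ≈ -1.30 (global minimum), right ≈ 1.13 (local minimum):
# the same algorithm reaches different answers from different starts.
print(left, right)
```

This dependence on initialization is exactly what the methods in the examples above (genetic algorithms, simulated annealing, linear least squares reformulations) are trying to overcome.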