In Chapter 4 we obtain the Helly number for hyperplane transversals to translates of a unit cube in R^d, where we prove that the Helly number for such families is 5 when d = 2 and is greater than or equal to d + 3 when d >= 3.
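For orientation, a hedged sketch of the underlying notion, in common notation that may differ from the chapter's own formal statement: the Helly number for hyperplane transversals of a family F is the smallest h such that, whenever every h members of a subfamily admit a common hyperplane transversal, the whole subfamily does.

    h(\mathcal{F}) \;=\; \min\bigl\{\, h \;:\; \forall\, \mathcal{G} \subseteq \mathcal{F},\;
      (\text{every } h \text{ members of } \mathcal{G} \text{ have a hyperplane transversal})
      \;\Rightarrow\; (\mathcal{G} \text{ has a hyperplane transversal}) \,\bigr\}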
If these points can be cut by a hyperplane, in other words by the n-dimensional analogue of the line in the example above, then there is a set of weights and a threshold that define a TLU whose classifications match this cut.
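To make this concrete, a minimal sketch of such a TLU; the weights, threshold, and sample points here are purely illustrative assumptions, not taken from the text:

    import numpy as np

    def tlu(x, weights, threshold):
        """Threshold logic unit: outputs 1 when the weighted sum of the
        inputs reaches the threshold, i.e. when x lies on the positive
        side of the hyperplane w . x - threshold = 0."""
        return int(np.dot(weights, x) >= threshold)

    # Illustrative 2-D case: the separating hyperplane is the line x1 + x2 = 1.
    weights = np.array([1.0, 1.0])   # assumed weights
    threshold = 1.0                  # assumed threshold
    print(tlu(np.array([0.9, 0.8]), weights, threshold))  # 1: above the line
    print(tlu(np.array([0.1, 0.2]), weights, threshold))  # 0: below the line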
Unlike traditional incremental learning methods, the idea proposed here is that newly added data lying near the separating hyperplane are significant for forming the new hyperplane, regardless of whether the former classifier assigns these data to the test-error set Berr or to the test-correct set Bok.
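A rough sketch of how such near-hyperplane points could be picked out from newly added data with an already trained classifier; scikit-learn names and the margin band are assumptions, and the thesis's own Berr/Bok bookkeeping is not reproduced:

    import numpy as np
    from sklearn.svm import SVC

    def near_hyperplane(clf: SVC, X_new, band=1.0):
        """Return the newly added samples whose unsigned decision value
        falls inside the margin band, i.e. the points closest to the
        current separating hyperplane."""
        scores = np.abs(clf.decision_function(X_new))
        return X_new[scores <= band]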
The basic idea of SVM is to map the input data into a high-dimensional feature space via a nonlinear transformation and to construct an optimal separating hyperplane in that space. This gives SVM many advantages for small-sample, nonlinear, and high-dimensional pattern recognition, and it extends to other machine learning problems such as function fitting.
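A minimal sketch of this idea using scikit-learn; the RBF kernel and the toy data set are assumptions chosen only to illustrate the nonlinear mapping:

    from sklearn.datasets import make_circles
    from sklearn.svm import SVC

    # Data that is not linearly separable in the input space.
    X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

    # The RBF kernel implicitly maps the inputs into a high-dimensional
    # feature space, where a linear separating hyperplane is constructed.
    clf = SVC(kernel="rbf", C=1.0).fit(X, y)
    print(clf.score(X, y))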
The separating hyperplane with maximal margin is the optimal separating hyperplane, which has good generalization ability. Finding the optimal separating hyperplane leads to a quadratic programming problem, a special kind of optimization problem. After optimization, each vector (sample) is assigned a weight; a vector whose weight is nonzero is called a support vector, and the separating hyperplane is constructed from the support vectors.
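For reference, the standard hard-margin form of this quadratic program, in common textbook notation rather than the text's own: its dual assigns each sample x_i a weight \alpha_i, the hyperplane direction is w = \sum_i \alpha_i y_i x_i, and only samples with \alpha_i \neq 0 (the support vectors) enter the solution.

    \min_{w,\,b}\; \tfrac{1}{2}\lVert w\rVert^{2}
    \quad\text{s.t.}\quad
    y_i\,(w\cdot x_i + b) \ge 1,\qquad i = 1,\dots,n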
Abstract: based on sliding mode control (SMC) theory, this paper reduces the inputs of a fuzzy control system to a hyperplane of the generalized state error; a method of varying the range of the nonlinear fuzzy sets by means of a parameter is proposed, and several internal properties of the fuzzy controller are analyzed, showing that it outperforms a PID controller in stability, steady-state error, and other respects.
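A common way to realize such a hyperplane of the generalized error, given only as an assumed illustration since the paper's exact parametrization is not reproduced here:

    s(e, \dot e) = \dot e + \lambda e, \qquad \lambda > 0

The fuzzy controller then takes the single scalar s as its input rather than the pair (e, \dot e), which is what reduces the number of input fuzzy sets.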
The advantages of the multistage support vector machine lie in three aspects. First, it handles the unclassifiable regions left by other multiclass support vector machines more correctly. Second, the experimental comparison in the dissertation shows its high test accuracy. Finally, for a multiclass problem it needs fewer support vectors to construct the multistage hyperplanes than the other three methods, so it has better generalization ability.
A geometric transversal is an affine subspace (such as a point, a line, a plane, or a hyperplane) that intersects every member of a given family; we also say the subspace is transversal to the family (a point transversal, line transversal, plane transversal, and so on). In Part I we discuss three kinds of such problems. In Chapter 2 we discuss point transversals to a family of translates of a convex set in the plane, where we prove a well-known conjecture of Grünbaum's for some special cases by a concrete and straightforward method.
The support vector machine is a new general-purpose learning machine based on statistical learning theory. To solve a complicated classification task, it uses a kernel function to map the vectors from the input space into a high-dimensional feature space in which a linear separating hyperplane is constructed. The margin is the distance between this hyperplane and the parallel hyperplane through the closest points.
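In the usual normalization where the closest points satisfy y_i (w \cdot x_i + b) = 1, this distance takes the standard form below (a textbook fact, not specific to this text):

    d\bigl(\{w\cdot x + b = 0\},\ \{w\cdot x + b = 1\}\bigr) = \frac{1}{\lVert w\rVert},
    \qquad \text{full margin width} = \frac{2}{\lVert w\rVert}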
The separating hyperplane is constructed from the support vectors. Real-world data sets are usually large, which demands high optimization efficiency. Decomposition is the first practical method for handling large data sets: it splits the training set into two parts, an active part of fixed size (the working set) and an inactive part, and each iteration solves only the sub-optimization problem restricted to the working set.
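A highly simplified sketch of that decomposition loop; the function names, the selection rule, and the stopping test are assumptions for illustration, whereas real solvers (e.g. SMO-style implementations) use careful working-set selection and KKT checks:

    import numpy as np

    def decomposition_loop(solve_subproblem, select_working_set,
                           kkt_satisfied, n_samples, q=100, max_iter=1000):
        """Generic decomposition: the dual variables alpha are optimized
        q at a time. Each iteration fixes the inactive variables and
        solves the sub-QP restricted to the chosen working set."""
        alpha = np.zeros(n_samples)
        for _ in range(max_iter):
            working = select_working_set(alpha, q)        # active part, size q
            alpha[working] = solve_subproblem(alpha, working)
            if kkt_satisfied(alpha):                      # stop at optimality
                break
        return alpha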