To speed up a subsequent training algorithm (in some cases it may even remove noise and redundant features, making the training algorithm perform better)
To visualize the data and gain insights on the most important features
Simply to save space (compression)
The main drawbacks
Some information is lost, possibly degrading the performance of subsequent training algorithms
It can be computationally intensive
It adds some complexity to your Machine Learning pipelines
Transformed features are often hard to interpret
2. Main Approaches for Dimensionality Reduction
Projection: in most real-world problems many features are almost constant or highly correlated, so the training instances lie close to a much lower-dimensional subspace; projection maps the high-dimensional data onto that lower-dimensional subspace (both approaches are sketched in code after this list)
Manifold Learning: a d-dimensional manifold is a d-dimensional shape that is bent or rolled up inside an n-dimensional space and can be unrolled back to d dimensions; the underlying assumption is that the high-dimensional data was generated by transforming lower-dimensional data. Note that if you reduce the dimensionality of your training set before training a model, it will definitely speed up training, but it may not always lead to a better or simpler solution; it all depends on the dataset
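To make the contrast concrete, here is a minimal sketch (not from the notes above) that builds a Swiss-roll dataset and reduces it to 2D both ways, assuming scikit-learn's make_swiss_roll and LocallyLinearEmbedding are available:

import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, t = make_swiss_roll(n_samples=1000, noise=0.2, random_state=42)  # 3D points on a rolled-up 2D sheet

# Projection: simply drop one axis; the roll's layers get squashed on top of each other
X_proj = X[:, :2]

# Manifold Learning: LLE tries to unroll the sheet back to its underlying 2 dimensions
lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10)
X_unrolled = lle.fit_transform(X)

print(X_proj.shape, X_unrolled.shape)  # both (1000, 2), but only the second preserves the roll's geometry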
3. PCA
Main idea: first PCA identifies the hyperplane that lies closest to the data, i.e. the one that preserves the maximum amount of variance, then it projects the data onto it
Principal Components: the unit vector that defines the ith axis is called the ith principal component (PC). The PCs are the unit axis vectors of the projection subspace, and an n-dimensional dataset has n of them. The direction (sign) of each PC is not important; what matters is the axis it defines and the plane the PCs span
import numpy as np

# Singular Value Decomposition (SVD) yields the principal components of the training set
# X is the training set as an (m, n) NumPy array
X_centered = X - X.mean(axis=0)        # PCA assumes the data is centered on the origin
U, s, Vt = np.linalg.svd(X_centered)   # np.linalg.svd returns V already transposed (Vt)
c1 = Vt.T[:, 0]                        # first principal component
c2 = Vt.T[:, 1]                        # second principal component
W2 = Vt.T[:, :2]                       # matrix whose columns are the first two PCs
X2D = X_centered.dot(W2)               # project the data onto the 2D plane
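In practice you rarely run the SVD yourself; a sketch of the equivalent projection with scikit-learn's PCA class (which centers the data for you) might look like this:

from sklearn.decomposition import PCA

pca = PCA(n_components=2)             # keep the first two principal components
X2D_sk = pca.fit_transform(X)         # centers X automatically, then projects it
print(pca.components_)                # rows are the PCs (same axes as Vt[:2], possibly with flipped signs)
print(pca.explained_variance_ratio_)  # proportion of the dataset's variance along each PC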