Stanford / CS131 / Homework 6-4: Dimensionality Reduction and Eigenface Visualization


This post continues from the previous one, Stanford University / CS131 / Homework 6-3: Principal Component Analysis (eigendecomposition and singular value decomposition). This part of the homework covers Dimensionality Reduction with PCA (Principal Component Analysis) and Visualizing Eigenfaces.


Dimensionality Reduction

The top $k$ principal components explain most of the variance of the underlying data.

By projecting our initial data (the images) onto the subspace spanned by the top $k$ principal components, we can reduce the dimension of our inputs while keeping most of the information.
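Concretely, if $\mu$ is the empirical mean of the training images and $W$ is the $D \times D$ matrix whose columns are the principal components (sorted by decreasing singular value), the projection implemented in `transform` below is $X_{\text{proj}} = (X - \mu)\,W_{:,1:k}$, which maps each $D$-dimensional image to a $k$-dimensional vector of coefficients. The `reconstruct` method applies the approximate inverse, $\hat{X} = X_{\text{proj}}\,W_{:,1:k}^T + \mu$.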

In the example below, we can see that using the first two components in PCA is not enough to allow us to see patterns in the data. All the classes seem placed at random in the 2D plane.

import numpy as np
from skimage import io
from utils import load_dataset
import scipy
import scipy.linalg
from collections import defaultdict
import matplotlib.pyplot as plt
from matplotlib import rc

%matplotlib inline
plt.rcParams['figure.figsize'] = (15.0, 12.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
X_train, y_train, classes_train = load_dataset('faces', train=True, as_gray=True)
X_test, y_test, classes_test = load_dataset('faces', train=False, as_gray=True)
assert classes_train == classes_test
classes = classes_train
num_classes = len(classes)
# Flatten the image data into rows
# we now have one 4096 dimensional feature vector for each example
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
class PCA(object):
    """Class implementing Principal component analysis (PCA).
    Steps to perform PCA on a matrix of features X:
        1. Fit the training data using method `fit` (either with eigendecomposition or SVD)
        2. Project X into a lower dimensional space using method `transform`
        3. Optionally reconstruct the original X (as best as possible) using method `reconstruct`
    """
    def __init__(self):
        self.W_pca = None
        self.mean = None

    def fit(self, X, method='svd'):
        """Fit the training data X using the chosen method.
        Will store the projection matrix in self.W_pca and the mean of the data in self.mean
        Args:
            X: numpy array of shape (N, D). Each of the N rows represent a data point.
               Each data point contains D features.
            method: Method to solve PCA. Must be one of 'svd' or 'eigen'.
        """
        _, D = X.shape
        self.mean = None   # empirical mean, has shape (D,)
        X_centered = None  # zero-centered data
        # YOUR CODE HERE
        # 1. Compute the mean and store it in self.mean
        # 2. Apply either method to `X_centered`
        self.mean = np.mean(X, axis=0)
        X_centered = X - self.mean
        if method == 'svd':
            self.W_pca = self._svd(X_centered)[0]
        else:
            self.W_pca = self._eigen_decomp(X_centered)[0]
        # END YOUR CODE
        # Make sure that X_centered has mean zero
        assert np.allclose(X_centered.mean(), 0.0)
        # Make sure that self.mean is set and has the right shape
        assert self.mean is not None and self.mean.shape == (D,)
        # Make sure that self.W_pca is set and has the right shape
        assert self.W_pca is not None and self.W_pca.shape == (D, D)
        # Each column of `self.W_pca` should have norm 1 (each one is an eigenvector)
        for i in range(D):
            assert np.allclose(np.linalg.norm(self.W_pca[:, i]), 1.0)

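    # NOTE: `fit` can also use `self._eigen_decomp`, which was implemented in the
    # previous post (homework 6-3). A minimal sketch with the same interface is
    # reproduced here for completeness (my reconstruction, not the official code).
    def _eigen_decomp(self, X):
        """Eigendecomposition of the covariance matrix of X.
        Args:
            X: Zero-centered data array of shape (N, D), one data point per row.
        Returns:
            e_vecs: eigenvectors of the covariance matrix, shape (D, D), one per
                column, sorted by decreasing eigenvalue.
            e_vals: corresponding eigenvalues, shape (D,).
        """
        N, D = X.shape
        # Covariance matrix of the zero-centered data
        covariance_matrix = np.dot(X.T, X) / N
        # eigh handles the symmetric case and returns eigenvalues in ascending order
        e_vals, e_vecs = np.linalg.eigh(covariance_matrix)
        # Reorder so that the largest eigenvalues come first
        order = np.argsort(e_vals)[::-1]
        return e_vecs[:, order], e_vals[order]
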
    def _svd(self, X):
        """Performs Singular Value Decomposition (SVD) of X.
        Args:
            X: Zero-centered data array, each ROW containing a data point.
                Numpy array of shape (N, D).
        Returns:
            vecs: right singular vectors. Numpy array of shape (D, D)
            vals: singular values. Numpy array of shape (K,) where K = min(N, D)
        """
        vecs = None  # shape (D, D)
        N, D = X.shape
        vals = None  # shape (K,)
        # YOUR CODE HERE
        # Here, compute the SVD of X
        # Make sure to return vecs as the matrix of vectors where each column is a singular vector
        _, vals, vecs = np.linalg.svd(X)
        # END YOUR CODE
        assert vecs.shape == (D, D)
        K = min(N, D)
        assert vals.shape == (K,)
        return vecs.T, vals     
            
    def transform(self, X, n_components):
        """Center and project X onto a lower dimensional space using self.W_pca.
        Args:
            X: numpy array of shape (N, D). Each row is an example with D features.
            n_components: number of principal components to keep.
        Returns:
            X_proj: numpy array of shape (N, n_components).
        """
        N, _ = X.shape
        X_proj = None
        # YOUR CODE HERE
        # We need to modify X in two steps:
        #     1. first subtract the mean stored during `fit`
        #     2. then project onto a subspace of dimension `n_components` using `self.W_pca`
        X_proj = np.dot(X - self.mean, self.W_pca[:, :n_components])
        # END YOUR CODE
        assert X_proj.shape == (N, n_components), "X_proj doesn't have the right shape"
        return X_proj
    
    def reconstruct(self, X_proj):
        """Do the exact opposite of method `transform`: try to reconstruct the original features.
        Given the X_proj of shape (N, n_components) obtained from the output of `transform`,
        we try to reconstruct the original X.
        Args:
            X_proj: numpy array of shape (N, n_components). Each row is an example projected onto n_components dimensions.
        Returns:
            X: numpy array of shape (N, D).
        """
        N, n_components = X_proj.shape
        X = None
        # YOUR CODE HERE
        # Steps:
        #     1. project back onto the original space of dimension D
        #     2. add the mean that we subtracted in `transform`
        X = np.dot(X_proj, self.W_pca[:, :n_components].T) + self.mean
        # END YOUR CODE
        return X
pca = PCA()
# Dimensionality reduction by projecting the data onto
# lower dimensional subspace spanned by k principal components

# To visualize, we will project in 2 dimensions
n_components = 2
pca.fit(X_train)
X_proj = pca.transform(X_train, n_components)

# Plot the top two principal components
for y in np.unique(y_train):
    plt.scatter(X_proj[y_train==y,0], X_proj[y_train==y,1], label=classes[y])

plt.rcParams["font.size"] = "12"
plt.xlabel('1st component')
plt.ylabel('2nd component')
plt.legend()
plt.show()
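One way to quantify why the 2D scatter plot above looks so unstructured is to compute how much of the total variance the first few singular directions actually capture. This is a small sketch of my own, not part of the assignment:

# Sketch (not part of the assignment): fraction of the total variance captured
# by the top k principal directions, estimated from the singular values.
s_all = np.linalg.svd(X_train - X_train.mean(axis=0), compute_uv=False)
explained = np.cumsum(s_all ** 2) / np.sum(s_all ** 2)
print("Variance explained by the top 2 components:   %.3f" % explained[1])
print("Variance explained by the top 100 components: %.3f" % explained[99])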

Visualizing Eigenfaces

The columns of the PCA projection matrix pca.W_pca represent the eigenvectors of $X^T X$.

We can visualize the biggest singular values as well as the corresponding vectors to get a sense of what the PCA algorithm is extracting.

If we visualize the top 10 eigenfaces, we can see that the algorithm focuses on the different shades of the faces. For instance, in face n°2 the light seems to come from the left.

# Perform SVD directly on the (centered) training data.
u, s = pca._svd(X_train - X_train.mean(axis=0))

print(s.shape)
print(u.shape)
(800,)
(4096, 4096)
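As a quick sanity check (my addition, not part of the assignment), we can verify the statement above numerically: if $X_c$ is the centered data, each column $v_i$ of pca.W_pca should satisfy $X_c^T X_c \, v_i = \sigma_i^2 \, v_i$, where $\sigma_i$ is the $i$-th singular value.

# Sanity check (not in the assignment): the i-th column of W_pca should be an
# eigenvector of X_c^T X_c with eigenvalue equal to the squared singular value.
X_centered = X_train - X_train.mean(axis=0)
scatter = np.dot(X_centered.T, X_centered)   # X_c^T X_c, shape (4096, 4096)
v0 = pca.W_pca[:, 0]                         # top principal component
print(np.allclose(np.dot(scatter, v0), (s[0] ** 2) * v0))  # should print True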
for i in range(10):
    plt.subplot(1, 10, i+1)
    plt.imshow(pca.W_pca[:, i].reshape(64, 64))
    plt.title("%.2f" % s[i])
plt.show()
# Reconstruct data with principal components
n_components = 100  # Experiment with different number of components.
X_proj = pca.transform(X_train, n_components)
X_rec = pca.reconstruct(X_proj)

print(X_rec.shape)
print(classes)

# Visualize reconstructed faces
samples_per_class = 10
for y, cls in enumerate(classes):
    idxs = np.flatnonzero(y_train == y)
    idxs = np.random.choice(idxs, samples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt_idx = i * num_classes + y + 1
        plt.subplot(samples_per_class, num_classes, plt_idx)
        plt.imshow((X_rec[idx]).reshape((64, 64)))
        plt.axis('off')
        if i == 0:
            plt.title(y)
plt.show()
(800, 4096)
['angelina jolie', 'anne hathaway', 'barack obama', 'brad pitt', 'cristiano ronaldo', 'emma watson', 'george clooney', 'hillary clinton', 'jennifer aniston', 'johnny depp', 'justin timberlake', 'leonardo dicaprio', 'natalie portman', 'nicole kidman', 'scarlett johansson', 'tom cruise']
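To make "experiment with different number of components" a little more concrete, here is a small sketch (my addition, not part of the assignment) that measures the mean squared reconstruction error on the training set for a few values of n_components:

# Sketch (not in the assignment): mean squared reconstruction error on the
# training set for several choices of n_components.
for k in [10, 50, 100, 200]:
    X_rec_k = pca.reconstruct(pca.transform(X_train, k))
    mse = np.mean((X_train - X_rec_k) ** 2)
    print("n_components = %3d  ->  train MSE = %.6f" % (k, mse))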

Written Question 1

Question: Consider a dataset of $N$ face images, each with shape $(h, w)$. Then, we need $O(Nhw)$ of memory to store the data. Suppose we perform dimensionality reduction on the dataset with $p$ principal components, and use the components as bases to represent images. Calculate how much memory we need to store the images and the matrix used to get back to the original space.
Said in another way, how much memory does storing the compressed images and the uncompresser cost?

Answer:
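Here is my own answer (not the official solution), reasoning from the shapes involved. Each of the $N$ images is now represented by a vector of $p$ coefficients, so the compressed images take $O(Np)$ memory. To get back to the original space we also need the "uncompresser": the $p$ principal components, each of dimension $hw$ (plus the mean image), which takes $O(phw)$ memory. In total this is $O(Np + phw)$, which is smaller than the original $O(Nhw)$ as long as $p$ is small compared to both $N$ and $hw$.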

Reference: https://github.com/
