
Monday, January 20, 2014

Python: Face Image Completion With Multi-Output Estimators


Hello Readers,

Today we will take a break from R and use Python (I use 2.7.5) to complete images of faces using training images. I will demonstrate several estimators for pixel prediction and compare the completed faces they produce. As a teaser, the result is shown to the right:



To begin, open a text editor (such as Notepad++), as the code is too long to type comfortably into the IDLE interpreter. We can run the .py script from the command line once the code is complete.

Let us start coding.



The Data




We obtain the data through scikit-learn's dataset fetchers, available here. The Olivetti data itself contains a set of face images taken between 1992 and 1994 at AT&T Laboratories Cambridge. There are 10 different images of each of 40 distinct subjects, so the target is the identity of the individual in the image, ranging from 0 to 39.
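As a quick check that the fetch works, here is a minimal sketch (scikit-learn downloads the data on first use):

from sklearn.datasets import fetch_olivetti_faces

faces = fetch_olivetti_faces()
print(faces.images.shape)   # (400, 64, 64): 10 images each of 40 subjects
print(faces.target[:12])    # subject identity labels, 0 through 39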


Here we seek to train the estimators to predict the lower half of the images given the upper half. Let us see how well they reconstruct the lower half of people's faces with:

  • Decision trees
  • K-nearest-neighbors
  • Linear regression
  • Ridge regression



The Code



Importing Data and Modules

We require both numpy and matplotlib.pyplot. From sklearn we import the dataset and the various estimator classes we need. As you can see, we will be predicting using decision trees, k-nearest-neighbors, linear regression, and ridge regression, where a Tikhonov regularization term is added to the usual ordinary least squares minimization.
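A minimal sketch of the import block, following the scikit-learn face-completion example this post is based on (that example uses ExtraTreesRegressor, an ensemble of randomized decision trees, as the tree-based estimator, and RidgeCV for ridge regression):

import numpy as np
import matplotlib.pyplot as plt

from sklearn.datasets import fetch_olivetti_faces
from sklearn.utils.validation import check_random_state
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression, RidgeCV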


Training and Test Data

Since we need to separate the training and test data, we can use the targets variable (0 to 39) to divide them.
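A sketch of the split, following the scikit-learn example: images of the first 30 subjects become the training set, and the 10 remaining subjects are held out as unseen people for testing.

# fetch the Olivetti faces and flatten each 64x64 image into a 4096-pixel row
data = fetch_olivetti_faces()
targets = data.target
data = data.images.reshape((len(data.images), -1))

train = data[targets < 30]    # subjects 0-29: training faces
test = data[targets >= 30]    # subjects 30-39: test faces (unseen people)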


Subsetting Data

Then we need to carve out the subset we are going to test on after we train the estimators. We will sample 5 faces from the test set as test cases. Using the np.ceil and np.floor functions we designate the upper and lower portions of each image.
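A sketch of the subsetting, continuing the script above: we draw 5 random faces from the test set, then slice every flattened image at its midpoint so the upper half becomes the features (X) and the lower half the targets (y). The seed of 4 matches the scikit-learn example and is otherwise arbitrary.

n_faces = 5
rng = check_random_state(4)
face_ids = rng.randint(test.shape[0], size=(n_faces,))
test = test[face_ids, :]                 # keep only the 5 sampled test faces

n_pixels = data.shape[1]                 # 4096 pixels per face
X_train = train[:, :int(np.ceil(0.5 * n_pixels))]    # upper halves
y_train = train[:, int(np.floor(0.5 * n_pixels)):]   # lower halves
X_test = test[:, :int(np.ceil(0.5 * n_pixels))]
y_test = test[:, int(np.floor(0.5 * n_pixels)):]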


Looping the Estimators

Next we have to fit the four estimators, first specifying them in the ESTIMATORS object. In the for loop we cycle through each estimator in the object, train it on the data we created, and store a prediction under the corresponding estimator name in y_test_predict. Then we plot the image matrix.
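A sketch of the estimator loop; the hyperparameters follow the scikit-learn example, and because y_train has 2,048 columns, each fit is a multi-output regression predicting every lower-half pixel at once.

ESTIMATORS = {
    "Extra trees": ExtraTreesRegressor(n_estimators=10, max_features=32,
                                       random_state=0),
    "K-nn": KNeighborsRegressor(),
    "Linear regression": LinearRegression(),
    "Ridge": RidgeCV(),
}

y_test_predict = dict()
for name, estimator in ESTIMATORS.items():
    estimator.fit(X_train, y_train)                   # learn upper -> lower mapping
    y_test_predict[name] = estimator.predict(X_test)  # predicted lower halves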



The Plotting





We have to specify the dimensions of the images as 64 pixels square, and the number of columns as 1+4=5 (the true faces plus the four estimators). After scaling the figure and adding a title, we begin the for loop, which plots the faces one row at a time (i, ranging over the number of faces) and from left to right (j, ranging over the estimators in ESTIMATORS).
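A sketch of the plotting loop, again following the scikit-learn example: the first column shows each true face, and the remaining columns stitch the true upper half to each estimator's predicted lower half.

image_shape = (64, 64)                   # each face is 64 pixels square
n_cols = 1 + len(ESTIMATORS)             # true faces + one column per estimator

plt.figure(figsize=(2.0 * n_cols, 2.26 * n_faces))
plt.suptitle("Face completion with multi-output estimators", size=16)

for i in range(n_faces):
    # left column: the actual face, upper and lower halves rejoined
    true_face = np.hstack((X_test[i], y_test[i]))
    if i:
        sub = plt.subplot(n_faces, n_cols, i * n_cols + 1)
    else:
        sub = plt.subplot(n_faces, n_cols, i * n_cols + 1, title="true faces")
    sub.axis("off")
    sub.imshow(true_face.reshape(image_shape),
               cmap=plt.cm.gray, interpolation="nearest")

    # remaining columns: true upper half + each predicted lower half
    for j, est in enumerate(sorted(ESTIMATORS)):
        completed_face = np.hstack((X_test[i], y_test_predict[est][i]))
        if i:
            sub = plt.subplot(n_faces, n_cols, i * n_cols + 2 + j)
        else:
            sub = plt.subplot(n_faces, n_cols, i * n_cols + 2 + j, title=est)
        sub.axis("off")
        sub.imshow(completed_face.reshape(image_shape),
                   cmap=plt.cm.gray, interpolation="nearest")

plt.show()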



We can run the script by right-clicking it and opening it with IDLE, then running the module. Another method is through the command prompt, where we run the Python script after navigating to the proper directory. And there we have the results below (like we saw at the very beginning)!


We can observe how well decision trees, k-nearest-neighbors, linear regression, and ridge regression predicted the lower half of the faces. Linear regression was the least 'smooth' of the four, and ridge regression improved upon it immensely. I thought ridge regression and decision trees worked best at predicting the lower half of the images.


This post was guided by scikit-learn's documentation here.



Thanks for reading,


Wayne
@beyondvalence

