Applied Machine Learning in Python – Part 3 of 5 of the Applied Data Science with Python Specialization

Applied Machine Learning in Python Module 1: Fundamentals of Machine Learning – Intro to SciKit Learn

Assignment 1

Question 0 (Example)

How many features does the breast cancer dataset have?

This function should return an integer.

# You should write your whole answer within the function provided. The autograder will call
# this function and compare the return value against the correct solution value
from sklearn.datasets import load_breast_cancer

def answer_zero():
    # This function returns the number of features of the breast cancer dataset, which is an integer.
    # The assignment question description will tell you the general format the autograder is expecting

    # YOUR CODE HERE
    cancer = load_breast_cancer()
    return len(cancer.feature_names)

# You can examine what your function returns by calling it in the cell. If you have questions
# about the assignment formats, check out the discussion forums for any FAQs
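
For example, calling the function in a cell is a quick sanity check; the breast cancer dataset has 30 features, so the call should return 30.

# Quick check: the breast cancer dataset has 30 features
print(answer_zero())   # 30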

Question 1

Scikit-learn works with lists, numpy arrays, scipy sparse matrices, and pandas DataFrames, so converting the dataset to a DataFrame is not necessary for training this model. Using a DataFrame does, however, make many tasks easier, such as munging data, so let's practice creating a classifier with a pandas DataFrame.

Convert the sklearn.datasets cancer object to a DataFrame.

This function should return a (569, 31) DataFrame with

columns = ['mean radius', 'mean texture', 'mean perimeter', 'mean area', 'mean smoothness', 'mean compactness', 'mean concavity', 'mean concave points', 'mean symmetry', 'mean fractal dimension', 'radius error', 'texture error', 'perimeter error', 'area error', 'smoothness error', 'compactness error', 'concavity error', 'concave points error', 'symmetry error', 'fractal dimension error', 'worst radius', 'worst texture', 'worst perimeter', 'worst area', 'worst smoothness', 'worst compactness', 'worst concavity', 'worst concave points', 'worst symmetry', 'worst fractal dimension', 'target']

and index = RangeIndex(start=0, stop=569, step=1)

import pandas as pd
from sklearn.datasets import load_breast_cancer

def answer_one():
    # YOUR CODE HERE
    cancer = load_breast_cancer()
    # The 30 feature names match the expected column list exactly
    df = pd.DataFrame(cancer.data, columns=cancer.feature_names)
    df['target'] = cancer.target
    return df
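
A quick way to confirm the returned DataFrame matches the expected format is to check its shape, index, and final column:

# Sanity check against the expected format
df = answer_one()
print(df.shape)         # (569, 31)
print(df.index)         # RangeIndex(start=0, stop=569, step=1)
print(df.columns[-1])   # target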

Question 2

What is the class distribution? (i.e. how many instances of malignant and how many benign?)

This function should return a Series named target of length 2 with integer values and index = ['malignant', 'benign']

def answer_two():
    # YOUR CODE HERE
    df = answer_one()
    malignant = (df['target'] == 0).sum()
    benign = (df['target'] == 1).sum()
    # Name the Series 'target' as the question requires
    result = pd.Series([malignant, benign], index=['malignant', 'benign'], name='target')
    return result
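
The breast cancer dataset contains 212 malignant and 357 benign samples, so the returned Series should look like this:

print(answer_two())
# malignant    212
# benign       357
# Name: target, dtype: int64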

Question 3

Split the DataFrame into X (the data) and y (the labels).

This function should return a tuple of length 2: (X, y), where

  • X has shape (569, 30)
  • y has shape (569,).

def answer_three():

    # YOUR CODE HERE
    df = answer_one()
    X = df.drop('target', axis=1)
    y = df['target']
    return X, y
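
Checking the shapes confirms that the target column was split out correctly:

X, y = answer_three()
print(X.shape, y.shape)   # (569, 30) (569,)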

Question 4

Using train_test_split, split X and y into training and test sets (X_train, X_test, y_train, and y_test).

Set the random number generator state to 0 using random_state=0 to make sure your results match the autograder!

This function should return a tuple of length 4: (X_train, X_test, y_train, y_test), where

  • X_train has shape (426, 30)
  • X_test has shape (143, 30)
  • y_train has shape (426,)
  • y_test has shape (143,)

from sklearn.model_selection import train_test_split

def answer_four():
    # YOUR CODE HERE
    X, y = answer_three()
    # The default test_size of 0.25 yields a 426/143 train/test split
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    return X_train, X_test, y_train, y_test
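
With random_state=0 and the default test_size of 0.25, the split is deterministic: 426 training rows and 143 test rows.

X_train, X_test, y_train, y_test = answer_four()
print(X_train.shape, y_train.shape)   # (426, 30) (426,)
print(X_test.shape, y_test.shape)     # (143, 30) (143,)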

Question 5

Using KNeighborsClassifier, fit a k-nearest neighbors (knn) classifier with X_train and y_train, using one nearest neighbor (n_neighbors = 1).

This function should return a sklearn.neighbors.classification.KNeighborsClassifier.

from sklearn.neighbors import KNeighborsClassifier

def answer_five():
    # YOUR CODE HERE
    X_train, X_test, y_train, y_test = answer_four()
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(X_train, y_train)
    return knn
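
You can verify that the returned object is a fitted KNeighborsClassifier configured with a single neighbor:

knn = answer_five()
print(type(knn).__name__)   # KNeighborsClassifier
print(knn.n_neighbors)      # 1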

Question 6

Using your knn classifier, predict the class label using the mean value for each feature.

Hint: You can use cancerdf.mean()[:-1].values.reshape(1, -1), which gets the mean value for each feature, ignores the target column, and reshapes the data from 1 dimension to 2 (necessary for the predict method of KNeighborsClassifier).

def answer_six():
    # YOUR CODE HERE
    cancerdf = answer_one()
    # Mean of each feature column (excluding 'target'), reshaped into a single 2-D record
    means = cancerdf.mean()[:-1].values.reshape(1, -1)
    knn = answer_five()
    prediction = knn.predict(means)
    return prediction
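
The prediction for the single mean-valued record is a one-element array containing the predicted class (0 = malignant, 1 = benign):

prediction = answer_six()
print(prediction.shape)   # (1,)
print(prediction)         # the predicted class label for the mean record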

Question 7

Using your knn classifier, predict the class labels for the test set X_test.

This function should return a numpy array with shape (143,) and values either 0.0 or 1.0.

def answer_seven():
    # YOUR CODE HERE
    X_train, X_test, y_train, y_test = answer_four()
    knn = answer_five()
    predictions = knn.predict(X_test)
    return predictions
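
The result is one predicted label per test row:

test_predictions = answer_seven()
print(test_predictions.shape)           # (143,)
print(set(test_predictions.tolist()))   # the predicted labels, a subset of {0, 1}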

Question 8

Find the score (mean accuracy) of your knn classifier using X_test and y_test.

This function should return a float between 0 and 1

def answer_eight():
    # YOUR CODE HERE
    X_train, X_test, y_train, y_test = answer_four()
    knn = answer_five()
    accuracy = knn.score(X_test, y_test)
    return accuracy
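
Calling the function returns a single float: the fraction of test samples the classifier labels correctly.

accuracy = answer_eight()
print(accuracy)   # a float between 0 and 1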

Optional plot

Try using the plotting function below to visualize the different prediction scores between the train and test sets, as well as between malignant and benign cells.

%matplotlib inline
import matplotlib.pyplot as plt

def accuracy_plot():
    # YOUR CODE HERE
    X_train, X_test, y_train, y_test = answer_four()
    knn = answer_five()

    # Accuracy on the full training and test sets
    train_accuracy = knn.score(X_train, y_train)
    test_accuracy = knn.score(X_test, y_test)

    # Accuracy on the malignant (target == 0) and benign (target == 1) test samples
    malignant_predictions = knn.predict(X_test[y_test == 0])
    benign_predictions = knn.predict(X_test[y_test == 1])
    malignant_accuracy = (malignant_predictions == 0).mean()
    benign_accuracy = (benign_predictions == 1).mean()

    plt.figure(figsize=(25, 10))
    bars = plt.bar(['Train', 'Test', 'Malignant', 'Benign'],
                   [train_accuracy, test_accuracy, malignant_accuracy, benign_accuracy],
                   color=['blue', 'green', 'red', 'orange'])
    plt.ylim(0, 1)
    plt.ylabel('Accuracy')
    plt.title('My plot')

    # Optional: label each bar with its accuracy value
    # for bar in bars:
    #     yval = bar.get_height()
    #     plt.text(bar.get_x() + bar.get_width() / 2, yval + 0.02, round(yval, 2),
    #              ha='center', va='bottom', color='black')

    plt.tight_layout()
    plt.show()
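
To render the chart, call the function in its own cell:

accuracy_plot()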
