Completed

Building neural networks for digit classification (parts a-e)

(a) First, implement a simple multilayer perceptron (MLP) with one hidden layer and a logistic activation function. Specifically, your network will have two layers: a hidden layer of size k and the final output layer. Remember that your input is of size 28*28 = 784. Evaluate the training and validation loss at each iteration; that is, average the loss over all batches in each epoch and record the averaged loss value. Do early stopping based on the best validation loss computed in this fashion. For example, if you run 50 epochs, you will have 50 validation losses; return the model with the lowest validation loss across all epochs and use it to do testing. Plot your training and validation loss curves and report the best test accuracy after experimenting with the optimization parameters for gradient descent, as well as the width of your hidden layer. Complete TODOs 1-3 first and then TODOs 4-6 specified in the two given skeleton files. [Hint: You will either need to save your model during training based on the validation loss and load the best model to return for testing, OR you will need to compute test accuracy after each epoch. Either way of finding the test accuracy for the epoch with the best validation loss is fine.]
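One possible shape for this part is sketched below. It assumes PyTorch and illustrates the checkpoint-on-best-validation-loss pattern from the hint; the class and function names (`MLP`, `train_with_early_stopping`) are illustrative, not the names used in the skeleton files, and the loaders here can be any iterable of `(inputs, labels)` batches.

```python
import copy
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Two-layer MLP: 784 -> k (logistic activation) -> 10 output logits."""
    def __init__(self, k=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, k),
            nn.Sigmoid(),            # logistic activation on the hidden layer
            nn.Linear(k, 10),
        )

    def forward(self, x):
        return self.net(x)

def train_with_early_stopping(model, train_loader, val_loader, epochs=50, lr=0.1):
    """Train with SGD; keep the checkpoint with the lowest epoch-averaged val loss."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    best_val, best_state = float("inf"), None
    train_curve, val_curve = [], []
    for _ in range(epochs):
        model.train()
        total = 0.0
        for x, y in train_loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
            total += loss.item()
        train_curve.append(total / len(train_loader))   # average over all batches

        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x), y).item() for x, y in val_loader)
        val /= len(val_loader)
        val_curve.append(val)

        if val < best_val:                              # save the best model so far
            best_val = val
            best_state = copy.deepcopy(model.state_dict())

    model.load_state_dict(best_state)                   # restore best model for testing
    return model, train_curve, val_curve
```

The returned `train_curve` and `val_curve` lists are what you would plot; the restored model is the one you evaluate on the test set.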

(b) Convolution is a general concept that appears across many different disciplines. Convolution operators can process many types of data, including images, audio, text, and other signals. We will see how 2D convolutional neural nets operate on images and try to gain intuition for their various parameterizations.

2D convolutional neural nets are specified by several hyperparameters: a receptive field (filter size, or kernel size), number of filters K, stride S, amount of zero padding P, and type of pooling. We will represent our input data, as well as the hidden layers, as 3D arrays. We will denote their dimensions by tuples, WxHxD, of width, height, and depth respectively. Since MNIST images are black-and-white and thus have scalar-valued pixels, the depth of the input image is 1. This means the total input dimensionality is 28x28x1. Suppose, for now, that MNIST was in fact in color (RGB). This means the depth of the input image would be 3. Calculate the dimensionality of the output for the following convolutions applied to a color-valued MNIST input:

i. Convolution filter size of 2x2, number of filters 33, stride of 2, padding of 0

ii. Convolution filter size of 3x3, number of filters 55, stride of 1, padding of 1

iii. Convolution filter size of 3x3, number of filters 77, stride of 1, padding of 1, followed by a max pooling with filter size of 2x2 and stride 2.
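A quick way to sanity-check your answers is the standard output-size formula, which applies to both convolution and pooling layers along each spatial dimension. Note that the input depth does not affect the spatial size, and the output depth equals the number of filters K (pooling leaves the depth unchanged). The helper name below is illustrative.

```python
def conv_out_size(w, f, s, p):
    """Spatial output size along one dimension: floor((W - F + 2P) / S) + 1.

    w: input width (or height), f: filter size, s: stride, p: zero padding.
    """
    return (w - f + 2 * p) // s + 1

# (i)   2x2 filter, stride 2, padding 0; output depth = 33 filters
side_i = conv_out_size(28, 2, 2, 0)
# (ii)  3x3 filter, stride 1, padding 1; output depth = 55 filters
side_ii = conv_out_size(28, 3, 1, 1)
# (iii) same 3x3/stride-1/pad-1 conv (depth 77), then a 2x2 max pool with stride 2
side_iii = conv_out_size(conv_out_size(28, 3, 1, 1), 2, 2, 0)
```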

(c) For each question above, compute the total number of parameters in the corresponding convolution layer.
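For checking your counts: each of the K filters has one weight per entry of its f x f x D_in receptive volume, plus one bias term, and pooling layers contribute no parameters. The function name below is illustrative.

```python
def conv_param_count(f, d_in, k):
    """Parameters in a conv layer: K filters, each with f*f*d_in weights + 1 bias."""
    return k * (f * f * d_in + 1)

# Applied to the RGB (depth-3) MNIST settings from part (b):
params_i = conv_param_count(2, 3, 33)     # (i)
params_ii = conv_param_count(3, 3, 55)    # (ii)
params_iii = conv_param_count(3, 3, 77)   # (iii); the max pool adds nothing
```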


(d) Now, let's implement a convolutional neural network and use it for digit classification on MNIST. Start with a simple model with one convolutional layer consisting of a 5x5 kernel with 10 filters, a stride of 1, and zero-padding of size 2. Use tanh as your nonlinear activation. Use softmax as the final layer of your model to determine output digit probabilities. Use early stopping as in (a) and report train and validation curves along with your best test accuracy. Complete TODOs 7-8 first and then TODOs 9-11.
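The specified model could be sketched as follows (the class name is illustrative, not from the skeleton). With padding 2 and stride 1, the 5x5 conv preserves the 28x28 spatial size, so a single linear layer maps the 10x28x28 activations to the 10 digit classes. If you train with `nn.CrossEntropyLoss`, the model should return raw logits, since that loss applies log-softmax internally; apply `softmax` explicitly only when you need actual probabilities.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """One conv layer (5x5 kernel, 10 filters, stride 1, padding 2) + linear head."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 10, kernel_size=5, stride=1, padding=2)
        self.fc = nn.Linear(10 * 28 * 28, 10)   # padding 2 keeps the 28x28 size

    def forward(self, x):
        h = torch.tanh(self.conv(x))            # tanh non-linearity as specified
        return self.fc(h.flatten(1))            # logits; softmax is applied by the loss
```

Training and early stopping can then reuse the same loop structure as part (a).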

(e) Finally, try experimenting and building deeper architectures (with more layers). You have the option of adding convolution, max-pooling, and fully connected (FC) layers, as well as a choice of activation function for each layer, along with all of the hyperparameters we have discussed so far. To get started, it may be easiest to begin by adding convolution and pooling layers that preserve the dimensions of their input, or halve them. This should help get your first deep architectures running without errors. Describe the architecture you found with the best accuracy on the test set. What factors do you find help improve the network? More layers, larger kernel size, a different activation function, different optimization methods?
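As one example of the "preserve or halve" pattern, the sketch below stacks convolutions with padding k//2 (which preserves spatial size) and 2x2 max pools (which halve it), so the shapes are easy to track: 28 -> 14 -> 7 before the FC head. This is a starting point, not a claimed best architecture; the layer widths and ReLU choice here are arbitrary.

```python
import torch
import torch.nn as nn

# 28x28 input; each conv keeps the spatial size, each max pool halves it.
deeper = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                       # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                       # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 128), nn.ReLU(), # FC layer on the 7x7x32 activations
    nn.Linear(128, 10),                    # logits for the 10 digit classes
)
```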

Skills: Neural Networks, Python, PyTorch


About the Employer:
(0 reviews) San Diego, United States

Project ID: #22617246

Awarded to:


Hey there, we just talked about it.

$100 USD in 7 days
(18 reviews)

2 freelancers are bidding an average of $120 for this job


Hi, I am experienced in Python, Neural Networks, PyTorch, etc. I can start right now, but I have a few doubts and questions. Let's have a quick chat and get it started. Waiting for your reply.

$140 USD in 7 days
(0 reviews)