
For the following assignments, please provide as much evidence of the results as possible, including the code, screenshots (only plots – not text or code) and documentation. Submit only one pdf file and .ipynb / .py files containing the code with documentation.
1.a. [10 points]
We talked about a discriminator in a GAN that can recognize real-life looking images. Is it possible to code the same functionality using the conventional / deductive programming paradigm? If yes, briefly describe the algorithm. If not, explain why not. How about the veracity-of-tweets application?
1.b. [15 points]
Search for job descriptions which specifically ask for some of the Machine Learning terms listed on the syllabus sheet. Here's an example of one such search: https://www.linkedin.com/jobs/search?keywords=Random%20Forest
Describe your top 5 takeaways / observations based on the search results.
2.a. [10 points]
Feature Engineering: List as many features as possible that can help determine the veracity of each of the following:
· Web Page
· Student Homework
· News Item in a daily
· Image on Instagram
· Video on YouTube
2.b. [15 points]
Consider an example of a conventional programming solution that you coded in the past. Briefly describe the algorithm. Can we use the inductive programming approach of Machine Learning to solve it now? If so, is there any advantage to shifting to the Machine Learning approach? If not, explain why not. In that case, can you think of an application that is currently solved using conventional programming which could benefit from Machine Learning techniques? Explain.
3. Follow the simple tutorial at https://towardsdatascience.com/simply-explained-logistic-regression-with-example-in-r-b919acb1d6b3 to see Logistic Regression in action. Implement the same functionality for the same dataset in Python. Does the Python version result in the same predictions on the test data as the R version?
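A minimal sketch of one way to start this in Python, assuming the tutorial's binary.csv admissions data (columns admit, gre, gpa, rank) is available locally and using scikit-learn; the 80/20 split and solver settings are illustrative choices, not the tutorial's exact setup:

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load the admissions data used in the R tutorial (the local path is an assumption).
df = pd.read_csv("binary.csv")            # columns: admit, gre, gpa, rank
X = df[["gre", "gpa", "rank"]]
y = df["admit"]

# Illustrative 80/20 train/test split; the tutorial's exact split may differ.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, predictions))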
4.a. [15 points]
Sigmoid functions such as the logistic function played a major role during the discussion on Machine Learning for Veracity of the tweets. What are some of the other mathematical functions that can possibly take the place of the Sigmoid function and help with Machine Learning? Comment on their effectiveness.
4.b. [10 points]
Apply the discussion that we had in the second class for classifying the truthfulness of tweets to another popular classification problem: spam detection. Describe it briefly in your own words.

Will Monroe
CS 109
Lecture Notes #22
August 14, 2017
Logistic Regression
Based on a chapter by Chris Piech
Logistic regression is a classification algorithm¹ that works by trying to learn a function that
approximates P(Y | X). It makes the central assumption that P(Y | X) can be approximated as a
sigmoid function applied to a linear combination of input features. It is particularly important to
learn because logistic regression is the basic building block of artificial neural networks.
Mathematically, for a single training data point (x, y), logistic regression assumes:
P(Y = 1 \mid X = x) = \sigma(z) \quad \text{where} \quad z = \theta_0 + \sum_{i=1}^{m} \theta_i x_i
This assumption is often written in the equivalent forms:
P(Y = 1 \mid X = x) = \sigma(\theta^T x) \qquad \text{where we always set } x_0 \text{ to be } 1
P(Y = 0 \mid X = x) = 1 - \sigma(\theta^T x) \qquad \text{by the law of total probability}
Using these equations for the probability of Y | X we can create an algorithm that selects values of
θ that maximize that probability for all the data. I am first going to state the log probability function
and its partial derivatives with respect to θ. Then later we will (a) show an algorithm that can choose
optimal values of θ and (b) show how the equations were derived.
An important thing to realize is that, given the best values for the parameters (θ), logistic regression
often can do a great job of estimating the probability of different class labels. However, given bad,
or even random, values of θ it does a poor job. The amount of “intelligence” that your logistic
regression machine learning algorithm has depends on how good its values of θ are.
Notation
Before we get started I want to make sure that we are all on the same page with respect to notation.
In logistic regression, θ is a vector of parameters of length m and we are going to learn the values
of those parameters based on n training examples. The number of parameters should be equal to
the number of features of each data point (see section 1).
Two pieces of notation that we use often in logistic regression that you may not be familiar with:
\theta^T x = \sum_{i=1}^{m} \theta_i x_i = \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_m x_m

\sigma(z) = \frac{1}{1 + e^{-z}}
The superscript T in θTx represents a matrix transpose; the operation θTx is equivalent to taking
the dot product of the vectors θ and x, or simply a weighted sum of the components of x (with θ
containing the weights).
¹ Yes, this is a terribly confusing name, given that regression refers to tasks that require predicting continuous
values. Perhaps logistic classification would have been better.
The function \sigma(z) = 1/(1 + e^{-z}) is called the logistic function (or sigmoid function); its graph is an S-shaped curve. [Figure: the sigmoid σ(z) plotted against z.]
An important quality of this function is that it maps all real numbers to the range (0, 1). In logistic
regression, σ(z) turns an arbitrary “score” z into a number between 0 and 1 that is interpreted as a
probability. Positive numbers become high probabilities; negative numbers become low ones.
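To make the notation concrete, here is a minimal Python sketch that computes the score z = θᵀx (with x₀ = 1 for the intercept) and passes it through the sigmoid; the weights and feature values are made-up numbers for illustration:

import numpy as np

def sigmoid(z):
    # Logistic function: maps any real number into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

theta = np.array([-1.0, 0.8, 0.3])   # [theta_0, theta_1, theta_2], illustrative values
x = np.array([1.0, 2.0, -0.5])       # x_0 = 1 is the constant feature

z = theta @ x                        # theta^T x: a weighted sum of the features
print(sigmoid(z))                    # interpreted as P(Y = 1 | X = x)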
Log Likelihood
In order to choose values for the parameters of logistic regression, we use maximum likelihood
estimation (MLE). As such we are going to have two steps: (1) write the log-likelihood function
and (2) find the values of θ that maximize the log-likelihood function.
The labels that we are predicting are binary, and the output of our logistic regression function is
supposed to be the probability that the label is one. This means that we can (and should) interpret
each label as a Bernoulli random variable: Y ∼ Ber(p) where p = σ(θTx).
To start, here is a super slick way of writing the probability of one data point (recall this is the
equation form of the probability mass function of a Bernoulli):
P(Y = y \mid X = x) = \sigma(\theta^T x)^{y} \cdot \bigl[1 - \sigma(\theta^T x)\bigr]^{(1-y)}
Now that we know the probability mass function, we can write the likelihood of all the data:
L(\theta) = \prod_{i=1}^{n} P\bigl(Y = y^{(i)} \mid X = x^{(i)}\bigr) \qquad \text{the likelihood of independent training labels}

\phantom{L(\theta)} = \prod_{i=1}^{n} \sigma(\theta^T x^{(i)})^{y^{(i)}} \cdot \bigl[1 - \sigma(\theta^T x^{(i)})\bigr]^{(1 - y^{(i)})} \qquad \text{substituting the likelihood of a Bernoulli}
And if you take the log of this function, you get the log likelihood for logistic regression. The log
likelihood equation is:
LL(\theta) = \sum_{i=1}^{n} y^{(i)} \log \sigma(\theta^T x^{(i)}) + (1 - y^{(i)}) \log\bigl[1 - \sigma(\theta^T x^{(i)})\bigr]
Recall that in MLE the only remaining step is to choose parameters (θ) that maximize the log likelihood.
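For concreteness, here is a minimal Python sketch of LL(θ) for a feature matrix X whose first column is all ones and 0/1 labels y; the tiny X, y, and θ are made up:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_likelihood(theta, X, y):
    # Sum over i of y_i * log(sigma(theta^T x_i)) + (1 - y_i) * log(1 - sigma(theta^T x_i)).
    p = sigmoid(X @ theta)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

X = np.array([[1.0, 0.5], [1.0, -1.2], [1.0, 2.0]])   # each row starts with x_0 = 1
y = np.array([1.0, 0.0, 1.0])
print(log_likelihood(np.zeros(2), X, y))              # equals 3 * log(0.5) when theta = 0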
Gradient of Log Likelihood
Now that we have a function for log-likelihood, we simply need to choose the values of θ that
maximize it. Unfortunately, if we try just setting the derivative equal to zero, we’ll quickly get
frustrated: there’s no closed form for the maximum. However, we can find the best values of θ
by using an optimization algorithm. The optimization algorithm we will use requires the partial
derivative of log likelihood with respect to each parameter. First I am going to give you the partial
derivative (so you can see how it is used); we’ll derive it a bit later:
\frac{\partial LL(\theta)}{\partial \theta_j} = \sum_{i=1}^{n} \bigl[ y^{(i)} - \sigma(\theta^T x^{(i)}) \bigr] x_j^{(i)}
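The same expression, written as a vectorized Python sketch that returns all partial derivatives at once; the toy data is made up:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient(theta, X, y):
    # Entry j is sum over i of (y_i - sigma(theta^T x_i)) * x_ij.
    residuals = y - sigmoid(X @ theta)   # length-n vector of prediction errors
    return X.T @ residuals               # length-m vector, one entry per theta_j

X = np.array([[1.0, 0.5], [1.0, -1.2], [1.0, 2.0]])   # rows start with x_0 = 1
y = np.array([1.0, 0.0, 1.0])
print(gradient(np.zeros(2), X, y))                    # gradient evaluated at theta = 0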
Gradient Ascent Optimization
Our goal is to choose parameters (θ) that maximize likelihood, and we know the partial derivative
of log likelihood with respect to each parameter. We are ready for our optimization algorithm.
In the case of logistic regression we can’t solve for θ mathematically. Instead we use a computer to
choose θ. To do so we employ an algorithm called gradient ascent (a classic in optimization theory).
The idea behind gradient ascent is that gradients point “uphill”. If you continuously take small steps
in the direction of your gradient, you will eventually make it to a local maximum. In the case of
logistic regression you can prove that the result will always be a global maximum.
The update to our parameters that results in each small step can be calculated as:
\theta_j^{\text{new}} = \theta_j^{\text{old}} + \eta \cdot \frac{\partial LL(\theta^{\text{old}})}{\partial \theta_j^{\text{old}}} = \theta_j^{\text{old}} + \eta \cdot \sum_{i=1}^{n} \bigl[ y^{(i)} - \sigma(\theta^T x^{(i)}) \bigr] x_j^{(i)}
Where η is the magnitude of the step size that we take. If you keep updating θ using the equation
above, you will converge on the best values of θ. You now have an intelligent model. Here is the
gradient ascent algorithm for logistic regression in pseudo-code:
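A minimal Python sketch of that loop, repeating the update above until the parameters stop changing; the step size η, the convergence tolerance, and the iteration cap are illustrative choices:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_regression(X, y, eta=0.001, max_iters=10000, tol=1e-6):
    # X is an n-by-m matrix with a leading column of ones; y holds the 0/1 labels.
    theta = np.zeros(X.shape[1])                 # initialize all parameters to zero
    for _ in range(max_iters):
        grad = X.T @ (y - sigmoid(X @ theta))    # gradient of the log likelihood
        theta_new = theta + eta * grad           # take a small step uphill
        if np.max(np.abs(theta_new - theta)) < tol:
            return theta_new                     # parameters have converged
        theta = theta_new
    return theta

A step size that is too large can overshoot the maximum, while one that is too small converges slowly; in practice η is tuned to the data.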
It is also common to have a parameter θ0 that is added as a constant to the θTx inside the sigmoid.
Rather than computing special derivatives for θ0, we can simply define an additional feature x0 that
always takes the value 1. Taking the weighted sum then results in adding θ0, the weight for x0.
Derivations
In this section we provide the mathematical derivations for the gradient of log-likelihood. The
derivations are worth knowing because these ideas are heavily used in Artificial Neural Networks.
Our goal is to calculate the derivative of the log likelihood with respect to each theta. To start, here
is the derivative of the sigmoid function with respect to its input:

\frac{\partial}{\partial z}\,\sigma(z) = \sigma(z)\bigl[1 - \sigma(z)\bigr] \qquad \text{to get the derivative with respect to } \theta \text{, use the chain rule}
Take a moment and appreciate the beauty of the derivative of the sigmoid function. The reason that
sigmoid has such a simple derivative stems from the natural exponent in the sigmoid denominator.
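To see where this comes from, write σ(z) = (1 + e^{-z})^{-1} and differentiate:

\frac{d}{dz}\,\sigma(z) = \frac{d}{dz}\,(1 + e^{-z})^{-1} = \frac{e^{-z}}{(1 + e^{-z})^{2}} = \frac{1}{1 + e^{-z}} \cdot \frac{e^{-z}}{1 + e^{-z}} = \sigma(z)\,\bigl[1 - \sigma(z)\bigr]

since 1 - \sigma(z) = e^{-z}/(1 + e^{-z}).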
Since the log likelihood function is a sum over all of the data, and in calculus the derivative of a sum
is the sum of derivatives, we can focus on computing the derivative of one example. The gradient
of theta is simply the sum of this term for each training data point.
First I am going to show you how to compute the derivative the hard way. Then we are going to
look at an easier method. The derivative of the log likelihood for one data point (x, y) is:
\frac{\partial LL(\theta)}{\partial \theta_j}
  = \frac{\partial}{\partial \theta_j}\, y \log \sigma(\theta^T x) + \frac{\partial}{\partial \theta_j}\, (1 - y) \log\bigl[1 - \sigma(\theta^T x)\bigr] \qquad \text{derivative of sum of terms}

  = \left[ \frac{y}{\sigma(\theta^T x)} - \frac{1 - y}{1 - \sigma(\theta^T x)} \right] \frac{\partial}{\partial \theta_j}\, \sigma(\theta^T x) \qquad \text{derivative of } \log f(x)

  = \left[ \frac{y}{\sigma(\theta^T x)} - \frac{1 - y}{1 - \sigma(\theta^T x)} \right] \sigma(\theta^T x)\bigl[1 - \sigma(\theta^T x)\bigr]\, x_j \qquad \text{chain rule + derivative of } \sigma

  = \left[ \frac{y - \sigma(\theta^T x)}{\sigma(\theta^T x)\bigl[1 - \sigma(\theta^T x)\bigr]} \right] \sigma(\theta^T x)\bigl[1 - \sigma(\theta^T x)\bigr]\, x_j \qquad \text{algebraic manipulation}

  = \bigl[ y - \sigma(\theta^T x) \bigr] x_j \qquad \text{cancelling terms}
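As a sanity check, the closed-form gradient can be compared against a finite-difference approximation of the log likelihood; this is a minimal sketch, and the toy data, θ, and step size are made up:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_likelihood(theta, X, y):
    p = sigmoid(X @ theta)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

X = np.array([[1.0, 0.5], [1.0, -1.2], [1.0, 2.0]])   # rows start with x_0 = 1
y = np.array([1.0, 0.0, 1.0])
theta = np.array([0.3, -0.7])
eps = 1e-6
for j in range(len(theta)):
    bump = np.zeros_like(theta)
    bump[j] = eps
    # Central finite difference of LL with respect to theta_j.
    numeric = (log_likelihood(theta + bump, X, y) - log_likelihood(theta - bump, X, y)) / (2 * eps)
    # Closed form: sum over i of (y_i - sigma(theta^T x_i)) * x_ij.
    analytic = np.sum((y - sigmoid(X @ theta)) * X[:, j])
    print(j, numeric, analytic)   # the two values should agree closely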
Derivatives Without Tears
That was the hard way. Logistic regression is the building block of artificial neural networks. If we
want to scale up, we are going to have to get used to

Solution

Sandeep Kumar answered on Mar 05 2021
binary.csv
admit,gre,gpa,rank
0,380,3.61,3
1,660,3.67,3
1,800,4,1
1,640,3.19,4
0,520,2.93,4
1,760,3,2
1,560,2.98,1
0,400,3.08,2
1,540,3.39,3
0,700,3.92,2
0,800,4,4
0,440,3.22,1
1,760,4,1
0,700,3.08,2
1,700,4,1
0,480,3.44,3
0,780,3.87,4
0,360,2.56,3
0,800,3.75,2
1,540,3.81,1
0,500,3.17,3
1,660,3.63,2
0,600,2.82,4
0,680,3.19,4
1,760,3.35,2
1,800,3.66,1
1,620,3.61,1
1,520,3.74,4
1,780,3.22,2
0,520,3.29,1
0,540,3.78,4
0,760,3.35,3
0,600,3.4,3
1,800,4,3
0,360,3.14,1
0,400,3.05,2
0,580,3.25,1
0,520,2.9,3
1,500,3.13,2
1,520,2.68,3
0,560,2.42,2
1,580,3.32,2
1,600,3.15,2
0,500,3.31,3
0,700,2.94,2
1,460,3.45,3
1,580,3.46,2
0,500,2.97,4
0,440,2.48,4
0,400,3.35,3
0,640,3.86,3
0,440,3.13,4
0,740,3.37,4
1,680,3.27,2
0,660,3.34,3
1,740,4,3
0,560,3.19,3
0,380,2.94,3
0,400,3.65,2
0,600,2.82,4
1,620,3.18,2
0,560,3.32,4
0,640,3.67,3
1,680,3.85,3
0,580,4,3
0,600,3.59,2
0,740,3.62,4
0,620,3.3,1
0,580,3.69,1
0,800,3.73,1
0,640,4,3
0,300,2.92,4
0,480,3.39,4
0,580,4,2
0,720,3.45,4
0,720,4,3
0,560,3.36,3
1,800,4,3
0,540,3.12,1
1,620,4,1
0,700,2.9,4
0,620,3.07,2
0,500,2.71,2
0,380,2.91,4
1,500,3.6,3
0,520,2.98,2
0,600,3.32,2
0,600,3.48,2
0,700,3.28,1
1,660,4,2
0,700,3.83,2
1,720,3.64,1
0,800,3.9,2
0,580,2.93,2
1,660,3.44,2
0,660,3.33,2
0,640,3.52,4
0,480,3.57,2
0,700,2.88,2
0,400,3.31,3
0,340,3.15,3
0,580,3.57,3
0,380,3.33,4
0,540,3.94,3
1,660,3.95,2
1,740,2.97,2
1,700,3.56,1
0,480,3.13,2
0,400,2.93,3
0,480,3.45,2
0,680,3.08,4
0,420,3.41,4
0,360,3,3
0,600,3.22,1
0,720,3.84,3
0,620,3.99,3
1,440,3.45,2
0,700,3.72,2
1,800,3.7,1
0,340,2.92,3
1,520,3.74,2
1,480,2.67,2
0,520,2.85,3
0,500,2.98,3
0,720,3.88,3
0,540,3.38,4
1,600,3.54,1
0,740,3.74,4
0,540,3.19,2
0,460,3.15,4
1,620,3.17,2
0,640,2.79,2
0,580,3.4,2
0,500,3.08,3
0,560,2.95,2
0,500,3.57,3
0,560,3.33,4
0,700,4,3
0,620,3.4,2
1,600,3.58,1
0,640,3.93,2
1,700,3.52,4
0,620,3.94,4
0,580,3.4,3
0,580,3.4,4
0,380,3.43,3
0,480,3.4,2
0,560,2.71,3
1,480,2.91,1
0,740,3.31,1
1,800,3.74,1
0,400,3.38,2
1,640,3.94,2
0,580,3.46,3
0,620,3.69,3
1,580,2.86,4
0,560,2.52,2
1,480,3.58,1
0,660,3.49,2
0,700,3.82,3
0,600,3.13,2
0,640,3.5,2
1,700,3.56,2
0,520,2.73,2
0,580,3.3,2
0,700,4,1
0,440,3.24,4
0,720,3.77,3
0,500,4,3
0,600,3.62,3
0,400,3.51,3
0,540,2.81,3
0,680,3.48,3
1,800,3.43,2
0,500,3.53,4
1,620,3.37,2
0,520,2.62,2
1,620,3.23,3
...