Test 1 Week 3



Logistic Regression

Quiz, 5 questions

1 point

1. 
Suppose that you have trained a logistic regression classifier, and it outputs on a new example x a prediction hθ(x) = 0.7. This means (check all that apply):

Our estimate for P(y = 1 | x; θ) is 0.3.

Our estimate for P(y = 1 | x; θ) is 0.7.

Our estimate for P(y = 0 | x; θ) is 0.7.

Our estimate for P(y = 0 | x; θ) is 0.3.
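To make the probability reading of hθ(x) concrete, here is a tiny Python sketch (the variable names are mine, purely illustrative): hθ(x) is the model's estimate of P(y = 1 | x; θ), and the two class probabilities must sum to 1.

    # hθ(x) is the estimated P(y = 1 | x; θ); probabilities sum to 1,
    # so the estimate of P(y = 0 | x; θ) is 1 − hθ(x).
    h = 0.7            # the prediction hθ(x) from the question
    p_y1 = h           # estimated P(y = 1 | x; θ)
    p_y0 = 1.0 - h     # estimated P(y = 0 | x; θ), i.e. 0.3
    print(p_y1, p_y0)  # 0.7 0.30000000000000004 (floating-point 0.3)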

1 point

2. 
Suppose you have the following training set, and fit a logistic regression classifier hθ(x) = g(θ0 + θ1x1 + θ2x2).

Which of the following are true? Check all that apply.

J(θ) will be a convex function, so gradient descent should converge to the global minimum.

Adding polynomial features (e.g., instead using hθ(x) = g(θ0 + θ1x1 + θ2x2 + θ3x1² + θ4x1x2 + θ5x2²)) could increase how well we can fit the training data.

The positive and negative examples cannot be separated using a straight line. So, gradient descent will fail to converge.

Because the positive and negative examples cannot be separated using a straight line, linear regression will perform as well as logistic regression on this data.
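As a rough sketch of the polynomial-feature idea from the second option above (assuming NumPy is available; the helper name map_poly_features is mine, not the course's):

    import numpy as np

    def map_poly_features(x1, x2):
        """Degree-2 feature map matching g(θ0 + θ1x1 + θ2x2 + θ3x1² + θ4x1x2 + θ5x2²):
        returns the vector [1, x1, x2, x1², x1·x2, x2²]."""
        return np.array([1.0, x1, x2, x1**2, x1 * x2, x2**2])

    # With these features, the boundary θᵀ·features = 0 can be a curve
    # (circle, ellipse, parabola) instead of only a straight line.
    print(map_poly_features(2.0, 3.0))  # [1. 2. 3. 4. 6. 9.]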

1 point

3. 
For logistic regression, the gradient is given by ∂J(θ)/∂θⱼ = (1/m) ∑ᵢ₌₁ᵐ (hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾) xⱼ⁽ⁱ⁾. Which of these is a correct gradient descent update for logistic regression with a learning rate of α? Check all that apply.

θ := θ − α (1/m) ∑ᵢ₌₁ᵐ (hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾) x⁽ⁱ⁾.

θ := θ − α (1/m) ∑ᵢ₌₁ᵐ (θᵀx − y⁽ⁱ⁾) x⁽ⁱ⁾.

θ := θ − α (1/m) ∑ᵢ₌₁ᵐ (1/(1 + e^(−θᵀx⁽ⁱ⁾)) − y⁽ⁱ⁾) x⁽ⁱ⁾.

θⱼ := θⱼ − α (1/m) ∑ᵢ₌₁ᵐ (θᵀx − y⁽ⁱ⁾) xⱼ⁽ⁱ⁾ (simultaneously update for all j).
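A minimal NumPy sketch of the vectorized update described above, under the usual convention that X carries a leading column of ones (the tiny dataset and learning rate are made up solely to show the call shape):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def gradient_step(theta, X, y, alpha):
        """One update θ := θ − α (1/m) ∑ (hθ(x⁽ⁱ⁾) − y⁽ⁱ⁾) x⁽ⁱ⁾.
        X is (m, n) with a leading column of ones; y is (m,)."""
        m = y.size
        h = sigmoid(X @ theta)       # hθ(x⁽ⁱ⁾) for every training example
        grad = X.T @ (h - y) / m     # the gradient quoted in the question
        return theta - alpha * grad  # all θj updated simultaneously

    X = np.array([[1.0, 0.5],
                  [1.0, 2.0],
                  [1.0, 3.5]])
    y = np.array([0.0, 1.0, 1.0])
    print(gradient_step(np.zeros(2), X, y, alpha=0.1))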

1 point

4. 
Which of the following statements are true? Check all that apply.

The cost function J(θ) for logistic regression trained with m ≥ 1 examples is always greater than or equal to zero.

For logistic regression, sometimes gradient descent will converge to a local minimum (and fail to find the global minimum). This is the reason we prefer more advanced optimization algorithms such as fminunc (conjugate gradient/BFGS/L-BFGS/etc).

The sigmoid function g(z) = 1/(1 + e⁻ᶻ) is never greater than one (> 1).

Linear regression always works well for classification if you classify by using a threshold on the prediction made by linear regression.
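Two of these claims are easy to probe numerically. A throwaway sketch (the two-example dataset is invented for illustration):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def cost(theta, X, y):
        """Cross-entropy J(θ): each term −y·log(h) − (1−y)·log(1−h) is ≥ 0
        because h ∈ (0, 1), so J(θ) ≥ 0 for any θ and any m ≥ 1."""
        h = sigmoid(X @ theta)
        return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

    X = np.array([[1.0, 1.0], [1.0, 2.0]])
    y = np.array([0.0, 1.0])
    print(cost(np.zeros(2), X, y))  # log 2 ≈ 0.693, and never negative

    # g(z) approaches 1 but never exceeds it (floating point may round, though).
    print(sigmoid(np.array([-10.0, 0.0, 10.0])))  # ≈ [4.5e-05, 0.5, 0.99995]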

1 point

5. 
Suppose you train a logistic classifier hθ(x) = g(θ0 + θ1x1 + θ2x2). Suppose θ0 = 6, θ1 = −1, θ2 = 0. Which of the following figures represents the decision boundary found by your classifier?

Figure:

Figure:

Figure:


Figure:

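For intuition about the boundary those parameters imply (a scratch computation, not a reproduction of any answer figure): with θ2 = 0 the feature x2 drops out, so hθ(x) = g(6 − x1) and the boundary is the vertical line where 6 − x1 = 0.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    theta = np.array([6.0, -1.0, 0.0])  # θ0 = 6, θ1 = −1, θ2 = 0
    for x1 in (0.0, 6.0, 12.0):
        h = sigmoid(theta @ np.array([1.0, x1, 0.0]))  # x2 has no effect
        print(x1, h)  # hθ = 0.5 exactly at x1 = 6; hθ > 0.5 (predict 1) for x1 < 6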

I understand that submitting work that is not my own will result in permanent failure of this course and the deactivation of my Coursera account. 

Learn more about the Coursera Honor Code

Cristian Cabello Martín

Submit Quiz

