
1. The following statement best defines what regression is
A. Unconditional expectation function
B. Random variable function
C. The weighted average of independent variable
D. Conditional expectation function of outcome variable on explanatory variable

D. Conditional expectation function of outcome variable on explanatory variable (definition from Google and ChatGPT)
Regression is a statistical method used to model the relationship between a
dependent variable (outcome variable) and one or more independent
variables (explanatory variables). It seeks to find the conditional expectation or
conditional mean of the dependent variable given the values of the
independent variables. Therefore, option D is the correct definition of
regression.
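A small numeric sketch (not from the quiz, with made-up data) of why regression is defined as a conditional expectation: for a discrete regressor, the per-group conditional means of y are the squared-error-minimizing predictions.

```python
# Sketch, assuming hypothetical (x, y) observations: the conditional
# mean E[Y|X=x] minimizes the sum of squared errors among all
# per-group constant predictors.
from statistics import mean

data = [(0, 1.0), (0, 3.0), (1, 4.0), (1, 6.0), (1, 8.0)]  # made-up sample

def sse(predict):
    """Sum of squared errors of a per-x predictor over the data."""
    return sum((y - predict[x]) ** 2 for x, y in data)

# conditional means: the "regression" of y on x for a discrete x
cond_mean = {x: mean(y for xx, y in data if xx == x) for x in {0, 1}}

# any other per-group constants do at least as badly
assert sse(cond_mean) <= sse({0: 1.5, 1: 5.0})
assert sse(cond_mean) <= sse({0: 2.5, 1: 6.5})
```

Here `cond_mean` plays the role of the conditional expectation function of the outcome on the explanatory variable.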

2. The following notation expresses the concept of probability density function
A. f(x)
B. f(xj) = pj
C. F(x) = P(X ≤ x)
D. F(x) = c

A. f(x) (Book Appendices B, B.5)


In probability theory and statistics, the notation f(x) is commonly used to represent the probability density function (PDF) of a continuous random variable. The PDF describes the relative likelihood that the random variable falls near a given value x; for a continuous variable the probability of any single exact value is zero, so f(x) itself is a density, not a probability.

3. The following notation expresses the concept of cumulative distribution function
A. f(x)
B. f(xj) = pj
C. F(x) = P(X ≤ x)
D. F(x) = c

C. F(x) = P(X ≤ x) (Book Appendices B, B.6)

The notation F(x) = P(X ≤ x) represents the cumulative distribution function (CDF) of a random variable X. The CDF gives the probability that the random variable X takes on a value less than or equal to x. (The quiz prints the event as P(X < x); the standard definition uses ≤.)
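A numeric sketch of the two notations for the standard normal case: f(x) is the density and F(x) = P(X ≤ x) the CDF (written here via the error function). The derivative of the CDF recovers the PDF.

```python
# Sketch: standard normal PDF f and CDF F, using only the stdlib.
import math

def f(x):                     # PDF: a density, not a probability
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def F(x):                     # CDF: probability of the event {X <= x}
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# The CDF runs from 0 to 1, and its derivative recovers the PDF.
assert F(-10) < 1e-6 and abs(F(10) - 1) < 1e-6
h = 1e-6
assert abs((F(1 + h) - F(1 - h)) / (2 * h) - f(1)) < 1e-4
```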
4. The expression FX,Y(x, y) = P(X = x, Y = y) defines ...
A. Conditional probability function
B. Unconditional probability function
C. Statistical independence
D. Joint probability density function

D. Joint probability density function (Book Appendices B, B.11)

The expression FX,Y(x, y) = P(X = x, Y = y) defines the joint probability density function (PDF) of two random variables X and Y. It represents the probability that X and Y take on the specific values x and y simultaneously.

5. One of the interpretations drawn from conditional distributions is that ...
A. X and Y are always independent random variables
B. X and Y are always dependent random variables
C. If X and Y are independent, knowledge about the value of X tells nothing about FY|X(y|x)
D. fX,Y(x, y) = fX(x)fY(y)

C. If X and Y are independent, knowledge about the value of X tells nothing about FY|X(y|x). (Book Appendices B, B.16)
Conditional distributions help assess the relationship between two random
variables, and if X and Y are independent, then the conditional distribution of
Y given X should not depend on the value of X. This is why knowing the value
of X should tell us nothing about the conditional distribution FY∣X(y∣x) if X and
Y are independent. Options A, B, and D do not accurately describe this
interpretation of conditional distributions.

An important feature of conditional distributions is that, if X and Y are independent random variables, knowledge of the value taken on by X tells us nothing about the probability that Y takes on various values (and vice versa). That is, fY|X(y|x) = fY(y), and fX|Y(x|y) = fX(x).
6. The equivalent summation operation of ∑_{i=1}^{n} (xi − x̅)^2 is ...
A. ∑ xi − nx^2
B. ∑ xi^2 − nx̅^2
C. ∑ xi − nx̅^2
D. ∑ xi^2 − nx^2

The equivalent summation operation of ∑ (xi − x̅)^2 from i = 1 to n is:

B. ∑ xi^2 − nx̅^2
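The identity is easy to confirm numerically on an arbitrary sample:

```python
# Numeric check of sum((xi - xbar)^2) == sum(xi^2) - n * xbar^2
from statistics import mean

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # arbitrary sample
n, xbar = len(xs), mean(xs)

lhs = sum((x - xbar) ** 2 for x in xs)
rhs = sum(x ** 2 for x in xs) - n * xbar ** 2
assert abs(lhs - rhs) < 1e-9
```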

7. The following expression defines what is correlation and covariance
A. Correlation is bi-directional but covariance is not
B. Covariance is unit neutral but is not for correlation
C. corr(X, X) = cov(X, X) = 1
D. cov(X, c) ≠ 0, c is constant

B. (As printed, option B reverses the two terms; the true statement is that correlation is unit-neutral while covariance is not.)

This is the essential difference between covariance and correlation:

• Covariance measures the degree to which two random variables change together. Its magnitude depends on the units in which the variables are measured, and it can take any value: positive, negative, or zero.
The covariance between two random variables X and Y, sometimes called the population covariance to emphasize that it concerns the relationship between two variables in a population, measures the amount of linear dependence between them. A positive covariance indicates that two random variables move in the same direction, while a negative covariance indicates they move in opposite directions. Interpreting the magnitude of a covariance can be a little tricky. Because covariance is a measure of how two random variables are related, it is natural to ask how covariance is related to the notion of independence. (Book Appendices B, B.27)
• Correlation, on the other hand, is a standardized measure that ranges
between -1 and 1, making it unit-neutral. Correlation also measures the
degree and direction of the linear relationship between two random
variables. When the correlation is 1, it means a perfect positive linear
relationship, and when it's -1, it means a perfect negative linear
relationship.
Because sX and sY are positive, Cov(X,Y) and Corr(X,Y) always have the same sign, and Corr(X,Y) = 0 if, and only if, Cov(X,Y) = 0. Some of the properties of covariance carry over to correlation. If X and Y are independent, then Corr(X,Y) = 0, but zero correlation does not imply independence. (Like the covariance, the correlation coefficient is also a measure of linear dependence.) (Book Appendices B, B.29)
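A quick sketch, on made-up data, of which measure is unit-neutral: rescaling X (say, meters to centimeters) multiplies the covariance by the scale factor but leaves the correlation unchanged.

```python
# Sketch: covariance depends on units of measurement; correlation
# does not. Sample values are hypothetical.
from statistics import mean
import math

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 3.0, 5.0, 4.0]

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return mean((ai - ma) * (bi - mb) for ai, bi in zip(a, b))

def corr(a, b):
    return cov(a, b) / math.sqrt(cov(a, a) * cov(b, b))

xs_cm = [100 * x for x in xs]                           # change of units
assert abs(cov(xs_cm, ys) - 100 * cov(xs, ys)) < 1e-9   # cov scales
assert abs(corr(xs_cm, ys) - corr(xs, ys)) < 1e-12      # corr does not
```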

8. If X and Y are uncorrelated then ...
A. Var(aX + bY) = aVar(X) + bVar(Y)
B. Var(aX + bY) = a^2Var(X) + b^2Var(Y)
C. Var(aX − bY) = a^2Var(X) − b^2Var(Y)
D. Var(aX − bY) = aVar(X) − bVar(Y)

B. Var(aX + bY) = a^2Var(X) + b^2Var(Y) (Book Appendices B, B.30)

If X and Y are uncorrelated, their covariance is zero, so the covariance term 2abCov(X, Y) drops out of Var(aX + bY). What remains is the sum of the individual variances weighted by the squares of the coefficients, which is option B. Note that the coefficients are squared: Var(aX) = a^2Var(X), not aVar(X).
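An exact check of the variance rule for uncorrelated variables, note the squared coefficients. The discrete distributions below are hypothetical, chosen so every moment can be enumerated exactly:

```python
# Exact check of Var(aX + bY) = a^2 Var(X) + b^2 Var(Y) for
# independent (hence uncorrelated) discrete X and Y.
from itertools import product

X = {0: 0.5, 1: 0.5}             # hypothetical pmf of X
Y = {-1: 0.25, 0: 0.5, 1: 0.25}  # hypothetical pmf of Y
a, b = 3.0, -2.0

def var(pmf):
    m = sum(v * p for v, p in pmf.items())
    return sum((v - m) ** 2 * p for v, p in pmf.items())

# pmf of Z = aX + bY under independence
Z = {}
for (x, px), (y, py) in product(X.items(), Y.items()):
    z = a * x + b * y
    Z[z] = Z.get(z, 0) + px * py

assert abs(var(Z) - (a ** 2 * var(X) + b ** 2 * var(Y))) < 1e-12
```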

9. If E(Y|X) = E(Y) then ...
A. Cov(X, Y) ≠ 0
B. Corr(X, Y) ≠ 0
C. Cov(X, Y) ≠ 0 and Corr(X, Y) ≠ 0
D. Cov(X, Y) = 0 and Corr(X, Y) = 0

D. Cov(X, Y) = 0 and Corr(X, Y) = 0 (Book Appendices B, CE.5)

If E(Y|X) = E(Y), then Y is mean-independent of X, which implies that the covariance Cov(X, Y) is zero. Since the correlation is the covariance divided by the product of the standard deviations, Corr(X, Y) is also zero whenever the covariance is zero. So option D is correct.
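An exact illustration, with a hypothetical joint pmf, that E(Y|X) = E(Y) forces the covariance to zero even when Y is not fully independent of X (here the spread of Y changes with X):

```python
# E(Y|X = x) = 0 = E(Y) for every x in this joint pmf, so Cov = 0,
# even though the conditional distribution of Y varies with X.
joint = {  # hypothetical joint pmf P(X = x, Y = y)
    (-1, -2): 1/6, (-1, 2): 1/6,
    ( 0, -1): 1/6, ( 0, 1): 1/6,
    ( 1, -2): 1/6, ( 1, 2): 1/6,
}
assert abs(sum(joint.values()) - 1) < 1e-12

ex  = sum(x * p for (x, y), p in joint.items())
ey  = sum(y * p for (x, y), p in joint.items())
exy = sum(x * y * p for (x, y), p in joint.items())
cov = exy - ex * ey

assert abs(cov) < 1e-12  # mean independence kills the covariance
```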

10. Which of the following formula describes the standard normal distribution?
A. f(x) = [1/(σ√(2π))] exp[−(x − μ)^2/(2σ^2)], −∞ < x < ∞
B. φ(z) = [1/√(2π)] exp[−z^2/2], −∞ < z < ∞
C. T = Z/√(X/n)
D. F = (X1/k1)/(X2/k2)

B. φ(z) = [1/√(2π)] exp[−z^2/2], −∞ < z < ∞

The formula in option B is the probability density function of the standard normal distribution, usually denoted φ(z), where z represents a standard normal random variable with mean μ = 0 and variance σ^2 = 1. This is the most common form used to describe the standard normal distribution. Option A is the general normal density, option C has the form of a t statistic, and option D has the form of an F statistic.

11. From the following matrices, which one does have the highest rank?

The rank of a matrix is the maximum number of linearly independent rows or columns in the matrix. To determine which matrix has the highest rank, we can calculate the rank of each matrix:
A. Matrix A has rank 2 because both rows are linearly independent (it is the identity matrix).
B. Matrix B has rank 1 because the second row is just a scalar multiple of the first row, so it is not linearly independent. (Apply the row operation Row 2 = Row 2 − 1·Row 1; the resulting rank is 1.)
C. Matrix C has rank 3 because all three rows are linearly independent. (Apply the analogous row operations to Row 2 and Row 3; the resulting rank is 3.)
D. Matrix D has rank 2 because the third row is a linear combination of the first two rows. (Each entry of Row 2 is Row 1 multiplied by 2, so after simplification Row 2 reduces to Row 1. Applying the same row operations as in option B gives rank 2.)

So, the matrix with the highest rank is Matrix C, which has a rank of 3.
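The quiz's matrices were not reproduced in this copy, so here is a small rank routine (Gaussian elimination, stdlib only) demonstrated on hypothetical matrices matching the descriptions above: an identity matrix and one whose second row is a multiple of the first.

```python
# Sketch: compute matrix rank by Gaussian elimination.
def rank(rows, eps=1e-9):
    """Rank = number of nonzero pivot rows after elimination."""
    m = [list(map(float, r)) for r in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if abs(m[i][c]) > eps),
                     None)
        if pivot is None:
            continue                      # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]   # move pivot row up
        for i in range(len(m)):           # clear the column elsewhere
            if i != r and abs(m[i][c]) > eps:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

assert rank([[1, 0], [0, 1]]) == 2   # independent rows: full rank
assert rank([[1, 2], [2, 4]]) == 1   # Row 2 = 2 * Row 1: rank 1
```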
12. If A (n × n) is singular, then
A. A is invertible (a singular matrix cannot be inverted; see Inverse, Definition D.9)
B. A has full rank (a full-rank matrix is not singular, since it can be inverted; see Properties of Rank)
C. |A| = 0
D. all column vectors of A are linearly independent (not necessarily; the column vectors of A are linearly independent only when Ax = 0 has x = 0 as its only solution)

C. |A| = 0

If matrix A is singular, its determinant |A| is equal to zero. The determinant of a square matrix is the key factor in determining whether the matrix is invertible: if the determinant is zero, the matrix is not invertible. So, option C is the correct statement.

A singular matrix is one whose determinant is zero, whereas the determinant of a non-singular matrix is nonzero. A non-singular matrix has an inverse, while a singular matrix does not. (Based on Google definition.)

13. Not answered: I did not understand the question, let alone how to work it out.

14. Let A = [1 2 3; 4 8 6; 2 4 2]. Which statement is true?
A. A is full rank
B. The column rank of A is 2
C. The row rank of A is 2
D. All row (or column) vectors are linearly independent

B. The column rank of A is 2 (and, equivalently, C: the row rank is also 2). (Work out the 3 × 3 determinant: it is zero, so the column and row ranks are 2.)

In the given matrix A, the second column (2, 8, 4)′ is exactly twice the first column (1, 4, 2)′, so the columns are linearly dependent and the determinant of A is zero. The first and third columns are not proportional, so the rank is 2. Because the row rank of a matrix always equals its column rank, options B and C are both true, while A and D are false: A is not full rank.

If the determinant is nonzero, the vectors are linearly independent; if the determinant is zero, the vectors are linearly dependent.
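Checking the question 14 matrix directly confirms this: its determinant is 0 and its second column is twice the first, so the rank is 2.

```python
# Direct check of A = [1 2 3; 4 8 6; 2 4 2].
A = [[1, 2, 3],
     [4, 8, 6],
     [2, 4, 2]]

def det3(m):
    """Cofactor expansion of a 3x3 determinant along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

assert det3(A) == 0                              # singular, so rank < 3
assert all(row[1] == 2 * row[0] for row in A)    # column 2 = 2 * column 1
# columns 1 and 3 are not proportional, so the rank is exactly 2
assert A[0][2] * A[1][0] != A[0][0] * A[1][2]
```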

15. Consider a simple linear model y = Dβ + e. Let y = [y1 y2 ... yn]′ and D = [d1 d2 ... dn]′. yi is a continuous variable. The values of di are either 1 or 0. Assume that there are n1 elements with di = 1, where n1 < n, that yi belongs to the di = 1 group if di = 1, and that y̅1 is the simple mean of those yi (y̅1 = (∑ yi)/n1 over di = 1). Further, let ρ = n1/n. Under sum-of-squares minimization min(e′e), β̂ is
A. ρy̅1
B. n1y̅1
C. y̅1/ρ
D. y̅1

D. y̅1

Minimizing e′e gives the OLS estimator β̂ = (D′D)^(−1)D′y. Here D′D = ∑ di^2 = n1 and D′y = ∑ di·yi, the sum of the yi with di = 1, so β̂ = (1/n1)∑ yi over di = 1, which is y̅1, option D.
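The derivation can be checked numerically on hypothetical data: least squares with a single 0/1 regressor and no intercept returns the mean of the yi with di = 1.

```python
# Sketch for y = D*beta + e with a dummy regressor: the closed-form OLS
# slope (D'D)^{-1} D'y equals the mean of the y_i with d_i = 1.
# Data values are hypothetical.
from statistics import mean

y = [3.0, 7.0, 5.0, 9.0, 2.0]
d = [1,   0,   1,   1,   0]

# closed-form OLS for a single regressor with no intercept
beta_hat = sum(di * yi for di, yi in zip(d, y)) / sum(di * di for di in d)

ybar1 = mean(yi for di, yi in zip(d, y) if di == 1)
assert abs(beta_hat - ybar1) < 1e-12   # beta_hat is exactly y-bar-1
```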
