
SELECTIVE, EDUCATIVE COMMENTARY ON

JEE 2019 ADVANCED MATHEMATICS PAPERS


Commentary on Paper 1 and Paper 2

(Last Revised on 14/06/2019)

Contents

Paper 1

Paper 2

Concluding Remarks

This is a commentary on some selected problems from the Mathematics


sections of both the JEE 2019 Advanced papers. This is a deviation from the
past practice of 16 years (from 2003 to 2018) where the commentaries covered
all questions. There are two reasons for this change. First, some problems
are straightforward and similar problems have been asked in the past. They
lack any significant educative content. Secondly, detailed solutions to such
problems are easily available on many websites and there is not much point
in duplicating them.
For example, routine problems in coordinate geometry and trigonometry,
calculation of areas, integrals or of conditional probabilities are omitted.
(With this criterion, Q.15 and Q.16 of Paper 2 should also be excluded. But
they are included because they are referred to in the concluding remarks.)
The decision to include or exclude a particular problem is admittedly
subjective. If some readers strongly feel that some problems omitted here
deserve to be included, they are welcome to so suggest.
Readers who notice any errors, want to comment or give alternate solu-
tions are invited to send an email to the author at kdjoshi314@gmail.com or
send an SMS or a WhatsApp message to the author at 9819961036.
As in the past, unless otherwise stated, all the references made are to the
author’s book Educative JEE (Mathematics) published by Universities Press,
Hyderabad. The third edition of this book is now available in the market.
PAPER 1

Contents

Section - 1 (Only one Correct Option Type)

Section - 2 (One or More Correct Options Type)

Section - 3 (Numerical Answer Type)

SECTION - 1
This section contains FOUR questions each of which has four options out
of which ONLY ONE is correct. There are 3 marks if only the correct option
is chosen, 0 marks if no option is chosen and −1 marks in all other cases.

Q.1 Let S be the set of all complex numbers z satisfying |z − 2 + i| ≥ √5.
If the complex number z0 is such that 1/|z0 − 1| is the maximum of the
set {1/|z − 1| : z ∈ S}, then the principal argument of (4 − z0 − z̄0)/(z0 − z̄0 + 2i) is

(A) −π/2    (B) π/4    (C) π/2    (D) 3π/4

Answer and Comments: (A). Clearly, S is the set of points on or outside
the circle, say C, of radius √5 centred at 2 − i shown below.
[Figure: the circle C centred at M = 2 − i, the region S outside it, and the point P = z0 on C closest to the point 1 on the x-axis.]
The second sentence of the statement of the problem is just a twisted
way of saying that the point z0 is the point of S which is closest to 1.
Clearly, it must lie on the boundary of S, which is C. So, it must be
the closer end of the diameter through 1.
It is now tempting to identify z0 . For this, let z0 = x0 + iy0 . Then
P = (x0 , y0 ) is one of the points of intersection of the circle C and
the line, say L, passing through the centre M = (2, −1) of C and the
point (1, 0). The equations of C and L are easy to write down and
can be solved simultaneously to give two points, of which (x0 , y0 ) is
the one closer to (1, 0). A better approach is to determine P from
MP = √5 û where û is a unit vector in the direction from M to P.
Clearly, û = (−î + ĵ)/√2. Then P = (x0, y0) = (2 − √(5/2), −1 + √(5/2)).
But all this work is not of much use because the question does not ask
for the argument of z0 , but rather, that of a related complex number,
say w0 = (4 − z0 − z̄0)/(z0 − z̄0 + 2i). For any complex number z0, z0 + z̄0 is real while
z0 − z̄0 is purely imaginary. Hence the numerator of w0 is real while its
denominator is purely imaginary. Therefore w0 is a purely imaginary
number of the form ki for some k ∈ IR. Then depending upon the sign
of k the principal argument (which is to lie in the interval (−π, π]) of
w0 will be either π/2 or −π/2. From the diagram it is clear without
much work that P = (x0, y0) lies in the open unit square in the first
quadrant. So both x0, y0 lie in (0, 1). Hence
4 − z0 − z̄0 = 4 − 2x0 > 2. Similarly, z0 − z̄0 + 2i = (2y0 + 2)i and
2y0 + 2 ≥ 2. So, w0 is of the form a/(bi) where a, b are positive. Hence
w0 = −(a/b)i and therefore its argument is −π/2.
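The conclusion can also be checked numerically. The following short Python sketch (hypothetical, not part of the original commentary) locates z0 as the near end of the diameter of C through the point 1 and evaluates the principal argument of w0.

import cmath
import math

centre = 2 - 1j
radius = math.sqrt(5)

# z0 is the point of the circle C on the segment from the centre towards 1.
direction = (1 - centre) / abs(1 - centre)   # unit vector from the centre towards 1
z0 = centre + radius * direction

w0 = (4 - z0 - z0.conjugate()) / (z0 - z0.conjugate() + 2j)
print(z0)                # approximately 0.419 + 0.581i
print(cmath.phase(w0))   # -1.5707..., i.e. -pi/2, confirming option (A)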
An intentionally tricky problem with traps laid to divert a candi-
date’s attention from the core. The convention about the interval in
which the principal argument is to lie ought to have been spelt out.
The paper setters have taken it as the interval (−π, π]. But the choice
[0, 2π) is also popular. In that case the answer would be 3π/2.

Q.2 Let

        [ sin⁴θ         −1 − sin²θ ]
    M = [                          ]  = αI + βM⁻¹
        [ 1 + cos²θ      cos⁴θ     ]

where α = α(θ) and β = β(θ) are real numbers, and I is the 2 × 2
identity matrix. If

α∗ is the minimum of the set {α(θ) : θ ∈ [0, 2π)} and
β ∗ is the minimum of the set {β(θ) : θ ∈ [0, 2π)},

then the value of α∗ + β ∗ is


(A) −37/16    (B) −31/16    (C) −29/16    (D) −17/16

Answer and Comments: (C). The problem asks for the minimum
values of two functions α(θ) and β(θ) over the semi-open interval [0, 2π).
(Choice of the semi-open interval instead of the usual closed interval,
hardly matters because evidently α, β will have 2π as a period and so
α(2π) = α(0) and β(2π) = β(0).)
But neither α nor β is given explicitly as a function of θ. So we
first have to determine them from the given matrix equation, viz.

M = αI + βM⁻¹                                              (1)
We can either use this equation as it is, or we can multiply it throughout
by M to get
M² = αM + βI                                               (2)

No matter which approach we take, α and β have to be found from the


four equations that result by matching the corresponding entries of the
matrices on the two sides. It is natural to think that (2) is a better
choice than (1), because finding the inverse of a matrix (in case it ex-
ists) is usually a laborious job, as compared with matrix multiplication
which is needed to find M 2 .
But things are different when we have a matrix of order 2. For such
a matrix, there is an easy explicit formula for the inverse, viz.

    [ a  b ]⁻¹           [  d  −b ]
    [ c  d ]    = (1/∆)  [ −c   a ]                        (3)

where ∆ = ad − bc is the determinant (which is assumed to be non-zero


as otherwise the inverse cannot exist).
As a result if we are given that

    [ a  b ]     [ α  0 ]          [  d  −b ]
    [ c  d ]  =  [ 0  α ]  + (β/∆) [ −c   a ]              (4)

then, equating the entries in the first row and the second column of the
matrices on the two sides gives b = −βb/∆ and hence

β = −∆ (5)

(assuming b ≠ 0) and further, equating the entries in the first row and
the first column we get a = α + βd/∆, which, because of the last equation,
becomes

α = a − βd/∆ = a + d                                       (6)

Thus we have a very simple way of determining the functions α(θ) and
β(θ) by merely looking at the entries of M and doing a little calculation
to find its determinant. Specifically,

α(θ) = sin4 θ + cos4 θ (7)


and β(θ) = − sin4 θ cos4 θ − 2 − sin2 θ cos2 θ (8)

(For the sake of perfection, we must ensure that b, i.e. the entry in
the first row and second column of M is non-zero. But that is obvious
since −1 − sin2 θ is always negative.)
The problem is now reduced to minimising these functions over [0, 2π).
As a first measure of simplification we note that both of them can
be expressed in terms of sin2 θ cos2 θ. This is obvious for (8). For
(7), rewrite sin4 θ + cos4 θ as (sin2 θ + cos2 θ)2 − 2 sin2 θ cos2 θ = 1 −
2 sin²θ cos²θ. As θ varies over [0, 2π], sin²θ cos²θ varies from 0 to 1/4,
because sin²θ cos²θ = (1/4) sin²2θ. So, if we let

u = sin2 θ cos2 θ (9)

then

α∗ = min{1 − 2u : u ∈ [0, 1/4]}                            (10)
and β∗ = min{−2 − u − u² : u ∈ [0, 1/4]}                   (11)

Clearly, α∗ = 1/2. For β∗, we maximise 2 + u + u² for u ∈ [0, 1/4]. As
both u and u2 are increasing functions over this interval, the maximum

occurs at u = 1/4 and equals 2 + 1/4 + 1/16 = 37/16. Hence β∗ = −37/16. Hence
α∗ + β∗ = 1/2 − 37/16 = −29/16.
The problem is a combination of two unrelated problems, the
first one about matrices and the second one about minimisation of
trigonometric functions. The second part is not very exciting. But that
is more than made up by the first part. Note that (1) is weaker than
(2), which holds even when M is not invertible. When it is invertible,
both are equivalent. But (1) gave the values of α and β more readily.
With these values, (2) can be rewritten as

M² − (a + d)M + (ad − bc)I = 0                             (12)

Note that a + d and ad − bc are, respectively, the trace and the deter-
minant of the matrix M. There is a famous theorem, called Cayley
Hamilton theorem which says that (12) always holds if written in a
different form. Define a polynomial p(x) from the entries of M by

p(x) = det(xI − M)                                         (13)

       | x − a     −b   |
     = |                |  =  x² − (a + d)x + (ad − bc)    (14)
       |  −c     x − d  |

This polynomial is called the characteristic polynomial of the ma-


trix M. Generally, in a polynomial f (x) = a0 + a1 x + a2 x2 + . . . + an xn
with real (or complex) coefficients a0 , a1 , . . . , an , the variable x is called
an indeterminate. Normally it is given some particular real (or com-
plex) value and then the resulting expression p(x) is also a real or a
complex number. But if we treat a number a as a matrix aI, then f (M)
makes sense and is a square matrix of the same order as the identity
matrix I. With this understanding, the Cayley Hamilton theorem says
that every square matrix is a root of its characteristic polynomial, i.e.
p(M) is the zero matrix. In essence we have proved this theorem above
for matrices of order 2. It is a non-trivial theorem, which looks decep-
tively simple if we take the compact definition (13) of p(x), for then
one gets p(M) = det(MI − M) which vanishes since MI − M is the
zero matrix. The flaw in this reasoning is that (13) is merely a compact
notation for (14) which is the correct definition.
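For a concrete check of the theorem in the 2 × 2 case at hand, here is a minimal sketch (hypothetical, assuming numpy is available) which evaluates p(M) = M² − (a + d)M + (ad − bc)I for the matrix of this question at a sample value of θ and verifies that it is the zero matrix.

import numpy as np

theta = 0.7
s, c = np.sin(theta) ** 2, np.cos(theta) ** 2
M = np.array([[s ** 2, -1 - s],
              [1 + c,  c ** 2]])

# p(M) = M^2 - (trace M) M + (det M) I should be the zero matrix.
p_of_M = M @ M - np.trace(M) * M + np.linalg.det(M) * np.eye(2)
print(np.allclose(p_of_M, 0))    # True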
Cayley Hamilton theorem is beyond the JEE level. But it appears
in a disguised form in this problem. Problems involving matrices (and

their inverses) are common. But a problem implicitly based on the
Cayley Hamilton theorem has not been asked before. That makes the
problem imaginatively designed. The trigonometric optimisation is a
dispensable addendum.

Q.3 Dropped. (Equation of a chord of a circle in terms of its midpoint.)

Q.4 The area of the region {(x, y) : xy ≤ 8, 1 ≤ y ≤ x2 } is

(A) 16 loge 2 − 14/3        (B) 8 loge 2 − 14/3
(C) 16 loge 2 − 6           (D) 8 loge 2 − 7/3

Answer and Comments: (A). xy = 8 represents a rectangular hyper-


bola with one branch in the first and the other in the third quadrant.
But as we have y ≥ 1 for all points in the given set, we ignore the
branch in the third quadrant. y = x2 is a parabola with vertex at O.
It meets the hyperbola xy = 8 only at one point (2, 4). Call it A.

[Figure: the curves xy = 8, y = x² and y = 1, with A = (2, 4), B = (1, 1), C = (8, 1) and F = (−1, 1); the bounded region R1, a horizontal slice DE at height y = y0, and the unbounded shaded region R2 to the left of the parabola.]

The line y = 1 meets the hyperbola xy = 8 only at the point (8, 1).
Call it C. The line y = 1 meets the parabola y = x2 at two points
B = (1, 1) and also at F = (−1, 1).
The region, say R1 , bounded by the three curves is a part of the given
set. It is easier to obtain its area, say ∆ by horizontal than by vertical
slicing. On R1 , y varies from 1 to 4. For each y0 ∈ [1, 4], the intersection

of R1 with the line y = y0 is the segment from (√y0, y0) to (8/y0, y0). It is
shown by DE in the figure above. Its length is 8/y0 − √y0. Hence

∆ = ∫₁⁴ (8/y − √y) dy
  = [8 loge y − (2/3) y^(3/2)] evaluated from y = 1 to y = 4
  = 8 loge 4 − 16/3 + 2/3
  = 16 loge 2 − 14/3                                       (1)
So (A) is correct.
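A quick numerical cross-check (hypothetical, using only the standard library) is to approximate the integral above by a midpoint Riemann sum and compare it with 16 loge 2 − 14/3.

import math

n = 200000
h = 3 / n
area = sum((8 / (1 + (i + 0.5) * h) - math.sqrt(1 + (i + 0.5) * h)) * h
           for i in range(n))
print(area)                         # about 6.4237
print(16 * math.log(2) - 14 / 3)    # 6.4237..., matching option (A)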
An absolutely straightforward problem. The computations needed
are manageable because 4 and 1 are perfect squares. The only catch is
that there is an unbounded region R2 (shown by shading in the figure)
whose points also satisfy the condition 1 ≤ y ≤ x2 . Here the hyperbola
does not enter the picture because x < 0 and so xy < 0.
Although there exist unbounded regions with finite areas (e.g. the
region {(x, y) : x ≤ 0, 0 ≤ y ≤ eˣ}), in the present problem, the area
of the region R2 would be the improper integral ∫ from −∞ to −1 of (x² − 1) dx. This
integral is not convergent. Hence R2 does not have finite area.
It is conceivable that some candidates were baffled because of R2 .
It is no defense to say that they should have used common sense to
exclude R2 and focus only on R1 . Obviously, the possibility of R2
eluded the paper setters. (Or else, they would have simply described
the region as that bounded by y = 1, y = x2 and xy = 8.) This
happens typically because at the time of paper setting, the person
who proposes a particular problem draws a figure of only what he/she
intended and the minds of the other paper setters are channelised by it.
One way to reduce, if not altogether eliminate, such lapses is to keep
some paper setters totally away from paper setting and to call them in
only at the end when they would look at every question as if they were
candidates. More importantly, the JEE board has to evolve a coherent
policy about such questions. Perhaps giving a NOTA option, which
is getting increasingly popular with elections, will protect the paper
setters if not the candidates.

SECTION - 2
This section contains EIGHT questions each of which has FOUR options
out of which ONE OR MORE THAN ONE is (are) correct. The maximum
score per question is 4, the minimum −1. There are partial scores 1, 2 and
3 depending on the number of correct options and the correct choices, with
the proviso, that when even one incorrect option is chosen, there is −1 mark,
regardless of how many options are correctly chosen.

Q.5 Let α and β be the roots of x2 − x − 1 = 0, with α > β. For all positive
integers n, define

an = (αⁿ − βⁿ)/(α − β),   n ≥ 1

and
b1 = 1 and bn = an−1 + an+1 , n ≥ 2.
Then which of the following options is/are correct?

(A) a1 + a2 + a3 + . . . + an = an+2 − 1 for all n ≥ 1
(B) Σ_{n=1}^{∞} an/10ⁿ = 10/89
(C) bn = αⁿ + βⁿ for all n ≥ 1
(D) Σ_{n=1}^{∞} bn/10ⁿ = 8/89

Answer and Comments: (A, B, C). The first two options involve
only a’s. Even here, it will be noticed that (B) is more directly related
to the data. Even without identifying α and β, we see that since 1/(α − β) is
some constant, an is a linear combination of two geometric progressions,
one with common ratio α and the other with common ratio β. So,
it should be possible to reduce the sum in (B) to that of an infinite
geometric series. Hence it is the right option to try first. To actually
compute the sum, we may need the values of α and β. They come out
as,

α = (1 + √5)/2                                             (1)
and β = (1 − √5)/2                                         (2)

Hence α − β = √5. Therefore,

Σ_{n=1}^{∞} an/10ⁿ = (1/√5) [ Σ_{n=1}^{∞} (α/10)ⁿ − Σ_{n=1}^{∞} (β/10)ⁿ ]
                   = (1/√5) [ (α/10)/(1 − α/10) − (β/10)/(1 − β/10) ]
                   = (1/√5) [ α/(10 − α) − β/(10 − β) ]
                   = (1/√5) · 10(α − β)/((10 − α)(10 − β))        (3)
                   = 10/(100 − 10(α + β) + αβ)                    (4)
                   = 10/(100 − 10 − 1) = 10/89                    (5)
Thus we see that (B) is correct. Note that we need not have calculated
α − β as √5, because the numerator in (3) has α − β as a factor which
would have canceled anyway with √5. But it is not always easy to
foretell such redundancies. Nevertheless, we are wiser in reducing (3)
further because the denominator is a symmetric function of the roots α
and β of the polynomial x2 − x − 1 = 0 and so can be calculated simply
from α + β = 1, αβ = −1 without identifying the roots individually.
We can apply the formula for the sum of a G.P. to prove (A) too.
But there is a simpler proof by induction on n if we observe a simple
recurrence relation for an . Since α2 = α + 1, multiplying by αn we have

αn+2 = αn+1 + αn (6)

for all n ≥ 1. Similarly,

β n+2 = β n+1 + β n (7)

for all n ≥ 1. Subtracting (7) from (6) and dividing by α − β, we have

an+2 = an+1 + an (8)

for all n ≥ 1.
In other words, the numbers an satisfy a very well known recurrence
relation called the Fibonacci relation. The first two terms a1 and a2
have to be found directly. Clearly, a1 = (α − β)/(α − β) = 1 and a2 = α + β = 1.
Hence a3 = 1 + 1 = 2, a4 = 2 + 1 = 3, a5 = 3 + 2 = 5 and so on.
Now the truth of (A) is easily verified for n = 1 since a1 = 1 and a3 = 2.
Assume that it is true for n = k, i.e.

a1 + a2 + . . . + ak = ak+2 − 1 (9)

Adding ak+1 to both the sides and using (8), we get

a1 + a2 + . . . + ak + ak+1 = ak+2 − 1 + ak+1 = ak+3 − 1 (10)

So (A) is true for n = k + 1. This completes the induction and proves


that (A) is correct.
We now turn to the options (C) and (D) both of which are about the
bn ’s. Let us first try (C), for if it is true then that might help in proving
(D) in a manner similar to the proof of (B). Note that bn is defined in
terms of the a’s which satisfy the Fibonacci relation. As a result, we
claim the b’s also satisfy the Fibonacci relation. Indeed,

bn + bn+1 = (an−1 + an+1 ) + (an + an+2 )


= (an−1 + an ) + (an+1 + an+2 )
= an+1 + an+3 (11)
= bn+2 (12)

where in (11) we have applied (8) twice (with different indices).


We now prove (C) by induction on n and using the Fibonacci relation.
Since in this relation, every term depends on the preceding two terms
rather than just one, we begin the induction by verifying the first two
cases, viz. b1 = α + β and b2 = α2 + β 2 . The first is trivial since b1 = 1.
For the second, write α2 +β 2 = (α+β)2 −2αβ = 3 which equals a1 +a3 .
For the inductive step too, we need the truth for n = k and n = k +1 to
deduce the truth for n = k + 2 (as against the normal situation where
the truth for n = k usually implies the truth for n = k +1). This is easy
because both the sequences {αn } and {β n } also satisfy the Fibonacci

relation because of (6) and (7). So, assuming that bk = αk + β k and
bk+1 = αk+1 + β k+1 and adding, we get

bk+2 = bk + bk+1 = (αk + αk+1) + (β k + β k+1) = αk+2 + β k+2 (13)

Hence (C) is true by induction on n.


We can now sum the infinite series in (D) exactly the same way as
we summed the series in (B), except that we use (C) instead of the
definition of an . Thus
Σ_{n=1}^{∞} bn/10ⁿ = Σ_{n=1}^{∞} (αⁿ + βⁿ)/10ⁿ
                   = Σ_{n=1}^{∞} (α/10)ⁿ + Σ_{n=1}^{∞} (β/10)ⁿ
                   = (α/10)/(1 − α/10) + (β/10)/(1 − β/10)
                   = α/(10 − α) + β/(10 − β)
                   = (10(α + β) − 2αβ)/(100 − 10(α + β) + αβ)     (14)
                   = 12/89                                        (15)
Hence (D) is false.
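Both series converge fast enough for a direct numerical check. The following sketch (hypothetical, using exact rational arithmetic from the standard library) generates the two sequences via the Fibonacci recurrence and sums the first few dozen terms.

from fractions import Fraction

a = [None, Fraction(1), Fraction(1)]          # a_1 = a_2 = 1
for _ in range(60):
    a.append(a[-1] + a[-2])                   # the Fibonacci relation (8)

b = [None, Fraction(1)] + [a[n - 1] + a[n + 1] for n in range(2, 60)]

sum_a = sum(a[n] / Fraction(10) ** n for n in range(1, 60))
sum_b = sum(b[n] / Fraction(10) ** n for n in range(1, 60))
print(float(sum_a * 89), float(sum_b * 89))   # 10.0 and 12.0 (up to a negligible tail)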
In a nutshell, the entire question is a play with the Fibonacci rela-
tion. Although not explicitly mentioned in the JEE syllabus, questions
based on it or on similar recurrence relations arising from the roots of
a quadratic equation have been asked before. (See the JEE 1992 prob-
lem on p. 145, or the JEE 1981 problem on p. 146 or the JEE 2000
problem on p. 823. See also Paragraph for Q. 31 and 32 in JEE 2012
or Paragraph 2 of Advanced JEE 2017 Paper 2 in the respective year’s
commentaries.) Candidates who recognise the similarity will find the
problem conceptually easier. But the work required is repetitious and
far in excess of the proportionate work for the time allotted. It would
have been better to ask only (D) as a numerical answer question.

Q.6 Let

        [ 0  1  a ]                 [ −1   1  −1 ]
    M = [ 1  2  3 ]   and   adj M = [  8  −6   2 ]
        [ 3  b  1 ]                 [ −5   3  −1 ]
where a, b are real numbers. Which of the following options is/are
correct?

(A) a + b = 3
(B) (adj M)−1 + adj M −1 = −M
(C) det(adj M 2 ) = 81.
   
(D) If M (α, β, γ)ᵀ = (1, 2, 3)ᵀ, then α − β + γ = 3

Answer and Comments: (A, B, D). The major importance of the


adjoint adj M of a square matrix M comes from the following equation.

(adj M)M = M(adj M) = ∆I (1)

where ∆ is the determinant of M and I is the identity matrix (of the


same order as the matrix M).
When ∆ ≠ 0, we can go one step further and get (1/∆)(adj M) as
a (two sided) inverse of M. However, the adjoint of a square matrix
is constructed laboriously by calculating many determinants of lower
orders. This is a very inefficient task, as inefficient as solving a system
of linear equations using Cramer’s rule. More efficient methods
are available for finding the inverse of a matrix (when it exists). Not
surprisingly, the adjoint figures far less commonly than inverses in the
theory of matrices. But it seems rather popular with JEE paper setters.
(See for example, Q. 44 in Paper 1 of Advanced JEE 2016 or Q.52
of Paper 1 of Advanced JEE 2013, in the respective commentaries.)
A possible explanation is that its construction, although laborious, is
elementary.
Now coming to the problem, we are given the adjoint of M, but not
the matrix M itself. Two of the nine entries of M are the unknowns a

and b. If we write down its adjoint, many entries will have these un-
knowns. So, the system of nine equations that would result by equating
the corresponding entries of the unknown and the given expression for
adj M will be a complicated one to solve. Instead, we use (1). Of
course, multiplying two 3 × 3 matrices fully is not a very enviable job.
But in the present case, note that the R.H.S. is a diagonal matrix. The
six non-diagonal entries of it are all 0. As there are only two unknowns
in M, we need only two equations to find them. Fortunately, if we start
multiplying adj M and M by writing
    
[ −1   1  −1 ] [ 0  1  a ]   [ −2   1 − b   −a + 2 ]
[  8  −6   2 ] [ 1  2  3 ] = [  *      *       *   ]          (2)
[ −5   3  −1 ] [ 3  b  1 ]   [  *      *       *   ]
we need not go any further. The two non-diagonal entries in the first
row of the matrix on the R.H.S. must vanish. This gives a = 2 and
b = 1. Hence (A) is true. (We also get ∆ = −2 as a bonus, which we
may need later.)
Now that we know the matrix M completely, viz.
 
        [ 0  1  2 ]
    M = [ 1  2  3 ]                                            (3)
        [ 3  1  1 ]
we can answer the remaining options by brute force if necessary. But
brute force is rarely the best method. For example, take (D). Since M
is invertible (as ∆ ≠ 0), we can find α, β, γ by writing

(α, β, γ)ᵀ = M⁻¹ (1, 2, 3)ᵀ                                    (4)
To find M −1 we use (1). As already noted, ∆ = −2. Also adj M is
given to us. Hence
 
                               [ −1   1  −1 ]
M⁻¹  =  −(1/2) adj M  = −(1/2) [  8  −6   2 ]
                               [ −5   3  −1 ]

         [ 1/2   −1/2    1/2 ]
      =  [ −4      3     −1  ]                                 (5)
         [ 5/2   −3/2    1/2 ]

Putting this into (4), we get

[ α ]   [ 1/2   −1/2    1/2 ] [ 1 ]   [  1 ]
[ β ] = [ −4      3     −1  ] [ 2 ] = [ −1 ]                   (6)
[ γ ]   [ 5/2   −3/2    1/2 ] [ 3 ]   [  1 ]

which gives α = 1, β = −1 and γ = 1. Hence α − β + γ = 3. Therefore


(D) is correct.
For (B), we already know Madj M = −2I3 . So, adj M = −2M −1 .
Taking inverses of both the sides, we get
(adj M)⁻¹ = −(1/2) M                                           (7)
As for the second term, we apply (1) with M replaced by M −1 . Since
det M = −2, det M⁻¹ = −1/2. Therefore

adj M⁻¹ = −(1/2)(M⁻¹)⁻¹ = −(1/2) M                             (8)
Adding (7) and (8), we see that (B) is correct too.
Finally, for (C), we apply (1) again with M replaced by M 2 . Since
det M = −2, det (M 2 ) = (−2)2 = 4. Therefore

adj (M 2 )M 2 = 4I3 (9)

We take determinants of both the sides. We already know det (M 2 ) =


4. The determinant of 4I3 is 43 = 64. Therefore det (adj M 2 ) equals
16 and not 81. So, (C) is false.
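Once a = 2 and b = 1 are known, all four options can also be verified mechanically. A short sketch (hypothetical, assuming numpy is available, and using adj A = (det A) A⁻¹ for invertible A) is given below.

import numpy as np

M = np.array([[0., 1., 2.],
              [1., 2., 3.],
              [3., 1., 1.]])
adjM = np.array([[-1.,  1., -1.],
                 [ 8., -6.,  2.],
                 [-5.,  3., -1.]])

# (A): (adj M) M = (det M) I with det M = -2.
print(np.allclose(adjM @ M, np.linalg.det(M) * np.eye(3)))

# (B): (adj M)^{-1} + adj(M^{-1}) = -M, using adj(M^{-1}) = det(M^{-1}) * M.
print(np.allclose(np.linalg.inv(adjM) + np.linalg.det(np.linalg.inv(M)) * M, -M))

# (C): det(adj(M^2)) is 16, not 81.
print(np.linalg.det(np.linalg.det(M @ M) * np.linalg.inv(M @ M)))

# (D): solve M (alpha, beta, gamma)^T = (1, 2, 3)^T.
alpha, beta, gamma = np.linalg.solve(M, np.array([1., 2., 3.]))
print(alpha - beta + gamma)    # 3.0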
A good question based on adjoints and the multiplicative property
of determinants. The numbers have been chosen carefully so that the
calculations are manageable once a candidate avoids the temptation to
find adj M from M and instead uses (1). Options (A), (B) and (C)
are closely related to each other. (D) only imposes additional work of
a different type. In view of the time constraint, it would have been a
better idea to drop (D) altogether. But probably the paper setters are
constrained by the format requirement too.

Q.7 Dropped. (Conditional probability.)

Q.8 Dropped. (Solving a triangle.)

Q.9 Define the collections {E1 , E2 , E3 , . . . , } of ellipses and {R1 , R2 , R3 , . . .}


of rectangles as follows:
E1 : x²/9 + y²/4 = 1 ;
R1 : rectangle of largest area with sides parallel to the axes, inscribed in E1 ;
En : ellipse x²/an² + y²/bn² = 1 of largest area inscribed in Rn−1 , n > 1 ;
Rn : rectangle of largest area, with sides parallel to the axes, inscribed in En , n > 1.
Then which of the following is/are correct?

(A) The eccentricities of E18 and E19 are NOT equal
(B) Σ_{n=1}^{N} (area of Rn) < 24, for each positive integer N
(C) The length of the latus rectum of E9 is 1/6
(D) The distance of a focus from the centre in E9 is √5/32

Answer and Comments: (B, C). Although the question deals with
an infinite sequence of ellipses, defined recursively, they all turn out to
be similar to each other as we shall see. Given an ellipse

E : x²/a² + y²/b² = 1                                          (1)
let us first identify the rectangle, say R, of maximum area inscribed in E
and sides parallel to the axes of E. If (a cos θ, b sin θ) is its vertex in the
first quadrant, the other vertices are obtained by taking reflections into
the axes. So the sides of this rectangle are 2a cos θ and 2b sin θ. Hence
its area is 4ab sin θ cos θ = 2ab sin 2θ which is maximum when θ = π/4.
So, the rectangle R of maximum area has its vertices at (±a/√2, ±b/√2).

[Figure: the ellipse E1 with vertices (±3, 0) and (0, ±2), the inscribed rectangle R1, and the ellipse E2 inscribed in R1.]

Next, we have to find an ellipse E ′ with sides parallel to those of R and


having the largest area. (Given any rectangle, there is only one ellipse
with axes parallel to the sides of the rectangle that can be inscribed
in it. So, the stipulation of largest area is redundant. Probably it is
intended to help candidates who might misinterpret ‘inscribed’ to mean
merely ‘contained in’.) Clearly, the major and minor axes of E ′ must
be the length and the width of R, i.e. 2a/√2 and 2b/√2 respectively. So,

E′ : x²/(a²/2) + y²/(b²/2) = 1                                 (2)
This construction of interlacing ellipses and rectangles then goes on.
The figure above shows the construction of R1 from E1 and of E2 from
R1 .
The crucial observation is that the lengths of the semi-major and semi-
minor axes each form a geometric progression with common ratio 1/√2.
We have a1 = 3, b1 = 2. Hence for the n-th ellipse En , we have

an = 3/(√2)ⁿ⁻¹                                                 (3)
and bn = 2/(√2)ⁿ⁻¹                                             (4)

Note that the ratio of the major to the minor axis of each ellipse is the
same, viz. 3/2. Since the eccentricity depends only on this ratio, (A) is
immediately ruled out. Further this common eccentricity e is given by

e = √(1 − (2/3)²) = √(5/9) = √5/3                              (5)

From (3) and (5), the foci of En are at the points (±√5/(√2)ⁿ⁻¹, 0). Each
is at a distance √5/(√2)ⁿ⁻¹ from the centre (which is the origin for all n).
For n = 9, this equals √5/16, which shows that (D) is false. For (C), the
length of the latus rectum of En is 2bn√(1 − e²) = 4bn/3 because of (5).
By (4), this equals 8/(3 × 16) = 1/6 for n = 9. Hence (C) is true.
Finally, the sides of Rn are 2an+1 and 2bn+1 . So its area is
4an+1 bn+1 = 24/2ⁿ by (3) and (4). So the series in (B) is the sum of
a geometric progression of N terms with the first term 12 and common
ratio 1/2. Its value is 24(1 − (1/2)ᴺ) which is less than 24 for every N.
So (B) is true. (Alternately, consider the sum of the infinite geometric series
Σ_{n=1}^{∞} (area of Rn). It is simply 24. Since all terms are positive, any partial
sum has to be less than 24.)
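These computations are easy to reproduce by machine. Here is a hypothetical check (standard library only) that generates the semi-axes an, bn and tests the quantities appearing in the four options.

import math

a = {1: 3.0}
b = {1: 2.0}
for n in range(2, 20):
    a[n] = a[n - 1] / math.sqrt(2)
    b[n] = b[n - 1] / math.sqrt(2)

def ecc(n):
    # eccentricity of E_n
    return math.sqrt(1 - (b[n] / a[n]) ** 2)

print(ecc(18), ecc(19))                                    # equal, so (A) is false
print(sum(4 * a[n + 1] * b[n + 1] for n in range(1, 19)))  # just below 24, option (B)
print(2 * b[9] ** 2 / a[9])                                # 0.1666..., i.e. 1/6, option (C)
print(a[9] * ecc(9))                                       # sqrt(5)/16 ~ 0.1398, not sqrt(5)/32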
The construction of the sequence of ellipses and rectangles inscribed
in each other is a novel feature of the problem. (In the past there have
been questions about similar sequences of circles and squares.) After
that it reduces to the computation of certain standard things associated
with an ellipse. A little bit of trigonometric optimisation and sum of a
geometric progression are added for flavour.
Q.10 Let f : IR −→ IR be given by

    f (x) = x⁵ + 5x⁴ + 10x³ + 10x² + 3x + 1,     x < 0;
            x² − x + 1,                          0 ≤ x ≤ 1;
            (2/3)x³ − 4x² + 7x − 8/3,            1 ≤ x ≤ 3;
            (x − 2) loge (x − 2) − x + 10/3,     x ≥ 3.

Then which of the following options is/are correct?

(A) f is increasing on (−∞, 0)


(B) f ′ has a local maximum at x = 1
(C) f is onto

(D) f ′ is NOT differentiable at x = 1

Answer and Comments: (B, C, D).


It is easy to check (A) by looking at the sign of f ′ (x) for x ∈
(−∞, 0). Note that f ′ (x) = 5x4 + 20x3 + 30x2 + 20x + 3 for x < 0. So
f ′ (−1) = 5 − 20 + 30 − 20 + 3 = −2 < 0. Hence f cannot be increasing
on the entire interval (−∞, 0). So (A) is false.
For (B) and (D), we first need to check if f ′ (1) exists. This follows
because for 0 < x < 1, f ′ (x) = 2x − 1 which tends to 1 as x → 1− . By
the Lagrange MVT, the left handed derivative of f at x = 1 exists and
equals 1. The same is true for the right handed derivative at x = 1
since f ′ (x) = 2x2 − 8x + 7 → 1 as x → 1+ . So f ′ exists and equals
1 at x = 1. Note also that f ′′ (x) = 2 > 0 for 0 < x < 1. So f ′ is
increasing on (0, 1) by LMVT. But f ′′ (x) = 4x − 8 < 0 for 1 < x < 2.
So f ′ changes its behaviour from increasing to decreasing at x = 1.
Therefore f ′ has a local maximum at x = 1. That is, (B) is true.
However, as for differentiability of f ′ at x = 1, f ′′ (x) = 2 → 2 as
x → 1− while f ′′ (x) = 4x − 8 → −4 as x → 1+ . Hence f ′′ (1) does not
exist. So (D) is also true.
Finally, for (C), since f (0) = 1 but f (x) → −∞ as x → −∞, by
the Intermediate Value Property of continuous functions, f ((−∞, 0])
includes (−∞, 1]. On the other hand, for x > 3, f ′ (x) = 1 + log(x −
2) − 1 > 0. So f is strictly increasing on (3, ∞) and tends to ∞ as
x → ∞. Also f (3) = 1/3. Hence by the IVP again, all points in [1/3, ∞)
are in the range of f . Since (−∞, 1] ∪ [1/3, ∞) = IR, f is onto.
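A rough numerical cross-check (hypothetical, standard library only) of (B), (C) and (D): sample a central-difference approximation of f ′ near x = 1 and evaluate f at a few widely spaced points.

import math

def f(x):
    if x < 0:
        return x**5 + 5*x**4 + 10*x**3 + 10*x**2 + 3*x + 1
    if x <= 1:
        return x**2 - x + 1
    if x <= 3:
        return (2/3)*x**3 - 4*x**2 + 7*x - 8/3
    return (x - 2) * math.log(x - 2) - x + 10/3

h = 1e-6
def fprime(x):
    return (f(x + h) - f(x - h)) / (2 * h)

print(fprime(0.999), fprime(1.0), fprime(1.001))   # about 1 at x = 1, smaller on either side
print(f(-10), f(0), f(3), f(100))                  # large negative and large positive values, consistent with f being onto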

Q.11 Let Γ denote the curve y = y(x) which is in the first quadrant and let
the point (1, 0) lie on it. Let the tangent to Γ at a point P intersect
the y-axis at Yp . If P YP has length 1 for each point P on Γ, then which
of the following options is/are correct?
(A) y = loge((1 + √(1 − x²))/x) − √(1 − x²)
(B) x y′ + √(1 − x²) = 0
(C) y = − loge((1 + √(1 − x²))/x) + √(1 − x²)
(D) x y′ − √(1 − x²) = 0

Answer and Comments: (A,B). There are three tasks here, first, to
write down a differential equation from the given geometric property
of the curve Γ, second, to find its general solution and the third, to
identify a particular solution using the given initial condition, viz., the
point (1, 0) lies on it, i.e. y(1) = 0.
The equation of the tangent to Γ at a point P = (x0 , y0 ) on it is

y − y0 = y ′(x − x0 ) (1)

So the point YP is (0, y0 − y′x0 ). The condition that P YP has length 1 translates as

x0² + y′² x0² = 1                                              (2)

for all (x0 , y0 ). Hence the differential equation satisfied by Γ is

x y′ = ± √(1 − x²)                                             (3)

Because of continuity, the same sign must hold throughout. To find


out which sign it is, note first that Γ has no points for which x > 1.
The tangent at (1, 0) is given to cut the y-axis at a point which is at a
distance 1 from (1, 0). So, this point must be the origin (0, 0). Hence
the tangent to Γ at (1, 0) is the x-axis. But this cannot be the curve Γ,
as otherwise, the x-axis will also be the tangent to Γ at any P = (x1 , 0)
with 0 < x1 < 1 and then YP = (0, 0) will be at a distance less than 1
from P .
We conclude that there is some x1 ∈ (0, 1) at which y(x1 ) 6= 0 and
hence y(x1 ) > 0 as the curve is in the positive quadrant. We now apply
the Lagrange MVT to the interval [x1 , 1] to get some c1 ∈ (x1 , 1) such
that
y′(c1 ) = (0 − y(x1 ))/(1 − x1 )                               (4)
The R.H.S. is negative. So there is at least one point on Γ where the
negative sign holds for the R.H.S. of (3). But then because of continuity,
the negative sign holds at all points of Γ and so we get

x y′ = − √(1 − x²)                                             (5)

at all points of Γ. Hence (B) is true and (D) is false.

We can now rewrite the differential equation as

y′ = − √(1 − x²)/x                                             (6)

and solve it by integrating both the sides, putting x = sin θ.

y = − ∫ (√(1 − x²)/x) dx
  = − ∫ (cos²θ/sin θ) dθ
  = − ∫ ((1 − sin²θ)/sin θ) dθ
  = − ∫ (cosec θ − sin θ) dθ
  = −(− loge (cot θ + cosec θ) + cos θ) + c
  = −(− loge ((1 + cos θ)/sin θ) + cos θ) + c
  = loge ((1 + √(1 − x²))/x) − √(1 − x²) + c                   (7)

We are given that the point (1, 0) lies on Γ. Hence c = 0. That makes
(A) true and (C) false.
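As a sanity check, one can verify numerically (a hypothetical sketch, standard library only) that the curve in (A) satisfies the equation of (B) and passes through (1, 0).

import math

def y(x):
    return math.log((1 + math.sqrt(1 - x*x)) / x) - math.sqrt(1 - x*x)

h = 1e-6
for x in (0.2, 0.5, 0.9):
    yprime = (y(x + h) - y(x - h)) / (2 * h)
    print(x * yprime + math.sqrt(1 - x*x))   # all very close to 0, so (B) holds
print(y(1.0))                                 # 0.0, so the curve passes through (1, 0)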
Forming and solving the differential equation was an easy task in this
problem. The interesting part of the problem is the sign determination.

Q.12 Let L1 and L2 denote the lines

~r = î + λ(−î + 2ĵ + 2k̂), λ ∈ IR and


~r = µ(2î − ĵ + 2k̂), µ ∈ IR

respectively. If L3 is a line which is perpendicular to both L1 and L2


and cuts both of them, then which of the following options describe(s)
L3 ?

(A) ~r = (2/9)(4î + ĵ + k̂) + t(2î + 2ĵ − k̂), t ∈ IR
(B) ~r = (2/9)(2î − ĵ + 2k̂) + t(2î + 2ĵ − k̂), t ∈ IR
(C) ~r = (1/3)(2î + k̂) + t(2î + 2ĵ − k̂), t ∈ IR
(D) ~r = t(2î + 2ĵ − k̂), t ∈ IR

Answer and Comments: (A, B, C). A straight line L is uniquely


specified by any point ~r0 on it and a vector ~u parallel to it. (Such a
vector is sometimes called a guiding vector as it specifies the direction
of the line. The point ~r0 is called an initial point.) Then L consists of
all points whose position vectors are of the form ~r = ~r0 + t(~u), t ∈ IR.
The vector ~u is unique except for (non-zero) scalar multiples. The point
~r0 is not unique. It can be replaced by a point ~r1 , if and only if ~r1 − ~r0
is a scalar multiple of ~u.
In the present problem, as L3 is perpendicular to both L1 and L2
the vector ~u is any (non-zero) vector parallel to the cross product


| î    ĵ   k̂ |
| −1   2   2 |  =  6î + 6ĵ − 3k̂                               (1)
|  2  −1   2 |

So, we may take ~u = 2î + 2ĵ − k̂. This vector appears as the guiding
vector in all the four given options. So none of them can be declared
wrong on this preliminary ground.
To investigate which options are correct, we need to know at
least one point on L3 . This can be done by observing that the lines
L1 and L2 are skew lines. Let (A, B) be the closest pair of points
on them with A on L1 and B on L2 . Then both lie on L3 and it
suffices to find either one of them. Let A = (1 − λ, 2λ, 2λ) for some
λ ∈ IR and B = (2µ, −µ, 2µ) for some µ ∈ IR. Then the vector AB is
(2µ + λ − 1)î − (µ + 2λ)ĵ + (2µ − 2λ)k̂ and its perpendicularity with
both L1 and L2 gives a system of two equations in λ and µ, viz.

−2µ − λ + 1 − 2µ − 4λ + 4µ − 4λ = 0 (2)
and 4µ + 2λ − 2 + µ + 2λ + 4µ − 4λ = 0 (3)

This system can be solved for λ and µ. The value of λ will give A
while that of µ will give B. Luckily, in the present problem the first
equation reduces to 9λ = 1 and the second one to 9µ = 2. This gives

λ = 1/9 and hence A = (8/9, 2/9, 2/9). Its position vector is (2/9)(4î + ĵ + k̂). So,
immediately (A) is correct.
To check the remaining options, call (2/9)(4î + ĵ + k̂) as ~r0 . In each
option a vector ~r1 is given and we have to check if the difference ~r1 − ~r0
is some multiple of the guiding vector ~u = 2î + 2ĵ − k̂ of L3 .
In (B), ~r1 = (2/9)(2î − ĵ + 2k̂) and so ~r1 − ~r0 = (2/9)(−2î − 2ĵ + k̂) which is a
scalar multiple of ~u. Hence (B) is a correct option.
In (C), ~r1 = (1/3)(2î + k̂) and so ~r1 − ~r0 = −(2/9)î − (2/9)ĵ + (1/9)k̂, which is also a
multiple of ~u. Hence (C) is also a correct option.
Finally, in (D), ~r1 = ~0 and so ~r1 − ~r0 = −~r0 which is not a multiple
of ~u. So (D) is incorrect.
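The whole computation is easy to automate. A hypothetical numpy sketch: find the feet A and B of the common perpendicular from the perpendicularity conditions (2) and (3), together with its direction ~u.

import numpy as np

p1, d1 = np.array([1., 0., 0.]), np.array([-1., 2., 2.])   # L1: p1 + lambda * d1
p2, d2 = np.array([0., 0., 0.]), np.array([2., -1., 2.])   # L2: p2 + mu * d2
u = np.cross(d1, d2)                                        # direction of L3

# AB = (p2 - p1) + mu*d2 - lambda*d1 must be perpendicular to d1 and d2.
M = np.array([[d1 @ d1, -(d1 @ d2)],
              [d1 @ d2, -(d2 @ d2)]])
rhs = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
lam, mu = np.linalg.solve(M, rhs)

A, B = p1 + lam * d1, p2 + mu * d2
print(u, A, B)   # u = (6, 6, -3); A = (8/9, 2/9, 2/9); B = (4/9, -2/9, 4/9)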
Basically, a simple problem, which ends when you find some vec-
tor representation of L3 . But after getting it, testing each of the four
options is a highly repetitious task and prone to numerical errors. In-
stead, the problem could have been made a little more challenging by
giving one of the two lines L1 and L2 as the intersection of some pair
of planes, but avoiding the repetitious work by posing it as a numerical
problem (e.g. asking the perpendicular distance of some point from
L3 ). It is difficult to understand what quality of a candidate, except
possibly perseverance, is tested by giving four possible options each
requiring the same kind of work.

SECTION - 3
This section contains SIX questions each of which is to be answered with
a number (rounded to two decimal places if necessary). The score for each
correct answer is 3. In all other cases it is 0.
Q.13 Let ω 6= 1 be a cube root of unity. Then the minimum of the set
{|a + bω + cω 2 |2 : a, b, c are distinct non-zero integers}
equals .... .

Answer and Comments: 3.0. Note that ω and ω 2 are complex


conjugates of each other. Also, ω 3 = 1. So,
|a + bω + cω²|² = (a + bω + cω²)(a + bω̄ + cω̄²)
                = (a + bω + cω²)(a + bω² + cω)
= a2 + b2 + c2 + (ab + bc + ca)(ω 2 + ω) (1)
= a2 + b2 + c2 − ab − bc − ca (2)
= (1/2)[(a − b)² + (b − c)² + (c − a)²]                        (3)
where in going from (1) to (2) we have used that ω 2 + ω + 1 = 0 which
follows from the factorisation 0 = (ω 3 −1) = (ω −1)(ω 2 +ω +1) because
the factor ω − 1 is non-zero.
As a, b, c are distinct non-zero integers, the bracketed expression in (3)
will be minimum when two of the differences a − b, b − c and c − a
are 1 and the third one is 2. So the minimum value of the bracketed
expression is 1 + 1 + 4 = 6. Hence the minimum of the given expression
is 3.
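The minimum is also easy to confirm by brute force over small values. A hypothetical check with the standard library:

from itertools import permutations
import cmath

w = cmath.exp(2j * cmath.pi / 3)            # a primitive cube root of unity
values = [abs(a + b*w + c*w*w) ** 2
          for a, b, c in permutations(range(-4, 5), 3)
          if 0 not in (a, b, c)]
print(round(min(values), 6))                # 3.0, attained e.g. at (a, b, c) = (1, 2, 3)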
A very simple problem based on the standard properties of ω. The
minimisation of the bracketed expression in (3) subject to the given
constraints on a, b, c is more a matter of common sense. As pointed out
by Vinayak Antarkar, the question is essentially a duplication of Q.3 of
JEE 2005 Screening Paper, the only change being that the restriction
on a, b, c there was that they are not all equal. See the commentary of
that year for a geometric solution too.

Q.14 Let AP (a; d) denote the set of all terms of an infinite arithmetic pro-
gression with first term a and common difference d > 0. If

AP (1; 3) ∩ AP (2; 5) ∩ AP (3; 7) = AP (a; d)

then a + d equals

Answer and Comments: 157.00. Apart from the mathematical


contents, the thing that strikes immediately about this problem is the
brevity of its statement. The statements of many JEE problems are
long winded (sometimes unnecessarily and sometimes to help candi-
dates’ understanding, as in the last problem where the word a ‘direct’
tangent was replaced by its description). As a result, they often run
to full paragraphs. A candidate often has to struggle just to find out
what is asked. The present problem is a pleasant exception. It also

shows how the language of set theory can be used to combine concision
and precision.
Now coming to the problem, let us denote the three given sets by
S1 , S2 and S3 . When written in expanded form

S1 = AP (1; 3) = {1, 4, 7, 10, 13, 16, 19, 22, . . . , 1 + 3k, . . .} (1)


S2 = AP (2; 5) = {2, 7, 12, 17, 22, 27, . . . , 2 + 5k, . . .} (2)
S3 = AP (3; 7) = {3, 10, 17, 24, 31, 38, . . . , 3 + 7k, . . .} (3)

Let S = S1 ∩ S2 ∩ S3 . We are given that S is also an A.P. and asked


to identify its first term and common difference. Instead of doing so
right away, let us introduce T = S1 ∩ S2 first. If we can identify T and
show that it is an A.P., the procedure will then help in identifying S
because S is the intersection of two A.P.’s T and S3 .
It is obvious from the elements listed above that 7 is the smallest
element of T . After that comes 22. This could have been predicted
because an integer x is in S1 ∩ S2 if x − 1 is divisible by 3 and x − 2 is
divisible by 5. The smallest x for which this happens is x = 7. If y
is another element of T , then y = 1 + 3r = 2 + 5s for some positive
integers r, s. So, y − 7 = 3r − 6 is divisible by 3. But y − 7 = 5s − 5
shows that y − 7 is also divisible by 5. Since 3 and 5 are relatively prime,
y − 7 is divisible by their product, i.e. by 15. So, y has to be of the
form 7 + 15k for some positive integer k. Conversely any such y can
be written as 1 + 3(5k + 2) and also as 2 + (3k + 1)5 and is, therefore,
both in S1 and S2 .
Summing up, we have shown that T = S1 ∩ S2 is the set AP (7; 15).
In full form,

T = {7, 22, 37, 52, 67, . . . , 7 + 15k, . . .} (4)

We now go from T to T ∩ S3 . Except for the particular numbers, the


reasoning above goes through. Thus we first identify the least element
common to T and S3 . We can do so by scanning either the elements of
T or of S3 one-by-one. We prefer T because it has a larger common
difference. We want an element x ∈ T which can also be written as
3 + 7k for some positive integer k. Clearly 52 is the smallest such

element. The common difference of T (viz. 15) and that of S3 (which
is 7) are relatively prime. So by the same reasoning as above, we get

S = T ∩ S3 = {52, 157, 262, . . .} = AP (52; 105)              (5)

Hence a = 52, d = 105 and so a + d = 157.
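A one-line brute-force confirmation (hypothetical, standard library only) that the common terms indeed form AP(52; 105):

S1 = {1 + 3*k for k in range(1000)}
S2 = {2 + 5*k for k in range(1000)}
S3 = {3 + 7*k for k in range(1000)}
print(sorted(S1 & S2 & S3)[:4])   # [52, 157, 262, 367]: first term 52, common difference 105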


We can generalise the construction by letting S1 = AP (a1 ; d1 ) and
S2 = AP (a2 ; d2 ) where d1 , d2 are two positive integers that are relatively
prime to each other. If we are able to identify some integer a such that
a = a1 + rd1 = a2 + sd2 for some integers r, s, then S1 ∩ S2 will be an
A.P. containing a and common difference d1 d2 . (It may happen that a
is not the least positive element of S1 ∩ S2 . In fact, a may be negative
as we are allowing r, s to be negative. But in that case we can add or
subtract suitable multiples of d1 d2 from it.)
The crucial question is how to identify some common element
of S1 ∩ S2 . In the solution above, this was done essentially by trial.
But there is a systematic procedure for it. We assume here a result
that if d1 and d2 are two relatively prime integers, then some linear
combination λ1 d1 + λ2 d2 of them (with integer coefficients) equals 1.
For example, in our problem when d1 = 3, d2 = 5 we can write 1 as
2 × 3 + (−1) × 5. (The coefficients λ1 , λ2 are not unique, for we can
also write 1 as (−3) × 3 + 2 × 5. Note that in any such combination,
one coefficient is a positive integer and the other a negative integer.) In
the linear combination 2 × 3 + (−1) × 5, call the first term m1 and the
second m2 . That is, m1 = 6 and m2 = −5. Now let a = m1 a2 + m2 a1 =
6 × 2 + (−5) × 1 = 7 which we found earlier by trial. (Had we taken the
combination 1 = (−3) × 3 + 2 × 5, then m1 = −9 and m2 = 10. This
would give a = m1 a2 + m2 a1 as (−9) × 2 + 10 × 1 = −8. This is not
in S1 ∩ S2 . But if we add to it 15, which is a multiple of the common
difference 15, we do get 7.)
Similarly, in the second part of the solution, we had a1 = 7, d1 = 15
and a2 = 3, d2 = 7. Then 1 = (−2) × 7 + 1 × 15. This time m1 = −14
and m2 = 15. So, a = m1 a2 + m2 a1 = (−14) × 7 + 15 × 3 = −53.
Again, this is not a positive integer. But if we add to it a multiple of
d1 d2 which is 105 now, we get 52 as a common element of T and S3 .
In essence, we have proved the following theorem.

Theorem: Let d1 , d2 be positive integers which are relatively prime
to each other (that is, they have no common factor other than 1). Let
a1 , a2 be any positive integers. Then there exists some positive integer
a ∈ AP (a1 ; d1 ) ∩ AP (a2 ; d2 ).

Proof: We begin by writing 1 as

1 = λ1 d 1 + λ2 d 2 (6)

where λ1 , λ2 are some (not necessarily positive) integers. Let m1 = λ1 d1


and m2 = λ2 d2 . Let b = m1 a2 + m2 a1 . Then

b − a1 = λ1 d1 a2 + λ2 d2 a1 − a1
= λ1 d1 a2 + (λ2 d2 − 1)a1
= λ1 d1 (a2 − a1 )
= k1 d 1 (7)

where k1 = λ1 (a2 − a1 ). Note that k1 is an integer. So we have shown


that b is of the form a1 +k1 d1 for some (not necessarily positive) integer
k1 . Interchanging the roles of a1 and a2 , we can show that b is of the
form a2 + k2 d2 for some integer k2 . b may be negative. And even
when positive, it may not be in AP (a1 ; d1 ) ∩ AP (a2 ; d2 ) since it may
be smaller than a1 or a2 . But if we take a = b + kd1 d2 for a sufficiently
large positive integer k, then a ∈ AP (a1 ; d1) ∩ AP (a2 ; d2 ).

Note that the coprimality of d1 and d2 is essential. As an easy


counter-example, AP (1; 2) ∩ AP (2; 2) is the empty set.
The following extension to the case of more than two A.P.’s is
popularly called the Chinese Remainder Theorem.

Corollary: Suppose d1 , d2 , . . . , dn are pairwise coprime positive inte-


gers and a1 , a2 , . . . , an are any integers. Then there exists some integer
a in AP (a1 ; d1 ) ∩ AP (a2 ; d2 ) ∩ . . . ∩ AP (an ; dn ).

Proof: This follows from the theorem above and induction on n. The
inductive step follows from the fact that under the hypothesis, the two
integers d1 d2 . . . dn−1 and dn are relatively prime to each other.

Actually, the proof of the Chinese Remainder Theorem is incomplete
until we prove (6). But this gap can be filled.
More importantly, given two integers x and y whose greatest common
divisor is d, there is a systematic procedure to find integers a and b such
that
ax + by = d (8)
The g.c.d. of two integers x and y can be read out if we know their prime
power factorisations. But this is a very difficult and time consuming
task. (There is, in fact, no known efficient way to factorise a given
integer into even two factors. All the secret passwords are designed
on the assumption that nobody knows such an efficient method. If
this assumption fails, there will be a chaos as the passwords would be
cracked very easily!) In any case, even if we are able to find d this way,
it still remains a problem how to express in the form (8).
But in schools we learn that there is a simpler way to find the
g.c.d. of two integers based on repeated applications of what is called
Euclid’s division algorithm. We begin by dividing the larger of
the two numbers by the smaller one to get some remainder (which is
necessarily smaller than the smaller of the two given numbers). We
then divide the smaller number by this remainder to get a still smaller
remainder. We repeat this process till we get 0 as the remainder, which
is bound to happen in a finite number of steps since we are getting
a strictly decreasing sequence of positive integers. (That is why the
procedure is called an algorithm. Guaranteed termination in a finite
number of steps is a sine qua non of any algorithm.) The last divisor
is the g.c.d. of the original two numbers.
Familiar as it is, we illustrate this procedure by an example. Suppose
we want to find the g.c.d. of 1092 and 195. We proceed as follows.
1092 = 5 × 195 + 117
195 = 1 × 117 + 78
117 = 1 × 78 + 39
78 = 2 × 39 + 0 (9)
So the g.c.d. of 1092 and 195 is 39. What is generally not done in
schools is to use the equations above in a backward manner (starting

with the last but one equation) so as to express the g.c.d. as a linear
combination of the original numbers. In the illustration above, this can
be done as follows.

39 = 117 − 1 × 78
= 117 − 1 × (195 − 1 × 117) = 2 × 117 − 1 × 195
= 2 × (1092 − 5 × 195) − 1 × 195
= 2 × 1092 + (−11) × 195 (10)

So we have expressed the g.c.d. of 1092 and 195 as a linear combination


of 1092 and 195 with integer coefficients.
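The back-substitution can be mechanised. Here is a minimal sketch of the extended Euclidean algorithm just described (the function name is of course not from the original text), run on the same pair.

def extended_gcd(x, y):
    # returns (g, a, b) with a*x + b*y = g = gcd(x, y)
    if y == 0:
        return x, 1, 0
    g, a, b = extended_gcd(y, x % y)
    # g = a*y + b*(x - (x // y)*y); regroup the coefficients of x and y.
    return g, b, a - (x // y) * b

print(extended_gcd(1092, 195))   # (39, 2, -11), i.e. 39 = 2*1092 + (-11)*195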
The proof of the Chinese Remainder Theorem is now complete.
The theorem is usually stated in the language of residue classes (see
Exercise (4.13)(b)). We have avoided it and instead used the elemen-
tary formulation in terms of arithmetic progressions, as in the present
problem. Of course, the present problem is meant to be done by trial.
But it is instructive to know the underlying theorem just as Cayley
Hamilton theorem was the underlying theorem of Q.2 in Section 1.
The Chinese Remainder Theorem is an ancient theorem, popular
because it can be used to predict the simultaneous occurrence of some
periodically recurring events. The events may be pertaining to motions
of celestial bodies or to some mundane affairs. (See Exercise (4.14) for
a sample.)
Although problems on arithmetic progressions are asked almost every
year, the present one is a novel one and more a problem in number
theory.

Q.15 Let S be the sample space of all 3 × 3 matrices with entries from the
set {0, 1}. Let the events E1 and E2 be given by

E1 = {A ∈ S : det A = 0} and
E2 = {A ∈ S : sum of entries of A is 7}

If a matrix is chosen at random from S, then the conditional probability


P (E1 |E2 ) equals .... .

Answer and Comments: 0.50. By Bayes theorem

P (E1 |E2 ) = P (E1 ∩ E2 ) / P (E2 )                           (1)

The sample space S has 2⁹ elements in all and they are equally likely.
Therefore, the probabilities of events are proportional to the numbers
of elements in the respective subsets. Hence

P (E1 |E2 ) = |E1 ∩ E2 | / |E2 |                               (2)

It is easier to find E2 . As each of the 9 entries of a matrix A ∈ S has


only 0 and 1 as possible values, their sum will equal 7 if and only if
exactly 7 of the entries are 1 and the remaining 2 are 0. This gives
|E2 | = (9 choose 2) = (9 · 8)/2 = 36                          (3)

For a matrix A ∈ E2 , the determinant will vanish if the two zero entries
are in the same row or the same column because then two columns
(or rows) of A would be equal. In all other cases, by a permutation
of rows and columns the matrix A can be converted to the matrix

        [ 0  1  1 ]
    B = [ 1  0  1 ]
        [ 1  1  1 ]

But then det (B) = 1 and so det (A) = ±1 ≠ 0. So
the only matrices in E1 are those where the two zero entries are in the
same row or in the same column. These two possibilities are mutually
exclusive. In each possibility the exceptional row or column (the one
containing both the zeros) can be chosen in 3 ways, and for each such
choice the two zeros can be put into the three possible places in 3 ways.
Putting it all together,

|E1 ∩ E2 | = 3 × 3 + 3 × 3 = 18                                (4)

So P (E1 |E2 ) = 18/36 = 0.5.
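Since the sample space is tiny, an exhaustive check is also possible. A hypothetical sketch (assuming numpy is available) that enumerates the 36 matrices of E2 and counts the singular ones:

from itertools import combinations
import numpy as np

singular = total = 0
for zeros in combinations(range(9), 2):     # positions of the two 0 entries
    A = np.ones((3, 3))
    for pos in zeros:
        A[pos // 3, pos % 3] = 0
    total += 1
    if abs(np.linalg.det(A)) < 1e-9:
        singular += 1
print(singular, total, singular / total)    # 18 36 0.5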
The key idea is to note that for the determinant to vanish the two
zero entries must be in the same row or in the same column. Once this
strikes, the combinatorics involved is very elementary.

Q.16 Let the point B be the reflection of the point A(2, 3) with respect to
the line 8x − 6y − 23 = 0. Let ΓA and ΓB be circles of radii 2 and 1
with centres A and B respectively. Let T be a common tangent to the
circles ΓA and ΓB such that both the circles are on the same side of T .
If C is the point of intersection of T and the line passing through A
and B, then the length of the line segment AC is —-.

Answer and Comments: 10.00. This is in essence a problem of pure


geometry. The paper-setters have avoided the archaic term a ‘direct’
tangent to the two circles, by specifying what it means. (When the
two circles lie on the opposite side of a common tangent, it is called an
inverse tangent.)

[Figure: the circles ΓA (radius 2, centre A = (2, 3)) and ΓB (radius 1, centre B), the common tangent T touching them at D and E respectively, and the point C where T meets the line AB.]

Drop perpendiculars AD, BE on T . Then the right angled triangles


ADC and BEC are similar. Therefore AC/BC = AD/BE = 2/1 = 2. Hence
AC = 2BC. So, AB = BC. Thus AC = 2AB.
The problem is now reduced to calculating the distance AB. The
point A is given as (2, 3) and B is given as the reflection of A w.r.t.
the line, say L : 8x − 6y − 23 = 0. A natural tendency would be to
identify B. But that is hardly necessary. Our interest is not so much
in the point B per se as its distance from A. Clearly this distance is
twice the perpendicular distance of A from the line L. It comes out as

|16 − 18 − 23| / √(8² + 6²) = |−25| / 10 = 5/2. Therefore the distance AB equals 5 and
hence AC equals 10.
A good, simple problem of pure geometry, which is artificially
converted to a problem of coordinate geometry along with a trap laid
similar to that in Q.1. There the answer came without identifying the
complex number z0 . In the present problem it comes without iden-
tifying B. Those who realise this stand to win in terms of the time
saved.

Q.17 Dropped. (Evaluation of a definite integral.)

Q.18 Dropped. (Finding the volume of a tetrahedron.)

PAPER 2

Contents

Section - 1 (One or More Correct Options Type)

Section - 2 (Numerical Answer Type)

Section - 3 (Matching Pairs Type)

SECTION - 1
This section contains FOUR questions each of which has four options out
of which ONE OR MORE is/are correct. There are 4 marks if only all the
correct option(s) is/are chosen, 0 marks if no option is chosen and −1 marks
if even one incorrect option is chosen, regardless of others. There are partial
scores 1, 2 or 3 equal to the number of correct options chosen, provided no
incorrect option is chosen.
Q.1 Let
     
         [ 1 0 0 ]         [ 1 0 0 ]         [ 0 1 0 ]
P1 = I = [ 0 1 0 ] ,  P2 = [ 0 0 1 ] ,  P3 = [ 1 0 0 ] ,
         [ 0 0 1 ]         [ 0 1 0 ]         [ 0 0 1 ]

     [ 0 1 0 ]         [ 0 0 1 ]         [ 0 0 1 ]
P4 = [ 0 0 1 ] ,  P5 = [ 1 0 0 ] ,  P6 = [ 0 1 0 ]
     [ 1 0 0 ]         [ 0 1 0 ]         [ 1 0 0 ]

              6      [ 2 1 3 ]
and    X  =   Σ   Pk [ 1 0 2 ] PkT
             k=1     [ 3 2 1 ]
where PkT denotes the transpose of the matrix Pk . Then which of the
following options is/are correct?
   
(A) If X (1, 1, 1)ᵀ = α (1, 1, 1)ᵀ, then α = 30

(B) X is a symmetric matrix
(C) The sum of diagonal entries of X is 18
(D) X − 30I is an invertible matrix

Answer and Comments: (A, B, C). Matrices have been added rel-
atively recently into the JEE syllabus, although determinants have
been there for ages. As a result, there is a clear understanding as
to which concepts and properties pertaining to determinants the can-
didates should be familiar with. Things are yet to settle down for matrices.
This is evident from the phrase ‘sum of the diagonal entries’ in option
(C), where a single word ‘trace’ would do. (We already used trace in
the solution to Q.2 of Paper 1.) Probably, this is done to help those
candidates who might not have heard that term before. But sometimes
this has the opposite effect. That is, those who know this word, are
puzzled why it is not used and start wondering if there is some hidden
trap in not using it. It is better if the syllabus of the JEE clearly spells
this out.
Now coming to the question itself, it will be horrendous to write X
out fully. It is a sum of six matrices, X1 , X2 , . . . , X6 where Xk = Pk APkT
where we let
 
        [ 2  1  3 ]
    A = [ 1  0  2 ]                                            (1)
        [ 3  2  1 ]

Even calculating each Xk is a tedious job. Fortunately, that is not


needed. The key idea is to observe that each Pk corresponds to some
permutation of the set {1, 2, 3}, because the rows of Pk are obtained
by a permutation of the rows of I and a similar statement holds for the
columns. Thus, if we denote the rows of I by the row vectors ~r1 , ~r2 and
~r3 respectively, then the rows of P5 are ~r3 , ~r1 , ~r2 . So, P5 corresponds
to the permutation (312) of the symbols 1, 2, 3 (i.e. the bijection θ :
{1, 2, 3} −→ {1, 2, 3} defined by θ(1) = 3, θ(2) = 1, θ(3) = 2). If we let
~c1 , ~c2 and ~c3 , be the column vectors of I, then the column vectors of P5
are ~c2 , ~c3 , ~c1 . So P5 corresponds to the permutation (231). Note that
this is precisely the inverse permutation θ−1 of θ.

Not surprisingly, the matrices P1 to P6 are called permutation
matrices. Note that P4 and P5 are transposes of each other. The
permutation of rows induced by P4 is (231) which is the inverse of
the permutation of the rows induced by P5 . The same holds for the
permutation of the columns induced by P4 , viz. (312) which is the
inverse of the permutation (231) of columns induced by P5 .
This suggests that for every permutation matrix P , P T is the same
as P −1 . This is true. In fact, P does not have to be a permutation
matrix. It can be any matrix whose rows are mutually orthogonal unit
vectors. For, if the rows of P are, say, the row vectors ~u, ~v, ~w, then
by a direct calculation, the matrix P P T equals

[ ~u · ~u   ~u · ~v   ~u · ~w ]
[ ~v · ~u   ~v · ~v   ~v · ~w ]
[ ~w · ~u   ~w · ~v   ~w · ~w ]

which is simply the identity matrix if ~u, ~v, ~w are mutually orthogonal
~ are mutually orthogonal
unit vectors. Although matrix multiplication is not commutative in
general, P P T = I implies P T P = I. (One proof is using the adjoint of
P .) Hence P T = P −1 .
Returning to the question, when a column vector is premultiplied
by a permutation matrix P , the result is a permutation of the column
vector. For example, we see by a direct computation that P5 (a, b, c)ᵀ
equals (c, a, b)ᵀ. We apply this to test (A). As all entries of the column
vector (1, 1, 1)ᵀ are equal, no matter which permutation matrix PkT we
apply to it, the resulting column vector is the same, viz. (1, 1, 1)ᵀ. So, we
have
      
X (1, 1, 1)ᵀ = Σ_{k=1}^{6} Pk A (1, 1, 1)ᵀ = Σ_{k=1}^{6} Pk (6, 3, 6)ᵀ            (2)

Instead of expanding each of the six terms of the last summation and

adding, we apply the (right) distributive law for matrix multiplication,
viz. (A + B)C = AC + BC. If we add the six matrices P1 to P6 we get
the matrix

[ 2  2  2 ]
[ 2  2  2 ]
[ 2  2  2 ]

(This can also be obtained by noting that for each of the nine ordered
pairs (i, j) where 1 ≤ i ≤ 3, 1 ≤ j ≤ 3, there are exactly two of these
six matrices that have their (i, j)-th entry 1.) Therefore the calculation
above can be continued to yield

X (1, 1, 1)ᵀ = [ 2 2 2 ; 2 2 2 ; 2 2 2 ] (6, 3, 6)ᵀ = (30, 30, 30)ᵀ = 30 (1, 1, 1)ᵀ        (3)

which proves that (A) is correct. Simultaneously, we get that (D) is
false because we have (X − 30I)(1, 1, 1)ᵀ = (0, 0, 0)ᵀ. If X − 30I had an
inverse, say B, then we would have B(X − 30I) = I. But then

B(X − 30I)(1, 1, 1)ᵀ = (1, 1, 1)ᵀ                               (4)

while on the other hand, B(X − 30I)(1, 1, 1)ᵀ = B(0, 0, 0)ᵀ = (0, 0, 0)ᵀ, a
contradiction.
We now turn to (B). A sum of symmetric matrices is symmetric. So,
if we can prove that for every k = 1, 2, . . . , 6, Pk APkT (where A is given
by (1)) is symmetric, then (B) would be true. This follows by taking
transposes (and keeping in mind that (XY )T = Y T X T ). Indeed

(Pk APkT )T = (PkT )T AT PkT = Pk APkT (5)

because (PkT )T = Pk and AT = A as A is symmetric. Thus (B) is true.


The truth of (C) will follow from a property of trace, called its
invariance under similarity of matrices. Two square matrices X
and Y are said to be similar if there exists some invertible matrix Q

such that Y = Q−1 XQ. Taking determinants of both the sides and
keeping in mind that det (Q−1 ) = 1/det Q, we get that X and Y have the
same determinants. This is expressed by saying that the determinant
is a similarity invariant. It is also true that the trace, that is, the sum
of the diagonal elements, is a similarity invariant, i.e. tr (X) = tr (Y ).
But the proof is a bit tricky. It consists of first proving, by direct
calculation, that for any two square matrices U, V of the same order,
tr (UV ) = tr (V U) and then applying this result taking U = Q and
V = Q−1 X.
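For completeness, the 'direct calculation' is only one line:
$$\operatorname{tr}(UV) = \sum_i (UV)_{ii} = \sum_i \sum_j U_{ij} V_{ji} = \sum_j \sum_i V_{ji} U_{ij} = \sum_j (VU)_{jj} = \operatorname{tr}(VU)$$
Taking $U = Q$ and $V = Q^{-1}X$ then gives $\operatorname{tr}(Y) = \operatorname{tr}(Q^{-1}XQ) = \operatorname{tr}(QQ^{-1}X) = \operatorname{tr}(X)$.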
If we assume this result, and use the fact (noted above) that for
every permutation matrix P , P T = P −1 , we get an easy proof of (C).
Indeed, then each of the six matrices in the summation has the same
trace as A, which is 3. Since the trace of a sum equals the sum of the
traces, it follows that tr (X) = 6 × 3 = 18. So (C) is true.
It is not clear whether the paper setters had this proof in mind. The
fact that they have refused to use the word ‘trace’ probably indicates
that the candidates are not expected to know the concept, much less
its properties. On the other hand, the paper-setters could be double
crossing. That is, by deliberately avoiding the word ‘trace’ they want to
test whether a candidate is alert enough to realise that that is what they
mean by the sum of the diagonal entries and then they can comfortably
use whatever properties of trace they know.
In case no properties of trace are to be used, a proof of (C) can be given by verifying that for every $k = 1, 2, \ldots, 6$, the sum of the diagonal elements of $P_k\begin{pmatrix}2&1&3\\1&0&2\\3&2&1\end{pmatrix}P_k^T$ is indeed 3. This is obvious for k = 1, because $P_1 A P_1^T = A$. It is also not so difficult for k = 2, 3 and 6, for in each case, $P_k$ is its own inverse, and the corresponding permutation of the set {1, 2, 3} has one fixed point and interchanges the other two elements. The cases k = 4 and k = 5 remain. We do the case k = 5. Then the case k = 4 will follow by a similar argument.
To calculate the sum of the diagonal elements of $P_5 A P_4$, we first calculate $P_5 A$. As observed before, this comes by permuting the rows
of A. Specifically,
$$P_5 A = \begin{pmatrix}0&0&1\\1&0&0\\0&1&0\end{pmatrix}\begin{pmatrix}2&1&3\\1&0&2\\3&2&1\end{pmatrix} = \begin{pmatrix}3&2&1\\2&1&3\\1&0&2\end{pmatrix} \qquad (6)$$
We now have to multiply this on the right by $P_5^T$, i.e. by $P_4$. This time it is the columns of the matrix on the left that will get permuted. But, as we are interested only in the diagonal entries of the product $P_5 A P_4$, we do the multiplication partly and get
$$\begin{pmatrix}3&2&1\\2&1&3\\1&0&2\end{pmatrix}\begin{pmatrix}0&1&0\\0&0&1\\1&0&0\end{pmatrix} = \begin{pmatrix}1&\times&\times\\\times&2&\times\\\times&\times&0\end{pmatrix} \qquad (7)$$
the sum of whose diagonal elements is 3 as desired.


The relationship given

in (A) between the matrix X and the column
1
vector, say, ~u =  1  is very important. It is expressed by saying
 

1
that 30 is an eigenvalue of X with eigenvector ~u. The justification
given for ruling out (D) shows that if λ is an eigenvalue of X, then the
matrix X − λI is singular. This can be taken as an alternate definition
of an eigenvalue. In fact, by equating det(X − λI) with 0, we get
a polynomial equation in λ, whose roots give the eigenvalues of the
matrix X.
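As an aside, the roots of det(X − λI) = 0 for the X of this problem can be inspected numerically; a small sketch (NumPy, reusing the construction from the earlier snippet; since X is symmetric, eigvalsh applies):

    import numpy as np
    from itertools import permutations

    A = np.array([[2, 1, 3], [1, 0, 2], [3, 2, 1]])
    Ps = [np.eye(3, dtype=int)[list(p)] for p in permutations(range(3))]
    X = sum(P @ A @ P.T for P in Ps)

    # the roots of det(X - lambda*I) = 0, i.e. the eigenvalues of X
    print(np.round(np.linalg.eigvalsh(X.astype(float)), 6))   # [-6. -6. 30.]

In particular 30 appears among the roots, as the relationship in (A) demands.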
A very lengthy problem, both because of diverse options and the
ambiguity about whether properties of the trace can be assumed. If
trace and its invariance under similarity are not explicitly mentioned
in the syllabus, the candidates who happen to know these get an unfair
advantage.

Q.2 Let x ∈ IR and let
$$P = \begin{pmatrix}1&1&1\\0&2&2\\0&0&3\end{pmatrix}, \quad Q = \begin{pmatrix}2&x&x\\0&4&0\\x&x&6\end{pmatrix} \quad \text{and} \quad R = PQP^{-1}.$$
Then which of the following options is/are correct?
(A) There exists a real number x such that PQ = QP
(B) $\det R = \det\begin{pmatrix}2&x&x\\0&4&0\\x&x&5\end{pmatrix} + 8$, for all x ∈ IR
(C) For x = 0, if $R\begin{pmatrix}1\\a\\b\end{pmatrix} = 6\begin{pmatrix}1\\a\\b\end{pmatrix}$, then a + b = 5
(D) For x = 1, there exists a unit vector $\alpha\hat{i} + \beta\hat{j} + \gamma\hat{k}$ for which $R\begin{pmatrix}\alpha\\\beta\\\gamma\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix}$

Answer and Comments: (B, C). Yet another long problem about
matrices with unrelated options. Here the matrix P is a fixed matrix, but Q is a variable matrix with a parameter x and hence so is R.
Naturally, the properties of Q and hence of R will vary as x does. The
options are about these properties.
For (A), the equation PQ = QP in full form means
$$\begin{pmatrix}1&1&1\\0&2&2\\0&0&3\end{pmatrix}\begin{pmatrix}2&x&x\\0&4&0\\x&x&6\end{pmatrix} = \begin{pmatrix}2&x&x\\0&4&0\\x&x&6\end{pmatrix}\begin{pmatrix}1&1&1\\0&2&2\\0&0&3\end{pmatrix} \qquad (1)$$

On expansion, this will reduce to a system of 9 simultaneous equations


in the unknown x. But if we can show its inconsistency with just two
equations, we need not compute the other entries. Thus equating the
first rows of both the sides gives

$$[\,2 + x \quad 2x + 4 \quad x + 6\,] = [\,2 \quad 2 + 2x \quad 2 + 5x\,] \qquad (2)$$

The second entries of the two sides can never match. So with no further
computation, we declare that (A) is false.
For (B), we first note that det R = det Q since R and Q are
similar. (Here we are using the similarity invariance of the determinant
mentioned in the solution to the last problem.) Hence the statement

in (B) is equivalent to
$$\det\begin{pmatrix}2&x&x\\0&4&0\\x&x&6\end{pmatrix} = \det\begin{pmatrix}2&x&x\\0&4&0\\x&x&5\end{pmatrix} + 8 \qquad (3)$$

for all x ∈ IR.


The matrices on both the sides are equal except for their third columns.
So, we split the third column of the matrix on the left and write
$$\det\begin{pmatrix}2&x&x\\0&4&0\\x&x&6\end{pmatrix} = \det\begin{pmatrix}2&x&x\\0&4&0\\x&x&5\end{pmatrix} + \det\begin{pmatrix}2&x&0\\0&4&0\\x&x&1\end{pmatrix} \qquad (4)$$

The second determinant on the R.H.S. is a constant 8 for all x ∈ IR.


So (B) is true.
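A numerical spot-check of (B) for a few arbitrary sample values of x (a Python/NumPy sketch; the sample values are mine):

    import numpy as np

    P = np.array([[1., 1., 1.],
                  [0., 2., 2.],
                  [0., 0., 3.]])

    for x in (0.0, 1.7, -3.2):                  # arbitrary sample values of x
        Q  = np.array([[2, x, x], [0, 4, 0], [x, x, 6.]])
        Q5 = np.array([[2, x, x], [0, 4, 0], [x, x, 5.]])
        R  = P @ Q @ np.linalg.inv(P)
        print(np.isclose(np.linalg.det(R), np.linalg.det(Q5) + 8))   # True each time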
For (C), when x = 0, the matrix Q is a diagonal matrix
$$Q = \begin{pmatrix}2&0&0\\0&4&0\\0&0&6\end{pmatrix} \qquad (5)$$

We can calculate R = P QP −1 if we want. But let us see if there is a


better way. Using the language of eigenvalues introduced at the end
of the solution to the last problem, the hypothesis of (C) can be paraphrased to say that for x = 0, 6 is an eigenvalue of R with eigenvector $(1, a, b)^T$. For a diagonal matrix such as Q above, the diagonal entries are the eigenvalues, with the corresponding basic column vectors as eigenvectors. In particular, 6 is an eigenvalue of Q with $(0, 0, 1)^T$ as an eigenvector. That is,
$$Q\begin{pmatrix}0\\0\\1\end{pmatrix} = 6\begin{pmatrix}0\\0\\1\end{pmatrix} \qquad (6)$$

(which can be guessed and proved without knowing anything about
eigenvalues or eigenvectors). Let us rewrite R = P QP −1 as RP = P Q.
We then have
$$RP\begin{pmatrix}0\\0\\1\end{pmatrix} = PQ\begin{pmatrix}0\\0\\1\end{pmatrix} = 6P\begin{pmatrix}0\\0\\1\end{pmatrix} \qquad (7)$$
This shows that $P(0, 0, 1)^T$ is indeed an eigenvector of R with eigenvalue
6. Evidently, it is the last column of P . Or, we can compute it explicitly
as
$$P\begin{pmatrix}0\\0\\1\end{pmatrix} = \begin{pmatrix}1&1&1\\0&2&2\\0&0&3\end{pmatrix}\begin{pmatrix}0\\0\\1\end{pmatrix} = \begin{pmatrix}1\\2\\3\end{pmatrix} \qquad (8)$$
Thus we see that the column vector $(1, 2, 3)^T$ satisfies the hypothesis of (C), viz.
$$R\begin{pmatrix}1\\2\\3\end{pmatrix} = 6\begin{pmatrix}1\\2\\3\end{pmatrix} \qquad (9)$$
From this we must not hastily conclude that a = 2 and b = 3. We must
ensure that this is the only column vector which satisfies the hypothesis
of (C). This is not so easy to do without identifying the matrix R. So
we calculate R explicitly from R = P QP −1 . We already know P and
Q. Clearly, det (P ) = 6. Using the adjoint formula for the inverse of a
matrix, $P^{-1}$ comes out as
$$P^{-1} = \frac{1}{6}\begin{pmatrix}6&-3&0\\0&3&-2\\0&0&2\end{pmatrix} = \begin{pmatrix}1&-1/2&0\\0&1/2&-1/3\\0&0&1/3\end{pmatrix} \qquad (10)$$
Therefore,
$$R = PQP^{-1} = \begin{pmatrix}1&1&1\\0&2&2\\0&0&3\end{pmatrix}\begin{pmatrix}2&0&0\\0&4&0\\0&0&6\end{pmatrix}\begin{pmatrix}1&-1/2&0\\0&1/2&-1/3\\0&0&1/3\end{pmatrix}$$
$$= \begin{pmatrix}2&4&6\\0&8&12\\0&0&18\end{pmatrix}\begin{pmatrix}1&-1/2&0\\0&1/2&-1/3\\0&0&1/3\end{pmatrix} = \begin{pmatrix}2&1&2/3\\0&4&4/3\\0&0&6\end{pmatrix} \qquad (11)$$
Hence
$$R\begin{pmatrix}1\\a\\b\end{pmatrix} = \begin{pmatrix}2&1&2/3\\0&4&4/3\\0&0&6\end{pmatrix}\begin{pmatrix}1\\a\\b\end{pmatrix} = \begin{pmatrix}2 + a + 2b/3\\4a + 4b/3\\6b\end{pmatrix} \qquad (12)$$
Equating this with $6(1, a, b)^T$ gives a system of two equations in the two
unknowns a and b, viz.
$$2 + a + \frac{2}{3}b = 6 \qquad (13)$$
$$\text{and} \quad 4a + \frac{4}{3}b = 6a \qquad (14)$$
which simplify to 3a + 2b = 12 and 2b = 3a respectively. So we can now legitimately say that a = 2, b = 3 and hence a + b = 5. Thus the
option (C) is correct.
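If one is willing to compute R anyway, the whole of (C) can be verified in a few lines (a NumPy sketch, not part of the intended pencil-and-paper solution):

    import numpy as np

    P = np.array([[1., 1., 1.],
                  [0., 2., 2.],
                  [0., 0., 3.]])
    Q = np.diag([2., 4., 6.])            # Q for x = 0
    R = P @ Q @ np.linalg.inv(P)

    print(np.round(R, 6))                # matches (11): rows [2, 1, 2/3], [0, 4, 4/3], [0, 0, 6]
    v = np.array([1., 2., 3.])           # the candidate eigenvector with a = 2, b = 3
    print(np.allclose(R @ v, 6 * v))     # True, so a + b = 5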
 
Finally, for x = 1, Q becomes $\begin{pmatrix}2&1&1\\0&4&0\\1&1&6\end{pmatrix}$. Fortunately, this time we are spared of the torture of calculating R explicitly. By a direct calculation, det(Q) = 4 × 11 = 44 ≠ 0. Therefore, Q is non-singular.
But then so is R = P QP −1. And this is sufficient to disprove (D). For,
if it were true then we would get, by premultiplication by R−1 (which
exists)
$$\begin{pmatrix}\alpha\\\beta\\\gamma\end{pmatrix} = R^{-1}\begin{pmatrix}0\\0\\0\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix} \qquad (15)$$
regardless of what $R^{-1}$ is. This is a contradiction because the L.H.S. is a non-zero vector since $\alpha^2 + \beta^2 + \gamma^2 = 1$. (This piece of data is actually
superfluous. All that matters is that α, β, γ are not all 0.)

It is thus seen, that both (A) and (D) are simple appendages to the
main theme of the problem. In fact, (A) is childishly simple while in
(D) the paper setters have revealed their childishness by unnecessarily
stipulating that the vector αî + β ĵ + γ k̂ is a unit vector, when all that
matters is that it is a non-zero vector.
(B) is a novel and good option. Although problems on determinants
are common, those which use their multilinearity (i.e. linearity in each
column or in each row, see Comment No. 23 of Chapter 2, p. 78) are
rare. See the JEE 1985 problem on p.166 where a column is split after
applying a binomial identity.
In (C), the paper setters have asked a question about eigenvalues
and eigenvectors without using either of these terms, just as they asked
(C) of the last problem without using the word ‘trace’. Like trace
and determinants, eigenvalues are also a similarity invariant. This can
be proved from the similarity invariance of determinants, using the
characterisation of eigenvalues mentioned at the end of the solution to
the last problem. So, suppose, R = P QP −1 and λ is an eigenvalue of
Q. Then det (Q − λI) = 0. But since the matrix λI commutes with
all matrices, we have

P (Q − λI)P −1 = P QP −1 − P (λI)P −1 = R − λI (16)

Hence R − λI is similar to Q − λI. So det(R − λI) = det (Q − λI) = 0.


Therefore λ is also an eigenvalue of R.
A more direct proof is obtained by writing R = P QP −1 as RP =
P Q. In essence this is what we did in our first attempt to tackle (C).
As λ is an eigenvalue of Q, there exists a non-zero column vector ~u
such that Q~u = λ~u. But then we get RP ~u = P λ~u = λP ~u. Further
by non-singularity of P , P ~u is a non-zero column vector. This not
only gives λ as an eigenvalue of R, it also gives an eigenvector P ~u,
obtained from the eigenvector ~u of Q. It also shows that even though
similar matrices have the same eigenvalues, the corresponding
eigenvectors need not be the same.
Of course, eigenvectors are never unique. Trivially, A~0 = λ~0 no
matter what A and λ are. So, we do not regard the zero vector as
an eigenvector. Even after disallowing ~0, if ~u is a (non-zero) eigen-
vector of A, then so is α~u for any non-zero scalar α as we see from

A(α~u) = αA~u = αλ~u = λ(α~u). As a measure of standardisation, one
can consider only eigenvectors of unit length, or eigenvectors where one
of the entries is given some arbitrary non-zero value, say 1.
This is what the paper setters have done in designing Part (C) of the question. They have, without using the words, given 6 as an eigenvalue and $(1, a, b)^T$ as a standardised eigenvector of R corresponding to the eigenvalue 6. The number 6 comes from the last diagonal entry of the matrix $Q = \begin{pmatrix}2&0&0\\0&4&0\\0&0&6\end{pmatrix}$. It is obvious that 6 is an eigenvalue of Q with $(0, 0, 1)^T$ as an eigenvector. (The other two diagonal entries, 2 and 4, of Q are also eigenvalues of Q but the eigenvectors are different.) Multiplying this on the left by P we get $(1, 2, 3)^T$ as an eigenvector of R. It is standardised and so it is tempting to equate it with $(1, a, b)^T$ to
conclude that a = 2 and b = 3.
Apparently, the paper-setters thought that this makes the problem
reasonable, since nowhere do we have to compute the matrix R. Iden-
tifying the eigenvalues of a diagonal matrix is easy, identifying their
eigenvectors is also easy and converting any of these eigenvectors to
corresponding eigenvectors of R is also easy since all it involves is pre-
multiplication of the eigenvectors of Q by the matrix P . (In fact, these
eigenvectors of Q can be chosen in such a way, that the correspond-
ing eigenvectors of R are simply the corresponding columns of P . So,
hardly any calculations are needed.) When one such eigenvector of R
is in a standardised form (for example, its first entry is 1), the other
two entries are uniquely determined.
So, everything sounds simple enough. The catch is that even after standardisation, the same matrix may have more than one eigenvector for the same eigenvalue. As an extreme example, the identity matrix $I_3$ has 1 as its only eigenvalue, and every (non-zero) column vector of length 3 is an eigenvector. In particular, $(1, a, b)^T$ is a standardised eigenvector of $I_3$ for any values of a and b. Therefore just because we are given that $(1, a, b)^T$ is an eigenvector of R and we have found $(1, 2, 3)^T$ also as an eigenvector of R, both corresponding to the same eigenvalue, viz. 6, we cannot conclude that a = 2 and b = 3. As a less extreme example, if our matrix Q were $\begin{pmatrix}2&0&0\\0&6&0\\0&0&6\end{pmatrix}$, then the second and the third columns of P, viz. $(1, 2, 0)^T$ and $(1, 2, 3)^T$, would both be standardised eigenvectors of R each with eigenvalue 6.
There is a way to salvage the situation without going through
the computation of R. But it requires the concept of not only an
eigenvector corresponding to an eigenvalue but that of an eigenspace.
Suppose λ is an eigenvalue of A. Then, by definition, there is some non-
zero column vector ~u such that A~u = λ~u, i.e. (A − λI)~u = ~0. Now let
Nλ be the set of all column vectors ~u which satisfy (A−λI)~u = ~0. Note
that ~0 ∈ Nλ and is specifically excluded from being an eigenvector of λ.
However, all other elements of Nλ are eigenvectors of A corresponding
to the eigenvalue λ. As a result, Nλ (or more precisely, Nλ (A)) is called
the eigenspace of A corresponding to the eigenvalue λ. It is a subspace of IR³ of dimension at least 1. In some cases, it may have a higher dimension, e.g. for the matrix $Q = \begin{pmatrix}2&0&0\\0&6&0\\0&0&6\end{pmatrix}$, $N_6(Q)$ and hence also $N_6(R)$ have dimensions 2 each. In this case $N_6(R)$ is a plane spanned by the second and the third columns of P.


Coming to our problem as it stands, in (C) the matrix Q is $\begin{pmatrix}2&0&0\\0&4&0\\0&0&6\end{pmatrix}$ and we know that $N_6(R)$ contains at least the line spanned by the third column of P. We shall be done if we can show that the
dimension of N6 (R) cannot exceed 1. For this we note that 2 and 4 are
also eigenvalues of R and hence the eigenspaces N2 (R) and N4 (R) are
also of dimension at least one each. They contain, respectively, the first
and the second column of P . As the three columns of P are linearly
independent, there is no room in IR³ to accommodate any vector in $N_6(R)$ which is not a multiple of the third column of P. So $(1, a, b)^T$ is the only standardised eigenvector of R corresponding to the eigenvalue 6. We are now justified in equating it with $(1, 2, 3)^T$. That yields a = 2
and b = 3.
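In terms of ranks, the one-dimensionality of $N_6(R)$ amounts to $R - 6I$ having rank 2, which is immediate from (11); a tiny numerical check (NumPy sketch):

    import numpy as np

    R = np.array([[2., 1., 2/3],
                  [0., 4., 4/3],
                  [0., 0., 6.]])
    print(np.linalg.matrix_rank(R - 6 * np.eye(3)))   # 2, so N6(R) has dimension 3 - 2 = 1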
For a candidate who knows the concepts of eigenvalues and eigenspaces,
(C) is a good problem. For others, it is a torture if done honestly.
Considering that even eigenvalues are not in the JEE syllabus, it is too
much to expect the first possibility. So, (C) is an excellent example
where scruples do not pay.

Q.3 Dropped. (Sums of trigonometric functions of angles in A.P.)

Q.4 Let f : IR −→ IR be a function. We say that f has

PROPERTY 1 if $\lim_{h \to 0} \dfrac{f(h) - f(0)}{\sqrt{|h|}}$ exists and is finite, and

PROPERTY 2 if $\lim_{h \to 0} \dfrac{f(h) - f(0)}{h^2}$ exists and is finite.

Then which of the following options is/are correct?

(A) f (x) = |x| has PROPERTY 1


(B) f(x) = x^{2/3} has PROPERTY 1
(C) f (x) = x|x| has PROPERTY 2
(D) f (x) = sin x has PROPERTY 2

Answer and Comments: (A, B). In all four options, the given function f(x) vanishes at x = 0. So the question is to check the behaviour of $\dfrac{f(x)}{\sqrt{|x|}}$ as x → 0 for PROPERTY 1 and of $\dfrac{f(x)}{x^2}$ for PROPERTY 2.
PROPERTY 1 will hold if f(x) is smaller than or comparable to $\sqrt{|x|}$ as x → 0. In (A), the ratio $\dfrac{f(x)}{\sqrt{|x|}}$ equals $\sqrt{|x|}$, which tends to 0 as x → 0. In (B) this ratio is $|x|^{\frac{2}{3} - \frac{1}{2}}$. As $\frac{2}{3} > \frac{1}{2}$, the exponent is positive and hence the ratio tends to 0. So both (A) and (B) are true.
In (C), the ratio equals $\dfrac{|x|}{x}$, which is 1 for x > 0 and −1 for x < 0. So, although both $\lim_{x \to 0^+} \dfrac{f(x)}{x^2}$ and $\lim_{x \to 0^-} \dfrac{f(x)}{x^2}$ exist, they are unequal and so $\lim_{x \to 0} \dfrac{f(x)}{x^2}$ does not exist. Hence (C) is false.
In (D), the ratio equals $\dfrac{\sin x}{x^2} = \dfrac{\sin x}{x} \times \dfrac{1}{x}$. The first factor tends to 1 as x → 0. But the second factor does not tend to any limit. So (D) is false too.
An extremely simple problem.

Q.5 Let
$$f(x) = \frac{\sin \pi x}{x^2}, \quad x > 0.$$
Let $x_1 < x_2 < x_3 < \ldots < x_n < \ldots$ be all the points of local maximum of f and $y_1 < y_2 < y_3 < \ldots < y_n < \ldots$ be all the points of local minimum of f. Then which of the following options is/are correct?
(A) $x_1 < y_1$  (B) $x_{n+1} - x_n > 2$ for every n
(C) $x_n \in \left(2n, 2n + \frac{1}{2}\right)$ for every n  (D) $|x_n - y_n| > 1$ for every n

Answer and Comments: (B, C, D). The given function has deriva-
tives of all orders at all points x > 0. So, the points of local extrema

can be found from the behaviour of its first derivative, with optional
help from the second derivative.

$$f'(x) = \frac{x^2 \pi \cos \pi x - 2x \sin \pi x}{x^4} = \frac{\pi x \cos \pi x - 2 \sin \pi x}{x^3} \qquad (1)$$
As the denominator is positive for all x, we only need to consider the sign of the numerator, say,
$$h(x) = \pi x \cos \pi x - 2 \sin \pi x \qquad (2)$$
Clearly h(x) = 0 if and only if $\tan \pi x = \dfrac{\pi x}{2}$.
It is not possible to solve this equation exactly. But the locations of its solutions can be determined from the points of intersection of the graphs of $y = \tan \pi x$ and the straight line $y = \frac{\pi}{2}x$ shown in the figure below, where the line $y = \frac{\pi}{2}x$ is shown by L.

[Figure: the branches of $y = \tan \pi x$ for x > 0 together with the line L, whose successive intersections mark $y_1, x_1, y_2, x_2, \ldots$ along the x-axis.]

Note that f is not defined at x = 0. But for x > 0, from (1) we see
that f ′ (x) < 0 if x is close to 0. So, f is decreasing over the interval
(0, α) where α is the first zero of f ′ (x). Therefore, α is a point of local

minimum. Hence α = y1 . At y1 , f ′ changes its sign from − to + and
so f changes its behaviour from decreasing to increasing till x equals
the second zero of f ′ , which has to be a local maximum and hence x1 .
Thus we see that the xn ’s and yn ’s are interlaced by

0 < y1 < x1 < y2 < x2 < . . . < yn < xn < yn+1 < xn+1 < . . . (3)

In particular, we see that (A) is false. But these points are not equally spaced. From the figure above we see that $2n < x_n < 2n + \frac{1}{2}$, but at the same time $x_n$ is moving closer to the dotted line $x = 2n + \frac{1}{2}$ as n increases. (Of course, this can also be proved directly from (1). An analytic proof would actually require that. But it is much easier to see it from the figure. That is the advantage of a well drawn diagram. Note that the diagram above is not that of the graph of f(x). But it is the one that gives us the information we need.)
This proves both (B) and (C). Finally, for (D), too, we note that $x_n$ is closer to the dotted line $x = 2n + \frac{1}{2}$ than $y_n$ is to the dotted line $x = 2n - \frac{1}{2}$. Hence
$$x_n - y_n > \left(2n + \frac{1}{2}\right) - \left(2n - \frac{1}{2}\right) = 1 \qquad (4)$$
Hence (D) is true too.
A simple but very good problem whose solution comes simply by
looking at a suitable subsidiary diagram, without drawing the graph of
f (x) (which would involve a lot more work). There is not much calculus
in the problem. After taking the derivative f ′ (x), the problem reduces
to the solutions of a trigonometric equation. But its novel feature is
that it does not ask to identify these solutions, but only to locate them
approximately.
Although not important in an MCQ, we illustrate the analytic proof too. It is based on the fact that $\tan \pi x$ is strictly increasing on $(n, n + \frac{1}{2})$ for every n ∈ IN. Now, we already have $x_1 < y_2$. We claim that $x_1 + 1 < y_2$, for which it suffices to prove that $\tan(\pi(x_1 + 1)) < \tan \pi y_2$. But this is true because $\tan(\pi(x_1 + 1)) = \tan \pi x_1 = \frac{\pi x_1}{2}$ and $\tan \pi y_2 = \frac{\pi y_2}{2}$. Similarly, from $y_2 < x_2$, we prove that $y_2 + 1 < x_2$. Thus the zeros of h(x) are getting closer and closer to their nearest dotted lines, which are equally spaced.
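The locations asserted in (B), (C) and (D) are easy to confirm numerically. Here is a short Python sketch (the bisection helper and the number of extrema examined are my choices) that finds the first few critical points of f as the zeros of h in the intervals (m, m + 1/2):

    import math

    def h(x):                        # the numerator of f'(x), whose sign decides everything
        return math.pi * x * math.cos(math.pi * x) - 2 * math.sin(math.pi * x)

    def zero_in(a, b, steps=60):     # simple bisection; h changes sign on [a, b]
        for _ in range(steps):
            m = (a + b) / 2
            if h(a) * h(m) <= 0:
                b = m
            else:
                a = m
        return (a + b) / 2

    # one critical point of f in each interval (m, m + 1/2), m = 1, 2, ...
    crit = [zero_in(m, m + 0.5) for m in range(1, 9)]
    ys, xs = crit[0::2], crit[1::2]      # minima y_n and maxima x_n, interlaced as in (3)
    print([round(x, 4) for x in xs])                                  # each x_n lies in (2n, 2n + 1/2)
    print([round(xs[i + 1] - xs[i], 4) for i in range(len(xs) - 1)])  # all gaps exceed 2
    print([round(xs[i] - ys[i], 4) for i in range(len(xs))])          # all differences exceed 1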

Q.6 For a ∈ IR, |a| > 1, let
$$\lim_{n \to \infty} \frac{1 + \sqrt[3]{2} + \ldots + \sqrt[3]{n}}{n^{7/3}\left(\dfrac{1}{(an+1)^2} + \dfrac{1}{(an+2)^2} + \ldots + \dfrac{1}{(an+n)^2}\right)} = 54.$$
Then the possible value(s) of a is/are
(A) −9  (B) −6  (C) 7  (D) 8

Answer and Comments: (A, D). In both the numerator and the
denominator, we have a sum of the functional values of some function
at the points 1, 2, . . . , n. If we divide these sums by suitable powers
of n, they can be expressed as the Riemann sums of suitable functions
over the interval [0, 1] partitioned into n equal parts.
For this we factor the coefficient $n^{7/3}$ in the denominator as $n^2 \times n^{1/3}$. Then the expression whose limit is to be taken can be written as $\dfrac{A_n}{B_n}$ where
$$A_n = \sum_{k=1}^{n} \left(\frac{k}{n}\right)^{1/3} \qquad (1)$$
$$\text{and} \quad B_n = \sum_{k=1}^{n} \frac{n^2}{(an+k)^2} = \sum_{k=1}^{n} \frac{1}{\left(a + \frac{k}{n}\right)^2} \qquad (2)$$
If we multiply both the numerator and the denominator by $\frac{1}{n}$, then the numerator is a Riemann sum of $f(x) = x^{1/3}$ and the denominator a Riemann sum of the function $g(x) = \frac{1}{(a+x)^2}$, both over the interval [0, 1] partitioned into n equal parts.


Therefore the given limit equals $\dfrac{I_1}{I_2}$ where
$$I_1 = \int_0^1 x^{1/3}\, dx \qquad (3)$$
$$\text{and} \quad I_2 = \int_0^1 \frac{1}{(a+x)^2}\, dx \qquad (4)$$
(The hypothesis |a| > 1 ensures that the denominator of the second
integrand does not vanish anywhere on [0, 1].)

Both the integrals are easy to evaluate. Indeed,
$$I_1 = \left[\frac{3}{4}x^{4/3}\right]_{x=0}^{x=1} = \frac{3}{4} \qquad (5)$$
$$\text{and} \quad I_2 = \left[-\frac{1}{a+x}\right]_{x=0}^{x=1} = \frac{1}{a} - \frac{1}{a+1} = \frac{1}{a(a+1)} \qquad (6)$$
The data now reduces to a quadratic in a, viz. a(a + 1) = 72, i.e. $a^2 + a - 72 = 0$, whose roots are 8 and −9.
A straightforward, if slightly laborious problem once the idea of
Riemann sums strikes.
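A quick numerical confirmation of the limit for the two admissible values of a (a Python sketch; the truncation point n = 100000 is arbitrary):

    from math import fsum

    def ratio(a, n):
        A_n = fsum((k / n) ** (1 / 3) for k in range(1, n + 1))      # A_n as in (1)
        B_n = fsum(1 / (a + k / n) ** 2 for k in range(1, n + 1))    # B_n as in (2)
        return A_n / B_n

    print(round(ratio(8, 100000), 2))    # approximately 54
    print(round(ratio(-9, 100000), 2))   # approximately 54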

Q.7 Let f : IR −→ IR be given by f(x) = (x − 1)(x − 2)(x − 5). Define
$$F(x) = \int_0^x f(t)\, dt, \quad x > 0.$$

Then which of the following options is/are correct?

(A) F has a local minimum at x = 1


(B) F has a local maximum at x = 2
(C) F has two local maxima and one local minimum in (0, ∞)
(D) F(x) ≠ 0 for all x ∈ (0, 5)

Answer and Comments: (A, B, D). We can evaluate F (x) explicitly.


But for the options (A) to (C), we shall be dealing with F ′ (x) which
equals f (x) by the Fundamental Theorem of Calculus (second form).
Thus F ′ (x) has three zeros, 1, 2 and 5. For x < 1, all the three factors
x − 1, x − 2 and x − 5 are negative and so F ′(x) < 0. For x ∈ (1, 2)
F ′ (x) > 0 as two factors are negative and one positive. So, F changes
from decreasing to increasing at x = 1. Hence there is a local minimum
at x = 1. By a similar reasoning, there is a local maximum at x = 2.
So, both (A) and (B) are true.
For x ∈ (2, 5), F′(x) < 0 as two factors are positive and the third negative. For x > 5, F′(x) > 0. So there is a local minimum at 5. As there is already a local minimum at x = 1, (C) is false.

For (D), suppose there is some b ∈ (0, 5) such that F (b) = 0. Since
F (0) = 0 too, by Rolle’s theorem there is some c ∈ (0, b) such that
F′(c) = 0, i.e. (c − 1)(c − 2)(c − 5) = 0. But c < 5 and so c − 5 ≠ 0. So c must be either 1 or 2. However, this is not a sufficient ground to
prove (D). So here we actually calculate F (x) by integrating f . Since
$(t - 1)(t - 2)(t - 5) = t^3 - 8t^2 + 17t - 10$, F(x) comes out as
$$F(x) = \int_0^x (t^3 - 8t^2 + 17t - 10)\, dt = \frac{x^4}{4} - \frac{8}{3}x^3 + \frac{17}{2}x^2 - 10x \qquad (1)$$
On the interval [0, 5], F (x) can attain its maximum either at 2 (the
point of local maximum) or at one of the end points 0 and 5. Since F
is decreasing on [2, 5] we need not consider the end point 5. As between
0 and 2, F(0) = 0 while $F(2) = 4 - \frac{64}{3} + 34 - 20 = 18 - \frac{64}{3} < 0$. So,
the maximum of F (x) on [0, 5] is F (0) = 0. Therefore there cannot be
any x ∈ (0, 5] for which F (x) = 0. Hence (D) is correct. (As a matter
of fact, F (x) ≤ 0 for all x ∈ [0, 5].)
A simple problem on maxima and minima. Only the last part
requires an evaluation of F (x).
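For (D), the explicit form (1) of F makes a numerical check easy (a small NumPy sketch):

    import numpy as np

    def F(x):                       # the antiderivative computed in (1)
        return x**4 / 4 - 8 * x**3 / 3 + 17 * x**2 / 2 - 10 * x

    xs = np.linspace(1e-6, 5, 100001)
    print(F(2))                     # 18 - 64/3 = -10/3, approximately -3.3333
    print(np.all(F(xs) < 0))        # True: F does not vanish on (0, 5]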

Q.8 Dropped. (Finding collinear points from three given lines, one on each).

SECTION - 2
This section contains SIX questions. Each question is to be answered by
a numerical value rounded to two places of decimals. There are 3 marks if
the answer is correct and 0 marks otherwise. No negative marks.

Q.9 Suppose
$$\det\begin{pmatrix} \displaystyle\sum_{k=0}^{n} k & \displaystyle\sum_{k=0}^{n} {}^{n}C_k\, k^2 \\[2ex] \displaystyle\sum_{k=0}^{n} {}^{n}C_k\, k & \displaystyle\sum_{k=0}^{n} {}^{n}C_k\, 3^k \end{pmatrix} = 0$$
holds for some positive integer n. Then $\displaystyle\sum_{k=0}^{n} \frac{{}^{n}C_k}{k+1}$ equals .... .

Answer and Comments: 6.2. The first entry of the determinant is $\frac{n(n+1)}{2}$. All other entries as well as the expression in the answer involve the binomial theorem, in its simplest form, viz.
$$(1 + x)^n = \sum_{k=0}^{n} {}^{n}C_k\, x^k \qquad (1)$$

Indeed, if we integrate both the sides from 0 to 1, we get
$$\sum_{k=0}^{n} \frac{{}^{n}C_k}{k+1} = \int_0^1 (1+x)^n\, dx = \frac{1}{n+1}\left(2^{n+1} - 1\right) \qquad (2)$$
So we shall get the answer as soon as we know the integer n. For this we need to express the entries of the determinant as functions of n and then solve the resulting equation in n. The last entry, viz. $\sum_{k=0}^{n} {}^{n}C_k\, 3^k$, comes directly by putting x = 3 in (1). So it is $4^n$.
For the remaining two entries of the determinant we have to do some work. First consider $\sum_{k=0}^{n} {}^{n}C_k\, k$. We can drop the term for k = 0. For the rest, we differentiate (1) to get
$$\sum_{k=1}^{n} {}^{n}C_k\, k x^{k-1} = n(1+x)^{n-1} \qquad (3)$$
Putting x = 1 yields
$$\sum_{k=0}^{n} {}^{n}C_k\, k = n 2^{n-1} \qquad (4)$$

To get the remaining entry of the determinant, viz. $\sum_{k=0}^{n} {}^{n}C_k\, k^2$, we drop the term k = 0 and write $k^2$ as $k(k-1) + k$ to get
$$\sum_{k=1}^{n} {}^{n}C_k\, k^2 = \sum_{k=1}^{n} {}^{n}C_k\, k(k-1) + \sum_{k=1}^{n} {}^{n}C_k\, k \qquad (5)$$
The second sum on the R.H.S. was already evaluated. For the first sum, we can drop the term for k = 1 too. For the rest, differentiate (1) twice to get
$$\sum_{k=2}^{n} {}^{n}C_k\, k(k-1) x^{k-2} = n(n-1)(1+x)^{n-2} \qquad (6)$$
Putting x = 1 gives the first sum on the R.H.S. of (5) as $n(n-1)2^{n-2}$.
Hence the given equation now becomes
$$\det\begin{pmatrix} \frac{n(n+1)}{2} & n(n-1)2^{n-2} + n2^{n-1} \\ n2^{n-1} & 4^n \end{pmatrix} = 0 \qquad (7)$$
which simplifies to
$$n(n+1)2^{2n-1} = n^2 2^{2n-2} + n^2(n-1)2^{2n-3} \qquad (8)$$
and further to
$$4(n+1) = 2n + n(n-1) \qquad (9)$$
i.e. $n^2 - 3n - 4 = 0$. This is a quadratic with only one positive root, viz. 4. So, n = 4. As already noted, the answer by (2) is $\frac{31}{5} = 6.2$.
Binomial identities have taken a back seat after the JEE became a totally multiple choice examination. So any attempts to revive them deserve to be appreciated. In the present problem, the identities for evaluating the sums appearing in the determinant are fairly well known. For a candidate who does not know them, it will be very time consuming to come up with the right expressions for their sums. So, this problem is more a test of memory.
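The value n = 4 and the final answer are easy to double-check with a few lines of Python (a sketch; the entries are read off as in (7)):

    from math import comb
    from fractions import Fraction

    n = 4
    a11 = sum(range(n + 1))                                 # first entry: n(n+1)/2
    a12 = sum(comb(n, k) * k * k for k in range(n + 1))     # = n(n-1)2^(n-2) + n 2^(n-1)
    a21 = sum(comb(n, k) * k for k in range(n + 1))         # = n 2^(n-1)
    a22 = sum(comb(n, k) * 3 ** k for k in range(n + 1))    # = 4^n
    print(a11 * a22 - a12 * a21)                            # 0: n = 4 makes the determinant vanish

    ans = sum(Fraction(comb(n, k), k + 1) for k in range(n + 1))
    print(float(ans))                                       # 6.2 = (2^(n+1) - 1)/(n + 1)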
Q.10 Five persons A, B, C, D and E are seated in a circular arrangement. If
each of them is given a hat of one of the three colours red, blue and
green, then the number of ways of distributing the hats such that the
persons seated in adjacent seats get different coloured hats is ... .

Answer and Comments: 30.0. Suppose that the persons A to E are seated anticlockwise around the circle as shown below.

[Figure: A, B, C, D, E placed anticlockwise around a circle, with A given a red hat (marked r).]

We treat the persons as vertices of a (not necessarily regular) pentagon.
The problem amounts to colouring these five vertices with three colours,
r, b, g so that no two adjacent vertices get the same colour.
Clearly, no colour can appear more than two times. So 2 colours appear twice each and the remaining colour appears only once. This lone colour can be chosen in 3 ways and the vertex to be coloured with it in 5 ways. So there are 15 such pairings where a vertex gets the lone colour. Take any one such, e.g. where A gets red as shown. Then either B and D get green and C and E the remaining colour blue, or vice versa. So for each one out of these 15 pairings, there are two possible distributions. Hence the total number of colourings is 15 × 2 = 30.
A good, simple problem on combinatorics. Once the essential idea
of a lone colour strikes, the calculations can be done mentally.
There is a more systematic way to do the problem using what
is called the principle of inclusion and exclusion. We spare the
details as they were given in the solution to Q.17 of Paper 1 of Advanced
JEE 2018. (See also Exercises (1.43) to (1.47) on pp. 32-33.)
We let S be the set of all possible colourings of the five vertices with
three colours r, b, g without any restriction. Clearly,

$$|S| = 3^5 = 243 \qquad (1)$$

We have to exclude from S, the ‘bad’ colourings, i.e. those colourings in


which some vertex and its anticlockwise neighbour get the same colour.
Denote by SA , the set of those colourings in which A and B get the
same colour. Define SB , SC , SD and SE similarly. We want colourings
which are not in any of these five subsets SA , SB , SC , SD , SE .
The principle of inclusion and exclusion says that the number of such
valid colourings is given by

|S| − s1 + s2 − s3 + s4 − s5 (2)

where
$$s_1 = \sum |S_P| = |S_A| + |S_B| + |S_C| + |S_D| + |S_E| \qquad (3)$$
$$s_2 = \sum |S_P \cap S_Q| \qquad (4)$$
$$s_3 = \sum |S_P \cap S_Q \cap S_R| \qquad (5)$$
$$s_4 = \sum |S_P \cap S_Q \cap S_R \cap S_T| \qquad (6)$$
$$s_5 = |S_A \cap S_B \cap S_C \cap S_D \cap S_E| \qquad (7)$$

where each summation runs over all intersections with P, Q, R, T dis-


tinct elements from {A, B, C, D, E}. Clearly, the number of terms in
s1 , s2 , s3 and s4 is 5, 10, 10 and 5 respectively.
Calculation of each $|S_P|$ is easy. Thus, in $S_A$, A, B get the same colour and C, D, E any colour without restriction. So $|S_P| = 3^4 = 81$ for each P and hence $s_1 = 5 \times 81 = 405$.
To find s2 , we classify the ten terms in the sum into two types: those
of the form |SA ∩ SB |, i.e. where P, Q are adjacent letters and those of
the form |SA ∩ SC |, i.e. where P, Q are not adjacent. There are 5 terms
of each type. In the first type, three adjacent vertices get the same
colour and the other two any colour. So each term equals $3^3 = 27$. In
the second type, too, say in |SA ∩ SC |, A, B get the same colour, C, D
the same colour (which may be the same as that of A and B) and E
gets any of the three colours. So each such term also equals 27. Hence
s2 = 270.
For $s_3$, out of the ten terms, there are five terms of the type
|SA ∩ SB ∩ SC |. In each, 4 adjacent vertices get the same colour. Hence
each such term equals 9. The remaining five terms are of the type
|SA ∩ SB ∩ SD |. Here A, B, C get the same colour and D, E get the
same colour. So each such term is 9 too. Hence s3 = 10 × 9 = 90.
In s4 , no matter which four distinct letters are represented by
P, Q, R and T , all vertices get the same colour. Hence every term is 3.
So s4 = 5 × 3 = 15.
Finally, in s5 there is only one term and it is 3 as all vertices get the
same colour.
Adding, the number of valid colourings is
$$|S| - s_1 + s_2 - s_3 + s_4 - s_5 = 243 - 405 + 270 - 90 + 15 - 3 = 30 \qquad (8)$$
So, we get the same answer. Of course, the first solution is much
shorter. But it requires the key idea that in any valid colour distribu-
tion, the colours are distributed as 2 + 2 + 1. The second solution is
more methodical. In problems where the parameters are small, ad-hoc

methods often work better. But there are situations where we have to
find the answer for any n, e.g. counting dn , the number of derange-
ments, i.e. permutations of n symbols having no fixed point. Here the
principle of inclusion and exclusion gives the answer as
$$d_n = n!\left(\frac{1}{0!} - \frac{1}{1!} + \frac{1}{2!} - \ldots + (-1)^n\frac{1}{n!}\right) \qquad (9)$$
For large n this is very close to $\frac{n!}{e}$. So, if n letters are blindly put into n envelopes with matching addresses, then the probability that no letter is placed in the right envelope is nearly $\frac{1}{e}$.
The requirement that no two adjacent vertices get the same colour is
important in graph theory. If in the diagram above, we join all adjacent
pairs of vertices, we get a pentagonal graph C5 with 5 vertices and 5
edges. The minimum number of colours needed to colour its vertices so
that no two adjacent vertices get the same colour is 3. For any graph,
this number is called its chromatic number. Thus the chromatic
number of the pentagonal graph C5 is 3. More generally, the chromatic
number of Cn , (n ≥ 3) is 3 if n is odd and 2 if n is even. But this has
very little to do with the present problem. It is one thing to determine
the chromatic number of a graph. It is quite another to count how
many valid colourings are possible when the number of colours equals
the chromatic number (or more). The present problem is of the latter
type.
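Since the parameters are so small, both counts above can also be confirmed by brute force (a Python sketch; the colour letters r, b, g are arbitrary labels):

    from itertools import product

    count = 0
    for hats in product("rbg", repeat=5):         # hats for A, B, C, D, E around the circle
        if all(hats[i] != hats[(i + 1) % 5] for i in range(5)):
            count += 1
    print(count)                                  # 30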

Q.11 Let |X| denote the number of elements in a set X. Let S = {1, 2, 3, 4, 5, 6}
be a sample space, where each element is equally likely to occur. If A
and B are independent events associated with S, then the number of
ordered pairs (A, B) such that 1 ≤ |B| < |A|, equals ... .

Answer and Comments: 422. Two events A and B are said to be


independent when P (A ∩ B) = P (A)P (B). In the present case, as all
elements of S are equally likely, for any event X associated with S (i.e. represented by a subset X of S), P(X) is simply $\frac{|X|}{6}$. So the condition about independence of A and B reduces to
$$\frac{|A \cap B|}{6} = \frac{|A|}{6}\cdot\frac{|B|}{6} \qquad (1)$$

i.e.

6x = yz (2)

where x, y, z denote respectively, |A ∩ B|, |A| and |B|. Evidently, x ≤


min{y, z}. We are also given that

1≤z<y≤6 (3)

So, we first consider all possible solutions of (2) subject to the con-
straints in (3). x has to be between 0 and 6. But x = 0 is ruled out as
y, z are both positive. Similarly, x = 6 would make y = z = 6 which is
ruled out as z < y. For every x between 1 and 5, z = x and y = 6 gives
one possible solution. In this case, |A ∩ B| = |B| and so B ⊂ A. Here
A has to be S and B a proper subset of A, i.e. a subset other than ∅
and A. So the number of ordered pairs (A, B) of this type is the same
as the number of proper subsets of S, i.e. $2^6 - 2 = 62$.
But there are some other solutions of (2) for certain values of x.
When x = 1, z = 2 and y = 3 is a solution. This means |A| = 3, |B| = 2
and |A ∩ B| = 1. So the subsets A − (A ∩ B) and B − (A ∩ B)
have 2 and 1 elements respectively. Further these subsets are mutually
disjoint and also disjoint from A ∩ B as shown in (a) below where the
numbers indicate the numbers of elements in the respective regions.
(The numbers are not proportional to the areas of the regions.)

[Figure: two Venn diagrams inside S. In (a), A and B overlap in 1 element, with 2 more elements in A and 1 more in B. In (b), A and B overlap in 2 elements, with 2 more elements in A and 1 more in B.]

Another possibility is when x = 2, z = 3 and y = 4. This is shown in
(b).
We now count ordered pairs of both these types. In (a), the lone element in A ∩ B can be chosen in 6 ways. Having chosen it, the other element in B can be chosen in 5 ways. After that the two elements in A − (A ∩ B) can be chosen in ${}^4C_2$, i.e. 6 ways. So, in all there are 6 × 5 × 6 = 180 ordered pairs of type (a). A similar reasoning gives that the number of ordered pairs of type (b) is ${}^6C_2 \times 4 \times {}^3C_2 = 15 \times 4 \times 3 = 180$.
Hence the total number of ordered pairs of the type asked is 62 + 180 + 180 = 422.
An unusual problem where the very definition of independence
of events is translated in terms of subsets of the sample space. After
that the number theory, set theory and combinatorics needed are very
elementary.
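Because the sample space is so small, the count 422 can also be verified by brute force over all 64 × 64 ordered pairs of subsets (a Python sketch):

    from itertools import chain, combinations

    S = range(6)
    subsets = list(chain.from_iterable(combinations(S, r) for r in range(7)))

    count = 0
    for A in subsets:
        for B in subsets:
            if 1 <= len(B) < len(A):
                # independence: P(A ∩ B) = P(A)P(B), i.e. 6|A ∩ B| = |A||B|
                if 6 * len(set(A) & set(B)) == len(A) * len(B):
                    count += 1
    print(count)                                  # 422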

Q.12 The value of
$$\sec^{-1}\left(\frac{1}{4}\sum_{k=0}^{10} \sec\left(\frac{7\pi}{12} + \frac{k\pi}{2}\right)\sec\left(\frac{7\pi}{12} + \frac{(k+1)\pi}{2}\right)\right)$$
in the interval $\left[-\frac{\pi}{4}, \frac{3\pi}{4}\right]$ equals ... .

Answer and Comments: 0. Call the expression in the parentheses E. As a first measure of simplification we replace the secants by cosines, which we are more comfortable with. Thus
$$E = \frac{1}{4}\sum_{k=0}^{10} \frac{1}{\cos\left(\frac{7\pi}{12} + \frac{k\pi}{2}\right)\cos\left(\frac{7\pi}{12} + \frac{(k+1)\pi}{2}\right)} \qquad (1)$$

It is tempting to apply the formula for 2 cos α cos β to convert each


denominator to a sum of two cosines. But this will not be of much
avail because that will only complicate each term. (Doing so might
have been a good idea had the product of the two cosines been in the
numerator rather than in the denominator.)
Nevertheless, we notice that the denominator of each term is of the form $\cos\alpha\,\cos\beta$ where $\beta = \alpha + \frac{\pi}{2}$. As a result, $\cos\beta = -\sin\alpha$ and
the product $\cos\alpha\,\sin\alpha$ is amenable to a simplification. So,
$$E = -\frac{1}{2}\sum_{k=0}^{10} \frac{1}{2\cos\left(\frac{7\pi}{12} + \frac{k\pi}{2}\right)\sin\left(\frac{7\pi}{12} + \frac{k\pi}{2}\right)} = -\frac{1}{2}\sum_{k=0}^{10} \frac{1}{\sin\left(\frac{7\pi}{6} + k\pi\right)} = -\frac{1}{2}\sum_{k=0}^{10} \frac{1}{\sin\left((k+1)\pi + \frac{\pi}{6}\right)} \qquad (2)$$
As k is an integer, we can apply the (half-)periodicity of the sine function, whereby
$$\sin(k\pi + \theta) = (-1)^k \sin\theta \qquad (3)$$
Using this and the fact that $\sin\frac{\pi}{6} = \frac{1}{2}$, E further simplifies to
$$E = -\frac{1}{2}\sum_{k=0}^{10} \frac{1}{(-1)^{k+1}\sin\frac{\pi}{6}} = -\sum_{k=0}^{10} (-1)^{k+1} = \sum_{k=0}^{10} (-1)^k \qquad (4)$$

This is a sum of 11 terms alternately 1 and −1, the first term being 1.
So the sum is 1. Hence E = 1.
The (unique) angle $\theta \in \left[-\frac{\pi}{4}, \frac{3\pi}{4}\right]$ with sec θ = 1 is 0.
A simple problem based on the periodicity of the sine function.
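The whole computation collapses to one line numerically (a Python sketch):

    from math import cos, pi

    E = sum(1 / (cos(7 * pi / 12 + k * pi / 2) * cos(7 * pi / 12 + (k + 1) * pi / 2))
            for k in range(11)) / 4
    print(round(E, 12))   # 1.0; and the angle in [-pi/4, 3*pi/4] with sec(theta) = 1 is 0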

Q.13 Dropped. (Evaluation of a definite integral.)

Q.14 Let $\vec{a} = 2\hat{i} + \hat{j} - \hat{k}$ and $\vec{b} = \hat{i} + 2\hat{j} + \hat{k}$ be two vectors. Consider a vector $\vec{c} = \alpha\vec{a} + \beta\vec{b}$, α, β ∈ IR. If the projection of $\vec{c}$ on the vector $(\vec{a} + \vec{b})$ is $3\sqrt{2}$, then the minimum value of $(\vec{c} - (\vec{a} \times \vec{b})) \cdot \vec{c}$ equals ... .

Answer and Comments: 18. The function to be minimised is a


function, say f (α, β) of two real variables α and β. It is to be minimised
subject to a constraint, given by the projection requirement.

By a direct calculation,
~c = α(2î + ĵ − k̂) + β(î + 2ĵ + k̂)
= (2α + β)î + (α + 2β)ĵ + (β − α)k̂ (1)
Also
~a + ~b = 3î + 3ĵ (2)

Therefore a unit vector, say $\hat{u}$, in the direction of $\vec{a} + \vec{b}$ is
$$\hat{u} = \frac{1}{\sqrt{2}}(\hat{i} + \hat{j}) \qquad (3)$$
Therefore the projection of $\vec{c}$ on $\vec{a} + \vec{b}$ is
$$\vec{c} \cdot \hat{u} = \frac{1}{\sqrt{2}}\left(\vec{c} \cdot (\hat{i} + \hat{j})\right) = \frac{1}{\sqrt{2}}\left((2\alpha + \beta) + (\alpha + 2\beta)\right) = \frac{3(\alpha + \beta)}{\sqrt{2}} \qquad (4)$$
Equating this with $3\sqrt{2}$, we get the constraint which α and β have to satisfy, viz.
$$\alpha + \beta = 2 \qquad (5)$$

Because of this relationship, $\vec{c}$ simplifies to
$$\vec{c} = (\alpha + 2)\hat{i} + (\beta + 2)\hat{j} + (\beta - \alpha)\hat{k} = (\alpha + 2)\hat{i} + (4 - \alpha)\hat{j} + (2 - 2\alpha)\hat{k} \qquad (6)$$

Note also that since ~c is given as a linear combination of ~a and ~b,


(~a × ~b) · ~c = 0 without any calculation. Therefore,

$$f(\alpha, \beta) = (\vec{c} - (\vec{a} \times \vec{b})) \cdot \vec{c} = \vec{c} \cdot \vec{c} \qquad (7)$$
$$= (\alpha + 2)^2 + (4 - \alpha)^2 + 4(1 - \alpha)^2 = 6\alpha^2 - 12\alpha + 24 \qquad (8)$$
$$= 6(\alpha - 1)^2 + 18 \qquad (9)$$

whose minimum value is 18. (It occurs when α = 1 and hence β = 1
too. But that is not asked.)
A straightforward problem. Those who miss the simplifications used
in calculating f (α, β) are prone to make numerical mistakes.
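A brute-force numerical check of the constrained minimum (a NumPy sketch; the grid over α is my choice):

    import numpy as np

    a = np.array([2., 1., -1.])
    b = np.array([1., 2., 1.])
    u = (a + b) / np.linalg.norm(a + b)                    # unit vector along a + b

    vals = []
    for alpha in np.arange(-5, 5, 0.001):                  # beta = 2 - alpha enforces (5)
        c = alpha * a + (2 - alpha) * b
        assert abs(np.dot(c, u) - 3 * np.sqrt(2)) < 1e-9   # the projection is indeed 3*sqrt(2)
        vals.append(np.dot(c - np.cross(a, b), c))
    print(round(min(vals), 4))                             # 18.0, attained near alpha = 1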

SECTION - 3
This section contains TWO paragraphs. Based on each paragraph, there
are two multiple option questions of the List-Match type. Only one of the
options is correct.
There are 3 marks if the correct option (and nothing else) is chosen, 0 marks if no option is chosen and −1 mark in all other cases.

Paragraph for Q.15 and Q.16


Let f (x) = sin(π cos x) and g(x) = cos(2π sin x) be two functions de-
fined for x > 0. Define the following sets whose elements are written in the
increasing order:

X = {x : f (x) = 0}, Y = {x : f ′ (x) = 0},

Z = {x : g(x) = 0}, W = {x : g ′ (x) = 0}.

List-I contains the sets X, Y, Z and W. List-II contains some information
regarding these sets.
List-I List-II

(I) X   (P) ⊇ {π/2, 3π/2, 4π, 7π}

(II) Y   (Q) an arithmetic progression

(III) Z   (R) NOT an arithmetic progression

(IV) W   (S) ⊇ {π/6, 7π/6, 13π/6}

(T) ⊇ {π/3, 2π/3, π}

(U) ⊇ {π/6, 3π/4}

Q.15 Which of the following is the only CORRECT combination?

(A) (I), (P), (R) (B) (II), (Q), (T) (C) (I), (Q), (U) (D) (II), (R), (S)

Answer and Comments: (B). The definitions of all four sets X, Y, Z


and W are quite clear. It is not clear what is gained by saying that
their elements are written in an increasing order. Probably that is
considered as an essential requirement of a progression.
Anyway, to answer the question, we need to identify the zeros of f
and of f ′ .
Clearly, f(x) = sin(π cos x) = 0 if and only if π cos x is an integral multiple of π, which can happen only when cos x = 0 or ±1. Hence X consists of all multiples of π/2. Hence only (P) and (Q) are correct. That rules out the options (A) and (C). To choose between (B) and (D), we need to identify Y, the set of zeros of f′. Since f′(x) = −π cos(π cos x) sin x, f′(x) can vanish only when sin x = 0 or cos(π cos x) = 0. The first possibility gives all integral multiples of π. The second gives all values of x for which π cos x is an odd multiple of π/2, i.e. cos x is an odd multiple of 1/2. This is possible only when cos x = ±1/2, that is when x = 2kπ ± π/3 or x = 2kπ ± 2π/3 for some integer k. Combined with all integral multiples of π, the set Y is now all integral multiples of π/3. These form an A.P. Hence (Q) is true and (R) false. So without any further checking, (B), if at all, must be true. Still, it is easy to verify that Y contains π/3, 2π/3 and π. So (T) is true too.
An absolutely trivial problem.

Q.16 Which of the following is the only CORRECT combination?


(A) (III), (R), (U) (B) (IV), (P), (R), (S)
(C) (III), (P), (Q), (U) (D) (IV), (Q), (T)

Answer and Comments: (B). This time the options deal with the
zeros of g(x) and of g ′ (x). Qualitatively, the work is the same as in the
last question.
So g(x) = cos(2π sin x) vanishes only when 2π sin x is an odd multiple of π/2, i.e. 2 sin x is an odd multiple of 1/2, i.e. when sin x is an odd multiple of 1/4. For this to be possible, sin x = ±1/4 or sin x = ±3/4. So (U) fails since sin(π/6) = 1/2 is an even and not an odd multiple of 1/4. That rules out both (A) and (C), without bothering to check anything else.
Now the choice of the correct option is narrowed down to (B) and
(D), both of which are about W i.e. the set of zeros of g ′(x). By a
direct calculation

g ′ (x) = −2π sin(2π sin x) cos x

Hence for g′(x) to vanish, either cos x = 0 or sin(2π sin x) = 0. The first possibility gives x as an odd multiple of π/2. The second gives 2π sin x as an integral multiple of π, i.e. 2 sin x as an integer, which is possible only when sin x = 0, ±1/2, ±1. Hence x is an odd multiple of π/2, which is already included (in the zeros of cos x), or x is an angle of the form kπ, or kπ ± π/6 for some integer k. All possibilities put together show that (P) and (S) hold for W. But π/3 ∉ W. So W is not an A.P. Hence (Q) is false and (R) is true. So (B) is the only correct option.
Another simple problem, but not as trivial as the last one because it
takes some thought to realise that W is not an arithmetic progression.
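A quick enumeration makes this visible (a Python sketch listing the elements of W in (0, 2π], as identified above):

    from math import pi

    W = [pi/6, pi/2, 5*pi/6, pi, 7*pi/6, 3*pi/2, 11*pi/6, 2*pi]
    print([round(w / pi, 4) for w in W])                              # 1/6, 1/2, 5/6, 1, 7/6, 3/2, 11/6, 2 (times pi)
    print([round((W[i+1] - W[i]) / pi, 4) for i in range(len(W)-1)])  # the gaps are not all equal, so W is not an A.P.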
At any rate, when there is already a problem (Q.5) about solutions
of trigonometric equations, there was no need to include two more
questions on that topic in the same paper.

Q.17 and Q.18 dropped. (The paragraph is about circles in coordinate


geometry.)

CONCLUDING REMARKS

As in the past, the papers are a mixed bag. The striking feature this year
is the importance given to matrices. Q.2 and Q.6 of Paper 1 and Q.1 and
Q.2 of Paper 2 are about matrices. All of them are good questions. But
those in Paper 2 use the concepts of trace and eigenvalues, without using
these words. It is time that these things are specifically included in the JEE
syllabus. Otherwise those who know them get an unfair advantage over the
other candidates. Becuase of a lapse on the part of the paper setters, an
honest answer to Q.2 in Paper 2 (based on invariance of eigenvalues under
similarity of matrices) requires a lot of computation. It is possible that
the paper setters had some elegant solution in mind. It is impossible to

say. At present, the JEE paper setters are asked to provide justifications
to the organisers of the JEE. It will serve a very valuable purpose if these
justifications are passed on to the public.
In Q.5 of Paper 2, too, those who are familiar with the Fibonacci numbers
will have an easier time. As already pointed out, there is a lapse in the
framing of Q.4 of Paper 1 (about the definition of the region) on finding the
area. Q.15 (about the intersection of three arithmetic progressions) is a very
good question, based on the Chinese remainder theorem.
There is only one problem on differential equations, viz. Q.11 of Paper 1.
But there too, the differential equation itself is easy to construct and solve. The
real challenge is in making the choice of the sign.
In Paper 2, the best question is probably Q.5 about the locations of the
points of local extrema of the function f(x) = sin(πx)/x². In the presence of this
question, there was no need to ask Q.15 and 16 which also reduce to solutions
of trigonometric equations. Instead, a couple of questions on inequalities
(which are totally ignored this year) would have been welcome. The paper
setters do deserve some praise for bringing in binomial identities in Q.9 of
Paper 2, which are also often a neglected lot after the JEE became totally
MCQ format. Q.10 of Paper 2 (giving coloured hats to five persons seated in
a circle) is a nice combinatorial problem. Last year too there was a problem
based on the principle of inclusion and exclusion. That problem too, could
be done by an elementary argument. But in this year's problem, once the
essential idea strikes, the computations do not run into many cases as they
did last year.
Q.11 of Paper 2 on counting the pairs of independent events is unusual in
that it requires the very definition of independence of two events. Singularly
missing this year are any problems based on some novel real life settings.
Keeping in mind the current political situation, a probability problem about
prediction of the results of an election would have been very appealing this
year. (Incidentally, the paper setters deserve to be commended for not asking
any question where the number 2019 is involved. This practice probably
began with the International Mathematics Olympiads. But as many others
copied it, it lost its appeal. It is good that the JEE paper setters avoided
the temptation.)
It is fairly obvious that in many multiple options problems, the main theme
deals with only one or two options. But as four options are mandatory, some
appendages to the main theme are cooked up. Since many questions have
the possibility of more than one correct options, this results in a lot of waste

of time of the candidates.
The JEE policy to have the same format across all the three subjects is
faulty in its inception, because it ignores the unique character of mathemat-
ics. Ease of mechanical evaluation is a lame justification if it cuts at the
academics. The ideal solution would be to go back to the old practice of
requiring the candidates to write justifications. But in its absence, many
things can still be salvaged if all mathematics questions are made numerical
answer questions. The paper-setters must, of course, give the reasoning. And
as noted before, this ought to be made public along with the key.

