
UtilitasMathematica
ISSN 0315-3681, Volume 120, 2023

Asymptotic Normality of Estimator Variance Components for One-Way Repeated Measurements Model

Hadeel Ismail Mustafa¹, Abdul Hussein Saber AL-Mouel²

¹ Computer Information System Department, College of Computer Science and Information Systems, University of Basrah, Iraq, hadeelismu@gmail.com
² Mathematics Department, College of Education for Pure Science, University of Basrah, Iraq, abdulhusseinsaber@yahoo.com

Abstract

In this research, the mean bias reduction method is modified to estimate the variance components of the repeated measurements model by replacing the bias function in the mean bias reduction method with one that depends on a sample of independent observations simulated from the study model with the given variance components. The result is a modified function that converges to the mean bias reduction function, and from it we obtain new estimators of the variance components of the repeated measurements model. The aim of this research is to study the behavior of the new estimator of the variance components in the repeated measurements model, obtained from the modification of the mean bias reduction method, by establishing the asymptotic normality of the modified method's estimator.

1. Introduction
Repeated measurements models are statistical models used to analyze data collected from the same subjects or objects at different points in time. They are commonly used in many fields, including medical research, the social sciences, and engineering, to study how variables change over time and to understand the relationships between them. One of the key advantages of repeated measurements models is their ability to account for the correlation between measurements taken from the same subject. This correlation arises because the measurements are not independent of each other, so ignoring it can lead to biased estimates of the model parameters. In practice, repeated measurements models are used to analyze data from longitudinal studies, in which subjects are followed over a period of time to study the progression of a disease or the effect of an intervention. They are also used in quality control, where multiple measurements of the same product or process are taken to ensure consistency and reliability. The estimation of variance plays a critical role in repeated measurements models, as it quantifies the amount of variability in the data. In practice the variance is estimated from sample data, and reliable, accurate estimators of the variance are needed to ensure the validity of the statistical analysis. Asymptotic normality of estimators is an important concept in repeated measurements models: as the sample size increases, the distribution of the suitably scaled estimator approaches a normal distribution, the estimator becomes more accurate, and its variability decreases. This property allows us to assess the precision of the estimator and to determine the sample size needed to achieve a desired level of precision. In addition, the asymptotic behavior of estimators helps us to understand how an estimator behaves under different conditions, such as when the data are skewed or contain outliers. The aim of this research is to study the behavior of the new estimator of the variance components in the repeated measurements model, obtained from the modification of the mean bias reduction method, by establishing the asymptotic normality of the modified method's estimator.

2. Description of the model

The one-way repeated measurements model is defined as follows:

$$t_{ijk} = \xi + \varphi_j + \Psi_k + (\varphi\Psi)_{jk} + \omega_{1i(j)} + \omega_{2i(k)} + e_{ijk} \qquad (1)$$

where $i = 1, 2, \dots, n_1$ indexes the experimental units, $j = 1, 2, \dots, n_2$ indexes the levels of the between-units factor (group), and $k = 1, 2, \dots, n_3$ indexes the levels of the within-units factor (time); $t_{ijk}$ is the measurement of the unit's response over time in a group, $e_{ijk}$ is the random error, and $\xi$ is the overall mean. Table (1) classifies the effects of model (1) by type.
Table (1): Classification of the effects of the one-way repeated measurements model.

| Factor | Type | Definition | Condition |
|---|---|---|---|
| $\varphi_j$ | Fixed | between-units treatment factor (group) | $\sum_{j=1}^{n_2}\varphi_j = 0$ |
| $\Psi_k$ | Fixed | within-units treatment factor (time) | $\sum_{k=1}^{n_3}\Psi_k = 0$ |
| $(\varphi\Psi)_{jk}$ | Fixed | interaction effect (between × within units) | $\sum_{j=1}^{n_2}(\varphi\Psi)_{jk} = 0$, $\sum_{k=1}^{n_3}(\varphi\Psi)_{jk} = 0$ |
| $\omega_{1i(j)}$ | Random | random effect of unit $i$ within group $j$ | $\omega_{1i(j)} \sim N(0, \sigma^2_{\omega_1})$ |
| $\omega_{2i(k)}$ | Random | random effect of unit $i$ within time $k$ | $\omega_{2i(k)} \sim N(0, \sigma^2_{\omega_2})$ |
| $e_{ijk}$ | Random | random error | $e_{ijk} \sim N(0, \sigma^2_e)$ |
To formulate our model in matrix form, let $I$ denote the identity matrix, $\mathbf{1}$ the vector of ones, and $\otimes$ the Kronecker product. First we write model (1) as

$$t_{ij} = \alpha_j + \mathbf{1}_{n_3} b_{i(j)} + c_i + e_{ij} \qquad (2)$$

where $t_{ij} = [t_{ij1}, \dots, t_{ijn_3}]'$ is the vector of responses, $\alpha_j = [\alpha_{j1}, \dots, \alpha_{jn_3}]'$ is the vector of fixed treatments, $b_{i(j)} = \omega_{1i(j)}$ is the random effect of unit $i$ within group $j$, $c_i = [c_{i(1)}, \dots, c_{i(n_3)}]'$ is the vector of random effects of unit $i$ within time (with $c_{i(k)} = \omega_{2i(k)}$), and $e_{ij} = [e_{ij1}, \dots, e_{ijn_3}]'$ is the vector of random errors. Let $\delta_{ij}$ be the group indicator,

$$\delta_{ij} = \begin{cases}1 & \text{if unit } i \text{ is from group } j,\\ 0 & \text{otherwise,}\end{cases} \qquad j = 1, \dots, n_2.$$

By setting $\delta_i' = [\delta_{i1}, \dots, \delta_{in_2}]$, we can rewrite model (2) as

$$t_i = X_i\,\alpha + Z_i W + \varepsilon_i \qquad (3)$$

where $t_i' = [t_{i1}', \dots, t_{in_3}']$, $X_i = I_{n_3} \otimes \delta_i$, $\alpha' = [\alpha_{11}, \dots, \alpha_{n_2 n_3}]$, $Z_i = \mathbf{1}_{n_3} \otimes \mathbf{1}_{n_2}$, $W = [\omega_{1i(j)}, \omega_{2i(k)}]'$, $\varepsilon_i' = [\varepsilon_{i1}, \dots, \varepsilon_{in_3}]$ and

$$\alpha = \operatorname{Vec}\begin{bmatrix}\alpha_{11} & \cdots & \alpha_{1n_3}\\ \vdots & & \vdots\\ \alpha_{n_2 1} & \cdots & \alpha_{n_2 n_3}\end{bmatrix}.$$

The final formulation of the study model is an alternative form of model (3), shown in (4). Let $\tau' = [t_1', \dots, t_{n_1}']$, $X' = [X_1', \dots, X_{n_1}']$, $W = [\omega_{1i(j)}, \omega_{2i(k)}, \varepsilon_i]'$ and $Z = [Z_1', \dots, Z_{n_1}']$; then

$$\tau = X\alpha + ZW \qquad (4)$$

The variance matrix of $\tau$ is block diagonal,

$$\Sigma = \begin{bmatrix}\Sigma_{\omega_1} + \Sigma_{\omega_2} + \Sigma_e & \mathbf{0} & \cdots & \mathbf{0}\\ \mathbf{0} & \Sigma_{\omega_1} + \Sigma_{\omega_2} + \Sigma_e & & \vdots\\ \vdots & & \ddots & \\ \mathbf{0} & \cdots & & \Sigma_{\omega_1} + \Sigma_{\omega_2} + \Sigma_e\end{bmatrix}$$

where, with $J$ denoting the matrix of ones,

$$\Sigma_{\omega_1} = \sigma^2_{\omega_1}[I_{n_1} \otimes I_{n_2} \otimes J_{n_3}], \qquad \Sigma_{\omega_2} = \sigma^2_{\omega_2}[I_{n_1} \otimes J_{n_2} \otimes I_{n_3}], \qquad \Sigma_e = \sigma^2_e[I_{n_1} \otimes I_{n_2} \otimes I_{n_3}].$$
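The Kronecker-product structure of $\Sigma$ above can be sketched numerically. The sizes and variance values below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Sketch (illustrative values, not from the paper): build the covariance
# matrix of tau for small sizes n1, n2, n3 and hypothetical variance components.
n1, n2, n3 = 4, 2, 3
s2_w1, s2_w2, s2_e = 1.0, 0.5, 0.25  # sigma^2_omega1, sigma^2_omega2, sigma^2_e

I = np.eye
J = lambda n: np.ones((n, n))  # matrix of ones

# Sigma = Sigma_omega1 + Sigma_omega2 + Sigma_e, each a triple Kronecker product
Sigma = (s2_w1 * np.kron(I(n1), np.kron(I(n2), J(n3)))
         + s2_w2 * np.kron(I(n1), np.kron(J(n2), I(n3)))
         + s2_e * np.kron(I(n1), np.kron(I(n2), I(n3))))

# Sigma is block diagonal over experimental units: entries coupling two
# different units i are zero, and each diagonal block has size n2*n3.
assert Sigma.shape == (n1 * n2 * n3, n1 * n2 * n3)
block = n2 * n3
assert np.all(Sigma[:block, block:2 * block] == 0)

# The variance of a single observation is the sum of the three components.
assert np.isclose(Sigma[0, 0], s2_w1 + s2_w2 + s2_e)
```

The leading factor $I_{n_1}$ in every term is what makes $\Sigma$ block diagonal over units, matching the display above.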


3. Reduction Estimation of variance components

To estimate the variance components of the repeated measurements model (4) by maximum likelihood, we derive the likelihood score with respect to $\Theta = (\sigma^2_{\omega_1}, \sigma^2_{\omega_2}, \sigma^2_e)$, where $\Theta_r$ is the $r$-th component of the vector of variance components $\Theta$ and $\Sigma_r$ denotes $\partial\Sigma/\partial\Theta_r$. The score is given by formula (5), and solving equation (5) yields estimators of the variance components that are biased:

$$\Gamma_\Theta = \begin{bmatrix}\frac{1}{2}\left[R'\,\Sigma^{-1}(\Theta)\,\Sigma_0\,\Sigma^{-1}(\Theta)\,R - \operatorname{tr}\!\left(\Sigma^{-1}(\Theta)\Sigma_0\right)\right]\\[4pt] \frac{1}{2}\left[R'\,\Sigma^{-1}(\Theta)\,\Sigma_1\,\Sigma^{-1}(\Theta)\,R - \operatorname{tr}\!\left(\Sigma^{-1}(\Theta)\Sigma_1\right)\right]\\[4pt] \frac{1}{2}\left[R'\,\Sigma^{-1}(\Theta)\,\Sigma_2\,\Sigma^{-1}(\Theta)\,R - \operatorname{tr}\!\left(\Sigma^{-1}(\Theta)\Sigma_2\right)\right]\end{bmatrix} \qquad (5)$$

To reduce the bias of the maximum likelihood estimators, we use the mean bias reduction method, in which the adjusted score with respect to the variance components is defined in formula (6); solving equation (6) gives $\tilde\Theta$, estimators of the variance components that are less biased than the maximum likelihood ones.


$$\Gamma_{\tilde\Theta} = \begin{bmatrix}\dfrac{\partial l(\alpha,\Sigma)}{\partial\Theta_0}\\[6pt] \dfrac{\partial l(\alpha,\Sigma)}{\partial\Theta_1}\\[6pt] \dfrac{\partial l(\alpha,\Sigma)}{\partial\Theta_2}\end{bmatrix} - E(\Theta)\,M \qquad (6)$$

where

$$E(\Theta) = \begin{bmatrix}0 & E_{01} & E_{02}\\ E_{10} & 0 & E_{12}\\ E_{20} & E_{21} & 0\end{bmatrix}, \qquad E_{rs} = \mathrm{E}\!\left(\frac{\partial^2 l(\alpha,\Sigma)}{\partial\Theta_r\,\partial\Theta_s}\right) = -\frac{1}{2}\operatorname{tr}\!\left(\Sigma^{-1}\Sigma_r\,\Sigma^{-1}\Sigma_s\right), \quad r \neq s,\ r, s = 0, 1, 2.$$

Set

$$M = \frac{1}{2}\operatorname{tr}\left\{\Sigma^{-1}\begin{bmatrix}\frac{\partial^2\Sigma}{\partial\Theta_0\partial\Theta_0} & \frac{\partial^2\Sigma}{\partial\Theta_0\partial\Theta_1} & \frac{\partial^2\Sigma}{\partial\Theta_0\partial\Theta_2}\\[2pt] \frac{\partial^2\Sigma}{\partial\Theta_1\partial\Theta_0} & \frac{\partial^2\Sigma}{\partial\Theta_1\partial\Theta_1} & \frac{\partial^2\Sigma}{\partial\Theta_1\partial\Theta_2}\\[2pt] \frac{\partial^2\Sigma}{\partial\Theta_2\partial\Theta_0} & \frac{\partial^2\Sigma}{\partial\Theta_2\partial\Theta_1} & \frac{\partial^2\Sigma}{\partial\Theta_2\partial\Theta_2}\end{bmatrix}\Sigma^{-1}\begin{bmatrix}\frac{\partial\Sigma}{\partial\Theta_0}\\[2pt] \frac{\partial\Sigma}{\partial\Theta_1}\\[2pt] \frac{\partial\Sigma}{\partial\Theta_2}\end{bmatrix}\right\} \qquad (7)$$

which is the (first-order) bias of $\tilde\Theta$.
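Each score entry in (5) has the form $\frac{1}{2}[R'\Sigma^{-1}\Sigma_r\Sigma^{-1}R - \operatorname{tr}(\Sigma^{-1}\Sigma_r)]$ with $\Sigma_r = \partial\Sigma/\partial\Theta_r$, and this can be checked numerically against a finite-difference derivative of the Gaussian log-likelihood. The one-parameter covariance structure below is a hypothetical illustration, not the paper's model:

```python
import numpy as np

# Numerical check of the score form in (5): for a zero-mean Gaussian vector R
# with covariance Sigma(theta), dl/dtheta equals
# (1/2)[R' Sigma^{-1} Sigma_r Sigma^{-1} R - tr(Sigma^{-1} Sigma_r)],
# where Sigma_r = dSigma/dtheta. Illustrated for a hypothetical
# one-parameter structure Sigma(theta) = theta*J + I (not the paper's model).
rng = np.random.default_rng(2)
n, theta = 5, 0.7

J = np.ones((n, n))
Sigma = lambda th: th * J + np.eye(n)
Sigma_r = J  # derivative of Sigma with respect to theta

R = rng.normal(size=n)

def loglik(th):
    # Gaussian log-likelihood up to an additive constant
    S = Sigma(th)
    sign, logdet = np.linalg.slogdet(S)
    return -0.5 * (logdet + R @ np.linalg.solve(S, R))

# Score via the closed form of (5)
Si = np.linalg.inv(Sigma(theta))
score = 0.5 * (R @ Si @ Sigma_r @ Si @ R - np.trace(Si @ Sigma_r))

# Score via a central finite difference of the log-likelihood
h = 1e-6
score_fd = (loglik(theta + h) - loglik(theta - h)) / (2 * h)
assert abs(score - score_fd) < 1e-5
```

The two values agree to finite-difference accuracy, confirming that (5) is the derivative of the Gaussian log-likelihood with respect to a variance parameter.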
4. Method based on Mean bias reduction

In some cases it is difficult to find an analytical solution for the bias $M$ defined by formula (7), so we define instead

$$M^*(\Theta) = \frac{1}{D}\sum_{\kappa=1}^{D}\tilde\Theta(X_\kappa) - \Theta,$$

where $\tilde\Theta(X_\kappa)$ is a solution of $\Gamma_\Theta(X_\kappa) = 0$ and the $X_\kappa$ are responses simulated from the model with parameter $\Theta$. Substituting $M^*(\tilde\Theta)$ into formula (6), we obtain a new equation $\Gamma_{\tilde\Theta}^*$ based on the mean bias reduction method:

$$\Gamma_{\tilde\Theta}^* = \begin{bmatrix}\dfrac{\partial l(\alpha,\Sigma)}{\partial\Theta_0}\\[6pt] \dfrac{\partial l(\alpha,\Sigma)}{\partial\Theta_1}\\[6pt] \dfrac{\partial l(\alpha,\Sigma)}{\partial\Theta_2}\end{bmatrix} - \begin{bmatrix}0 & E_{01} & E_{02}\\ E_{10} & 0 & E_{12}\\ E_{20} & E_{21} & 0\end{bmatrix}\begin{bmatrix}D^{-1}\sum_{\kappa=1}^{D}\tilde\Theta_0(X_\kappa) - \Theta_0\\[2pt] D^{-1}\sum_{\kappa=1}^{D}\tilde\Theta_1(X_\kappa) - \Theta_1\\[2pt] D^{-1}\sum_{\kappa=1}^{D}\tilde\Theta_2(X_\kappa) - \Theta_2\end{bmatrix} \qquad (8)$$
Theorem 1 (asymptotic consistency). If $\ddot A \subset \Re^p$ is a compact parameter set and $\Gamma_\Theta$ and $E(\Theta)$ are continuous functions of $\Theta$, then $\Gamma_{\tilde\Theta}$ and $\Gamma_{\tilde\Theta}^*$ converge in probability to $\Gamma_\Theta$.

Proof:

Write

$$\Gamma_{\tilde\Theta}^* = \frac{1}{D}\sum_{\kappa=1}^{D}\left\{\begin{bmatrix}\dfrac{\partial l(\alpha,\Sigma)}{\partial\Theta_0}\\[6pt] \dfrac{\partial l(\alpha,\Sigma)}{\partial\Theta_1}\\[6pt] \dfrac{\partial l(\alpha,\Sigma)}{\partial\Theta_2}\end{bmatrix} - \begin{bmatrix}0 & E_{01} & E_{02}\\ E_{10} & 0 & E_{12}\\ E_{20} & E_{21} & 0\end{bmatrix}\begin{bmatrix}\tilde\Theta_0(X_\kappa) - \Theta_0\\ \tilde\Theta_1(X_\kappa) - \Theta_1\\ \tilde\Theta_2(X_\kappa) - \Theta_2\end{bmatrix}\right\}.$$

For every $\Theta \in \ddot A$ the summand is continuous in $\Theta$. By the triangle inequality, with

$$\mathcal K = \left\|\begin{bmatrix}\dfrac{\partial l(\alpha,\Sigma)}{\partial\Theta_0}\\[6pt] \dfrac{\partial l(\alpha,\Sigma)}{\partial\Theta_1}\\[6pt] \dfrac{\partial l(\alpha,\Sigma)}{\partial\Theta_2}\end{bmatrix}\right\| + \left\|\begin{bmatrix}0 & E_{01} & E_{02}\\ E_{10} & 0 & E_{12}\\ E_{20} & E_{21} & 0\end{bmatrix}\begin{bmatrix}\tilde\Theta_0(X_\kappa) - \Theta_0\\ \tilde\Theta_1(X_\kappa) - \Theta_1\\ \tilde\Theta_2(X_\kappa) - \Theta_2\end{bmatrix}\right\|,$$

the summand is bounded in norm by $\mathcal K$ and hence bounded on $\ddot A$. For every $\varepsilon > 0$ there exist $\delta > 0$ and $\Theta_0, \Theta \in \ddot A$ such that $|\Theta - \Theta_0| < \delta$; let $\dot A$ be a partition of $\ddot A$ with mesh $\|\dot A\| < \delta/\sqrt p$. Since $\mathcal K$ is continuous on $\ddot A$, there are $\Theta_i, \Theta_{0i}$ in $\ddot A_i$ such that $\mathcal K(\Theta_i) = \sup_{\Theta \in \ddot A_i}\mathcal K(\Theta)$ and $\mathcal K(\Theta_{0i}) = \inf_{\Theta \in \ddot A_i}\mathcal K(\Theta)$. Since $\|\dot A\| < \delta/\sqrt p$ and $|\Theta_i - \Theta_{0i}| < \delta$, taking $\Theta = \Theta_i$ and $\Theta_0 = \Theta_{0i}$ gives $|\mathcal K(\Theta) - \mathcal K(\Theta_0)| < \varepsilon$; that is, $\mathcal K$ is uniformly continuous. By the Heine–Borel theorem $\ddot A$ is closed and bounded. Since $\Gamma_{\tilde\Theta} = \Gamma_\Theta$ when $\tilde\Theta = \Theta$ (so that $M = 0$), we have $\Gamma_{\tilde\Theta} \xrightarrow{P} \Gamma_\Theta$; similarly $\Gamma_{\tilde\Theta}^* \to \Gamma_\Theta$ as $D \to \infty$, so $\Gamma_{\tilde\Theta}^* \xrightarrow{P} \Gamma_\Theta$. Therefore

$$\|\Gamma_{\tilde\Theta} - \Gamma_{\tilde\Theta}^*\| \le \|\Gamma_{\tilde\Theta} - \Gamma_\Theta + \Gamma_\Theta - \Gamma_{\tilde\Theta}^*\| \le \|\Gamma_{\tilde\Theta} - \Gamma_\Theta\| + \|\Gamma_\Theta - \Gamma_{\tilde\Theta}^*\|,$$

$$\sup\|\Gamma_{\tilde\Theta} - \Gamma_{\tilde\Theta}^*\| \le \sup\|\Gamma_{\tilde\Theta} - \Gamma_\Theta\| + \sup\|\Gamma_{\tilde\Theta}^* - \Gamma_\Theta\| \to 0 \quad \text{as } D \to \infty. \qquad \square$$

Theorem 2. If $\ddot A \subset \Re^p$ is a compact parameter set, $\Gamma_\Theta$ and $E(\Theta)$ are continuous functions of $\Theta$, $\Gamma_{\tilde\Theta}$ has a unique solution at $\tilde\Theta \in \ddot A$, and $\Gamma_{\tilde\Theta}^* \to \Gamma_{\tilde\Theta}$ uniformly as $D \to \infty$, then any $\tilde\Theta^* \in \ddot A$ with $\Gamma_{\tilde\Theta}^*(\tilde\Theta^*) = 0$ satisfies $\tilde\Theta^* \xrightarrow{P} \tilde\Theta$.

Proof:

This follows easily from Theorem (1): $\Gamma_{\tilde\Theta}^* \to \Gamma_{\tilde\Theta}$ uniformly as $D \to \infty$, so for every $\varepsilon > 0$ there exists $\mathcal B > 0$ such that for $D > \mathcal B$,

$$\varepsilon > \sup_{\Theta \in \ddot A}\|\Gamma_{\tilde\Theta} - \Gamma_{\tilde\Theta}^*\| \ge \|\Gamma_{\tilde\Theta}(\tilde\Theta^*) - \Gamma_{\tilde\Theta}^*(\tilde\Theta^*)\| = \|\Gamma_{\tilde\Theta}(\tilde\Theta^*)\|,$$

since $\Gamma_{\tilde\Theta}^*(\tilde\Theta^*) = 0$. Hence $\tilde\Theta^*$ converges to the unique $\tilde\Theta$. $\square$

5. Asymptotic normality of the $\tilde\Theta^*$ estimator

Asymptotic normality is a fundamental concept in statistics that describes the behavior of statistical estimators as the sample size approaches infinity.

Theorem 3. Let $\ddot A \subset \Re^p$ be a compact parameter set and let $\Gamma_\Theta$ and $E(\Theta)$ be continuous functions of $\Theta$. Suppose that $n^{-1}\sum_{i=1}^{N}\Gamma_{\Theta\Theta} \xrightarrow{P} \lim_{N\to\infty} n^{-1}\sum_{i=1}^{N}E(\Theta)_i$, that the matrix $\mathcal Z = -n^{-1}\sum_{i=1}^{N}E(\Theta)_i$ is positive definite, and that

$$n^{-1}\left|-\frac{\partial\Gamma_{\tilde\Theta}^*}{\partial\Theta_j}\right| \xrightarrow{P} \mathcal Z,$$

where $n = n_1 n_2 n_3$ and $j = 1, 2, \dots, p$. Then $n^{1/2}(\tilde\Theta^* - \Theta) \sim N\!\left(0,\, (1 + D^{-1})E(\Theta)^{-1}\right)$.

Proof:

From the asymptotic consistency theorem, $\tilde\Theta^*$ is a consistent estimator of $\Theta$. By Taylor's theorem applied to $\Gamma_{\tilde\Theta}^*$ about $\tilde\Theta^*$:

$$\Gamma_{\tilde\Theta}^*(\Theta) + \nabla\Gamma_{\tilde\Theta}^*(\check\Theta)\,(\tilde\Theta^* - \Theta) = 0, \qquad \check\Theta = \Theta + \gamma(\tilde\Theta^* - \Theta),\ \gamma \in (0,1),$$

so

$$(\tilde\Theta^* - \Theta) = -\,\Gamma_{\tilde\Theta}^*(\Theta)\,\{\nabla\Gamma_{\tilde\Theta}^*(\check\Theta)\}^{-1},$$

$$n^{1/2}(\tilde\Theta^* - \Theta) = \left\{-\frac{\nabla\Gamma_{\tilde\Theta}^*(\check\Theta)}{n}\right\}^{-1}\left(n^{-1/2}\,\Gamma_{\tilde\Theta}^*(\Theta)\right).$$

From the central limit theorem, $n^{-1/2}\Gamma_\Theta \xrightarrow{d} N(0_p, E(\Theta))$ as $n \to \infty$; then $n^{1/2}(\tilde\Theta^* - \Theta) \xrightarrow{d} N(0_p, E(\Theta)^{-1})$ as $n \to \infty$. Since the $n^{1/2}(\tilde\Theta^*_\kappa - \Theta)$ are independent for $\kappa = 1, 2, \dots, D$, we have the joint limit

$$\begin{bmatrix}n^{1/2}(\tilde\Theta^*_1 - \Theta)\\ n^{1/2}(\tilde\Theta^*_2 - \Theta)\\ \vdots\\ n^{1/2}(\tilde\Theta^*_D - \Theta)\end{bmatrix} \xrightarrow{d} N(0_{pD}, \Xi),$$

where $\Xi$ is block diagonal with blocks $E(\Theta)^{-1}$. From the continuous mapping theorem,

$$n^{1/2}M^*(\Theta)_\kappa = \frac{1}{D}\sum_{\kappa=1}^{D} n^{1/2}(\tilde\Theta^*_\kappa - \Theta) \xrightarrow{d} N(0_p, D^{-1}E(\Theta)^{-1}).$$

Because $n^{-1/2}\sum_{i=1}^{n}\Gamma_\Theta$ and $D^{-1}\sum_{\kappa=1}^{D} n^{1/2}(\tilde\Theta^*_\kappa - \Theta)$ are independent,

$$\begin{bmatrix}n^{-1/2}\sum_{i=1}^{n}\Gamma_\Theta\\ n^{1/2}M^*(\Theta)_\kappa\end{bmatrix} \xrightarrow{d} N\!\left(0_{2p},\, \begin{bmatrix}E(\Theta) & 0_{p\times p}\\ 0_{p\times p} & D^{-1}E(\Theta)^{-1}\end{bmatrix}\right),$$

and we have

$$n^{-1/2}\Gamma_{\tilde\Theta}^* = n^{-1/2}\Gamma_\Theta - n^{-1}E(\Theta)\, n^{1/2}M^*(\Theta)_\kappa \xrightarrow{d} N\!\left(0_p,\, (1 + D^{-1})E(\Theta)\right).$$

From the consistency of $\check\Theta$ and Slutsky's lemma, $n^{1/2}(\tilde\Theta^* - \Theta) \sim N\!\left(0,\, (1 + D^{-1})E(\Theta)^{-1}\right)$. $\square$
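The flavor of Theorem 3 can be probed with a small Monte Carlo experiment. The i.i.d.-normal variance estimator below is a stand-in illustration under assumed settings, not the paper's model; for it, the asymptotic variance $2\sigma^4$ plays the role of $E(\Theta)^{-1}$:

```python
import numpy as np

# Monte Carlo sketch of an asymptotic-normality claim: for a consistent
# variance estimator, sqrt(n)*(theta_hat - theta) should be approximately
# normal with mean 0 as n grows. Illustrated for the ML variance of an
# i.i.d. normal sample, whose asymptotic variance is 2*sigma^4.
rng = np.random.default_rng(1)
n, sigma2, reps = 400, 1.5, 5000

draws = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
theta_hat = np.mean((draws - draws.mean(axis=1, keepdims=True)) ** 2, axis=1)
z = np.sqrt(n) * (theta_hat - sigma2)

# Centred near zero, with variance close to 2*sigma^4 = 4.5
assert abs(z.mean()) < 0.2
assert abs(z.var() - 2 * sigma2 ** 2) < 0.5
```

The small residual offset of the mean reflects the $O(n^{-1/2})$ bias term that the mean bias reduction adjustment is designed to remove.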

6. Conclusions
The mean bias reduction method was modified to estimate the variance components of the repeated measurements model by replacing the bias function in the mean bias reduction method with one that depends on a sample of independent observations simulated from the study model with the given variance components. It was then proved that the modified method converges to the mean bias reduction method, that the estimator of the variance components obtained from the modified method is consistent, and finally that the asymptotic normality of the estimator holds.
