
On ηh Convexity

Muhammad Shoaib Saleem (1), Hamood Ur Rehman (2), Muhammad Imran Qureshi (3), Muhammad Sajid Zahoor (4) and Mamoona Ghafoor (5)

(1) Department of Mathematics, University of Okara; shaby455@yahoo.com
(2) hamood84@gmail.com
(3) Department of Computer Science, COMSATS Vehari Campus; imranqureshi18@gmail.com
(4) sajidzahoor308@gmail.com
(5) mamoona.ghafoor9@gmail.com

Abstract
The definition of the ηh-convex function is introduced in this paper. Basic properties of this class of functions are proved under certain conditions. Moreover, we derive Jensen-type, Hermite-Hadamard and Ostrowski-type inequalities for such functions.

Keywords: η-convex function, ηh-convex function, Ostrowski-type inequality, Hermite-Hadamard inequality, Jensen-type inequality.

1 Introduction
The convexity of sets and functions has been a central topic of study in recent years. Among the many reasons for studying convexity, the most important are its interesting geometric properties and its use in non-linear programming and optimization theory. For convenience, generalizations of classical convexity can be grouped into three categories: first, generalizations of the definition of convexity; second, generalizations of the domain; and third, extensions of the range set of the functions.
Famous generalizations of the definition of convexity are quasi-convex [1], strongly convex [2], logarithmically convex [3], approximately convex [4], η-convex [13], h-convex [5] and midpoint convex functions [6], etc. Works in which the domain is extended include E-convex functions [7], α-convex functions [8] and invex functions [9]. A well-known work in which the range is extended is that on convex vectors [16].
For readers interested in convex analysis, we refer to the books [14] and [15].
The current paper falls into the first category, i.e., in this article the definition of convexity is generalized. The main motivation for this work comes from the papers [10] and [11], in which the authors discussed the concept of η-convexity and derived some important inequalities for η-convex functions.

Throughout this paper, I and J are intervals in R, (0, 1) ⊆ J, and η : A × A → B for appropriate A, B ⊆ R.

Definition 1.1 (η-convex function) [13]
A function f : I → R is called convex with respect to η (briefly, η-convex) if

f(tx + (1 − t)y) ≤ f(y) + tη(f(x), f(y))

for all x, y ∈ I and t ∈ [0, 1].

Definition 1.2 (h-convex function) [5]
Let h : J → R and f : I → R be non-negative functions. We say that f is an h-convex function, or that f belongs to the class SX(h, I), if

f(tx + (1 − t)y) ≤ h(t)f(x) + h(1 − t)f(y) (1.1)

for all x, y ∈ I and t ∈ [0, 1].

Definition 1.3 (Modified h-convex function) [12]
Let h : J → R be a non-negative function. A function f : I → R is called modified h-convex if

f(tx + (1 − t)y) ≤ h(t)f(x) + (1 − h(t))f(y) (1.2)

for all x, y ∈ I and t ∈ [0, 1].

Definition 1.4 A function f : I → R has a local minimum at x0 ∈ I if there is a neighbourhood Nr(x0) ⊆ I such that f(x0) ≤ f(x) for all x ∈ Nr(x0).

It is always of interest in convex analysis to generalize the concept of convexity in a way that unifies already existing definitions. The main purpose of this paper is to introduce a definition of convexity that unifies the concepts of η-convexity and modified h-convexity; we name it ηh-convexity.

Definition 1.5 (ηh-convex function) Let h : J → R be a non-negative function. A function f : I → R is called an ηh-convex function if

f(tx + (1 − t)y) ≤ f(y) + h(t)η(f(x), f(y)) (1.3)

for all x, y ∈ I and t ∈ [0, 1].

1. If we put h(t) = t in (1.3), then f becomes an η-convex function.

2. If η(x, y) = x − y in (1.3), then f becomes a modified h-convex function.

3. If we put h(t) = t and η(x, y) = x − y in (1.3), then f becomes a classical convex function.
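The special cases above can be sanity-checked numerically. The sketch below (the helper name `eta_h_holds` is illustrative, not from the paper) verifies that with h(t) = t and η(x, y) = x − y, inequality (1.3) holds at randomly sampled points for the classical convex function f(x) = x².

```python
import random

def eta_h_holds(f, eta, h, x, y, t, tol=1e-12):
    """Check inequality (1.3): f(t*x + (1-t)*y) <= f(y) + h(t)*eta(f(x), f(y))."""
    return f(t * x + (1 - t) * y) <= f(y) + h(t) * eta(f(x), f(y)) + tol

f = lambda u: u * u        # a classical convex function
eta = lambda u, v: u - v   # special case 3: eta(x, y) = x - y
h = lambda t: t            # special case 3: h(t) = t

random.seed(0)
ok = all(eta_h_holds(f, eta, h,
                     random.uniform(-5, 5), random.uniform(-5, 5), random.random())
         for _ in range(10_000))
print(ok)  # True: for these choices, (1.3) is ordinary convexity
```

For these choices the right-hand side of (1.3) equals f(y) + t(f(x) − f(y)) = tf(x) + (1 − t)f(y), so the check is exactly the classical convexity inequality.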

We observe that, by taking x = y in (1.3), for any x ∈ I and t ∈ [0, 1] we get

h(t)η(f(x), f(x)) ≥ 0,

and since h(t) ≥ 0,

η(f(x), f(x)) ≥ 0

for any x ∈ I. If we take t = 1 and h(1) ≥ 1 in (1.3), we get

f(x) − f(y) ≤ h(1)η(f(x), f(y))

for any x, y ∈ I; this second condition obviously implies the first. So, if we want to define ηh-convex functions f on an interval I of real numbers with h(1) ≥ 1, we should assume that

η(a, b) ≥ a − b (1.4)

for any a, b ∈ I. We observe that if f : I → R is a convex function and η : A × A → R is an arbitrary bifunction that satisfies condition (1.4), and h(t) ≥ t, then for any x, y ∈ I and t ∈ [0, 1] we have

f(tx + (1 − t)y) ≤ f(y) + t(f(x) − f(y)) ≤ f(y) + h(t)η(f(x), f(y)),

which shows that f is an ηh-convex function. However, there exist ηh-convex functions, for some bifunctions η, that are not convex.
Example 1.6 Consider the function f : R → R defined by

f(x) = −x if x ≥ 0,    f(x) = x if x < 0,

and define a bifunction η as η(x, y) = −x − y for all x, y ∈ R⁻, and hk(t) = t^k, k ≤ 1. Then f is an ηh-convex function but not convex.
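A quick numerical probe of Example 1.6 (an illustrative sketch; k = 1/2 is just one admissible value with k ≤ 1): the function f(x) = −|x| satisfies (1.3) with η(x, y) = −x − y and h(t) = t^(1/2) at randomly sampled points, while midpoint convexity already fails at x = −1, y = 1.

```python
import random

f = lambda u: -abs(u)        # the piecewise function of Example 1.6
eta = lambda u, v: -u - v    # eta(x, y) = -x - y on the (non-positive) range of f
h = lambda t: t ** 0.5       # h_k(t) = t^k with the illustrative choice k = 1/2

random.seed(1)
eta_h_convex = all(
    f(t * x + (1 - t) * y) <= f(y) + h(t) * eta(f(x), f(y)) + 1e-12
    for x, y, t in ((random.uniform(-5, 5), random.uniform(-5, 5), random.random())
                    for _ in range(10_000)))

# f is not convex: at the midpoint of x = -1 and y = 1 the chord lies below f.
not_convex = f(0.0) > 0.5 * f(-1.0) + 0.5 * f(1.0)
print(eta_h_convex, not_convex)  # True True
```

A random search of course proves nothing, but it is a cheap way to catch a wrong sign when experimenting with candidate bifunctions η.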

Example 1.7 Let hk, k < 0, be the function defined as in Example 1.6, and let the function f : I = [a, b] → R be defined as follows:

f(x) = 1 if x ≠ (a + b)/2,    f(x) = 2^(1−k) if x = (a + b)/2,

and define a bifunction η as η(x, y) = x + y for all x, y ∈ R. Then f is an ηh-convex function but not convex.

2 Basic Results
Proposition 2.1 Consider two ηh-convex functions f, g : I → R. Then
1. If η is non-negatively homogeneous, then for any γ ≥ 0 the function γf : I → R is ηh-convex.
2. If η is additive, then f + g : I → R is ηh-convex.
3. If g : I → R is linear and f : I → R is an ηh-convex function, then f ◦ g is an ηh-convex function.

Proof. The proof of the proposition is straightforward.


Proposition 2.2 Let g and h be non-negative functions defined on J with the property h(t) ≤ g(t). If f is an ηh-convex function, then f is an ηg-convex function.
Proof.

f(tx + (1 − t)y) ≤ f(y) + h(t)η(f(x), f(y)) ≤ f(y) + g(t)η(f(x), f(y))

=⇒ f is ηg-convex.
Proposition 2.3 If f : [a, b] → R is ηh-convex and h(t) ≤ k, then

max_{x∈[a,b]} f(x) ≤ max{f(b), f(b) + kη(f(a), f(b))}.

Proof. For any x ∈ [a, b] we have x = ta + (1 − t)b for some t ∈ [0, 1], which implies that

f(x) ≤ f(b) + h(t)η(f(a), f(b)) ≤ max{f(b), f(b) + kη(f(a), f(b))}.

Since x is arbitrary,

max_{x∈[a,b]} f(x) ≤ max{f(b), f(b) + kη(f(a), f(b))},

and the statement is proved.
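As an illustrative numerical check of Proposition 2.3 (the concrete choices below are assumptions, not from the paper): take f(x) = x² on [−1, 2] with h(t) = t, so h(t) ≤ k holds with k = 1, and η(u, v) = u − v; then f is ηh-convex and the stated maximum bound holds on a grid.

```python
f = lambda u: u * u
eta = lambda u, v: u - v
a, b, k = -1.0, 2.0, 1.0   # h(t) = t satisfies h(t) <= k = 1 on [0, 1]

# Approximate the maximum of f over [a, b] on a fine grid.
grid_max = max(f(a + (b - a) * i / 1000) for i in range(1001))
# The bound of Proposition 2.3: max{f(b), f(b) + k*eta(f(a), f(b))}.
bound = max(f(b), f(b) + k * eta(f(a), f(b)))
print(grid_max <= bound + 1e-12)  # True
```

Here the bound evaluates to max{4, 4 + (1 − 4)} = 4, which the grid maximum attains at x = b.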


Proposition 2.4 If f : I → R is ηh-convex and attains a local minimum at x0 ∈ I, then η(f(x), f(x0)) ≥ 0 for any x ∈ I.

Proof. Suppose that f has a local minimum at x0 ∈ I. For any x ∈ I we can find t > 0 sufficiently small such that tx + (1 − t)x0 ∈ Nr(x0). So we reach the conclusion by the following inequality:

f(x0) ≤ f(tx + (1 − t)x0) ≤ f(x0) + h(t)η(f(x), f(x0))

=⇒ η(f(x), f(x0)) ≥ 0.

Theorem 2.5 Let h : J → R be a sub-multiplicative function, i.e., h(xy) ≤ h(x)h(y) for all x, y ∈ J. A function f : I → R is ηh-convex if and only if, for any x1, x2, x3 ∈ I with x1 < x2 < x3,

det ( f(x3) − f(x2)      −h(x2 − x3)
      η(f(x1), f(x3))     h(x1 − x3) ) ≥ 0 (2.1)

and

f(x1) ≤ f(x3) + h(1)η(f(x1), f(x3)). (2.2)

Proof. Suppose that f is an ηh-convex function. Consider arbitrary x1, x2, x3 ∈ I with x1 < x2 < x3. Then there exists a t ∈ (0, 1) such that x2 = tx1 + (1 − t)x3, namely t = (x2 − x3)/(x1 − x3). From the ηh-convexity of f we have

f(x2) ≤ f(x3) + h((x2 − x3)/(x1 − x3)) η(f(x1), f(x3)) (2.3)

=⇒ h(x1 − x3)[f(x3) − f(x2)] + h(x2 − x3) η(f(x1), f(x3)) ≥ 0,

which is equivalent to (2.1). Also, taking t = 1 in (1.3) with x = x1 and y = x3, we get

f(x1) ≤ f(x3) + h(1)η(f(x1), f(x3)).

For the converse, consider x, y ∈ I with x < y. Choosing any t ∈ (0, 1), we have x < tx + (1 − t)y < y, and so

det ( f(y) − f(tx + (1 − t)y)     −h(tx + (1 − t)y − y)
      η(f(x), f(y))                h(x − y) ) ≥ 0.

Expanding this determinant and using the sub-multiplicativity of h, we get

0 ≤ h(x − y)[f(y) − f(tx + (1 − t)y) + h(t)η(f(x), f(y))]

=⇒ f(tx + (1 − t)y) ≤ f(y) + h(t)η(f(x), f(y))

for any t ∈ (0, 1). So f is an ηh-convex function.
Theorem 2.6 Let h : J → R be a sub-multiplicative function. For a function f : I → R, the following assertions are equivalent:
(a) f is an ηh-convex function;
(b) for any x, y, z ∈ I with x < y < z, we have

(f(y) − f(z))/h(y − z) ≤ η(f(x), f(z))/h(x − z). (2.4)

Proof. Suppose f is an ηh-convex function and x, y, z ∈ I with x < y < z. Then there is a t ∈ (0, 1) such that y = tx + (1 − t)z, namely t = (y − z)/(x − z). Also,

f(y) ≤ f(z) + h((y − z)/(x − z)) η(f(x), f(z)),

f(y) − f(z) ≤ (h(y − z)/h(x − z)) η(f(x), f(z)),

(f(y) − f(z))/h(y − z) ≤ η(f(x), f(z))/h(x − z).

For the converse, consider x, y ∈ I with x < y. It is clear that for any t ∈ (0, 1), x < tx + (1 − t)y < y. It follows from (2.4) that

(f(tx + (1 − t)y) − f(y))/h(tx + (1 − t)y − y) ≤ η(f(x), f(y))/h(x − y),

which is equivalent to

(f(tx + (1 − t)y) − f(y))/h(tx − ty) ≤ η(f(x), f(y))/h(x − y).

Since h is sub-multiplicative,

(f(tx + (1 − t)y) − f(y))/(h(t)h(x − y)) ≤ η(f(x), f(y))/h(x − y).

Therefore

f(tx + (1 − t)y) ≤ f(y) + h(t) η(f(x), f(y))

for any x, y ∈ I with x < y and t ∈ (0, 1). So f is an ηh-convex function.
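Criterion (2.4) can be probed numerically. In this sketch all concrete choices are illustrative assumptions: h(t) = |t|, which is non-negative and multiplicative, hence sub-multiplicative; η(u, v) = u − v; and the convex, hence ηh-convex, function f(x) = e^x.

```python
import math
import random

f, eta, h = math.exp, (lambda u, v: u - v), abs  # h(t) = |t| is sub-multiplicative

random.seed(2)
ok = True
for _ in range(10_000):
    x, y, z = sorted(random.uniform(-3.0, 3.0) for _ in range(3))
    if x < y < z:  # strict ordering keeps the denominators h(y - z), h(x - z) nonzero
        ok = ok and (f(y) - f(z)) / h(y - z) <= eta(f(x), f(z)) / h(x - z) + 1e-9
print(ok)  # True
```

For these choices (2.4) reduces to the classical chord-slope comparison for a convex function, which is why every sampled triple passes.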
Remark 2.7 If we take h(t) = t, then we obtain the corresponding result for η-convex functions [??].

3 Jensen Type Inequality

We will use the following relations in the proof of Theorem 3.1, which is a Jensen-type inequality for ηh-convex functions. Let f : I → R be an ηh-convex function. For x1, x2 ∈ I and α1 + α2 = 1, we have f(α1x1 + α2x2) ≤ f(x2) + h(α1)η(f(x1), f(x2)).
Also, when n > 2, for x1, x2, ..., xn ∈ I, Σ_{i=1}^n αi = 1 and Ti = Σ_{j=1}^i αj, we have

f(Σ_{i=1}^n αi xi) = f(Tn−1 Σ_{i=1}^{n−1} (αi/Tn−1) xi + αn xn)
≤ f(xn) + h(Tn−1) η(f(Σ_{i=1}^{n−1} (αi/Tn−1) xi), f(xn)). (3.1)

Theorem 3.1 Let f : I → R be an ηh-convex function and let η be non-decreasing and non-negatively sub-linear in its first variable. If Ti = Σ_{j=1}^i αj for i = 1, ..., n such that Tn = 1, then

f(Σ_{i=1}^n αi xi) ≤ f(xn) + Σ_{i=1}^{n−1} h(Ti) ηf(xi, xi+1, ..., xn), (3.2)

where ηf(xi, xi+1, ..., xn) = η(ηf(xi, xi+1, ..., xn−1), f(xn)) and ηf(x) = f(x) for all x ∈ I.

Proof. Since η is non-decreasing and non-negatively sub-linear in its first variable, it follows from (3.1) that

f(Σ_{i=1}^n αi xi) ≤ f(xn) + h(Tn−1) η(f(Σ_{i=1}^{n−1} (αi/Tn−1) xi), f(xn))

= f(xn) + h(Tn−1) η(f((Tn−2/Tn−1) Σ_{i=1}^{n−2} (αi/Tn−2) xi + (αn−1/Tn−1) xn−1), f(xn))

≤ f(xn) + h(Tn−1) η(f(xn−1) + h(Tn−2/Tn−1) η(f(Σ_{i=1}^{n−2} (αi/Tn−2) xi), f(xn−1)), f(xn))

≤ f(xn) + h(Tn−1) η(f(xn−1), f(xn)) + h(Tn−2) η(η(f(Σ_{i=1}^{n−2} (αi/Tn−2) xi), f(xn−1)), f(xn))

≤ ... ≤ f(xn) + h(Tn−1) η(f(xn−1), f(xn)) + h(Tn−2) η(η(f(xn−2), f(xn−1)), f(xn))
+ ... + h(T1) η(η(...η(η(f(x1), f(x2)), f(x3))...), f(xn−1)), f(xn))

= f(xn) + h(Tn−1) ηf(xn−1, xn) + h(Tn−2) ηf(xn−2, xn−1, xn) + ... + h(T1) ηf(x1, x2, ..., xn−1, xn)

= f(xn) + Σ_{i=1}^{n−1} h(Ti) ηf(xi, xi+1, ..., xn). (3.3)
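A numerical sanity check of (3.2) for n = 3, under assumed illustrative choices: h(t) = t, η(u, v) = u + v (for which the nested ηf terms become plain sums of function values), and the non-negative convex function f(x) = x², which is ηh-convex for this η since f(tx + (1 − t)y) ≤ f(y) + t(f(x) + f(y)).

```python
import random

f = lambda u: u * u
eta = lambda u, v: u + v
h = lambda t: t

def eta_f(values):
    """Nested eta_f(x_i, ..., x_n) from Theorem 3.1, with eta_f(x) = f(x)."""
    acc = f(values[0])
    for v in values[1:]:
        acc = eta(acc, f(v))
    return acc

random.seed(3)
ok = True
for _ in range(5_000):
    x = [random.uniform(-4.0, 4.0) for _ in range(3)]
    a1, a2 = random.random(), random.random()
    s = a1 + a2 + random.random()
    alpha = [a1 / s, a2 / s, 1.0 - (a1 + a2) / s]   # positive weights summing to 1
    T1, T2 = alpha[0], alpha[0] + alpha[1]          # partial sums T_1, T_2 (T_3 = 1)
    lhs = f(sum(ai * xi for ai, xi in zip(alpha, x)))
    rhs = f(x[2]) + h(T2) * eta_f(x[1:]) + h(T1) * eta_f(x)
    ok = ok and lhs <= rhs + 1e-9
print(ok)  # True
```

With η(u, v) = u + v the right-hand side dominates the ordinary Jensen bound Σ αi f(xi), so the check passes for every sampled weight vector.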

Remark 3.2 If we take h(Ti) = Ti, inequality (3.2) reduces to the Jensen-type inequality for φ-convex functions [11].

4 Hermite Hadamard Type Inequality

Theorem 4.1 (Hermite Hadamard type inequality)
Let f : I → R be an ηh-convex function on the interval [a, b] with a < b. Then we have

f((a + b)/2) − h(1/2) ∫_0^1 η[f(ta + (1 − t)b), f((1 − t)a + tb)] dt ≤ (1/(b − a)) ∫_a^b f(x) dx

≤ (f(a) + f(b))/2 + (1/2)[η(f(a), f(b)) + η(f(b), f(a))] ∫_0^1 h(t) dt. (4.1)

Proof. Let u = ta + (1 − t)b and v = (1 − t)a + tb. Then

f((a + b)/2) = f((u + v)/2) = f((1/2)(ta + (1 − t)b) + (1/2)((1 − t)a + tb))
≤ f((1 − t)a + tb) + h(1/2) η(f(u), f(v)). (4.2)

Integrating the above inequality with respect to t on [0, 1],

f((a + b)/2) ≤ (1/(b − a)) ∫_a^b f(x) dx + h(1/2) ∫_0^1 η(f(u), f(v)) dt,

that is,

f((a + b)/2) − h(1/2) ∫_0^1 η(f(ta + (1 − t)b), f((1 − t)a + tb)) dt ≤ (1/(b − a)) ∫_a^b f(x) dx. (4.3)

Now,

∫_a^b f(x) dx = (b − a) ∫_0^1 f(ta + (1 − t)b) dt ≤ (b − a)[f(b) + ∫_0^1 h(t)η(f(a), f(b)) dt]

=⇒ (1/(b − a)) ∫_a^b f(x) dx ≤ f(b) + ∫_0^1 h(t)η(f(a), f(b)) dt. (4.4)

Similarly,

(1/(b − a)) ∫_a^b f(x) dx ≤ f(a) + ∫_0^1 h(t)η(f(b), f(a)) dt. (4.5)

Adding (4.4) and (4.5) and dividing by 2,

(1/(b − a)) ∫_a^b f(x) dx ≤ (f(a) + f(b))/2 + (1/2)[η(f(a), f(b)) + η(f(b), f(a))] ∫_0^1 h(t) dt. (4.6)

Combining (4.3) and (4.6), we get (4.1).


Remark 4.2 In (4.1), if we choose η(x, y) = x − y, then it reduces to the classical Hermite Hadamard inequality for convex functions.
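The classical special case can be confirmed numerically. The sketch below (the concrete choices and the midpoint-rule quadrature are illustrative assumptions) evaluates both sides of (4.1) for f(x) = x² on [0, 1] with h(t) = t and η(u, v) = u − v; for this η the integral on the left vanishes by the symmetry t ↔ 1 − t, and the η terms on the right cancel.

```python
f = lambda u: u * u
eta = lambda u, v: u - v
h = lambda t: t
a, b, N = 0.0, 1.0, 20_000

def integral(g, lo, hi):
    """Midpoint-rule quadrature, a rough stand-in for the exact integrals in (4.1)."""
    step = (hi - lo) / N
    return step * sum(g(lo + (i + 0.5) * step) for i in range(N))

left = f((a + b) / 2) - h(0.5) * integral(
    lambda t: eta(f(t * a + (1 - t) * b), f((1 - t) * a + t * b)), 0.0, 1.0)
mean = integral(f, a, b) / (b - a)
right = (f(a) + f(b)) / 2 + 0.5 * (eta(f(a), f(b)) + eta(f(b), f(a))) * integral(h, 0.0, 1.0)
print(left <= mean + 1e-9, mean <= right + 1e-9)  # True True
```

Here the three quantities come out near 1/4, 1/3 and 1/2, which is exactly the classical Hermite-Hadamard chain for x² on [0, 1].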

5 Ostrowski-type inequality
In order to prove an Ostrowski-type inequality for ηh-convex functions, the following lemma is needed.

Lemma 5.1 Let f : I ⊆ R → R be a differentiable mapping on I° where a, b ∈ I with a < b. If f′ ∈ L[a, b], then the following equality holds:

f(x) − (1/(b − a)) ∫_a^b f(u) du = ((x − a)²/(b − a)) ∫_0^1 t f′(tx + (1 − t)a) dt
− ((b − x)²/(b − a)) ∫_0^1 t f′(tx + (1 − t)b) dt (5.1)

for each x ∈ [a, b].
Theorem 5.2 (Ostrowski-type inequality)
Let h : J ⊆ R → R be a non-negative function and let f : I ⊆ R → R be a differentiable mapping on I° such that f′ ∈ L[a, b], where a, b ∈ I with a < b, and suppose h(t) ≥ t, ∫_0^1 h(t) dt < ∞ and ∫_0^1 h²(t) dt < ∞. If |f′| is an ηh-convex function on I and |f′(x)| ≤ M, x ∈ [a, b], then we have

|f(x) − (1/(b − a)) ∫_a^b f(u) du| ≤ M[((x − a)² + (b − x)²)/(b − a)] ∫_0^1 h(t) dt + η*(f(x), f(y)) ∫_0^1 h²(t) dt, (5.2)

where η*(f(x), f(y)) = ((x − a)²/(b − a)) η(|f′(x)|, |f′(a)|) + ((b − x)²/(b − a)) η(|f′(x)|, |f′(b)|) and x, y ∈ [a, b].
0
Proof. By Lemma 5.1, and since |f′| is ηh-convex, we can write

|f(x) − (1/(b − a)) ∫_a^b f(u) du|
≤ ((x − a)²/(b − a)) ∫_0^1 t|f′(tx + (1 − t)a)| dt + ((b − x)²/(b − a)) ∫_0^1 t|f′(tx + (1 − t)b)| dt

≤ ((x − a)²/(b − a)) ∫_0^1 t[|f′(a)| + h(t)η(|f′(x)|, |f′(a)|)] dt
+ ((b − x)²/(b − a)) ∫_0^1 t[|f′(b)| + h(t)η(|f′(x)|, |f′(b)|)] dt

≤ ((x − a)²/(b − a)) ∫_0^1 h(t)M dt + ((x − a)²/(b − a)) ∫_0^1 h²(t)η(|f′(x)|, |f′(a)|) dt
+ ((b − x)²/(b − a)) ∫_0^1 h(t)M dt + ((b − x)²/(b − a)) ∫_0^1 h²(t)η(|f′(x)|, |f′(b)|) dt

≤ M[((x − a)² + (b − x)²)/(b − a)] ∫_0^1 h(t) dt + ((x − a)²/(b − a)) η(|f′(x)|, |f′(a)|) ∫_0^1 h²(t) dt
+ ((b − x)²/(b − a)) η(|f′(x)|, |f′(b)|) ∫_0^1 h²(t) dt, (5.3)

where we used t ≤ h(t) and |f′| ≤ M, and (5.2) is obtained.
Remark 5.3 In (5.2), if we choose h(t) = t and η(x, y) = x − y, then it reduces to the classical Ostrowski inequality for convex functions.
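The classical case of Remark 5.3 can also be verified numerically. In this sketch all concrete choices are illustrative assumptions: f(x) = x² on [0, 1], so |f′(x)| = 2|x| is convex with M = 2, h(t) = t, η(u, v) = u − v, and the integrals of h and h² over [0, 1] are exactly 1/2 and 1/3.

```python
a, b, M = 0.0, 1.0, 2.0
f = lambda u: u * u
df = lambda u: 2 * u               # f'(x) for f(x) = x^2
eta = lambda u, v: u - v
int_h, int_h2 = 0.5, 1.0 / 3.0     # exact integrals of t and t^2 on [0, 1]
mean = 1.0 / 3.0                   # exact value of (1/(b-a)) * integral of x^2 on [0, 1]

def bound(x):
    """Right-hand side of (5.2) for the choices above."""
    eta_star = ((x - a) ** 2 / (b - a)) * eta(abs(df(x)), abs(df(a))) \
             + ((b - x) ** 2 / (b - a)) * eta(abs(df(x)), abs(df(b)))
    return M * ((x - a) ** 2 + (b - x) ** 2) / (b - a) * int_h + eta_star * int_h2

ok = all(abs(f(x) - mean) <= bound(x) + 1e-9
         for x in (a + (b - a) * i / 1000 for i in range(1001)))
print(ok)  # True
```

At x = a the two sides coincide (both equal 1/3), so the small tolerance guards against floating-point rounding at that boundary case.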

References
[1] B. de Finetti, Sulle stratificazioni convesse, Ann. Mat. Pura Appl., 30 (1949), 173-183.
[2] B. T. Polyak, Existence theorems and convergence of minimizing sequences in extremum problems with restrictions, Soviet Math. Dokl., 7 (1966), 72-75.
[3] J. E. Pečarić, F. Proschan and Y. L. Tong, Convex Functions, Partial Orderings and Statistical Applications, Academic Press, Boston, 1992.
[4] D. H. Hyers and S. M. Ulam, Approximately convex functions, Proc. Amer. Math. Soc., 3 (1952), 821-828.
[5] S. Varošanec, On h-convexity, J. Math. Anal. Appl., 326, 1 (2007), 303-311.
[6] J. L. W. V. Jensen, Om konvexe Funktioner og Uligheder mellem Middelværdier, Nyt Tidsskr. Math. B, 16 (1905), 49-69.
[7] X. M. Yang, E-convex sets, E-convex functions and E-convex programming, J. Optim. Theory Appl., 109 (2001), 699-704.
[8] C. R. Bector and C. Singh, B-vex functions, J. Optim. Theory Appl., 71, 2 (1991), 237-253.
[9] M. A. Hanson, On sufficiency of the Kuhn-Tucker conditions, J. Math. Anal. Appl., 80 (1981), 545-550.
[10] M. E. Gordji, S. S. Dragomir and M. Rostamian Delavar, An inequality related to η-convex functions, Int. J. Nonlinear Anal. Appl., 6, 2 (2015), 26-32.
[11] M. E. Gordji, M. R. Delavar and M. De La Sen, On φ-convex functions, J. Math. Inequal., 10, 1 (2016), 173-183.
[12] M. A. Noor, K. I. Noor and M. U. Awan, Hermite Hadamard inequalities for modified h-convex functions, TJMM, 6 (2014), No. x, 00-00.
[13] M. R. Delavar and S. S. Dragomir, On η-convexity, Math. Inequal. Appl., 20, 1 (2017), 203-216, doi:10.7153/mia-20-14.
[14] C. P. Niculescu and L. E. Persson, Convex Functions and Their Applications, Springer, 2004.
[15] J. Pečarić, F. Proschan and Y. L. Tong, Convex Functions, Partial Orderings and Statistical Applications, Mathematics in Science and Engineering, vol. 187.
[16] M. S. Saleem, J. Pečarić, S. Hussain, M. W. Khan and A. Hussain, The weighted reverse Poincaré-type estimates for the difference of two convex vectors, J. Inequal. Appl., 2016:194 (2016), doi:10.1186/s13660-016-1133-x.
