Tensors for Data Processing
FIRST EDITION
Yipeng Liu
School of Information and Communication Engineering, University of
Electronic Science and Technology of China (UESTC), Chengdu, China
Table of Contents
Cover image
Title page
Copyright
List of contributors
Preface
Abstract
1.1. Introduction
1.5. Challenges
References
Abstract
2.1. Introduction
References
Chapter 3: Partensor
Abstract
Acknowledgement
3.1. Introduction
3.6. Conclusion
References
Abstract
4.1. Introduction
4.5. Experiments
4.6. Conclusion
References
Abstract
5.1. Introduction
5.6. Conclusion
References
Abstract
6.1. Introduction
6.8. Summary
References
Abstract
7.1. Introduction
7.5. Conclusion
References
Abstract
8.1. Introduction
8.2. Background
8.5. Conclusion
References
Abstract
References
Acknowledgements
10.1. Introduction
References
Abstract
Acknowledgements
11.5. Remarks
References
Chapter 12: Tensors for neuroimaging
Abstract
12.1. Introduction
12.7. Conclusion
References
Abstract
13.1. Introduction
References
Abstract
14.1. Introduction
14.7. Experiments
14.8. Conclusion
References
Index
Copyright
Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1650, San Diego, CA 92101, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom
Notices
Knowledge and best practice in this field are constantly changing.
As new research and experience broaden our understanding,
changes in research methods, professional practices, or medical
treatment may become necessary.
To the fullest extent of the law, neither the Publisher nor the
authors, contributors, or editors, assume any liability for any injury
and/or damage to persons or property as a matter of products
liability, negligence or otherwise, or from any use or operation of
any methods, products, instructions, or ideas contained in the
material herein.
ISBN: 978-0-12-824447-0
Typeset by VTeX
List of contributors
Kim Batselier Delft Center for Systems and Control, Delft
University of Technology, Delft, The Netherlands
Yingyue Bi School of Information and Communication
Engineering, University of Electronic Science and Technology of
China (UESTC), Chengdu, China
Jérémie Boulanger CRIStAL, Université de Lille, Villeneuve
d'Ascq, France
Rémy Boyer CRIStAL, Université de Lille, Villeneuve d'Ascq,
France
Cesar F. Caiafa
Instituto Argentino de Radioastronomía – CCT La Plata, CONICET /
CIC-PBA / UNLP, Villa Elisa, Argentina
RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
Jocelyn Chanussot LJK, CNRS, Grenoble INP, Inria, Université
Grenoble, Alpes, Grenoble, France
Christos Chatzichristos KU Leuven, Department of Electrical
Engineering (ESAT), STADIUS Center for Dynamical Systems, Signal
Processing and Data Analytics, Leuven, Belgium
Cong Chen Department of Electrical and Electronic Engineering,
The University of Hong Kong, Pokfulam Road, Hong Kong
Nadav Cohen School of Computer Science, Hebrew University of
Jerusalem, Jerusalem, Israel
Xudong Cui School of Mathematics, Tianjin University, Tianjin,
China
André L.F. de Almeida Department of Teleinformatics
Engineering, Federal University of Fortaleza, Fortaleza, Brazil
Aybüke Erol Circuits and Systems, Department of
Microelectronics, Delft University of Technology, Delft, The
Netherlands
Yiming Fang Department of Computer Science, Columbia
University, New York, NY, United States
Gérard Favier Laboratoire I3S, Université Côte d'Azur, CNRS,
Sophia Antipolis, France
Borbála Hunyadi Circuits and Systems, Department of
Microelectronics, Delft University of Technology, Delft, The
Netherlands
Pratik Jawanpuria Microsoft, Hyderabad, India
Tai-Xiang Jiang School of Economic Information Engineering,
Southwestern University of Finance and Economics, Chengdu,
Sichuan, China
Paris A. Karakasis School of Electrical and Computer Engineering,
Technical University of Crete, Chania, Greece
Ouafae Karmouda CRIStAL, Université de Lille, Villeneuve
d'Ascq, France
Hiroyuki Kasai Waseda University, Tokyo, Japan
Eleftherios Kofidis Dept. of Statistics and Insurance Science,
University of Piraeus, Piraeus, Greece
Christos Kolomvakis School of Electrical and Computer
Engineering, Technical University of Crete, Chania, Greece
Yoav Levine School of Computer Science, Hebrew University of
Jerusalem, Jerusalem, Israel
Zechu Li Department of Computer Science, Columbia University,
New York, NY, United States
Athanasios P. Liavas School of Electrical and Computer
Engineering, Technical University of Crete, Chania, Greece
Zhouchen Lin Key Lab. of Machine Perception, School of EECS,
Peking University, Beijing, China
Xiao-Yang Liu
Department of Computer Science and Engineering, Shanghai Jiao
Tong University, Shanghai, China
Department of Electrical Engineering, Columbia University, New
York, NY, United States
Yipeng Liu School of Information and Communication
Engineering, University of Electronic Science and Technology of
China (UESTC), Chengdu, China
Zhen Long School of Information and Communication
Engineering, University of Electronic Science and Technology of
China (UESTC), Chengdu, China
George Lourakis Neurocom, S.A, Athens, Greece
Canyi Lu Carnegie Mellon University, Pittsburgh, PA, United
States
Liangfu Lu School of Mathematics, Tianjin University, Tianjin,
China
Yingcong Lu School of Information and Communication
Engineering, University of Electronic Science and Technology of
China (UESTC), Chengdu, China
George Lykoudis Neurocom, S.A, Athens, Greece
Bamdev Mishra Microsoft, Hyderabad, India
Michael K. Ng Department of Mathematics, The University of
Hong Kong, Pokfulam, Hong Kong
Ioannis Marios Papagiannakos School of Electrical and Computer
Engineering, Technical University of Crete, Chania, Greece
Bo Ren Key Laboratory of Intelligent Perception and Image
Understanding of Ministry of Education of China, Xidian University,
Xi'an, China
Or Sharir School of Computer Science, Hebrew University of
Jerusalem, Jerusalem, Israel
Amnon Shashua School of Computer Science, Hebrew University
of Jerusalem, Jerusalem, Israel
Ioanna Siaminou School of Electrical and Computer Engineering,
Technical University of Crete, Chania, Greece
Christos Tsalidis Neurocom, S.A, Athens, Greece
Simon Van Eyndhoven
KU Leuven, Department of Electrical Engineering (ESAT), STADIUS
Center for Dynamical Systems, Signal Processing and Data
Analytics, Leuven, Belgium
icometrix, Leuven, Belgium
Sabine Van Huffel KU Leuven, Department of Electrical
Engineering (ESAT), STADIUS Center for Dynamical Systems, Signal
Processing and Data Analytics, Leuven, Belgium
Anwar Walid Nokia Bell Labs, Murray Hill, NJ, United States
Fei Wen Department of Electronic Engineering, Shanghai Jiao
Tong University, Shanghai, China
Noam Wies School of Computer Science, Hebrew University of
Jerusalem, Jerusalem, Israel
Ngai Wong Department of Electrical and Electronic Engineering,
The University of Hong Kong, Pokfulam Road, Hong Kong
Zebin Wu School of Computer Science and Engineering, Nanjing
University of Science and Technology, Nanjing, China
Yang Xu School of Computer Science and Engineering, Nanjing
University of Science and Technology, Nanjing, China
Liuqing Yang Department of Computer Science, Columbia
University, New York, NY, United States
Fei Ye School of Computer Science and Engineering, Nanjing
University of Science and Technology, Nanjing, China
Tatsuya Yokota
Nagoya Institute of Technology, Aichi, Japan
RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
Zhonghao Zhang School of Information and Communication
Engineering, University of Electronic Science and Technology of
China (UESTC), Chengdu, China
Qibin Zhao
RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
Guangdong University of Technology, Guangzhou, China
Xi-Le Zhao School of Mathematical Sciences/Research Center for
Image and Vision Computing, University of Electronic Science and
Technology of China, Chengdu, Sichuan, China
Pan Zhou SEA AI Lab, Singapore, Singapore
Ce Zhu School of Information and Communication Engineering,
University of Electronic Science and Technology of China (UESTC),
Chengdu, China
Yassine Zniyed Université de Toulon, Aix-Marseille Université,
CNRS, LIS, Toulon, France
Preface
Yipeng Liu Chengdu, China
Abstract
Many classical data processing techniques rely on the
representation and computation of vector and matrix forms,
where the vectorization or matricization is often employed on
multidimensional data. However, this process can discard vital underlying structural information, leading to suboptimal processing performance. As a natural
representation for multidimensional data, the tensor has drawn
a great deal of attention over the past several years. Tensor
decompositions are effective tools for tensor analysis. They have
been intensively investigated in a number of areas, such as
signal processing, machine learning, neuroscience,
communication, psychometrics, chemometrics, biometrics,
quantum physics, and quantum chemistry. In this chapter, we give a brief introduction to the basic operations of tensors and illustrate them with a few examples from data processing. To better
manipulate tensor data, a number of tensor operators are
defined, especially different tensor products. In addition, we
elaborate on a series of important tensor decompositions and
their properties, including CANDECOMP/PARAFAC
decomposition, Tucker decomposition, tensor singular value
decomposition, block term decomposition, tensor tree
decomposition, tensor train decomposition, and tensor ring
decomposition. We further present a list of machine learning
techniques based on tensor decompositions, such as tensor
dictionary learning, tensor completion, robust tensor principal
component analysis, tensor regression, statistical tensor
classification, coupled tensor fusion, and deep tensor neural
networks. Finally, we discuss the challenges and opportunities
for data processing under a tensor scheme.
Keywords
Tensor computation; Tensor norm; Tensor product;
CANDECOMP/PARAFAC decomposition; Tucker decomposition;
Tensor singular value decomposition; Block term decomposition;
Tensor network; Higher-order tensor; Multilinear subspace analysis
Chapter Outline
1.1 Introduction
1.1.1 What is a tensor?
1.1.2 Why do we need tensors?
1.2 Tensor operations
1.2.1 Tensor notations
1.2.2 Matrix operators
1.2.3 Tensor transformations
1.2.4 Tensor products
1.2.5 Structural tensors
1.2.6 Summary
1.3 Tensor decompositions
1.3.1 Tucker decomposition
1.3.2 Canonical polyadic decomposition
1.3.3 Block term decomposition
1.3.4 Tensor singular value decomposition
1.3.5 Tensor network
1.4 Tensor processing techniques
1.5 Challenges
References
1.1 Introduction
1.1.1 What is a tensor?
A tensor can be seen as a higher-order generalization of vectors and matrices; it normally has three or more modes (ways) [1]. For example, a color image is a third-order tensor: it has two spatial modes and one channel mode. Similarly, a color video is a fourth-order tensor; its extra mode denotes time.
As special forms of tensors, a vector $\mathbf{a} \in \mathbb{R}^{I}$ is a first-order tensor whose $i$-th entry (scalar) is $a_i$, and a matrix $\mathbf{A} \in \mathbb{R}^{I_1 \times I_2}$ is a second-order tensor whose $(i_1, i_2)$-th element is $a_{i_1, i_2}$. A general $N$-th-order tensor can be mathematically denoted as $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, and its $(i_1, i_2, \ldots, i_N)$-th entry is $a_{i_1, i_2, \ldots, i_N}$. For example, a third-order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ is illustrated in Fig. 1.1.
Definition 1.2.1
(Matrix trace [6]) The trace of matrix $\mathbf{A} \in \mathbb{R}^{I \times I}$ is obtained by summing all the diagonal entries of $\mathbf{A}$, i.e., $\operatorname{tr}(\mathbf{A}) = \sum_{i=1}^{I} a_{i,i}$.
Definition 1.2.2
($\ell_p$-norm [6]) For matrix $\mathbf{A} \in \mathbb{R}^{I_1 \times I_2}$, its $\ell_p$-norm is defined as
$$\|\mathbf{A}\|_p = \left( \sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} |a_{i_1, i_2}|^p \right)^{1/p}. \tag{1.1}$$
Definition 1.2.3
(Matrix nuclear norm [7]) The nuclear norm of matrix $\mathbf{A}$ is denoted as $\|\mathbf{A}\|_* = \sum_i \sigma_i(\mathbf{A})$, where $\sigma_i(\mathbf{A})$ is the $i$-th largest singular value of $\mathbf{A}$.
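As a concrete illustration, the three matrix quantities above can be evaluated in a few lines of numpy (numpy and the variable names are our illustrative choice, not prescribed by the text):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [0.0, 4.0]])

trace = np.trace(A)                      # sum of diagonal entries: 3 + 4
fro = np.sqrt((np.abs(A) ** 2).sum())    # l2 (Frobenius) norm, from the definition of the lp-norm with p = 2
nuclear = np.linalg.svd(A, compute_uv=False).sum()  # sum of singular values

print(trace, fro, nuclear)  # 7.0 5.0 7.0
```

For this diagonal example the singular values are simply 4 and 3, so the nuclear norm coincides with the trace; for a general matrix the two differ.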
Definition 1.2.4
(Hadamard product [8]) The Hadamard product for matrices $\mathbf{A} \in \mathbb{R}^{I_1 \times I_2}$ and $\mathbf{B} \in \mathbb{R}^{I_1 \times I_2}$ is defined as $\mathbf{A} \circledast \mathbf{B} \in \mathbb{R}^{I_1 \times I_2}$ with
$$(\mathbf{A} \circledast \mathbf{B})_{i_1, i_2} = a_{i_1, i_2}\, b_{i_1, i_2}. \tag{1.2}$$
Definition 1.2.5
(Kronecker product [9]) The Kronecker product of matrices $\mathbf{A} \in \mathbb{R}^{I_1 \times I_2}$ and $\mathbf{B} \in \mathbb{R}^{J_1 \times J_2}$ is defined as $\mathbf{A} \otimes \mathbf{B} \in \mathbb{R}^{I_1 J_1 \times I_2 J_2}$, which can be written mathematically as
$$\mathbf{A} \otimes \mathbf{B} = \begin{bmatrix} a_{1,1}\mathbf{B} & a_{1,2}\mathbf{B} & \cdots & a_{1,I_2}\mathbf{B} \\ \vdots & \vdots & \ddots & \vdots \\ a_{I_1,1}\mathbf{B} & a_{I_1,2}\mathbf{B} & \cdots & a_{I_1,I_2}\mathbf{B} \end{bmatrix}. \tag{1.3}$$
Based on the Kronecker product, a number of useful properties can be derived. Given matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, $\mathbf{D}$ of compatible sizes, we have
$$(\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = \mathbf{A}\mathbf{C} \otimes \mathbf{B}\mathbf{D}, \qquad (\mathbf{A} \otimes \mathbf{B})^{\mathsf{T}} = \mathbf{A}^{\mathsf{T}} \otimes \mathbf{B}^{\mathsf{T}}. \tag{1.4}$$
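The mixed-product property $(\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = \mathbf{A}\mathbf{C} \otimes \mathbf{B}\mathbf{D}$ is easy to check numerically; a minimal numpy sketch (our illustrative choice of library):

```python
import numpy as np

rng = np.random.default_rng(0)
A, C = rng.standard_normal((2, 3)), rng.standard_normal((3, 4))
B, D = rng.standard_normal((5, 2)), rng.standard_normal((2, 6))

left = np.kron(A, B) @ np.kron(C, D)   # (A x B)(C x D), a (10, 24) matrix
right = np.kron(A @ C, B @ D)          # AC x BD, same shape

print(np.allclose(left, right))  # True
```

Note the computational lesson hidden here: the right-hand side multiplies small matrices first and takes one Kronecker product, which is far cheaper than forming the large Kronecker factors on the left.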
Definition 1.2.6
(Khatri–Rao product [10]) The Khatri–Rao product of matrices $\mathbf{A} = [\mathbf{a}_1, \ldots, \mathbf{a}_R] \in \mathbb{R}^{I \times R}$ and $\mathbf{B} = [\mathbf{b}_1, \ldots, \mathbf{b}_R] \in \mathbb{R}^{J \times R}$ is the column-wise Kronecker product
$$\mathbf{A} \odot \mathbf{B} = [\mathbf{a}_1 \otimes \mathbf{b}_1, \mathbf{a}_2 \otimes \mathbf{b}_2, \ldots, \mathbf{a}_R \otimes \mathbf{b}_R] \in \mathbb{R}^{IJ \times R}. \tag{1.5}$$
Similar to the Kronecker product, the Khatri–Rao product also has some convenient properties, such as
$$(\mathbf{A} \odot \mathbf{B})^{\mathsf{T}} (\mathbf{A} \odot \mathbf{B}) = (\mathbf{A}^{\mathsf{T}}\mathbf{A}) \circledast (\mathbf{B}^{\mathsf{T}}\mathbf{B}). \tag{1.6}$$
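Since numpy has no built-in Khatri–Rao routine, a short helper makes the definition and property (1.6) concrete (`khatri_rao` is our hypothetical helper name; the einsum-based construction is one of several equivalent ways to write it):

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker product: [a_1 x b_1, ..., a_R x b_R].
    I, R = A.shape
    J, R2 = B.shape
    assert R == R2, "both factors need the same number of columns"
    # entry (i, j, r) = A[i, r] * B[j, r]; flattening (i, j) gives a_r x b_r
    return np.einsum('ir,jr->ijr', A, B).reshape(I * J, R)

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 3))
AB = khatri_rao(A, B)            # shape (20, 3)

# Property (1.6): (A . B)^T (A . B) = (A^T A) * (B^T B)  (Hadamard on the right)
print(np.allclose(AB.T @ AB, (A.T @ A) * (B.T @ B)))  # True
```

Property (1.6) is what makes the Khatri–Rao product so useful in alternating least-squares: the $R \times R$ Gram matrix on the right is cheap to form without ever materializing the tall $IJ \times R$ product.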
Definition 1.2.7
(Tensor transpose [11]) Given a tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ whose frontal slices are $\mathbf{A}^{(1)}, \ldots, \mathbf{A}^{(I_3)}$, its transpose $\mathcal{A}^{\mathsf{T}} \in \mathbb{R}^{I_2 \times I_1 \times I_3}$ is acquired by first transposing each of the frontal slices and then placing them in the order $\mathbf{A}^{(1)\mathsf{T}}$, $\mathbf{A}^{(I_3)\mathsf{T}}$, $\mathbf{A}^{(I_3-1)\mathsf{T}}$, $\cdots$, $\mathbf{A}^{(2)\mathsf{T}}$ along the third mode.
Definition 1.2.8
(Tensor mode-n matricization [1]) For tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, its matricization along the $n$-th mode is denoted as $\mathbf{A}_{(n)} \in \mathbb{R}^{I_n \times I_1 \cdots I_{n-1} I_{n+1} \cdots I_N}$, as shown in Fig. 1.6. It rearranges the mode-$n$ fibers of $\mathcal{A}$ to form the columns of $\mathbf{A}_{(n)}$. The columns may be ordered by the little-endian convention (lexicographic ordering) or the big-endian convention (colexicographic ordering).
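Mode-n matricization can be sketched in a few lines of numpy (an illustrative choice; `unfold` is our hypothetical helper name):

```python
import numpy as np

def unfold(T, n):
    # Mode-n matricization: move mode n to the front, then flatten the
    # remaining modes so that mode-n fibers become the columns.
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

T = np.arange(24).reshape(2, 3, 4)   # a small third-order tensor
print(unfold(T, 0).shape)  # (2, 12)
print(unfold(T, 1).shape)  # (3, 8)
print(unfold(T, 2).shape)  # (4, 6)
```

Be aware that `reshape` here flattens in C (row-major) order, which corresponds to the big-endian convention; textbooks and toolboxes differ on this, so the column ordering must be fixed consistently before matching matricized formulas.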
Definition 1.2.11
(Tensor norm [1]) The norm of a tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is the square root of the sum of the squares of all its elements, which can be expressed as
$$\|\mathcal{A}\| = \sqrt{\sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \cdots \sum_{i_N=1}^{I_N} a_{i_1, i_2, \ldots, i_N}^2}. \tag{1.15}$$
Definition 1.2.12
(Tensor mode-n product with a matrix [1]) The tensor mode-n product of $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ and matrix $\mathbf{U} \in \mathbb{R}^{J \times I_n}$ is denoted as
$$\mathcal{B} = \mathcal{A} \times_n \mathbf{U} \in \mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times J \times I_{n+1} \times \cdots \times I_N}, \tag{1.16}$$
or element-wisely,
$$b_{i_1, \ldots, i_{n-1}, j, i_{n+1}, \ldots, i_N} = \sum_{i_n=1}^{I_n} a_{i_1, \ldots, i_n, \ldots, i_N}\, u_{j, i_n}. \tag{1.17}$$
In matricized form this is equivalent to
$$\mathbf{B}_{(n)} = \mathbf{U}\,\mathbf{A}_{(n)}, \tag{1.18}$$
and for distinct modes $m \neq n$ the order of multiplication is irrelevant:
$$\mathcal{A} \times_m \mathbf{U} \times_n \mathbf{V} = \mathcal{A} \times_n \mathbf{V} \times_m \mathbf{U}. \tag{1.19}$$
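The mode-n product is conveniently implemented through its matricized form, i.e., multiplying the mode-n unfolding by the matrix and folding back. A minimal numpy sketch (illustrative; `unfold`, `fold`, and `mode_n_product` are our hypothetical helper names):

```python
import numpy as np

def unfold(T, n):
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(M, n, shape):
    # Inverse of unfold: reshape to (shape[n], remaining dims) and move
    # the leading axis back to position n.
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(full), 0, n)

def mode_n_product(T, U, n):
    # B = T x_n U via the matricized identity B_(n) = U T_(n).
    shape = list(T.shape)
    shape[n] = U.shape[0]
    return fold(U @ unfold(T, n), n, shape)

rng = np.random.default_rng(2)
T = rng.standard_normal((2, 3, 4))
U = rng.standard_normal((5, 3))
B = mode_n_product(T, U, 1)
print(B.shape)  # (2, 5, 4)

# cross-check against a direct einsum over the mode-2 index
print(np.allclose(B, np.einsum('ijk,lj->ilk', T, U)))  # True
```

The einsum cross-check mirrors the element-wise definition: each entry of the result sums products over the shared index of size 3.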
Definition 1.2.13
(Tensor mode-n product with a vector [1]) The tensor mode-n product of the tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ and vector $\mathbf{u} \in \mathbb{R}^{I_n}$ is denoted as
$$\mathcal{B} = \mathcal{A} \,\bar{\times}_n\, \mathbf{u} \in \mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times I_{n+1} \times \cdots \times I_N}, \tag{1.20}$$
with entries
$$b_{i_1, \ldots, i_{n-1}, i_{n+1}, \ldots, i_N} = \sum_{i_n=1}^{I_n} a_{i_1, \ldots, i_n, \ldots, i_N}\, u_{i_n}. \tag{1.21}$$
Note that the result is of order $N-1$: multiplying by a vector collapses mode $n$ rather than replacing its dimension.
Definition 1.2.14
(t-product [11]) The t-product of $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ and $\mathcal{B} \in \mathbb{R}^{I_2 \times J \times I_3}$ is defined as
$$\mathcal{C} = \mathcal{A} * \mathcal{B} = \operatorname{fold}\big(\operatorname{bcirc}(\mathcal{A})\, \operatorname{unfold}(\mathcal{B})\big) \in \mathbb{R}^{I_1 \times J \times I_3}, \tag{1.23}$$
where $\operatorname{bcirc}(\mathcal{A})$ represents the block circulant matrix [11] of $\mathcal{A}$, $\operatorname{unfold}(\mathcal{B})$ stacks the frontal slices of $\mathcal{B}$ vertically, and $\operatorname{fold}$ is the inverse of $\operatorname{unfold}$.
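Because a block circulant matrix is diagonalized by the DFT, the t-product is usually computed in the Fourier domain: transform along the third mode, multiply matching frontal slices, and transform back. A minimal numpy sketch (illustrative; `t_product` is our hypothetical function name):

```python
import numpy as np

def t_product(A, B):
    # C = A * B: the DFT along mode 3 turns the block-circulant product
    # into independent matrix products, one per frequency slice.
    assert A.shape[1] == B.shape[0] and A.shape[2] == B.shape[2]
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)   # slice-wise matrix product
    return np.real(np.fft.ifft(Cf, axis=2))

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3, 5))
B = rng.standard_normal((3, 2, 5))
C = t_product(A, B)
print(C.shape)  # (4, 2, 5)

# sanity check with the identity tensor of Definition 1.2.16:
# first frontal slice is the identity, the rest are zero
It = np.zeros((3, 3, 5))
It[:, :, 0] = np.eye(3)
print(np.allclose(t_product(A, It), A))  # True
```

The identity-tensor check works because the DFT of the identity tensor is the identity matrix in every frequency slice.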
Definition 1.2.15
(Tensor contraction [5]) Given two tensors $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_M}$ and $\mathcal{B} \in \mathbb{R}^{J_1 \times \cdots \times J_N}$, suppose they have $L$ equal indices in $\{I_1, \ldots, I_M\}$ and $\{J_1, \ldots, J_N\}$. The contraction of these two tensors yields an $(M + N - 2L)$-th-order tensor $\mathcal{C}$, whose entries can be calculated by summing over the shared indices; writing the shared indices first,
$$c_{\,i_{L+1}, \ldots, i_M,\, j_{L+1}, \ldots, j_N} = \sum_{i_1=1}^{I_1} \cdots \sum_{i_L=1}^{I_L} a_{i_1, \ldots, i_L, i_{L+1}, \ldots, i_M}\; b_{i_1, \ldots, i_L, j_{L+1}, \ldots, j_N}. \tag{1.24}$$
FIGURE 1.11 A graphical representation of contraction over two tensors $\mathcal{A}$ and $\mathcal{B}$ with one shared index ($L = 1$), where $K_1 = I_m = J_n$.
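Contraction over one shared index, as in Fig. 1.11, is exactly what `numpy.einsum` (or `tensordot`) expresses; a minimal sketch with illustrative shapes:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((2, 3, 4))   # third-order, M = 3
B = rng.standard_normal((4, 5))      # second-order, N = 2

# contract over the shared size-4 index (L = 1):
# resulting order is M + N - 2L = 3 + 2 - 2 = 3
C = np.einsum('ijk,kl->ijl', A, B)
print(C.shape)  # (2, 3, 5)
```

The subscript string makes the index bookkeeping of Eq. (1.24) explicit: `k` appears in both inputs and not in the output, so it is summed over, while the free indices `i`, `j`, `l` survive.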
Definition 1.2.16
(Identity tensor [11]) An identity tensor is a tensor whose first
frontal slice is an identity matrix and the rest are zero matrices.
Definition 1.2.17
(Orthogonal tensor [14]) Using the t-product, an orthogonal tensor $\mathcal{Q} \in \mathbb{R}^{I \times I \times I_3}$ is defined as one satisfying
$$\mathcal{Q}^{\mathsf{T}} * \mathcal{Q} = \mathcal{Q} * \mathcal{Q}^{\mathsf{T}} = \mathcal{I}. \tag{1.27}$$
Definition 1.2.18
(Rank-1 tensor [1]) A rank-1 tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is formed by the outer product of $N$ vectors, as shown in Fig. 1.12. Its mathematical formulation can be written as
$$\mathcal{X} = \mathbf{a}^{(1)} \circ \mathbf{a}^{(2)} \circ \cdots \circ \mathbf{a}^{(N)}, \tag{1.28}$$
or element-wisely,
$$x_{i_1, i_2, \ldots, i_N} = a^{(1)}_{i_1}\, a^{(2)}_{i_2} \cdots a^{(N)}_{i_N}. \tag{1.29}$$
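The outer-product construction of a rank-1 tensor is one einsum call; a small numerical example (vectors chosen for illustration):

```python
import numpy as np

a1 = np.array([1.0, 2.0])
a2 = np.array([1.0, 0.0, 1.0])
a3 = np.array([2.0, 3.0])

# X = a1 o a2 o a3, a rank-1 third-order tensor
X = np.einsum('i,j,k->ijk', a1, a2, a3)

print(X.shape)      # (2, 3, 2)
print(X[1, 2, 0])   # a1[1] * a2[2] * a3[0] = 2 * 1 * 2 = 4.0
```

Each entry is just the product of one coordinate from each vector, matching Eq. (1.29) term by term.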
Definition 1.2.19
(Diagonal tensor [1]) Tensor $\mathcal{X}$ is a diagonal tensor if and only if all its nonzero elements lie on the superdiagonal. Specifically, if $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is a diagonal tensor, then we have $x_{i_1, i_2, \ldots, i_N} \neq 0$ if and only if $i_1 = i_2 = \cdots = i_N$. A graphical illustration of a third-order diagonal tensor is demonstrated in Fig. 1.13.
FIGURE 1.13 A third-order diagonal tensor.
Definition 1.2.20
(f-diagonal tensor [14]) An f-diagonal tensor is a tensor with
diagonal frontal slices. A third-order f-diagonal tensor is
visualized in Fig. 1.14.
1.2.6 Summary
In this section, we first briefly described the notations of tensor representations. Then, starting from basic matrix operations, we discussed several common tensor operations, including tensor transformations and tensor products. Concepts of structural tensors such as the orthogonal tensor, diagonal tensor, and f-diagonal tensor were also given. It is worth noting that we only focused on the most commonly used definitions; for more information, please refer to [1], [5], and [6].
1.3 Tensor decompositions
1.3.1 Tucker decomposition
Definition 1.3.1
(Tucker decomposition) Given a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, its Tucker decomposition is
$$\mathcal{X} = \mathcal{G} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \cdots \times_N \mathbf{U}^{(N)}, \tag{1.30}$$
where $\mathbf{U}^{(n)} \in \mathbb{R}^{I_n \times R_n}$, $n = 1, \ldots, N$, are semi-orthogonal factor matrices that satisfy $\mathbf{U}^{(n)\mathsf{T}} \mathbf{U}^{(n)} = \mathbf{I}$, and $\mathcal{G} \in \mathbb{R}^{R_1 \times R_2 \times \cdots \times R_N}$ is the core tensor. Even though the core tensor is usually dense, it is generally much smaller than $\mathcal{X}$, i.e., $R_n \ll I_n$. In mode-$n$ matricized form, the decomposition can be written as
$$\mathbf{X}_{(n)} = \mathbf{U}^{(n)} \mathbf{G}_{(n)} \left( \mathbf{U}^{(N)} \otimes \cdots \otimes \mathbf{U}^{(n+1)} \otimes \mathbf{U}^{(n-1)} \otimes \cdots \otimes \mathbf{U}^{(1)} \right)^{\mathsf{T}}, \tag{1.31}$$
where $n = 1, \ldots, N$.
Fig. 1.15 is an illustration of Tucker decomposition on a third-order tensor, i.e., $\mathcal{X} = \mathcal{G} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \times_3 \mathbf{U}^{(3)}$.
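One standard way to compute a Tucker model is the higher-order SVD (HOSVD): take each factor as the leading left singular vectors of the corresponding mode-n matricization, then form the core by mode-n products with the factor transposes. A minimal numpy sketch (illustrative; `unfold` and `hosvd` are our hypothetical helper names, and no truncation-error analysis is attempted):

```python
import numpy as np

def unfold(T, n):
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def hosvd(X, ranks):
    # Factor U^(n): leading R_n left singular vectors of X_(n).
    U = []
    for n, R in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(X, n), full_matrices=False)
        U.append(u[:, :R])
    # Core: G = X x_1 U1^T x_2 U2^T ... x_N UN^T.
    G = X
    for n, Un in enumerate(U):
        G = np.moveaxis(np.tensordot(Un.T, G, axes=(1, n)), 0, n)
    return G, U

rng = np.random.default_rng(5)
X = rng.standard_normal((6, 7, 8))
G, U = hosvd(X, (6, 7, 8))   # full ranks, so reconstruction is exact

# rebuild X = G x_1 U1 x_2 U2 x_3 U3
Y = G
for n, Un in enumerate(U):
    Y = np.moveaxis(np.tensordot(Un, Y, axes=(1, n)), 0, n)
print(np.allclose(X, Y))  # True
```

With full ranks the semi-orthogonal factors make the reconstruction exact; choosing $R_n \ll I_n$ instead yields the truncated HOSVD, which trades accuracy for the compression the text describes.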