Tensors for Data Processing

Theory, Methods, and Applications

FIRST EDITION

Yipeng Liu
School of Information and Communication Engineering, University of
Electronic Science and Technology of China (UESTC), Chengdu, China
Table of Contents

Cover image

Title page

Copyright

List of contributors

Preface

Chapter 1: Tensor decompositions: computations, applications, and challenges

Abstract

1.1. Introduction

1.2. Tensor operations

1.3. Tensor decompositions

1.4. Tensor processing techniques

1.5. Challenges

References

Chapter 2: Transform-based tensor singular value decomposition in multidimensional image recovery

Abstract

2.1. Introduction

2.2. Recent advances of the tensor singular value decomposition

2.3. Transform-based t-SVD

2.4. Numerical experiments

2.5. Conclusions and new guidelines

References

Chapter 3: Partensor

Abstract

Acknowledgement

3.1. Introduction

3.2. Tensor decomposition

3.3. Tensor decomposition with missing elements

3.4. Distributed memory implementations

3.5. Numerical experiments

3.6. Conclusion

References

Chapter 4: A Riemannian approach to low-rank tensor learning

Abstract

4.1. Introduction

4.2. A brief introduction to Riemannian optimization

4.3. Riemannian Tucker manifold geometry

4.4. Algorithms for tensor learning problems

4.5. Experiments

4.6. Conclusion

References

Chapter 5: Generalized thresholding for low-rank tensor recovery: approaches based on model and learning

Abstract

5.1. Introduction

5.2. Tensor singular value thresholding

5.3. Thresholding based low-rank tensor recovery

5.4. Generalized thresholding algorithms with learning

5.5. Numerical examples

5.6. Conclusion

References

Chapter 6: Tensor principal component analysis

Abstract

6.1. Introduction

6.2. Notations and preliminaries

6.3. Tensor PCA for Gaussian-noisy data

6.4. Tensor PCA for sparsely corrupted data

6.5. Tensor PCA for outlier-corrupted data

6.6. Other tensor PCA methods

6.7. Future work

6.8. Summary

References

Chapter 7: Tensors for deep learning theory

Abstract

7.1. Introduction

7.2. Bounding a function's expressivity via tensorization

7.3. A case study: self-attention networks

7.4. Convolutional and recurrent networks

7.5. Conclusion

References

Chapter 8: Tensor network algorithms for image classification

Abstract

8.1. Introduction

8.2. Background

8.3. Tensorial extensions of support vector machine

8.4. Tensorial extension of logistic regression

8.5. Conclusion

References

Chapter 9: High-performance tensor decompositions for compressing and accelerating deep neural networks

Abstract

9.1. Introduction and motivation

9.2. Deep neural networks

9.3. Tensor networks and their decompositions

9.4. Compressing deep neural networks

9.5. Experiments and future directions

References

Chapter 10: Coupled tensor decompositions for data fusion


Abstract

Acknowledgements

10.1. Introduction

10.2. What is data fusion?

10.3. Decompositions in data fusion

10.4. Applications of tensor-based data fusion

10.5. Fusion of EEG and fMRI: a case study

10.6. Data fusion demos

10.7. Conclusion and prospects

References

Chapter 11: Tensor methods for low-level vision

Abstract

Acknowledgements

11.1. Low-level vision and signal reconstruction

11.2. Methods using raw tensor structure

11.3. Methods using tensorization

11.4. Examples of low-level vision applications

11.5. Remarks

References

Chapter 12: Tensors for neuroimaging

Abstract

12.1. Introduction

12.2. Neuroimaging modalities

12.3. Multidimensionality of the brain

12.4. Tensor decomposition structures

12.5. Applications of tensors in neuroimaging

12.6. Future challenges

12.7. Conclusion

References

Chapter 13: Tensor representation for remote sensing images

Abstract

13.1. Introduction

13.2. Optical remote sensing: HSI and MSI fusion

13.3. Polarimetric synthetic aperture radar: feature extraction

References

Chapter 14: Structured tensor train decomposition for speeding up kernel-based learning

Abstract

14.1. Introduction

14.2. Notations and algebraic background

14.3. Standard tensor decompositions

14.4. Dimensionality reduction based on a train of low-order tensors

14.5. Tensor train algorithm

14.6. Kernel-based classification of high-order tensors

14.7. Experiments

14.8. Conclusion

References

Index
Copyright
Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1650, San Diego, CA 92101, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United
Kingdom

Copyright © 2022 Elsevier Inc. All rights reserved.

MATLAB® is a trademark of The MathWorks, Inc. and is used with
permission.
The MathWorks does not warrant the accuracy of the text or
exercises in this book.
This book's use or discussion of MATLAB® software or related
products does not constitute endorsement or sponsorship by The
MathWorks of a particular pedagogical approach or particular use of
the MATLAB® software.

No part of this publication may be reproduced or transmitted in any
form or by any means, electronic or mechanical, including
photocopying, recording, or any information storage and retrieval
system, without permission in writing from the publisher. Details on
how to seek permission, further information about the Publisher's
permissions policies and our arrangements with organizations such
as the Copyright Clearance Center and the Copyright Licensing
Agency, can be found at our website:
www.elsevier.com/permissions.
This book and the individual contributions contained in it are
protected under copyright by the Publisher (other than as may be
noted herein).

Notices
Knowledge and best practice in this field are constantly changing.
As new research and experience broaden our understanding,
changes in research methods, professional practices, or medical
treatment may become necessary.

Practitioners and researchers must always rely on their own
experience and knowledge in evaluating and using any
information, methods, compounds, or experiments described
herein. In using such information or methods they should be
mindful of their own safety and the safety of others, including
parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the
authors, contributors, or editors, assume any liability for any injury
and/or damage to persons or property as a matter of products
liability, negligence or otherwise, or from any use or operation of
any methods, products, instructions, or ideas contained in the
material herein.

Library of Congress Cataloging-in-Publication Data


A catalog record for this book is available from the Library of
Congress

British Library Cataloguing-in-Publication Data


A catalogue record for this book is available from the British Library

ISBN: 978-0-12-824447-0

For information on all Academic Press publications visit our
website at https://www.elsevier.com/books-and-journals
Publisher: Mara Conner
Acquisitions Editor: Tim Pitts
Editorial Project Manager: Charlotte Rowley
Production Project Manager: Prem Kumar Kaliamoorthi
Designer: Miles Hitchen

Typeset by VTeX
List of contributors
Kim Batselier Delft Center for Systems and Control, Delft
University of Technology, Delft, The Netherlands
Yingyue Bi School of Information and Communication
Engineering, University of Electronic Science and Technology of
China (UESTC), Chengdu, China
Jérémie Boulanger CRIStAL, Université de Lille, Villeneuve
d'Ascq, France
Rémy Boyer CRIStAL, Université de Lille, Villeneuve d'Ascq,
France
Cesar F. Caiafa
Instituto Argentino de Radioastronomía – CCT La Plata, CONICET /
CIC-PBA / UNLP, Villa Elisa, Argentina
RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
Jocelyn Chanussot LJK, CNRS, Grenoble INP, Inria, Université
Grenoble, Alpes, Grenoble, France
Christos Chatzichristos KU Leuven, Department of Electrical
Engineering (ESAT), STADIUS Center for Dynamical Systems, Signal
Processing and Data Analytics, Leuven, Belgium
Cong Chen Department of Electrical and Electronic Engineering,
The University of Hong Kong, Pokfulam Road, Hong Kong
Nadav Cohen School of Computer Science, Hebrew University of
Jerusalem, Jerusalem, Israel
Xudong Cui School of Mathematics, Tianjin University, Tianjin,
China
André L.F. de Almeida Department of Teleinformatics
Engineering, Federal University of Fortaleza, Fortaleza, Brazil
Aybüke Erol Circuits and Systems, Department of
Microelectronics, Delft University of Technology, Delft, The
Netherlands
Yiming Fang Department of Computer Science, Columbia
University, New York, NY, United States
Gérard Favier Laboratoire I3S, Université Côte d'Azur, CNRS,
Sophia Antipolis, France
Borbála Hunyadi Circuits and Systems, Department of
Microelectronics, Delft University of Technology, Delft, The
Netherlands
Pratik Jawanpuria Microsoft, Hyderabad, India
Tai-Xiang Jiang School of Economic Information Engineering,
Southwestern University of Finance and Economics, Chengdu,
Sichuan, China
Paris A. Karakasis School of Electrical and Computer Engineering,
Technical University of Crete, Chania, Greece
Ouafae Karmouda CRIStAL, Université de Lille, Villeneuve
d'Ascq, France
Hiroyuki Kasai Waseda University, Tokyo, Japan
Eleftherios Kofidis Dept. of Statistics and Insurance Science,
University of Piraeus, Piraeus, Greece
Christos Kolomvakis School of Electrical and Computer
Engineering, Technical University of Crete, Chania, Greece
Yoav Levine School of Computer Science, Hebrew University of
Jerusalem, Jerusalem, Israel
Zechu Li Department of Computer Science, Columbia University,
New York, NY, United States
Athanasios P. Liavas School of Electrical and Computer
Engineering, Technical University of Crete, Chania, Greece
Zhouchen Lin Key Lab. of Machine Perception, School of EECS,
Peking University, Beijing, China
Xiao-Yang Liu
Department of Computer Science and Engineering, Shanghai Jiao
Tong University, Shanghai, China
Department of Electrical Engineering, Columbia University, New
York, NY, United States
Yipeng Liu School of Information and Communication
Engineering, University of Electronic Science and Technology of
China (UESTC), Chengdu, China
Zhen Long School of Information and Communication
Engineering, University of Electronic Science and Technology of
China (UESTC), Chengdu, China
George Lourakis Neurocom, S.A, Athens, Greece
Canyi Lu Carnegie Mellon University, Pittsburgh, PA, United
States
Liangfu Lu School of Mathematics, Tianjin University, Tianjin,
China
Yingcong Lu School of Information and Communication
Engineering, University of Electronic Science and Technology of
China (UESTC), Chengdu, China
George Lykoudis Neurocom, S.A, Athens, Greece
Bamdev Mishra Microsoft, Hyderabad, India
Michael K. Ng Department of Mathematics, The University of
Hong Kong, Pokfulam, Hong Kong
Ioannis Marios Papagiannakos School of Electrical and Computer
Engineering, Technical University of Crete, Chania, Greece
Bo Ren Key Laboratory of Intelligent Perception and Image
Understanding of Ministry of Education of China, Xidian University,
Xi'an, China
Or Sharir School of Computer Science, Hebrew University of
Jerusalem, Jerusalem, Israel
Amnon Shashua School of Computer Science, Hebrew University
of Jerusalem, Jerusalem, Israel
Ioanna Siaminou School of Electrical and Computer Engineering,
Technical University of Crete, Chania, Greece
Christos Tsalidis Neurocom, S.A, Athens, Greece
Simon Van Eyndhoven
KU Leuven, Department of Electrical Engineering (ESAT), STADIUS
Center for Dynamical Systems, Signal Processing and Data
Analytics, Leuven, Belgium
icometrix, Leuven, Belgium
Sabine Van Huffel KU Leuven, Department of Electrical
Engineering (ESAT), STADIUS Center for Dynamical Systems, Signal
Processing and Data Analytics, Leuven, Belgium
Anwar Walid Nokia Bell Labs, Murray Hill, NJ, United States
Fei Wen Department of Electronic Engineering, Shanghai Jiao
Tong University, Shanghai, China
Noam Wies School of Computer Science, Hebrew University of
Jerusalem, Jerusalem, Israel
Ngai Wong Department of Electrical and Electronic Engineering,
The University of Hong Kong, Pokfulam Road, Hong Kong
Zebin Wu School of Computer Science and Engineering, Nanjing
University of Science and Technology, Nanjing, China
Yang Xu School of Computer Science and Engineering, Nanjing
University of Science and Technology, Nanjing, China
Liuqing Yang Department of Computer Science, Columbia
University, New York, NY, United States
Fei Ye School of Computer Science and Engineering, Nanjing
University of Science and Technology, Nanjing, China
Tatsuya Yokota
Nagoya Institute of Technology, Aichi, Japan
RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
Zhonghao Zhang School of Information and Communication
Engineering, University of Electronic Science and Technology of
China (UESTC), Chengdu, China
Qibin Zhao
RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
Guangdong University of Technology, Guangzhou, China
Xi-Le Zhao School of Mathematical Sciences/Research Center for
Image and Vision Computing, University of Electronic Science and
Technology of China, Chengdu, Sichuan, China
Pan Zhou SEA AI Lab, Singapore, Singapore
Ce Zhu School of Information and Communication Engineering,
University of Electronic Science and Technology of China (UESTC),
Chengdu, China
Yassine Zniyed Université de Toulon, Aix-Marseille Université,
CNRS, LIS, Toulon, France
Preface
Yipeng Liu Chengdu, China

This book provides an overview of tensors for data processing,
covering computing theories, processing methods, and engineering
applications. The tensor extensions of a series of classical
multidimensional data processing techniques are discussed in this
book. Many thanks go to all the contributors. Students can read this
book to get an overall understanding, researchers can update their
knowledge on recent research advances in the field, and engineers
can refer to implementations for various applications.
The first chapter is an introduction to tensor decomposition. In the
following, the book provides variants of tensor decompositions with
their efficient and effective solutions, including some parallel
algorithms, Riemannian algorithms, and generalized thresholding
algorithms. Some tensor-based machine learning methods are
summarized in detail, including tensor completion, tensor principal
component analysis, support tensor machine, tensor-based kernel
learning, tensor-based deep learning, etc. To demonstrate that
tensors can effectively and systematically enhance performance in
practical engineering problems, this book gives implemental details
of many applications, such as signal recovery, recommender
systems, climate forecasting, image clustering, image classification,
network compression, data fusion, image enhancement,
neuroimaging, and remote sensing.
I sincerely hope this book can serve to introduce tensors to more
data scientists and engineers. As a natural representation of
multidimensional data, tensors can substantially reduce the
information loss incurred by matrix representations of multiway data,
and tensor operators can model more connections than their matrix
counterparts. The related advances in applied mathematics allow us
to move from matrices to tensors for data processing. I hope this book
motivates novel tensor theories and new data processing methods,
and stimulates the development of a wide range of practical
applications.
Aug. 10, 2021
Chapter 1: Tensor
decompositions: computations,
applications, and challenges
Yingyue Bi; Yingcong Lu; Zhen Long; Ce Zhu; Yipeng Liu School of
Information and Communication Engineering, University of Electronic
Science and Technology of China (UESTC), Chengdu, China

Abstract
Many classical data processing techniques rely on the
representation and computation of vector and matrix forms,
where the vectorization or matricization is often employed on
multidimensional data. However, during this process, vital
underlying structure information can be lost, leading to
suboptimal performance in processing. As a natural
representation for multidimensional data, the tensor has drawn
a great deal of attention over the past several years. Tensor
decompositions are effective tools for tensor analysis. They have
been intensively investigated in a number of areas, such as
signal processing, machine learning, neuroscience,
communication, psychometrics, chemometrics, biometrics,
quantum physics, and quantum chemistry. In this chapter, we
give a brief introduction to the basic operations of tensors and
illustrate them with a few examples in data processing. To better
manipulate tensor data, a number of tensor operators are
defined, especially different tensor products. In addition, we
elaborate on a series of important tensor decompositions and
their properties, including CANDECOMP/PARAFAC
decomposition, Tucker decomposition, tensor singular value
decomposition, block term decomposition, tensor tree
decomposition, tensor train decomposition, and tensor ring
decomposition. We further present a list of machine learning
techniques based on tensor decompositions, such as tensor
dictionary learning, tensor completion, robust tensor principal
component analysis, tensor regression, statistical tensor
classification, coupled tensor fusion, and deep tensor neural
networks. Finally, we discuss the challenges and opportunities
for data processing under a tensor scheme.

Keywords
Tensor computation; Tensor norm; Tensor product;
CANDECOMP/PARAFAC decomposition; Tucker decomposition;
Tensor singular value decomposition; Block term decomposition;
Tensor network; Higher-order tensor; Multilinear subspace analysis
Chapter Outline

1.1 Introduction
1.1.1 What is a tensor?
1.1.2 Why do we need tensors?
1.2 Tensor operations
1.2.1 Tensor notations
1.2.2 Matrix operators
1.2.3 Tensor transformations
1.2.4 Tensor products
1.2.5 Structural tensors
1.2.6 Summary
1.3 Tensor decompositions
1.3.1 Tucker decomposition
1.3.2 Canonical polyadic decomposition
1.3.3 Block term decomposition
1.3.4 Tensor singular value decomposition
1.3.5 Tensor network
1.4 Tensor processing techniques
1.5 Challenges
References

1.1 Introduction
1.1.1 What is a tensor?
The tensor can be seen as a higher-order generalization of vector and
matrix, which normally has three or more modes (ways) [1]. For
example, a color image is a third-order tensor. It has two spatial
modes and one channel mode. Similarly, a color video is a fourth-
order tensor; its extra mode denotes time.
As special forms of tensors, a vector $\mathbf{a} \in \mathbb{R}^{I}$ is a first-order tensor
whose $i$-th entry (scalar) is $a_i$, and a matrix $\mathbf{A} \in \mathbb{R}^{I_1 \times I_2}$ is a second-order
tensor whose $(i_1, i_2)$-th element is $a_{i_1, i_2}$. A general $N$-th-order tensor can
be mathematically denoted as $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ and its
$(i_1, i_2, \cdots, i_N)$-th entry is $x_{i_1, i_2, \cdots, i_N}$. For example, a third-order tensor
$\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ is illustrated in Fig. 1.1.

FIGURE 1.1 A third-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$.

1.1.2 Why do we need tensors?


Tensors play important roles in a number of applications, such as
signal processing, machine learning, biomedical engineering,
neuroscience, computer vision, communication, psychometrics, and
chemometrics. They can provide a concise mathematical framework
for formulating and solving problems in those fields.
Here are a few cases involving tensor frameworks:

• Many spatial-temporal signals in speech and image
processing are multidimensional. Tensor factorization-based
techniques can effectively extract features for enhancement,
classification, regression, etc. For example, nonnegative
canonical polyadic (CP) decomposition can be used for
speech signal separation where the first two components of
CP decomposition represent frequency and time structure of
the signal and the last component is the coefficient matrix
[2].
• The fluorescence excitation–emission data, commonly used
in chemistry, medicine, and food science, has several
chemical components with different concentrations. It can be
denoted as a third-order tensor; its three modes represent
sample, excitation, and emission. Taking advantage of CP
decomposition, the tensor can be factorized into three factor
matrices: relative excitation spectral matrix, relative emission
spectral matrix, and relative concentration matrix. In this
way, tensor decomposition can be applied to analyze the
components and corresponding concentrations in each
sample [3].
• Social data often have multidimensional structures, which
can be exploited by tensor-based techniques for data mining.
For example, the three modes of chat data are user, keyword,
and time. Tensor analysis can reveal the communication
patterns and the hidden structures in social networks, and
this can benefit tasks like recommender systems [4].

1.2 Tensor operations


In this section, we first introduce tensor notations, i.e., fibers and
slices, and then demonstrate how to represent tensors in a graphical
way. Before we discuss tensor operations, several matrix operations
are reviewed.

1.2.1 Tensor notations

Subtensors, such as fibers and slices, can be formed from the original
tensor. A fiber is defined by fixing all the indices but one, and a slice
is defined by fixing all but two indices. For a third-order tensor
$\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, its mode-1, mode-2, and mode-3 fibers are denoted by
$\mathbf{x}_{:, i_2, i_3}$, $\mathbf{x}_{i_1, :, i_3}$, and $\mathbf{x}_{i_1, i_2, :}$, where $i_1 = 1, \cdots, I_1$, $i_2 = 1, \cdots, I_2$, and
$i_3 = 1, \cdots, I_3$; they are illustrated in Fig. 1.2. Its horizontal slices
$\mathbf{X}_{i_1, :, :}$, lateral slices $\mathbf{X}_{:, i_2, :}$, and frontal slices
$\mathbf{X}_{:, :, i_3}$ are shown in Fig. 1.3. For ease of notation, we
refer to the frontal slice $\mathbf{X}_{:, :, i_3}$ of $\mathcal{X}$ as $\mathbf{X}^{(i_3)}$ in some formulas.

FIGURE 1.2 The illustration of mode-1 fibers $\mathbf{x}_{:, i_2, i_3}$,
mode-2 fibers $\mathbf{x}_{i_1, :, i_3}$, and mode-3 fibers $\mathbf{x}_{i_1, i_2, :}$ with
i1 = 1,⋯,I1, i2 = 1,⋯,I2 and i3 = 1,⋯,I3.

FIGURE 1.3 The illustration of horizontal slices $\mathbf{X}_{i_1, :, :}$,
i1 = 1,⋯,I1, lateral slices $\mathbf{X}_{:, i_2, :}$, i2 = 1,⋯,I2, and frontal
slices $\mathbf{X}_{:, :, i_3}$, i3 = 1,⋯,I3.
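The fiber and slice notation above maps directly onto array indexing. The following is a minimal sketch in NumPy (an illustrative choice; the chapter itself shows no code), with every index fixed to an arbitrary value:

```python
import numpy as np

# A third-order tensor X of size I1 x I2 x I3 = 3 x 4 x 2.
X = np.arange(24).reshape(3, 4, 2)

# Fibers: fix all indices but one.
mode1_fiber = X[:, 1, 0]   # x(:, i2, i3), length I1 = 3
mode2_fiber = X[0, :, 1]   # x(i1, :, i3), length I2 = 4
mode3_fiber = X[2, 3, :]   # x(i1, i2, :), length I3 = 2

# Slices: fix all indices but two.
horizontal = X[0, :, :]    # X(i1, :, :), size I2 x I3
lateral    = X[:, 2, :]    # X(:, i2, :), size I1 x I3
frontal    = X[:, :, 1]    # X(:, :, i3), size I1 x I2
```

Each fiber is a vector along one mode, and each slice is a matrix obtained by freezing the remaining mode.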

Other than the aforementioned notations, there is another way to
denote tensors and their operations [5]. Taking advantage of
graphical representations, tensors can be denoted by nodes and
edges in a straightforward way. Graphical representations for
scalars, vectors, matrices, and tensors are shown in Fig. 1.4. The
number next to the edge represents the indices of the corresponding
mode.
FIGURE 1.4 Graphical representations of scalar, vector,
matrix and tensor.

1.2.2 Matrix operators

Definition 1.2.1
(Matrix trace [6]) The trace of matrix $\mathbf{A} \in \mathbb{R}^{I \times I}$ is obtained by
summing all the diagonal entries of A, i.e., $\operatorname{tr}(\mathbf{A}) = \sum_{i=1}^{I} a_{i,i}$.

Definition 1.2.2
($\ell_p$-norm [6]) For matrix $\mathbf{A} \in \mathbb{R}^{I_1 \times I_2}$, its $\ell_p$-norm is defined as

$$\|\mathbf{A}\|_p = \Big( \sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} |a_{i_1, i_2}|^p \Big)^{1/p}. \tag{1.1}$$

Definition 1.2.3
(Matrix nuclear norm [7]) The nuclear norm of matrix A is
denoted as $\|\mathbf{A}\|_* = \sum_i \sigma_i$, where $\sigma_i$ is the i-th largest singular
value of A.

Definition 1.2.4
(Hadamard product [8]) The Hadamard product for matrices
$\mathbf{A} \in \mathbb{R}^{I_1 \times I_2}$ and $\mathbf{B} \in \mathbb{R}^{I_1 \times I_2}$ is defined as $\mathbf{C} = \mathbf{A} \circledast \mathbf{B} \in \mathbb{R}^{I_1 \times I_2}$ with

$$c_{i_1, i_2} = a_{i_1, i_2}\, b_{i_1, i_2}. \tag{1.2}$$

Definition 1.2.5
(Kronecker product [9]) The Kronecker product of matrices
$\mathbf{A} \in \mathbb{R}^{I_1 \times I_2}$ and $\mathbf{B} \in \mathbb{R}^{J_1 \times J_2}$ is defined as
$\mathbf{C} = \mathbf{A} \otimes \mathbf{B} \in \mathbb{R}^{I_1 J_1 \times I_2 J_2}$, which can be written mathematically as

$$\mathbf{A} \otimes \mathbf{B} =
\begin{bmatrix}
a_{1,1}\mathbf{B} & a_{1,2}\mathbf{B} & \cdots & a_{1,I_2}\mathbf{B} \\
\vdots & \vdots & \ddots & \vdots \\
a_{I_1,1}\mathbf{B} & a_{I_1,2}\mathbf{B} & \cdots & a_{I_1,I_2}\mathbf{B}
\end{bmatrix}. \tag{1.3}$$

Based on the Kronecker product, a lot of useful properties can be
derived. Given matrices A, B, C, D, we have

$$(\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = (\mathbf{A}\mathbf{C}) \otimes (\mathbf{B}\mathbf{D}), \quad
(\mathbf{A} \otimes \mathbf{B})^{\mathrm{T}} = \mathbf{A}^{\mathrm{T}} \otimes \mathbf{B}^{\mathrm{T}}, \quad
(\mathbf{A} \otimes \mathbf{B})^{\dagger} = \mathbf{A}^{\dagger} \otimes \mathbf{B}^{\dagger}, \tag{1.4}$$

where $(\cdot)^{\mathrm{T}}$ and $(\cdot)^{\dagger}$ represent the transpose and Moore–Penrose inverse
of a matrix.

Definition 1.2.6
(Khatri–Rao product [10]) The Khatri–Rao product of matrices
$\mathbf{A} = [\mathbf{a}_1, \cdots, \mathbf{a}_R] \in \mathbb{R}^{I \times R}$ and $\mathbf{B} = [\mathbf{b}_1, \cdots, \mathbf{b}_R] \in \mathbb{R}^{J \times R}$ is defined as

$$\mathbf{A} \odot \mathbf{B} = [\mathbf{a}_1 \otimes \mathbf{b}_1, \mathbf{a}_2 \otimes \mathbf{b}_2, \cdots, \mathbf{a}_R \otimes \mathbf{b}_R] \in \mathbb{R}^{IJ \times R}. \tag{1.5}$$

Similar to the Kronecker product, the Khatri–Rao product also has
some convenient properties, such as

$$(\mathbf{A} \odot \mathbf{B})^{\mathrm{T}} (\mathbf{A} \odot \mathbf{B}) = (\mathbf{A}^{\mathrm{T}}\mathbf{A}) \circledast (\mathbf{B}^{\mathrm{T}}\mathbf{B}). \tag{1.6}$$
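These matrix products are easy to check numerically. A small NumPy sketch (illustrative only; `np.kron` implements the Kronecker product, and the Khatri–Rao product is built column by column):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 4))
B = rng.random((5, 4))

# Kronecker product: size (3*5) x (4*4).
K = np.kron(A, B)

# Khatri-Rao product: column-wise Kronecker, size (3*5) x 4.
KR = np.column_stack([np.kron(A[:, r], B[:, r]) for r in range(4)])

# Property check: (A ⊙ B)^T (A ⊙ B) = (A^T A) ⊛ (B^T B).
lhs = KR.T @ KR
rhs = (A.T @ A) * (B.T @ B)
print(np.allclose(lhs, rhs))  # True
```

The same pattern can be used to verify the transpose property of the Kronecker product.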

1.2.3 Tensor transformations

Definition 1.2.7
(Tensor transpose [11]) Given a tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, whose frontal
slices are $\mathbf{A}^{(i_3)}$, $i_3 = 1, \cdots, I_3$, its transpose $\mathcal{A}^{\mathrm{T}} \in \mathbb{R}^{I_2 \times I_1 \times I_3}$ is acquired by first
transposing each of the frontal slices and then placing them in the
order of $\mathbf{A}^{(1)\mathrm{T}}$, $\mathbf{A}^{(I_3)\mathrm{T}}$, $\mathbf{A}^{(I_3-1)\mathrm{T}}$, ⋯, $\mathbf{A}^{(2)\mathrm{T}}$ along the third
mode.

Fig. 1.5 demonstrates the tensor transpose.

FIGURE 1.5 A graphical illustration of the tensor transpose.

Definition 1.2.8
(Tensor mode-n matricization [1]) For tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, its
matricization along the n-th mode is denoted as
$\mathbf{X}_{(n)} \in \mathbb{R}^{I_n \times (I_1 \cdots I_{n-1} I_{n+1} \cdots I_N)}$, as shown in Fig. 1.6. It rearranges the fibers of
the n-th mode to form the columns of $\mathbf{X}_{(n)}$. For instance, there
exists a third-order tensor whose frontal slices are

(1.7)

Thus, its mode-1, mode-2, and mode-3 matricizations can be
written as

(1.8)

(1.9)

(1.10)

FIGURE 1.6 A graphical illustration of tensor mode-n
matricization.
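Mode-n matricization can be sketched in a few lines of NumPy (an illustrative implementation; it follows the common little-endian column ordering, though other orderings of the non-unfolded modes are also used in the literature):

```python
import numpy as np

def unfold(X, n):
    """Mode-n matricization: the mode-n fibers of X become the columns
    of the result. The remaining modes are ordered little-endian
    (Fortran order), i.e., the earliest remaining mode varies fastest."""
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

X = np.arange(24).reshape(3, 4, 2)
X1 = unfold(X, 0)  # size 3 x 8
X2 = unfold(X, 1)  # size 4 x 6
X3 = unfold(X, 2)  # size 2 x 12
```

Each column of `unfold(X, n)` is exactly one mode-n fiber, so the first column corresponds to all other indices fixed at their first value.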
Definition 1.2.9
(Tensor n-th canonical matricization [12]) For a fixed index
$n \in \{1, 2, \cdots, N\}$, the n-th canonical matricization of tensor
$\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ can be defined as

$$\mathbf{X}_{\langle n \rangle}\big(\overline{i_1 i_2 \cdots i_n},\, \overline{i_{n+1} \cdots i_N}\big) = x_{i_1, i_2, \cdots, i_N}, \quad
\mathbf{X}_{\langle n \rangle} \in \mathbb{R}^{(I_1 I_2 \cdots I_n) \times (I_{n+1} \cdots I_N)}, \tag{1.11}$$

where $\overline{i_1 i_2 \cdots i_n}$ and $\overline{i_{n+1} \cdots i_N}$ are multiindices.

Take the multiindex $\overline{i_1 i_2 \cdots i_N}$ as an example, with $i_n = 1, \cdots, I_n$,
$n = 1, \cdots, N$. It can either be defined using the little-endian
convention (reverse lexicographic ordering) [13]

$$\overline{i_1 i_2 \cdots i_N} = i_1 + (i_2 - 1) I_1 + (i_3 - 1) I_1 I_2 + \cdots + (i_N - 1) I_1 I_2 \cdots I_{N-1}, \tag{1.12}$$

or the big-endian convention (colexicographic ordering)

$$\overline{i_1 i_2 \cdots i_N} = i_N + (i_{N-1} - 1) I_N + \cdots + (i_1 - 1) I_2 I_3 \cdots I_N. \tag{1.13}$$
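With the little-endian convention, the n-th canonical matricization is just a reshape in Fortran order, grouping the first n modes into rows. A minimal sketch (names and shapes are illustrative):

```python
import numpy as np

def canonical_matricize(X, n):
    """n-th canonical matricization X_<n>: the first n modes index the
    rows and the remaining modes index the columns, both flattened in
    little-endian (Fortran) order as in Eq. (1.12)."""
    rows = int(np.prod(X.shape[:n]))
    return np.reshape(X, (rows, -1), order='F')

X = np.arange(24).reshape(2, 3, 4)
M = canonical_matricize(X, 2)   # size (2*3) x 4
```

Row index `i1 + i2*I1` of `M` holds entry `X[i1, i2, i3]` in column `i3`, matching the multiindex formula above (with 0-based indexing).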

1.2.4 Tensor products


Definition 1.2.10
(Tensor inner product [1]) The inner product of two tensors
$\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ and $\mathcal{Y} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, shown in Fig. 1.7, is expressed as

$$\langle \mathcal{X}, \mathcal{Y} \rangle = \sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} x_{i_1, \cdots, i_N}\, y_{i_1, \cdots, i_N}. \tag{1.14}$$

FIGURE 1.7 A graphical illustration of the tensor inner
product.

Definition 1.2.11
(Tensor norm [1]) The norm of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is the square
root of the summation over the squares of all its elements, which
can be expressed as

$$\|\mathcal{X}\| = \sqrt{\langle \mathcal{X}, \mathcal{X} \rangle} = \sqrt{\sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} x_{i_1, \cdots, i_N}^2}. \tag{1.15}$$
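Both the inner product and the tensor norm reduce to element-wise sums, as this short NumPy sketch (illustrative shapes) shows:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((3, 4, 2))
Y = rng.random((3, 4, 2))

# Inner product (1.14): sum of element-wise products over all entries.
inner = np.sum(X * Y)

# Tensor norm (1.15): square root of the inner product of X with itself;
# this is the Frobenius norm of the flattened tensor.
norm = np.sqrt(np.sum(X ** 2))
```

Flattening does not change either quantity, so `norm` agrees with the vector 2-norm of `X.ravel()`.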

Definition 1.2.12
(Tensor mode-n product with a matrix [1]) The tensor mode-n
product of $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ and matrix $\mathbf{U} \in \mathbb{R}^{J \times I_n}$ is denoted as

$$\mathcal{Y} = \mathcal{X} \times_n \mathbf{U} \in \mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times J \times I_{n+1} \times \cdots \times I_N}, \tag{1.16}$$

or element-wise,

$$y_{i_1, \cdots, i_{n-1}, j, i_{n+1}, \cdots, i_N} = \sum_{i_n=1}^{I_n} x_{i_1, i_2, \cdots, i_N}\, u_{j, i_n}. \tag{1.17}$$

A visual illustration is shown in Fig. 1.8.

FIGURE 1.8 A graphical illustration of the tensor mode-n
product.

Taking advantage of tensor matricization, Eq. (1.16) can also be
expressed in an unfolded form as

$$\mathbf{Y}_{(n)} = \mathbf{U} \mathbf{X}_{(n)}. \tag{1.18}$$

For example, given the tensor in Eq. (1.7) and a matrix $\mathbf{U}$,
the mode-n product will yield a tensor whose
frontal slices are

(1.19)
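The mode-n product can be sketched directly with a tensor contraction, which is equivalent to the unfolded form $\mathbf{Y}_{(n)} = \mathbf{U}\mathbf{X}_{(n)}$ (NumPy, illustrative shapes):

```python
import numpy as np

def mode_n_product(X, U, n):
    """Mode-n product Y = X x_n U: every mode-n fiber of X is
    multiplied by U. tensordot contracts axis 1 of U with axis n of X
    and puts the new mode first, so we move it back to position n."""
    return np.moveaxis(np.tensordot(U, X, axes=(1, n)), 0, n)

rng = np.random.default_rng(2)
X = np.arange(24, dtype=float).reshape(3, 4, 2)
U = rng.random((5, 4))

Y = mode_n_product(X, U, 1)   # shape (3, 5, 2): mode 2 grows from 4 to 5
```

Note that only the size of the n-th mode changes; the order of the tensor stays the same, in line with the remark after Definition 1.2.13.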
Definition 1.2.13
(Tensor mode-n product with a vector [1]) The tensor mode-n
product of the tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ and vector $\mathbf{v} \in \mathbb{R}^{I_n}$ is denoted as

$$\mathcal{Y} = \mathcal{X} \,\bar{\times}_n\, \mathbf{v} \in \mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times I_{n+1} \times \cdots \times I_N}, \tag{1.20}$$

with entries

$$y_{i_1, \cdots, i_{n-1}, i_{n+1}, \cdots, i_N} = \sum_{i_n=1}^{I_n} x_{i_1, i_2, \cdots, i_N}\, v_{i_n}. \tag{1.21}$$

For example, given the tensor in Eq. (1.7) and a vector $\mathbf{v}$, we have

(1.22)

It can be clearly seen that multiplying a tensor by a matrix does not
change the number of ways of the tensor. However, if a tensor is
multiplied by a vector, the number of ways decreases by one.

Definition 1.2.14
(t-product [11]) The t-product of $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ and $\mathcal{B} \in \mathbb{R}^{I_2 \times J \times I_3}$ is
defined as

$$\mathcal{C} = \mathcal{A} * \mathcal{B} = \operatorname{fold}\big(\operatorname{bcirc}(\mathcal{A}) \cdot \operatorname{unfold}(\mathcal{B})\big) \in \mathbb{R}^{I_1 \times J \times I_3}, \tag{1.23}$$

where

$$\operatorname{unfold}(\mathcal{B}) =
\begin{bmatrix} \mathbf{B}^{(1)} \\ \mathbf{B}^{(2)} \\ \vdots \\ \mathbf{B}^{(I_3)} \end{bmatrix}$$

represents the block matrix [11] of $\mathcal{B}$, and

$$\operatorname{bcirc}(\mathcal{A}) =
\begin{bmatrix}
\mathbf{A}^{(1)} & \mathbf{A}^{(I_3)} & \cdots & \mathbf{A}^{(2)} \\
\mathbf{A}^{(2)} & \mathbf{A}^{(1)} & \cdots & \mathbf{A}^{(3)} \\
\vdots & \vdots & \ddots & \vdots \\
\mathbf{A}^{(I_3)} & \mathbf{A}^{(I_3-1)} & \cdots & \mathbf{A}^{(1)}
\end{bmatrix}$$

is the block-circulant matrix [11] of $\mathcal{A}$, where $\mathbf{A}^{(i_3)}$ and $\mathbf{B}^{(i_3)}$,
$i_3 = 1, \cdots, I_3$, represent the $i_3$-th frontal slices of $\mathcal{A}$ and $\mathcal{B}$, respectively.
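Since a block-circulant matrix is diagonalized by the discrete Fourier transform, the t-product is usually computed in the Fourier domain rather than by forming bcirc(A) explicitly. A minimal sketch (NumPy, illustrative shapes):

```python
import numpy as np

def t_product(A, B):
    """t-product C = A * B for A of size I1 x I2 x I3 and B of size
    I2 x J x I3. The circular convolution along the third mode turns
    into independent frontal-slice matrix products after an FFT."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ikn,kjn->ijn', Af, Bf)   # slice-wise products
    return np.real(np.fft.ifft(Cf, axis=2))

rng = np.random.default_rng(3)
A = rng.random((3, 4, 5))
B = rng.random((4, 2, 5))
C = t_product(A, B)   # shape (3, 2, 5)
```

For real inputs the result is real, so the imaginary part discarded by `np.real` is only numerical round-off.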

Definition 1.2.15
(Tensor contraction [5]) Given two tensors $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_M}$ and
$\mathcal{B} \in \mathbb{R}^{J_1 \times J_2 \times \cdots \times J_N}$, suppose they have L equal indices $\{K_1, K_2, \cdots, K_L\}$ in
$\{I_1, I_2, \cdots, I_M\}$ and $\{J_1, J_2, \cdots, J_N\}$. The contraction of these two tensors
yields an $(M + N - 2L)$-th-order tensor $\mathcal{C}$, whose entries are
obtained by multiplying the corresponding entries of $\mathcal{A}$ and $\mathcal{B}$ and
summing over the L shared indices. (1.24)

A graphical illustration of tensor contraction is shown in Fig. 1.9.

FIGURE 1.9 Graphical representation of contraction of
two tensors, $\mathcal{A} \in \mathbb{R}^{I_1 \times \cdots \times I_M}$ and $\mathcal{B} \in \mathbb{R}^{J_1 \times \cdots \times J_N}$, where
{K1,K2,⋯,KL} denotes the L equal indices in {I1,I2,⋯,IM}
and {J1,J2,⋯,JN}.

For example, given tensors $\mathcal{A} \in \mathbb{R}^{3 \times 4 \times 2 \times 6 \times 7}$ and $\mathcal{B} \in \mathbb{R}^{2 \times 5 \times 7 \times 8 \times 4}$, based on
the aforementioned definition, we can conclude that $K_1 = I_2 = J_5 = 4$,
$K_2 = I_3 = J_1 = 2$, and $K_3 = I_5 = J_3 = 7$. As shown in Fig. 1.10,
the result of tensor contraction $\mathcal{C}$ is of the size of $3 \times 6 \times 5 \times 8$,
and its entries are

$$c_{i_1, i_4, j_2, j_4} = \sum_{i_2=1}^{4} \sum_{i_3=1}^{2} \sum_{i_5=1}^{7} a_{i_1, i_2, i_3, i_4, i_5}\, b_{i_3, j_2, i_5, j_4, i_2}. \tag{1.25}$$

FIGURE 1.10 The contraction of two tensors,


and , where K1 = I2 = J5 = 4, K2 = I3 = J1 = 2,
K3 = I5 = J3 = 7, I1 = 3, I4 = 6, J2 = 5, and J4 = 8.
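The worked example in Fig. 1.10 maps directly onto a single `einsum` call; here is a sketch (NumPy, with random data standing in for the tensors):

```python
import numpy as np

rng = np.random.default_rng(4)
# A has shape (I1,...,I5) = (3, 4, 2, 6, 7); B has shape
# (J1,...,J5) = (2, 5, 7, 8, 4). Shared modes: I2<->J5, I3<->J1, I5<->J3.
A = rng.random((3, 4, 2, 6, 7))
B = rng.random((2, 5, 7, 8, 4))

# Repeated labels k, m, n are the contracted indices K1, K2, K3.
C = np.einsum('akmbn,mcnrk->abcr', A, B)   # result size (3, 6, 5, 8)
```

The free labels `a, b` come from A (sizes 3 and 6) and `c, r` from B (sizes 5 and 8), so the result is a fourth-order tensor, consistent with M + N − 2L = 5 + 5 − 6 = 4.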

Consider the special case when $L = 1$ and $K_1 = I_m = J_n$, as
demonstrated in Fig. 1.11. The contraction of tensors $\mathcal{A}$
and $\mathcal{B}$ results in an $(M + N - 2)$-th-order tensor $\mathcal{C}$,
whose entries can be calculated by

$$c_{i_1, \cdots, i_{m-1}, i_{m+1}, \cdots, i_M, j_1, \cdots, j_{n-1}, j_{n+1}, \cdots, j_N}
= \sum_{k=1}^{K_1} a_{i_1, \cdots, i_{m-1}, k, i_{m+1}, \cdots, i_M}\, b_{j_1, \cdots, j_{n-1}, k, j_{n+1}, \cdots, j_N}. \tag{1.26}$$

FIGURE 1.11 A graphical representation of contraction over
two tensors, $\mathcal{A}$ and $\mathcal{B}$, where
K 1 = I m = Jn .

1.2.5 Structural tensors

Definition 1.2.16
(Identity tensor [11]) An identity tensor $\mathcal{I} \in \mathbb{R}^{I \times I \times I_3}$ is a tensor whose first
frontal slice is an identity matrix and the rest are zero matrices.

Definition 1.2.17
(Orthogonal tensor [14]) Using the t-product, an orthogonal tensor
$\mathcal{Q}$ is defined as one satisfying

$$\mathcal{Q}^{\mathrm{T}} * \mathcal{Q} = \mathcal{Q} * \mathcal{Q}^{\mathrm{T}} = \mathcal{I}. \tag{1.27}$$

Definition 1.2.18
(Rank-1 tensor [1]) A rank-1 tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ is formed by the
outer product of vectors, as shown in Fig. 1.12. Its mathematical
formulation can be written as

$$\mathcal{X} = \mathbf{a} \circ \mathbf{b} \circ \mathbf{c}, \tag{1.28}$$

where ∘ means the outer product. Therefore, the entries of $\mathcal{X}$ can
be written as $x_{i_1, i_2, i_3} = a_{i_1} b_{i_2} c_{i_3}$. Generalizing it to the N-th-order
tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, we have

$$\mathcal{X} = \mathbf{a}^{(1)} \circ \mathbf{a}^{(2)} \circ \cdots \circ \mathbf{a}^{(N)}. \tag{1.29}$$

FIGURE 1.12 A rank-1 tensor $\mathcal{X} = \mathbf{a} \circ \mathbf{b} \circ \mathbf{c}$.
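A rank-1 tensor is cheap to build and to verify entry-wise; the following NumPy sketch (illustrative vectors) forms a ∘ b ∘ c:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, -1.0])
c = np.array([2.0, 0.5, 1.0, 4.0])

# Rank-1 tensor X = a ∘ b ∘ c via a three-way outer product.
X = np.einsum('i,j,k->ijk', a, b, c)   # shape (3, 2, 4)

# Entry check: x_{i1 i2 i3} = a_{i1} * b_{i2} * c_{i3}.
print(X[2, 1, 3])   # 3 * (-1) * 4 = -12.0
```

Storing the three vectors takes 3 + 2 + 4 numbers instead of 24, which is the storage saving that rank-based decompositions exploit.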

Definition 1.2.19
(Diagonal tensor [1]) Tensor $\mathcal{X}$ is a diagonal tensor if and only if all
its nonzero elements are on the superdiagonal line. Specifically, if
$\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is a diagonal tensor, then we have $x_{i_1, i_2, \cdots, i_N} \neq 0$ if
and only if $i_1 = i_2 = \cdots = i_N$. A graphical illustration of a third-order
diagonal tensor is demonstrated in Fig. 1.13.

FIGURE 1.13 A third-order diagonal tensor.

Definition 1.2.20
(f-diagonal tensor [14]) An f-diagonal tensor is a tensor with
diagonal frontal slices. A third-order f-diagonal tensor is
visualized in Fig. 1.14.

FIGURE 1.14 An f-diagonal tensor.

1.2.6 Summary
In this section, we first briefly described some notations of tensor
representations. Then by giving basic operations of matrices, we
discussed several common tensor operations, including tensor
transformations and tensor products. Concepts of structural tensors
such as orthogonal tensor, diagonal tensor, and f-diagonal tensor are
also given. It is worth noting that we only focus on the most
commonly used definitions; for more information, please refer to [1],
[5], and [6].

1.3 Tensor decompositions


The idea of tensor decomposition was first put forward by Hitchcock
in 1927 and has been developed by many scholars since then.
Traditionally, it was employed in psychometrics and
chemometrics. With the growing prosperity of tensor decomposition
in [15–18], it began to draw attention in other fields, including signal
processing [19–21], numerical linear algebra [22,23], computer vision
[24], numerical analysis [25,26], and data mining [27–29].
Meanwhile, different decomposition approaches were developed to
meet various requirements.
In this section, we first discuss two cornerstones, Tucker
decomposition and CP decomposition, and go through some other
methods like block term decomposition (BTD), tensor singular value
decomposition (t-SVD), and tensor networks (TNs).

1.3.1 Tucker decomposition


In 1963, Tucker decomposition was first proposed in [30] by Tucker
and refined by Levin and Tucker later on. In 2000, the name
higher-order singular value decomposition (HOSVD) was put
forward by De Lathauwer [31]. Nowadays, the terms Tucker
decomposition and HOSVD are often used interchangeably.
Taking advantage of the mode-n product, Tucker decomposition
can be defined as a multiplication of a core tensor and the matrix
along each mode.

Definition 1.3.1
(Tucker decomposition) Given a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, its Tucker
decomposition is

$$\mathcal{X} = \mathcal{G} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \cdots \times_N \mathbf{U}^{(N)}, \tag{1.30}$$

where $\mathbf{U}^{(n)} \in \mathbb{R}^{I_n \times R_n}$, $n = 1, \cdots, N$, are semi-orthogonal factor
matrices that satisfy $\mathbf{U}^{(n)\mathrm{T}} \mathbf{U}^{(n)} = \mathbf{I}_{R_n}$, and $\mathcal{G} \in \mathbb{R}^{R_1 \times R_2 \times \cdots \times R_N}$ is the core
tensor. Even though the core tensor is usually dense, it is generally
much smaller than $\mathcal{X}$, i.e., $R_n \leq I_n$, $n = 1, \cdots, N$.

We can also write Tucker decomposition in an element-wise style as

$$x_{i_1, i_2, \cdots, i_N} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \cdots \sum_{r_N=1}^{R_N}
g_{r_1, r_2, \cdots, r_N}\, u^{(1)}_{i_1, r_1} u^{(2)}_{i_2, r_2} \cdots u^{(N)}_{i_N, r_N}, \tag{1.31}$$

where $i_n = 1, \cdots, I_n$, $n = 1, \cdots, N$.
Fig. 1.15 is an illustration of Tucker decomposition on a third-
order tensor, i.e., $\mathcal{X} = \mathcal{G} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \times_3 \mathbf{U}^{(3)}$.

FIGURE 1.15 An illustration of Tucker decomposition on a
third-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$. The core tensor is $\mathcal{G} \in \mathbb{R}^{R_1 \times R_2 \times R_3}$ and the
factor matrices are $\mathbf{U}^{(n)} \in \mathbb{R}^{I_n \times R_n}$, $n = 1, 2, 3$.
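A Tucker decomposition with semi-orthogonal factors can be computed by the (truncated) HOSVD: each factor U^(n) is taken from the leading left singular vectors of the mode-n unfolding, and the core is obtained by projecting X onto those factors. A minimal NumPy sketch (not the book's reference implementation; the unfolding convention and helper names are our own):

```python
import numpy as np

def unfold(X, n):
    # Mode-n matricization with little-endian (Fortran) column ordering.
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def mode_n_product(X, U, n):
    # Mode-n product X x_n U (see Definition 1.2.12).
    return np.moveaxis(np.tensordot(U, X, axes=(1, n)), 0, n)

def hosvd(X, ranks):
    """Truncated HOSVD: U^(n) = leading R_n left singular vectors of
    X_(n); core G = X x_1 U1^T x_2 U2^T ... x_N UN^T."""
    U = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    G = X
    for n, Un in enumerate(U):
        G = mode_n_product(G, Un.T, n)
    return G, U

rng = np.random.default_rng(5)
X = rng.random((6, 7, 8))
G, U = hosvd(X, (3, 3, 3))

# Rank-(3,3,3) approximation of X, as in Eq. (1.30).
Xhat = G
for n, Un in enumerate(U):
    Xhat = mode_n_product(Xhat, Un, n)
```

With the full ranks (6, 7, 8) the reconstruction is exact; truncating the ranks trades accuracy for the much smaller core and factors.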
saanut toinen koskettaa.

Kauvas häipyivät arkionnen unelmat ja kotoiset hauskuudet hänen


rinnallaan. Minua tympäisi tuo kunnon mies ja hänen lempivä
katseensa. — Mitäpä voisi hän minulle tarjota? Ei hän voinut saada
verenpisaraakaan minussa liikkeelle.

Siirsin tuolini kauvemmaksi ja aloin, hämilläni ja änkyttäen, outoa


työtä, rukkasten antamista.

Ja tein sen niin huonosti, että pormestari katsoi vastaajan syyt


mitättömiksi ja täydensi omaa puhettaan.

Hän kävi kaunopuheiseksi ja innokkaaksi ja puhui itsensä


lämpimäksi, mutta lämpöä minä vähimmin halusin.

Viimein täytyi minun sanoa hänelle, että aivan hiljattain olin


rakastanut, toista niin, kuin minulle oli mahdollista ja etten vielä
voinut kuvitellakaan voivani antaa toiselle hituistakaan siitä
mieltymyksestä, jota vaimon tulisi tuntea miestänsä kohtaan.

Sanan »vielä» pistin lauseeseen kohteliaisuudesta häntä kohtaan,


mutta hän tarttui siihen, kuin olisi se ollut pääasia puheessani.

»Emme puhu siitä nyt sen pitemmältä.» sanoi hän, »minä voin
odottaa. Te olette minulle niin rakas. — Kesän puoleen tulen
uudelleen, teen kysymykseni vielä kerran ja toivon ajan auttavan
minua.»

Luulen, että hän piti aikaa korkeampana oikeuspaikkana, jossa


jutut hitaasti päätettiin. Herra pormestari »kävi kuninkaissa»
kosimisensa kanssa. Kuningas Aika parantavalla voimallaan
varmaan päättäisi asian hänen edukseen.

Seuraavana päivänä matkusti hän hallitsemaan kaupunkiansa ja


koska täti Agneta näytti vähemmän pettyneeltä, kuin mitä luulin,
otaksun minä, että hän oli tehnyt toivehikkaan aikateoriiansa
tädillekin selväksi.

Keskusteluamme emme uusineet. Vähää ennenkuin hän matkusti,


täti
Agnetan hyväntahtoisesti kadottua, sanoi hän vainen:

»Luvatkaa minulle, armas, kalpea lapsi, että toisinaan muistelette


minua, joka tästä päivästä alkaen, siihen asti, kunnes kesä-aurinko
paistaa, joka päivä tahdon ajatella sitä hetkeä, jolloin taasen saan
pyytää onneani.»

Hän suuteli kättäni monta kertaa ja painoi sitä rintaansa vasten


nöyrällä hellyydellä, joka ei mitään pyytänyt, vaan ainoasi antoi. —
Ja siilon houkutteli hänen hellyytensä minua eikä tympäissyt.

Mutta ei, en tahdo antaa pelkurimaisen suojaa, turvaa ja kotia


kaipaavan tunteen houkutella itseäni. Se ei kumminkaan koskaan
voisi korvata sitä, mitä vainen yksi voipi antaa.

*****

Ajatukset liikkuvat vyöryävässä sekasorrossa nyt, kun teen


tilinpäätökseni vuoden viimeisenä päivänä.

Kirkas ja kylmä on talviyö, kalpeita ovat kuvat niistä ihmisistä, joita


olen kohdannut — nyt kun ne kulkevat sisäisen katseeni sivu.
Hänelläkin, jota rakastan, on harso silmillä, mutta lempeni elää.
Vieläkin saatan tuntea autuaan väreilevän tunteen hänen
syleilyssään; vieläkin saatan kuulla sydämellisen sävyn hänen
äänessään. »Rakastaa sinua enemmän kuin kaikkia muita — se ei
olisi paljon!» sanoi hän kerran.

Kun kesäinen onni oli täysinäinen, hallitsin minä ihanaa


nykyisyyttä. Nyt hallitsee ajatukseni ihanaa menneisyyttä, jota ei
mikään maailmassa voi riistää minulta pois. Koskaan ei muistojeni
onni saata kuolla, rakkauteni ei koskaan muuttua tavaksi.

Ja minä olen elänyt, sillä olen tuntenut, tuntenut niin, että sydän on
ollut haljeta ilosta ja surusta.

Maljani on täysi muistojen primuloita, jotka eivät koskaan kuihdu.

Loppu.
*** END OF THE PROJECT GUTENBERG EBOOK KAKSITOISTA
KUUKAUTTA ***

Updated editions will replace the previous one—the old editions will
be renamed.

Creating the works from print editions not protected by U.S.


copyright law means that no one owns a United States copyright in
these works, so the Foundation (and you!) can copy and distribute it
in the United States without permission and without paying copyright
royalties. Special rules, set forth in the General Terms of Use part of
this license, apply to copying and distributing Project Gutenberg™
electronic works to protect the PROJECT GUTENBERG™ concept
and trademark. Project Gutenberg is a registered trademark, and
may not be used if you charge for an eBook, except by following the
terms of the trademark license, including paying royalties for use of
the Project Gutenberg trademark. If you do not charge anything for
copies of this eBook, complying with the trademark license is very
easy. You may use this eBook for nearly any purpose such as
creation of derivative works, reports, performances and research.
Project Gutenberg eBooks may be modified and printed and given
away—you may do practically ANYTHING in the United States with
eBooks not protected by U.S. copyright law. Redistribution is subject
to the trademark license, especially commercial redistribution.

START: FULL LICENSE


THE FULL PROJECT GUTENBERG LICENSE
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK

To protect the Project Gutenberg™ mission of promoting the free


distribution of electronic works, by using or distributing this work (or
any other work associated in any way with the phrase “Project
Gutenberg”), you agree to comply with all the terms of the Full
Project Gutenberg™ License available with this file or online at
www.gutenberg.org/license.

Section 1. General Terms of Use and


Redistributing Project Gutenberg™
electronic works
1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand, agree
to and accept all the terms of this license and intellectual property
(trademark/copyright) agreement. If you do not agree to abide by all
the terms of this agreement, you must cease using and return or
destroy all copies of Project Gutenberg™ electronic works in your
possession. If you paid a fee for obtaining a copy of or access to a
Project Gutenberg™ electronic work and you do not agree to be
bound by the terms of this agreement, you may obtain a refund from
the person or entity to whom you paid the fee as set forth in
paragraph 1.E.8.

1.B. “Project Gutenberg” is a registered trademark. It may only be


used on or associated in any way with an electronic work by people
who agree to be bound by the terms of this agreement. There are a
few things that you can do with most Project Gutenberg™ electronic
works even without complying with the full terms of this agreement.
See paragraph 1.C below. There are a lot of things you can do with
Project Gutenberg™ electronic works if you follow the terms of this
agreement and help preserve free future access to Project
Gutenberg™ electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the
Foundation” or PGLAF), owns a compilation copyright in the
collection of Project Gutenberg™ electronic works. Nearly all the
individual works in the collection are in the public domain in the
United States. If an individual work is unprotected by copyright law in
the United States and you are located in the United States, we do
not claim a right to prevent you from copying, distributing,
performing, displaying or creating derivative works based on the
work as long as all references to Project Gutenberg are removed. Of
course, we hope that you will support the Project Gutenberg™
mission of promoting free access to electronic works by freely
sharing Project Gutenberg™ works in compliance with the terms of
this agreement for keeping the Project Gutenberg™ name
associated with the work. You can easily comply with the terms of
this agreement by keeping this work in the same format with its
attached full Project Gutenberg™ License when you share it without
charge with others.

1.D. The copyright laws of the place where you are located also
govern what you can do with this work. Copyright laws in most
countries are in a constant state of change. If you are outside the
United States, check the laws of your country in addition to the terms
of this agreement before downloading, copying, displaying,
performing, distributing or creating derivative works based on this
work or any other Project Gutenberg™ work. The Foundation makes
no representations concerning the copyright status of any work in
any country other than the United States.

1.E. Unless you have removed all references to Project Gutenberg:

1.E.1. The following sentence, with active links to, or other


immediate access to, the full Project Gutenberg™ License must
appear prominently whenever any copy of a Project Gutenberg™
work (any work on which the phrase “Project Gutenberg” appears, or
with which the phrase “Project Gutenberg” is associated) is
accessed, displayed, performed, viewed, copied or distributed:
This eBook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and with
almost no restrictions whatsoever. You may copy it, give it away
or re-use it under the terms of the Project Gutenberg License
included with this eBook or online at www.gutenberg.org. If you
are not located in the United States, you will have to check the
laws of the country where you are located before using this
eBook.

1.E.2. If an individual Project Gutenberg™ electronic work is derived


from texts not protected by U.S. copyright law (does not contain a
notice indicating that it is posted with permission of the copyright
holder), the work can be copied and distributed to anyone in the
United States without paying any fees or charges. If you are
redistributing or providing access to a work with the phrase “Project
Gutenberg” associated with or appearing on the work, you must
comply either with the requirements of paragraphs 1.E.1 through
1.E.7 or obtain permission for the use of the work and the Project
Gutenberg™ trademark as set forth in paragraphs 1.E.8 or 1.E.9.

1.E.3. If an individual Project Gutenberg™ electronic work is posted


with the permission of the copyright holder, your use and distribution
must comply with both paragraphs 1.E.1 through 1.E.7 and any
additional terms imposed by the copyright holder. Additional terms
will be linked to the Project Gutenberg™ License for all works posted
with the permission of the copyright holder found at the beginning of
this work.

1.E.4. Do not unlink or detach or remove the full Project


Gutenberg™ License terms from this work, or any files containing a
part of this work or any other work associated with Project
Gutenberg™.

1.E.5. Do not copy, display, perform, distribute or redistribute this


electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1 with
active links or immediate access to the full terms of the Project
Gutenberg™ License.
1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if you
provide access to or distribute copies of a Project Gutenberg™ work
in a format other than “Plain Vanilla ASCII” or other format used in
the official version posted on the official Project Gutenberg™ website
(www.gutenberg.org), you must, at no additional cost, fee or expense
to the user, provide a copy, a means of exporting a copy, or a means
of obtaining a copy upon request, of the work in its original “Plain
Vanilla ASCII” or other form. Any alternate format must include the
full Project Gutenberg™ License as specified in paragraph 1.E.1.

1.E.7. Do not charge a fee for access to, viewing, displaying,


performing, copying or distributing any Project Gutenberg™ works
unless you comply with paragraph 1.E.8 or 1.E.9.

1.E.8. You may charge a reasonable fee for copies of or providing


access to or distributing Project Gutenberg™ electronic works
provided that:

• You pay a royalty fee of 20% of the gross profits you derive from
the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”

• You provide a full refund of any money paid by a user who


notifies you in writing (or by e-mail) within 30 days of receipt that
s/he does not agree to the terms of the full Project Gutenberg™
License. You must require such a user to return or destroy all
copies of the works possessed in a physical medium and
discontinue all use of and all access to other copies of Project
Gutenberg™ works.

• You provide, in accordance with paragraph 1.F.3, a full refund of


any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.

• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.

1.E.9. If you wish to charge a fee or distribute a Project Gutenberg™


electronic work or group of works on different terms than are set
forth in this agreement, you must obtain permission in writing from
the Project Gutenberg Literary Archive Foundation, the manager of
the Project Gutenberg™ trademark. Contact the Foundation as set
forth in Section 3 below.

1.F.

1.F.1. Project Gutenberg volunteers and employees expend


considerable effort to identify, do copyright research on, transcribe
and proofread works not protected by U.S. copyright law in creating
the Project Gutenberg™ collection. Despite these efforts, Project
Gutenberg™ electronic works, and the medium on which they may
be stored, may contain “Defects,” such as, but not limited to,
incomplete, inaccurate or corrupt data, transcription errors, a
copyright or other intellectual property infringement, a defective or
damaged disk or other medium, a computer virus, or computer
codes that damage or cannot be read by your equipment.

1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except


for the “Right of Replacement or Refund” described in paragraph
1.F.3, the Project Gutenberg Literary Archive Foundation, the owner
of the Project Gutenberg™ trademark, and any other party
distributing a Project Gutenberg™ electronic work under this
agreement, disclaim all liability to you for damages, costs and
expenses, including legal fees. YOU AGREE THAT YOU HAVE NO
REMEDIES FOR NEGLIGENCE, STRICT LIABILITY, BREACH OF
WARRANTY OR BREACH OF CONTRACT EXCEPT THOSE
PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE THAT THE
FOUNDATION, THE TRADEMARK OWNER, AND ANY
DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE LIABLE
TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL,
PUNITIVE OR INCIDENTAL DAMAGES EVEN IF YOU GIVE
NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.

1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you


discover a defect in this electronic work within 90 days of receiving it,
you can receive a refund of the money (if any) you paid for it by
sending a written explanation to the person you received the work
from. If you received the work on a physical medium, you must
return the medium with your written explanation. The person or entity
that provided you with the defective work may elect to provide a
replacement copy in lieu of a refund. If you received the work
electronically, the person or entity providing it to you may choose to
give you a second opportunity to receive the work electronically in
lieu of a refund. If the second copy is also defective, you may
demand a refund in writing without further opportunities to fix the
problem.

1.F.4. Except for the limited right of replacement or refund set forth in
paragraph 1.F.3, this work is provided to you ‘AS-IS’, WITH NO
OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.

1.F.5. Some states do not allow disclaimers of certain implied


warranties or the exclusion or limitation of certain types of damages.
If any disclaimer or limitation set forth in this agreement violates the
law of the state applicable to this agreement, the agreement shall be
interpreted to make the maximum disclaimer or limitation permitted
by the applicable state law. The invalidity or unenforceability of any
provision of this agreement shall not void the remaining provisions.

1.F.6. INDEMNITY - You agree to indemnify and hold the


Foundation, the trademark owner, any agent or employee of the
Foundation, anyone providing copies of Project Gutenberg™
electronic works in accordance with this agreement, and any
volunteers associated with the production, promotion and distribution
of Project Gutenberg™ electronic works, harmless from all liability,
costs and expenses, including legal fees, that arise directly or
indirectly from any of the following which you do or cause to occur:
(a) distribution of this or any Project Gutenberg™ work, (b)
alteration, modification, or additions or deletions to any Project
Gutenberg™ work, and (c) any Defect you cause.

Section 2. Information about the Mission of


Project Gutenberg™
Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new computers.
It exists because of the efforts of hundreds of volunteers and
donations from people in all walks of life.

Volunteers and financial support to provide volunteers with the


assistance they need are critical to reaching Project Gutenberg™’s
goals and ensuring that the Project Gutenberg™ collection will
remain freely available for generations to come. In 2001, the Project
Gutenberg Literary Archive Foundation was created to provide a
secure and permanent future for Project Gutenberg™ and future
generations. To learn more about the Project Gutenberg Literary
Archive Foundation and how your efforts and donations can help,
see Sections 3 and 4 and the Foundation information page at
www.gutenberg.org.
Section 3. Information about the Project
Gutenberg Literary Archive Foundation
The Project Gutenberg Literary Archive Foundation is a non-profit
501(c)(3) educational corporation organized under the laws of the
state of Mississippi and granted tax exempt status by the Internal
Revenue Service. The Foundation’s EIN or federal tax identification
number is 64-6221541. Contributions to the Project Gutenberg
Literary Archive Foundation are tax deductible to the full extent
permitted by U.S. federal laws and your state’s laws.

The Foundation’s business office is located at 809 North 1500 West,


Salt Lake City, UT 84116, (801) 596-1887. Email contact links and up
to date contact information can be found at the Foundation’s website
and official page at www.gutenberg.org/contact

Section 4. Information about Donations to


the Project Gutenberg Literary Archive
Foundation
Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission of
increasing the number of public domain and licensed works that can
be freely distributed in machine-readable form accessible by the
widest array of equipment including outdated equipment. Many small
donations ($1 to $5,000) are particularly important to maintaining tax
exempt status with the IRS.

The Foundation is committed to complying with the laws regulating


charities and charitable donations in all 50 states of the United
States. Compliance requirements are not uniform and it takes a
considerable effort, much paperwork and many fees to meet and
keep up with these requirements. We do not solicit donations in
locations where we have not received written confirmation of
compliance. To SEND DONATIONS or determine the status of
compliance for any particular state visit www.gutenberg.org/donate.

While we cannot and do not solicit contributions from states where


we have not met the solicitation requirements, we know of no
prohibition against accepting unsolicited donations from donors in
such states who approach us with offers to donate.

International donations are gratefully accepted, but we cannot make


any statements concerning tax treatment of donations received from
outside the United States. U.S. laws alone swamp our small staff.

Please check the Project Gutenberg web pages for current donation
methods and addresses. Donations are accepted in a number of
other ways including checks, online payments and credit card
donations. To donate, please visit: www.gutenberg.org/donate.

Section 5. General Information About Project


Gutenberg™ electronic works
Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could be
freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose network of
volunteer support.

Project Gutenberg™ eBooks are often created from several printed


editions, all of which are confirmed as not protected by copyright in
the U.S. unless a copyright notice is included. Thus, we do not
necessarily keep eBooks in compliance with any particular paper
edition.

Most people start at our website which has the main PG search
facility: www.gutenberg.org.

This website includes information about Project Gutenberg™,


including how to make donations to the Project Gutenberg Literary
Archive Foundation, how to help produce our new eBooks, and how
to subscribe to our email newsletter to hear about new eBooks.

You might also like