A Thesis

A dissertation submitted in fulfilment of the requirements for the degree of
Doctor of Philosophy

By
sk

Australia

June 2023

Introduction
Methodology
Result Analysis
Conclusion
References

References

[1] C. Yao, T. Tillo, Y. Zhao, J. Xiao, H. Bai, and C. Lin, “Depth map driven hole filling
algorithm exploiting temporal correlation information,” IEEE Trans. Broadcast., vol. 60,
no. 2, pp. 394–404, 2014, doi: 10.1109/TBC.2014.2321671.
[2] C. Yao, Y. Zhao, and H. Bai, “View synthesis based on background update with Gaussian
mixture model,” in Pacific-Rim Conference on Multimedia, 2012, pp. 651–660.
[3] G. Luo, Y. Zhu, and Z. Li, “A Hole Filling Approach based on Background
Reconstruction for View Synthesis in 3D Video,” in IEEE International Conference on
Computer Vision and Pattern Recognition, 2016, pp. 1781–1789.
[4] M. K. Abadi, R. Subramanian, S. M. Kia, P. Avesani, I. Patras, and N. Sebe, “DECAF:
MEG-Based Multimodal Database for Decoding Affective Physiological Responses,”
IEEE Trans. Affect. Comput., vol. 6, no. 3, pp. 209–222, 2015, doi:
10.1109/TAFFC.2015.2392932.
[5] J. Kim, “Emotion Recognition Using Speech and Physiological Changes,” Robust Speech
Recognit. Underst., pp. 265–280, Jun. 2007, doi: 10.5772/4754.
[6] Z. Tong, X. Chen, Z. He, K. Tong, Z. Fang, and X. Wang, “Emotion Recognition Based
on Photoplethysmogram and Electroencephalogram,” 2018 IEEE 42nd Annu. Comput.
Softw. Appl. Conf., pp. 402–407, 2018, doi: 10.1109/COMPSAC.2018.10266.
[7] G. Chanel, C. Rebetez, M. Bétrancourt, and T. Pun, “Emotion assessment from
physiological signals for adaptation of game difficulty,” IEEE Trans. Syst., Man, Cybern.
A, Syst. Humans, vol. 41, no. 6, pp. 1052–1063, 2011, doi:
10.1109/TSMCA.2011.2116000.
[8] S. Koelstra et al., “DEAP: A database for emotion analysis; Using physiological signals,”
IEEE Trans. Affect. Comput., vol. 3, no. 1, pp. 18–31, 2012, doi: 10.1109/T-
AFFC.2011.15.
[9] M. Soleymani, J. Lichtenauer, T. Pun, and M. Pantic, “A multimodal database for affect
recognition and implicit tagging,” IEEE Trans. Affect. Comput., vol. 3, no. 1, pp. 42–55,
2012, doi: 10.1109/T-AFFC.2011.25.
[10] J. Chen, B. Hu, L. Xu, P. Moore, and Y. Su, “Feature-level fusion of multimodal
physiological signals for emotion recognition,” Proc. - 2015 IEEE Int. Conf. Bioinforma.
Biomed. BIBM 2015, pp. 395–399, 2015, doi: 10.1109/BIBM.2015.7359713.
[11] R. W. Picard, E. Vyzas, and J. Healey, “Toward machine emotional intelligence: Analysis
of affective physiological state,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 10,
pp. 1175–1191, 2001, doi: 10.1109/34.954607.
[12] J. Scheirer, R. Fernandez, J. Klein, and R. W. Picard, “Frustrating the user on purpose: A
step toward building an affective computer,” Interact. Comput., vol. 14, no. 2, pp. 93–118,
2002, doi: 10.1016/S0953-5438(01)00059-5.
[13] C. Collet, E. Vernet-Maury, G. Delhomme, and A. Dittmar, “Autonomic nervous system
response patterns specificity to basic emotions,” J. Auton. Nerv. Syst., vol. 62, no. 1–2, pp.
45–57, 1997, doi: 10.1016/S0165-1838(96)00108-7.
[14] J. R. Balbin et al., “Development of scientific system for assessment of post-traumatic
stress disorder patients using physiological sensors and feature extraction for emotional
state analysis,” 2017 IEEE 9th Int. Conf. Humanoid, Nanotechnology, Inf. Technol.
Commun. Control. Environ. Manag., pp. 1–6, 2017, doi:
10.1109/HNICEM.2017.8269424.
[15] P. K. Podder and M. Paul, “Efficient Video Coding and Quality Assessment by Exploiting
Human Visual Features,” PhD thesis, 2017.
[16] F. Zou, D. Tian, A. Vetro, H. Sun, O. C. Au, and S. Shimizu, “View synthesis prediction
in the 3-D video coding extensions of AVC and HEVC,” IEEE Trans. Circuits Syst. Video
Technol., vol. 24, no. 10, pp. 1696–1708, 2014, doi: 10.1109/TCSVT.2014.2313891.
[17] Y. H. Cho, H. Y. Lee, and D. S. Park, “Multi-view synthesis based on single view
reference layer,” Lect. Notes Comput. Sci., vol. 7727, no. PART 4, pp. 565–575, 2013, doi:
10.1007/978-3-642-37447-0_43.
[18] J. Cheng, C. Leng, J. Wu, H. Cui, and H. Lu, “Fast and accurate image matching with
cascade hashing for 3D reconstruction,” in Proceedings of the IEEE Computer Society
Conference on Computer Vision and Pattern Recognition, 2014, pp. 1–8, doi:
10.1109/CVPR.2014.8.
[19] S. Hu, S. Kwong, Y. Zhang, and C.-C. J. Kuo, “Rate-Distortion Optimized Rate Control
for Depth Map based 3D Video Coding,” IEEE Trans. Image Process., vol. 22, no. 2, pp.
585–594, 2013, doi: 10.1109/TIP.2012.2219549.
[20] S. Lu, T. Mu, and S. Zhang, “A survey on multiview video synthesis and editing,”
Tsinghua Sci. Technol., vol. 21, no. 6, pp. 678–695, 2016, doi:
10.1109/TST.2016.7787010.
[21] P. Ndjiki-Nya et al., “Depth Image-Based Rendering With Advanced Texture Synthesis
for 3-D Video,” IEEE Trans. Multimed., vol. 13, no. 3, pp. 453–465, 2011, doi:
10.1109/TMM.2011.2128862.
[22] B. Pang, L. Lee, and S. Vaithyanathan, “Thumbs up?: Sentiment Classification using
Machine Learning Techniques,” in Proceedings of the Conference on Empirical Methods
in Natural Language Processing, 2002, pp. 79–86, doi: 10.3115/1118693.1118704.
[23] J. Blitzer, M. Dredze, and F. Pereira, “Biographies, bollywood, boom-boxes and blenders:
Domain adaptation for sentiment classification,” Annu. Meet. Assoc. Comput. Linguist.,
vol. 45, no. 1, p. 440, 2007.
[24] P. Melville, W. Gryc, and R. D. Lawrence, “Sentiment analysis of blogs by combining
lexical knowledge with text classification,” in Proceedings of the 15th ACM SIGKDD
international conference on Knowledge discovery and data mining - KDD ’09, 2009, p.
1275, doi: 10.1145/1557019.1557156.
[25] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts, “Learning Word
Vectors for Sentiment Analysis,” in Proceedings of the 49th Annual Meeting of the
Association for Computational Linguistics: Human Language Technologies, 2011, pp.
142–150.
[26] A. Go, R. Bhayani, and L. Huang, “Twitter Sentiment Classification using Distant
Supervision,” Processing, vol. 150, no. 12, pp. 1–6, 2009.
[27] M. Speriosu, N. Sudan, S. Upadhyay, and J. Baldridge, “Twitter Polarity Classification
with Label Propagation over Lexical Links and the Follower Graph,” Proc. Conf. Empir.
Methods Nat. Lang. Process., pp. 53–56, 2011.
[28] D. A. Shamma, L. Kennedy, and E. F. Churchill, “Tweet the debates,” in Proceedings of
the first SIGMM workshop on Social media - WSM ’09, 2009, p. 3, doi:
10.1145/1631144.1631148.
[29] E. S. Dan-Glauser and K. R. Scherer, “The Geneva affective picture database (GAPED): a
new 730-picture database focusing on valence and normative significance,” Behav. Res.
Methods, vol. 43, no. 2, pp. 468–477, 2011, doi: 10.3758/s13428-011-0064-1.
[30] J. Machajdik and A. Hanbury, “Affective image classification using features inspired by
psychology and art theory,” in Proceedings of the international conference on Multimedia
- MM ’10, 2010, p. 83, doi: 10.1145/1873951.1873965.
[31] M. Biehl et al., “Matsumoto and Ekman’s Japanese and Caucasian facial expressions of
emotion (JACFEE): Reliability data and cross-national differences,” J. Nonverbal Behav.,
vol. 21, no. 1, pp. 3–21, 1997, doi: 10.1023/a:1024902500935.
[32] R. Banse and K. R. Scherer, “Acoustic Profiles in Vocal Emotion Expression,” J. Pers.
Soc. Psychol., vol. 70, no. 3, pp. 614–636, 1996, doi: 10.1037/0022-3514.70.3.614.
[33] I. S. Engberg, A. V. Hansen, O. Andersen, and P. Dalsgaard, “Design, Recording and
Verification of a Danish Emotional Speech Database,” Proc. Eurospeech 1997, vol. 4, pp.
1695–1698, 1997.
[34] A. Paeschke and W. F. Sendlmeier, “Prosodic characteristics of emotional speech:
Measurements of fundamental frequency movements,” Speech Emot. ISCA Tutor. Res.
Work., pp. 75–80, 2000, [Online]. Available:
http://www.isca-speech.org/archive_open/speech_emotion/spem_075.html.
[35] P. Greasley, C. Sherrard, and M. Waterman, “Emotion in language and speech:
Methodological issues in naturalistic approaches,” Lang. Speech, vol. 43, no. 4, pp. 355–
375, 2000, doi: 10.1177/00238309000430040201.
[36] M. Pantic, M. Valstar, R. Rademaker, and L. Maat, “Web-based database for facial
expression analysis,” in IEEE International Conference on Multimedia and Expo, ICME
2005, 2005, vol. 2005, pp. 317–321, doi: 10.1109/ICME.2005.1521424.
[37] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, “The Extended
Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified
expression,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern
Recognition - Workshops, Jun. 2010, pp. 94–101, doi:
10.1109/CVPRW.2010.5543262.
[38] M. Frank, “RUFACS1 (Rochester/UCSD Facial Action Coding System Database 1),” 2004.
http://mplab.ucsd.edu/grants/project1/research/rufacs1-dataset.html (accessed Jan. 05,
2018).
[39] M. Valstar et al., “AVEC 2013: The Continuous Audio/Visual Emotion and Depression
Recognition Challenge,” in Proceedings of the 3rd ACM International Workshop on
Audio/Visual Emotion Challenge (AVEC ’13), 2013, pp. 3–10, doi:
10.1145/2512530.2512533.
[40] M. Valstar et al., “AVEC 2014: 3D Dimensional Affect and Depression Recognition
Challenge,” Proc. 4th ACM Int. Work. Audio/Visual Emot. Chall. (AVEC ’14), pp. 3–10,
2014, doi: 10.1145/2661806.2661807.
[41] E. Douglas-Cowie, R. Cowie, and M. Schröder, “A New Emotion Database:
Considerations, Sources and Scope,” in Speech and Emotion: Proceedings of the ISCA
workshop, 2000, pp. 39–44.
[42] A. Battocchi, F. Pianesi, and D. Goren-Bar, “DaFEx: Database of Facial Expressions,”
pp. 303–306, 2005.
[43] K. R. Scherer and G. Ceschi, “Lost luggage: A field study of emotion antecedent
appraisal,” Motiv. Emot., vol. 21, no. 3, pp. 211–235, 1997, doi:
10.1023/A:1024498629430.
[44] E. Douglas-Cowie et al., “The HUMAINE Database: Addressing the Collection and
Annotation of Naturalistic and Induced Emotional Data,” in Affective Computing and
Intelligent Interaction, 2007, pp. 488–500.
[45] M. Wollmer et al., “YouTube movie reviews: Sentiment analysis in an audio-visual
context,” IEEE Intell. Syst., vol. 28, no. 3, pp. 46–53, 2013, doi: 10.1109/MIS.2013.34.
[46] V. Perez-Rosas, R. Mihalcea, and L. Morency, “Utterance-Level Multimodal Sentiment
Analysis,” in Proc. 51st Annu. Meet. Assoc. Comput. Linguist. (Volume 1: Long Papers),
2013, pp. 973–982.
[47] L.-P. Morency, R. Mihalcea, and P. Doshi, “Towards multimodal sentiment analysis,” in
Proceedings of the 13th international conference on multimodal interfaces - ICMI ’11,
2011, p. 169, doi: 10.1145/2070481.2070509.
[48] J. A. Healey and R. W. Picard, “Detecting stress during real-world driving tasks using
physiological sensors,” IEEE Trans. Intell. Transp. Syst., vol. 6, no. 2, pp. 156–166, 2005,
doi: 10.1109/TITS.2005.848368.
[49] D. M. M. Rahaman and M. Paul, “Free View-Point Video Synthesis Using Gaussian
Mixture Modelling,” in IEEE Conference on Image and Vision Computing New Zealand,
2015, pp. 1–6.
[50] D. M. M. Rahaman and M. Paul, “Hole-filling for single-view plus-depth based rendering
with temporal texture synthesis,” in IEEE International Conference on Multimedia and
Expo Workshop, ICMEW, 2016, pp. 1–6, doi: 10.1109/ICMEW.2016.7574740.
[51] D. M. M. Rahaman and M. Paul, “View Synthesised Prediction with Temporal Texture
Synthesis for Multi-View Video,” in International Conference on Digital Image
Computing: Techniques and Applications (DICTA), 2016, pp. 1–8, doi:
10.1109/DICTA.2016.7797096.
[52] P. K. Podder, M. Paul, D. M. M. Rahaman, and M. Murshed, “Improved depth coding for
HEVC focusing on depth edge approximation,” Signal Process. Image Commun., vol. 55,
pp. 80–92, 2017, doi: 10.1016/j.image.2017.03.017.