
Sample 1

The performance models used in existing compilers are limited in their ability to identify profitable parallelism, and effective model-based heuristics and profitability estimates are needed to distinguish beneficial optimizations. The automatic framework developed in [1] therefore combines empirical search over the set of valid code-motion possibilities with model-based mechanisms that apply tiling, vectorization and parallelization to the transformed program; a sketch of loop tiling appears below.
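
The following is a minimal sketch of one such model-based transformation, loop tiling, applied to a matrix-multiplication kernel. Python is used here purely for readability (a compiler would restructure C or Fortran loop nests), and TILE is a hypothetical fixed tile size, whereas a framework such as [1] would derive it from a cost model.

```python
# Illustrative loop tiling for matrix multiplication. TILE is a
# hypothetical tile size chosen for the sketch.
TILE = 32

def matmul_tiled(A, B, C, n):
    # Outer loops walk over TILE x TILE blocks so that each block of A, B
    # and C stays in cache while it is being reused.
    for ii in range(0, n, TILE):
        for jj in range(0, n, TILE):
            for kk in range(0, n, TILE):
                # Inner loops perform the same iterations as the naive
                # triple loop, only reordered for locality.
                for i in range(ii, min(ii + TILE, n)):
                    for j in range(jj, min(jj + TILE, n)):
                        for k in range(kk, min(kk + TILE, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C
```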
There are also approaches to the automatic parallelization of programs, written in Java, that use pointer-based dynamic data structures. One such approach exploits parallelism among methods by creating an asynchronous thread of execution for each method invocation in the program [2], as sketched below.
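
The following is a minimal sketch of that execution model, under the assumption that each invocation is submitted to a worker thread and the caller blocks only when it needs the result. The cited work does this for Java; Python and the names async_call and length are used here purely for illustration.

```python
# Sketch of "one asynchronous thread per method invocation": the caller
# receives a future immediately and synchronizes only when the result
# is actually used.
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor()

def async_call(method, *args):
    # Launch the invocation on its own worker thread; return a handle.
    return pool.submit(method, *args)

def length(node):
    # Walks a pointer-based (linked) structure, as in the programs
    # targeted by [2].
    return 0 if node is None else 1 + length(node["next"])

if __name__ == "__main__":
    lst = {"next": {"next": None}}      # a two-node linked list
    fut = async_call(length, lst)       # runs concurrently with the caller
    print(fut.result())                 # block only here; prints 2
```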
A comparative study of the prevailing tools showed that PLUTO is more efficient than the others. Although CETUS performed well in dependence analysis and in detecting parallel loops, it produced erroneous results when detecting and parallelizing nested loops. GASPARD illustrates the limits of model-to-source parallelization compared with source-to-source parallelization, and is therefore not flexible or applicable in all scenarios, although it gave tolerable results for the matrix-multiplication (MM) workload. One common limitation of these auto-parallelization tools is that they generate parallel OpenMP code, which depends on the OpenMP API, the compiler and OS run-time support to realize task partitioning. However, such support is rarely available in an embedded context, where an OS is not always present [3]. For future work, an automatic accelerator-generation flow that integrates PLUTO and adapts an application targeting a general-purpose processor to an embedded environment therefore seems much more favorable [4].

Parallel computing has developed steadily to achieve and extend the benefits of high-performance computing. On the hardware side, various multiprocessor designs have been introduced to improve parallel computing. The future of parallel computing is hard to predict, since many broad research areas remain active.
Because machine architectures are complex, efficient programming has become difficult, and some decisions are difficult or impossible to make at compile time. For example, determining data dependences exactly requires knowing the values of certain variables, and deciding which of two nested parallel loops is better moved to the outermost position usually requires the number of iterations of each loop [5]. The sketch below illustrates both situations.
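
A minimal sketch of both compile-time unknowns, with hypothetical loop bounds n, m and offset k; neither question can be resolved until these values are known at run time.

```python
# Two compile-time unknowns from [5], in illustrative form.

def shift(a, n, k):
    # Whether iterations are independent depends on the VALUE of k
    # (assume 0 <= k < n here): k == 0 means no cross-iteration
    # dependence, while k > 0 makes iteration i read an element that
    # another iteration writes, so the loop cannot be safely
    # parallelized without knowing k.
    for i in range(n - k):
        a[i] = a[i + k] + 1.0

def scale(a, n, m):
    # Which loop belongs outermost (e.g. to carry the parallel work)
    # depends on the trip counts: if n >> m the i-loop should stay
    # outer; if m >> n, interchanging the loops exposes more outer
    # iterations. n and m are usually known only at run time.
    for i in range(n):
        for j in range(m):
            a[i][j] *= 2.0
```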
The effectiveness of traditional compilers is documented in papers describing many classical techniques, such as common-subexpression elimination, code motion and dead-code elimination [6]; a small before/after sketch of these transformations follows.
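
The following is a minimal before/after sketch of the three techniques, using illustrative code rather than an example drawn from [6].

```python
# Before: redundant, loop-invariant and dead computations.
def before(xs, a, b):
    total = 0.0
    for x in xs:
        t = a * b + x      # a * b is loop-invariant (code-motion target)
        u = a * b          # same subexpression again (CSE target)
        dead = t - u       # result never used afterwards (dead code)
        total += t + u
    return total

# After: a * b hoisted out of the loop (code motion) and computed once
# (common-subexpression elimination); the unused assignment has been
# removed (dead-code elimination). Behaviour is unchanged.
def after(xs, a, b):
    ab = a * b
    total = 0.0
    for x in xs:
        total += (ab + x) + ab
    return total
```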

References
[1] L.-N. Pouchet, U. Bondhugula, C. Bastoul, A. Cohen, J. Ramanujam, and P. Sadayappan, "Combined Iterative and Model-driven Optimization in an Automatic Parallelization Framework," in Proc. Conference on Supercomputing (SC'10), New Orleans, LA, USA, 2010.
[2] B. Chan, "Run-Time Support for the Automatic Parallelization of Java Programs," M.S. thesis, Dept. of Electrical and Computer Engineering, University of Toronto, 2002.
[3] G. Tian and O. Hammami, "Performance measurements of synchronization mechanisms on 16PE NOC based multi-core with dedicated synchronization and data NOC," in Proc. International Conference on Electronics, Circuits, and Systems (ICECS'09), 2009, pp. 988–991.
[4] E. Kallel, Y. Aoudni, and M. Abid, "'OpenMP' automatic parallelization tools: an empirical comparative evaluation," IJCSI International Journal of Computer Science Issues, 2013.
[5] R. Eigenmann and D. Padua, "On the Automatic Program Parallelization," 1993.
[6] N. Jones and S. Muchnick, "Flow analysis and optimization of LISP-like structures," in Program Flow Analysis: Theory and Applications, ch. 4, pp. 102–131. Englewood Cliffs, NJ: Prentice-Hall, 1981.
Sample 2
There are different approaches to computational emotion detection. The following are some of the main techniques, all of which rely heavily on Natural Language Processing to detect emotions in text.
1. Lexicon-based approaches use one or more lexical resources to detect emotions. Keyword-based approaches, ontology-based approaches and statistical approaches fall under this category [2]. Of all lexicon-based methods, lexical chains are the most widely used for detecting emotions in text. "A lexical chain represents a string of concepts that are semantically bonded throughout the text". Using the lexical-chain approach, emotions in text can be detected more robustly, overcoming the limitations of the other lexicon-based methods [3].
2. Machine-learning approaches draw on the scientific discipline concerned with the construction and study of algorithms that can learn from data. Models grounded in machine-learning theory, such as support vector machines and conditional random fields, are used. These approaches can be divided into two categories:
a. Supervised learning approaches rely on labelled training data. Most of the existing techniques for emotion detection in text use supervised learning, in which the model requires a large set of annotated data. Although supervised methods produce successful results, a large annotated data set is not always available [4] (a small supervised sketch follows this list).
b. Unsupervised learning approaches try to find hidden structure in unlabelled data in order to build models for emotion classification [2]. Annotated data are not required in unsupervised approaches; however, most unsupervised methods use manually designed dictionaries that contain emotion keywords. Research has been done to implement an unsupervised method that depends on neither dictionary data nor annotated data; according to that research, such methods can give more "accurate results than recent unsupervised approaches" [4].

3. The keyword-spotting technique is based on predefined keywords, categorized as disgusted, sad, happy, angry, fearful, surprised, and so on. In a given string, occurrences of keywords are detected as substrings and a conclusion is then derived. It simply follows five sequential steps: tokenization, emotion-word detection, intensity analysis, negation check and emotion derivation [5] (a sketch of these steps follows this list).
4. The lexical-affinity method extends the keyword-spotting technique. Rather than picking out emotional keywords, it assigns words a probabilistic affinity for a particular emotion. Its disadvantages are that the assigned probabilities are biased toward the genre of the specific corpus, and that it misses emotional content residing deeper than the word level on which the technique operates [5].
5. Hybrid methods combine the keyword-spotting technique with learning-based methods [5].
6. Annotation in emotion detection means comparing the results of emotion detection with human-labelled text. Annotation can be applied to words, sentences, tones and phrases, or to the whole document, and the text is labelled with a positive, negative or neutral emotion. Some studies annotate the text by labelling the intensity of the emotion. Annotation is one way of building the knowledge base required to detect emotions in text [6]. Databases of text annotated with emotion information are quite common, whereas databases of text annotated with tone information are very rare, because it is very difficult to know which linguistic features to use as cues. "Tone could also be retrievable once the relevant linguistic cues are identified" [7].
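
As a concrete illustration of the supervised approaches in item 2a, the following is a minimal sketch assuming scikit-learn is available; the four training sentences and their labels are hypothetical stand-ins for the large annotated corpora such methods actually require.

```python
# Bag-of-words Naive Bayes emotion classifier trained on a tiny,
# hypothetical labelled data set (real systems need far more data and
# often stronger models, e.g. SVMs or conditional random fields).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["i am so happy today", "what wonderful news",
         "i feel sad and alone", "this loss makes me cry"]
labels = ["joy", "joy", "sadness", "sadness"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)       # word-count features
clf = MultinomialNB().fit(X, labels)      # learn from the labelled data

test = vectorizer.transform(["such happy wonderful news"])
print(clf.predict(test))                  # -> ['joy']
```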
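And the following is a minimal sketch of the five keyword-spotting steps from item 3; the lexicon, intensifier and negation lists are tiny hypothetical placeholders for real lexical resources.

```python
# Keyword spotting in five steps: tokenize, detect emotion words,
# analyse intensity, check negation, derive the emotion.
import re

EMOTION_WORDS = {"happy": "joy", "glad": "joy", "sad": "sadness",
                 "angry": "anger", "afraid": "fear"}
INTENSIFIERS = {"very": 2.0, "extremely": 3.0}
NEGATIONS = {"not", "never", "no"}

def detect_emotion(text):
    tokens = re.findall(r"[a-z']+", text.lower())   # 1. tokenization
    scores = {}
    for i, tok in enumerate(tokens):
        emotion = EMOTION_WORDS.get(tok)            # 2. emotion word detection
        if emotion is None:
            continue
        weight = INTENSIFIERS.get(tokens[i - 1], 1.0) if i > 0 else 1.0  # 3. intensity
        if NEGATIONS & set(tokens[max(0, i - 3):i]):  # 4. negation check
            continue                                  # negated keywords dropped
        scores[emotion] = scores.get(emotion, 0.0) + weight
    # 5. derive emotion: the strongest accumulated score wins
    return max(scores, key=scores.get) if scores else "neutral"

print(detect_emotion("I am not sad, I am very happy"))  # -> joy
```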

References

[2] L. Canales and P. Martinez-Barco, "Emotion Detection from Text: A Survey," presented at the 11th International Workshop on Natural Language Processing and Cognitive Science – NAACL, 2014.
[3] M. N. Kumar and R. Suresh, "Emotion Detection using Lexical Chains," Department of Computer Science and Engineering, Saveetha Engineering College, vol. 57, no. 4, Nov. 2012.
[4] A. Agrawal and A. An, "Unsupervised Emotion Detection from Text using Semantic and Syntactic Relations," Department of Computer Science and Engineering, York University, Toronto, Canada. [Online]. Available: http://www.cse.yorku.ca/~aan/research/paper/Emo_WI10.pdf
[5] S. N. Shivhare and S. Khethawat, "Emotion detection from text," arXiv preprint arXiv:1205.4944, 2012.
[6] S. Aman and S. Szpakowicz, "Identifying Expressions of Emotion in Text," in Proceedings of the 10th International Conference on Text, Speech and Dialogue, Berlin, Heidelberg, 2007, pp. 196–205.
[7] L. Pearl and M. Steyvers, "'C'mon – You Should Read This': Automatic Identification of Tone from Language Text," Department of Cognitive Sciences, University of California, Irvine.
Sample 3
The paper-based health records currently in use may generate an extensive paper trail. There is consequently great interest in moving from paper-based health records to electronic health records (EHRs). These efforts are principally being made by independent organizations. However, recent proposals suggest that integrated health records provide many benefits [1], including a reduction in costs, improved quality of care, the promotion of evidence-based medicine, and better record keeping and mobility. In order to achieve these benefits, EHR systems need to satisfy certain requirements in terms of data completeness, resilience to failure, high availability, and the consistency of security policies [2].
Four great obstacles limit the deployment of EHR systems: funding, technology, attitude and organizational aspects [3]. Many governments rely on integrated EHRs because of the benefits expected from them. One example of this interest is that of the US government. In 2004, the US President decided that the majority of Americans would be connected to EHRs by 2014 [4]. In February 2009, the US President signed the American Recovery and Reinvestment Act, which included an investment of 19 billion dollars in the digitization of medical records in the USA [5]. The Member States of the European Union also intend to make their health systems compatible before 2015, as the Vice-President of the European Commission announced at the High Level eHealth Conference 2010. The EU's objective is to share patients' EHR data under the principle of "Free Movement" and to obtain quality, efficient health care [6].
However, there has been very little policy development addressing the numerous significant privacy issues raised by the shift from a largely disconnected, paper-based health record system to one that is integrated and electronic [7]. Moreover, advances in Information and Communications Technologies have led to a situation in which patients' health data confront new security and privacy threats [8]. The three fundamental security goals are confidentiality, integrity and availability (CIA) [9]. The protection and security of personal information is critical in the health sector, and it is thus necessary to ensure the CIA of personal health information. According to the ISO EN13606 standard [10], confidentiality refers to the "process that ensures that information is accessible only to those authorized to have access to it". Integrity refers to the duty to ensure that information is accurate and is not modified in an unauthorized fashion. The integrity of health information must therefore be protected to ensure patient safety, and one important component of this protection is ensuring that the information's entire life cycle is fully auditable. Availability refers to the "property of being accessible and useable upon demand by an authorized entity". The availability of health information is likewise critical to effective healthcare delivery: health informatics systems must remain operational in the face of natural disasters, system failures and denial-of-service attacks. Security also involves accountability, which refers to people's right to criticize or to ask why something has occurred.

References
[1] Greenhalgh T, Hinder S, Stramer K, Bratan T, Russell J. Adoption, non-adoption, and abandonment of a personal electronic health record: case study of HealthSpace. BMJ 2010;341:c5814.
[2] Allard T, Anciaux N, Bouganim L, Guo Y, Folgoc LL, Nguyen B, et al. Secure personal data servers: a vision paper. PVLDB 2010;3(1–2):25–35.
[3] Sainz-Abajo B, La-Torre-Díez I, Bermejo-González P, García-Salcines E, Díaz Pernas J, Díez-Higuera JF, et al. Evolución, beneficios y obstáculos en la implantación del Historial Clínico Electrónico en el sistema sanitario [Evolution, benefits and obstacles in the implementation of the Electronic Health Record in the health system]. RevistaeSalud.com 2010;6(22):1–14.
[4] Hesse BW, Hansen D, Finholt T, Munson S, Kellogg W, Thomas JC. Social participation in health 2.0. Computer 2010;43(11):45–52.
[5] Benaloh J, Chase M, Horvitz E, Lauter K. Patient controlled encryption: ensuring privacy of electronic medical records. In: Proc ACM workshop on cloud computing security; 2009. p. 103–14.
[6] Los países europeos compartirán las historias clínicas de sus pacientes antes de 2015 [European countries will share their patients' health records before 2015]. <http://www.europapress.es/> [accessed 07.12.12].
[7] Rothstein MA. Health privacy in the electronic age. J Leg Med 2007;28(4):487–501.
[8] Farzandipour M, Sadoughi F, Ahmadi M, Karimi I. Security requirements and solutions in electronic health records: lessons learned from a comparative study. J Med Syst 2010;34(4):629–42.
[9] Haas S, Wohlgemuth S, Echizen I, Sonehara N, Müller N. Aspects of privacy for electronic health records. Int J Med Inform 2011;80(2):e26–31.
[10] ISO/EN 13606. <http://www.iso.org/iso/home.htm/> [accessed 07.12.12].
