
Department of Accounting and Information Systems

Course Title: Applied Research Methodology


Course Code: 506

Term Paper On
“Data, Variables, Models, and Empirical Research in Accounting”

SUBMITTED TO:

Tarik Hossain
Associate Professor
Department of Accounting & Information Systems,
Faculty of Business Studies,
Comilla University

Submitted By:
Hossain Ahmed
Group Leader,
On Behalf of the Group-(E)
Department of Accounting and Information Systems
Comilla University

Date of Submission: May 7, 2021

May 7, 2021

Tarik Hossain

Associate Professor

Department of Accounting & Information Systems,

Faculty of Business Studies, Comilla University

Subject: Submission of the term paper on “Data, Variables, Models, and Empirical Research in Accounting”

Sir, with due respect, we are submitting this term paper on “Data, Variables, Models, and Empirical Research in Accounting” as a requirement of the course “Applied Research Methodology,” as you asked us to prepare it.

We are very grateful to you for sharing your thorough knowledge of the subject matter, which helped us bring the term paper to a successful completion. We prepared it as a group effort and hope that it will serve us well in the future. If you find any errors or unclear points in the paper, please let us know so that we can correct our mistakes and learn the material properly.

Sincerely yours,

Hossain Ahmed

Group leader,

On behalf of the Group- E

Acknowledgement

By the grace of Allah, we have completed our term paper on “Data, Variables, Models, and Empirical Research in Accounting”. First of all, a special thanks goes to our honorable course teacher. We have tried our best to make the term paper comprehensive and reliable within the given time period. Nevertheless, some unintentional mistakes may have occurred; we ask that you view them with sympathy and pardon. Your suggestions and comments for improving this term paper will be received gratefully, and if you have any queries about the study, please let us know.

Yours faithfully

Hossain Ahmed

Group leader,

On behalf of the Group- E.

Group Members
Group- E

Name ID No.

Ahsan Habib Joy 11606002

Md Monjurul Islam Mithun 11606011

Md. Ishrafil 11606014

Robin Miha 11606030

Junaied Ahmed 11606033

Md. Arif Chowdhury Hridoy 11606035

Mohammed Ujjal Khan 11606037

Hossain Ahmed (G.L.) 11606042

Bahar Uddin 11506053

Data collection

Data collection is the systematic approach to gathering and measuring information from a variety of
sources to get a complete and accurate picture of an area of interest. Data collection enables a person or
organization to answer relevant questions, evaluate outcomes and make predictions about future
probabilities and trends.

To ensure that high quality data is recorded in a systematic way, here are some best practices:

• Record all relevant information as and when you obtain data. For example, note down whether
or how lab equipment is recalibrated during an experimental study.
• Double-check manual data entry for errors.
• If you collect quantitative data, you can assess the reliability and validity to get an indication
of your data quality.
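As an illustration of the second practice, the following short Python sketch (using the pandas library) flags duplicate records and implausible values so that manual entries can be double-checked. The column names and the plausible ranges are assumptions made for the example, not part of any standard procedure.

import pandas as pd

# Hypothetical manually entered survey data (column names and values are illustrative only).
entries = pd.DataFrame({
    "respondent_id": [101, 102, 102, 104],
    "age": [34, 29, 29, 340],          # 340 is almost certainly a data-entry error
    "monthly_income": [52000, 61000, 61000, 48000],
})

# Flag duplicate respondent IDs introduced during manual entry.
duplicates = entries[entries["respondent_id"].duplicated(keep=False)]

# Flag values outside an assumed plausible range.
implausible_age = entries[~entries["age"].between(0, 120)]

print("Possible duplicate entries:\n", duplicates)
print("Implausible ages to re-check:\n", implausible_age)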

Primary Data and Secondary Data

Primary Data:

Primary data is information collected directly from first-hand experience. This is the information that
you gather for the purpose of a particular research project. Primary data collection is a direct approach
that is tailored to specific company needs. It can be a long process but does provide important first-hand
information in many business cases.

Primary data is the original data – from the first source. It is like raw material.

Examples of primary data sources are:

➢ Interview (personal interview, telephone, e-mail)
➢ Self-administered surveys and questionnaires
➢ Field observation
➢ Experiments
➢ Life histories
➢ Action research
➢ Case studies
➢ Diary entries, letters, and other correspondence
➢ Eyewitness accounts
➢ Ethnographic research
➢ Personal narratives, memoirs

In effect, the source of primary data is the population sample from which you gather your data, and that sample is selected using one of the various sampling methods and techniques.

Secondary data:

Secondary data is data that has already been collected for another purpose but has some relevance to your research needs. It is gathered by someone other than the researcher. Secondary data is second-hand information; it is not being used for the first time, which is why it is called secondary.

Secondary data sources provide valuable interpretations and analyses based on primary sources. They may explain primary sources in detail and often use them to support a specific thesis or point of view.

Examples of secondary data sources are:

➢ Previous research
➢ Mass media products
➢ Government reports
➢ Official statistics
➢ Letters
➢ Diaries
➢ Web information
➢ Google Analytics or other sources that show statistics and data for digital customer experience.
➢ Historical data
➢ Encyclopedias
➢ Monographs
➢ Journal articles

➢ Biography
➢ Research analysis

Big Data Analysis:

Big Data:

Big Data is a collection of data that is huge in volume and keeps growing exponentially with time. Its size and complexity are so large that no traditional data management tool can store or process it efficiently. In short, big data is still data, just at a very large scale.

Big Data Analysis

Big Data analytics is the process used to extract meaningful insights from big data, such as hidden patterns, unknown correlations, market trends, and customer preferences. It provides various advantages: it can be used for better decision making and for preventing fraudulent activities, among other things.

The Lifecycle Phases of Big Data Analytics:

➢ Stage 1 - Business case evaluation - The Big Data analytics lifecycle begins with a business
case, which defines the reason and goal behind the analysis.

➢ Stage 2 - Identification of data - Here, a broad variety of data sources are identified.

➢ Stage 3 - Data filtering - All of the identified data from the previous stage is filtered here to
remove corrupt data.

➢ Stage 4 - Data extraction - Data that is not compatible with the tool is extracted and then
transformed into a compatible form.

➢ Stage 5 - Data aggregation - In this stage, data with the same fields across different datasets are
integrated.

➢ Stage 6 - Data analysis - Data is evaluated using analytical and statistical tools to discover
useful information.

➢ Stage 7 - Visualization of data - Big Data analysts can produce graphic visualizations of the analysis.

➢ Stage 8 - Final analysis result - This is the last step of the Big Data analytics lifecycle, where
the final results of the analysis are made available to business stakeholders who will take action.
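The lifecycle above is independent of any particular tool. Purely as an illustration, the following Python (pandas) sketch imitates Stages 3, 5 and 6 on a tiny in-memory dataset; the data sources, column names and filtering rule are assumptions for the example, not part of the lifecycle itself.

import pandas as pd

# Two small, hypothetical data sources identified in Stage 2.
web_orders = pd.DataFrame({"customer_id": [1, 2, 2, None], "amount": [120.0, 80.0, 80.0, 55.0]})
store_orders = pd.DataFrame({"customer_id": [1, 3], "amount": [200.0, 40.0]})

# Stage 3 - Data filtering: drop corrupt records (missing IDs) and exact duplicates.
web_orders = web_orders.dropna(subset=["customer_id"]).drop_duplicates()

# Stage 5 - Data aggregation: integrate datasets that share the same fields.
all_orders = pd.concat([web_orders, store_orders], ignore_index=True)

# Stage 6 - Data analysis: a simple statistic per customer.
print(all_orders.groupby("customer_id")["amount"].sum())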

Open data analysis

Open data may be defined as data that is available for reuse by everyone; it is not restricted by patents, copyright, or similar protections. With the spread of the internet, open data has become a tool for business, analysis, discussion and decision making, and it is booming as the next big thing around the globe. The concept is not new; it can be seen as another name for the open definition, under which a piece of data can be used, reused and redistributed by anyone. Below we discuss the sources of open data, the need for open data, and some of its advantages.

Sources of open data


Geo-data: These data are helpful in formulating maps by considering the location of buildings and roads with respect to topography and boundaries.

Cultural: Data about the cultural works and artefacts of countries. These data are normally held by galleries, libraries, archives and museums.

Science: Data produced as part of scientific research in all categories from A to Z. The concept of open access to scientific data was institutionally established with the formation of the World Data Center system, in preparation for the International Geophysical Year of 1957–1958.

Finance: Data such as government expenditure and revenue, and information on financial markets such as stocks, shares, bonds, etc.

Statistics: Data produced by statistical offices, such as the census and key socioeconomic indicators.

Weather: Data obtained from satellites and other sources to predict weather and climatic conditions.

Environment: Information related to the natural environment, such as pollution, rivers, seas, mountains, volcanoes, etc.

Government: Nowadays many national governments publish data catalogues for the sake of transparency of their plans and policies among the general public.

Necessity for open data


The need for open data is manifold: the public needs to know government policies, and data pertaining to health and the environment is of great help to the common person when shared openly. Increased use of the internet also increases people's desire to access open data.

Advantages of open data


For each category of data, the advantages may be specific and may pertain to a particular community. For example, government data may improve public service delivery, and corruption may be reduced thanks to transparency and a better understanding of the government's plans and policies. Likewise, information about the schools in a particular area can help parents identify the right school for their children, while the same information may not be useful for others.

In general, the advantages of open data are:

1. Creation of new socio-economic models
2. Better services
3. Lower IT costs

Selecting Variables
After selecting the data source, the next step is to specify the variables to be imported. Three types of
variables can be imported into a project.

Unique ID Variable (Required)


The ID variable is a unique numeric or string key that identifies each respondent. The data file does not
need to be ordered by the unique ID variable to successfully read it. After being read into the program,
the records can be sorted by various criteria. See the topic Sorting Variables for more information. This
ID variable is required to import data. Each imported record (or case) must have a unique ID value.
Two situations will cause the import to fail:
I. Duplicate ID values detected
II. Records with blank ID values
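These two conditions can be checked before attempting the import. The following Python (pandas) sketch is a generic illustration; the column name resp_id and the sample values are assumptions, and the check is not part of the import tool itself.

import pandas as pd

# Hypothetical survey export in which resp_id is meant to be the unique ID variable.
records = pd.DataFrame({
    "resp_id": ["A001", "A002", "A002", None],
    "comment": ["good", "ok", "ok", "bad"],
})

duplicate_ids = records["resp_id"].duplicated(keep=False) & records["resp_id"].notna()
blank_ids = records["resp_id"].isna() | (records["resp_id"].astype(str).str.strip() == "")

if duplicate_ids.any() or blank_ids.any():
    print("Import would fail:")
    print("  duplicate ID rows:", records.index[duplicate_ids].tolist())
    print("  blank ID rows:", records.index[blank_ids].tolist())
else:
    print("The ID variable is unique and complete; the data can be imported.")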

Open-Ended Text Variable(s) (Required)


The open-ended text variables represent the text responses to the question(s) in the survey. At least one
of these variables is required to import data. These variables can be string or long-string variables in
SPSS Statistics, columns containing general or text cells in Microsoft Excel, or text or note fields from
databases. Each open-ended text variable will be analyzed separately. There is a 4,000-character limit
on the size (width) of each text variable imported from a save file.

Reference Variable(s) (Optional)


The reference variables are additional, optional variables, generally categorical, that can be imported
for reference purposes. Reference variables are not used in text analysis but provide supplemental
information describing the respondent, which may aid understanding and interpretation. Demographic
variables are often included as reference variables, since they can contribute to understanding which
terms or categories are being used by which groups of individuals.

Construct a primary model

The basic steps for variable selection are as follows:
a) Specify the maximum model to be considered.
b) Specify a criterion for selecting a model.
c) Specify a strategy for selecting variables.
d) Conduct the specified analysis.
e) Evaluate the validity of the chosen model.
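As a minimal sketch of steps (b) and (c), the following Python code (using statsmodels and simulated data) applies forward selection with the Akaike information criterion (AIC) as the selection criterion. Both the criterion and the strategy are choices made for this illustration; an actual study should specify them from its research design.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame(rng.normal(size=(n, 4)), columns=["x1", "x2", "x3", "x4"])
y = 2.0 * X["x1"] - 1.5 * X["x3"] + rng.normal(size=n)   # only x1 and x3 truly matter

selected, remaining = [], list(X.columns)
best_aic = sm.OLS(y, np.ones(n)).fit().aic                # start from the intercept-only model

while remaining:
    # Try adding each remaining variable and keep the one that lowers the AIC the most.
    aics = {v: sm.OLS(y, sm.add_constant(X[selected + [v]])).fit().aic for v in remaining}
    candidate = min(aics, key=aics.get)
    if aics[candidate] < best_aic:
        best_aic = aics[candidate]
        selected.append(candidate)
        remaining.remove(candidate)
    else:
        break

print("Selected variables:", selected, "AIC:", round(best_aic, 2))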

Characteristics of research in Accounting:

Analytical

Researchers who utilize analytical methods base analysis and conclusions on formally modelling
theories or substantiated ideas in mathematical terms. These analytical studies use math to predict,
explain, or give substance to theory.

Archival

Researchers who utilize archival methods base analysis and conclusions on objective data collected
from repositories of third parties. Also included are studies in which the researchers collected the data
and in which the data has objective amounts such as net income, sales, fees, etc.

Experimental

Researchers who utilize experimental methods base analysis and conclusions on data the researcher
gathered by administering treatments to subjects. Usually these studies employ random assignment;
however, if the researcher selects different populations in an attempt to “manipulate” a variable, we
include these as experimental in nature.

Capital market research in Accounting


The principal sources of demand for capital markets research in accounting are fundamental analysis and
valuation, tests of market efficiency, and the role of accounting numbers in contracts and the political
process. The capital markets research topics of current interest to researchers include tests of market
efficiency with respect to accounting information, fundamental analysis, and value relevance of financial
reporting. Evidence from research on these topics is likely to be helpful in capital market investment
decisions, accounting standard setting, and corporate financial disclosure decisions.

Bank risk management

Risk management is important for a bank to ensure its profitability and soundness. It is also a concern of
regulators to maintain the safety and soundness of the financial system. Over the past decades, banking
business has developed with the introduction of advanced trading technologies and sophisticated financial products. While these advancements enhance banks' intermediation role, promote profitability, and better diversify bank risk, they also raise significant challenges for bank risk management. Banks' risk management has been considered weak relative to the rapid changes in financial markets, and in the light of the recent global financial crisis it has become a major concern of banking regulators and policy makers.

Cross-sectional data:

Cross-sectional data, or a cross section of a study population, in statistics and econometrics is a type of data collected by observing many subjects (such as individuals, firms, countries, or regions) at a single point or period in time. The analysis may disregard differences in time. Analysis of cross-sectional data usually consists of comparing the differences among the selected subjects.

For example, if we want to measure current obesity levels in a population, we could draw a sample of
1,000 people randomly from that population (also known as a cross section of that population), measure
their weight and height, and calculate what percentage of that sample is categorized as obese. This cross-
sectional sample provides us with a snapshot of that population, at that one point in time. Note that we
do not know based on one cross-sectional sample if obesity is increasing or decreasing; we can only
describe the current proportion.
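As a small numerical sketch of this example (with simulated heights and weights and the conventional BMI cut-off of 30, both of which are assumptions for the illustration):

import numpy as np

rng = np.random.default_rng(1)
height_m = rng.normal(1.70, 0.10, size=1000)    # simulated cross-section of 1,000 people
weight_kg = rng.normal(78, 14, size=1000)

bmi = weight_kg / height_m ** 2
obese_share = (bmi >= 30).mean()                # proportion classified as obese in this snapshot
print(f"Obese in this cross-sectional sample: {obese_share:.1%}")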

Time series:

Time series analysis is a statistical technique that deals with time series data, or trend analysis. Time series data means that the data is observed over a series of particular time periods or intervals. Data is generally considered to be of three types:

Time series data: A set of observations on the values that a variable takes at different times.

Cross-sectional data: Data of one or more variables, collected at the same point in time.

Pooled data: A combination of time series data and cross-sectional data.
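A small Python (pandas) sketch may make the distinction concrete; the firms, years and sales figures are made up for the illustration.

import pandas as pd

# Time series data: one variable (sales of firm A) observed at different times.
time_series = pd.Series([100, 110, 125], index=[2019, 2020, 2021], name="sales_firm_A")

# Cross-sectional data: several firms observed at the same point in time (2021).
cross_section = pd.DataFrame({"firm": ["A", "B", "C"], "sales_2021": [125, 90, 60]})

# Pooled (panel) data: the same firms observed across several years.
pooled = pd.DataFrame({
    "firm": ["A", "A", "B", "B"],
    "year": [2020, 2021, 2020, 2021],
    "sales": [110, 125, 85, 90],
})

print(time_series, cross_section, pooled, sep="\n\n")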

Money market:

The money market involves the purchase and sale of large volumes of very short-term debt products,
such as overnight reserves or commercial paper.

An individual may invest in the money market by purchasing a money market mutual fund, buying a
Treasury bill, or opening a money market account at a bank.

Money market investments are characterized by safety and liquidity, with money market fund shares
targeted at $1.

Capital market:

A capital market is a financial market in which long-term debt (over a year) or equity-backed securities are bought and sold, in contrast to a money market where short-term debt is traded. Capital markets channel the wealth of savers to those who can put it to long-term productive use, such as companies or governments making long-term investments. Financial regulators such as the Securities and Exchange Board of India (SEBI), the Bank of England (BoE) and the U.S. Securities and Exchange Commission (SEC) oversee capital markets to protect investors against fraud, among other duties.

Cross-sectorial Consolidated Requirements for data analysis

In order to establish a common understanding of requirements as well as technology descriptions across domains, the sector-specific requirement labels were aligned. Each sector provided its requirements with the associated user needs, and similar and related requirements were merged, aligned, or restructured to create a homogeneous set.

While most of the requirements exist within each of the sectors, the level of importance of each requirement varies by sector. For the cross-sector analysis, any requirement that was identified by at least two sectors as significant for that sector was included in the cross-sector roadmap definition.

Factors considered in data analysis:

❖ Data management techniques
❖ Prioritization of Cross-sectorial Requirements

The data management techniques include:

➢ Data enrichment
➢ Data integration
➢ Data sharing
➢ Real-time data transmission
➢ Data Quality
➢ Data Security and Privacy

Data enrichment

Data enrichment aims to make unstructured data understandable across domains, applications, and value chains. The associated challenges include:

• Information extraction from text


• Image understanding algorithms
• Standardized annotation framework
• Data integration

Data integration and data sharing

Data sharing and integration aim to establish a basis for the seamless integration of multiple and diverse data sources into a big data platform. The lack of standardized data schemas and semantic data models, as well as the fragmentation of data ownership, are important aspects that need to be tackled.

In order to address these requirements, the following challenges need to be tackled:

✓ Semantic data and knowledge models


✓ Context information
✓ Entity matching
✓ Scalable triple stores and key/value stores
✓ Facilitate core integration at data acquisition
✓ Best practice for sharing high-velocity and high-variety data
✓ Usability of semantic systems
✓ Metadata and data provenance frameworks
✓ Scalable automatic data/schema mapping mechanisms

Real-time data transmission

Real-time data transmission aims at acquiring (sensor and event) information in real time. In the public sector, this is closely related to the increasing capability of deploying sensors and Internet of Things scenarios, for example in public safety and smart cities.

In order to address these requirements, the following challenges need to be tackled:

✓ Distributed data processing and cleaning


✓ Read/write optimized storage solutions for high velocity data
✓ Near real-time processing of data streams

Data Quality

The high-level requirement, data quality, describes the need to capture and store high-quality data so that analytic applications can use the data as reliable input to produce valuable insights. Data quality has one sub-requirement:

✓ Data improvement

Data improvement aims at removing noise and redundant data, checking for trustworthiness, and adding missing data.

In order to address these requirements, the following challenges need to be tackled:

✓ Provenance management
✓ Human data interaction
✓ Unstructured data integration
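Purely as an illustration of the improvement step described above, the following Python (pandas) sketch removes redundant rows and fills in missing values in a hypothetical sensor table; real pipelines would also track provenance and check trustworthiness.

import pandas as pd

readings = pd.DataFrame({
    "sensor": ["s1", "s1", "s2", "s3"],
    "value": [20.5, 20.5, None, 19.8],   # one duplicate row and one missing value
})

cleaned = (
    readings
    .drop_duplicates()                                                    # remove redundant data
    .assign(value=lambda df: df["value"].fillna(df["value"].mean()))      # add missing data
)
print(cleaned)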

Data Security and Privacy

The high-level requirement data security and privacy describes the need to protect highly sensitive
business and personal data from unauthorized access. Thus, it addresses the availability of legal
procedures and the technical means that allow the secure sharing of data.

❖ Prioritization of Cross-sectorial Requirements

An actionable roadmap should have clear selection criteria regarding the priority of all actions. In
contrast to a technology roadmap for the context of a single company, a European technology roadmap
needs to cover developments across different sectors. The process of defining the roadmap included an analysis of the big data market and feedback received from stakeholders. Through this analysis, an understanding was reached of which characteristics indicate a higher or lower potential for the various big data technical requirements.

As the basis for the ranking, a table-based approach was used that evaluated each candidate according
to a number of applicable parameters. In each case, the parameters were collected with the goal of being
sector independent. Quantitative parameters were used where possible and available.

In consultation with stakeholders, the following parameters were used to rank the various technical requirements:

✓ Number of affected sectors
✓ Size of the affected sector(s) in terms of % of GDP
✓ Estimated growth rate of the sector(s)
✓ Projected additional growth rate of the sector(s) due to big data technologies
✓ Estimated export potential of the sector(s)
✓ Estimated cross-sectorial benefits
✓ Short-term low-hanging fruit
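A table-based ranking of this kind can be prototyped in a few lines. The following Python (pandas) sketch uses invented requirement names, equal weights and illustrative scores; the actual roadmap relied on sector statistics and stakeholder feedback rather than these numbers.

import pandas as pd

# Hypothetical scores (0-5) for each technical requirement against some of the ranking parameters.
scores = pd.DataFrame({
    "requirement": ["Data integration", "Real-time transmission", "Data quality"],
    "affected_sectors": [5, 3, 4],
    "sector_share_gdp": [4, 3, 4],
    "growth_potential": [3, 5, 4],
    "export_potential": [3, 4, 3],
}).set_index("requirement")

# Equal-weight composite score (the weighting is an assumption, not taken from the roadmap).
scores["priority_score"] = scores.mean(axis=1)
print(scores.sort_values("priority_score", ascending=False))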

What Is Earnings Management?

Earnings management is the use of accounting techniques to produce financial statements that present
an overly positive view of a company's business activities and financial position. Many accounting rules
and principles require that a company's management make judgments in following these principles.
Earnings management takes advantage of how accounting rules are applied and creates financial
statements that inflate or "smooth" earnings.

In accounting, earnings management is a method of manipulating financial records to improve the appearance of the company's financial position.

Companies use earnings management to present the appearance of consistent profits and to smooth
earnings' fluctuations.

One of the most popular ways to manipulate financial records is to use an accounting policy that
generates higher short-term earnings.

What is the Quality of Earnings?

The quality of earnings refers to the proportion of income attributable to the core operating activities of
a business. Thus, if a business reports an increase in profits due to improved sales or cost reductions,
the quality of earnings is considered to be high. Conversely, an organization can have low-quality
earnings if changes in its earnings relate to other issues, such as:

1. Aggressive use of accounting rules


2. Elimination of LIFO inventory layers
3. Inflation
4. Sale of assets for a gain
5. Increases in business risk

In general, any use of accounting trickery to temporarily bolster earnings reduces the quality of earnings.

A key characteristic of high-quality earnings is that the earnings are readily repeatable over a series of
reporting periods, rather than being earnings that are only reported as the result of a one-time event. In
addition, an organization should routinely provide detailed reports regarding the sources of its earnings, and any changes in the future trends of these sources. Another characteristic is that the reporting entity
engages in conservative accounting practices, so that all relevant expenses are appropriately recognized
in the correct period, and revenues are not artificially inflated.

Value Relevance of Accounting Information

Value relevance is defined as the ability of information disclosed by financial statements to capture and summarize firm value. It can be measured through the statistical relation between the information presented in financial statements and stock market values or returns.
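In practice this is often examined with a price-level regression in the spirit of Ohlson (1995), relating share price to earnings per share and book value per share. The following Python sketch (statsmodels, with simulated numbers) only illustrates the statistical relation being tested; the coefficients and fit are not empirical results.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 150
data = pd.DataFrame({
    "eps": rng.normal(5, 2, n),       # earnings per share (simulated)
    "bvps": rng.normal(40, 10, n),    # book value per share (simulated)
})
data["price"] = 2 + 6 * data["eps"] + 0.8 * data["bvps"] + rng.normal(0, 8, n)

# Price_i = a0 + a1*EPS_i + a2*BVPS_i + e_i: accounting numbers are value relevant
# if the coefficients are significant and the model explains market prices well.
model = sm.OLS(data["price"], sm.add_constant(data[["eps", "bvps"]])).fit()
print(model.summary().tables[1])
print("R-squared (explanatory power):", round(model.rsquared, 3))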

Value relevance of accounting information


Value relevance studies represent a simultaneous test of the relevance and reliability of accounting information and are one of the most productive areas of accounting research. Empirical accounting research conducted in developed countries shows that accounting information is a significant factor in the valuation of corporations and that accounting variables are significantly related to share prices; in other words, accounting information is value relevant. Accounting information is considered value relevant if it is correlated with the market value of a company. If there is no statistically significant relation between accounting information and the market value of a company, it can be concluded that the accounting information is not value relevant, which implies that the financial statements do not meet one of the fundamental objectives of financial reporting.

The Importance of the Accounting Information and Accounting Research


Accounting information plays a positive role in the soundness of decisions as well as in the success of development plans; this role derives from the availability of the information required for preparing, implementing and following up such plans. In many cases, the failure of such plans is largely attributable to the absence of a serious evaluation of the role accounting plays in making economic development plans succeed. A lack of the required information is one of the obstacles that negatively affect development plans: it can lead to choosing a model built on unrealistic assumptions, or to a model that covers unimportant aspects of the economy simply because information about them happens to be available. Similarly, a lack of information about the relative scarcity of available resources leads to a misallocation of those resources, and a lack of information about the progress of a development plan makes any amendment of the plan impossible.

Accounting information contained in financial statements is expected to be useful for decision makers. To achieve this, financial statements should meet some basic qualitative characteristics: “If financial information is to be useful, it must be relevant and faithfully represent what it purports to represent. The usefulness of financial information is enhanced if it is comparable, verifiable, timely and understandable.”

Event study method and how it works

What Is an Event Study?

An event study is an empirical analysis that examines the impact of a significant catalyst occurrence or
contingent event on the value of a security, such as company stock.

Event studies can reveal important information about how a security is likely to react to a given event.
Examples of events that influence the value of a security include a company filing for bankruptcy
protection, the positive announcement of a merger, or a company defaulting on its debt obligations.

KEY TAKEAWAYS

An event study, or event-history analysis, examines the impact of an event on the financial performance
of a security, such as company stock.

An event study analyzes the effect of a specific event on a company by looking at the associated impact on the company's stock. If the same type of statistical analysis is used to analyze multiple events of the same type, a model can predict how stock prices typically respond to a specific event.

How an Event Study Works

An event study, also known as event-history analysis, employs statistical methods, using time as the
dependent variable and then looking for variables that explain the duration of an event—or the time
until an event occurs. Event studies that use time in this way are often employed in the insurance
industry to estimate mortality and compute life tables. In business, these types of studies may instead
be used to forecast how much time is left before a piece of equipment fails. Alternatively, they could
be used to predict how long until a company goes out of business. Other event studies, such as an
interrupted time series analysis (ITSA), compare a trend before and after an event to explain how, and
to what degree, the event changed a company or a security. This method may also be employed to see
if the implementation of a particular policy measure has resulted in some statistically significant change
after it has been put in place.

An event study conducted on a specific company examines any changes in its stock price and how it
relates to a given event. It can be used as a macroeconomic tool, as well, analyzing the influence of an
event on an industry, sector, or the overall market by looking at the impact of the change in supply and
demand.

An event study, whether on the micro- or macro-level, tries to determine if a specific event has, or will
have, an impact on a business's or economy's financial performance.
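For the security-price variant of an event study, a common approach is to estimate a market model over a pre-event window and then cumulate abnormal returns around the event date. The following Python sketch (statsmodels, with simulated daily returns; the window lengths and event date are chosen only for the illustration) shows the idea.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
T = 121                                    # 100 estimation days plus a 21-day event window
market = rng.normal(0.0005, 0.01, T)
stock = 0.0002 + 1.2 * market + rng.normal(0, 0.012, T)
stock[110] += 0.05                         # hypothetical positive announcement effect on day 110

est = slice(0, 100)                        # estimation window
evt = slice(100, 121)                      # event window around the announcement

# Market model fitted on the pre-event window: R_stock = alpha + beta * R_market + e
fit = sm.OLS(stock[est], sm.add_constant(market[est])).fit()
alpha, beta = fit.params

abnormal = stock[evt] - (alpha + beta * market[evt])   # abnormal returns in the event window
car = abnormal.cumsum()                                # cumulative abnormal return (CAR)
print("CAR over the event window:", round(float(car[-1]), 4))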

Empirical methods:

Empirical methods are employed in communication studies in an attempt to yield objective and
consistent findings. This approach is positivistic in the sense that the social world is perceived as
governed by laws or law-like principles that make it predictable. Initially, empirical methods were equated with the use of quantitative measures (e.g., content analyses, surveys) and primary collection
and analysis of data. Nowadays, secondary analyses and qualitative research are also considered
empirical. It seems plausible to categorize qualitative research as empirical to the extent that scholars
provide sufficient information that allows the reproduction of their findings (e.g., sampling strategy,
data collection and analysis). However, this categorization is likely to be debatable.

Types of Empirical Evidence

• Qualitative: Qualitative evidence is the type of data that describes non-measurable information.
• Quantitative: Quantitative evidence refers to numerical data that can be further analyzed using
mathematical and/or statistical methods.

Necessity of Empirical research

There is a reason why empirical research is one of the most widely used methods: it offers several advantages, a few of which follow.

• It is used to authenticate traditional research through various experiments and observations.
• This research methodology makes the research being conducted more credible and authentic.
• It enables a researcher to understand the dynamic changes that can happen and to adjust the strategy accordingly.
• The level of control in such research is high, so the researcher can control multiple variables.
• It plays a vital role in increasing internal validity.

