Information System For Managers
HOW IS IT IMPORTANT IN AN ORGANISATION?
The basic form of organization in the new economy is an organizational structure characterized by horizontal organization, a high degree of decentralization, a democratic style of decision making, and a high level of flexibility and adaptability to constant market changes. Given that customers are the drivers of today's economy, it is logical to focus organizational efforts on meeting their needs. The hierarchical organization can therefore be completely inverted, creating an inverse hierarchical pyramid in which all employees are directly connected with customers and able to respond quickly to market needs. Such a change requires substantial reengineering of the business and a new internal and external "communication range", based on intensive use of information-communication and Internet technologies. Decisions are made on the spot, and each job requires an appropriate level of autonomy, independence and entrepreneurial thinking. Managers give up their exclusive right to control outcomes, place trust in colleagues, and encourage entrepreneurial initiatives at all levels of the organization. These organizations are characterized by a very high level of adaptability, willingness to change, a wide range of communication, an orientation toward inductive logic, innovation, education and continuous learning. Management in the New economy, based on Internet technology, should meet multiple criteria and cover all areas of organizational activity: planning, organizing, managing, communicating, controlling, motivating, effective action, creating a pleasant and stimulating environment, innovation, recognition of talent, human resource management, etc.
Business activity in the New economy is causing changes in many areas of organizational performance. Through the intensive use of publicly accessible computer networks, companies accelerate business processes, globalize operations, communicate effectively with the general public and change their behaviour in relation to the unpredictable environment and society as a whole. Three main factors affect these changes in organizational behaviour.
Decision-making becomes especially important when the business environment is dynamic, unpredictable and uncertain. What knowledge and skills should managers possess to run a business effectively in the new economy? The intensive development of the Internet as a modern business infrastructure has led to substantial changes in business strategy, development and innovation of business models, and new forms of business organization; however, relatively little attention has been devoted to developing new management models matched to global conditions, the networked business environment and the New economy. While some managerial functions change under the conditions of the New economy, the few existing studies show that, in addition to their classical functions, managers in the New economy ("e-CEO", "e-Manager") need to develop new or improved competence in these key areas:
Communication skills,
Human resource management, and attracting, motivating and finding talented co-workers,
In business, management information systems (or information management systems) are tools
used to support processes, operations, intelligence, and IT. MIS tools move data and manage
information. They are the core of the information management discipline and are often
considered the first systems of the information age.
MIS produce data-driven reports that help businesses make the right decisions at the right time.
While MIS overlaps with other business disciplines, there are some differences:
Enterprise Resource Planning (ERP): This discipline ensures that all departmental
systems are integrated. MIS uses those connected systems to access data to create
reports.
IT Management: This department oversees the installation and maintenance of hardware
and software that are parts of the MIS. The distinction between the two has always been
fuzzy.
E-commerce: E-commerce activity provides data that the MIS uses. In turn, the MIS
reports based on this data affect e-commerce processes.
Management information system is a broad term that incorporates many specialized systems.
The major types of systems include the following:
At their core, management information systems exist to store data and create reports that
business pros can use to analyze and make decisions. There are three basic kinds of reports:
Scheduled: Created on a regular basis, these reports use rules the requestor has provided
to pull and organize the data. Scheduled reports allow businesses to analyze data over
time (e.g. an airline can see the percentage of lost luggage by month), location (e.g. a
retail chain can compare sales figures from different stores), or other parameters.
Ad-hoc: These are one-off reports that a user creates to answer a question. If the reports
are useful, you can turn ad-hoc reports into scheduled reports.
Real-time: This type of MIS report allows someone to monitor changes as they occur. For
example, a call center manager may see an unexpected spike in call volume, and find a
way to increase productivity or send some of the calls elsewhere.
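The scheduled-report idea above can be sketched in a few lines of Python; the monthly luggage records and resulting percentages here are hypothetical, invented for illustration:

```python
# Hypothetical monthly records: (month, flights handled, bags lost)
records = [
    ("Jan", 1200, 30),
    ("Feb", 1100, 22),
    ("Mar", 1300, 39),
]

def lost_luggage_report(rows):
    """Scheduled report: percentage of lost luggage by month."""
    return {month: round(100 * lost / flights, 2)
            for month, flights, lost in rows}

print(lost_luggage_report(records))  # {'Jan': 2.5, 'Feb': 2.0, 'Mar': 3.0}
```

The same function run once against an arbitrary slice of data would be the ad-hoc variant; wiring it to run every month turns it into a scheduled report.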
Beyond the need to stay competitive, effective use of management information systems offers several key advantages.
Information technology (IT) is an integral part of every single business plan. Information
technology plays a vital role in every business type including small, medium and large
(multinational).
Companies use information technology to support communication. Networks (intranet and internet) and email play a key role in organisational communication, both internally and externally. Today, organisational communication has expanded to chat, Voice over Internet Protocol (VoIP) and telephony (Curry, 2008). Inventory management systems allow organisations to track stock and trigger an order for extra stock when the count drops below a pre-defined quantity. Companies connect the inventory management system to their Point-of-Sale (POS) systems to gain maximum efficiency.
The POS system ensures that every time a product is sold, the inventory is updated and
information is shared among other divisions. Data is the critical element in businesses today, and
the security and effective management of data can be considered key drivers of organisations.
This has been simplified by information technology through databases, digital document versions and storage devices. These options enable businesses to ensure security, avoid unnecessary data duplication, and maintain data consistency, backups and easy replication.
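A minimal sketch of the POS-to-inventory link described above; the SKU, threshold and quantities are all hypothetical:

```python
REORDER_POINT = 10   # hypothetical pre-defined quantity
REORDER_QTY = 50     # hypothetical replenishment size

inventory = {"SKU-001": 12}
purchase_orders = []  # orders the system raises automatically

def record_sale(sku, qty):
    """POS hook: update stock on every sale; reorder when stock drops below the threshold."""
    inventory[sku] -= qty
    if inventory[sku] < REORDER_POINT:
        purchase_orders.append((sku, REORDER_QTY))

record_sale("SKU-001", 3)  # stock falls to 9, so an order for 50 is raised
```

In a real deployment the same hook would also publish the sale to the other divisions that share the data.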
Stored data is only an advantage if it can be used efficiently. Advanced businesses use data in their strategic planning procedure as well as in the implementation of that strategy. Management Information Systems (MIS) allow businesses to sort and analyse sales data, costs, and efficiency levels. Management can see daily sales levels, allowing them to respond directly to lower-than-projected numbers by increasing worker efficiency or reducing the cost of a product. Businesses also use information technology to develop effective methods of customer relationship management. Instead of relying on traditional forms of communication, companies typically host CRM systems that connect the business's systems to customers in a faster and more secure manner.
Basically, any organization needs a number of information systems to support it. In short, this technology refers to information and the complementary networks of hardware and software that people and organizations use to collect, filter, process, create and distribute data.
The purpose of the operation support system is to facilitate business transactions, control production, support internal as well as external communication and update the organization's central database. The operation support system is further divided into a transaction-processing system, a process control system and an enterprise collaboration system.
The MIS system analyzes its inputs with routine algorithms (i.e., it aggregates, compares and summarizes the results) to produce reports that tactical managers use to monitor, control and predict future performance.
Sales management systems: they get input from the point of sale system
Budgeting systems: they give an overview of how much money is spent within the
organization for the short and long terms.
Human resource management system: overall welfare of the employees, staff turnover,
etc.
Process control system: critical information is fed to the system on a real-time basis, thereby enabling process control.
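The "aggregate, compare and summarize" routine behind a budgeting report can be sketched as follows; the departments and figures are hypothetical:

```python
budget  = {"marketing": 10000, "operations": 25000}
actuals = {"marketing": 12500, "operations": 23000}

def variance_report(budget, actuals):
    """Compare actual spend against budget per department (positive = overspend)."""
    return {dept: actuals[dept] - budget[dept] for dept in budget}

print(variance_report(budget, actuals))  # {'marketing': 2500, 'operations': -2000}
```

A tactical manager would read this as marketing overspending by 2,500 and operations coming in 2,000 under budget.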
A Decision Support System can be seen as a knowledge-based system, which facilitates the
creation of knowledge and allows its integration into the organization. These systems are often
used to analyze existing structured information and allow managers to project the potential
effects of their decisions into the future. Such systems are usually interactive and are used to
solve ill-structured problems. They offer access to databases, analytical tools, allow “what if”
simulations, and may support the exchange of information within the organization.
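A toy "what if" projection of the kind such systems support; all figures are invented for illustration:

```python
def project_profit(units, price, unit_cost, fixed_cost):
    """Simple model: profit = revenue - variable cost - fixed cost."""
    return units * price - units * unit_cost - fixed_cost

baseline = project_profit(1000, 20.0, 12.0, 5000)  # current plan: 3000.0
# What if we raise the price to 22 but lose 10% of the volume?
scenario = project_profit(900, 22.0, 12.0, 5000)   # projected: 4000.0
```

A real DSS wraps this kind of model in interactive tooling and feeds it from the organization's databases, but the underlying "project the effect of a decision" step is the same.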
The information in such systems is often weakly structured and comes from both internal and external sources. An Executive Information System is designed to be operated directly by executives without the need for intermediaries, and to be easily tailored to the preferences of the individual using it.
MIS is a system providing management with accurate and timely information. Such information is necessary to facilitate the decision-making process and enable the organization's planning, control, and operational functions to be carried out effectively. MISs increase the competitiveness of the firm by reducing costs and improving processing speed. The power of technology has transformed the role of information in a business firm. Information is now recognized as the lifeblood of an organization; without information, the modern company is dead. MIS and its organizational subsystems contribute to the decision-making process in many ways.
Power has stated that making decisions is an important part of working in the business
environment. Companies often make decisions regarding operational improvements or selecting
new business opportunities for maximizing the company's profit. Companies develop a decision-
making process based on individuals responsible for making decisions and the scope of the
company's business operations. A useful tool for making business decisions is a management
information system. Historically, MIS was a manual process used to gather information and
funnel it to individuals responsible for making decisions.
MIS is an organization-wide effort to provide information for the decision-making process. The system is a formal commitment by executives to make the computer available to all managers. MIS sets the stage for accomplishments in other areas, such as DSS, the virtual office and knowledge-based systems. The main idea behind MIS is to keep a continuous supply of information flowing to management. Decisions are then made using the data and information gathered from the MIS. MIS may be viewed as a means for the transformation of data into information used in decision-making processes. Figure 1 shows this understanding of information as data processed for a definite purpose.
MIS differs from regular information systems in that its primary objective is to analyze other systems dealing with the operational activities of the organization. MIS is a subset of the overall planning and control activities covering the people, technologies, and procedures of the organization. Within the field of scientific management, MIS is most often tailored to the automation or support of human decision making.
MIS MODEL
The database contains the data provided by the accounting information system. In addition, both data and information are entered from the environment. The database content is used by software that produces periodic and special reports, as well as by mathematical models that simulate various aspects of the firm's operations. The software output is used by the people responsible for solving the firm's problems. Note that some of the decision makers might exist in the firm's environment. The environment becomes involved once the firm bonds together with other organizations, such as suppliers, to form an Inter-Organizational Information System (IOS). In such a case, the MIS supplies information to the other members of the IOS.
QUES 1: WHAT IS MEANT BY E-COMMERCE AND E-BUSINESS? HOW ARE THEY
DIFFERENT FROM EACH OTHER?
Electronic commerce is generally considered to be the sales aspect of e-business. It also covers the exchange of data to facilitate the financing and payment aspects of business transactions. It is an effective and efficient way of communicating within an organization, one of the most useful ways of conducting business, and a market entry strategy in which the company may or may not have a physical presence.
E-Business
E-Business is the term used to describe the information systems and applications that support and
drive business processes, most often using web technologies.
E-Business allows companies to link their internal and external processes more efficiently and
effectively, and work more closely with suppliers and partners to better satisfy the needs and
expectations of their customers, leading to improvements in overall business performance.
While a website is one of the most common implementations, E-Business is much more than just a web presence. There is a vast array of internet technologies all designed to help businesses work smarter, not harder: collaboration tools, mobile and wireless technology, Customer Relationship Management and social media, to name a few.
E-commerce refers to online transactions - buying and selling of goods and/or services over the
Internet.
E-business covers online transactions, but also extends to all Internet based interactions with
business partners, suppliers and customers such as: selling direct to consumers, manufacturers
and suppliers; monitoring and exchanging information; auctioning surplus inventory; and
collaborative product design. These online interactions are aimed at improving or transforming
business processes and efficiency.
Online: E-commerce is an electronic environment that allows buyers and sellers to trade products, services, and information on the internet. The products may be physical, like cars, computers and books, or services, like news or consulting.
Structure: E-commerce deals with various media: data, text, video, web pages, and internet telephony.
Market: E-commerce is a worldwide network. A local store can open a web storefront and find the world at its doorstep: customers, suppliers, competitors, and payment services. Of course, an advertising presence is essential.
The architectural framework for e-commerce consists of six layers of functionality or services as
follows:
1. Application services.
2. Brokerage services, data or transaction management.
3. Interface and support layers.
4. Secure messaging, security and electronic document interchange.
5. Middleware and structured document interchange, and
6. Network infrastructure and the basic communication services.
In the application services layer of e-commerce, the type of e-commerce application to be implemented is decided. Three types of e-commerce applications are distinguished: consumer-to-business, business-to-business and intra-organizational applications.
This layer is rapidly becoming necessary for dealing with the voluminous amounts of information on the networks. It works as an intermediary that provides service integration between customers and information providers, given constraints such as low price, fast service or profit maximization for a client. For example, suppose a person wants to travel from Bangladesh to the USA. The person checks the sites of various airlines for the lowest-priced ticket with the best available service. To do this, he must know the URLs of all the sites; to compare services and prices, he also has to enter the details of the journey again and again on different sites. A site that works as an information broker and can arrange the ticket as per the person's needs would save a great deal of time and effort. This is just one example of how information brokerages can add value.
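The airline-ticket brokerage above can be sketched as a filter-then-minimize step; the airlines, prices and constraint below are hypothetical:

```python
# Hypothetical fare quotes the broker has already gathered from airline sites
quotes = [
    {"airline": "AirA", "price": 820, "stops": 1},
    {"airline": "AirB", "price": 760, "stops": 2},
    {"airline": "AirC", "price": 790, "stops": 0},
]

def best_offer(quotes, max_stops=None):
    """Brokerage step: apply the client's constraint, then pick the lowest price."""
    eligible = [q for q in quotes
                if max_stops is None or q["stops"] <= max_stops]
    return min(eligible, key=lambda q: q["price"])

best_offer(quotes)               # cheapest overall: AirB
best_offer(quotes, max_stops=0)  # cheapest non-stop: AirC
```

The value the broker adds is exactly this: the traveller states the journey and the constraint once, instead of re-entering them on every airline's site.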
Another aspect of the brokerage function is the support for data management and traditional
transaction services. Brokerages may provide tools to accomplish more sophisticated, time-
delayed updates or future-compensating transactions.
The third layer of the architectural framework is the interface layer, which provides the interface for e-commerce applications. Interactive catalogs and directory support services are examples of this layer.
Interactive catalogs are customized interfaces to customer applications such as home shopping. An interactive catalog is very similar to a paper-based catalog; the only difference is that the interactive catalog has additional features, such as the use of graphics and video to make the advertising more attractive.
Directory services have the functions necessary for information search and access. The
directories attempt to organize the enormous amount of information and transactions generated
to facilitate e-commerce.
The main difference between the interactive catalogs and directory services is that the interactive
catalogs deal with people while directory support services interact directly with software
applications.
Secure Messaging Layer
In any business, electronic messaging is an important issue. Commonly used messaging systems like phone, fax and courier services have certain problems: with the phone, if the line is dead or the number is wrong, you cannot deliver an urgent message; with a courier service, instant delivery is not possible, as it takes time depending on the distance between the source and destination. The solution to such problems is electronic messaging services like e-mail, enhanced fax and EDI.
The electronic messaging has changed the way the business operates. The major advantage of the
electronic messaging is the ability to access the right information at the right time across diverse
work groups.
The main concerns in electronic messaging are security, privacy, and confidentiality, which are addressed through data encryption and authentication techniques.
The enormous growth of networks, client-server technology and other forms of communication between unlike platforms is the reason for the invention of middleware services. Middleware services are used to integrate diverse software programs and make them talk to one another.
We know that the effective and efficient linkage between the customer and the supplier is a
precondition for e-commerce. For this a network infrastructure is required. The early models for
networked computers were the local and long distance telephone companies. The telephone
company lines were used for the connection among the computers. As soon as the computer
connection was established, the data traveled along that single path. Telephone company
switching equipment (both mechanical and computerized) selected specific telephone lines, or
circuits, that were connected to create the single path between the caller and the receiver. This
centrally-controlled, single-connection model is known as circuit switching.
QUES 3: DISCUSS THE TYPES OF E-COMMERCE AND ITS APPLICATIONS.
Electronic Market
Electronic Market: a place where online buyers and sellers meet. An e-market handles business transactions, including bank-to-bank money transfers. In an e-market, the business center is not a physical building but a network-based location where business activities occur. The participants in an e-market, such as buyers, sellers and transaction handlers, are not only in different locations but may not even know each other.
An IOS is a unified system shared by several business partners. A typical IOS will include a company and its suppliers and customers. Through an IOS, buyers and sellers arrange routine business transactions. Information is exchanged over a communication network using specific formats, so there is no need for telephone calls, paper documents or correspondence.
- EDI (Electronic Data Interchange): provides secure B2B connections over value-added networks (VANs).
- Extranet: provides secure B2B connections over the internet.
- EFT (Electronic Funds Transfer): transfers funds electronically from one account to another.
- Electronic Forms: online (web-page) forms on the internet.
- Shared Databases: information stored in repositories (collections of data) shared by trading partners.
- Supply Chain Management: cooperation between a company and its suppliers and customers regarding demand forecasting, inventory management and order fulfillment.
E-commerce conducted between businesses differs from that carried out between a business and
its consumers. There are five generally accepted types of e-commerce:
With the help of B2B e-commerce, companies are able to improve the efficiency of several
common business functions, including supplier management, inventory management and
payment management.
Using e-commerce-enabled business applications, companies are able to better control their supplier costs by reducing PO (purchase order) processing costs and cycle times. This has the added benefit of being able to process more POs at a lower cost in the same amount of time. E-commerce technology can also shorten the order-ship-bill cycle of inventory management by linking business partners with the company to provide faster data access. Businesses can improve their inventory auditing capabilities by tracking order shipments electronically, which reduces inventory levels and improves the company's ability to provide "just-in-time" service.
This e-commerce technology is also being used to improve the efficiency of managing payments
between a business and its partners and distributors. By processing payments electronically,
companies are able to lower the number of clerical errors and increase the speed of processing
invoices, which results in lowered transaction fees.
Business to Customer or B2C refers to e-commerce activities that are focused on consumers
rather than on businesses. For instance, a book retailer would be a B2C company such as
Amazon.com and other companies that follow a merchant model or brokerage business models.
Other examples could also be purchasing services from an insurance company, conducting
online banking and employing travel services.
Customer to Business or C2B refers to e-commerce activities, which use reverse pricing models
where the customer determines the prices of the product or services. In this case, the focus shifts
from selling to buying. There is an increased emphasis on customer empowerment.
In this type of e-commerce, consumers get a choice of a wide variety of commodities and
services, along with the opportunity to specify the range of prices they can afford or are willing
to pay for a particular item, service or commodity. As a result, it reduces the bargaining time,
increases the flexibility and creates ease at the point of sale for both the merchant and the
consumer.
Most companies have given little thought to how to scale back their environmental impact. According to Stanford, the energy that powers individual workstations can be reduced by anywhere from 17 to 74 percent. Even when we turn off and unplug our computers and devices, technology still accounts for an enormous energy output.
Here are ways that companies and departments can make an impact towards sustainable IT:
Relocate (and collocate) servers. Maximize your data center space as much as possible to
minimize your cooling and energy costs. If realistic, relocate your servers to colder
climates for an 8% reduction in GHG emissions.
Follow data center best practices, such as:
Harness outside air cooling.
Automate controls for lights, security, and outdoor cooling.
Do not over-cool; cool to the minimum necessary.
Separate aisles based on hot and cold temperatures.
Aim for a power usage effectiveness (PUE) of 1.2 or lower.
Unplug and remove zombie servers, the ultimate energy wasters: servers that are plugged in
and using energy but aren't doing any computing.
Migrate to the cloud. Cloud energy tends to be more efficient because of economies of
scale. (Although some research does challenge this.)
Use state-of-the-art IT. Legacy systems can require more power and their large sizes often
mean outsized heat output—requiring additional cooling. Tools like BMC Discovery can
help you manage your assets, including releasing those that are no longer useful.
Promote and purchase computers that are rated for energy efficiency. Groups like TCO
Certified and Energy Star audit and certify factories and devices for their efficiencies and
sustainable practices.
Offer rebates or increased budgets to teams who promote sustainability. Some teams and
departments at your company may be able to virtualize or work from home.
Let individual teams determine the most applicable solutions for their needs. The Stanford
research indicates that when teams can choose their options—instead of being mandated—
they will see higher energy savings.
Of course, sustainable practices aren’t left entirely to companies. Individuals can reduce energy
spent on their devices with these best practices:
Set computers to sleep. Sleep is the lowest use of energy (besides powering down and
unplugging). So, set monitors to turn off after 15 inactive minutes and hard disks even
sooner: 5 minutes of inactivity. Your computer shouldn't stay awake after more than 30
minutes of inactivity.
Upgrade to smart power strips. These smart strips cut down on vampire energy that
computers, TVs, and peripheral devices all consume.
Share printers. Whether at home or the office, consider how often printers are necessary.
Who can you share with?
Work remotely. Unless your daily commute is by foot (walking or biking), try working
from home to reduce GHG emissions associated with commuting—with approval from
your boss, of course.
Is green growth possible?
A popular belief is that economic growth and sustainability are compatible—an idea known as
green growth. Many climate activists disagree, and research is starting to show that in order for
us to protect our environment, one of three sustainability pillars, we may have to “liberate
ourselves” from economic development. This doesn’t mean stopping economic activity, but it
should mean that we reevaluate certain industries and our own companies and practices to ensure
that we don’t produce or consume more than we are right now. Indeed, a closed-loop supply
chain may be the best way forward.
Green computing is the term referring to the efficient use of resources in computing and IT/IS infrastructure. Green computing emphasizes minimizing hazardous environmental impact while achieving economic viability and improved system performance. The field of "green technology" covers a broad spectrum of subjects, from alternative energy generation and electricity consumption techniques and the use of eco-friendly, recyclable materials to the implementation of sustainable digital services.
Nowadays, to achieve social awareness and promote green technology solutions, four main complementary approaches are employed:
• Green Use: reducing the power consumption of computers, information systems and their peripheral subsystems in an environmentally friendly manner.
• Green Disposal: refurbishing and reusing existing old computers and other associated electronic devices, and recycling unwanted used computers and other electronic waste through IT vendors' "take back" policies, under which vendors take responsibility for the full lifecycle of the products they produce.
To achieve the goals set by the idea of ICT sustainability, the whole process of creating ICT infrastructure should be taken into account. Minimal impact on the environment should be one of the key assumptions for IT manufacturers during the design and production of all ICT components. Major IT companies are already applying green standards to their own operations in order to gain new revenue opportunities and promote social and environmental responsibility, influencing customers and market competition. The main areas in green manufacturing of computers are:
• Eco-friendly design: the design of computing resources that meet stringent restrictions such as those of Energy Star, enabling further utilization with defined power supply and power management requirements (including special modes and allowances). "The Energy Star devices can be programmed to power-down to a low electric state when they are not in use, helping to save energy and run cooler which helps them last even longer".
• Use of bio-products: a biodegradable and renewable material often requires less energy to produce in comparison with traditional toxic materials. Manufacturers use many different types of plastic in computers, which makes them very challenging to recycle.
What is more, computers contain contaminants hazardous to the environment, such as cadmium, lead, mercury and chromium. Harmful, power-demanding materials can be replaced by efficient and recyclable alternatives, e.g. displays made of OLEDs (Organic Light-Emitting Diodes), whose manufacture uses no mercury, making them more environmentally friendly.
Green cloud architecture is one of the latest developments of the green computing idea. The aim of this unified solution is to deliver to both users and providers a high-level architecture for supporting energy-efficient service allocation based on cloud technology. Cloud providers, being profit-oriented, are looking for solutions that can lower their electricity bills without losing market share. The goal of satisfying the demand for high-level computing services on the users' side while saving energy on the providers' side can now be achieved by implementing a green cloud infrastructure. Figure 1 shows the architecture for supporting energy-efficient service allocation in a green cloud computing infrastructure. The cloud services (SaaS, PaaS, IaaS) are registered as public offerings in the Green Offer Directory.
The Green Broker has full access to all services which are available and registered in the public
directory. The Green Offer Directory is an incentive for providers, who list their services there
with discounted prices and green hours. A typical cloud broker leases cloud services and
schedules applications; the Green Broker’s responsibility is to select offerings that meet the
requirements of the end user. Each request is analysed for the price, the completion time, and
the service that offers the highest quality with the least CO2 emission. The Green Broker draws
up-to-date information about cloud services and the current status of their energy-efficiency
parameters from the Carbon Emission Directory (CED), a crucial component of the architecture.
The CED may include key green metrics such as Power Usage Effectiveness (PUE), the ratio of
the total energy consumed by a data centre to the energy consumed by its IT equipment alone;
cooling-efficiency indicators such as Water Usage Effectiveness (WUE); and Carbon Usage
Effectiveness (CUE), which measures the greenhouse gases (CO2, CH4) released into the
atmosphere by the data centre, i.e. its carbon footprint.
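As a rough illustration, the PUE and CUE metrics mentioned above can be computed directly from measured energy and emissions figures. The following is a minimal sketch; all numbers are invented for the example.

```python
def pue(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.
    An ideal data centre approaches 1.0."""
    return total_facility_energy_kwh / it_equipment_energy_kwh

def cue(total_co2_emissions_kg: float, it_equipment_energy_kwh: float) -> float:
    """Carbon Usage Effectiveness: kg of CO2-equivalent emitted per kWh
    consumed by IT equipment."""
    return total_co2_emissions_kg / it_equipment_energy_kwh

# Example: a data centre drawing 1,500 kWh in total, of which 1,000 kWh
# reaches the IT equipment, while emitting 500 kg CO2e over the same period.
print(pue(1500, 1000))   # 1.5
print(cue(500, 1000))    # 0.5
```

A lower PUE means less energy is lost to cooling and power distribution; a lower CUE means a smaller carbon footprint per unit of useful computing.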
Software tools for analyzing, modeling, and simulating environmental impact, and
environmental risk management;
Platforms for eco-management, emission trading, or ethical investing; tools for auditing
and reporting energy consumption and savings and for monitoring greenhouse gas
emissions;
Environmental knowledge management systems, including geographic information
systems and environmental metadata standards; urban environment planning tools and
systems;
Integrating and optimizing existing environmental monitoring networks and new easy
plug-in sensors.
The concept of the green environment emerged during the 20th century out of consumer concern
about environmental issues and about the scarcity arising from heavy utilization of natural
resources, which may become scarce or depleted. Even so, it took companies some 20 years to
adopt the practice of green business. In a developing economy, doing green business is no longer
considered merely a cost issue; it is now seen as a primary driver of product modification, of the
exploration of new markets, and of profit maximization. In a green business environment, a
company may redesign its product attributes so that harmful chemicals are avoided and scarce
resources are not exploited, which in turn reduces production and inventory costs. Companies
should adopt strategies and creative solutions to face environmental challenges, supported both
by their own policies and by the provision of government incentives. Green behaviour is not the
sole concern of consumers; it is also the social responsibility of producers to maintain their own
green environment. The cleaning industry, for instance, is striving hard to make progress by
improving the efficiency of its energy usage.
Environmental Challenges:
In a liberalised and globalized economy, managers face the following environmental
challenges:
1. Economic environment:
In the market economy of today, consumers are free to buy what they want and producers are
free to produce what they want. Keeping in view the economic system of a country, demands and
preferences of consumers and system of ownership (private or public), managers decide whether
or not they should carry business operations in the host country.
Natural resources (oil, iron, coal, natural gas, uranium etc.) of a country also attract foreign
enterprises to manufacture goods for domestic and international markets. Infrastructural facilities
like roads, schools, hospitals, transport, communication system etc. affect economic
development of a country. An economically developed country attracts and supports foreign
enterprises.
2. Political/Legal environment:
The political environment of a country also offers challenges to international managers. Great
advantages accrue to foreign enterprises in a politically stable economy: one where the rules and
regulations affecting business operations do not change frequently; where incentives (reduced
interest rates on loans, tax subsidies, market protection etc.) attract foreign business; and where
international economic communities agree to reduce the barriers on international trade (in the
form of tariffs, quotas, export restraint agreements and “buy national” laws) erected to protect
domestic business (the more the trade barriers, the less the international business).
3. Cultural environment:
International business is also affected by cultural environment of a country. The value systems,
social and ethical beliefs, understanding and interpretation of symbols and language of different
cultures have a direct impact on business practices. The international manager must, therefore,
become accustomed to the cultural environment of the economy to establish his business there.
Functional Challenges:
The following challenges are faced by managers in carrying out these functions:
1. Planning:
Planning becomes complex in a changing business environment. At the domestic level, local
market conditions, technological factors, the growth of products and markets, the strengths and
weaknesses of domestic and foreign competitors, and a host of other factors make it challenging
to plan business operations effectively.
At the international level, economic, political and legal stability of economy, availability of
technology and finance, ease of formation or strategic alliances, understanding of the
environmental circumstances and many other issues affect planning. Managers who are able and
competent to forecast environmental opportunities will be successful in making optimum plans.
2. Organising:
Managers have to reorient their organising abilities to face the competitive challenges in
domestic and international markets. Line organisations may no longer be suitable forms of
organisation, and managers have to adopt project, matrix or network organisation structures.
They should be competent to manage organisation structures and designs, manage people and
change, both at the national and international levels. Firms operating at the international level
should give authority to managers in each country to manage their businesses rather than
exercising authority from the head office.
3. Staffing:
With increase in knowledge and competence of workers, managers have to be skilled in making
appointments so that right persons are selected for the right jobs and expenditure on training and
development is worthwhile.
4. Directing:
As management is moving towards excellence and managers deal with people from different
social and cultural backgrounds, they must understand how cultural factors affect individuals,
how motivational forces and communication vary across cultures and, thus, interact with work
groups of different cultures.
Even while interacting with people of the same culture, managers adopt democratic styles of
leadership, non-financial motivators and two-way communication to get better responses from
the employees.
5. Controlling:
The basic issues with respect to control are operations management, productivity, quality,
technology and information systems. Managers have to take care of distance, time zones and
cultural differences (in international management) while controlling the business operations.
Control through computers and management information systems is replacing direct controls
through close contacts and supervision.
COMPONENTS OF IT INFRASTRUCTURE
Hardware
Hardware includes servers, datacenters, personal computers, routers, switches, and other
equipment.
The facilities that house, cool, and power a datacenter could also be included as part of the
infrastructure.
Software
Software refers to the applications used by the business, such as web servers, content
management systems, and the OS—like Linux®. The OS is responsible for managing system
resources and hardware, and makes the connections between all of your software and the
physical resources that do the work.
Networking
Switching
A network switch is the device that provides connectivity between network devices on a Local
Area Network (LAN). A switch contains several ports that physically connect to other network
devices, including:
Other switches
Routers
Servers
Early networks used hubs, in which each device “saw” the traffic of all other devices on the
network. Switches allow two devices on the network to talk to each other without having to
forward that traffic to all devices on the network.
Routers
Routers move packets between networks. Routing allows devices separated on different LANs to
talk to each other by determining the next “hop” that will allow the network packet to eventually
get to its destination.
If you have ever manually configured your IP address on a workstation, the default gateway
value that you keyed in was the IP address of your router.
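At its core, the next-hop selection described above is a longest-prefix match against a routing table. The following is a minimal sketch using Python's standard `ipaddress` module; the routing table and addresses are invented for illustration.

```python
import ipaddress

# A toy routing table: destination prefix -> next-hop address.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "10.0.0.1",
    ipaddress.ip_network("10.1.0.0/16"): "10.1.0.1",
    ipaddress.ip_network("0.0.0.0/0"): "192.168.1.1",   # default gateway
}

def next_hop(destination: str) -> str:
    """Pick the matching route with the longest prefix, as real routers do."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("10.1.2.3"))   # 10.1.0.1 (the /16 route beats the /8)
print(next_hop("8.8.8.8"))    # 192.168.1.1 (falls through to the default gateway)
```

The `0.0.0.0/0` entry matches every address, which is exactly why the default gateway is the route of last resort.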
Firewalls
Firewalls are security devices at the edge of the network. The firewall can be thought of as the
guardian or gatekeeper.
A set of rules defines what types of network traffic will be allowed through the firewall and what
will be blocked.
In the simplest version of a firewall, rules can be created which allow a specific port and/or
protocol for traffic from one device (or a group of devices) to a device or group of devices.
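The first-match rule evaluation described above can be sketched as a small rule table. This is an illustrative model only; the addresses, ports, and rules are invented.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    source: str      # source address, or "any"
    port: int        # destination port, 0 means any
    protocol: str    # "tcp", "udp", or "any"
    action: str      # "allow" or "block"

# Rules are checked top to bottom; the last rule blocks everything else.
RULES = [
    Rule("10.0.0.5", 22, "tcp", "allow"),   # SSH only from one admin host
    Rule("any", 443, "tcp", "allow"),       # HTTPS from anywhere
    Rule("any", 0, "any", "block"),         # default deny
]

def check(source: str, port: int, protocol: str) -> str:
    """Return the action of the first rule matching the traffic."""
    for r in RULES:
        if (r.source in ("any", source)
                and r.port in (0, port)
                and r.protocol in ("any", protocol)):
            return r.action
    return "block"

print(check("10.0.0.5", 22, "tcp"))   # allow
print(check("8.8.8.8", 22, "tcp"))    # block
```

Real firewalls add state tracking, logging, and direction (inbound/outbound), but the first-match principle is the same.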
Servers
A network server is simply another computer, but usually larger in terms of resources than what
most people think of. A server allows multiple users to access and share its resources. There are
several types of servers, with the following being among the most common:
A file server provides end users with a centralized location to store files. When configured
correctly, file servers can allow or prevent specific users from accessing files.
A directory server provides a central database of user accounts that can be used by several
computers. This allows centralized management of user accounts which are used to access
server resources.
Web servers use HTTP (Hypertext Transfer Protocol) to provide files to users through a
web browser.
There are also application servers, database servers, print servers, etc.
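To illustrate the web-server role concretely, Python's standard library ships a minimal HTTP file server. The sketch below starts one on a free local port in a background thread and fetches a page from it; the host and port choices are illustrative.

```python
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve files from the current directory over HTTP.  Port 0 asks the
# operating system for any free port.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch "/" from the running server; the handler answers with the file
# (or a generated directory listing) and status 200.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as response:
    print(response.status)   # 200
server.shutdown()
```

Production web servers (Apache, nginx) add caching, TLS, and concurrency, but the request/response exchange is the same HTTP conversation.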
Physical plant
The physical plant is all of the network cabling in your office buildings and server room/data
center. This all-too-often neglected part of your infrastructure is usually the weakest link and
the cause of most system outages when not managed properly. There are two main types of
cabling in the infrastructure:
CAT 5/6/7
Fiber optic
The server room, or data center in large organizations, can be thought of as the central core of
your network. It is the location in which you place all of your servers, and it usually acts as the
center of most networks.
Infrastructure software
Infrastructure software is perhaps the most “gray” of all infrastructure components. However, I
consider server operating systems and directory services (like MS Active Directory) to be part of
the infrastructure. Without multi-user operating systems, the hardware can’t perform its
infrastructure functions.
IT infrastructure management
IT infrastructure management is the coordination of IT resources, systems, platforms, people,
and environments. Here are some of the most common technology infrastructure management
types:
The choices made for each of these different types of decisions have varying effects on a
company’s operating costs, quality, dependability, flexibility, speed/responsiveness and new
product capabilities. For example, most companies that continually adjust their production rates
so as to chase demand tend to have higher production costs and less consistent quality than those
that try to maintain a level rate of production and absorb demand fluctuations through
inventories. If a company wants to be able to respond quickly to small orders for customized
products and to rapid changes in customers’ requirements, it probably should configure itself so
that it has excess capacity, its facilities are tightly coordinated (or individual facilities are
focused on supplying the needs of specific customers), its processing equipment and people are
organized more like those of a job shop than a continuous flow line, and it has cultivated
suppliers who are able to react quickly to changing requirements.
If on the other hand, it wants to be able to offer low cost and the latest technology it probably
should concentrate the production of those items that require large amounts of capital investment
and technological expertise into a small number of facilities, possibly located near engineering
universities or other technical centres and seek out suppliers who are able to match its needs.
Some researchers have found evidence that the structural and infrastructural decisions made by
many companies tend to exhibit consistent patterns that allow them to be placed into one of just a
few categories, which can be characterized by such competitive strategies as caretakers versus
innovators.
Just as the trade-offs made by the designers of an engineered product must be consistent with its
intended use, so must these structural and infrastructural decisions mesh together to create a
desired set of specific capabilities. Operations has to be able to do things that are considered
critical to the company’s success without wasting resources on lower-priority pursuits;
otherwise some of the things that are really important will not get done. Achieving this kind of
consistency or fit between strategy, structure and infrastructure in an organization, however, is
much more difficult and complex than when designing a product. Whereas the decisions and
trade-offs involved in product design are usually made within a relatively short period of time by
a group who work closely together and are often located near one another, structural and
infrastructural decisions are usually made at different points in time by different groups of people
who often are physically separated and may seldom interact in the normal course of business.
Only infrequently will a company make a basic change in any one of these categories (this being
almost the definition of a structural decision) but in any year it probably will make at least one
major decision that falls into one of them. Hence, the company’s competitive priorities and
operations strategy need to be clearly communicated to all these groups, and their structural and
infrastructural decisions monitored for consistency. Otherwise, unintended drifting may occur.
In this digital age, an organization’s agility and productivity depend on more than just
hardworking employees and excellent equipment. Running a smooth operation also requires a
robust, clean and secure network infrastructure. Without the right network infrastructure in place,
you may suffer from poor user experience and security issues that can impact employee
productivity, cost you money and damage your brand.
As such, it’s critical for business executives to understand the importance of network
infrastructure and be aware of the challenges and opportunities it presents. With this knowledge
and the right tools in place, you are taking the first steps to ensuring optimum productivity and
helping your organization maintain peak levels of performance.
Network infrastructure refers to all of the resources of a network that make network or internet
connectivity, management, business operations and communication possible. It comprises
hardware and software, systems and devices, and it enables computing and communication
between users, services, applications, devices and processes. Anything involved in the network,
from servers to wireless routers, comes together to make up a system’s network infrastructure.
Network infrastructure and IT infrastructure are similar. However, while they may at times refer
to the same thing, there can also be subtle differences between the two. Often, IT infrastructure is
seen as the larger, more encompassing term. IT infrastructure (or information technology
infrastructure) defines a collection of information technology elements foundational to an IT
service. Often, information technology infrastructure refers to physical components like
hardware, but it can also encompass some network or software components.
Network infrastructure may be seen as a smaller category within the larger IT infrastructure
definition. A sound network infrastructure supports the success of the broad IT infrastructure. A
company needs both a solid IT infrastructure and network infrastructure to have cohesive
solutions and sustained success.
It’s important to have a reliable IT infrastructure and also qualified people, but neither of those
are sufficient without a well-built network infrastructure. A network infrastructure enables
connection and communication, each of which are critical to the success of a business. Simply
put, without a sound network infrastructure, IT components including hardware and software
aren’t much use. Making sure your network infrastructure is robust, secure and clean is critical to
organizational excellence.
There are a number of challenges involved in running a network infrastructure. Three of the
biggest are:
1. Centralizing traffic
2. Dealing with duplicate data
3. Sending the right data to the right tool
1. Centralizing Traffic
Within an organization, there are often multiple different subnets and locations or sites. Without
a centralized hub, network visibility, monitoring and management can become near impossible.
Many companies use network infrastructure solutions to centralize traffic to better understand
and monitor the data traversing their networks. This enhances their security posture and helps
network operations teams address performance issues.
2. Dealing with Duplicate Data
In some cases, duplicate data can comprise 50 – 66 percent of network traffic. Removing
duplicate data is critical, particularly when it comes to the effectiveness of network security
solutions. If the security solutions get too much duplicate data, they may be slowed down and
less effective in detecting threats.
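Deduplication typically amounts to hashing each payload and forwarding only first occurrences to the downstream security tools. A toy sketch (the packet payloads are invented):

```python
import hashlib

packets = [b"GET /index.html", b"GET /index.html", b"POST /login",
           b"GET /index.html", b"POST /login"]

# Forward each payload only once: a hash of the payload identifies
# duplicates cheaply without storing the payloads themselves.
seen = set()
deduplicated = []
for payload in packets:
    digest = hashlib.sha256(payload).hexdigest()
    if digest not in seen:
        seen.add(digest)
        deduplicated.append(payload)

print(len(packets), "->", len(deduplicated))   # 5 -> 2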
3. Sending the Right Data to the Right Tool
Many organizations use a number of different cyber security tools and providers. Security
providers often charge based on how much data they need to process. As such, sending the right
type of data to the right tool is a critical aspect of infrastructure networking. Sending data from
many different sources all to the same tool might be ineffective and costly, particularly if one
tool is best suited to one type of data and another tool to another type.
QUES 1:
Cloud computing is the delivery of different services through the Internet. These resources
include tools and applications like data storage, servers, databases, networking, and software.
Rather than keeping files on a proprietary hard drive or local storage device, cloud-based
storage makes it possible to save them to a remote database. As long as an electronic device has
access to the web, it has access to the data and the software programs to run it.
Cloud computing is a popular option for people and businesses for a number of reasons including
cost savings, increased productivity, speed and efficiency, performance, and security.
Understanding Cloud Computing
Cloud computing is named as such because the information being accessed is found remotely in
the cloud or a virtual space. Companies that provide cloud services enable users to store files and
applications on remote servers and then access all the data via the Internet. This means the user is
not required to be in a specific place to gain access to it, allowing the user to work remotely.
Cloud computing takes all the heavy lifting involved in crunching and processing data away
from the device you carry around or sit and work at. It also moves all of that work to huge
computer clusters far away in cyberspace. The Internet becomes the cloud, and voilà—your data,
work, and applications are available from any device with which you can connect to the Internet,
anywhere in the world.
Cloud computing can be both public and private. Public cloud services provide their services
over the Internet for a fee. Private cloud services, on the other hand, only provide services to a
certain number of people. These services are a system of networks that supply hosted services.
There is also a hybrid option, which combines elements of both the public and private services.
Cloud computing is the delivery of different services through the Internet, including data
storage, servers, databases, networking, and software.
Cloud-based storage makes it possible to save files to a remote database and retrieve
them on demand.
Services can be both public and private—public services are provided online for a fee
while private services are hosted on a network to specific clients.
Types of Cloud Services
Regardless of the kind of service, cloud computing services provide users with a series of
functions including:
Email
Storage, backup, and data retrieval
Creating and testing apps
Analyzing data
Audio and video streaming
Delivering software on demand
Cloud computing is still a fairly new service but is being used by a number of different
organizations from big corporations to small businesses, nonprofits to government agencies, and
even individual consumers.
Cloud-based software offers companies from all sectors a number of benefits, including the
ability to use software from any device either via a native app or a browser. As a result, users can
carry their files and settings over to other devices in a completely seamless manner.
Cloud computing is far more than just accessing files on multiple devices. Thanks to cloud
computing services, users can check their email on any computer and even store files using
services such as Dropbox and Google Drive. Cloud computing services also make it possible for
users to back up their music, files, and photos, ensuring those files are immediately available in
the event of a hard drive crash.
It also offers big businesses huge cost-saving potential. Before the cloud became a viable
alternative, companies were required to purchase, construct, and maintain costly information
management technology and infrastructure. Companies can swap costly server centers and IT
departments for fast Internet connections, where employees interact with the cloud online to
complete their tasks.
The cloud structure allows individuals to save storage space on their desktops or laptops. It also
lets users upgrade software more quickly because software companies can offer their products
via the web rather than through more traditional, tangible methods involving discs or flash
drives. For example, Adobe customers can access applications in its Creative Cloud through an
Internet-based subscription. This allows users to download new versions and fixes to their
programs easily.
Security has always been a big concern with the cloud especially when it comes to sensitive
medical records and financial information. While regulations force cloud computing services to
shore up their security and compliance measures, it remains an ongoing issue. Encryption
protects vital information, but if that encryption key is lost, the data disappears.
Servers maintained by cloud computing companies may fall victim to natural disasters, internal
bugs, and power outages, too. The geographical reach of cloud computing cuts both ways: A
blackout in California could paralyze users in New York, and a firm in Texas could lose its data
if something causes its Maine-based provider to crash.
As with any technology, there is a learning curve for both employees and managers. But with
many individuals accessing and manipulating information through a single portal, inadvertent
mistakes can transfer across an entire system.
Over the past few years, IoT has become one of the most important technologies of the 21st
century. Now that we can connect everyday objects—kitchen appliances, cars, thermostats, baby
monitors—to the internet via embedded devices, seamless communication is possible between
people, processes, and things.
By means of low-cost computing, the cloud, big data, analytics, and mobile technologies,
physical things can share and collect data with minimal human intervention. In this
hyperconnected world, digital systems can record, monitor, and adjust each interaction between
connected things. The physical world meets the digital world—and they cooperate.
In a nutshell, the Internet of Things is the concept of connecting any device (so long as it has an
on/off switch) to the Internet and to other connected devices. The IoT is a giant network of
connected things and people – all of which collect and share data about the way they are used
and about the environment around them.
That includes an extraordinary number of objects of all shapes and sizes – from smart
microwaves, which automatically cook your food for the right length of time, to self-driving
cars, whose complex sensors detect objects in their path, to wearable fitness devices that measure
your heart rate and the number of steps you’ve taken that day, then use that information to
suggest exercise plans tailored to you. There are even connected footballs that can track how far
and fast they are thrown and record those statistics via an app for future training purposes.
How does it work?
Devices and objects with built in sensors are connected to an Internet of Things platform, which
integrates data from the different devices and applies analytics to share the most valuable
information with applications built to address specific needs.
These powerful IoT platforms can pinpoint exactly what information is useful and what can
safely be ignored. This information can be used to detect patterns, make recommendations, and
detect possible problems before they occur.
For example, if I own a car manufacturing business, I might want to know which optional
components (leather seats or alloy wheels, for example) are the most popular. Using Internet of
Things technology, I can:
Use sensors to detect which areas in a showroom are the most popular, and where customers
linger longest;
Drill down into the available sales data to identify which components are selling fastest;
Automatically align sales data with supply, so that popular items don’t go out of stock.
The information picked up by connected devices enables me to make smart decisions about
which components to stock up on, based on real-time information, which helps me save time and
money.
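The decision described above, combining showroom sensor data with sales figures to choose what to stock, might be sketched as follows. All figures and the weighting are purely illustrative.

```python
# Hypothetical readings: seconds of customer dwell time reported by
# showroom sensors, and units sold, per optional component.
dwell_seconds = {"leather seats": 5400, "alloy wheels": 3200, "sunroof": 900}
units_sold = {"leather seats": 42, "alloy wheels": 35, "sunroof": 4}

def restock_priority(component: str) -> float:
    # Weight actual sales more heavily than showroom interest.
    return units_sold[component] * 2 + dwell_seconds[component] / 600

# Rank components from most to least urgent to restock.
ranked = sorted(dwell_seconds, key=restock_priority, reverse=True)
print(ranked[0])   # leather seats
```

A real IoT platform would stream the sensor readings continuously and feed the ranking into an automated replenishment system rather than a one-off script.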
With the insight provided by advanced analytics comes the power to make processes more
efficient. Smart objects and systems mean you can automate certain tasks, particularly when
these are repetitive, mundane, time-consuming or even dangerous.
With cloud-based IoT applications, business users can quickly enhance existing processes for
supply chains, customer service, human resources, and financial services. There’s no need to
recreate entire business processes.
The ability of IoT to provide sensor information as well as enable device-to-device
communication is driving a broad set of applications. The following are some of the most
popular applications and what they do.
Internet of Things (IoT) is one of the hottest technologies in the era of digital
transformation, connecting everything to the Internet. It is the core technology behind smart
homes, self-driving cars, smart utility meters, and smart cities. But there are nine main security
challenges for the future of the internet of things (IoT).
The number of IoT devices has been rapidly increasing over the last few years. According to the
analyst firm Gartner, there will be more than 26 billion connected devices around the world by
2020, up from just 6 billion in 2016.
While IoT devices enable effective communication between devices, automate things, save time
and cost, and deliver numerous other benefits, one thing still concerns users: IoT security.
Specific incidents have made IoT devices difficult to trust.
Several smart TVs and cash machines have been hacked, which is negatively impacting the trust
of not only consumers but also enterprises. Having said that, let’s have a deep dive into the most
critical security challenges for the future of the Internet of Things (IoT).
1. Outdated hardware and software.
Since the IoT devices are being used increasingly, the manufacturers of these devices are
focusing on building new ones and not paying enough attention to security.
A majority of these devices don’t get enough updates, while some of them never get a single
one. This means the products are secure at the time of purchase but become vulnerable to
attacks once hackers find bugs or security issues. When these issues are not fixed by releasing
regular updates for hardware and software, the devices remain vulnerable to attack. For
everything connected to the Internet, regular updates are a must-have. Skipping them can lead
to data breaches not only of customers but also of the companies that manufacture the devices.
2. Use of weak and default credentials.
Many IoT companies sell devices to consumers with default credentials, such as an admin
username and password. Hackers need just the username and password to attack the device;
when they know the username, they carry out brute-force attacks to guess the password and
infect the devices.
The Mirai botnet attack is an example that was carried out because the devices were using
default credentials. Consumers should be changing the default credentials as soon as they get the
device, but most of the manufacturers don’t say anything in the instruction guides about making
that change. Not making an update in the instruction guides leaves all of the devices open to
attack.
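One defensive practice implied above is auditing a device inventory for factory credentials. The sketch below is hypothetical: the credential list, device records, and field names are invented for illustration.

```python
# Well-known factory credential pairs (a frequent entry point for
# botnets such as Mirai).
DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

devices = [
    {"name": "lobby-camera",  "username": "admin", "password": "admin"},
    {"name": "smart-meter-1", "username": "ops",   "password": "x7!fQ2#p"},
]

# Flag every device still using a default username/password pair.
at_risk = [d["name"] for d in devices
           if (d["username"], d["password"]) in DEFAULT_CREDENTIALS]
print(at_risk)   # ['lobby-camera']
```

Such an audit could run on each firmware rollout, forcing a password change before a flagged device is allowed back on the network.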
3. Malware and ransomware.
The rapid rise in the development of IoT products will make cyberattack permutations
unpredictable. Cybercriminals have become advanced today — and they lock out the consumers
from using their own device.
For example, suppose an IoT-enabled camera that captures confidential information from a
home or office is hacked. The attackers can encrypt the webcam system and prevent consumers
from accessing any of their information. Since the system contains personal data, they can then
demand a hefty payment for its recovery. When this occurs, it is called ransomware.
4. Predicting and preventing attacks.
Cybercriminals are proactively finding out new techniques for security threats. In such a
scenario, there is a need for not only finding the vulnerabilities and fixing them as they occur but
also learning to predict and prevent new threats.
The challenge of security seems to be a long-term challenge for the security of connected
devices. Modern cloud services make use of threat intelligence for predicting security issues.
Other such techniques include AI-powered monitoring and analytics tools. However, it is
difficult to adapt these techniques to IoT, because connected devices need their data to be
processed instantly.
5. Difficulty detecting whether a device is affected.
Although it is not really possible to guarantee 100% protection from security threats and
breaches, the problem with IoT devices is that most users never find out that their device has
been hacked.
At a large scale of IoT devices, it becomes difficult even for service providers to monitor all
of them, because each IoT device needs apps, services, and protocols for communication. Since
the number of devices is increasing significantly, the number of things to be managed is growing
even faster.
Hence, many devices keep operating without their users knowing that they have been hacked.
6. Data protection and security challenges.
In this interconnected world, protecting data has become really difficult because it is
transferred between multiple devices within seconds. One moment it is stored on a mobile
device, the next minute it is on the web, and then in the cloud.
All this data travels over the internet, which can lead to data leaks, since not all of the
devices through which it is transmitted or received are secure. Once data is leaked, hackers
can sell it on to other companies, violating rights to data privacy and security.
Furthermore, even if the data doesn’t get leaked from the consumer side, the service providers
might not be compliant with regulations and laws. This can also lead to security incidents.
7. Use of autonomous systems for data management.
From a data collection and networking point of view, the amount of data generated by
connected devices will be too high to handle manually.
It will undoubtedly need the use of AI tools and automation. IoT admins and network experts
will have to set new rules so that traffic patterns can be detected easily.
However, use of such tools will be a little risky because even a slightest of mistakes while
configuring can cause an outage. This is critical for large enterprises in healthcare, financial
services, power, and transportation industries.
8. Home security.
Today, more and more homes and offices are becoming smart through IoT connectivity. Big builders and developers are equipping apartments and entire buildings with IoT devices. While home automation is a good thing, not everyone is aware of the best practices that should be followed for IoT security.
If IP addresses are exposed, the consumer's residential address and other contact details can be exposed as well. Attackers or other interested parties can use this information for malicious purposes, which leaves smart homes at potential risk.
9. Security of autonomous vehicles.
Just like homes, self-driving vehicles, and other vehicles that make use of IoT services, are at risk. Skilled hackers can hijack smart vehicles from remote locations; once they gain access, they can control the car, which can be very dangerous for passengers.
Undoubtedly, IoT is a technology that deserves to be called a boon. But because it connects everything to the Internet, those things become vulnerable to security threats. Big companies and cybersecurity researchers are doing their best to make things safe for consumers, but there is still a lot to be done.
• Application errors: computation errors, input errors, and buffer overflows. The idea of risk management is that threats of any kind must be identified, classified, and evaluated to calculate their damage potential. This is easier said than done.
Administrative, Technical, and Physical Controls
Security controls fall into three broad categories: administrative, technical, and physical.
• Administrative controls consist of organizational policies and guidelines that help minimize the
exposure of an organization. They provide a framework by which a business can manage and
inform its people how they should conduct themselves while at the workplace and provide clear
steps employees can take when they’re confronted with a potentially risky situation.
Some examples of administrative controls include the corporate security policy, password policy,
hiring policies, and disciplinary policies that form the basis for the selection and implementation
of logical and physical controls. Administrative controls are of paramount importance because
technical and physical controls are manifestations of the administrative control policies that are
in place.
• Technical controls use software and hardware resources to control access to information and
computing systems, to help mitigate the potential for errors and blatant security policy violations.
Examples of technical controls include passwords, network- and host-based firewalls, network
intrusion detection systems, and access control lists and data encryption. Associated with
technical controls is the Principle of Least Privilege, which requires that an individual, program,
or system process is not granted any more access privileges than are necessary to perform the
task.
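The Principle of Least Privilege can be illustrated with a minimal sketch; the roles and permission sets below are hypothetical, not drawn from any particular product.

```python
# Hypothetical illustration of the Principle of Least Privilege:
# each role is granted only the permissions its task requires.
ROLE_PERMISSIONS = {
    "backup_process": {"read"},                      # backups never need to modify data
    "payroll_clerk":  {"read", "write"},             # edits payroll records only
    "administrator":  {"read", "write", "delete"},   # full set, granted sparingly
}

def is_allowed(role: str, action: str) -> bool:
    """Allow an action only if the role was explicitly granted it."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Under this scheme, `is_allowed("backup_process", "write")` is denied even though the same action is permitted for an administrator: no role receives more access than its task requires.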
• Physical controls monitor and protect the physical environment of the workplace and
computing facilities. They also monitor and control access to and from such facilities. Separating
the network and workplace into functional areas is also a physical control. Another important physical control is separation of duties, which ensures that an individual cannot complete a critical task by herself.
Risk Analysis
During risk analysis there are several units that can help measure risk. Before risk can be measured, though, the organization must identify the vulnerabilities and threats against its mission-critical systems in terms of business continuity.
During risk analysis, an organization tries to evaluate the cost for each security control that helps
mitigate the risk. If the control is cost effective relative to the exposure of the organization, then
the control is put in place.
The measure of risk can be determined as a product of threat, vulnerability, and asset values—in
other words:
Risk = Asset × Threat × Vulnerability
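As a rough numerical sketch of this product (assuming each factor is rated on a simple 1-to-5 scale, which is an illustrative choice, not a prescribed standard):

```python
def risk_score(asset_value: float, threat: float, vulnerability: float) -> float:
    """Risk = Asset x Threat x Vulnerability, each rated on an assumed 1-5 scale."""
    return asset_value * threat * vulnerability

# Hypothetical assets rated as (asset value, threat likelihood, vulnerability).
assets = {
    "customer_database": (5, 4, 3),
    "public_website":    (2, 5, 2),
    "test_server":       (1, 2, 4),
}

# Rank assets by computed risk, highest first, to prioritize controls.
ranked = sorted(assets, key=lambda a: risk_score(*assets[a]), reverse=True)
```

Ranking assets this way lets an organization direct its security budget at the controls that reduce the largest products first.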
There are two primary types of risk analysis: quantitative and qualitative.
Quantitative risk analysis attempts to assign meaningful numbers to all elements of the risk
analysis process. It is recommended for large, costly projects that require exact calculations. It is
typically performed to examine the viability of a project’s cost or time objectives.
Quantitative risk analysis provides answers to three questions that cannot be addressed with
deterministic risk and project management methodologies such as traditional cost estimating or
project scheduling:
• What is the probability of meeting the project objective, given all known risks?
• How much could the overrun or delay be, and therefore how much contingency is needed for
the organization’s desired level of certainty?
• Where in the project is the most risk, given the model of the project and the totality of all
identified and quantified risks?
Qualitative risk analysis does not assign numerical values but instead opts for general
categorization by severity levels. Where little or no numerical data is available for a risk
assessment, the qualitative approach is the most appropriate. The qualitative approach does not
require heavy mathematics; instead, it thrives more on the people participating and their
backgrounds. Qualitative analysis enables classification of risk that is determined by people’s
wide experience and knowledge captured within the process. Ultimately, it is not an exact
science, so the process will count on expert opinions for its base assumptions. The assessment
process uses a structured and documented approach and agreed likelihood and consequence
evaluation tables. It is also quite common to calculate risk as a single loss expectancy (SLE) or
annual loss expectancy (ALE) by project or business function.
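The SLE and ALE mentioned here are conventionally computed as SLE = asset value x exposure factor, and ALE = SLE x annualized rate of occurrence (ARO). A minimal sketch with invented figures:

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE: expected loss from a single occurrence (exposure_factor is 0.0-1.0)."""
    return asset_value * exposure_factor

def annual_loss_expectancy(sle: float, aro: float) -> float:
    """ALE: SLE times the annualized rate of occurrence."""
    return sle * aro

# Hypothetical: a $100,000 server, 30% of its value lost per incident,
# with two such incidents expected per year.
sle = single_loss_expectancy(100_000, 0.30)
ale = annual_loss_expectancy(sle, 2)
```

A control costing less than the ALE it eliminates is, by this measure, cost effective to put in place, which is exactly the comparison described in the risk-analysis discussion above.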
To be able to properly classify and restrict data, one first needs to understand how data is
accessed. Data is accessed by a subject, whether that is a person, process, or another application,
and what is accessed to retrieve the data is called an object. Think of an object as a cookie jar
with valuable information in it, and only select subjects have the permissions necessary to dip
their hands into the cookie jar and retrieve the data or information that they are looking for.
Data Classification Various data classification models are available for different environments.
Some security models focus on the confidentiality of the data (such as Bell-La Padula) and use
different classifications. For example, the U.S. military uses a model that goes from most
confidential (Top Secret) to least confidential (Unclassified) to classify the data on any given
system. On the other hand, most corporate entities prefer a model whereby they classify data by
business unit (HR, Marketing, R&D, etc.) or use terms such as Company Confidential to define
items that should not be shared with the public. Other security models focus on the integrity of
the data (for example, Biba); yet others are expressed by mapping security policies to data
classification (for example, Clark-Wilson).
Discretionary Access Control (DAC) allows the user or creator of a data object to define who can and who cannot access it; this model has become less popular in recent history. Mandatory Access Control (MAC) is a more militant style of applying permissions, where permissions are the same across the board for all members of a certain level or class within the organization.
• Authentication versus authorization. It’s crucial to understand that simply because someone
becomes authenticated does not mean that she is authorized to view certain data. There needs to
be a means by which a person, after gaining access through authentication, is limited in the
actions she is authorized to perform on certain data (such as read-only permissions).
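The authentication/authorization distinction can be sketched as two separate checks; the user store, object names, and permission table below are hypothetical.

```python
# Hypothetical stores: authentication proves identity,
# authorization then decides what that identity may do to an object.
CREDENTIALS = {"alice": "s3cret"}
PERMISSIONS = {"alice": {"report.txt": {"read"}}}   # read-only on one object

def authenticate(user: str, password: str) -> bool:
    """Step 1: verify the claimed identity."""
    return CREDENTIALS.get(user) == password

def authorize(user: str, obj: str, action: str) -> bool:
    """Step 2: check whether the authenticated subject may act on the object."""
    return action in PERMISSIONS.get(user, {}).get(obj, set())
```

Here `alice` authenticates successfully and may read `report.txt`, yet `authorize("alice", "report.txt", "write")` is still denied: being authenticated never implies being authorized.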
• Protecting data with cryptography is important for the security of both the organization and its
customers. Usually, the most important item that an organization needs to protect, aside from
trade secrets, is its customers’ personal data. If there is a security breach and the data that is
stolen or compromised was previously encrypted, the organization can feel more secure in that
the collateral damage to its reputation and customer base will be minimized.
• Data leakage prevention and content management is an up-and-coming area of data security
that has proven extremely useful in preventing sensitive information from leaving an
organization. With this relatively new technology, a security administrator can define the types
of documents, and further define the content within those documents, that cannot leave the
organization and quarantine them for inspection before they hit the public Internet.
• Securing email systems is one of the most important and overlooked areas of data security.
With access to the mail server, an attacker can snoop through anyone’s email, even the company
CEO’s! Password files, company confidential documents, and contacts for all address books are
only some of the things that a compromised mail server can reveal about an organization, not to
mention root/administrator access to a system in the internal network.
Systems and network security is at the core of information security. Though physical security is
extremely important and a breach could render all your systems and network security safeguards
useless, without hardened systems and networks, anyone from the comfort of her own living
room can take over your network, access your confidential information, and disrupt your
operations at will.
Data classification and security are also quite important, if for nothing else but to be sure that
only those who need to access certain data can and those who do not need access cannot;
however, that usually works well for people who play by the rules. In many cases when an
attacker gains access to a system, the first order of business is escalation of privileges. This
means that the attacker gets in as a regular user and attempts to find ways to gain administrator
or root privileges.
The following are brief descriptions of each of the components that make for a complete security infrastructure for all host systems and network-connected assets.
Host-Based Security
The host system is the core place where data sit and are accessed, so it is therefore also the main target of
many intruders. Regardless of the operating system platform that is selected to run certain
applications and databases, the principles of hardening systems are the same and apply to host
systems as well as network devices, as we will see in the upcoming sections. Steps required to
maintain host systems in as secure a state as possible are as follows:
1. OS hardening. Guidelines by which a base operating system goes through a series of checks to
make sure no unnecessary exposures remain open and that security features are enabled where
possible. There is a series of organizations that publish OS hardening guides for various
platforms of operating systems.
2. Removing unnecessary services. In any operating system there are usually services that are
enabled but have no real business need. It is necessary to go through all the services of your main
corporate image, on both the server side and client side, to determine which services are required
and which would create a potential vulnerability if left enabled.
3. Patch management. All vendors release updates for known vulnerabilities on some kind of
schedule. Part of host-based security is making sure that all required vendor patches, at both the
operating system and the application level, are applied as quickly as business operations allow on
some kind of regular schedule. There should also be an emergency patch procedure in case there
is an outbreak and updates need to be pushed out of sequence.
4. Antivirus. Possibly more important than patches are antivirus definitions, specifically on
desktop and mobile systems. Corporate antivirus software should be installed and updated
frequently on all systems in the organization.
5. Intrusion detection systems (IDSs). Although many seem to think IDSs are a network security
function, there are many good host-based IDS applications, both commercial and open source,
that can significantly increase security and act as an early warning system for possibly malicious
traffic and/or files for which the AV does not have a definition.
6. Firewalls. Host-based firewalls are not as popular as they once were because many big
vendors such as Symantec, McAfee, and Checkpoint have moved to a host-based client
application that houses all security functions in one. There is also another trend in the industry to
move toward application-specific host-based firewalls like those specifically designed to run on a
Web or database server, for example.
7. Data encryption software. One item often overlooked is encryption of data while at rest. Many
solutions have recently come onto the market that offer the ability to encrypt sensitive data such
as credit-card and Social Security numbers that sit on your file server or inside the database
server. This is a huge protection in the case of information theft or data leakage.
8. Backup and restore capabilities. Without the ability to back up and restore both servers and
clients in a timely fashion, an issue that could be resolved in short order can quickly turn into a
disaster. Backup procedures should be in place, and backups should be restored on a regular basis to verify their integrity.
9. System event logging. Event logs are significant when you’re attempting to investigate
the root cause of an issue or incident. In many cases, logging is not turned on by default
and needs to be enabled after the core installation of the host operating system. The OS
hardening guidelines for your organization should require that logging be enabled.
Network-Based Security
The network is the communication highway for everything that happens
between all the host systems. All data at one point or another pass over the wire and are
potentially vulnerable to snooping or spying by the wrong person. The controls implemented on
the network are similar in nature to those that can be applied to host systems; however, network-
based security can be more easily classified into two main categories: detection and prevention.
The first development methodology we are going to review is the systems-development life
cycle (SDLC). This methodology was first developed in the 1960s to manage the large software
projects associated with corporate systems running on mainframes. It is a very structured and
risk-averse methodology designed to manage large projects that included multiple programmers
and systems that would have a large impact on the organization.
SDLC Waterfall
Various definitions of the SDLC methodology exist, but most contain the following phases.
1. Preliminary Analysis. In this phase, a review is done of the request. Is creating a solution
possible? What alternatives exist? What is currently being done about it? Is this project a
good fit for our organization? A key part of this step is a feasibility analysis, which
includes an analysis of the technical feasibility (is it possible to create this?), the
economic feasibility (can we afford to do this?), and the legal feasibility (are we allowed
to do this?). This step is important in determining if the project should even get started.
2. System Analysis. In this phase, one or more system analysts work with different
stakeholder groups to determine the specific requirements for the new system. No
programming is done in this step. Instead, procedures are documented, key players are
interviewed, and data requirements are developed in order to get an overall picture of
exactly what the system is supposed to do. The result of this phase is a system-
requirements document.
3. System Design. In this phase, a designer takes the system-requirements document created
in the previous phase and develops the specific technical details required for the system.
It is in this phase that the business requirements are translated into specific technical
requirements. The design for the user interface, database, data inputs and outputs, and
reporting are developed here. The result of this phase is a system-design document. This
document will have everything a programmer will need to actually create the system.
4. Programming. The code finally gets written in the programming phase. Using the system-
design document as a guide, a programmer (or team of programmers) develops the
program. The result of this phase is an initial working program that meets the
requirements laid out in the system-analysis phase and the design developed in the
system-design phase.
5. Testing. In the testing phase, the software program developed in the previous phase is put
through a series of structured tests. The first is a unit test, which tests individual parts of
the code for errors or bugs. Next is a system test, where the different components of the
system are tested to ensure that they work together properly. Finally, the user-acceptance
test allows those that will be using the software to test the system to ensure that it meets
their standards. Any bugs, errors, or problems found during testing are addressed and
then tested again.
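A unit test of the kind described in the first step might look like the following; the function under test is an invented stand-in for real business logic, written here with Python's standard unittest framework.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Stand-in business function: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    """Unit tests exercise one small piece of code in isolation."""

    def test_normal_discount(self):
        # A 25% discount on 200.0 should yield 150.0.
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        # Bad input should be reported as an error, not silently accepted.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run with: python -m unittest <this_module>
```

System tests and user-acceptance tests then build on units like this, checking that the tested parts also work together and meet the users' expectations.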
6. Implementation. Once the new system is developed and tested, it has to be implemented
in the organization. This phase includes training the users, providing documentation, and
conversion from any previous system to the new system. Implementation can take many
forms, depending on the type of system, the number and type of users, and how urgent it
is that the system become operational. These different forms of implementation are
covered later in the chapter.
7. Maintenance. This final phase takes place once the implementation phase is complete. In
this phase, the system has a structured support process in place: reported bugs are fixed
and requests for new features are evaluated and implemented; system updates and
backups are performed on a regular basis.
The SDLC methodology is sometimes referred to as the waterfall methodology to represent how
each step is a separate part of the process; only when one step is completed can another step
begin. After each step, an organization must decide whether to move to the next step or not. This
methodology has been criticized for being quite rigid. For example, changes to the requirements
are not allowed once the process has begun. No software is available until after the programming
phase.
Again, SDLC was developed for large, structured projects. Projects using SDLC can sometimes
take months or years to complete. Because of its inflexibility and the availability of new
programming techniques and tools, many other software-development methodologies have been
developed. Many of these retain some of the underlying concepts of SDLC but are not as rigid.
Rapid Application Development
As you can see, the RAD methodology is much more compressed than SDLC. Many of the
SDLC steps are combined and the focus is on user participation and iteration. This methodology
is much better suited for smaller projects than SDLC and has the added advantage of giving users
the ability to provide feedback throughout the process. SDLC requires more documentation and
attention to detail and is well suited to large, resource-intensive projects. RAD makes more sense
for smaller projects that are less resource-intensive and need to be developed quickly.
Agile Methodologies
Agile methodologies are a group of methodologies that utilize incremental changes with a
focus on quality and attention to detail. Each increment is released in a specified period of
time (called a time box), creating a regular release schedule with very specific objectives.
While considered a separate methodology from RAD, they share some of the same
principles: iterative development, user interaction, ability to change. The agile
methodologies are based on the “Agile Manifesto,” first released in 2001.
The goal of the agile methodologies is to provide the flexibility of an iterative approach while
ensuring a quality product.
Lean Methodology
One last methodology we will discuss is a relatively new concept taken from the business
bestseller The Lean Startup, by Eric Ries. In this methodology, the focus is on taking an initial
idea and developing a minimum viable product (MVP). The MVP is a working software
application with just enough functionality to demonstrate the idea behind the project. Once the
MVP is developed, it is given to potential users for review. Feedback on the MVP is generated in
two forms: (1) direct observation and discussion with the users, and (2) usage statistics gathered
from the software itself. Using these two forms of feedback, the team determines whether they
should continue in the same direction or rethink the core idea behind the project, change the
functions, and create a new MVP. This change in strategy is called a pivot. Several iterations of
the MVP are developed, with new functions added each time based on the feedback, until a final
product is completed.
The biggest difference between the lean methodology and the other methodologies is that the full set of requirements for the system is not known when the project is launched. As each iteration
of the project is released, the statistics and feedback gathered are used to determine the
requirements. The lean methodology works best in an entrepreneurial environment where a
company is interested in determining if their idea for a software application is worth developing.
Sidebar: The Quality Triangle
When developing software, or any sort of product or service, there exists a tension between the
developers and the different stakeholder groups, such as management, users, and investors. This
tension relates to how quickly the software can be developed (time), how much money will be
spent (cost), and how well it will be built (quality). The quality triangle is a simple concept. It
states that for any product or service being developed, you can only address two of the
following: time, cost, and quality.
A project is a group of tasks that need to be completed to reach a clear result. A project can also be defined as a set of inputs and outputs required to achieve a goal. Projects vary from simple to difficult and can be operated by one person or a hundred.
Projects are usually defined and approved by a project manager or team executive, who sets out their expectations and objectives; it is then up to the team to handle logistics and complete the project on time. For good project development, some teams split the project into specific tasks so they can manage responsibility and utilize team strengths.
Software project management is the art and discipline of planning and supervising software projects. It is a sub-discipline of project management in which software projects are planned, implemented, monitored, and controlled.
There are three needs for software project management. These are:
1. Time
2. Cost
3. Quality
It is an essential part of a software organization to deliver a quality product, keep the cost within the client's budget, and deliver the project on schedule. Various factors, both external and internal, may impact this triple constraint, and any one of the three factors can severely affect the other two.
Project Manager
A project manager is the person who has overall responsibility for the planning, design, execution, monitoring, controlling, and closure of a project. The project manager plays an essential role in the success of a project.
A project manager is responsible for making decisions on both large and small projects, managing risk, and minimizing uncertainty. Every decision the project manager makes must directly benefit the project.
1. Leader
A project manager must lead his team and provide them with direction so they understand what is expected of all of them.
2. Medium:
The project manager is the medium between his clients and his team. He must coordinate and transfer all the appropriate information from the clients to his team and report to senior management.
3. Mentor:
He should be there to guide his team at each step and make sure the team stays cohesive. He provides recommendations to his team and points them in the right direction.
Project Planning
Scope Management
Project Estimation
Project Planning
Software project planning is a task performed before the production of software actually starts. It exists for the sake of software production but involves no concrete activity with any direct connection to software production; rather, it is a set of multiple processes that facilitate software production. Project planning may include the following:
Scope Management
It defines the scope of the project; this includes all the activities and processes that need to be done in order to make a deliverable software product. Scope management is essential because it creates the boundaries of the project by clearly defining what will and will not be done. This keeps the project limited to quantifiable tasks that can easily be documented, which in turn avoids cost and time overruns.
During Project Scope management, it is necessary to -
Effort estimation
Managers estimate effort in terms of personnel requirements and the man-hours required to produce the software. For effort estimation, the software size should be known. Effort can be derived from managers' experience or the organization's historical data, or software size can be converted into effort using standard formulae.
Time estimation
Once size and effort are estimated, the time required to produce the software can be estimated. The required effort is segregated into sub-categories as per the requirement specifications and the interdependency of the various components of the software. Software tasks are divided into smaller tasks, activities, or events using a Work Breakdown Structure (WBS). The tasks are scheduled on a day-to-day basis or in calendar months.
The sum of time required to complete all tasks in hours or days is the total time invested
to complete the project.
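The summation described above can be sketched in a few lines; the WBS tasks and hour figures are invented for illustration, and the eight-hour working day is an assumption.

```python
# Hypothetical WBS leaf tasks with estimated effort in person-hours.
wbs_tasks = {
    "design login screen": 16,
    "implement login API": 24,
    "write unit tests":     8,
    "integration":         12,
}

HOURS_PER_DAY = 8  # assumed length of a working day

# Total time invested is simply the sum over all scheduled tasks.
total_hours = sum(wbs_tasks.values())
total_days = total_hours / HOURS_PER_DAY
```

In practice the day-to-day schedule would also account for task interdependency, so the calendar duration can exceed this simple sum of effort.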
Cost estimation
This might be considered the most difficult estimate of all, because it depends on more elements than any of the previous ones. To estimate project cost, it is required to consider -
o Size of software
o Software quality
o Hardware
o Additional software or tools, licenses etc.
o Skilled personnel with task-specific skills
o Travel involved
o Communication
o Training and support
Decomposition Technique
This technique assumes the software to be a product of various components.
Empirical Estimation Technique
This technique uses empirically derived formulae to make estimations. These formulae are based on LOC or FPs. There are two main models -
Putnam Model
This model was developed by Lawrence H. Putnam and is based on Norden's frequency distribution (the Rayleigh curve). The Putnam model maps the time and effort required to the software size.
COCOMO
COCOMO stands for COnstructive COst MOdel, developed by Barry W. Boehm. It
divides the software product into three categories of software: organic, semi-detached
and embedded.
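The basic COCOMO model estimates effort as E = a x (KLOC)^b person-months, where the coefficients (a, b) depend on the project category. A minimal sketch using the published basic-model coefficients:

```python
# Basic COCOMO coefficients (a, b) for the three project categories.
COCOMO_COEFFS = {
    "organic":       (2.4, 1.05),   # small teams, familiar problems
    "semi-detached": (3.0, 1.12),   # mixed experience, moderate constraints
    "embedded":      (3.6, 1.20),   # tight hardware/software constraints
}

def cocomo_effort(kloc: float, category: str) -> float:
    """Estimated effort in person-months: E = a * KLOC^b."""
    a, b = COCOMO_COEFFS[category]
    return a * kloc ** b

# e.g. a 10 KLOC organic project comes out at roughly 27 person-months.
effort = cocomo_effort(10, "organic")
```

Note how the exponent b grows with project category: for the same size in KLOC, an embedded project is estimated to cost considerably more effort than an organic one.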
Project Scheduling
Project scheduling refers to the roadmap of all activities, to be done in a specified order and within the time slot allotted to each activity. Project managers define the various tasks and project milestones and arrange them keeping various factors in mind. They look for tasks that lie on the critical path in the schedule, which must be completed in a specific manner (because of task interdependency) and strictly within the time allocated. Tasks that lie outside the critical path are less likely to impact the overall schedule of the project.
For scheduling a project, it is necessary to -
Risk Management
Risks in a software project may arise from many sources, both internal and external, for example:
Experienced staff leaving the project and new staff coming in.
Change in organizational management.
Requirement changes or misinterpreted requirements.
Under-estimation of required time and resources.
Technological changes, environmental changes, business competition.
Risk Management Process
There are following activities involved in risk management process:
Identification - Make note of all possible risks, which may occur in the project.
Categorize - Categorize known risks into high, medium and low risk intensity as per
their possible impact on the project.
Manage - Analyze the probability of occurrence of risks at various phases. Make plan to
avoid or face risks. Attempt to minimize their side-effects.
Monitor - Closely monitor the potential risks and their early symptoms. Also monitor the
effects of steps taken to mitigate or avoid them.
Project Execution & Monitoring
In this phase, the tasks described in the project plans are executed according to their schedules. Execution needs monitoring in order to check whether everything is going according to plan. Monitoring means observing, in order to check the probability of risks, take measures to address them, and report the status of various tasks.
These measures include -
Activity Monitoring - All activities scheduled within some task can be monitored on
day-to-day basis. When all activities in a task are completed, it is considered as complete.
Status Reports - The reports contain status of activities and tasks completed within a
given time frame, generally a week. Status can be marked as finished, pending or work-
in-progress etc.
Milestones Checklist - Every project is divided into multiple phases where major tasks
are performed (milestones) based on the phases of SDLC. This milestone checklist is
prepared once every few weeks and reports the status of milestones.
Project Communication Management
Effective communication plays a vital role in the success of a project. It bridges the gaps between the client and the organization, among the team members, and with the other stakeholders in the project, such as hardware suppliers.
Communication can be oral or written. Communication management process may have the
following steps:
Planning - This step includes the identifications of all the stakeholders in the project and
the mode of communication among them. It also considers if any additional
communication facilities are required.
Sharing - After determining the various aspects of planning, the manager focuses on sharing the correct information with the correct person at the correct time. This keeps everyone involved in the project up to date on project progress and status.
Feedback - Project managers use various measures and feedback mechanism and create
status and performance reports. This mechanism ensures that input from various
stakeholders is coming to the project manager as their feedback.
Closure - At the end of each major event, the end of a phase of the SDLC, or the end of the project itself, administrative closure is formally announced to update every stakeholder by sending email, by distributing a hardcopy of a document, or by other means of effective communication.
A computer system organizes data in a hierarchy that starts with bits and bytes and progresses to
fields, records, files, and databases. A bit represents the smallest unit of data a computer can
handle. A group of bits, called a byte, represents a single character, which can be a letter, a
number, or another symbol. A grouping of characters into a word, a group of words, or a
complete number (such as a person's name or age) is called a field. A group of related fields,
such as the student's name, the course taken, the date, and the grade, comprises a record; a group
of records of the same type is called a file. For instance, the student records in Figure could
constitute a course file. A group of related files makes up a database.
A computer system organizes data in a hierarchy that starts with the bit, which represents either a
0 or a 1. Bits can be grouped to form a byte to represent one character, number, or symbol. Bytes
can be grouped to form a field, and related fields can be grouped to form a record. Related
records can be collected to form a file, and related files can be organized into a database.
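The hierarchy above can be sketched in Python. The student records and file names here are illustrative, not taken from any specific system:

```python
# A sketch of the data hierarchy described above, using Python built-ins.

# A byte: 8 bits representing one character.
char = "A"
bits = format(ord(char), "08b")   # the 8 bits behind the byte -> "01000001"

# A field: a group of characters forming one meaningful value.
name_field = "Jones"

# A record: a group of related fields.
record = {"name": "Jones", "course": "IS 101", "date": "2023-05-01", "grade": "A"}

# A file: a group of records of the same type (here, a course file).
course_file = [
    record,
    {"name": "Smith", "course": "IS 101", "date": "2023-05-01", "grade": "B"},
]

# A database: a group of related files.
database = {"COURSE": course_file, "FINANCIAL": [], "HISTORY": []}

print(bits)                      # 01000001
print(len(database["COURSE"]))   # 2
```

The dictionaries here only model the concept; a real file system or DBMS stores these levels as bytes on physical media.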
Database Management Systems
A database management system (DBMS) is simply the software
that permits an organization to centralize data, manage them efficiently, and provide access to
the stored data by application programs. The DBMS acts as an interface between application
programs and the physical data files. When the application program calls for a data item such as
gross pay, the DBMS finds this item in the database and presents it to the application program.
Using traditional data files the programmer would have to specify the size and format of each
data element used in the program and then tell the computer where they were located. A DBMS
eliminates most of the data definition statements found in traditional programs. The DBMS
relieves the programmer or end user from the task of understanding where and how the data are
actually stored by separating the logical and physical views of the data. The logical view presents
data as they would be perceived by end users or business specialists, whereas the physical view
shows how data are actually organized and structured on physical storage media. There is only
one physical view of the data, but there can be many different logical views. The database
management software makes the physical database available for different logical views presented
for various application programs.
A database management system has three components:
1) A data definition language
2) A data manipulation language
3) A data dictionary
The data definition language is the formal language programmers use to specify the content and
structure of the database. It defines each data element as it appears in the database before that
data element is translated into the forms required by application programs. Most DBMSs have a
specialized language called a data manipulation language that is used in conjunction with some conventional
third- or fourth-generation programming languages to manipulate the data in the database. This
language contains commands that permit end users and programming specialists to extract data
from the database to satisfy information requests and develop applications. The most prominent
data manipulation language today is Structured Query Language, or SQL. End users and
information systems specialists can use SQL as an interactive query language to access data from
databases, and SQL commands can be embedded in application programs written in conventional
programming languages. The third element of a DBMS is a data dictionary. This is an automated
or manual file that stores definitions of data elements and data characteristics such as usage,
physical representation, ownership (who in the organization is responsible for maintaining the
data), authorization, and security.
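The first two components can be illustrated with Python's built-in sqlite3 module: the CREATE TABLE statement plays the role of the data definition language, and SQL INSERT/SELECT commands are the data manipulation language. The student table and its contents are hypothetical:

```python
import sqlite3

# Illustrative sketch using an in-memory SQLite database.
conn = sqlite3.connect(":memory:")

# Data definition language (DDL): specify the content and structure of the database.
conn.execute("""
    CREATE TABLE student (
        name   TEXT,
        course TEXT,
        grade  TEXT
    )
""")

# Data manipulation language (DML): commands to insert and extract data.
conn.execute("INSERT INTO student VALUES ('Jones', 'IS 101', 'A')")
conn.execute("INSERT INTO student VALUES ('Smith', 'IS 101', 'B')")

# An interactive query, as an end user might pose it: which students earned an 'A'?
rows = conn.execute("SELECT name FROM student WHERE grade = 'A'").fetchall()
print(rows)   # [('Jones',)]
```

The same SQL statements could equally be embedded in an application program written in a conventional programming language, which is the second way the text describes SQL being used.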
Many data dictionaries can produce lists and reports of data use, groupings, program locations,
and so on. A data element represents a field. By creating an inventory of data contained in the
database, the data dictionary serves as an important data management tool. For instance, business
users could consult the dictionary to find out exactly what pieces of data are maintained for the
sales or marketing function or even to determine all the information maintained by the entire
enterprise. The dictionary could supply business users with the name, format, and specifications
required to access data for reports. Technical staff could use the dictionary to determine what
data elements and files must be changed if a program is changed.
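A minimal data dictionary can be sketched as an inventory of data elements and their characteristics. All entries below (element names, owners, program names) are invented for the example:

```python
# A minimal, illustrative data dictionary: definitions of data elements plus
# characteristics such as format, ownership, and authorization.
data_dictionary = {
    "GROSS_PAY": {
        "format": "NUMERIC(9,2)",
        "description": "Employee gross pay for the period",
        "owner": "Payroll department",
        "authorized_users": ["payroll", "hr"],
        "programs_using": ["PAY-CALC", "TAX-REPORT"],
    },
    "EMP_NAME": {
        "format": "CHAR(40)",
        "description": "Employee full name",
        "owner": "Human resources",
        "authorized_users": ["payroll", "hr", "benefits"],
        "programs_using": ["PAY-CALC", "DIRECTORY"],
    },
}

# Technical staff: which data elements must be reviewed if PAY-CALC changes?
affected = [name for name, entry in data_dictionary.items()
            if "PAY-CALC" in entry["programs_using"]]
print(sorted(affected))   # ['EMP_NAME', 'GROSS_PAY']
```

The lookup at the end mirrors the use case in the text: technical staff consulting the dictionary to determine what data elements are touched when a program changes.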
In an ideal database environment, the data in the database are defined only once and used for all
applications whose data reside in the database, thereby eliminating data redundancy and
inconsistency. Application programs, which are written using a combination of the data
manipulation language of the DBMS and a conventional programming language, request data
elements from the database. Data elements called for by the application programs are found and
delivered by the DBMS. The programmer does not have to specify in detail how or where the
data are to be found. A DBMS can reduce program-data dependence along with program
development and maintenance costs. Access and availability of information can be increased
because users and programmers can perform ad hoc queries of data in the database. The DBMS
allows the organization to centrally manage data, their use, and security.
Database systems require that the organization recognize the strategic role of information and
begin actively to manage and plan for information as a corporate resource. This means that the
organization must develop a data administration function with the power to define information
requirements for the entire company and with direct access to senior management. The chief
information officer (CIO) or vice president of information becomes the primary advocate in the
organization for database systems. Data administration is responsible for the specific policies and
procedures through which data can be managed as an organizational resource. These
responsibilities include developing information policy, planning for data, overseeing logical
database design and data dictionary development, and monitoring how information system
specialists and end-user groups use data.
The fundamental principle of data administration is that all data are the property of the
organization as a whole. Data cannot belong exclusively to any one business area or
organizational unit. All data are to be made available to any group that requires them to fulfill its
mission.
An organization needs to formulate an information policy that specifies its rules for sharing,
disseminating, acquiring, standardizing, classifying, and inventorying information throughout the
organization. Information policy lays out specific procedures and accountabilities, specifying
which organizational units share information, where information can be distributed, and who has
responsibility for updating and maintaining the information. Although data administration is a
very important organizational function, it has proved very challenging to implement.
The organizational interests served by the DBMS are much broader than those in the traditional
file environment; therefore, the organization requires enterprise-wide planning for data.
Enterprise analysis, which addresses the information requirements of the entire organization (as
opposed to the requirements of individual applications), is needed to develop databases. The
purpose of enterprise analysis is to identify the key entities, attributes, and relationships that
constitute the organization's data.
Databases require new software and a new staff specially trained in DBMS techniques, as well as
new data management structures. Most corporations develop a database design and management
group within the corporate information system division that is responsible for defining and
organizing the structure and content of the database and maintaining the database. In close
cooperation with users, the design group establishes the physical database, the logical relations
among elements, and the access rules and procedures. The functions it performs are called
database administration. A database serves a wider community of users than traditional systems.
Relational systems with fourth-generation query languages permit employees who are not
computer specialists to access large databases. In addition, users include trained computer
specialists. To optimize access for nonspecialists, resources must be devoted to training end
users.
Database Trends
Organizations are installing powerful data analysis tools and data warehouses to make better use
of the information stored in their databases and are taking advantage of database technology
linked to the World Wide Web. We now explore these developments.
Sometimes managers need to analyze data in ways that traditional database models cannot
represent. For example, a company selling four different products—nuts, bolts, washers, and
screws—in the East, West, and Central regions, might want to know actual sales by product for
each region and might also want to compare them with projected sales. This analysis requires a
multidimensional view of data. To provide this type of information, organizations can use either
a specialized multidimensional database or a tool that creates multidimensional views of data in
relational databases. Multidimensional analysis enables users to view the same data in different
ways using multiple dimensions. Each aspect of information—product, pricing, cost, region, or
time period—represents a different dimension. So a product manager could use a
multidimensional data analysis tool to learn how many washers were sold in the East in June,
how that compares with the previous month and the previous June, and how it compares with the
sales forecast. Another term for multidimensional data analysis is on-line analytical processing
(OLAP).
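The product-by-region example above can be sketched with plain Python. The sales figures are invented, and each aggregation below is one "slice" of the same multidimensional data:

```python
from collections import defaultdict

# Illustrative cube: sales by product and region, actual vs projected.
sales = [
    # (product, region, actual, projected)
    ("nuts",    "East",    50, 60),
    ("nuts",    "West",    40, 45),
    ("washers", "East",    75, 70),
    ("washers", "Central", 30, 35),
]

# Slice 1: total actual sales by product, across all regions.
by_product = defaultdict(int)
for product, region, actual, projected in sales:
    by_product[product] += actual

# Slice 2: the same data viewed along the region dimension -
# actual vs projected sales in the East.
east = {p: (a, proj) for p, r, a, proj in sales if r == "East"}

print(dict(by_product))   # {'nuts': 90, 'washers': 105}
print(east)               # {'nuts': (50, 60), 'washers': (75, 70)}
```

An OLAP tool does the same thing at scale: the user picks the dimensions (product, region, time) and the tool pivots the stored data into that view on demand.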
Decision makers need concise, reliable information about current operations, trends, and
changes. What has been immediately available at most firms is current data only (historical data
were available through special IS reports that took a long time to produce). Data often are
fragmented in separate operational systems, such as sales or payroll, so that different managers
make decisions from incomplete knowledge bases. Users and information system specialists may
have to spend inordinate amounts of time locating and gathering data (Watson and Haley, 1998).
Data warehousing addresses this problem by integrating key operational data from around the
company in a form that is consistent, reliable, and easily available for reporting.
Data Warehouse
A data warehouse is a database that stores current and historical data of potential interest to
managers throughout the company. The data originate in many core operational systems and
external sources, including Web site transactions, each with different data models. They may
include legacy systems, relational or object-oriented DBMS applications, and systems based on
HTML or XML documents. The data from these diverse applications are copied into the data
warehouse database as often as needed—hourly, daily, weekly, monthly. The data are
standardized into a common data model and consolidated so that they can be used across the
enterprise for management analysis and decision making. The data are available for anyone to
access as needed but cannot be altered. Companies can build enterprise-wide data warehouses
where a central data warehouse serves the entire organization, or they can create smaller,
decentralized warehouses called data marts. A data mart is a subset of a data warehouse in which
a summarized or highly focused portion of the organization's data is placed in a separate database
for a specific population of users. For example, a company might develop marketing and sales
data marts to deal with customer information.
A data mart typically focuses on a single subject area or line of business, so it usually can be
constructed more rapidly and at lower cost than an enterprise-wide data warehouse. However,
complexity, costs, and management problems will rise if an organization creates too many data
marts.
Data mining
A data warehouse system provides a range of ad hoc and standardized query tools, analytical
tools, and graphical reporting facilities, including tools for OLAP and data mining. Data mining
uses a variety of techniques to find hidden patterns and relationships in large pools of data and
infer rules from them that can be used to predict future behavior and guide decision making.
Data mining helps companies engage in one-to-one marketing where personalized or
individualized messages can be created based on individual preferences. Data mining is both a
powerful and profitable tool, but it poses challenges to the protection of individual privacy. Data
mining technology can combine information from many diverse sources to create a detailed "data
image" about each of us—our income, our driving habits, our hobbies, our families, and our
political interests.
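One common data-mining technique, association analysis, can be sketched by counting which items appear together in customer transactions. The baskets below are invented for the example:

```python
from itertools import combinations
from collections import Counter

# Invented transaction data: each set is one customer's basket.
transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "milk"},
    {"butter", "milk"},
    {"bread", "butter", "jam"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most frequent pair suggests an inferred rule such as
# "customers who buy bread also tend to buy butter".
top_pair, count = pair_counts.most_common(1)[0]
print(top_pair, count)   # ('bread', 'butter') 3
```

Real data-mining systems apply the same idea to millions of records and many more techniques (clustering, classification, sequence analysis), but the principle is identical: find hidden patterns and turn them into rules that guide decisions.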
Data warehouses not only offer improved information but they also make it easy for decision
makers to obtain it. They even include the ability to model and remodel the data. It has been
estimated that 70 percent of the world's business information resides on mainframe databases,
many of which are for older legacy systems. Many of these legacy systems are critical
production applications that support the company's core business processes. As long as these
systems can efficiently process the necessary volume of transactions to keep the company
running, firms are reluctant to replace them to avoid disrupting critical business functions and
high system replacement costs. Many of these legacy systems use hierarchical DBMS where
information is difficult for users to access. Data warehouses enable decision makers to access
data as often as they need without affecting the performance of the underlying operational
systems. Many organizations are making access to their data warehouses even easier by using
Web technology.
A series of middleware and other software products has been developed to help users gain access
to organizations' legacy data through the Web. For example, a customer with a Web browser
might want to search an on-line retailer's database for pricing information. The user would access
the retailer's Web site over the Internet using Web browser software on his or her client PC. The
user's Web browser software would request data from the organization's database, using HTML
commands to communicate with the Web server. Because many back-end databases cannot
interpret commands written in HTML, the Web server would pass these requests for data to
special software that would translate HTML commands into SQL so that they could be processed
by the DBMS working with the database. In a client/server environment, the DBMS resides on a
special dedicated computer called a database server. The DBMS receives the SQL requests and
provides the required data. The middleware would transfer information from the organization's
internal database back to the Web server for delivery in the form of a Web page to the user.
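The middleware step described above can be sketched as a single function that takes a web request's parameters, issues a parameterized SQL query against the retailer's database, and returns a fragment of a Web page. The product table and request format are hypothetical:

```python
import sqlite3

# A hypothetical retailer database behind the Web server.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE product (name TEXT, price REAL)")
db.execute("INSERT INTO product VALUES ('widget', 9.99), ('gadget', 24.50)")

def handle_price_request(params):
    """Middleware sketch: translate a web request into SQL for the DBMS."""
    # A parameterized query - user input is never interpolated into the SQL text.
    row = db.execute("SELECT price FROM product WHERE name = ?",
                     (params["product"],)).fetchone()
    if row is None:
        return "<p>Product not found</p>"
    # The result travels back to the Web server as (part of) a Web page.
    return f"<p>{params['product']}: ${row[0]:.2f}</p>"

print(handle_price_request({"product": "widget"}))   # <p>widget: $9.99</p>
```

In a production stack the request would arrive over HTTP and the DBMS would run on a dedicated database server, but the translation step (request parameters in, SQL out, page fragment back) is the middleware's job either way.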
Through BPI, companies seek not only to integrate applications and systems internally.
The goal is bigger: they want to relate and exchange information externally, with the tools and
data of partners, suppliers, and customers.
In the same way that an ERP (enterprise resource planning) system generates more transparency,
agility and reliability in the management of a company, BPI acts in a similar way between
companies.
Thus, the needs of all are aligned with the objectives of the business, which brings transparency,
agility and flexibility to companies. The result is more efficiency and innovation.
This is because some of the processes mastered by organizations external to your business are
unknown or little exploited within your organization. And it works the other way around, too.
For example, a company that produces wooden furniture for hotels does not understand anything
about hospitality, much less about forest cultivation and the extraction of timber.
However, if everyone could align their goals, it may be possible to create integrated and
innovative processes, from planting trees to the use of wooden furniture by hotel guests.
With all stakeholders exchanging information, it is possible to create integrated processes that
are much more objective and efficient for the group. Thus, more value is delivered to end
customers using fewer resources.
Imagine that the hotel knows that its customers prefer furniture made with light wood and with
certain colouring and shape. If this information is shared with the logger and the furniture
factory, the whole value chain can prepare for this.
Thus, the integration of processes goes beyond the boundaries of the company and encompasses
each stage of the production process, in different organizations.
“Business Process Integration (BPI) is a crucial technique for supporting interorganizational
business interoperability. BPI enables the automation of business processes and the integration of
systems into several organizations”
4 – Implementation
As the tool also automates the processes, implementation is very agile and can be accompanied
by all the organizations involved.
A request made by one company can trigger a process in the other that, through the same flow of
tasks, triggers the suppliers in a third company, and so on.
Enterprise resource planning (ERP) is a process used by companies to manage and integrate the
important parts of their businesses. Many ERP software applications are important to companies
because they help them implement resource planning by integrating all of the processes needed
to run their companies with a single system. An ERP software system can also integrate
planning, purchasing, inventory, sales, marketing, finance, human resources, and more.
ERP applications also allow the different departments to communicate and share information
more easily with the rest of the company. An ERP system collects information about the activity
and state of different divisions, making this information available to other parts, where it can be used
productively.
ERP applications can help a corporation become more self-aware by linking information about
the production, finance, distribution, and human resources together. Because it connects different
technologies used by each part of a business, an ERP application can eliminate costly duplicate
and incompatible technology. The process often integrates accounts payable, stock control
systems, order-monitoring systems, and customer databases into one system.
ERP offerings have evolved over the years from traditional software models that make use of
physical client servers to cloud-based software that offers remote, web-based access.
Integrating and automating business processes eliminates redundancies, improves accuracy, and
improves productivity. Departments with interconnected processes can now synchronize work to
achieve faster and better outcomes.
Some businesses benefit from enhanced reporting of real-time data from a single source system.
Accurate and complete reporting help companies adequately plan, budget, forecast, and
communicate the state of operations to the organization and interested parties, such as
shareholders.
ERPs allow businesses to quickly access needed information for clients, vendors, and business
partners, contributing to improved customer and employee satisfaction, quicker response rates,
and increased accuracy rates. Associated costs often decrease as the company operates more
efficiently.
Departments are better able to collaborate and share knowledge; a newly synergized workforce
can improve productivity and employee satisfaction as employees are better able to see how each
functional group contributes to the mission and vision of the company. Also, menial, manual
tasks are eliminated, allowing employees to allocate their time to more meaningful work.
The components of an ERP system are dependent on the needs of the organization. However,
there are key features that each ERP should include. An ERP system should be automated—to
reduce errors—and flexible, allowing for modifications as the company changes or grows. More
people are mobile; therefore, the ERP platform should allow users to access it from their mobile
devices. Lastly, an ERP system should provide a means for productivity to be analyzed and
measured. Other tools can be integrated within the system to improve a company's capabilities.
Enterprise resource planning (ERP) manages and integrates business processes through a single
system. With a better line of sight, companies are better able to plan and allocate resources.
Without ERP, companies tend to operate in a siloed approach, with each department operating
its own disconnected system. ERP systems promote the free flow of communication and sharing
of knowledge across an organization, the integration of systems for improved productivity and
efficiencies, and increased synergies across teams and departments. However, moving to an ERP
system will be counterproductive if the company's culture does not adjust with the change and
the company does not review how the structure of its organization can support it.
An enterprise resources planning (ERP) system can improve business productivity and efficiency
by automating processes and providing a centralized source of data for all teams at your
company. But an ERP implementation can be complex and sometimes challenging, largely
because it affects people and business processes across the entire organization.
An ERP system is a suite of software that supports many business functions, from accounting to
human resources to sales and marketing to engineering. It provides a central database for the
entire organization, connecting multiple groups with a single source of information that everyone
can access. Along with improved productivity and process efficiency, key benefits include real-
time information that allows teams to make better decisions, faster.
An ERP implementation can be complex since it affects business processes across the entire
organization. And to realize the benefits of the new system, people often have to change the way
they work—often replacing longstanding manual processes with more efficient, automated
processes.
One of the biggest ERP implementation challenges is getting users and functional groups to
change their ways in order to work with the new solution. Driving this change requires strong
project management and backing from senior leadership. To develop the new system, the
organization needs a committed project team that represents all users of the ERP platform. This
ensures the software will support the needs and business processes of all departments across the
company.
An ERP implementation involves people as well as technology. Accordingly, it may face people-
related challenges, such as resistance to change, as well as technical obstacles. Common ERP
implementation challenges include:
1. Project management: Strong project and people management, which includes setting realistic expectations,
time frames and milestones, along with timely two-way communication, is critical to
success. As with change management, backing from executives and other top leaders is
essential to conquering this challenge, as well.
2. Project planning: Organizations often underestimate the time and budget necessary for a
successful implementation. One of the most common causes of budget overruns is scope
creep—when a business adds capabilities or features to the system that weren’t part of the
original plan—and another is underestimating staffing needs, according to Statista.
Developing a clear and realistic plan from the start can help to avoid those issues. A
realistic project plan that acknowledges possible speed bumps and minor cost overruns
and addresses them in advance will simplify that decision-making process and keep the
project on track.
3. Data integration: One of the key advantages of ERP is that it provides a single, accurate
source of data for the whole organization. A key step in ERP implementation is data
migration, which typically involves moving data from multiple older systems into the
ERP database. But first, you have to find all of your data. This may be much more
challenging than you expect. The information may be spread far and wide across the
organization, buried in accounting systems, department-specific applications,
spreadsheets and perhaps on paper.
Well-planned data migration can help to keep the entire ERP implementation project on
time and on budget. It’s also an opportunity to winnow out obsolete and redundant data
lurking in the organization’s older systems. In contrast, underprioritizing data migration
can cause issues such as inaccurate or duplicate data and challenges to your go-live date.
4. Data quality: Once the organization has located all data sources, it can start thinking
about migrating it to the ERP system. But that may involve a serious data hygiene
exercise. Because multiple departments interact with the same customers, products and
orders, organizations often have duplicate versions of the same information in their
systems. The information may be stored in different formats; there may be
inconsistencies, like in addresses or name spellings; some information may be inaccurate;
and it may include obsolete information such as customers or suppliers that have since
gone out of business.
Ensuring data quality can become a sizable project on its own, involving validating the
data, cleaning out duplicates and adding missing values before migrating data to the ERP
system. The new data should also be thoroughly tested before going live with the ERP
system. Make sure your team understands the importance of cleaning up data, and assign
clear responsibilities in doing so. For example, the accounting team will handle all
financial data and the customer service group will clean up customer data.
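A sketch of the kind of data-hygiene step described above: normalize formats so that duplicates become detectable, then keep only one copy of each record. The customer records and normalization rules are invented for the example:

```python
# Invented legacy records: the first two describe the same customer
# in different formats.
raw_customers = [
    {"name": "Acme Corp.",  "city": "new york"},
    {"name": "ACME CORP",   "city": "New York"},
    {"name": "Globex Inc.", "city": "Springfield"},
]

def normalize(record):
    """Put name and city into one consistent format for comparison."""
    name = record["name"].upper().rstrip(".").replace(",", "")
    return {"name": name, "city": record["city"].title()}

seen, cleaned = set(), []
for rec in raw_customers:
    norm = normalize(rec)
    key = (norm["name"], norm["city"])
    if key not in seen:          # drop duplicates after normalization
        seen.add(key)
        cleaned.append(norm)

print(cleaned)
# [{'name': 'ACME CORP', 'city': 'New York'},
#  {'name': 'GLOBEX INC', 'city': 'Springfield'}]
```

Real migrations involve far more rules (address standardization, validity checks, filling missing values), but the pattern is the same: normalize, deduplicate, validate, and only then load into the ERP database.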
5. Change management: Resistance to change can be a formidable roadblock; getting buy-in from leadership and
stakeholders across departments very early in the implementation process is crucial to a
successful implementation. Communicate the features and advantages of the new ERP to
all stakeholders throughout the implementation process, especially end users on the front
lines. And make sure all users receive comprehensive training and support to help smooth
their paths to adoption of the system.
6. Cost overruns: ERP projects are infamous for sailing past budgets after the
implementation kicks off. Many organizations underestimate the amount of work
required to move to a new business system, and that results in spending more money than
expected. These cost overruns often show up in a few different areas.
When internal resources run low, businesses frequently use a software vendor’s services
team or third-party consultants more than planned. This is especially true if the solution
requires significant customization to meet your company’s needs. Experienced ERP
consultants, whether provided by the vendor or part of a third-party consultancy, usually
run about $150-175 per hour, plus travel expenses. Another budget breaker is data
migration, which can represent as much as 10-15% of the total project cost, according to
ERP Focus. Training costs are one other expense to consider—ERP vendors often offer
free basic training to customers, but you may need to pay for additional training hours or
classes during or after the implementation.
To avoid blowing up the budget, companies should consider these and other overlooked
expenses, and budget more than they think for them. Coming in under budget is always
preferable to the alternative.
An effective information system provides users with timely, accurate, and relevant information. This
information is stored in computer files. When the files are properly arranged and maintained, users can
easily access and retrieve the information they need. Well-managed, carefully arranged files make it
easy to obtain data for business decisions, whereas poorly managed files lead to chaos in information
processing, high costs, poor performance, and little, if any, flexibility. Despite the use of excellent
hardware and software, many organizations have inefficient information systems because of poor file
management. In this section we describe the traditional methods that organizations have used to
arrange data in computer files. We also discuss the problems with these methods.
_________ is an organized portfolio of formal systems for obtaining, processing, and delivering
information in support of the business operations and management of an organization.
1. MIS
2. DSS
3. MRS
4. None of the above