
RAM

RAM (Random Access Memory) is the hardware in a computing device where the operating system
(OS), application programs and data in current use are kept so they can be quickly reached by the
device's processor. RAM is the main memory in a computer, and it is much faster to read from and
write to than other kinds of storage, such as a hard disk drive (HDD), solid-state drive (SSD) or optical
drive.

Random Access Memory is volatile. That means data is retained in RAM as long as the computer is on,
but it is lost when the computer is turned off. When the computer is rebooted, the OS and other files
are reloaded into RAM, usually from an HDD or SSD.

Because of its volatility, Random Access Memory can't store permanent data. RAM can be compared to
a person's short-term memory, and a hard drive to a person's long-term memory. Short-term memory
is focused on immediate work, but it can only keep a limited number of facts in view at any one time.
When a person's short-term memory fills up, it can be refreshed with facts stored in the brain's long-
term memory.

A computer also works this way. If RAM fills up, the computer's processor must repeatedly go to the
hard disk to overlay the old data in RAM with new data. This process slows the computer's operation.

The term random access as applied to RAM comes from the fact that any storage location, also known
as any memory address, can be accessed directly. Originally, the term Random Access Memory was
used to distinguish regular core memory from offline memory.

Offline memory typically referred to magnetic tape from which a specific piece of data could only be
accessed by locating the address sequentially, starting at the beginning of the tape. RAM is organized
and controlled in a way that enables data to be stored and retrieved directly to and from specific
locations.
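The contrast between direct and sequential access can be sketched in a few lines of Python. This is an analogy, not a hardware model: a list stands in for memory, and the tape-style lookup must walk from the start while the RAM-style lookup jumps straight to any address.

```python
# Illustrative sketch: random access vs. the sequential access of tape.
memory = ["osdata", "appdata", "userdata", "cache", "buffers"]

def random_access(address):
    # RAM-style: reach any address directly, in one step.
    return memory[address]

def sequential_access(address):
    # Tape-style: scan from the beginning until the address is found.
    steps = 0
    for i, _ in enumerate(memory):
        steps += 1
        if i == address:
            break
    return memory[address], steps

print(random_access(3))      # 'cache', reached in a single step
print(sequential_access(3))  # ('cache', 4): four steps to reach address 3
```

The number of steps for sequential access grows with the address, while random access costs the same no matter which location is requested.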
CPU
The central processing unit (CPU) is the unit which performs most of the processing inside a computer.
To control instructions and data flow to and from other parts of the computer, the CPU relies heavily
on a chipset, which is a group of microchips located on the motherboard.
The CPU has two components:

• Control Unit: extracts instructions from memory and decodes and executes them
• Arithmetic Logic Unit (ALU): handles arithmetic and logical operations

To function properly, the CPU relies on the system clock, memory, secondary storage, and data and
address buses.
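The division of labor between the control unit and the ALU can be sketched as a toy fetch-decode-execute loop. The instruction set, registers, and program below are invented for illustration; they are not any real architecture.

```python
# Toy sketch of the fetch-decode-execute cycle. The control unit fetches and
# decodes each instruction; the ADD case stands in for the ALU's work.
memory = [
    ("LOAD", "A", 5),    # put 5 in register A
    ("LOAD", "B", 7),    # put 7 in register B
    ("ADD",  "A", "B"),  # ALU: A = A + B
    ("HALT", None, None),
]
registers = {"A": 0, "B": 0}
pc = 0  # program counter: address of the next instruction

while True:
    op, x, y = memory[pc]  # fetch
    pc += 1
    if op == "LOAD":       # decode and execute
        registers[x] = y
    elif op == "ADD":      # arithmetic handled by the ALU
        registers[x] = registers[x] + registers[y]
    elif op == "HALT":
        break

print(registers["A"])  # 12
```

Real CPUs pipeline these steps and execute many instructions in flight at once, but the fetch-decode-execute pattern is the same.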
This term is also known as a central processor, microprocessor or chip.
The CPU is the heart and brain of a computer. It receives data input, executes instructions, and
processes information. It communicates with input/output (I/O) devices, which send and receive data
to and from the CPU. Additionally, the CPU has an internal bus for communication with the internal
cache memory, called the backside bus. The main bus for data transfer to and from the CPU, memory,
chipset, and AGP socket is called the front-side bus.
The CPU contains internal memory units, which are called registers. These registers contain data,
instructions, counters and addresses used in the ALU's information processing.
Some computers utilize two or more processors. These consist of separate physical CPUs located side
by side on the same board or on separate boards. Each CPU has an independent interface, separate
cache, and individual paths to the system front-side bus. Multiple processors are ideal for intensive
parallel tasks requiring multitasking. Multicore CPUs are also common, in which a single chip contains
multiple CPUs.
IP
IP number. Internet address. Whatever you call it, it's your link to the world.
Don't worry. Most of the billions of computer users don't know what an IP address is either, and to tell you the truth, that's perfectly alright. Because even though it's your passport to the Internet, you never have to think about it.
Here's a "pocket definition" that you can use if someone asks: "It's a network address for your computer, so the Internet knows where to send you emails, data and pictures of cats."
That puts you way ahead of the curve. In fact, 98% of people on computers right now don't know
what an IP address even looks like.
Let me explain.
It always helps to see an IP address example. A typical IPv4 address looks like this: 192.0.2.1 (four numbers between 0 and 255, separated by dots).
Don't get too attached to yours, though. It's not permanent, and you'll find out why in a bit...
But for now, somehow you found your way to this website and page about the "IP address." And
unless you're a "techie," you may not have more than a passing idea what an IP address is or how it
works. ("It has to do with networking or something," is the usual guess.)

Let's clear up this concept for you, just to give you an idea why the misunderstood IP address is very
important to our lives.
Don't worry. We promise not to get too techie on you.
In the end, you'll love your IP address.
The IP address is a fascinating product of modern computer technology designed to allow one
connected computer (or "smart" device) to communicate with another device over the Internet.
IP addresses allow the location of literally billions of digital devices that are connected to the Internet
to be pinpointed and differentiated from other devices.
Because, in the same way you need a mailing address to receive a letter in the mail from a friend, a remote computer needs your IP address to communicate with your computer.
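For the curious, Python's standard ipaddress module makes it easy to poke at an address. 192.0.2.1 is a reserved documentation address, standing in here for whatever address your provider assigns you.

```python
# Inspecting an IPv4 address with Python's standard ipaddress module.
import ipaddress

addr = ipaddress.ip_address("192.0.2.1")  # RFC 5737 documentation address

print(addr.version)  # 4: this is an IPv4 address
print(int(addr))     # 3221225985: the dotted form is really one 32-bit number
```

The dotted-decimal notation is just a human-friendly way of writing a single 32-bit number, which is how routers actually handle it.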
Firewall
In computing, a firewall is software or firmware that enforces a set of rules about what data
packets will be allowed to enter or leave a network. Firewalls are incorporated into a wide variety of
networked devices to filter traffic and lower the risk that malicious packets traveling over the public
internet can impact the security of a private network. Firewalls may also be purchased as stand-alone
software applications.

The term firewall is a metaphor that compares a type of physical barrier that's put in place to limit the
damage a fire can cause, with a virtual barrier that's put in place to limit damage from an external or
internal cyberattack. When located at the perimeter of a network, firewalls provide low-level network
protection, as well as important logging and auditing functions.

While the two main types of firewalls are host-based and network-based, there are many different types that can be found in different places, controlling different activities. A host-based firewall is installed on an individual server and monitors that machine's incoming and outgoing traffic. A network-based firewall filters traffic at the boundary between networks; it can be a hardware appliance, software running on dedicated hardware, a virtual appliance, or a firewall service built into a cloud provider's infrastructure.

Types of firewalls

Other types of firewalls include packet-filtering firewalls, stateful inspection firewalls, proxy
firewalls and next-generation firewalls (NGFWs).

• A packet-filtering firewall examines packets in isolation and does not know the packet's context.

• A stateful inspection firewall examines network traffic to determine whether one packet is related to
another packet.

• A proxy firewall inspects packets at the application layer of the Open Systems Interconnection (OSI)
reference model.

• An NGFW uses a multilayered approach to integrate enterprise firewall capabilities with an intrusion
prevention system (IPS) and application control.
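The simplest of these, packet filtering, can be sketched in a few lines. The rule fields and packet shape below are invented for illustration; real firewalls match on far more (source address, interface, connection state) and operate on raw packet headers.

```python
# Sketch of packet filtering: each packet is judged in isolation against an
# ordered rule list, with a default-deny policy for anything unmatched.
RULES = [
    {"action": "allow", "dst_port": 443, "protocol": "tcp"},  # HTTPS
    {"action": "allow", "dst_port": 53,  "protocol": "udp"},  # DNS
    {"action": "deny",  "dst_port": 23,  "protocol": "tcp"},  # Telnet blocked
]

def filter_packet(packet):
    for rule in RULES:
        if (rule["dst_port"] == packet["dst_port"]
                and rule["protocol"] == packet["protocol"]):
            return rule["action"]
    return "deny"  # default policy: drop anything no rule explicitly allows

print(filter_packet({"dst_port": 443, "protocol": "tcp"}))   # allow
print(filter_packet({"dst_port": 23,  "protocol": "tcp"}))   # deny
print(filter_packet({"dst_port": 8080, "protocol": "tcp"}))  # deny (no rule)
```

Note that each packet is judged with no knowledge of the packets before it; that lack of context is exactly what stateful inspection adds.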
When organizations began moving from mainframe computers and dumb clients to the client-server
model, the ability to control access to the server became a priority. Before the first firewalls emerged
based on work done in the late 1980s, the only real form of network security was enforced
through access control lists (ACLs) residing on routers. ACLs specified which Internet Protocol (IP)
addresses were granted or denied access to the network.

Encryption
In computing, encryption is the method by which plaintext or any other type of data is converted from
a readable form to an encoded version that can only be decoded by another entity if they have access
to a decryption key. Encryption is one of the most important methods for providing data security,
especially for end-to-end protection of data transmitted across networks.

Encryption is widely used on the internet to protect user information being sent between a browser and
a server, including passwords, payment information and other personal information that should be
considered private. Organizations and individuals also commonly use encryption to protect sensitive
data stored on computers, servers and mobile devices like phones or tablets.

How encryption works

Unencrypted data, often referred to as plaintext, is encrypted using an encryption algorithm and an
encryption key. This process generates cipher text that can only be viewed in its original form if
decrypted with the correct key. Decryption is simply the inverse of encryption, following the same steps
but reversing the order in which the keys are applied. Today's most widely used encryption algorithms
fall into two categories: symmetric and asymmetric.
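The symmetric case can be illustrated with a deliberately simple toy cipher: XOR with a repeating key. This is for illustration only and is not secure; real systems use vetted algorithms such as AES. What it does show is the core symmetric idea: the same key turns plaintext into ciphertext and back again.

```python
# Toy symmetric cipher (repeating-key XOR) -- illustration only, NOT secure.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so one function both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret-key"
plaintext = b"attack at dawn"

ciphertext = xor_cipher(plaintext, key)
print(ciphertext != plaintext)      # True: no longer readable as written
print(xor_cipher(ciphertext, key))  # b'attack at dawn': same key decrypts
```

In an asymmetric scheme, by contrast, the encryption key and decryption key are different, and only the private one can undo what the public one did.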

Benefits of encryption

The primary purpose of encryption is to protect the confidentiality of digital data stored on computer
systems or transmitted via the internet or any other computer network. A number of organizations and
standards bodies either recommend or require sensitive data to be encrypted in order to prevent
unauthorized third parties or threat actors from accessing the data. For example, the Payment Card
Industry Data Security Standard requires merchants to encrypt customers' payment card data when it
is both stored at rest and transmitted across public networks.

Modern encryption algorithms also play a vital role in the security assurance of IT systems and
communications as they can provide not only confidentiality, but also the following key elements of
security:

• Authentication: the origin of a message can be verified.

• Integrity: proof that the contents of a message have not been changed since it was sent.

• Nonrepudiation: the sender of a message cannot deny sending the message.

Encryption is used to protect data stored on a system (encryption in place or encryption at rest); many
internet protocols define mechanisms for encrypting data moving from one system to another (data in
transit).
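The authentication and integrity properties listed above can be demonstrated with Python's standard hmac module: a keyed digest proves a message came from someone holding the key and was not altered. The key and messages below are invented for illustration.

```python
# Keyed message authentication with Python's standard hmac module.
import hashlib
import hmac

key = b"shared-secret"
message = b"transfer $100 to alice"

# Sender computes a tag over the message with the shared key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag and compares in constant time.
ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
print(ok)  # True: message is authentic and unmodified

# Any tampering with the message produces a different digest.
tampered = b"transfer $999 to mallory"
bad = hmac.compare_digest(tag, hmac.new(key, tampered, hashlib.sha256).hexdigest())
print(bad)  # False: tampering is detected
```

Nonrepudiation requires asymmetric signatures rather than a shared key, since with HMAC either party could have produced the tag.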
Cloud Computing
Cloud computing is a general term for the delivery of hosted services over the internet.

Cloud computing enables companies to consume a compute resource, such as a virtual machine
(VM), storage or an application, as a utility -- just like electricity -- rather than having to build and
maintain computing infrastructures in house. Whether you are running applications that share photos
to millions of mobile users or you’re supporting the critical operations of your business, a cloud services
platform provides rapid access to flexible and low cost IT resources. With cloud computing, you don’t
need to make large upfront investments in hardware and spend a lot of time on the heavy lifting of
managing that hardware. Instead, you can provision exactly the right type and size of computing
resources you need to power your newest bright idea or operate your IT department. You can access
as many resources as you need, almost instantly, and only pay for what you use.

Cloud computing provides a simple way to access servers, storage, databases and a broad set of
application services over the Internet. A cloud services platform such as Amazon Web Services owns
and maintains the network-connected hardware required for these application services, while you
provision and use what you need via a web application.

Six Advantages and Benefits of Cloud Computing

• Trade capital expense for variable expense
• Benefit from massive economies of scale
• Stop guessing capacity
• Increase speed and agility
• Stop spending money on running and maintaining data centers
• Go global in minutes
Analytics
Analytics is the scientific process of discovering and communicating the meaningful patterns which can
be found in data.

It is concerned with turning raw data into insight for making better decisions. Analytics relies on the application of statistics, computer programming, and operations research to quantify and gain insight into the meaning of data. It is especially useful in areas that record a lot of data or information.

Analytics provides us with meaningful information that may otherwise be hidden from us within large quantities of data. It is something that any leader, manager, or just about anyone can make use of, especially in today's data-driven world. Information has long been considered a great weapon, and analytics is the forge that creates it. Analytics changes everything, not just in the world of business, but also in science, sports, health care and just about any field where vast amounts of data are collected.

Analytics leads us to find the hidden patterns in the world around us, from consumer behaviors, athlete
and team performance, to finding connections between activities and diseases. This can change how
we look at the world, and usually for the better. Sometimes we think that a process is already working
at its best, but sometimes data tells us otherwise, so analytics helps us to improve our world.

In the world of business, organizations usually apply analytics to describe, predict, and then improve the company's business performance.
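The "describe" step can be as simple as summary statistics. Here is a minimal sketch of descriptive analytics using Python's standard statistics module; the monthly sales figures are invented for illustration.

```python
# Minimal descriptive analytics: summarizing raw numbers into insight.
import statistics

monthly_sales = [120, 135, 128, 160, 155, 170]  # invented sample data

print(statistics.mean(monthly_sales))           # average month
print(statistics.median(monthly_sales))         # typical month: 145.0
print(max(monthly_sales) - min(monthly_sales))  # spread best-to-worst: 50
```

Even three numbers like these can surface a pattern (here, sales trending upward with a wide spread) that a raw list of figures hides.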
BIG DATA
Big data is a term that describes the large volume of data – both structured and unstructured – that
inundates a business on a day-to-day basis. But it’s not the amount of data that’s important. It’s what
organizations do with the data that matters. Big data can be analyzed for insights that lead to better
decisions and strategic business moves. While the term “big data” is relatively new, the act of gathering
and storing large amounts of information for eventual analysis is ages old. The concept gained
momentum in the early 2000s when industry analyst Doug Laney articulated the now-mainstream
definition of big data as the three Vs:

Volume. Organizations collect data from a variety of sources, including business transactions, social
media and information from sensor or machine-to-machine data. In the past, storing it would’ve been a
problem – but new technologies (such as Hadoop) have eased the burden.

Velocity. Data streams in at an unprecedented speed and must be dealt with in a timely manner.
RFID tags, sensors and smart metering are driving the need to deal with torrents of data in near-real
time.

Variety. Data comes in all types of formats – from structured, numeric data in traditional databases to
unstructured text documents, email, video, audio, stock ticker data and financial transactions.

At SAS, we consider two additional dimensions when it comes to big data:

Variability. In addition to the increasing velocities and varieties of data, data flows can be highly
inconsistent with periodic peaks. Is something trending in social media? Daily, seasonal and event-
triggered peak data loads can be challenging to manage. Even more so with unstructured data.

Complexity. Today's data comes from multiple sources, which makes it difficult to link, match,
cleanse and transform data across systems. However, it’s necessary to connect and correlate
relationships, hierarchies and multiple data linkages or your data can quickly spiral out of control.
Enterprise Resource Planning

When you search for "ERP" on the web, the sheer amount of information that comes up can be
overwhelming—not to mention a little confusing. Every website seems to have its own definition of
ERP, and one ERP implementation can vary widely from the next. These differences, however,
underscore the flexibility that can make ERP such a powerful business tool.
To get a deeper understanding of how ERP solutions can transform your business, it helps to get a
better sense of what ERP actually is and how it works. Here's a brief introduction to ERP and why it
seems like everyone's talking about it.

ERP is an acronym for Enterprise Resource Planning, but even its full name doesn't shed much light on
what ERP is or what it does. For that, you need to take a step back and think about all of the various
processes that are essential to running a business, including inventory and order management,
accounting, human resources, customer relationship management (CRM), and beyond. At its most
basic level, ERP software integrates these various functions into one complete system to streamline
processes and information across the entire organization.

The central feature of all ERP systems is a shared database that supports multiple functions used by
different business units. In practice, this means that employees in different divisions—for example,
accounting and sales—can rely on the same information for their specific needs.

The term ERP was coined in 1990 by Gartner, but its roots date to the 1960s. Back then, the concept
applied to inventory management and control in the manufacturing sector. Software engineers created
programs to monitor inventory, reconcile balances, and report on status. By the 1970s, this had
evolved into Material Requirements Planning (MRP) systems for scheduling production processes.

In the 1980s, MRP grew to encompass more manufacturing processes, prompting many to call it MRP-
II or Manufacturing Resource Planning. By 1990, these systems had expanded beyond inventory
control and other operational processes to other back-office functions like accounting and human
resources, setting the stage for ERP as we've come to know it.
Antivirus
Antivirus software is a program or set of programs designed to prevent, search for, detect, and remove software viruses and other malicious software like worms, trojans, adware, and more. These tools are critical to have installed and kept up to date, because an unprotected computer can be infected within minutes of connecting to the internet. The bombardment is constant, which means antivirus companies have to update their detection tools regularly to deal with the more than 60,000 new pieces of malware created daily.
Today's malware (an umbrella term that encompasses computer viruses) changes appearance quickly
to avoid detection by older, definition-based antivirus software. Viruses can be programmed to cause
damage to your device, prevent a user from accessing data, or to take control of your computer.
Several different companies build antivirus software, and what each offers can vary, but all perform some essential functions:

• Scan specific files or directories for any malware or known malicious patterns
• Allow you to schedule scans to automatically run for you
• Allow you to initiate a scan of a particular file or your entire computer, or of a CD or flash drive
at any time.
• Remove any malicious code detected. Sometimes you will be notified of an infection and asked if you want to clean the file; other programs do this automatically behind the scenes.
• Show you the ‘health’ of your computer

Always be sure you have the best, up-to-date security software installed to protect your computers, laptops, tablets, and smartphones.
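The classic definition-based scan mentioned below boils down to comparing file contents against known-bad signatures. Here is a sketch using hash matching; the "signature database" is invented, and real products use far richer pattern and behavior analysis than a single hash lookup.

```python
# Sketch of signature-based scanning: hash the contents and look the digest
# up in a set of known-bad signatures. Database contents are invented.
import hashlib

KNOWN_BAD_HASHES = {
    hashlib.sha256(b"fake-malware-payload").hexdigest(),  # pretend signature
}

def scan(contents: bytes) -> str:
    digest = hashlib.sha256(contents).hexdigest()
    return "infected" if digest in KNOWN_BAD_HASHES else "clean"

print(scan(b"fake-malware-payload"))  # infected: digest matches a signature
print(scan(b"holiday-photos.jpg"))    # clean: no matching signature
```

This also shows the weakness the passage describes: change one byte of the malware and the hash no longer matches, which is why modern engines add heuristics and cloud-hosted definitions.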
Many antivirus software programs still download malware definitions straight to your device and scan
your files in search of matches. But since, as we mentioned, most malware regularly morphs in
appearance to avoid detection, Webroot works differently. Instead of storing examples of recognized
malware on your device, it stores malware definitions in the cloud. This allows us to take up less space,
scan faster, and maintain a more robust threat library.
From banking to baby photos, so much of our business and personal data live on our devices. If it were
stored physically, paying for a security solution would be a no-brainer. Unfortunately, we often expect
our online data to remain secure without lifting a finger or spending a cent. Companies claiming to do
it for free are partly responsible for the confusion, to be sure. But consumers should insist on features
like identity theft protection, mobile security, and support options when it comes to their data security,
too—features usually lacking with free solutions.
Processor Speed

Those who work in the information technology field always are looking for the fastest computer they
can get because a faster computer saves time and allows them to multitask. One of the primary factors
looked at in computer speed is the speed of the processor. What is processor speed, and why is it
important?


Definition of Processor Speed

In computer terms, work happens in cycles. A cycle is a single tick of the processor's clock, and each instruction takes one or more cycles to complete. Processor speed is the number of cycles per second at which the central processing unit of a computer operates and is able to process information. Processor speed is measured in megahertz (MHz) or, on modern chips, gigahertz (GHz), and is essential to the ability to run applications. Faster processor speeds are desirable.

Why It Matters

The more cycles that a computer's central processing unit is able to complete per second, the faster data can be processed. The faster data can be processed, the faster the computer can complete a task. This means that a computer with a fast processor can complete more tasks than a computer with a slow processor in the same amount of time, and that more applications can run at the same time. Some applications are processor-intensive, meaning they require a great deal of data to be processed in order to operate.
Processor speed is impacted by several factors. These include circuit size, die size, cache size,
efficiency of the instruction set and manufacturing variables. Smaller chips usually result in faster
processor speeds because the data has less distance to travel, but smaller chips also result in greater
heat generation, which needs to be managed.
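The cycles-per-second relationship above is simple arithmetic: instructions per second is clock rate times instructions completed per cycle. The figures below are illustrative round numbers, not measurements of any real chip.

```python
# Back-of-the-envelope: instructions/second = cycles/second x instructions/cycle.
clock_hz = 3_000_000_000    # a 3 GHz processor: 3 billion cycles per second
instructions_per_cycle = 2  # a core that completes 2 instructions per cycle

instructions_per_second = clock_hz * instructions_per_cycle
print(instructions_per_second)        # 6000000000
print(instructions_per_second / 1e6)  # 6000.0 (in millions per second, MIPS)
```

Instructions per cycle varies by processor design and workload, which is one reason clock speed alone does not determine real-world performance.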

Multiple Processors

Some computers improve the speed of data processing by having multiple processors. This is similar to splitting a workload between two workers instead of one: together, the workers (processors) can handle more in the same amount of time. Some programmers write code that is specifically designed to take advantage of such a processor setup.
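Splitting a workload across workers can be sketched with Python's standard concurrent.futures. For portability this sketch uses threads to stand in for separate processors; the division of labor is the same when the work is spread across real cores or CPUs.

```python
# Sketch of dividing one workload among several workers.
from concurrent.futures import ThreadPoolExecutor

def sum_chunk(chunk):
    # Each worker handles only its slice of the data.
    return sum(chunk)

data = list(range(1_000_000))
chunks = [data[i::4] for i in range(4)]  # four interleaved slices, one per worker

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(sum_chunk, chunks))

print(total == sum(data))  # True: same answer, with the work shared out
```

For CPU-bound Python work one would reach for ProcessPoolExecutor instead, since separate processes can run on separate cores simultaneously.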
Clock speed

In a computer, clock speed refers to the number of pulses per second generated by an oscillator that sets the tempo for the processor. Clock speed is usually measured in MHz (megahertz, or millions of pulses per second) or GHz (gigahertz, or billions of pulses per second). Today's personal computers run at clock speeds of several gigahertz. The clock speed is determined by a quartz-crystal circuit, similar to those used in radio communications equipment.

For decades, computer clock speed roughly doubled every two to three years. The Intel 8088, common in computers around the year 1980, ran at 4.77 MHz. The 1 GHz mark was passed in the year 2000.

Clock speed is one measure of computer "power," but it is not always directly proportional to the
performance level. If you double the speed of the clock, leaving all other hardware unchanged, you will
not necessarily double the processing speed. The type of microprocessor, the bus architecture, and the
nature of the instruction set all make a difference. In some applications, the amount of random access
memory (RAM) is important, too.

Some processors execute only one instruction per clock pulse. More advanced processors can perform
more than one instruction per clock pulse. The latter type of processor will work faster at a given clock
speed than the former type. Similarly, a computer with a 32-bit bus will work faster at a given clock
speed than a computer with a 16-bit bus. For these reasons, there is no simplistic, universal relation
among clock speed, "bus speed," and millions of instructions per second (MIPS).
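Why doubling the clock does not double performance can be shown with simple arithmetic: only the compute-bound portion of a task speeds up, while time spent waiting on memory and buses does not. The 60/40 split below is an invented illustration, not a measurement.

```python
# Why 2x clock != 2x performance: only compute time scales with the clock.
compute_time = 0.6  # seconds spent executing instructions (invented split)
memory_time = 0.4   # seconds spent waiting on RAM and buses

old_total = compute_time + memory_time        # 1.0 s before the upgrade
new_total = compute_time / 2 + memory_time    # clock doubled: compute halves

speedup = old_total / new_total
print(round(speedup, 2))  # 1.43, not 2.0
```

The larger the fraction of time spent outside the processor, the less a faster clock helps, which is the same reasoning behind Amdahl's law.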

Excessive clock speed can be detrimental to the operation of a computer. As the clock speed in a
computer rises without upgrades in any of the other components, a point will be reached beyond which
a further increase in frequency will render the processor unstable. Some computer users deliberately
increase the clock speed, hoping this alone will result in a proportional improvement in performance,
and are disappointed when things don't work out that way.
AI (artificial intelligence)
AI (artificial intelligence) is the simulation of human intelligence processes by machines, especially
computer systems. These processes include learning (the acquisition of information and rules for using
the information), reasoning (using rules to reach approximate or definite conclusions) and self-
correction. Particular applications of AI include expert systems, speech recognition and machine vision.

AI can be categorized in any number of ways, but here are two examples.

The first classifies AI systems as either weak AI or strong AI. Weak AI, also known as narrow AI, is an
AI system that is designed and trained for a particular task. Virtual personal assistants, such as
Apple's Siri, are a form of weak AI.

Strong AI, also known as artificial general intelligence, is an AI system with generalized human
cognitive abilities so that when presented with an unfamiliar task, it has enough intelligence to find a
solution. The Turing Test, developed by mathematician Alan Turing in 1950, is a method used to
determine if a computer can actually think like a human, although the method is controversial.

The second example comes from Arend Hintze, an assistant professor of integrative biology and
computer science and engineering at Michigan State University. He categorizes AI into four types, from
the kind of AI systems that exist today to sentient systems, which do not yet exist. His categories are
as follows:

• Type 1: Reactive machines. An example is Deep Blue, the IBM chess program that beat Garry
Kasparov in the 1990s. Deep Blue can identify pieces on the chess board and make predictions, but
it has no memory and cannot use past experiences to inform future ones. It analyzes possible
moves -- its own and its opponent's -- and chooses the most strategic move. Deep Blue and
Google's AlphaGo were designed for narrow purposes and cannot easily be applied to another
situation.

• Type 2: Limited memory. These AI systems can use past experiences to inform future decisions.
Some of the decision-making functions in self-driving cars are designed this way. Observations
inform actions happening in the not-so-distant future, such as a car changing lanes. These
observations are not stored permanently.

• Type 3: Theory of mind. This psychology term refers to the understanding that others have their
own beliefs, desires and intentions that impact the decisions they make. This kind of AI does not yet
exist.

• Type 4: Self-awareness. In this category, AI systems have a sense of self, have consciousness.
Machines with self-awareness understand their current state and can use the information to infer
what others are feeling. This type of AI does not yet exist.
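The difference between the first two types can be sketched with a toy thermostat: a reactive agent decides from the current reading alone, while a limited-memory agent also consults recent readings. The scenario and thresholds are invented for illustration.

```python
# Toy contrast of Type 1 (reactive) and Type 2 (limited memory) agents.
def reactive_agent(temperature):
    # Type 1: no memory -- reacts only to the present observation.
    return "cool" if temperature > 22 else "heat"

class LimitedMemoryAgent:
    # Type 2: keeps a short window of past readings to smooth its decisions.
    def __init__(self):
        self.history = []

    def act(self, temperature):
        self.history = (self.history + [temperature])[-3:]  # last 3 readings
        average = sum(self.history) / len(self.history)
        return "cool" if average > 22 else "heat"

print(reactive_agent(25))  # cool: 25 is above the threshold right now

agent = LimitedMemoryAgent()
for t in (25, 21, 19):
    action = agent.act(t)
print(action)  # heat: the average of (25, 21, 19) is below the threshold
```

The same pattern appears in the self-driving example above: recent observations (a car drifting into the lane) inform the next action, then age out of memory.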

IoT
The Internet of Things (IoT) is many things to many people. It covers everything from new applications, such as smart cities and autonomous vehicles, to massive sensor networks for monitoring environmental or industrial systems.

It also means a fundamental change in the way mobile network operators build and manage networks
to remain profitable.

The anticipated massive scale, combined with a lower average revenue per connected device, is
bringing about the most significant change yet in network architectures.
