NAME: NARINDER KUMAR (6012/17)

SUBJECT: COMPUTER ARCHITECTURE

SUBMITTED TO: MR. ANIL SAGAR SIR
CONTENTS
 Describe recent data storage trends in digital devices, with their types, working, areas of application and limitations.
 Clearly present the ideas of virtual memory, paging and segmentation, and identify the problems arising in the implementation of virtual memory due to the development of computer software and hardware.
DEFINITION

 Data storage is the collective methods and technologies that capture and retain digital information on electromagnetic, optical or silicon-based storage media.
THE MAIN RECENT DATA STORAGE TRENDS ARE:

 Flash storage adoption will get bigger and faster. Organizations of all sizes will adopt solid-state drives (SSDs) for greater performance, energy savings, space efficiency, and reduced management overhead. New technologies like integrated data protection, storage federation/automation, policy-based provisioning and public cloud integration will be built on top of this flash foundation.
ARTIFICIAL INTELLIGENCE WILL GAIN SIGNIFICANT TRACTION IN THE DATA CENTER

 Vendors who harness the power of big data analytics will continue to differentiate their products and deliver measurable business impact for customers. AI will result in huge opportunities to radically simplify operations and automate complex manual tasks.
PREDICTIVE STORAGE ANALYTICS

 Predictive storage analytics surpasses traditional hierarchical storage management or resource monitoring. The goal is to harness and divert vast amounts of data into operational analytics to guide strategic decision-making. Predictive analytics lets storage and network-monitoring vendors continuously capture millions of data points in the cloud from customer arrays deployed in the field.
HYPER-CONVERGENCE MOVES INTO SECONDARY STORAGE

 Many organizations are putting greater emphasis on secondary storage to optimize primary storage capacity. Secondary storage frees up primary storage while leaving the data more accessible than archive storage. It also lets organizations continue to gain value from older data or data that isn't mission-critical.
MULTI-CLOUD STORAGE

 Multi-cloud storage still has its share of challenges. Moving data in and out of the cloud is more complicated than moving it across on-premises systems, and managing data stored in different clouds requires a new approach.
NON-VOLATILE MEMORY EXPRESS (NVME) OVER FABRICS

 Performance-boosting, latency-lowering non-volatile memory express (NVMe) is already one of the leading technology trends in SSDs that use a host computer's PCI Express bus. NVMe lets you take your flash storage to the next level, exploiting the massive parallelism of SSDs and next-generation storage-class memory (SCM) technologies while doing away with SCSI overheads.
SOFTWARE-DEFINED STORAGE (SDS) WILL BE UBIQUITOUS

 All storage vendors use software-defined technology, and customers are no longer tied to a single hardware vendor. SDS will support your legacy assets while allowing you to take advantage of subscription- or consumption-based storage models. SDS can also help you leverage software enhancements that use analytics output to classify, track, and move data to the appropriate locations within your storage environment.
IOT COMPUTING AND ANALYTICS

 When IoT data is combined with the data collected from other systems, it puts a huge burden on your storage. This data sits at the edge of the network, not the core, and must be stored and acted on at the edge. IoT analytics can help improve your efficiency and provide insights into your customers.
COMPUTING WILL MOVE TO THE STORAGE

 Today, data is stored on premises, in the cloud, or in devices at the network edge. Disparate data locations, combined with the difficulty of finding enough bandwidth to move data to where it is needed in a timely manner, are making it more important to move computing power closer to the data.
VIRTUAL MEMORY

 Virtual memory is a memory management capability of an operating system (OS) that uses hardware and software to allow a computer to compensate for physical memory shortages by temporarily transferring data from random access memory (RAM) to disk storage. Virtual address space is increased using active memory in RAM and inactive memory in hard disk drives (HDDs) to form contiguous addresses that hold both the application and its data.
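
As a rough illustration of the idea, here is a toy Python sketch (the frame count, page contents and the least-recently-used eviction choice are all illustrative assumptions, not how any particular OS works): a small pool of RAM frames is backed by a larger "disk", and a page is moved to disk whenever physical memory runs short.

```python
from collections import OrderedDict

RAM_FRAMES = 4          # pretend physical memory holds only 4 pages
disk = {}               # swapped-out pages live here ("disk storage")
ram = OrderedDict()     # page number -> contents; order tracks recency

def touch(page, data=None):
    """Access a virtual page, swapping in/out as needed."""
    if page in ram:
        ram.move_to_end(page)              # mark as recently used
    else:
        if len(ram) >= RAM_FRAMES:         # physical memory shortage:
            victim, contents = ram.popitem(last=False)
            disk[victim] = contents        # transfer a page to disk
        ram[page] = disk.pop(page, data)   # swap in (or first use)
    return ram[page]

# A program touching 6 pages still "fits" in 4 frames of RAM:
for p in range(6):
    touch(p, data=f"page-{p} contents")
print(sorted(ram), sorted(disk))           # [2, 3, 4, 5] [0, 1]
```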
PAGING

 In computer operating systems, paging is a memory management scheme by which a computer stores and retrieves data from secondary storage for use in main memory. In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.
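
A minimal sketch of the translation step (the page size and page-table contents below are made-up example values): a virtual address splits into a page number and an offset, and a page table maps the page number to a physical frame.

```python
PAGE_SIZE = 4096                    # same-size blocks ("pages")
page_table = {0: 7, 1: 3, 2: 9}    # virtual page -> physical frame

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:     # not resident: the OS would fetch it
        raise LookupError(f"page fault on page {page}")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))      # page 1, offset 0xABC -> 0x3abc
```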
SEGMENTATION

 Segmentation is a memory management scheme that divides a process's address space into variable-length parts, or segments, which correspond to logical units of the program such as its code, data and stack. Each segment is described by a base address and a limit. A logical address consists of a segment number and an offset; the hardware checks the offset against the segment's limit and, if it is valid, adds it to the base to form the physical address.
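
A minimal sketch of that base-and-limit check (the segment names, bases and limits are illustrative assumptions):

```python
# segment -> (base, limit); illustrative values only
segments = {"code": (0x1000, 0x0800), "data": (0x4000, 0x1000)}

def translate(segment, offset):
    base, limit = segments[segment]
    if offset >= limit:              # the bounds check done in hardware
        raise MemoryError(f"segmentation fault in '{segment}'")
    return base + offset

print(hex(translate("data", 0x42)))  # 0x4000 + 0x42 -> 0x4042
```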
PROBLEMS ARISING IN THE IMPLEMENTATION OF VIRTUAL MEMORY DUE TO THE DEVELOPMENT OF COMPUTER SOFTWARE

 TLB management improvements can be divided into optimizing the replacement and placement of TLB entries, and minimizing refill times. The likely improvements in performance that can be achieved by the replacement policy are minor. Hardware designers typically choose random replacement, as the cost involved in implementing other policies in hardware outweighs the performance benefits to be gained.
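
A toy sketch of that random-replacement choice (the TLB size and the access trace are assumptions for illustration only): on a miss, a randomly chosen resident entry is evicted, which is cheap to build in hardware.

```python
import random

TLB_ENTRIES = 4
tlb = {}  # virtual page -> physical frame (a tiny cache of the page table)

def tlb_lookup(page, page_table):
    if page in tlb:
        return tlb[page], True               # TLB hit
    if len(tlb) >= TLB_ENTRIES:              # miss: random replacement
        del tlb[random.choice(list(tlb))]
    tlb[page] = page_table[page]             # refill from the page table
    return tlb[page], False

page_table = {p: p + 100 for p in range(16)}
hits = sum(tlb_lookup(p % 6, page_table)[1] for p in range(1000))
print(f"hit rate: {hits / 1000:.0%}")  # below 100%: 6 pages, 4 entries
```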
PROBLEMS ARISING IN THE IMPLEMENTATION OF VIRTUAL MEMORY DUE TO THE DEVELOPMENT OF COMPUTER HARDWARE

 Currently, much effort is being directed towards increasing TLB coverage, the focus being on increasing the number of TLB entries, increasing the page size, or using multiple page sizes. Increasing the number of TLB entries is an obvious solution, as illustrated by Chen. However, large fully associative structures are difficult to build. Reducing the associativity would allow the number of entries to be increased sufficiently to cover a significant proportion of larger memory sizes.
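
The coverage argument can be made concrete with simple arithmetic (the entry counts and page sizes below are illustrative): TLB coverage is roughly the number of entries times the page size, so either more entries or larger pages grows the fraction of memory the TLB can map without a refill.

```python
MiB = 2 ** 20
for entries, page_size in [(64, 4096), (1024, 4096), (64, 2 * MiB)]:
    coverage = entries * page_size           # bytes mapped without a miss
    print(f"{entries:5d} entries x {page_size:>9} B pages "
          f"-> {coverage / MiB:8.2f} MiB of TLB coverage")
# 64 x 4 KiB pages map only 0.25 MiB; 64 x 2 MiB pages map 128 MiB
```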
THANK YOU......
