
Running Head: BIG DATA

Big Data and Data Quality Problems


[Name of Institute]
[Name of Student]
[Date]

Table of Contents

Introduction
CSV Dataset
JSON Dataset
Data Quality Issues
    Volume
    Velocity
    Variety
    Veracity
    Value
Issues in SQL data
    tempdb PAGELATCH Contention
    Memory Pressure
    Indexes
    ORMs
References
APPENDIX

Big Data and Data Quality Problems

Introduction
The term big data refers to data that are generated worldwide at unprecedented speed and scale. These data can be structured or unstructured. Much of today's business success is owed to the data-driven economy (Helfert, et al., 2016). Data powers the modern organisations of the world, and understanding these data, working with the many patterns they contain, and surfacing otherwise hidden associations in the vast data ocean has become a fundamental and genuinely valuable exercise. The challenge is to turn big data into business intelligence that can be acted on quickly. Better data leads to better decision-making and better planning for organisations regardless of their size, geography, industry segment, customer mix, or other categories (Talha, et al., 2019). Hadoop is one of the platforms of choice for working with extremely large amounts of data.

CSV Dataset
In AI, deep learning and data science, the most commonly used data files are in JSON or CSV format. Here we look at CSV and how it is used to build a dataset (Liu, et al., 2016). CSV stands for comma-separated values: database records are exported to a format in which each record occupies one line and the fields within it are separated by commas. Files with the .csv extension are plain-text documents, so people who do not have the same database application can still exchange database information (Dai, et al., 2016).
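As a minimal sketch of how a CSV file might be loaded for analysis in SQL Server, the following T-SQL uses BULK INSERT; the table, column names and file path are hypothetical and would need to match the actual file.

-- Hypothetical staging table matching the columns of the CSV file
CREATE TABLE dbo.SalesStaging (
    SaleDate   date          NOT NULL,
    ProductId  int           NOT NULL,
    Quantity   int           NOT NULL,
    UnitPrice  decimal(10,2) NOT NULL
);

-- Load the comma-separated file; FIRSTROW = 2 skips the header line
BULK INSERT dbo.SalesStaging
FROM 'C:\data\sales.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);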

JSON Dataset
JSON stands for JavaScript Object Notation. It is a lightweight data-interchange format based on a subset of JavaScript. Its compact size and readable notation make it preferable for many designs, and frameworks such as Spry can consume this data format directly on the web. Because a Spry-based page does not need to care where its data comes from, the way the page and its data references are used does not change when a JSON data set is supplied (Xie, et al., 2017). In addition, a JSON data set behaves much like an XML data set in how it is handled and stored. Because of these similarities, they are not covered again here; the basic data set concepts can be found in the data set overview (Dong, et al., 2018).
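As an illustrative sketch, SQL Server 2016 and later can parse JSON text with OPENJSON; the document and property names below are hypothetical.

DECLARE @json nvarchar(max) = N'[
  { "id": 1, "name": "Alice", "city": "Boston" },
  { "id": 2, "name": "Bob",   "city": "Chicago" }
]';

-- Shred the JSON array into a relational rowset
SELECT id, name, city
FROM OPENJSON(@json)
WITH (
    id   int           '$.id',
    name nvarchar(100) '$.name',
    city nvarchar(100) '$.city'
);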

Data Quality Issues


The data quality problems associated with big data management are best understood in relation to the main characteristics of big data – “Volume, Velocity, Variety, Veracity, and Value” (Emmanuel and Stanier, 2016).

Volume
In a conventional data warehouse, a comprehensive (if not complete) assessment of the data and its details is feasible to a reasonable extent (Lakshen, et al., 2016). With big data, however, the sheer size of the data makes this impractical. Data quality assessment can therefore at best be an estimate (for example, it should be expressed with a probability and a confidence level rather than presented as an exact figure). In addition, most data quality metrics must be redefined around the specific characteristics of big data so that they remain meaningful, can be evaluated (reasonably approximated), and can be used to judge methodologies for improving data quality (Taleb, et al., 2016). Despite the enormous breadth of the underlying data, it is not unusual that some needed data is simply not captured or is unavailable for various reasons (e.g., high cost, retrieval delays, etc.). It may sound ironic, but data accessibility in the era of big data remains an important data quality concern (Dai, et al., 2016).
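A minimal sketch of estimating a quality metric from a sample rather than a full scan is shown below; the table and column names are hypothetical, and TABLESAMPLE returns an approximate fraction of pages, so the result is itself an estimate.

-- Estimate the completeness (non-NULL rate) of a column from roughly a 1 percent sample
SELECT
    COUNT(*)                                                AS sampled_rows,
    100.0 * SUM(CASE WHEN CustomerEmail IS NULL THEN 1 ELSE 0 END)
          / NULLIF(COUNT(*), 0)                             AS estimated_null_pct
FROM dbo.CustomerEvents TABLESAMPLE (1 PERCENT);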

Velocity
The speed at which data arrives, combined with its volume, makes it very difficult to assess data quality within a reasonable schedule and with reasonable resources (storage, compute, labour, etc.). By the time a quality assessment is complete, its results may already be out of date and of little use, especially if the big data project is meant to serve a continuous, ongoing business need. In such cases, the quality metrics should be re-evaluated regularly to keep them as credible and valid as possible. Sampling can help you keep up with the rate at which data quality must be assessed (Talha, et al., 2019). However, this comes at the expense of bias (which can make the end result less useful), since a sample is rarely a perfectly accurate description of all the data; smaller samples give more speed but introduce more bias. Another consequence of velocity is that quality may have to be evaluated continuously, as part of the data selection, movement and storage processes (Dai, et al., 2016), because the time constraints involved often mean the selected data cannot simply be copied elsewhere for a separate quality assessment (Emmanuel and Stanier, 2016).
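A minimal sketch of evaluating quality in-flight, as data is loaded rather than after the fact, might look like the following; the staging, target and quarantine tables are hypothetical.

-- Route rows that pass basic checks to the target and the rest to a quarantine table
INSERT INTO dbo.Orders (OrderId, OrderDate, Amount)
SELECT OrderId, OrderDate, Amount
FROM dbo.OrdersStaging
WHERE OrderId IS NOT NULL AND Amount >= 0;

INSERT INTO dbo.OrdersQuarantine (OrderId, OrderDate, Amount, LoadedAt)
SELECT OrderId, OrderDate, Amount, SYSUTCDATETIME()
FROM dbo.OrdersStaging
WHERE OrderId IS NULL OR Amount < 0;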

Variety
Perhaps the biggest data quality problem with big data is that the data comprises different types (structured, semi-structured and unstructured) coming from different data sources. It is therefore often not meaningful to apply a single quality metric to all the data; quality metrics usually have to be defined independently for each data type (Helfert, et al., 2016). Moreover, assessing and improving the quality of unstructured or semi-structured data is far less well understood and more complex than for structured data. For example, when doctors around the world record clinical notes (say, for a particular diagnosed disease), the meaning may vary even where the language (and syntax) is the same, because of local usage and jargon (Talha, et al., 2019).

This points to interpretability of the data, another dimension of data quality. Data from different sources often differ in their real meaning. For example, “services” can legitimately have different definitions in different business units or external agencies, and fields with identical names need not mean the same thing at all (Lakshen, et al., 2016). The problem is compounded by the lack of adequate and consistent metadata from each source. To interpret data correctly you need reliable metadata (for example, to understand a retail sales transaction you need related data such as the date, time, item purchased, coupon used, etc.) (Dai, et al., 2016). Much of this data often originates outside the company, which makes it extremely difficult to obtain good metadata for it. A second common problem is lack of consistency; for example, “timestamp” values from different sources will conflict unless they are reconciled with time zone data (Dong, et al., 2018).
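As a small sketch of the timestamp problem, SQL Server 2016 and later can reconcile times from sources in different time zones with AT TIME ZONE; the table names and zone choices here are hypothetical.

-- Normalise local timestamps from two hypothetical sources to UTC before comparing them
SELECT
    s1.EventId,
    s1.EventTimeLocal AT TIME ZONE 'Central European Standard Time' AT TIME ZONE 'UTC' AS source1_utc,
    s2.EventTimeLocal AT TIME ZONE 'Eastern Standard Time'          AT TIME ZONE 'UTC' AS source2_utc
FROM dbo.Source1Events AS s1
JOIN dbo.Source2Events AS s2 ON s2.EventId = s1.EventId;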

Veracity
Veracity is one of the most overlooked characteristics of big data, and it relates directly to data quality because it concerns the inherent biases, noise and anomalies in the data. Given the nature of big data, individual values are unlikely to be exact; at best they are approximately right. The data may also contain internal inconsistencies and uncertainties. Beyond inaccuracy, veracity also covers data consistency (characterised by the reliability of the data) and data trustworthiness (relating to how the data were created, collected and processed, security weaknesses, and so on) (Talha, et al., 2019). These quality problems in turn affect data integrity and accountability. While the other characteristics are reasonably well defined and easy to evaluate, veracity is a complex notion that lacks a standard evaluation method; in a way, this reflects the complexity of the “data quality” problem in the big data setting (Xie, et al., 2017).

Data consumers and data providers are frequently different organisations with quite different goals and business processes. It is therefore not surprising that their notions of data quality differ widely. Often the data provider has no idea of the business use cases of the data consumer (and is unlikely to care unless it is paid for the data). This disconnect between data sources and data usage is one of the most important explanations for the quality problems that veracity covers (Emmanuel and Stanier, 2016).
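A minimal sketch of checking one aspect of veracity, records that contradict each other across two hypothetical source tables, could look like this:

-- Flag customers whose declared country differs between two source systems
SELECT crm.CustomerId,
       crm.Country AS crm_country,
       erp.Country AS erp_country
FROM dbo.CrmCustomers AS crm
JOIN dbo.ErpCustomers AS erp ON erp.CustomerId = crm.CustomerId
WHERE crm.Country <> erp.Country;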

Value
The Value characteristic relates to the purpose of the data. Companies collect big data to serve particular business interests, and it is those interests that frame how data quality is defined, assessed and improved. A common and long-standing definition of data quality is “fitness for use” for the data consumer: the quality of the data depends on what you intend to do with it. For the same data, two organisations with different business goals may therefore arrive at substantially different assessments of its quality (Liu, et al., 2016).

This sensitivity to purpose is often overlooked: data quality is a relative concept. A big data set may contain inconsistent and contradictory data, yet these quality problems may not affect its usefulness for the business goal at hand. In such cases the company would judge the data quality to be perfectly good (and would be unwilling to invest resources in improving it) (Talha, et al., 2019). For example, for making mashed potato a batch of small potatoes is of the same quality as a batch of large ones, whereas for a plant that produces French fries the quality of the two batches would differ markedly. The Value perspective also brings in the cost-effectiveness of data quality work: whether it is worth investigating a particular quality problem, which problems should be fixed, and so on (Emmanuel and Stanier, 2016).

Issues in SQL data


Even the best database administrators can be tripped up by careless decisions that look harmless at a superficial level. It pays to review such issues in more detail and understand the underlying cause before acting (Helfert, et al., 2016); this is an essential part of tuning a SQL Server instance. The following are the most common SQL Server implementation issues I see, and they illustrate why it is important never to jump to conclusions and always to apply the basics (Talha, et al., 2019).

tempdb PAGELATCH Contention

This common and unwelcome problem is usually caused by a workload that uses tempdb heavily for some kind of extract, transform and load (ETL) process, and it is especially likely when that ETL process runs continuously in a steady, “always-on” style. The symptoms can vary, but some things are always the same: high PAGELATCH waits in tempdb and poor throughput for the processes that use tempdb (Emmanuel and Stanier, 2016). People usually look at top SQL by waits in Performance Advisor and see many queries that use temporary tables ranked highly in Top SQL. These queries typically run in milliseconds and should never rank among the “top SQL” for the server (Dai, et al., 2016). This can make people believe that these queries are the major problem, but that is not the real situation by any stretch of the imagination: the queries are the victims of the real problem. When I suspect this is the case, I usually go to Disk Activity in Performance Advisor to see how tempdb is configured (Dong, et al., 2018). In most cases I see something very familiar: a busy tempdb with a single data file.
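A minimal sketch of how this might be confirmed and mitigated is shown below; the file path and sizes are hypothetical, and the usual guidance is to use several equally sized tempdb data files.

-- Look for sessions currently waiting on PAGELATCH in tempdb (database_id = 2)
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE N'PAGELATCH%'
  AND resource_description LIKE N'2:%';

-- Mitigation sketch: add another equally sized tempdb data file
ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev2', FILENAME = N'T:\TempDB\tempdev2.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);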

Memory Pressure
There are many caches in SQL Server, but the most notable is the data cache (also known as the buffer pool). The cheapest way to serve data is from memory, as opposed to going out to disk: keeping frequently used data pages in memory is attractive because working with data in memory is typically far faster than performing physical I/O. Memory pressure usually shows up through several different indicators, and taken in isolation some of them can point you in the wrong direction and cost you time (Helfert, et al., 2016). Two of these misleading indicators are that the disk subsystem shows higher latency than normal and that there are unusually long waits associated with disk I/O. If you look at just these two signals, you will probably conclude that the problem lies with the disk subsystem (Dai, et al., 2016).

This is why seeing all the important metrics together on one dashboard matters so much. Looking at the bigger picture, with buffer cache metrics alongside disk activity, helps show what is actually happening. Page Life Expectancy (PLE) will normally be very low on such a server. The larger your buffer pool, the higher your “baseline” PLE should be, and the more data that churns in and out of the buffer pool, the more severe the thrashing becomes (Emmanuel and Stanier, 2016).
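As a small sketch, the current PLE value can be read directly from the buffer manager performance counters:

-- Page Life Expectancy in seconds; persistently low values suggest buffer pool pressure
SELECT [object_name], cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy'
  AND [object_name] LIKE N'%Buffer Manager%';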

Indexes
Indexes are one of the main sources of problems with SQL Server. That is not to say that SQL Server handles indexes badly; SQL Server indexes data very well today. The problem with indexes and SQL Server is how easy it is for users to get indexing wrong. Missing indexes, the wrong indexes, too many indexes, out-of-date statistics, or a lack of index maintenance are mostly down to users who do not fully understand indexing (what we lovingly call the “accidental DBA”) (Xie, et al., 2017). This is also the area where big gains can be made quickly: with a modest amount of routine maintenance, many of these problems simply go away. Bear in mind that end users are not upset because there is a problem with their indexes; they only notice that their queries take too long, and that is when your phone rings. It is up to you to know and understand how indexes work and how to design the right indexes to support the workload (Dong, et al., 2018). The appendix includes example scripts for identifying missing and rarely used indexes.
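A minimal maintenance sketch, assuming a hypothetical dbo.Orders table and index name, might check fragmentation and refresh statistics like this:

-- Find indexes in the current database with heavy fragmentation
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND ips.page_count > 1000;

-- Rebuild one heavily fragmented index and refresh its table's statistics (names are hypothetical)
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;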

ORMs
Object-relational mapping (ORM) tools have been around for a long time; I often describe them as code generators. When used correctly they can work well enough. Unfortunately they are frequently not used properly, and the result is terrible performance. ORMs have been at the scene of the crime so often that it is usually easy to spot when they are the culprit; far from wiping away their fingerprints, ORMs tend to leave fingerprints, hair and blood behind, so identifying them is straightforward (Talha, et al., 2019). There are many blog posts on the Internet about problems with ORM implementations, and between them they catalogue a significant number of ways in which an ORM deployment can go wrong (Xie, et al., 2017).
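One common failure mode, shown here as a hedged sketch rather than anything specific to a particular ORM, is a parameter typed as nvarchar being compared against a varchar column: the implicit conversion on the column side prevents an index seek. The table, column and parameter names are hypothetical.

-- Hypothetical table with a varchar key
CREATE TABLE dbo.Customers (
    CustomerCode varchar(20)   NOT NULL PRIMARY KEY,
    CustomerName nvarchar(100) NOT NULL
);

-- Typical ORM-generated call: the nvarchar parameter forces a conversion on the column and a scan
EXEC sys.sp_executesql
    N'SELECT CustomerName FROM dbo.Customers WHERE CustomerCode = @code',
    N'@code nvarchar(20)',
    @code = N'CUST-0001';

-- Declaring the parameter with the column real type allows an index seek
EXEC sys.sp_executesql
    N'SELECT CustomerName FROM dbo.Customers WHERE CustomerCode = @code',
    N'@code varchar(20)',
    @code = 'CUST-0001';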

References

Dai, W., Wardlaw, I., Cui, Y., Mehdi, K., Li, Y. and Long, J., 2016. Data profiling
technology of data governance regarding big data: review and rethinking.
In Information Technology: New Generations (pp. 439-450). Springer, Cham.
Dong, X., He, H., Li, C., Liu, Y. and Xiong, H., 2018, September. Scene-Based Big
Data Quality Management Framework. In International Conference of
Pioneering Computer Scientists, Engineers and Educators (pp. 122-139).
Springer, Singapore.
Emmanuel, I. and Stanier, C., 2016, November. Defining big data. In Proceedings of
the International Conference on Big Data and Advanced Wireless
Technologies (pp. 1-6).
Helfert, M. and Ge, M., 2016. Big data quality: towards an explanation model
in a smart city context. In proceedings of 21st International Conference on
Information Quality, Ciudad Real, Spain.
Lakshen, G.A., Vraneš, S. and Janev, V., 2016, November. Big data and quality: A
literature review. In 2016 24th telecommunications forum (TELFOR) (pp. 1-4).
IEEE.
Liu, J., Li, J., Li, W. and Wu, J., 2016. Rethinking big data: A review on the data
quality and usage issues. ISPRS journal of photogrammetry and remote
sensing, 115, pp.134-142.
Taleb, I., El Kassabi, H.T., Serhani, M.A., Dssouli, R. and Bouhaddioui, C., 2016,
July. Big data quality: A quality dimensions evaluation. In 2016 Intl IEEE
Conferences on Ubiquitous Intelligence & Computing, Advanced and Trusted
Computing, Scalable Computing and Communications, Cloud and Big Data
Computing, Internet of People, and Smart World Congress
(UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld) (pp. 759-765). IEEE.
Talha, M., Abou El Kalam, A. and Elmarzouqi, N., 2019. Big Data: Trade-off between
Data Quality and Data Security. Procedia Computer Science, 151, pp.916-
922.
Xie, C., Gao, J. and Tao, C., 2017, April. Big data validation case study. In 2017
IEEE third international conference on big data computing service and
applications (BigDataService) (pp. 281-286). IEEE.

APPENDIX

-- Suggested missing indexes, ranked by estimated improvement (sys.dm_db_missing_index_* DMVs)
SELECT
    migs.avg_total_user_cost * (migs.avg_user_impact / 100.0)
        * (migs.user_seeks + migs.user_scans) AS improvement_measure,
    'CREATE INDEX [missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_'
        + CONVERT (varchar, mid.index_handle)
        + '_' + LEFT (PARSENAME(mid.statement, 1), 32) + ']'
        + ' ON ' + mid.statement
        + ' (' + ISNULL (mid.equality_columns, '')
        + CASE WHEN mid.equality_columns IS NOT NULL
                    AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END
        + ISNULL (mid.inequality_columns, '')
        + ')'
        + ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement,
    migs.*, mid.database_id, mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs
    ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid
    ON mig.index_handle = mid.index_handle
WHERE migs.avg_total_user_cost * (migs.avg_user_impact / 100.0)
        * (migs.user_seeks + migs.user_scans) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact
        * (migs.user_seeks + migs.user_scans) DESC

/*============================================================================
  File:     Index - Rarely Used Indexes

  Summary:  Sample stored procedure that lists rarely-used indexes. Because the
            number and type of accesses are tracked in DMVs, this procedure can
            find indexes that are rarely useful. Because the cost of these
            indexes is incurred during maintenance (e.g. insert, update, and
            delete operations), the write costs of rarely-used indexes may
            outweigh the benefits.

            sp_help tblPasswordHistory
            sp_helptext fnt_currency_user
            select top 10 * from tblPasswordHistory

  Date:     2008
  Versions: 2005, 2008, 2012
  ----------------------------------------------------------------------------
  Written by Ben DeBow, SQLHA
  For more scripts and sample code, check out http://www.SQLHA.com

  THIS CODE AND INFORMATION ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND,
  EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED
  WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.
============================================================================*/

/* Create a temporary table to hold our data, since we're going to iterate through databases */
IF OBJECT_ID('tempdb..#Results') IS NOT NULL
    DROP TABLE #Results;



CREATE TABLE #Results (
    [Server Name]      [nvarchar](128) NULL,
    [DB Name]          [nvarchar](128) NULL,
    [source]           [varchar](10)   NOT NULL,
    [objectname]       [nvarchar](128) NULL,
    [object_id]        [int]           NOT NULL,
    [indexname]        [sysname]       NULL,
    [data_compression] [varchar](24)   NOT NULL,
    [index_id]         [int]           NOT NULL,
    [rowcnt]           [bigint]        NULL,
    [datapages]        [bigint]        NULL,
    [is_unique]        [bit]           NULL,
    [count]            [int]           NULL,
    [user_seeks]       [bigint]        NOT NULL,
    [user_scans]       [bigint]        NOT NULL,
    [user_lookups]     [bigint]        NOT NULL,
    [user_updates]     [bigint]        NOT NULL,
    [total_usage]      [bigint]        NOT NULL,
    [%Reads]           [bigint]        NULL,
    [%Writes]          [bigint]        NULL,
    [%Seeks]           [bigint]        NULL,
    [%Scans]           [bigint]        NULL,
    [%Lookups]         [bigint]        NULL,
    [%Updates]         [bigint]        NULL,
    [last_user_scan]   [datetime]      NULL,
    [last_user_seek]   [datetime]      NULL,
    [run_time]         [datetime]      NOT NULL
) ON [PRIMARY]

EXECUTE sys.sp_MSforeachdb

'USE [?];

declare @dbid int

select @dbid = db_id()

INSERT INTO #Results

SELECT @@SERVERNAME AS [Server Name]

, db_name() AS [DB Name]

, ''Usage Data'' ''source''

, objectname=object_name(s.object_id)

, s.object_id, indexname=i.name

, data_compression_desc, i.index_id

, s2.rowcnt, sa.total_pages, is_unique

, (select count(*)

from sys.indexes r

where r.object_id = s.object_id) ''count''

, user_seeks, user_scans, user_lookups, user_updates, user_seeks + user_scans +


user_lookups + user_updates AS [total_usage]

, CAST(CAST(user_seeks + user_scans + user_lookups AS


DEC(12,2))/CAST(REPLACE((user_seeks + user_scans + user_lookups + user_updates), 0, 1) AS
DEC(12,2)) * 100 AS DEC(5,2)) [%Reads]

, CAST(CAST(user_updates AS DEC(12,2))/CAST(REPLACE((user_seeks + user_scans +


user_lookups + user_updates), 0, 1) AS DEC(12,2)) * 100 AS DEC(5,2)) [%Writes]

, CAST(CAST(user_seeks AS DEC(12,2))/CAST(REPLACE((user_seeks + user_scans +


user_lookups + user_updates), 0, 1) AS DEC(12,2)) * 100 AS DEC(5,2)) [%Seeks]

, CAST(CAST(user_scans AS DEC(12,2))/CAST(REPLACE((user_seeks + user_scans +


user_lookups + user_updates), 0, 1) AS DEC(12,2)) * 100 AS DEC(5,2)) [%Scans]

, CAST(CAST(user_lookups AS DEC(12,2))/CAST(REPLACE((user_seeks + user_scans +


user_lookups + user_updates), 0, 1) AS DEC(12,2)) * 100 AS DEC(5,2)) [%Lookups]

, CAST(CAST(user_updates AS DEC(12,2))/CAST(REPLACE((user_seeks + user_scans +


user_lookups + user_updates), 0, 1) AS DEC(12,2)) * 100 AS DEC(5,2)) [%Updates]

, last_user_scan

, last_user_seek

, getdate() run_time

from sys.dm_db_index_usage_stats s

join sys.indexes i on i.object_id = s.object_id

and i.index_id = s.index_id

join sysindexes s2 on i.object_id = s2.id

and i.index_id = s2.indid

join sys.partitions sp on i.object_id = sp.object_id

and i.index_id = sp.index_id

join sys.allocation_units sa on sa.container_id = sp.hobt_id

where objectproperty(s.object_id, ''IsUserTable'') = 1

and database_id = @dbid'

EXECUTE sys.sp_MSforeachdb

'USE [?];

declare @dbid int

select @dbid = db_id()

INSERT INTO #Results

SELECT @@SERVERNAME

, db_name()

, ''NA''

, object_name(i.object_id)

, o.object_id

, i.name

, data_compression_desc

, i.index_id

, s2.rowcnt

, sa.total_pages

, is_unique

, (select count(*)

from sys.indexes r

where r.object_id = i.object_id) ''count''

,0

,0

,0

,0

,0

,0

,0

,0

,0

,0

,0

,0

,0

, getdate()

FROM sys.indexes i

JOIN sys.objects o

ON i.object_id = o.object_id

join sysindexes s2 on i.object_id = s2.id

and i.index_id = s2.indid



join sys.partitions sp on i.object_id = sp.object_id

and i.index_id = sp.index_id

join sys.allocation_units sa on sa.container_id = sp.hobt_id

WHERE OBJECTPROPERTY(o.object_id,''IsUserTable'') = 1

AND i.index_id NOT IN (

SELECT s.index_id

FROM sys.dm_db_index_usage_stats s

WHERE s.object_id = i.object_id

AND i.index_id = s.index_id

AND database_id = @dbid)

--AND i.index_id NOT IN (0,1)'

SELECT *

FROM #Results

WHERE [DB Name] NOT IN ('MASTER', 'msdb', 'MODEL', 'TEMPDB')

DROP TABLE #Results;

/*

declare @dbid int

select @dbid = db_id()

SELECT @@SERVERNAME AS [Server Name]

, db_name() AS [DB Name]

, 'Usage Data' 'source'

, objectname=object_name(s.object_id)

, s.object_id

, indexname=i.name

, data_compression_desc

, i.index_id

, s2.rowcnt

, sa.total_pages

, is_unique

, (select count(*)

from sys.indexes r

where r.object_id = s.object_id) 'count'

, user_seeks

, user_scans

, user_lookups

, user_updates

, user_seeks + user_scans + user_lookups + user_updates AS [total_usage]

, CAST(CAST(user_seeks AS DEC(12,2))/CAST(REPLACE((user_seeks + user_scans +


user_lookups + user_updates), 0, 1) AS DEC(12,2)) * 100 AS DEC(5,2)) [% Seeks]

, CAST(CAST(user_scans AS DEC(12,2))/CAST(REPLACE((user_seeks + user_scans +


user_lookups + user_updates), 0, 1) AS DEC(12,2)) * 100 AS DEC(5,2)) [% Scans]

, CAST(CAST(user_lookups AS DEC(12,2))/CAST(REPLACE((user_seeks + user_scans +


user_lookups + user_updates), 0, 1) AS DEC(12,2)) * 100 AS DEC(5,2)) [% Lookups]

, CAST(CAST(user_updates AS DEC(12,2))/CAST(REPLACE((user_seeks + user_scans +


user_lookups + user_updates), 0, 1) AS DEC(12,2)) * 100 AS DEC(5,2)) [% Updates]

, last_user_scan

, last_user_seek

, getdate() run_time

from sys.dm_db_index_usage_stats s

join sys.indexes i on i.object_id = s.object_id

and i.index_id = s.index_id

join sysindexes s2 on i.object_id = s2.id

and i.index_id = s2.indid



join sys.partitions sp on i.object_id = sp.object_id

and i.index_id = sp.index_id

join sys.allocation_units sa on sa.container_id = sp.hobt_id

where objectproperty(s.object_id, 'IsUserTable') = 1

and database_id = @dbid

--and 'etblHistory' = object_name(s.object_id)

UNION ALL

SELECT @@SERVERNAME AS [Server Name]

, db_name() AS [DB Name]

, 'NA'

, objectname = object_name(o.object_id)

, o.object_id

, indexname = i.name

, i.index_id

, s2.rowcnt

, sa.total_pages

, is_unique

, data_compression_desc

, (select count(*)

from sys.indexes r

where r.object_id = i.object_id) 'count'

,0

,0

,0

,0

,0

,0

,0

,0

,0

,0

,0

, getdate() run_time

FROM sys.indexes i

JOIN sys.objects o

ON i.object_id = o.object_id

join sysindexes s2 on i.object_id = s2.id

and i.index_id = s2.indid

join sys.partitions sp on i.object_id = sp.object_id

and i.index_id = sp.index_id

join sys.allocation_units sa on sa.container_id = sp.hobt_id

WHERE OBJECTPROPERTY(o.object_id,'IsUserTable') = 1

AND i.index_id NOT IN (

SELECT s.index_id

FROM sys.dm_db_index_usage_stats s

WHERE s.object_id = i.object_id

AND i.index_id = s.index_id

AND database_id = @dbid)

--AND i.index_id NOT IN (0,1)

order by last_user_scan, last_user_seek

*/
