CTSQL Notes


CTE:

Common Table Expressions


Overview

A common table expression (or CTE) is a powerful, easy-to-read way of querying data in a structured
way. Think of it as an "inline view" or "derived table" you build on the fly. You can query directly from it or,
in some cases, do updates against it.

It is fully documented in SQL Server Books Online.

Syntax

The syntax of the CTE uses the WITH keyword and it wraps a SELECT statement. The SELECT
statement can be complicated when necessary. See SELECT for more information.
WITH cte [(return_column [, ...n])] AS (
SELECT value
FROM table
)
some_dml_statement;

The wrapped SELECT statement is nothing more than a normal SELECT statement.

WITH (return_columns) AS (
The keyword WITH is required and identifies the start of a CTE. The statement that comes before the
WITH must be terminated with a semicolon.
You can optionally specify one or more return column names in parentheses or you can let your query
determine the return column names.
After the keyword AS comes the left parenthesis that opens the query itself.

SELECT
This is a normal SELECT statement. Unless you use the optional TOP operator, you cannot include an
ORDER BY clause. You probably don't want to specify the order inside a CTE anyway because the only
impact is going to be to slow things down.

) some_dml_statement;
The closing parenthesis closes the CTE, and then you write your normal DML statement that works with
the CTE by name. This can be a SELECT, INSERT, UPDATE or DELETE. There are certain restrictions
on INSERT, UPDATE and DELETE; Books Online contains a list of all of them. The simplest one is that
you can delete from only one table at a time, so the CTE must contain only one table. Another simple one
is that you can't INSERT, UPDATE or DELETE against a CTE that contains an aggregate function.

Examples
This is probably more easily understood through examples. These examples will use the following table
structure.
IF OBJECT_ID('dbo.Sample', 'u') IS NOT NULL DROP TABLE dbo.Sample;
CREATE TABLE dbo.Sample (
ID Integer not null identity (1, 1),
Dealer Varchar(8),
CustomerNumber Varchar(16),
SurveyID Integer,
EntryDate Datetime,
CONSTRAINT Sample_pk PRIMARY KEY CLUSTERED (ID));

INSERT INTO dbo.Sample(Dealer, CustomerNumber, SurveyID, EntryDate)


VALUES('ABC', '12345', 1, '01/01/2014'),
('ABC', '12345', 2, '01/02/2014'),
('ABC', '12345', 1, '01/03/2014'),
('ABC', '12345', 2, '01/04/2014'),
('ABC', '23456', 1, '01/05/2014'),
('ABC', '23456', 2, '01/06/2014'),
('ABC', '34567', 1, '01/07/2014'),
('ABC', '34567', 2, '01/08/2014'),
('ABC', '34567', 1, '01/09/2014'),
('ABC', '34567', 2, '01/10/2014'),
('ABC', '34567', 1, '01/11/2014'),
('ABC', '34567', 2, '01/12/2014'),
('ABC', '45678', 1, '01/13/2014'),
('ABC', '45678', 2, '01/14/2014'),
('DEF', '45678', 1, '01/15/2014'),
('DEF', '45678', 2, '01/16/2014'),
('DEF', '45678', 1, '01/17/2014'),
('DEF', '45678', 2, '01/18/2014'),
('DEF', '45678', 1, '01/30/2014'),
('DEF', '45678', 2, '01/30/2014');

To query sample rows with an entry date having more than one row:
WITH cteDupeDates AS (
SELECT EntryDate
FROM dbo.Sample
GROUP BY EntryDate
HAVING COUNT(EntryDate) > 1
)
SELECT s.Dealer, s.CustomerNumber, s.SurveyID, s.EntryDate
FROM dbo.Sample s
INNER JOIN cteDupeDates d ON d.EntryDate = s.EntryDate
ORDER BY s.EntryDate;

Since we somehow found ourselves in the situation where we have duplicate rows for Dealer,
CustomerNumber and SurveyID, we may need to de-dupe the data set by these three criteria, keeping
only the first one sorted by SurveyID. The first step is to identify the duplicates. Using the
ROW_NUMBER windowing function, we can build a CTE to assign a consecutive number for each unique
set of values. For each Dealer, CustomerNumber and SurveyID, assign a number to each row starting at
1 and incrementing for each duplicate. This example is shown here as the base of the logic to de-dupe
the set from start to finish.
WITH cteCustomers AS (
SELECT ID, Dealer, CustomerNumber, SurveyID,
RN = ROW_NUMBER() OVER(PARTITION BY Dealer, CustomerNumber, SurveyID
ORDER BY SurveyID DESC)
FROM dbo.Sample
)
SELECT Dealer, CustomerNumber, SurveyID, RN
FROM cteCustomers
ORDER BY Dealer, CustomerNumber;

The next step is to isolate the rows that are duplicates. Since we're going to want to keep the rows where
RN = 1, we'll add a WHERE clause to only return those rows where RN > 1. In other words, only return
the duplicate rows.
WITH cteCustomers AS (
SELECT ID, Dealer, CustomerNumber, SurveyID,
RN = ROW_NUMBER() OVER(PARTITION BY Dealer, CustomerNumber, SurveyID
ORDER BY SurveyID DESC)
FROM dbo.Sample
)
SELECT Dealer, CustomerNumber, SurveyID, RN
FROM cteCustomers
WHERE RN > 1
ORDER BY Dealer, CustomerNumber;

The last step would be to delete the duplicates, or the rows where RN > 1. Since our CTE only contains
one table, we can delete directly from the CTE instead of having to join in the dbo.Sample table itself.
WITH cteCustomers AS (
SELECT ID, Dealer, CustomerNumber, SurveyID,
RN = ROW_NUMBER() OVER(PARTITION BY Dealer, CustomerNumber, SurveyID
ORDER BY SurveyID DESC)
FROM dbo.Sample
)
DELETE cteCustomers
WHERE RN > 1;

Cascading CTEs
Overview

Using the derived tables from CTEs, you can easily reference the CTEs in queries or in other CTEs. The
top-down approach helps to divide the work that would normally take multiple temp tables and
calculations into a single, easy-to-read syntax. This structure is called a cascading CTE or cCTE. The first
one is executed, then execution cascades down to the next level.

As developers, we try to break down complex queries into manageable steps. SQL Server works the
same way. If we can "divide and conquer" the individual related steps to build a result set, cascading
CTEs are the approach we can use to write the query to work this way.

Examples

Like normal CTEs, the cascading CTEs are probably best understood with an example.

Using our original sample, let's say you wanted to list the dealer and customer number. You also want the
number of rows you have for that dealer and customer number along with the percentage of the sample
for that dealer. Without a CTE, this would ordinarily be approached using a temp table. With a cascading
CTE, it's changed to a single query without a temp table.
WITH cteSample AS (
SELECT Dealer, CustomerNumber
FROM dbo.Sample
),
cteDealerCounts AS (
SELECT Dealer, CONVERT(Numeric(6, 3), COUNT(*)) RC
FROM cteSample
GROUP BY Dealer
)
SELECT s.Dealer, s.CustomerNumber, COUNT(*) Sample,
PercentOfDealer = CONVERT(Numeric(6, 3), COUNT(*) / dc.RC * 100)
FROM cteSample s
CROSS APPLY cteDealerCounts dc
WHERE dc.Dealer = s.Dealer
GROUP BY s.Dealer, s.CustomerNumber, dc.RC;

The first CTE (cteSample) queries the dealer and customer number from the table. The second one
(cteDealerCounts) queries the cteSample and calculates the dealer and number of rows for the dealer.
The final query pulls the dealer, customer number and COUNT from cteSample. The cteDealerCounts is
included using the CROSS APPLY operator and the WHERE clause defines how they're joined. The
query can then divide the count by the total number of rows for the dealer. Add a conversion and multiply
by 100 to show it as a percentage and you're done.

The CTEs don't need to reference each other. In other words, they don't have to cascade. You can pull
from totally different tables and then use them both in your outer query. Be careful of a hidden loop in this
case; you can create a scenario where the second CTE is executed once for each row in the first CTE. If
this happens, you'll know it because your formerly-fast query will slow to a crawl. There are plenty of
cases when you need to create a temp table and go through individual steps, but this is not one of them.

Each query that runs against a table can make full use of indexes on the underlying table. Cascading
CTEs are a high-performance way to produce a result set you wouldn't otherwise be able to produce in a
single query.

Recursive CTEs
Overview

A recursive CTE, or rCTE, is a CTE that calls itself. They're a form of hidden loop and are notoriously
inefficient. They're being mentioned here in the interest of completeness only, but they aren't going to be
covered. While you'll undoubtedly be able to find many blog posts touting their virtues, please remember
that you can't believe everything you read on the internet. They're slow and inefficient, so it's best to not
use them.

Notes

CTEs create derived tables you can query. The scope is limited to the single statement that uses them.
Cascading CTEs are ones that query other CTEs.
Recursive CTEs are performance black holes and should be avoided.

SELECT:

SELECT
Overview

The SELECT statement queries data from tables, views, functions and variables and returns one or more
rows. It can also assign values to variables instead of returning them. Commands such as INSERT,
UPDATE and DELETE can use SELECTs to drive them.
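
For example, here's a minimal sketch of assigning values to variables with SELECT, using the dbo.Sample
table from the CTE examples (the variable names are made up for illustration):
DECLARE @Dealer Varchar(8) = 'ABC';
DECLARE @SampleRows Integer;

SELECT @SampleRows = COUNT(*)
FROM dbo.Sample
WHERE Dealer = @Dealer;

SELECT @Dealer Dealer, @SampleRows SampleRows;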

This is the command that's probably used more than any other when working with data. It is fully
documented in SQL Server Books Online.
Syntax

The full syntax of the SELECT statement can be simple or it can get complicated, having many optional
parts to it. The minimal syntax is:
--minimal syntax
SELECT value;

--minimal syntax to return columns from a table


SELECT columns
FROM table1;

The syntax showing more of the optional parts of the statement is:
SELECT [DISTINCT] [TOP t] [columns [, ...n]] [function]
[INTO new_table]
[FROM table1]
[join_condition table2 ON table1.column = table2.column]
[WHERE where_clause]
[GROUP BY group_by_clause]
[HAVING search_condition]
[ORDER BY sort_columns [ASC | DESC]]
[FOR XML [RAW | EXPLICIT | PATH]] [xml_options];

While many parts of the SELECT statement are optional, if they are included, they must appear in the
appropriate order.

SELECT {columns}
This is the main clause of the statement where you define what data to return. This is normally a comma
separated list of expressions. They can be column names, calculations, conditional calculations,
aggregations, variables or several other items.

The optional DISTINCT operator is used to return only unique rows, but it can have a significant impact
on performance.
The optional TOP operator limits the number of rows returned. You can specify TOP t to return the first {t}
rows or TOP p PERCENT to return the first {p} percent of rows. Which rows are first is determined by the
ORDER BY clause.
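
As a quick sketch against the dbo.Sample table used in the examples, DISTINCT and TOP look like this:
--unique dealers only
SELECT DISTINCT Dealer
FROM dbo.Sample;

--the 5 most recent sample rows
SELECT TOP 5 Dealer, CustomerNumber, EntryDate
FROM dbo.Sample
ORDER BY EntryDate DESC;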

You can use scalar functions in the SELECT statement. The different functions are covered on
the functions page and there are pages for the date and
time, numeric, string, aggregate, windowing and anonymizer functions. Links to the Books Online
references are found on the pages that cover the functions.

INTO new_table
The INTO clause is optional. It will create a new table from whatever the SELECT {columns} returns and
populate the data in {new_table}. The column names and data types are used to create the new table, but
constraints and foreign keys are not copied. This clause creates the base structure only, but doesn't
include all the extra stuff that can go along with it.
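
A small sketch of SELECT ... INTO against the dbo.Sample table from the examples; the target table name
dbo.SampleCopy is made up for illustration:
IF OBJECT_ID('dbo.SampleCopy', 'u') IS NOT NULL DROP TABLE dbo.SampleCopy;

SELECT Dealer, CustomerNumber, SurveyID, EntryDate
INTO dbo.SampleCopy
FROM dbo.Sample
WHERE Dealer = 'ABC';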

FROM table1
The FROM clause defines the tables, views, functions and derived tables that serve as the source of the
data. This is actually an optional part of the statement. If you don't need to hit a table to query a variable
or constant, you don't include the FROM clause. This is usually where your JOIN conditions are defined,
where you also define how the objects relate to each other. Each definition of a joined table is called a
join predicate.

How you join your tables can have a serious impact on performance. See Implicit Casts for information on
implicit casts and Non-SARGable Predicates for more information on using functions in predicates. This is
the first place to look at if your query isn't performing well.

Joins are one place where having the proper indexes defined can really help your query perform well.
Having properly-defined foreign keys with nonclustered indexes can dramatically improve query
performance. The same indexes can also slow down INSERTs, UPDATEs and DELETEs, so use indexing
sparingly and correctly.

WHERE where_clause
The WHERE clause is where you filter the data that is returned. Each condition of the WHERE statement
is called a predicate. How you filter your data can have a drastic impact on performance. See Implicit
Casts for information on implicit casts and Non-SARGable Predicates for more information on using
functions in predicates. This is the second place to look at if your query isn't performing well.

This is another spot where proper indexing can dramatically help the performance of your query, but see
the warnings above.
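
To illustrate the point about functions in predicates, here's a rough sketch of two ways to filter dbo.Sample
by year. The first wraps the column in a function and isn't SARGable; the second compares the bare column
to a range, so an index on EntryDate can be used.
--non-SARGable: the function on the column prevents an index seek
SELECT Dealer, CustomerNumber, EntryDate
FROM dbo.Sample
WHERE YEAR(EntryDate) = 2014;

--SARGable: the column is compared directly to a range
SELECT Dealer, CustomerNumber, EntryDate
FROM dbo.Sample
WHERE EntryDate >= '01/01/2014'
AND EntryDate < '01/01/2015';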

GROUP BY group_by_clause
The GROUP BY clause defines how you want your results to be grouped, usually when using an
aggregate function. When using functions like SUM and COUNT, you almost always GROUP BY the non-
aggregated columns in the SELECT list.

The GROUP BY also has several options for ROLLUP and CUBE in the output to create totals and
subtotals, similar to what you can do in Excel. See Books Online for more information.
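
As a rough sketch against dbo.Sample, ROLLUP adds a grand total row (the row with a NULL dealer) to the
per-dealer counts:
SELECT Dealer, COUNT(*) SurveyCount
FROM dbo.Sample
GROUP BY ROLLUP(Dealer);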

This is another place where proper indexing can help performance. The warnings above still apply.

HAVING search_condition
The HAVING clause acts like a WHERE clause for aggregations. For example, if you're querying your
QFS RawSurveyData table for the COUNT of rows by LoginID, you can use the HAVING clause to return
only those LoginIDs with more than 5 rows.

The HAVING clause normally requires that the query also have a GROUP BY clause.
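
A hedged sketch of the LoginID example described above; dbo.RawSurveyData and LoginID are assumed
names from that system, not objects defined in these notes:
SELECT LoginID, COUNT(*) RowsPerLogin
FROM dbo.RawSurveyData
GROUP BY LoginID
HAVING COUNT(*) > 5;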

ORDER BY sort_columns
The ORDER BY clause defines the order in which the result set is sorted. Contrary to popular opinion,
there is no default sort order for a query. If you want to guarantee the order of the rows returned, you
must include an ORDER BY clause.

The {sort_columns} part defines the columns to sort on in order, with an optional direction for each
column. The direction can be either ASC (ascending) or DESC (descending) and the default is ASC.

FOR XML
The FOR XML clause allows generation of XML from the data returned by the query. This is the last
logical processing step and occurs after the query is fully resolved. The PATH mode is commonly used to
perform some really cool tricks with concatenation. When you consider the many xml_options available,
there's quite a bit to the FOR XML clause. It's fully detailed in Books Online.
For an excellent article on using the PATH mode to produce a delimited list, see Wayne Sheffield's
article Creating a comma-separated list.

Set Operators
Set operators are operations performed on sets of data. They can be used on multiple SELECT
statements to manipulate the result set, which will be a single set based on each query and the result of
applying the set operators. The following rules apply to all set operators:
1. All queries must return the same columns in the same order
2. The respective columns in each query must be of compatible data types.
Ideally, each respective column will be of the same data type, not just a compatible one. Having
different data types will result in an implicit cast, which can cause miserable performance.

Syntax
Each SELECT statement can contain the full syntax as shown above. The isolated syntax of the set
operations is:
SELECT column_list
FROM table1
{set_operator}
SELECT column_list
FROM table2
[ORDER BY sort_columns [ASC | DESC]];

INTERSECT
The INTERSECT operator returns distinct rows from each result set where they appear in all result sets.
In performance comparisons of the many different approaches to determining which rows are in set A and
in set B, the INTERSECT is usually the fastest. It's also normally the simplest to write and the easiest for
people to read.

EXCEPT
The EXCEPT operator returns distinct rows from the first result set that do not appear in subsequent
result sets. In performance comparisons of the many different approaches to determining which rows are
in set A and not in set B, the EXCEPT is usually the fastest. It's also normally the simplest to write and the
easiest for people to read.
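
As a rough sketch against the dbo.Sample table from the examples, these two operators can answer "which
customer numbers do dealers ABC and DEF share?" and "which belong only to ABC?":
--customer numbers that appear for both dealers
SELECT CustomerNumber FROM dbo.Sample WHERE Dealer = 'ABC'
INTERSECT
SELECT CustomerNumber FROM dbo.Sample WHERE Dealer = 'DEF';

--customer numbers that appear for ABC but not for DEF
SELECT CustomerNumber FROM dbo.Sample WHERE Dealer = 'ABC'
EXCEPT
SELECT CustomerNumber FROM dbo.Sample WHERE Dealer = 'DEF';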

UNION ALL
The UNION ALL operator returns all rows from both result sets. Because it returns all rows and doesn't
apply a DISTINCT, the UNION ALL is almost always faster than a UNION.

UNION
The UNION operator returns the distinct rows from both result sets combined. This command applies a
DISTINCT so only unique rows are returned; if a row exists in both result sets, it appears only once in the
output. Because of this, the UNION is almost always slower than UNION ALL.
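
A rough sketch contrasting the two against dbo.Sample; the UNION ALL keeps every row from both queries,
while the UNION collapses the duplicates:
--every row from both queries, duplicates included
SELECT Dealer, CustomerNumber FROM dbo.Sample WHERE Dealer = 'ABC'
UNION ALL
SELECT Dealer, CustomerNumber FROM dbo.Sample WHERE Dealer = 'DEF';

--distinct rows only
SELECT Dealer, CustomerNumber FROM dbo.Sample WHERE Dealer = 'ABC'
UNION
SELECT Dealer, CustomerNumber FROM dbo.Sample WHERE Dealer = 'DEF';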

Joins
Joins are the key to getting related data from different tables. They define how tables relate to one
another. There are several types of joins that behave differently. The SELECT statement can contain the
full syntax as shown above. The isolated syntax of a join is:
SELECT left_table.column_name, right_table.column_name
FROM left_table
[INNER | [RIGHT | LEFT | FULL] OUTER | CROSS] JOIN right_table ON
left_table.some_column = right_table.some_column;

The three basic types of joins are INNER, OUTER and CROSS.

INNER JOIN
This join returns rows from both tables where the join predicate is true. In other words, the rows returned
will be those where the values of the joined columns match. Rows in the left table are not returned if
there are no matching values in the right table.
LEFT OUTER JOIN
This join returns all rows from the left table and the matching rows from the right table. Where there is no
match, the columns from the right table are returned as NULL. Put another way, all rows from the left table
are returned and any missing data in the right table will be returned as NULL. There is no match on NULLs
in the left table. When OUTER JOINs are necessary, this is the way most people think.
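
Here's a rough sketch using the dbo.Employees and dbo.ScheduledHours tables that appear later in these
notes; employees with no hours history still come back, with NULL in the right table's columns:
SELECT e.EmployeeName, sh.Hours, sh.EffectiveDate
FROM dbo.Employees e
LEFT OUTER JOIN dbo.ScheduledHours sh ON sh.EmployeeID = e.ID
ORDER BY e.EmployeeName;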

RIGHT OUTER JOIN


This join returns all rows from the right table and the matching rows from the left table. Where there is no
match, the columns from the left table are returned as NULL. Put another way, all rows from the right table
are returned and any missing data in the left table will be returned as NULL. There is no match on NULLs
in the right table.

FULL OUTER JOIN


This join returns all rows from both tables, including unmatched rows. Unmatched columns from either
table are returned as NULL. Put another way, all rows from both tables are returned and any missing data
in either table will be returned as NULL. There is no match on NULLs in either table.

CROSS JOIN
This join returns all rows from the left table matched against all rows in the right table. Put another way,
for each row in the left table, all rows are returned from the right table. This is also known as a "Cartesian
product". The number of rows returned by a CROSS JOIN can be calculated by multiplying the number of
rows in each CROSS JOINed table. For example, if you CROSS JOIN three copies of a 2-row table
together, 2 * 2 * 2 = 8 rows will be returned.

The applications of this join type are limited. They're very powerful when they're needed if done correctly,
but when done accidentally, they can have a very detrimental impact on the whole server. Use sparingly
and with caution.
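
A tiny sketch of the row-count math using made-up derived tables; each copy multiplies the row count, so
three 2-row sets produce 2 * 2 * 2 = 8 rows:
SELECT a.n, b.n, c.n
FROM (VALUES (1), (2)) a(n)
CROSS JOIN (VALUES (1), (2)) b(n)
CROSS JOIN (VALUES (1), (2)) c(n);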

Apply Operator

The APPLY operator is used when you want to use a table-valued function (TVF) in your query. It's
similar to a join, but for functions instead of tables. The performance of functions can vary tremendously,
depending on the type of function you use. For more information on the performance of different types of
functions, see the performance page on functions. There are two different APPLY operators.

CROSS APPLY
This is similar to an INNER JOIN for use with TVFs in that the rows are returned only when the function
returns a matching row. The columns returned by the functions are appended to the columns returned
from the table. You can SELECT all or only some of the columns from the TVF, just like the table. When
you use the APPLY operator, this is the most common one to use.

OUTER APPLY
This is similar to an OUTER JOIN for use with TVFs in that the rows are returned whether or not the
function returns a matching row. The columns returned by the function are appended to the columns
returned from the table. You can SELECT all or only some of the columns from the TVF, just like the table.
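
A hedged sketch of CROSS APPLY; dbo.GetSurveysForDealer is a made-up inline TVF created here only to
show the shape of the call:
--hypothetical inline TVF returning the surveys for one dealer
CREATE FUNCTION dbo.GetSurveysForDealer (@Dealer Varchar(8))
RETURNS TABLE
AS
RETURN (SELECT CustomerNumber, SurveyID, EntryDate
FROM dbo.Sample
WHERE Dealer = @Dealer);
GO

SELECT d.Dealer, f.CustomerNumber, f.SurveyID
FROM (SELECT DISTINCT Dealer FROM dbo.Sample) d
CROSS APPLY dbo.GetSurveysForDealer(d.Dealer) f;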

Examples

These examples will use the following table structure.


IF OBJECT_ID('dbo.Sample', 'u') IS NOT NULL DROP TABLE dbo.Sample;
CREATE TABLE dbo.Sample (
ID Integer not null identity (1, 1),
CONSTRAINT Sample_pk
PRIMARY KEY (ID),
Dealer Varchar(8),
CustomerNumber Varchar(16),
SurveyID Integer,
EntryDate Datetime);

INSERT INTO dbo.Sample(Dealer, CustomerNumber, SurveyID, EntryDate)


VALUES('ABC', '12345', 1, '01/01/2014'),
('ABC', '12345', 2, '01/02/2014'),
('ABC', '12345', 1, '01/03/2014'),
('ABC', '12345', 2, '01/04/2014'),
('ABC', '23456', 1, '01/05/2014'),
('ABC', '23456', 2, '01/06/2014'),
('ABC', '34567', 1, '01/07/2014'),
('ABC', '34567', 2, '01/08/2014'),
('ABC', '34567', 1, '01/09/2014'),
('ABC', '34567', 2, '01/10/2014'),
('ABC', '34567', 1, '01/11/2014'),
('ABC', '34567', 2, '01/12/2014'),
('ABC', '45678', 1, '01/13/2014'),
('ABC', '45678', 2, '01/14/2014'),
('DEF', '45678', 1, '01/15/2014'),
('DEF', '45678', 2, '01/16/2014'),
('DEF', '45678', 1, '01/17/2014'),
('DEF', '45678', 2, '01/18/2014'),
('DEF', '45678', 1, '01/30/2014'),
('DEF', '45678', 2, '01/30/2014');

To query all sample rows for dealer 'ABC' sorted by the entry date:
SELECT Dealer, CustomerNumber, SurveyID, EntryDate
FROM dbo.Sample
WHERE Dealer = 'ABC'
ORDER BY EntryDate;

To query the sample for the count of surveys by survey id and sort by the count from largest to smallest:
SELECT SurveyID, COUNT(*) SurveyCount
FROM dbo.Sample
GROUP BY SurveyID
ORDER BY COUNT(*) DESC;

To query sample entry dates with more than one row:


SELECT EntryDate
FROM dbo.Sample
GROUP BY EntryDate
HAVING COUNT(EntryDate) > 1
ORDER BY EntryDate;

To query sample rows with an entry date having more than one row:
WITH cteDupeDates AS (
SELECT EntryDate
FROM dbo.Sample
GROUP BY EntryDate
HAVING COUNT(EntryDate) > 1
)
SELECT s.Dealer, s.CustomerNumber, s.SurveyID, s.EntryDate
FROM dbo.Sample s
INNER JOIN cteDupeDates d ON d.EntryDate = s.EntryDate
ORDER BY s.EntryDate;
To query the range of dates by each survey id:
SELECT SurveyID, MIN(EntryDate), MAX(EntryDate)
FROM dbo.Sample
GROUP BY SurveyID
ORDER BY SurveyID;

To query the sample for the day of the week (both number and name) and the number of surveys on that
day:
SELECT DATEPART(weekday, EntryDate), DATENAME(weekday, EntryDate), COUNT(*)
FROM dbo.Sample
GROUP BY DATEPART(weekday, EntryDate), DATENAME(weekday, EntryDate)
ORDER BY DATEPART(weekday, EntryDate);

The technique of generating a comma-delimited list is covered in an article by Wayne Sheffield published
at Creating a comma-separated list. An example of generating a comma-delimited list of customer
numbers for each dealer is shown here. This isn't the best example because there are duplicate customer
numbers, but the technique is what's important.
WITH cte AS (
SELECT DISTINCT Dealer
FROM dbo.Sample)
SELECT Dealer,
CommaList = STUFF((SELECT ',' + CustomerNumber
FROM dbo.Sample s
WHERE s.Dealer = cte.Dealer
ORDER BY CustomerNumber
FOR XML PATH(''), TYPE).value('.',
'varchar(max)'), 1, 1, '')
FROM CTE
ORDER BY Dealer;

The correlated subquery builds a comma-separated list. The FOR XML PATH builds it into an XML
structure with no tags, and then an XQuery trick casts it into a varchar to prevent the XML entities from
being created (also called de-entitization). The STUFF command replaces the first comma with an empty
string, which is necessary because we're starting with our delimiter instead of ending with it. Had we
ended with it, we would have to use a SUBSTRING to remove the trailing one, which would be less
efficient because we'd need both a LEN and a SUBSTRING function. You can do this with any column you
can convert to a string.

Notes

There are usually many different ways to query data and achieve the same results. The art of querying
comes into play when trying to determine the best way to write a query that returns the results you want
quickly while minimally impacting others. How you write a query determines its performance. The only
way to know which approach is the most performant is to test them. When you do testing, don't test over
just 100 rows. Most things will work well over a 10-row table; it's when your table grows that you'll
discover your scalability problems. Test things over larger data sets. See Scalability for more information.
When writing queries, it can be helpful to know how the query will be processed. The TOP n clause is one
such case. If you want to select the most recent rows, the results must first be sorted by date and then the
TOP n rows returned (see the sketch after the list below). This is the sequence in which a query is processed.

1. FROM
2. ON
3. JOIN
4. WHERE
5. GROUP BY
6. WITH CUBE or WITH ROLLUP (Part of the GROUP BY clause).
7. HAVING
8. SELECT
9. DISTINCT
10. ORDER BY
11. TOP
12. FOR XML
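
As a quick sketch of that sequence, the ORDER BY here is applied before the TOP, so the query returns the
3 most recent rows from dbo.Sample rather than 3 arbitrary rows:
SELECT TOP 3 Dealer, CustomerNumber, SurveyID, EntryDate
FROM dbo.Sample
ORDER BY EntryDate DESC;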

INSERT
Overview

The INSERT statement adds one or more rows to a table. It is fully documented in SQL Server Books
Online.

The INSERT statement has several forms only because it can get its data from several sources. It can
insert rows into physical tables, updatable views, temp tables or table variables.

Syntax

Syntax 1: Value List

This syntax lets you insert a defined set of values into a table.
INSERT INTO table_name(column1 [, ... n])

VALUES(value1 [, ...n]);

Syntax 2: Query Results

This syntax lets you insert the results of a query into a table. The SELECT statement can be complicated
when necessary. See SELECT for more information.
INSERT INTO table_name(column1 [, ... n])

SELECT value1 [, ...n]

FROM source_table;

Syntax 3: Procedure Results

This syntax lets you insert the results of a stored procedure into a table.
INSERT INTO table_name(column1 [, ... n])

EXECUTE dbo.SomeProcedure [some_parameter];

Examples

The examples below will use the following test tables.


--create an employees table
IF OBJECT_ID('dbo.Employees', 'u') IS NOT NULL DROP TABLE dbo.Employees;

CREATE TABLE dbo.Employees (
ID Integer not null identity (1, 1),
CONSTRAINT Employees_PK
primary key (ID),
EmployeeName Varchar(32) not null,
HireDate Datetime not null,
ScheduledHours integer not null,
EntryDate Datetime not null default GETDATE());

--three months later, we decide we want to keep track of changes to the
--scheduled hours, so we'll create a new table
IF OBJECT_ID('dbo.ScheduledHours', 'u') IS NOT NULL DROP TABLE dbo.ScheduledHours;

CREATE TABLE dbo.ScheduledHours (
ID Integer not null identity (1, 1),
CONSTRAINT ScheduledHours_PK
PRIMARY KEY (ID),
EmployeeID Integer not null,
CONSTRAINT ScheduledHours_Employees_FK
FOREIGN KEY (EmployeeID)
REFERENCES dbo.Employees(ID),
Hours Integer,
EffectiveDate Datetime not null,
EntryDate Datetime not null default GETDATE());

Syntax 1: Value List

To insert rows into dbo.Employees, the syntax is:


INSERT INTO dbo.Employees(EmployeeName, HireDate, ScheduledHours)

VALUES('Beavis', '06/01/2014', 40);

INSERT INTO dbo.Employees(EmployeeName, HireDate, ScheduledHours)

VALUES('Butthead', '06/02/2014', 40);


Starting with SQL Server 2008, you can insert multiple rows with a single INSERT statement. If your SQL has to run
in SQL Server 2005 or you enjoy unnecessary typing, you can go with the individual rows as shown
above. If you don't have to support SQL Server 2005, you can take advantage of the syntax below and
save yourself some typing and round trips to the server.
INSERT INTO dbo.Employees(EmployeeName, HireDate, ScheduledHours)

VALUES('Beavis', '06/01/2014', 40),

('Butthead', '06/02/2014', 40);

Note that in both examples we haven't specified a value for the ID or EntryDate columns. We don't specify
a value for ID because it is defined in the design as an IDENTITY column that starts with 1 and
increments by 1 for each row inserted into the table. We can't explicitly specify a value for this column
unless we enable that ability for this table, but normally this is not required or wanted. We don't specify a
value for EntryDate because there's a DEFAULT value defined in the design of the table that we want
populated.
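
If you really did need to supply your own ID value, a rough sketch of temporarily enabling that ability looks
like this (the inserted row is made up for illustration):
SET IDENTITY_INSERT dbo.Employees ON;

INSERT INTO dbo.Employees(ID, EmployeeName, HireDate, ScheduledHours)
VALUES(100, 'Daria', '06/03/2014', 40);

SET IDENTITY_INSERT dbo.Employees OFF;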

We could define a value to insert by executing one of these statements. The first one uses the
GETDATE() function to specify the current date/time on the server and the second one uses a value
we're free to make up.
INSERT INTO dbo.Employees(EmployeeName, HireDate, ScheduledHours, EntryDate)

VALUES('Beavis', '06/01/2014', 40, GETDATE()),

('Butthead', '06/02/2014', 40, GETDATE());

INSERT INTO dbo.Employees(EmployeeName, HireDate, ScheduledHours, EntryDate)

VALUES('Beavis', '06/01/2014', 40, '08/01/2014'),

('Butthead', '06/02/2014', 40, '09/01/2014');

Syntax 2: Query Results

We can use the above syntax to populate our dbo.ScheduledHours by looking up the ID value for every
employee and then writing a VALUES clause in our INSERT statement for each one. This would get awfully
tedious if we had 2,000 rows to create.
INSERT INTO dbo.ScheduledHours(EmployeeID, Hours, EffectiveDate)

VALUES(1, 40, GETDATE()),

(2, 40, GETDATE());

Instead, we can use a query to generate a table of values from dbo.Employees that we'll use to INSERT
rows into our dbo.ScheduledHours table.
INSERT INTO dbo.ScheduledHours(EmployeeID, Hours, EffectiveDate)

SELECT e.ID, 40, GETDATE()

FROM dbo.Employees e;

Syntax 3: Procedure Results


If you write a stored procedure to run a query against the tables and return a result set, you might want to use
the data it returns to run a summary. For example, the boss tells you he wants to see a report of
employees and the number of changes to their scheduled hours. You already have a procedure named
dbo.GetEmpHours to query the data and return the employee name, effective date and hours. It takes
into account the custom rules for including people with no changes, but the procedure returns the
detail and you want a summary. So, you could use the same procedure to populate a temp table you can
then query like this:
IF OBJECT_ID('tempdb.dbo.#emp_hours', 'u') IS NOT NULL DROP TABLE #emp_hours;

CREATE TABLE #emp_hours (
EmployeeName Varchar(32) not null,
EffectiveDate Datetime not null,
Hours Integer);

INSERT INTO #emp_hours(EmployeeName, EffectiveDate, Hours)
EXECUTE dbo.GetEmpHours;

Now you can query your #emp_hours table for the data to feed the new report.
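
For example, a rough sketch of the summary the boss asked for, grouping the procedure's detail rows by
employee:
SELECT EmployeeName, COUNT(*) HoursChanges
FROM #emp_hours
GROUP BY EmployeeName
ORDER BY EmployeeName;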

DELETE
Overview

The DELETE statement deletes one or more rows from a table. It is fully documented in SQL Server
Books Online.

Syntax

The syntax of a DELETE statement is pretty straightforward.


DELETE FROM table_name
[WHERE ...];

Examples

The examples below will use the following test tables.


--create an employees table
IF OBJECT_ID('dbo.Employees', 'u') IS NOT NULL DROP TABLE dbo.Employees;
CREATE TABLE dbo.Employees (
ID Integer not null identity (1, 1),
CONSTRAINT Employees_PK
primary key (ID),
EmployeeName Varchar(32) not null,
HireDate Datetime not null,
ScheduledHours integer not null,
EntryDate Datetime not null default GETDATE());
--three months later, we decide we want to keep track of changes to the
--scheduled hours, so we'll create a new table
IF OBJECT_ID('dbo.ScheduledHours', 'u') IS NOT NULL DROP TABLE dbo.ScheduledHours;
CREATE TABLE dbo.ScheduledHours (
ID Integer not null identity (1, 1),
CONSTRAINT ScheduledHours_PK
PRIMARY KEY (ID),
EmployeeID Integer not null,
CONSTRAINT ScheduledHours_Employees_FK
FOREIGN KEY (EmployeeID)
REFERENCES dbo.Employees(ID),
Hours Integer,
EffectiveDate Datetime not null,
EntryDate Datetime not null default GETDATE());

--create two employees


INSERT INTO dbo.Employees(EmployeeName, HireDate, ScheduledHours)
VALUES('Beavis', '06/01/2014', 40),
('Butthead', '06/02/2014', 40);

--build some hours history


INSERT INTO dbo.ScheduledHours(EmployeeID, Hours, EffectiveDate)
SELECT e.ID, 40, GETDATE()
FROM dbo.Employees e;

INSERT INTO dbo.ScheduledHours(EmployeeID, Hours, EffectiveDate)


SELECT e.ID, 35, GETDATE()
FROM dbo.Employees e;

INSERT INTO dbo.ScheduledHours(EmployeeID, Hours, EffectiveDate)


SELECT e.ID, 40, GETDATE()
FROM dbo.Employees e;

--now create a couple of employees without creating any hours history


INSERT INTO dbo.Employees(EmployeeName, HireDate, ScheduledHours)
VALUES('Agent Smith', '11/01/2014', 40),
('Mr. Anderson', '11/02/2014', 40);

To delete all rows, just fire the DELETE statement without a WHERE clause. Any foreign keys that
reference the table we're deleting from will be checked, and the statement will be terminated if completing
it would violate referential integrity.
--this won't work in this case because of the foreign key from
--dbo.ScheduledHours
DELETE FROM dbo.Employees;

To delete a specific set of rows, use a WHERE clause in your DELETE statement. This is useful when you
have a set of rows to delete but they aren't dependent on other tables.
DELETE FROM dbo.ScheduledHours
WHERE ID = 1;

Let's say we wanted to delete employees without any hours history. We could do this in a couple of different
ways. The right approach depends on the individual situation and on whether NULLs are allowed in the
column you're checking against (NOT IN behaves badly when the subquery returns NULLs).
--we could use a subquery
DELETE FROM dbo.Employees
WHERE ID NOT IN (SELECT EmployeeID
FROM dbo.ScheduledHours);

--we could also use a correlated subquery


DELETE FROM dbo.Employees
WHERE NOT EXISTS (SELECT 1
FROM dbo.ScheduledHours sh
WHERE sh.EmployeeID = dbo.Employees.ID);

File transfer protocol = SFTP-3

Cryptographic protocol = SSH-2

SSH implementation = OpenSSH_7.4

Encryption algorithm = AES-256 SDCTR (AES-NI accelerated)

Compression = No

------------------------------------------------------------

Server host key fingerprints

SHA-256 = ssh-ed25519 255 NPeJ3LmSbxffzUI7NBNMR9XGW9ek0GFza4dC558DkA4

MD5 = ssh-ed25519 255 42:4e:b1:63:06:cc:66:0d:e4:e9:ef:60:2f:f6:ae:5c

------------------------------------------------------------

Can change permissions = Yes

Can change ACL = No

Can change owner/group = Yes

Can execute arbitrary command = No

Can create symbolic/hard link = Yes/Yes

Can lookup user groups = No

Can duplicate remote files = No

Can check available space = Yes

Can calculate file checksum = No

Native text (ASCII) mode transfers = No

------------------------------------------------------------

Additional information
The server supports these SFTP extensions:

posix-rename@openssh.com="1"

statvfs@openssh.com="2"

fstatvfs@openssh.com="2"

hardlink@openssh.com="1"

fsync@openssh.com="1"

------------------------------------------------------------

Total bytes on device = 846 GB (908,782,112,768 B)

Free bytes on device = 676 GB (726,400,032,768 B)

Total bytes for user = Unknown

Free bytes for user = 635 GB (682,360,602,624 B)

Bytes per allocation unit = 4.00 KB (4,096 B)
