Technical Insights: Azure, .NET, Dynamics 365 & EV Charging Architecture

Category: SQL Server

Use PowerShell to execute SQL Server script files

Below is a snippet that uses PowerShell to execute a list of SQL scripts under a specific folder. I use this script in Octopus deployment to deploy database changes (for this particular case we don't use Code First, so we don't use Migrate.exe).

[code language="powershell"]
# Executes a script that may contain multiple batches separated by GO
function ExecuteSqlQuery ($ConnectionString, $SQLQuery) {
    # Split the script on lines that contain only the GO batch separator
    $queries = [System.Text.RegularExpressions.Regex]::Split($SQLQuery, "^\s*GO\s*`$", [System.Text.RegularExpressions.RegexOptions]::IgnoreCase -bor [System.Text.RegularExpressions.RegexOptions]::Multiline)

    $queries | ForEach-Object {
        $q = $_

        if ((-not [String]::IsNullOrWhiteSpace($q)) -and ($q.Trim().ToLowerInvariant() -ne "go"))
        {
            $Connection = New-Object System.Data.SqlClient.SqlConnection

            Try
            {
                $Connection.ConnectionString = $ConnectionString
                $Connection.Open()

                $Command = New-Object System.Data.SqlClient.SqlCommand
                $Command.Connection = $Connection
                $Command.CommandText = $q
                $Command.ExecuteNonQuery() | Out-Null
            }
            Catch
            {
                Write-Host $_.Exception.GetType().FullName, $_.Exception.Message
            }
            Finally
            {
                if ($Connection.State -eq 'Open')
                {
                    Write-Host "Closing Connection..."
                    $Command.Dispose()
                    $Connection.Close()
                }
            }
        }
    }
}

# Connection string - use either SQL authentication (user id/password) or
# Windows authentication (trusted_connection=true), but not both at once
[string] $ConnectionString = "server=MyDBServer;database=MyDatabase;user id=Myuser;password=Mypassword;"

# The folder where all the SQL scripts are located
[string] $ScriptPath = "C:\Octopus\Applications\SQL2014UAT\Powershell Deployment\Scripts"

# Execute every .sql file under the folder, in name order
foreach ($sqlFile in Get-ChildItem -Path $ScriptPath -Filter "*.sql" | Sort-Object Name)
{
    $SQLQuery = Get-Content $sqlFile.FullName -Raw

    ExecuteSqlQuery $ConnectionString $SQLQuery
}
[/code]

Alternatively, if you have SMO, the PowerShell extensions and the snap-in installed, you can use the simpler script below. The prerequisites for Invoke-Sqlcmd are described here
[code language="powershell"]
Get-ChildItem -Path "C:\Octopus\Applications\SQL2014UAT\Powershell Deployment\Scripts" -Filter "*.sql" | % {Invoke-Sqlcmd -InputFile $_.FullName}
[/code]

IIF in SQL Server 2012

Now you can use IIF in SQL Server 2012 instead of CASE (again, this only applies when there are just two possible values to evaluate)

--This will return Apple
SELECT IIF('APPLE' = 'APPLE', 'Apple', 'Banana')

--This will return Banana
SELECT IIF('APPLE' = 'Banana', 'Apple', 'Banana')
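IIF is just shorthand for a two-branch CASE expression, so the first query above is equivalent to:

--The same logic written as a CASE expression
SELECT CASE WHEN 'APPLE' = 'APPLE' THEN 'Apple' ELSE 'Banana' END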

OFFSET FETCH in SQL Server 2012 – Paging

In SQL Server 2005, paging normally requires a CTE or a nested SELECT statement, but in SQL Server 2012 we can use OFFSET and FETCH to do the paging

Let's start with how we normally do paging in SQL Server 2005/2008 – this returns the first 10 rows ordered by REF

WITH Fields AS
(
SELECT REF, NAME, DISPLAY, ROW_NUMBER() OVER (ORDER BY REF) AS RowNumber
FROM SU_FIELD
)
SELECT REF, NAME, DISPLAY
FROM Fields
WHERE RowNumber BETWEEN 1 AND 10
ORDER BY RowNumber ASC;

To move to the next page we do

WITH Fields AS
(
SELECT REF, NAME, DISPLAY, ROW_NUMBER() OVER (ORDER BY REF) AS RowNumber
FROM SU_FIELD
)
SELECT REF, NAME, DISPLAY
FROM Fields
WHERE RowNumber BETWEEN 11 AND 20
ORDER BY RowNumber ASC;

In SQL Server 2012 the query is much simpler to write: OFFSET skips the first x rows, and FETCH NEXT controls how many rows are returned

--This will return 10 records without skipping any row (REF from 0 - 9)
SELECT REF, NAME, DISPLAY
FROM SU_FIELD
ORDER BY REF
OFFSET 0 ROWS
FETCH NEXT 10 ROWS ONLY;

To move to the next page:

--This will return 10 records, skipping the first 10 rows (REF from 10 - 19)
SELECT REF, NAME, DISPLAY
FROM SU_FIELD
ORDER BY REF
OFFSET 10 ROWS
FETCH NEXT 10 ROWS ONLY;
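Since OFFSET and FETCH accept expressions, the paging can also be parameterised. A minimal sketch (the @PageNumber and @PageSize variables are my own, not part of the original example):

DECLARE @PageNumber INT = 2;
DECLARE @PageSize INT = 10;

--Page 2 with a page size of 10: skips rows 1-10, returns rows 11-20
SELECT REF, NAME, DISPLAY
FROM SU_FIELD
ORDER BY REF
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;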

TRY_CONVERT in SQL Server 2012

One of the new T-SQL features in SQL Server 2012 is TRY_CONVERT, a function that returns NULL when the value passed cannot be converted to the expected type. Earlier versions of SQL Server don't have this feature, which means you need to make sure your data can actually be cast to the expected format

In earlier versions of SQL Server you can run this

SELECT CONVERT(INT, 'abc')

and it will throw an error

Msg 245, Level 16, State 1, Line 1
Conversion failed when converting the varchar value 'abc' to data type int.

Then you would normally work around it like this to avoid the error, which means no record is returned when the value is not of a valid data type

SELECT CONVERT(INT, 'abc') AS TEST WHERE ISNUMERIC('abc') = 1

SQL Server 2012

SELECT TRY_CONVERT(INT, 'abc') AS TEST

This will still return a record, but with a NULL value, when the data cannot be cast
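This also makes it easy to keep only the rows that convert cleanly. A small sketch, assuming a hypothetical SomeTable with a VARCHAR column SomeColumn:

--Return only the rows whose value converts to INT without error
SELECT SomeColumn
FROM SomeTable
WHERE TRY_CONVERT(INT, SomeColumn) IS NOT NULL;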

SQL Server UDF Join

I found an interesting article to share about joining to a UDF, where the UDF ends up being executed once for every record it is joined to. The solution is to put the UDF result into a temp table before joining to it

http://sqlblog.com/blogs/rob_farley/archive/2011/11/08/when-is-a-sql-function-not-a-function.aspx
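As a minimal sketch of that workaround (dbo.MyTableValuedUdf, SomeTable and the column names here are hypothetical):

DECLARE @SomeParameter INT = 1;

--Evaluate the UDF once and materialise its result
SELECT *
INTO #UdfResults
FROM dbo.MyTableValuedUdf(@SomeParameter);

--Join to the temp table instead of the UDF, so the function
--is not re-executed for every joined row
SELECT t.REF, t.NAME, u.SomeValue
FROM SomeTable t
INNER JOIN #UdfResults u ON u.REF = t.REF;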

Tracing Dynamic SQL in the SQL Profiler

The default template in SQL Profiler doesn't trace the dynamic SQL executed by a stored procedure. In order to trace the dynamic SQL, you need to turn on these two events in your tracing profile

SP:StmtStarting and SQL:StmtStarting
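For example, with the default template only the outer EXEC of a procedure like the hypothetical one below appears in the trace; the SELECT it builds at runtime only shows up once SP:StmtStarting is enabled:

CREATE PROCEDURE dbo.MyDynamicProc
AS
BEGIN
    --This statement text only exists at runtime, so the default
    --trace events never show it
    DECLARE @sql NVARCHAR(MAX) = N'SELECT REF, NAME FROM SU_FIELD';
    EXEC sp_executesql @sql;
END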

Tips: How to find Recursive Parent-Child relationship in SQL Server

Below is a sample showing how to reproduce a recursive parent-child relationship and how to find out which records are the troublesome ones

CREATE TABLE RecursiveSample
(
REF INT NOT NULL,
PARENT_REF INT NOT NULL,
NAME VARCHAR(255)
)
GO
/*Inserting the parents*/
INSERT INTO RecursiveSample(REF, PARENT_REF, NAME) VALUES(1, 0, 'Parent 1')
GO
INSERT INTO RecursiveSample(REF, PARENT_REF, NAME) VALUES(2, 0, 'Parent 2')
GO
INSERT INTO RecursiveSample(REF, PARENT_REF, NAME) VALUES(3, 0, 'Parent 3')
GO
/*Inserting the children*/
INSERT INTO RecursiveSample(REF, PARENT_REF, NAME) VALUES(4, 1, 'Child Parent 1')
GO
INSERT INTO RecursiveSample(REF, PARENT_REF, NAME) VALUES(5, 2, 'Child Parent 2')
GO
INSERT INTO RecursiveSample(REF, PARENT_REF, NAME) VALUES(6, 3, 'Child Parent 3')
GO
/*Inserting the grandchildren*/
INSERT INTO RecursiveSample(REF, PARENT_REF, NAME) VALUES(7, 4, 'Grand Child Parent 1')
GO
INSERT INTO RecursiveSample(REF, PARENT_REF, NAME) VALUES(8, 5, 'Grand Child Parent 2')
GO
INSERT INTO RecursiveSample(REF, PARENT_REF, NAME) VALUES(9, 6, 'Grand Child Parent 3')
GO
/*Let's update a record to make it recursive*/
UPDATE RecursiveSample SET PARENT_REF = 7 WHERE REF = 4
GO
/*Run this query to find the parent-child structure*/
;WITH CTE
AS(
    SELECT *, [PATH] = CAST(REF AS NVARCHAR(1000)) FROM RecursiveSample WHERE PARENT_REF = 0
    UNION ALL
    SELECT A.*, [PATH] = CAST(C.[PATH] + '/' + RTRIM(A.REF) AS NVARCHAR(1000))
    FROM CTE C
    INNER JOIN RecursiveSample A ON C.REF = A.PARENT_REF
        AND PATINDEX('%/' + RTRIM(A.REF) + '/%', '/' + C.[PATH] + '/') = 0
)
SELECT * FROM CTE
/*
The output lists REF, PARENT_REF, NAME and the PATH from the root.
Records 4 and 7 never appear in it because they now only reference
each other, which identifies them as the troublesome recursive pair.
*/
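To list just the troublesome records, compare the table against the CTE output. A sketch that repeats the same CTE (a CTE is scoped to a single statement, so it has to be restated):

;WITH CTE
AS(
    SELECT *, [PATH] = CAST(REF AS NVARCHAR(1000)) FROM RecursiveSample WHERE PARENT_REF = 0
    UNION ALL
    SELECT A.*, [PATH] = CAST(C.[PATH] + '/' + RTRIM(A.REF) AS NVARCHAR(1000))
    FROM CTE C
    INNER JOIN RecursiveSample A ON C.REF = A.PARENT_REF
        AND PATINDEX('%/' + RTRIM(A.REF) + '/%', '/' + C.[PATH] + '/') = 0
)
--Any record that is never reached from a root is part of a cycle
--or orphaned
SELECT R.*
FROM RecursiveSample R
WHERE R.REF NOT IN (SELECT REF FROM CTE);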

Simulate Deadlock in SQL Server

Simulating a deadlock in SQL Server is pretty simple:

1. Create a table

CREATE TABLE MyDeadlockTable
(
REF INT PRIMARY KEY NOT NULL,
NAME VARCHAR(255)
)
GO

2. Populate the table with records

INSERT INTO MyDeadlockTable(REF, NAME) VALUES(1, 'Record 1')
GO
INSERT INTO MyDeadlockTable(REF, NAME) VALUES(2, 'Record 2')
GO

3. In the first window, run this

BEGIN TRAN
UPDATE MyDeadlockTable SET NAME=NAME WHERE REF = 1;
WAITFOR DELAY '00:00:15'
UPDATE MyDeadlockTable SET NAME=NAME WHERE REF = 2;
COMMIT

4. In a second window, run this

BEGIN TRAN
UPDATE MyDeadlockTable SET NAME=NAME WHERE REF = 2;
WAITFOR DELAY '00:00:15'
UPDATE MyDeadlockTable SET NAME=NAME WHERE REF = 1;
COMMIT

Each transaction now holds the lock the other one needs, so once the delays expire SQL Server detects the deadlock and rolls one session back with error 1205, choosing it as the deadlock victim.

Table Spool (Lazy Spool) in SQL Server 2005

I have a web app that recently started timing out, and the timeout exception was actually coming from SQL Server. When I ran the execution plan I found an operator called Table Spool (Lazy Spool) costing about 20%. I thought it was caused by my full-text search, but drilling down further showed it was because of the DISTINCT keyword, so I decided to use GROUP BY instead. In my case it is only a single column, so it makes no difference to the results. Once I changed that, my web application ran fast again with no more timeouts.
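As a sketch of that rewrite (the table and column names here are hypothetical, not from my actual query):

--Using DISTINCT
SELECT DISTINCT CustomerRef
FROM OrderLine;

--The equivalent GROUP BY - same result set for a single column,
--but the optimiser may choose a different (cheaper) plan
SELECT CustomerRef
FROM OrderLine
GROUP BY CustomerRef;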

I got this explanation from this website

Explain Distinct:
3) We do an all-AMPs RETRIEVE step from … by way of an
all-rows scan with no residual conditions into Spool x
(group_amps), which is redistributed by hash code to all AMPs.
Then we do a SORT to order Spool 1 by the sort key in spool field1
eliminating duplicate rows.

First there’s a redistribution, then duplicate rows are removed:
Efficient, if there are just a few rows per value [per AMP].
Spool size is always about the same, but may be extremely skewed → 2646: No more Spool Space

Explain Group By:
3) We do an all-AMPs SUM step to aggregate from … by way
of an all-rows scan with no residual conditions, and the grouping
identifier in field 1025. Aggregate Intermediate Results are
computed globally, then placed in Spool x.

First each AMP removes duplicate rows locally (first aggregate) and hashes/redistributes the resulting
rows, then there’s a second aggregation to remove duplicate rows:
Efficient, if there are lots of rows per value [per AMP].
Large number of rows per value Spool → small spool size
Small number of rows per value Spool → large spool size
Spool is never skewed.

Other interesting facts quoted from these articles/discussions

http://www.simple-talk.com/sql/learn-sql-server/showplan-operator-of-the-week—lazy-spool/

http://www.sql-server-performance.com/forum/threads/table-spool-lazy-spool.15647/

INDEXING: Take a look at your indices to make sure that they’re all covering the columns that you’re selecting out of the tables. You’ll want to aim to get all the columns included in JOINs and WHERE clauses within the indices. All other columns that are in the SELECT statements should be INCLUDEd, or covered, by the index.

OPERATORS: See if you can get rid of the not equals (“<>”) operators, in favor of a single greater than or less than operator. Can this statement and T.CurrentHorizon <> 0 be changed to this and T.CurrentHorizon > 0?

JOINS: Get rid of the subqueries that are JOINing to tables outside of themselves. For instance, this line and FV2.elementId = FV.elementID might be causing some problems. There’s no reason you can’t move that out of a subquery and into a JOIN to dbo.aowCollectedFact FV, given that you’re GROUPing (DISTINCT) in the main query already.

DISTINCT: Change it to a GROUP BY. I’ve got no reason other than, because it’s good practice and takes two minutes.

LAST NOTE: The exception to all the above might be to leave the final subquery, the IF NOT EXISTS, as a subquery. If you change it to a JOIN, it’ll have to be a LEFT JOIN...WHERE NULL statement, which can actually cause spooling operations. No great way to get around that one.

SQL statistical information – index usage and tips on how to find the most costly SQL Server queries using DMVs

This is just information for users who might not know about this feature in SQL Server Management Studio, even though I'm sure most people already do.

Basically, SQL Server stores all the database usage information, including index usage, disk usage, top transactions, etc. (note that this information is not available from a backup file).

1. Right-click your database, go to Reports, then Standard Reports, and you will find all the out-of-the-box reports

Clicking one of the reports gives you detailed information – in this sample, I'm using Index Usage Statistics

To find the top 20 most expensive queries by total CPU (worker) time, run this query against the database

SELECT TOP 20
 qs.sql_handle,
 qs.execution_count,
 qs.total_worker_time AS Total_CPU,
 total_CPU_inSeconds = --Converted from microseconds
 qs.total_worker_time/1000000,
 average_CPU_inSeconds = --Converted from microseconds
 (qs.total_worker_time/1000000) / qs.execution_count,
 qs.total_elapsed_time,
 total_elapsed_time_inSeconds = --Converted from microseconds
 qs.total_elapsed_time/1000000,
 st.text,
 qp.query_plan
FROM
 sys.dm_exec_query_stats AS qs
 CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
 CROSS APPLY sys.dm_exec_query_plan (qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC

Running this gives you a result set; clicking the query_plan value opens the detailed graphical query plan.
