Commit 3d4ae01

Merge pull request #1 from microsoft/master
catching up the main repo
2 parents 1b839b4 + 394257f commit 3d4ae01

24 files changed

Lines changed: 409 additions & 184 deletions


Lines changed: 85 additions & 0 deletions
@@ -0,0 +1,85 @@
![](./media/solutions-microsoft-logo-small.png)

# OPTIMIZE_FOR_SEQUENTIAL_KEY

In SQL Server 2019, a new index option was added called [OPTIMIZE_FOR_SEQUENTIAL_KEY](https://docs.microsoft.com/sql/t-sql/statements/create-index-transact-sql#sequential-keys) that is intended to address an issue known as [last page insert contention](https://support.microsoft.com/kb/4460004). Most of the solutions to this problem that have been suggested in the past involve changing either the application or the structure of the contentious index, which can be costly and sometimes involves performance trade-offs. Rather than requiring major structural changes, OPTIMIZE_FOR_SEQUENTIAL_KEY addresses some of the SQL Server scheduling issues that can lead to severely reduced throughput when last page insert contention occurs. Using the OPTIMIZE_FOR_SEQUENTIAL_KEY index option can help maintain consistent throughput in high-concurrency environments when the following conditions are true:

- The index has a sequential key
- The number of concurrent insert threads to the index far exceeds the number of schedulers (in other words, logical cores)
- The index has a high rate of new page allocations (page splits), which is most often due to a large row size

This sample illustrates how OPTIMIZE_FOR_SEQUENTIAL_KEY can be used to improve throughput on workloads that are suffering from severe last page insert contention bottlenecks.
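
For reference, the option can be set when an index is created or enabled on an existing index with a metadata-only change. This minimal sketch uses illustrative table and index names that are not part of the sample:

```sql
-- Enable the option when creating an index (illustrative names)
CREATE TABLE dbo.EventLog (
    EventID bigint IDENTITY(1,1) NOT NULL,
    Payload nvarchar(max) NOT NULL,
    CONSTRAINT PK_EventLog PRIMARY KEY CLUSTERED (EventID)
        WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON)
);

-- Or enable it on an existing index without rebuilding it
ALTER INDEX PK_EventLog ON dbo.EventLog
    SET (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON);
```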

### Contents

[About this sample](#about-this-sample)<br/>
[Before you begin](#before-you-begin)<br/>
[Run this sample](#run-this-sample)<br/>
[Disclaimers](#disclaimers)<br/>
[Related links](#related-links)<br/>

<a name="about-this-sample"></a>

## About this sample

- **Applies to:** SQL Server 2019 (or higher)
- **Workload:** High-concurrency OLTP
- **Programming language:** T-SQL
- **Author:** Pam Lahoud
- **Update history:** Created August 15, 2019

<a name="before-you-begin"></a>

## Before you begin

To run this sample, you need the following prerequisites.

1. SQL Server 2019 (or higher)
2. A server (physical or virtual) with multiple cores
3. The [AdventureWorks2016_EXT](https://github.com/Microsoft/sql-server-samples/releases/download/adventureworks/AdventureWorks2016_EXT.bak) sample database

> [!NOTE]
> This sample was designed for a server with 8 logical cores. If you run the sample on a server with more cores, you may need to increase the number of concurrent threads to observe the improvement.
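
To see how many logical cores (and therefore schedulers) your instance has, you can query the standard `sys.dm_os_sys_info` DMV. This check is an illustrative addition, not part of the sample scripts:

```sql
-- Logical CPUs and schedulers visible to SQL Server
SELECT cpu_count AS logical_cores,
       scheduler_count
FROM sys.dm_os_sys_info;
```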

<a name="run-this-sample"></a>

## Run this sample

1. Copy the files from the root folder to a folder on the SQL Server.

2. Download [AdventureWorks2016_EXT.bak](https://github.com/Microsoft/sql-server-samples/releases/download/adventureworks/AdventureWorks2016_EXT.bak) and restore it to your SQL Server 2019 instance.

3. From SQL Server Management Studio or Azure Data Studio, run the Setup.sql script.

4. Modify the SequentialInserts_Optimized.bat and SequentialInserts_Unoptimized.bat files and change the `-S` parameter to point to the server where the setup script was run. For example, `-S.\SQL2019` points to an instance named SQL2019 on the local server.

5. Open the SQL2019_LatchWaits.htm file to launch a Performance Monitor session in your default browser.

6. Right-click anywhere in the browser window to clear the existing data from the session.

7. Click the play button to start the Performance Monitor session.

8. From a Command Prompt, browse to the folder that contains the demo files and run SequentialInserts_Unoptimized.bat, then return to the Performance Monitor window. You should see a high number of Page Latch waits as well as high average wait times. Note the time it takes for the script to complete.

9. Run the SequentialInserts_Optimized.bat script from the same Command Prompt window and again return to the Performance Monitor window. This time you should see a much lower number and shorter duration of Page Latch waits, along with higher Batch Requests/sec. Note the time it takes for the script to complete; it should be significantly faster than the Unoptimized script.

10. **OPTIONAL** - Modify the `-n256` parameter in the Optimized and Unoptimized scripts to see the effect on performance. Generally, the larger the number of concurrent sessions, the greater the improvement with OPTIMIZE_FOR_SEQUENTIAL_KEY.
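
If you prefer to observe the contention from inside SQL Server rather than Performance Monitor, a query along these lines surfaces the PAGELATCH waits that last page insert contention produces. It is an illustrative addition, not part of the sample scripts:

```sql
-- Latch-related waits accumulated since the last restart (or stats clear)
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE N'PAGELATCH%'
ORDER BY wait_time_ms DESC;
```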

<a name="disclaimers"></a>

## Disclaimers
The code included in this sample is not intended to be a set of best practices for building scalable, enterprise-grade applications. That is beyond the scope of this quick start sample.

<a name="related-links"></a>

## Related links

For more information, see these articles:

[CREATE INDEX - Sequential Keys](https://docs.microsoft.com/sql/t-sql/statements/create-index-transact-sql#sequential-keys)

[Behind the Scenes on OPTIMIZE_FOR_SEQUENTIAL_KEY](https://techcommunity.microsoft.com/t5/SQL-Server/Behind-the-Scenes-on-OPTIMIZE-FOR-SEQUENTIAL-KEY/ba-p/806888)
Binary file not shown.
Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
ECHO OFF
rd /s /q %temp%\output
"ostress.exe" -E -S.\SQL2019 -dAdventureWorks2016_EXT -Q"EXEC usp_InsertLogRecord @Optimized = 1" -mstress -quiet -n1 -r1 | FINDSTR "Cantfindthisstring"
rd /s /q %temp%\output
"ostress.exe" -E -S.\SQL2019 -dAdventureWorks2016_EXT -Q"EXEC usp_InsertLogRecord @Optimized = 1" -mstress -quiet -n256 -r250 | FINDSTR "QEXEC Starting Creating elapsed"
Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
ECHO OFF
rd /s /q %temp%\output
"ostress.exe" -E -S.\SQL2019 -dAdventureWorks2016_EXT -Q"EXEC usp_InsertLogRecord" -mstress -quiet -n1 -r1 | FINDSTR "Cantfindthisstring"
rd /s /q %temp%\output
"ostress.exe" -E -S.\SQL2019 -dAdventureWorks2016_EXT -Q"EXEC usp_InsertLogRecord" -mstress -quiet -n256 -r250 | FINDSTR "QEXEC Starting Creating elapsed"
Lines changed: 91 additions & 0 deletions
@@ -0,0 +1,91 @@
USE AdventureWorks2016_EXT;
GO

-- Create regular table

DROP TABLE IF EXISTS [dbo].[TestSequentialKey];
GO

CREATE TABLE [dbo].[TestSequentialKey](
	[DatabaseLogID] [bigint] IDENTITY(1,1) NOT NULL,
	[PostTime] [datetime2] NOT NULL,
	[DatabaseUser] [sysname] NOT NULL,
	[Event] [sysname] NOT NULL,
	[Schema] [sysname] NULL,
	[Object] [sysname] NULL,
	[TSQL] [nvarchar](max) NOT NULL
	CONSTRAINT [PK_TestSequentialKey_DatabaseLogID] PRIMARY KEY NONCLUSTERED
	(
		[DatabaseLogID] ASC
	));

CREATE CLUSTERED INDEX CIX_TestSequentialKey_PostTime ON TestSequentialKey (PostTime);
GO

-- Create optimized table

DROP TABLE IF EXISTS [dbo].[TestSequentialKey_Optimized];
GO

CREATE TABLE [dbo].[TestSequentialKey_Optimized](
	[DatabaseLogID] [bigint] IDENTITY(1,1) NOT NULL,
	[PostTime] [datetime2] NOT NULL,
	[DatabaseUser] [sysname] NOT NULL,
	[Event] [sysname] NOT NULL,
	[Schema] [sysname] NULL,
	[Object] [sysname] NULL,
	[TSQL] [nvarchar](max) NOT NULL
	CONSTRAINT [PK_TestSequentialKey_Optimized_DatabaseLogID] PRIMARY KEY NONCLUSTERED
	(
		[DatabaseLogID] ASC
	)
	WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY=ON));

CREATE CLUSTERED INDEX CIX_TestSequentialKey_Optimized_PostTime ON TestSequentialKey_Optimized (PostTime) WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY=ON);
GO

-- Create INSERT stored procedure

CREATE OR ALTER PROCEDURE usp_InsertLogRecord @Optimized bit = 0 AS

DECLARE @PostTime datetime2 = SYSDATETIME(), @User sysname, @Event sysname, @Schema sysname, @Object sysname, @TSQL nvarchar(max)

SELECT @User = name
FROM sys.sysusers
WHERE issqlrole = 0 and hasdbaccess = 1 and status = 0
ORDER BY NEWID();

SELECT @Object = t.name, @Schema = s.name
FROM sys.tables t
INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
ORDER BY NEWID();

IF DATEPART(ms, @PostTime) % 4 = 0
BEGIN
	SET @Event = N'SELECT';
	SET @TSQL = N'SELECT * FROM ' + @Schema + '.' + @Object
END
ELSE IF DATEPART(ms, @PostTime) % 4 = 1
BEGIN
	SET @Event = N'INSERT';
	SET @TSQL = N'INSERT ' + @Schema + '.' + @Object + ' SELECT * FROM ' + @Schema + '.' + @Object
END
ELSE IF DATEPART(ms, @PostTime) % 4 = 2
BEGIN
	SET @Event = N'UPDATE';
	SET @TSQL = N'UPDATE ' + @Schema + '.' + @Object + ' SET 1=1';
END
ELSE IF DATEPART(ms, @PostTime) % 4 = 3
BEGIN
	SET @Event = N'DELETE';
	SET @TSQL = N'DELETE FROM ' + @Schema + '.' + @Object + ' WHERE 1=1';
END

IF @Optimized = 1
	INSERT TestSequentialKey_Optimized (PostTime, DatabaseUser, [Event], [Schema], [Object], [TSQL])
	VALUES (@PostTime, @User, @Event, @Schema, @Object, @TSQL);
ELSE
	INSERT TestSequentialKey (PostTime, DatabaseUser, [Event], [Schema], [Object], [TSQL])
	VALUES (@PostTime, @User, @Event, @Schema, @Object, @TSQL);

GO
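
After running the setup script, you can confirm which indexes have the option enabled via the `optimize_for_sequential_key` column that `sys.indexes` exposes in SQL Server 2019. This verification query is an illustrative addition, not part of the sample:

```sql
-- Which indexes on the two test tables have OPTIMIZE_FOR_SEQUENTIAL_KEY on?
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name,
       i.optimize_for_sequential_key
FROM sys.indexes AS i
WHERE i.object_id IN (OBJECT_ID(N'dbo.TestSequentialKey'),
                      OBJECT_ID(N'dbo.TestSequentialKey_Optimized'));
```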
Lines changed: 1 addition & 0 deletions
@@ -1,2 +1,3 @@
 #!/bin/bash -e
+apt update
 apt install microsoft-mlserver-mml-r-9.3.0

samples/features/sql-big-data-cluster/bootstrap-sample-db.cmd

Lines changed: 29 additions & 14 deletions
@@ -41,12 +41,14 @@ if NOT EXIST tpcxbb_1gb.bak (
 set SQLCMDSERVER=%SQL_MASTER_INSTANCE%
 set SQLCMDUSER=sa
 set SQLCMDPASSWORD=%SQL_MASTER_SA_PASSWORD%
-for /F "usebackq" %%v in (`sqlcmd -I -b -h-1 -Q "print RTRIM((CAST(SERVERPROPERTY('ProductLevel') as nvarchar(128))));"`) do SET CTP_VERSION=%%v
-if /i "%CTP_VERSION%" EQU "CTP2.4" (set MASTER_POD_NAME=mssql-master-pool-0) else (set MASTER_POD_NAME=master-0)
+for /F "usebackq tokens=1,2" %%v in (`sqlcmd -I -b -h-1 -W -Q "SET NOCOUNT ON; SELECT @@SERVERNAME, SERVERPROPERTY('IsHadrEnabled');"`) do (
+SET MASTER_POD_NAME=%%v
+SET HADR_ENABLED=%%w
+)
 
 REM Copy the backup file, restore the database, create necessary objects and data file
 echo Copying sales database backup file to SQL Master instance...
-%DEBUG% kubectl cp tpcxbb_1gb.bak %CLUSTER_NAMESPACE%/%MASTER_POD_NAME%:var/opt/mssql/data -c mssql-server || goto exit
+%DEBUG% kubectl cp tpcxbb_1gb.bak %CLUSTER_NAMESPACE%/%MASTER_POD_NAME%:var/opt/mssql/data/ -c mssql-server || goto exit
 
 REM Download and copy the sample backup files
 if /i "%AW_WWI_SAMPLES%" EQU "--install-extra-samples" (
@@ -57,7 +59,10 @@ if /i "%AW_WWI_SAMPLES%" EQU "--install-extra-samples" (
 %DEBUG% curl -L -G "https://github.com/Microsoft/sql-server-samples/releases/download/adventureworks/%%f" -o %%f
 )
 echo Copying %%f database backup file to SQL Master instance...
-%DEBUG% kubectl cp %%f %CLUSTER_NAMESPACE%/%MASTER_POD_NAME%:var/opt/mssql/data -c mssql-server || goto exit
+%DEBUG% kubectl cp %%f %CLUSTER_NAMESPACE%/%MASTER_POD_NAME%:var/opt/mssql/data/ -c mssql-server || goto exit
+
+echo Removing database backup file...
+%DEBUG% kubectl exec %MASTER_POD_NAME% -n %CLUSTER_NAMESPACE% -c mssql-server -i -t -- bash -c "rm -rvf /var/opt/mssql/data/%%f"
 )
 
 set FILES=WideWorldImporters-Full.bak WideWorldImportersDW-Full.bak
@@ -67,40 +72,50 @@ if /i "%AW_WWI_SAMPLES%" EQU "--install-extra-samples" (
 %DEBUG% curl -L -G "https://github.com/Microsoft/sql-server-samples/releases/download/wide-world-importers-v1.0/%%f" -o %%f
 )
 echo Copying %%f database backup file to SQL Master instance...
-%DEBUG% kubectl cp %%f %CLUSTER_NAMESPACE%/%MASTER_POD_NAME%:var/opt/mssql/data -c mssql-server || goto exit
+%DEBUG% kubectl cp %%f %CLUSTER_NAMESPACE%/%MASTER_POD_NAME%:var/opt/mssql/data/ -c mssql-server || goto exit
+
+echo Removing database backup file...
+%DEBUG% kubectl exec %MASTER_POD_NAME% -n %CLUSTER_NAMESPACE% -c mssql-server -i -t -- bash -c "rm -rvf /var/opt/mssql/data/%%f"
 )
 )
 
+REM If HADR is enabled then port-forward 1533 temporarily to connect to the primary directly
+REM Default timeout for port-forward is 5 minutes so start command in background & it will terminate automatically
+if /i "%HADR_ENABLED%" EQU "1" (
+%DEBUG% start kubectl port-forward pods/%MASTER_POD_NAME% 1533:1533 -n %CLUSTER_NAMESPACE%
+SET SQLCMDSERVER=127.0.0.1,1533
+)
+
 echo Configuring sample database(s)...
 %DEBUG% sqlcmd -i "%STARTUP_PATH%bootstrap-sample-db.sql" -o "bootstrap.out" -I -b -v SA_PASSWORD="%KNOX_PASSWORD%" || goto exit
 
 REM remove files copied into the pod:
-echo Removing database backup files...
-%DEBUG% kubectl exec %MASTER_POD_NAME% -n %CLUSTER_NAMESPACE% -c mssql-server -i -t -- bash -c "rm -rvf /var/opt/mssql/data/*.bak"
+echo Removing database backup file...
+%DEBUG% kubectl exec %MASTER_POD_NAME% -n %CLUSTER_NAMESPACE% -c mssql-server -i -t -- bash -c "rm -rvf /var/opt/mssql/data/tpcxbb_1gb.bak"
 
 for %%F in (web_clickstreams inventory customer) do (
 if NOT EXIST %%F.csv (
 echo Exporting %%F data...
 if /i %%F EQU web_clickstreams (set DELIMITER=,) else (SET DELIMITER=^|)
-%DEBUG% bcp sales.dbo.%%F out "%%F.csv" -S %SQL_MASTER_INSTANCE% -Usa -P%SQL_MASTER_SA_PASSWORD% -c -t"!DELIMITER!" -o "%%F.out" -e "%%F.err" || goto exit
+%DEBUG% bcp sales.dbo.%%F out "%%F.csv" -S %SQLCMDSERVER% -Usa -P%SQL_MASTER_SA_PASSWORD% -c -t"!DELIMITER!" -o "%%F.out" -e "%%F.err" || goto exit
 )
 )
 
 if NOT EXIST product_reviews.csv (
 echo Exporting product_reviews data...
-%DEBUG% bcp "select pr_review_sk, replace(replace(pr_review_content, ',', ';'), char(34), '') as pr_review_content from sales.dbo.product_reviews" queryout "product_reviews.csv" -S %SQL_MASTER_INSTANCE% -Usa -P%SQL_MASTER_SA_PASSWORD% -c -t, -o "product_reviews.out" -e "product_reviews.err" || goto exit
+%DEBUG% bcp "select pr_review_sk, replace(replace(pr_review_content, ',', ';'), char(34), '') as pr_review_content from sales.dbo.product_reviews" queryout "product_reviews.csv" -S %SQLCMDSERVER% -Usa -P%SQL_MASTER_SA_PASSWORD% -c -t, -o "product_reviews.out" -e "product_reviews.err" || goto exit
 )
 
 REM Copy the data file to HDFS
 echo Uploading web_clickstreams data to HDFS...
-%DEBUG% curl -i -L -k -u root:%KNOX_PASSWORD% -X PUT "https://%KNOX_ENDPOINT%/gateway/default/webhdfs/v1/clickstream_data?op=MKDIRS" || goto exit
-%DEBUG% curl -i -L -k -u root:%KNOX_PASSWORD% -X PUT "https://%KNOX_ENDPOINT%/gateway/default/webhdfs/v1/clickstream_data/web_clickstreams.csv?op=create&overwrite=true" -H "Content-Type: application/octet-stream" -T "web_clickstreams.csv" || goto exit
+%DEBUG% curl -L -k -u root:%KNOX_PASSWORD% -X PUT "https://%KNOX_ENDPOINT%/gateway/default/webhdfs/v1/clickstream_data?op=MKDIRS" || goto exit
+%DEBUG% curl -L -k -u root:%KNOX_PASSWORD% -X PUT "https://%KNOX_ENDPOINT%/gateway/default/webhdfs/v1/clickstream_data/web_clickstreams.csv?op=create&overwrite=true" -H "Content-Type: application/octet-stream" -T "web_clickstreams.csv" || goto exit
 :: del /q web_clickstreams.*
 
 echo.
 echo Uploading product_reviews data to HDFS...
-%DEBUG% curl -i -L -k -u root:%KNOX_PASSWORD% -X PUT "https://%KNOX_ENDPOINT%/gateway/default/webhdfs/v1/product_review_data?op=MKDIRS" || goto exit
-%DEBUG% curl -i -L -k -u root:%KNOX_PASSWORD% -X PUT "https://%KNOX_ENDPOINT%/gateway/default/webhdfs/v1/product_review_data/product_reviews.csv?op=create&overwrite=true" -H "Content-Type: application/octet-stream" -T "product_reviews.csv" || goto exit
+%DEBUG% curl -L -k -u root:%KNOX_PASSWORD% -X PUT "https://%KNOX_ENDPOINT%/gateway/default/webhdfs/v1/product_review_data?op=MKDIRS" || goto exit
+%DEBUG% curl -L -k -u root:%KNOX_PASSWORD% -X PUT "https://%KNOX_ENDPOINT%/gateway/default/webhdfs/v1/product_review_data/product_reviews.csv?op=create&overwrite=true" -H "Content-Type: application/octet-stream" -T "product_reviews.csv" || goto exit
 :: del /q product_reviews.*
 
 REM %DEBUG% del /q *.out *.err *.csv
@@ -122,4 +137,4 @@ goto :eof
 :usage
 echo USAGE: %0 ^<CLUSTER_NAMESPACE^> ^<SQL_MASTER_IP^> ^<SQL_MASTER_SA_PASSWORD^> ^<KNOX_IP^> [^<KNOX_PASSWORD^>] [--install-extra-samples] [SQL_MASTER_PORT] [KNOX_PORT]
 echo Default ports are assumed for SQL Master instance ^& Knox gateway unless specified.
-exit /b 0
+exit /b 0
