Commit 5aea12b

Merge pull request #1 from microsoft/master
merge
2 parents 515aa26 + e117bea commit 5aea12b

35 files changed: 2549 additions & 83 deletions

Lines changed: 83 additions & 0 deletions
@@ -0,0 +1,83 @@

![](./media/solutions-microsoft-logo-small.png)

# OPTIMIZE_FOR_SEQUENTIAL_KEY

In SQL Server 2019, a new index option called OPTIMIZE_FOR_SEQUENTIAL_KEY was added to address an issue known as [last page insert contention](https://support.microsoft.com/kb/4460004). Most previously suggested solutions to this problem involve changing either the application or the structure of the contentious index, which can be costly and can carry performance trade-offs. Rather than requiring major structural changes, OPTIMIZE_FOR_SEQUENTIAL_KEY addresses some of the SQL Server scheduling issues that can severely reduce throughput when last page insert contention occurs. Using the OPTIMIZE_FOR_SEQUENTIAL_KEY index option can help maintain consistent throughput in high-concurrency environments when the following conditions are true:

- The index has a sequential key
- The number of concurrent insert threads to the index far exceeds the number of schedulers (in other words, logical cores)
- The index has a high rate of new page allocations (page splits), which is most often due to a large row size

This sample illustrates how OPTIMIZE_FOR_SEQUENTIAL_KEY can be used to improve throughput on workloads that are suffering from severe last page insert contention bottlenecks.
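The option is set per index. As a minimal sketch (the `dbo.EventLog` table and index names below are illustrative, not part of this sample), it can be enabled either when the index is created or later on an existing index:

```sql
-- Enable the option at index creation time
CREATE CLUSTERED INDEX CIX_EventLog_PostTime
    ON dbo.EventLog (PostTime)
    WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON);

-- Or turn it on for an existing index; this is a metadata-only change
-- and does not rebuild the index
ALTER INDEX CIX_EventLog_PostTime
    ON dbo.EventLog
    SET (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON);
```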
### Contents

[About this sample](#about-this-sample)<br/>
[Before you begin](#before-you-begin)<br/>
[Run this sample](#run-this-sample)<br/>
[Disclaimers](#disclaimers)<br/>
[Related links](#related-links)<br/>

<a name=about-this-sample></a>

## About this sample

- **Applies to:** SQL Server 2019 (or higher)
- **Workload:** High-concurrency OLTP
- **Programming Language:** T-SQL
- **Authors:** Pam Lahoud
- **Update history:** Created August 15, 2019

<a name=before-you-begin></a>

## Before you begin

To run this sample, you need the following prerequisites.

1. SQL Server 2019 (or higher)
2. A server (physical or virtual) with multiple cores
3. The [AdventureWorks2016_EXT](https://github.com/Microsoft/sql-server-samples/releases/download/adventureworks/AdventureWorks2016_EXT.bak) sample database

> [!NOTE]
> This sample was designed for a server with 8 logical cores. If you run the sample on a server with more cores, you may need to increase the number of concurrent threads in order to observe the improvement.
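Whether the concurrent insert thread count far exceeds the scheduler count on your server is easy to check from T-SQL. This is a standard DMV lookup, offered here as a convenience and not part of the sample scripts:

```sql
-- Number of logical CPUs (and schedulers) visible to SQL Server;
-- the demo's concurrent-session count should comfortably exceed this
-- in order to reproduce the contention
SELECT cpu_count AS logical_cores,
       scheduler_count
FROM sys.dm_os_sys_info;
```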
<a name=run-this-sample></a>

## Run this sample

1. Copy the files from the root folder to a folder on the SQL Server.

2. Download [AdventureWorks2016_EXT.bak](https://github.com/Microsoft/sql-server-samples/releases/download/adventureworks/AdventureWorks2016_EXT.bak) and restore it to your SQL Server 2019 instance.

3. From SQL Server Management Studio or Azure Data Studio, run the Setup.sql script.

4. Modify the SequentialInserts_Optimized.bat and SequentialInserts_Unoptimized.bat files and change the -S parameter to point to the server where the setup script was run. For example, `-S.\SQL2019` points to an instance named SQL2019 on the local server.

5. Open the SQL2019_LatchWaits.htm file to open a Performance Monitor session in your default browser.

6. Right-click anywhere in the browser window to clear the existing data from the session.

7. Click the play button to start the Performance Monitor session.

8. From a Command Prompt, browse to the folder that contains the demo files and run SequentialInserts_Unoptimized.bat, then return to the Performance Monitor window. You should see a high number of Page Latch waits as well as high average wait times. Note the time it takes for the script to complete.

9. Run the SequentialInserts_Optimized.bat script from the same Command Prompt window and again return to the Performance Monitor window. This time you should see a much lower number and duration of Page Latch waits, along with higher Batch Requests/sec. Note the time it takes for the script to complete; it should be significantly faster than the Unoptimized script.
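If you prefer to confirm the latch contention from T-SQL rather than from the Performance Monitor session, one approach (not part of the sample scripts) is to compare the cumulative page latch wait statistics before and after each run:

```sql
-- Cumulative PAGELATCH waits since the wait stats were last cleared;
-- run before and after each batch file and compare the deltas
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms * 1.0 / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE N'PAGELATCH%'
ORDER BY wait_time_ms DESC;
```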
10. **OPTIONAL** - Modify the `-n256` parameter in the Optimized and Unoptimized scripts to see the effect on performance. Generally, the larger the number of concurrent sessions, the greater the improvement will be with OPTIMIZE_FOR_SEQUENTIAL_KEY.

<a name=disclaimers></a>

## Disclaimers

The code included in this sample is not intended to be a set of best practices for building scalable, enterprise-grade applications; that is beyond the scope of this quick start sample.

<a name=related-links></a>

## Related Links

For more information, see these articles:

[CREATE INDEX - Sequential Keys](https://docs.microsoft.com/sql/t-sql/statements/create-index-transact-sql#sequential-keys)
Binary file not shown.
SequentialInserts_Optimized.bat

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
ECHO OFF
rd /s /q %temp%\output
"ostress.exe" -E -S.\SQL2019 -dAdventureWorks2016_EXT -Q"EXEC usp_InsertLogRecord @Optimized = 1" -mstress -quiet -n1 -r1 | FINDSTR "Cantfindthisstring"
rd /s /q %temp%\output
"ostress.exe" -E -S.\SQL2019 -dAdventureWorks2016_EXT -Q"EXEC usp_InsertLogRecord @Optimized = 1" -mstress -quiet -n256 -r250 | FINDSTR "QEXEC Starting Creating elapsed"
SequentialInserts_Unoptimized.bat

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
ECHO OFF
rd /s /q %temp%\output
"ostress.exe" -E -S.\SQL2019 -dAdventureWorks2016_EXT -Q"EXEC usp_InsertLogRecord" -mstress -quiet -n1 -r1 | FINDSTR "Cantfindthisstring"
rd /s /q %temp%\output
"ostress.exe" -E -S.\SQL2019 -dAdventureWorks2016_EXT -Q"EXEC usp_InsertLogRecord" -mstress -quiet -n256 -r250 | FINDSTR "QEXEC Starting Creating elapsed"
Setup.sql

Lines changed: 91 additions & 0 deletions
@@ -0,0 +1,91 @@
USE AdventureWorks2016_EXT;
GO

-- Create regular table

DROP TABLE IF EXISTS [dbo].[TestSequentialKey];
GO

CREATE TABLE [dbo].[TestSequentialKey](
	[DatabaseLogID] [bigint] IDENTITY(1,1) NOT NULL,
	[PostTime] [datetime2] NOT NULL,
	[DatabaseUser] [sysname] NOT NULL,
	[Event] [sysname] NOT NULL,
	[Schema] [sysname] NULL,
	[Object] [sysname] NULL,
	[TSQL] [nvarchar](max) NOT NULL
	CONSTRAINT [PK_TestSequentialKey_DatabaseLogID] PRIMARY KEY NONCLUSTERED
(
	[DatabaseLogID] ASC
));

CREATE CLUSTERED INDEX CIX_TestSequentialKey_PostTime ON TestSequentialKey (PostTime);
GO

-- Create optimized table

DROP TABLE IF EXISTS [dbo].[TestSequentialKey_Optimized];
GO

CREATE TABLE [dbo].[TestSequentialKey_Optimized](
	[DatabaseLogID] [bigint] IDENTITY(1,1) NOT NULL,
	[PostTime] [datetime2] NOT NULL,
	[DatabaseUser] [sysname] NOT NULL,
	[Event] [sysname] NOT NULL,
	[Schema] [sysname] NULL,
	[Object] [sysname] NULL,
	[TSQL] [nvarchar](max) NOT NULL
	CONSTRAINT [PK_TestSequentialKey_Optimized_DatabaseLogID] PRIMARY KEY NONCLUSTERED
(
	[DatabaseLogID] ASC
)
WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY=ON));

CREATE CLUSTERED INDEX CIX_TestSequentialKey_Optimized_PostTime ON TestSequentialKey_Optimized (PostTime) WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY=ON);
GO

-- Create INSERT stored procedure

CREATE OR ALTER PROCEDURE usp_InsertLogRecord @Optimized bit = 0 AS

DECLARE @PostTime datetime2 = SYSDATETIME(), @User sysname, @Event sysname, @Schema sysname, @Object sysname, @TSQL nvarchar(max);

-- Pick a random database user for the log record
SELECT @User = name
FROM sys.sysusers
WHERE issqlrole = 0 AND hasdbaccess = 1 AND status = 0
ORDER BY NEWID();

-- Pick a random table and schema for the log record
SELECT @Object = t.name, @Schema = s.name
FROM sys.tables t
INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
ORDER BY NEWID();

-- Generate a synthetic event and T-SQL string based on the millisecond value
IF DATEPART(ms, @PostTime) % 4 = 0
BEGIN
	SET @Event = N'SELECT';
	SET @TSQL = N'SELECT * FROM ' + @Schema + '.' + @Object;
END
ELSE IF DATEPART(ms, @PostTime) % 4 = 1
BEGIN
	SET @Event = N'INSERT';
	SET @TSQL = N'INSERT ' + @Schema + '.' + @Object + ' SELECT * FROM ' + @Schema + '.' + @Object;
END
ELSE IF DATEPART(ms, @PostTime) % 4 = 2
BEGIN
	SET @Event = N'UPDATE';
	SET @TSQL = N'UPDATE ' + @Schema + '.' + @Object + ' SET 1=1';
END
ELSE IF DATEPART(ms, @PostTime) % 4 = 3
BEGIN
	SET @Event = N'DELETE';
	SET @TSQL = N'DELETE FROM ' + @Schema + '.' + @Object + ' WHERE 1=1';
END

IF @Optimized = 1
	INSERT TestSequentialKey_Optimized (PostTime, DatabaseUser, [Event], [Schema], [Object], [TSQL])
	VALUES (@PostTime, @User, @Event, @Schema, @Object, @TSQL);
ELSE
	INSERT TestSequentialKey (PostTime, DatabaseUser, [Event], [Schema], [Object], [TSQL])
	VALUES (@PostTime, @User, @Event, @Schema, @Object, @TSQL);
GO

samples/features/sql-big-data-cluster/README.md

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ Installation instructions for SQL Server 2019 big data clusters can be found [he
 ## Executing the sample scripts
 The scripts should be executed in a specific order to test the various features. Execute the scripts from each folder in below order:

-1. __[spark/dataloading/transform-csv-files.ipynb](spark/dataloading/transform-csv-files.ipynb)__
+1. __[spark/data-loading/transform-csv-files.ipynb](spark/data-loading/transform-csv-files.ipynb)__
 1. __[data-virtualization/generic-odbc](data-virtualization/generic-odbc)__
 1. __[data-virtualization/hadoop](data-virtualization/hadoop)__
 1. __[data-virtualization/storage-pool](data-virtualization/storage-pool)__

samples/features/sql-big-data-cluster/app-deploy/README.md

Lines changed: 2 additions & 2 deletions
@@ -7,10 +7,10 @@ Application deployment allows you to deploy applications into SQL Server big dat

 ## Pre-requisites
 * SQL Server big data cluster CTP 2.3 or later
-* `mssqlctl` CLI familiarity. If you are unfamiliar with `mssqlctl` please refer to - [App Deployment in SQL Server big data cluster](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) for more information.
+* `azdata` CLI familiarity. If you are unfamiliar with `azdata` please refer to - [App Deployment in SQL Server big data cluster](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) for more information.

 * Tip
-  **mssqlctl app -h** will display the various commands to manage the app
+  **azdata app -h** will display the various commands to manage the app

 ## Templates
 Templates are used by our [App Deploy add-ins](https://docs.microsoft.com/en-us/sql/big-data-cluster/app-deployment-extension?view=sqlallproducts-allversions) and can be used to quickly deploy applications.

samples/features/sql-big-data-cluster/app-deploy/RollDice/README.md

Lines changed: 7 additions & 7 deletions
@@ -34,30 +34,30 @@ To run this sample, you need the following prerequisites.
 **Software prerequisites:**

 1. SQL Server big data cluster CTP 2.3 or later.
-2. `mssqlctl`. Refer to [installing mssqlctl](https://docs.microsoft.com/en-us/sql/big-data-cluster/deploy-install-mssqlctl?view=sqlallproducts-allversions) document on setting up the `mssqlctl` and connecting to a SQL Server 2019 big data cluster.
+2. `azdata`. Refer to [installing azdata](https://docs.microsoft.com/en-us/sql/big-data-cluster/deploy-install-azdata?view=sqlallproducts-allversions) document on setting up the `azdata` and connecting to a SQL Server 2019 big data cluster.

 <a name=run-this-sample></a>

 ## Run this sample

 1. Clone or download this sample on your computer.
-2. Log in to the SQL Server big data cluster using the command below using the IP address of the `mgmtproxy-svc-external` in your cluster. If you are not familiar with `mssqlctl` you can refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.
+2. Log in to the SQL Server big data cluster using the command below using the IP address of the `controller-svc-external` in your cluster. If you are not familiar with `azdata` you can refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.

 ```bash
-mssqlctl login -e https://<ip-address-of-mgmtproxy-svc-external>:30777 -u <user-name> -p <password>
+azdata login -e https://<ip-address-of-controller-svc-external>:30080 -u <user-name> -p <password>
 ```
 3. Deploy the application by running the following command, specifying the folder where your `spec.yaml` and `roll-dice.R` files are located:
 ```bash
-mssqlctl app create --spec ./RollDice
+azdata app create --spec ./RollDice
 ```
 4. Check the deployment by running the following command:
 ```bash
-mssqlctl app list -n roll-dice -v [version]
+azdata app list -n roll-dice -v [version]
 ```
 Once the app is listed as `Ready` you can continue to the next step.
 5. Test the app by running the following command:
 ```bash
-mssqlctl app run -n roll-dice -v [version] --input x=3
+azdata app run -n roll-dice -v [version] --input x=3
 ```
 You should get output like the example for three dice below. The results of the dice rolled are in the `result` data frame:
 ```json
@@ -88,7 +88,7 @@ To run this sample, you need the following prerequisites.
 6. You can clean up the sample by running the following commands:
 ```bash
 # delete app
-mssqlctl app delete --name roll-dice --version [version]
+azdata app delete --name roll-dice --version [version]
 ```

 <a name=sample-details></a>

samples/features/sql-big-data-cluster/app-deploy/SSIS/README.md

Lines changed: 7 additions & 7 deletions
@@ -23,7 +23,7 @@ To run this sample, you need the following prerequisites.
 **Software prerequisites:**

 1. SQL Server big data cluster CTP 2.3 or later.
-2. `mssqlctl`. Refer to [installing mssqlctl](https://docs.microsoft.com/en-us/sql/big-data-cluster/deploy-install-mssqlctl?view=sqlallproducts-allversions) document on setting up the `mssqlctl` and connecting to a SQL Server big data cluster.
+2. `azdata`. Refer to [installing azdata](https://docs.microsoft.com/en-us/sql/big-data-cluster/deploy-install-azdata?view=sqlallproducts-allversions) document on setting up the `azdata` and connecting to a SQL Server big data cluster.
 3. Optional: to see the SSIS package itself, install Visual Studio 2017 if you don't have it already. After that download and install [SSDT](https://docs.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt?view=sql-server-2017#ssdt-for-vs-2017-standalone-installer).
 4. Optional: install [SSMS](https://docs.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms?view=sql-server-2017) if it is not already installed.

@@ -32,19 +32,19 @@ To run this sample, you need the following prerequisites.
 ## Run this sample

 1. Clone or download this sample on your computer.
-2. Log in to the SQL Server big data cluster using the command below using the IP address of the `mgmtproxy-svc-external` in your cluster. If you are not familiar with `mssqlctl` you can refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.
+2. Log in to the SQL Server big data cluster using the command below using the IP address of the `controller-svc-external` in your cluster. If you are not familiar with `azdata` you can refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.

 ```bash
-mssqlctl login -e https://<ip-address-of-mgmtproxy-svc-external>:30777 -u <user-name> -p <password>
+azdata login -e https://<ip-address-of-controller-svc-external>:30080 -u <user-name>
 ```
 3. Replace `[SA_PASSWORD]` in the `spec.yaml` file with the password for SQL user `sa`.
 4. Deploy the application by running the following command, specifying the folder where your `spec.yaml` and `back-up-db.dtsx` files are located:
 ```bash
-mssqlctl app create --spec ./SSIS
+azdata app create --spec ./SSIS
 ```
 5. Check the deployment by running the following command:
 ```bash
-mssqlctl app list --name back-up-db --version [version]
+azdata app list --name back-up-db --version [version]
 ```
 Once the app is listed as `Ready` the job should run within a minute.
 You can check if the backup is created by running:
@@ -56,7 +56,7 @@ To run this sample, you need the following prerequisites.
 6. You can clean up the sample by running the following commands:
 ```bash
 # delete app
-mssqlctl app delete --name back-up-db --version [version]
+azdata app delete --name back-up-db --version [version]
 # delete backup files
 kubectl -n [your namespace] exec -it mssql-master-pool-0 -c mssql-server -- /bin/bash -c "rm /var/opt/mssql/data/*.DWConfigbak"
 ```
@@ -73,7 +73,7 @@ Here is the spec file for this application. This sample uses the `SSIS` runtime
 |Setting|Description|
 |-|-|
 |options|Specifies any command line parameters passed to the execution of the SSIS package|
-|schedule|Specifies when the job should run. This follows cron expressions. A value of '*/1 * * * *' means the job runs *every minute*. If omitted the package will not run automatically and you can run the package on demand using `mssqlctl run -n back-up-db -v [version]` or making a call to the API.|
+|schedule|Specifies when the job should run. This follows cron expressions. A value of '*/1 * * * *' means the job runs *every minute*. If omitted the package will not run automatically and you can run the package on demand using `azdata run -n back-up-db -v [version]` or making a call to the API.|

 ```yaml
 name: back-up-db

0 commit comments
