Commit 37d1645

Merge pull request #3 from microsoft/master: merge back

2 parents 6649942 + 394257f

10 files changed: 73 additions & 55 deletions

samples/features/optimize-for-sequential-key/README.md

Lines changed: 4 additions & 2 deletions
@@ -2,7 +2,7 @@
 
 # OPTIMIZE_FOR_SEQUENTIAL_KEY
 
-In SQL Server 2019, a new index option was added called OPTIMIZE_FOR_SEQUENTIAL_KEY that is intended to address an issue known as [last page insert contention](https://support.microsoft.com/kb/4460004). Most of the solutions to this problem that have been suggested in the past involve making changes to either the application or the structure of the contentious index, which can be costly and sometimes involve performance trade-offs. Rather than making major structural changes, OPTIMIZE_FOR_SEQUENTIAL_KEY addresses some of the SQL Server scheduling issues that can lead to severely reduced throughput when last page insert contention occurs. Using the OPTIMIZE_FOR_SEQUENTIAL_KEY index option can help maintain consistent throughput in high-concurrency environments when the following conditions are true:
+In SQL Server 2019, a new index option was added called [OPTIMIZE_FOR_SEQUENTIAL_KEY](https://docs.microsoft.com/sql/t-sql/statements/create-index-transact-sql#sequential-keys) that is intended to address an issue known as [last page insert contention](https://support.microsoft.com/kb/4460004). Most of the solutions to this problem that have been suggested in the past involve making changes to either the application or the structure of the contentious index, which can be costly and sometimes involve performance trade-offs. Rather than making major structural changes, OPTIMIZE_FOR_SEQUENTIAL_KEY addresses some of the SQL Server scheduling issues that can lead to severely reduced throughput when last page insert contention occurs. Using the OPTIMIZE_FOR_SEQUENTIAL_KEY index option can help maintain consistent throughput in high-concurrency environments when the following conditions are true:
 
 - The index has a sequential key
 - The number of concurrent insert threads to the index far exceeds the number of schedulers (in other words, logical cores)
@@ -80,4 +80,6 @@ The code included in this sample is not intended to be a set of best practices o
 
 For more information, see these articles:
 
-[CREATE INDEX - Sequential Keys](https://docs.microsoft.com/sql/t-sql/statements/create-index-transact-sql#sequential-keys)
+[CREATE INDEX - Sequential Keys](https://docs.microsoft.com/sql/t-sql/statements/create-index-transact-sql#sequential-keys)
+
+[Behind the Scenes on OPTIMIZE_FOR_SEQUENTIAL_KEY](https://techcommunity.microsoft.com/t5/SQL-Server/Behind-the-Scenes-on-OPTIMIZE-FOR-SEQUENTIAL-KEY/ba-p/806888)
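The diff above only adds documentation links; the option itself is set in the `WITH` clause of `CREATE INDEX` / `ALTER INDEX`. As a minimal sketch, here is how the DDL could be generated from Python (the table, index, and column names are hypothetical, for illustration only):

```python
def build_index_ddl(index, table, key_column, optimize_for_sequential_key=True):
    """Build a CREATE INDEX statement, optionally enabling
    OPTIMIZE_FOR_SEQUENTIAL_KEY to mitigate last-page insert contention."""
    option = "ON" if optimize_for_sequential_key else "OFF"
    return (
        f"CREATE CLUSTERED INDEX {index} ON {table} ({key_column}) "
        f"WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY = {option});"
    )

# Hypothetical names; a real deployment would execute this via a driver such as pyodbc.
ddl = build_index_ddl("ix_orders_id", "dbo.Orders", "order_id")
```

The option defaults to OFF on new indexes, so it only needs to be specified on indexes that actually exhibit the contention pattern described above.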

samples/features/sql-big-data-cluster/deployment/aks/README.md

Lines changed: 1 addition & 1 deletion
@@ -42,7 +42,7 @@ python deploy-sql-big-data-aks.py
 
 When prompted, provide your input for the Azure subscription ID, the Azure resource group to create the resources in, and your Docker credentials. Optionally, you can also provide your input for the configurations below or use the defaults provided:
 - azure_region
-- vm_size - we recommend using a VM size that accommodates your workload. For an optimal experience while you are validating basic scenarios, we recommend at least 8 vCPUs and 32 GB of memory across all agent nodes in the cluster. The script uses **Standard_L8s** as the default.
+- vm_size - we recommend using a VM size that accommodates your workload. For an optimal experience while you are validating basic scenarios, we recommend at least 8 vCPUs and 64 GB of memory across all agent nodes in the cluster. The script uses **Standard_L8s** as the default. A default size configuration also uses about 24 disks for persistent volume claims across all components.
 - aks_node_count - the number of worker nodes for the AKS cluster, excluding the master node. The script uses a default of 1 agent node; this is the minimum required for this VM size to have enough resources and disks to provision all the necessary persistent volumes.
 - cluster_name - this value is used for both the AKS cluster and the SQL big data cluster created on top of AKS. Note that the name of the SQL big data cluster is going to be a Kubernetes namespace.
 - password - the same value is going to be used for all accounts that require user password input: the SQL Server master instance SA account, the controller user, and the Knox user.

samples/features/sql-big-data-cluster/deployment/aks/deploy-sql-big-data-aks.py

Lines changed: 1 addition & 1 deletion
@@ -68,7 +68,7 @@ def executeCmd (cmd):
 executeCmd (command)
 
 print("Creating AKS cluster: "+CLUSTER_NAME)
-command = "az aks create --name "+CLUSTER_NAME+" --resource-group "+GROUP_NAME+" --generate-ssh-keys --node-vm-size "+VM_SIZE+" --node-count "+AKS_NODE_COUNT+" --kubernetes-version 1.12.8"
+command = "az aks create --name "+CLUSTER_NAME+" --resource-group "+GROUP_NAME+" --generate-ssh-keys --node-vm-size "+VM_SIZE+" --node-count "+AKS_NODE_COUNT+" --kubernetes-version 1.13.10"
 executeCmd (command)
 
 command = "az aks get-credentials --overwrite-existing --name "+CLUSTER_NAME+" --resource-group "+GROUP_NAME+" --admin"
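The script builds the `az aks create` invocation by string concatenation. A sketch of the same step using an argument list, which avoids shell-quoting problems (the cluster and group names below are hypothetical placeholder values):

```python
import subprocess  # used only if you uncomment the run line below


def build_aks_create_cmd(cluster, group, vm_size, node_count, k8s_version="1.13.10"):
    """Assemble the `az aks create` command used by the deployment script
    as an argv list, so each value is passed as a single argument."""
    return [
        "az", "aks", "create",
        "--name", cluster,
        "--resource-group", group,
        "--generate-ssh-keys",
        "--node-vm-size", vm_size,
        "--node-count", str(node_count),
        "--kubernetes-version", k8s_version,
    ]


# Hypothetical values for illustration.
cmd = build_aks_create_cmd("mybdc", "myrg", "Standard_L8s", 1)
# subprocess.run(cmd, check=True)  # uncomment to actually create the cluster
```

Pinning `--kubernetes-version` (here to 1.13.10, as in the diff) keeps deployments reproducible; AKS periodically retires old versions, so the pinned value needs occasional updates like this one.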

samples/features/sql-big-data-cluster/deployment/kubeadm/ubuntu-single-node-vm-ad/endpoint-patch.json

Lines changed: 13 additions & 14 deletions
@@ -1,20 +1,19 @@
 {
     "patch": [
         {
-            "op": "replace",
-            "path": "spec.pools[?(@.spec.type=='Master')].spec",
-            "value": {
-                "type": "Master",
-                "dnsName": "mastersql.contoso.local",
-                "replicas": 1,
-                "endpoints": [
-                    {
-                        "name": "Master",
-                        "serviceType": "NodePort",
-                        "port": 31433
-                    }
-                ]
-            }
+            "op": "add",
+            "path": "spec.resources.master.spec.endpoints[0].dnsName",
+            "value": "mastersql.contoso.local"
+        },
+        {
+            "op": "add",
+            "path": "spec.resources.gateway.spec.endpoints[0].dnsName",
+            "value": "knox.contoso.local"
+        },
+        {
+            "op": "add",
+            "path": "spec.resources.appproxy.spec.endpoints[0].dnsName",
+            "value": "app.contoso.local"
         }
     ]
 }
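The new patch shape targets a single leaf (`...endpoints[0].dnsName`) with an `add` op instead of replacing the whole pool spec. As a simplified illustration of how such dotted, indexed paths resolve against a config document (azdata's real patch engine is JSONPath-based; this sketch handles only the `name.name[index].name` shapes used above):

```python
import re


def apply_add(config, path, value):
    """Set `value` at a dotted path like
    'spec.resources.master.spec.endpoints[0].dnsName' in a nested dict/list."""
    # Split the path into (key, optional_index) pairs, e.g. ('endpoints', '0').
    parts = re.findall(r"([^.\[\]]+)(?:\[(\d+)\])?", path)
    node = config
    for name, idx in parts[:-1]:
        node = node[name]
        if idx:
            node = node[int(idx)]
    last, idx = parts[-1]
    if idx:
        node[last][int(idx)] = value
    else:
        node[last] = value
    return config


cfg = {"spec": {"resources": {"master": {"spec": {"endpoints": [{"name": "Master"}]}}}}}
apply_add(cfg, "spec.resources.master.spec.endpoints[0].dnsName", "mastersql.contoso.local")
```

Because each op touches one leaf, the three endpoints can be patched independently without restating `replicas`, `serviceType`, or `port`, which is exactly why the diff shrinks.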

samples/features/sql-big-data-cluster/deployment/kubeadm/ubuntu-single-node-vm-ad/security-patch.json

Lines changed: 9 additions & 9 deletions
@@ -4,10 +4,9 @@
         "op": "add",
         "path": "security",
         "value": {
-            "useInternalDomain": false,
             "ouDistinguishedName": "OU=bdc,DC=contoso,DC=local",
-            "dnsIpAddresses": ["11.11.111.11"],
-            "domainControllerFullyQualifiedDns": ["VM.CONTOSO.LOCAL"],
+            "dnsIpAddresses": ["00.00.000.00"],
+            "domainControllerFullyQualifiedDns": ["DC.CONTOSO.LOCAL"],
             "realm": "CONTOSO.LOCAL",
             "domainDnsName": "contoso.local",
             "bdcAdminPrincipals": [
@@ -20,12 +19,13 @@
     },
     {
         "op": "add",
-        "path": "spec.endpoints/0",
-        "value": {
-            "name": "Kerberos",
-            "serviceType": "NodePort",
-            "port": 30088
-        }
+        "path": "spec.endpoints[0].dnsName",
+        "value": "controller.contoso.local"
+    },
+    {
+        "op": "add",
+        "path": "spec.endpoints[1].dnsName",
+        "value": "serviceproxy.contoso.local"
     }
 ]
}

samples/features/sql-big-data-cluster/deployment/kubeadm/ubuntu-single-node-vm-ad/setup-bdc-ad.sh

Lines changed: 17 additions & 24 deletions
@@ -22,19 +22,7 @@ while true; do
     [ "$password" = "$password2" ] && break
     echo "Password mismatch. Please try again."
 done
-echo ""
-# Get docker credentials for private release.
-#
-read -p "Enter Docker username: " DOCKER_USERNAME
-while true; do
-    read -s -p "Enter Docker Password: " docker_password
-    echo
-    read -s -p "Confirm Docker Password: " docker_password2
-    echo
-    [ "$docker_password" = "$docker_password2" ] && break
-    echo "Password mismatch. Please try again."
-done
-export DOCKER_PASSWORD=$docker_password
+
 echo ""
 
 # Get Domain Service Account Username and Password.
@@ -75,9 +63,9 @@ RETRY_INTERVAL=5
 
 # Variables for pulling dockers.
 #
-export DOCKER_REGISTRY="private-repo.microsoft.com"
-export DOCKER_REPOSITORY="mssql-private-preview"
-export DOCKER_TAG="ctp3.2.1"
+export DOCKER_REGISTRY="mcr.microsoft.com"
+export DOCKER_REPOSITORY="mssql/bdc"
+export DOCKER_TAG="2019-RC1-ubuntu"
 
 # Variables used for azdata cluster creation.
 #
@@ -91,9 +79,10 @@ export STORAGE_CLASS=local-storage
 export PV_COUNT="30"
 
 IMAGES=(
-    mssql-app-service-proxy
-    mssql-appdeploy-init
+    mssql-app-service-proxy
+    mssql-control-watchdog
     mssql-controller
+    mssql-dns
     mssql-hadoop
     mssql-mleap-serving-runtime
     mssql-mlserver-py-runtime
@@ -105,10 +94,13 @@ IMAGES=(
     mssql-monitor-influxdb
     mssql-monitor-kibana
     mssql-monitor-telegraf
+    mssql-security-domainctl
     mssql-security-knox
     mssql-security-support
+    mssql-server
     mssql-server-controller
     mssql-server-data
+    mssql-server-ha
     mssql-service-proxy
     mssql-ssis-app-runtime
 )
@@ -320,39 +312,40 @@ echo "Kubernetes master setup done."
 
 # Pull docker images of SQL Server big data cluster.
 #
+
 echo ""
 echo "############################################################################"
 echo "Starting to pull docker images..."
 echo "Pulling images from repository: " $DOCKER_REGISTRY"/"$DOCKER_REPOSITORY
 
-docker login $DOCKER_REGISTRY -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
 for image in "${IMAGES[@]}";
 do
     docker pull $DOCKER_REGISTRY/$DOCKER_REPOSITORY/$image:$DOCKER_TAG
     echo "Docker image" $image " pulled."
 done
-docker logout $DOCKER_REGISTRY
 echo "Docker images pulled."
 
 # Deploy azdata bdc create cluster.
 #
 echo ""
 echo "############################################################################"
-echo "Starting to deploy azdata cluster..."
+echo "Starting to deploy big data cluster..."
 
 # Command to create cluster for single node cluster.
 #
 azdata bdc config init --source kubeadm-dev-test --target kubeadm-custom -f
 azdata bdc config replace -c kubeadm-custom/control.json -j ".spec.docker.repository=$DOCKER_REPOSITORY"
 azdata bdc config replace -c kubeadm-custom/control.json -j ".spec.docker.registry=$DOCKER_REGISTRY"
 azdata bdc config replace -c kubeadm-custom/control.json -j ".spec.docker.imageTag=$DOCKER_TAG"
-azdata bdc config replace -c kubeadm-custom/cluster.json -j "$.spec.pools[?(@.spec.type == "Data")].spec.replicas=1"
+azdata bdc config replace -c kubeadm-custom/bdc.json -j "$.spec.resources.data-0.spec.replicas=1"
 azdata bdc config replace -c kubeadm-custom/control.json -j "spec.storage.data.className=$STORAGE_CLASS"
 azdata bdc config replace -c kubeadm-custom/control.json -j "spec.storage.logs.className=$STORAGE_CLASS"
 azdata bdc config patch -c kubeadm-custom/control.json -p $STARTUP_PATH/security-patch.json
-azdata bdc config patch -c kubeadm-custom/cluster.json -p $STARTUP_PATH/endpoint-patch.json
+azdata bdc config patch -c kubeadm-custom/bdc.json -p $STARTUP_PATH/endpoint-patch.json
+
 azdata bdc create -c kubeadm-custom --accept-eula $ACCEPT_EULA
-echo "Azdata cluster created."
+
+echo "Big data cluster created."
 
 # Setting context to cluster.
 #
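Because the images now come from the public `mcr.microsoft.com` registry, the `docker login`/`docker logout` pair around the pull loop is dropped. A Python sketch of the same loop, which assembles one `docker pull` command per image and only executes them when `run=True` (the abbreviated image list below is illustrative):

```python
import subprocess

REGISTRY = "mcr.microsoft.com"
REPOSITORY = "mssql/bdc"
TAG = "2019-RC1-ubuntu"
IMAGES = ["mssql-controller", "mssql-server-controller"]  # abbreviated sample list


def pull_commands(images, registry=REGISTRY, repository=REPOSITORY, tag=TAG):
    """Mirror the shell loop: one `docker pull registry/repo/image:tag` per image."""
    return [["docker", "pull", f"{registry}/{repository}/{img}:{tag}"] for img in images]


def pull_all(images, run=False):
    """Build the pull commands; execute them only when run=True
    (requires Docker and network access)."""
    cmds = pull_commands(images)
    for cmd in cmds:
        if run:
            subprocess.run(cmd, check=True)
    return cmds
```

Separating command construction from execution makes the loop easy to dry-run and test, which is harder to do with inline shell string interpolation.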

samples/features/sql-big-data-cluster/deployment/kubeadm/ubuntu-single-node-vm/setup-bdc.sh

Lines changed: 1 addition & 1 deletion
@@ -321,7 +321,7 @@ azdata bdc config replace -c kubeadm-custom/bdc.json -j "$.spec.resources.data-0
 azdata bdc config replace -c kubeadm-custom/control.json -j "spec.storage.data.className=$STORAGE_CLASS"
 azdata bdc config replace -c kubeadm-custom/control.json -j "spec.storage.logs.className=$STORAGE_CLASS"
 azdata bdc create -c kubeadm-custom --accept-eula $ACCEPT_EULA
-echo "Azdata cluster created."
+echo "Big data cluster created."
 
 # Setting context to cluster.
 #

samples/features/sql-clr/Curl/README.md

Lines changed: 1 addition & 1 deletion
@@ -9,7 +9,7 @@ One of the most popular tools for calling an API on `http:` endpoints is [curl](
 
 [About this sample](#about-this-sample)<br/>
 [Build the CLR/CURL extension](#build-functions)<br/>
-[Add RegEx functions to your SQL database](#add-functions)<br/>
+[Add CURL functions to your SQL database](#add-functions)<br/>
 [Test the functions](#test)<br/>
 [Disclaimers](#disclaimers)<br/>
 [Appendix](#appendix) - quick install script for your dev environment.<br/>

samples/manage/azure-sql-db-managed-instance/compare-environment-settings/compare-properties.sql

Lines changed: 11 additions & 1 deletion
@@ -18,6 +18,11 @@ select property = 'TEMPDB:'+y.v.value('local-name(.)', 'nvarchar(300)'),
        value = y.v.value('.[1]', 'nvarchar(300)')
 from @source.nodes('//tempdb') x(v)
 cross apply x.v.nodes('*') y(v)
+UNION ALL
+select property = 'INSTANCE:'+y.v.value('local-name(.)', 'nvarchar(300)'),
+       value = y.v.value('.[1]', 'nvarchar(300)')
+from @source.nodes('//instance') x(v)
+cross apply x.v.nodes('*') y(v)
 ),
 tgt as(
 select property = x.v.value('name[1]', 'nvarchar(300)'),
@@ -33,6 +38,11 @@ select property = 'TEMPDB:'+y.v.value('local-name(.)', 'nvarchar(300)'),
        value = y.v.value('.[1]', 'nvarchar(300)')
 from @target.nodes('//tempdb') x(v)
 cross apply x.v.nodes('*') y(v)
+UNION ALL
+select property = 'INSTANCE:'+y.v.value('local-name(.)', 'nvarchar(300)'),
+       value = y.v.value('.[1]', 'nvarchar(300)')
+from @target.nodes('//instance') x(v)
+cross apply x.v.nodes('*') y(v)
 ),
 diff as (
 select property = isnull(src.property, tgt.property),
@@ -43,7 +53,7 @@ where (src.value <> tgt.value
        or src.value is null and tgt.value is not null
        or src.value is not null and tgt.value is null)
 )
-select *
+select property, source, [target]
 from diff
 where is_missing = 0 or @verbose = 1 -- in earlier versions you had to comment out this line; now just set the @verbose flag
 order by property
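The SQL above full-joins the source and target property sets and keeps rows whose values disagree, hiding one-sided (missing) properties unless `@verbose` is set. The same comparison expressed over plain dictionaries (the property names and values below are illustrative sample data, not real instance settings):

```python
def compare_properties(source, target, verbose=False):
    """Return (property, source_value, target_value) rows where the two
    environments disagree; with verbose=True, also report properties
    present on only one side (mirroring the @verbose flag in the SQL)."""
    rows = []
    for prop in sorted(source.keys() | target.keys()):
        s, t = source.get(prop), target.get(prop)
        missing = s is None or t is None
        if s != t and (verbose or not missing):
            rows.append((prop, s, t))
    return rows


# Illustrative sample data.
src = {"INSTANCE:scheduler_count": "8", "TEMPDB:file_count": "4"}
tgt = {"INSTANCE:scheduler_count": "16", "TEMPDB:file_count": "4", "version": "v-target"}
```

As in the SQL, properties with equal values are suppressed, so the output contains only the differences worth investigating.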

samples/manage/azure-sql-db-managed-instance/compare-environment-settings/get-properties.sql

Lines changed: 15 additions & 1 deletion
@@ -53,6 +53,20 @@ where name in ('cost threshold for parallelism','cursor threshold','fill factor
 for xml raw, elements
 );
 set @result += (select name = 'version', value = @@VERSION for xml raw, elements)
+
+set @result += isnull
+    ((SELECT scheduler_count, scheduler_total_count FROM sys.dm_os_sys_info
+      for xml raw('instance'), elements),''
+    );
+
+set @result +=
+    isnull((SELECT name = REPLACE([type], 'MEMORYCLERK_', 'MEMORY:')
+                 , value = CAST(sum(pages_kb)/1024./1024 AS NUMERIC(6,1))
+            FROM sys.dm_os_memory_clerks
+            GROUP BY type
+            HAVING sum(pages_kb) /1024. /1024 > 1
+            for xml raw, elements),'');
+
 select cast(@result as xml);
-end;
 
+end;
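The added memory-clerk query sums `pages_kb` per clerk type and reports only clerks using more than 1 GB, rounded to one decimal. The same reduction in Python (the clerk rows are made-up sample data; real values come from `sys.dm_os_memory_clerks`):

```python
from collections import defaultdict


def memory_by_clerk(rows, threshold_gb=1.0):
    """Sum pages_kb per clerk type and keep clerks above the threshold, in GB.
    Mirrors: GROUP BY type HAVING sum(pages_kb)/1024./1024 > 1."""
    totals = defaultdict(int)
    for clerk_type, pages_kb in rows:
        totals[clerk_type] += pages_kb
    return {
        t.replace("MEMORYCLERK_", "MEMORY:"): round(kb / 1024 / 1024, 1)
        for t, kb in totals.items()
        if kb / 1024 / 1024 > threshold_gb
    }


# Made-up sample rows: (clerk type, pages_kb). 3,145,728 KB is exactly 3 GB.
sample = [("MEMORYCLERK_SQLBUFFERPOOL", 3_145_728), ("MEMORYCLERK_SOSNODE", 524_288)]
```

The `HAVING`-style threshold keeps the comparison output small: tiny clerks are noise when comparing a managed instance against an on-premises server, so only the multi-gigabyte consumers are surfaced.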
