
Commit 0277e66

Merge pull request #4 from microsoft/master
sync with msft samples repo
2 parents aeb6d44 + 6112625 commit 0277e66

10 files changed: 83,613 additions and 80,527 deletions


samples/features/intelligent-query-processing/notebooks/Batch_Mode_on_Rowstore.ipynb

Lines changed: 2037 additions & 0 deletions
Large diffs are not rendered by default.

samples/features/sql2019notebooks/intelligent-query-processing/Scalar_UDF_Inlining.ipynb renamed to samples/features/intelligent-query-processing/notebooks/Scalar_UDF_Inlining.ipynb

Lines changed: 80516 additions & 80516 deletions
Large diffs are not rendered by default.
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+# Security Notebooks
+1. **TDE_on_Standard.ipynb** - This notebook demonstrates how to enable TDE on SQL Server 2019 Standard Edition, along with encryption scan SUSPEND and RESUME.
+2. **TDE_on_Standard_EKM.ipynb** - This notebook demonstrates how to enable TDE on SQL Server 2019 Standard Edition using EKM and Azure Key Vault.
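The SUSPEND/RESUME capability described above can be sketched in T-SQL roughly as follows. This is a minimal illustration, not the notebook's exact script: the database and certificate names are hypothetical, and it assumes a database master key and server certificate already exist (TDE on Standard Edition requires SQL Server 2019).

```sql
-- Illustrative sketch; MyDatabase and MyTdeCert are placeholder names.
USE MyDatabase;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE MyTdeCert;
ALTER DATABASE MyDatabase SET ENCRYPTION ON;

-- New in SQL Server 2019: pause the background encryption scan...
ALTER DATABASE MyDatabase SET ENCRYPTION SUSPEND;
-- ...and resume it later; the scan picks up where it left off.
ALTER DATABASE MyDatabase SET ENCRYPTION RESUME;
```

The scan state can be observed in `sys.dm_database_encryption_keys` while it is suspended.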

samples/features/security/tde-sql2019-standard/TDE_on_Standard.ipynb

Lines changed: 545 additions & 0 deletions
Large diffs are not rendered by default.

samples/features/security/tde-sql2019-standard/TDE_on_Standard_EKM.ipynb

Lines changed: 462 additions & 0 deletions
Large diffs are not rendered by default.

samples/features/sql-big-data-cluster/deployment/kubeadm/ubuntu/README.md

Lines changed: 5 additions & 2 deletions
@@ -24,5 +24,8 @@ In this example, we will deploy Kubernetes over multiple Linux machines (physical
 1. Execute [setup-k8s-master.sh](setup-k8s-master.sh/) script on the machine designated as Kubernetes master (_not_ under sudo su, as otherwise you'll set up the K8s .kube/config permissions for root)
 1. After successful initialization of the Kubernetes master, follow the kubeadm join commands output by the setup script on each agent machine
 1. Execute [setup-volumes-agent.sh](setup-volumes-agent.sh/) script on each agent machine to create volumes for local storage
-1. Execute ***kubectl apply -f local-storage-provisioner.yaml*** against the Kubernetes cluster to create the local storage provisioner.
-1. Now, you can deploy the SQL Server 2019 big data cluster following instructions [here](https://docs.microsoft.com/en-us/sql/big-data-cluster/deployment-guidance?view=sqlallproducts-allversions)
+1. Execute ***kubectl apply -f local-storage-provisioner.yaml*** against the Kubernetes cluster to create the local storage provisioner. This creates a Storage Class named "local-storage".
+1. Now you can deploy the SQL Server 2019 big data cluster following the instructions [here](https://docs.microsoft.com/en-us/sql/big-data-cluster/deployment-guidance?view=sqlallproducts-allversions).
+   Type in "local-storage" twice (once for data, once for logs) when azdata presents the following prompt:
+
+   `Kubernetes Storage Class - Config Path: spec.storage.data.className - Description: This indicates the name of the Kubernetes Storage Class to use. You must pre-provision the storage class and the persistent volumes or you can use a built in storage class if the platform you are deploying provides this capability. - Please provide a value:`

samples/features/sql-big-data-cluster/deployment/kubeadm/ubuntu/setup-volumes-agent.sh

Lines changed: 2 additions & 0 deletions
@@ -7,5 +7,7 @@ for i in $(seq 1 $PV_COUNT); do
 vol="vol$i"

 mkdir -p /mnt/local-storage/$vol
+# If you are wondering why the next line is needed, see https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/faqs.md#why-i-need-to-bind-mount-normal-directories-to-create-pvs-for-them
+# Experience showed that the mounts need to exist at cluster provision time. The fact that they are not put in fstab and will disappear after a reboot does not seem to be an issue once the K8s PVs are created.
 mount --bind /mnt/local-storage/$vol /mnt/local-storage/$vol
 done

samples/features/sql2019notebooks/README.md

Lines changed: 19 additions & 2 deletions
@@ -4,5 +4,22 @@ In this folder, you will find various notebooks that you can use in [Azure Data
 The [What's New](https://docs.microsoft.com/en-us/sql/sql-server/what-s-new-in-sql-server-ver15?view=sql-server-ver15) article covers all the *NEW* features in SQL Server 2019.

 ## Notebook List
-1. Intelligent Query Processing
-   * Scalar UDF Inlining
+### Intelligent Query Processing
+* **[Scalar_UDF_Inlining.ipynb](https://github.com/microsoft/sql-server-samples/blob/master/samples/features/intelligent-query-processing/notebooks/Scalar_UDF_Inlining.ipynb)** - This notebook demonstrates the benefits of scalar UDF inlining and shows how to find out which UDFs in your database can be inlined.
+* **[IQP_tablevariabledeferred.ipynb](https://github.com/microsoft/sqlworkshops/blob/master/sql2019lab/01_IntelligentPerformance/iqp/iqp_tablevariabledeferred.ipynb)** - In this example, you will learn about deferred compilation, the new cardinality estimation behavior for table variables.
+* **[Batch_Mode_on_Rowstore.ipynb](https://github.com/microsoft/sql-server-samples/blob/master/samples/features/intelligent-query-processing/notebooks/Batch_Mode_on_Rowstore.ipynb)** - In this notebook, you will learn how batch mode on rowstore can help queries execute faster on SQL Server 2019.
+
+### Security
+* **[TDE_on_Standard.ipynb](https://github.com/microsoft/sql-server-samples/blob/master/samples/features/security/tde-sql2019-standard/TDE_on_Standard.ipynb)** - This notebook demonstrates how to enable TDE on SQL Server 2019 Standard Edition, along with encryption scan SUSPEND and RESUME.
+* **[TDE_on_Standard_EKM.ipynb](https://github.com/microsoft/sql-server-samples/blob/master/samples/features/security/tde-sql2019-standard/TDE_on_Standard_EKM.ipynb)** - This notebook demonstrates how to enable TDE on SQL Server 2019 Standard Edition using EKM and Azure Key Vault.
+
+### In-Memory Database
+* **[MemoryOptimizedTempDBMetadata-TSQL.ipynb](https://github.com/microsoft/sql-server-samples/blob/master/samples/features/in-memory-database/memory-optimized-tempdb-metadata/MemoryOptimizedTempDBMetadata-TSQL.ipynb)** - A T-SQL notebook that shows the benefits of memory-optimized tempdb metadata.
+* **[MemoryOptmizedTempDBMetadata-Python.ipynb](https://github.com/microsoft/sql-server-samples/blob/master/samples/features/in-memory-database/memory-optimized-tempdb-metadata/MemoryOptmizedTempDBMetadata-Python.ipynb)** - A Python notebook that shows the benefits of memory-optimized tempdb metadata.
+
+### Availability
+* **[Basic_ADR.ipynb](https://github.com/microsoft/sqlworkshops/blob/master/sql2019workshop/sql2019wks/04_Availability/adr/basic_adr.ipynb)** - In this notebook, you will see how fast rollback can now be with Accelerated Database Recovery. You will also see that a long active transaction does not affect the ability to truncate the transaction log.
+* **[Recovery_ADR.ipynb](https://github.com/microsoft/sqlworkshops/blob/master/sql2019workshop/sql2019wks/04_Availability/adr/recovery_adr.ipynb)** - In this example, you will see how Accelerated Database Recovery speeds up recovery.

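As a companion to the Scalar_UDF_Inlining notebook listed above, eligibility for inlining can be checked from the catalog. This is a minimal sketch, assuming SQL Server 2019 or later; the database name is only an example:

```sql
-- Which scalar UDFs are eligible for inlining (SQL Server 2019+)
USE WideWorldImporters;  -- example database
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS [schema],
       OBJECT_NAME(m.object_id)        AS udf_name,
       m.is_inlineable
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE o.type = 'FN';  -- scalar user-defined functions
```

A UDF with `is_inlineable = 1` is a candidate for inlining; whether it is actually inlined in a given plan also depends on database compatibility level and query-level settings.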
samples/features/sql2019notebooks/intelligent-query-processing/README.md

Lines changed: 0 additions & 4 deletions
This file was deleted.

samples/manage/azure-sql-db-managed-instance/compare-environment-settings/get-properties.sql

Lines changed: 24 additions & 3 deletions
@@ -1,12 +1,12 @@
-declare @db_name sysname = 'master'
+USE <database_name, , > --> put the database name here like WideWorldImporters

 begin
 declare @result NVARCHAR(MAX);
-set @result = (select compatibility_level, recovery_model_desc, snapshot_isolation_state_desc, is_read_committed_snapshot_on,
+set @result = (select database_name = name, compatibility_level, recovery_model_desc, snapshot_isolation_state_desc, is_read_committed_snapshot_on,
 is_auto_update_stats_on, is_auto_update_stats_async_on, delayed_durability_desc,
 is_encrypted, is_auto_create_stats_incremental_on, is_arithabort_on, is_ansi_warnings_on, is_parameterization_forced
 from sys.databases
-where name = @db_name
+where name = db_name()
 for xml raw('db'), elements);
 set @result += (select compatibility_level, snapshot_isolation_state_desc, is_read_committed_snapshot_on,
 is_auto_update_stats_on, is_auto_update_stats_async_on, delayed_durability_desc,
@@ -53,6 +53,8 @@ where name in ('cost threshold for parallelism','cursor threshold','fill factor
 for xml raw, elements
 );
 set @result += (select name = 'version', value = @@VERSION for xml raw, elements)
+set @result += (select name = 'script version', value = '1.0' for xml raw, elements)
+set @result += (select name = 'date', value = GETUTCDATE() for xml raw, elements)

 set @result += isnull
 ((SELECT scheduler_count, scheduler_total_count FROM sys.dm_os_sys_info
@@ -67,6 +69,25 @@ isnull((SELECT name = REPLACE([type], 'MEMORYCLERK_', 'MEMORY:')
 HAVING sum(pages_kb) /1024. /1024 > 1
 for xml raw, elements),'');

+set @result +=
+isnull((
+select name = 'INDEX:'+schema_name(schema_id)+'.'+object_name(t.object_id)+'.'+ix.name,
+value = concat(ix.type_desc COLLATE SQL_Latin1_General_CP1_CI_AS,
+'/disabled:',is_disabled,'/row_locks:',allow_row_locks,'/page_locks:',allow_page_locks,'/filter:',filter_definition,'/compression_delay:',compression_delay)
+from sys.indexes ix
+join sys.tables t on t.object_id = ix.object_id
+where ix.type <> 0
+for xml raw, elements),'');
+
+set @result +=
+isnull((
+select name = 'JOB::'+j.name + '/' + s.step_name,
+value = subsystem
+from msdb.dbo.sysjobs j
+join msdb.dbo.sysjobsteps s on j.job_id = s.job_id
+for xml raw, elements),'');
+
 select cast(@result as xml);

 end;
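The accumulation pattern used throughout get-properties.sql can be reduced to a minimal sketch: each `SELECT ... FOR XML RAW, ELEMENTS` produces an XML fragment as a string, the fragments are concatenated into `@result`, and the final CAST renders them as one XML document. A standalone illustration (not the full script; it touches no server settings):

```sql
-- Minimal sketch of the FOR XML accumulation pattern used above
DECLARE @result NVARCHAR(MAX);
SET @result  = (SELECT name = 'version', value = @@VERSION    FOR XML RAW, ELEMENTS);
SET @result += (SELECT name = 'date',    value = GETUTCDATE() FOR XML RAW, ELEMENTS);
-- Each fragment becomes a <row><name>...</name><value>...</value></row> element
SELECT CAST(@result AS XML);
```

Wrapping the per-section queries in `isnull((...), '')`, as the full script does, keeps the concatenation from collapsing to NULL when a section returns no rows.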
