
Commit 2a52387

CU13 preparedness
1 parent 4abc4cf commit 2a52387

10 files changed

Lines changed: 38 additions & 43 deletions


samples/features/sql-big-data-cluster/deployment/kubeadm/README.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# Create a Kubernetes cluster using Kubeadm on Ubuntu 16.04 LTS or 18.04 LTS
+# Create a Kubernetes cluster using Kubeadm on Ubuntu 20.04 LTS
 
 
 ## __[ubuntu](ubuntu/)__

samples/features/sql-big-data-cluster/deployment/kubeadm/ubuntu-single-node-vm-ad/setup-bdc-ad.sh

Lines changed: 3 additions & 3 deletions
@@ -49,8 +49,8 @@ export DEBIAN_FRONTEND=noninteractive
 
 # Kube version.
 #
-KUBE_DPKG_VERSION=1.16.3-00
-KUBE_VERSION=1.16.3
+KUBE_DPKG_VERSION=1.20.7-00
+KUBE_VERSION=1.20.7
 
 # Wait for 5 minutes for the cluster to be ready.
 #
@@ -61,7 +61,7 @@ RETRY_INTERVAL=5
 #
 export DOCKER_REGISTRY="mcr.microsoft.com"
 export DOCKER_REPOSITORY="mssql/bdc"
-export DOCKER_TAG="2019-CU8-ubuntu-16.04"
+export DOCKER_TAG="2019-CU13-ubuntu-20.04"
 
 # Variables used for azdata cluster creation.
 #
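The exported DOCKER_* variables above select the CU13 image for the single-node AD deployment. How the rest of setup-bdc-ad.sh consumes them is outside this hunk; a minimal sketch, assuming the script feeds them into a deployment profile the same way the deploy-bdc-aks.sh scripts later in this commit do (the profile name custom-bdc and the kubeadm-dev-test source are placeholders, not taken from this diff):

```bash
# Hypothetical sketch: thread the exported image settings into a BDC deployment
# profile with azdata, mirroring the config-replace pattern used elsewhere in this commit.
azdata bdc config init --source kubeadm-dev-test --target custom-bdc --force
azdata bdc config replace -p custom-bdc/control.json -j "$.spec.docker.registry=$DOCKER_REGISTRY"
azdata bdc config replace -p custom-bdc/control.json -j "$.spec.docker.repository=$DOCKER_REPOSITORY"
azdata bdc config replace -p custom-bdc/control.json -j "$.spec.docker.imageTag=$DOCKER_TAG"
```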

samples/features/sql-big-data-cluster/deployment/kubeadm/ubuntu-single-node-vm/setup-bdc.sh

Lines changed: 3 additions & 3 deletions
@@ -34,8 +34,8 @@ export DEBIAN_FRONTEND=noninteractive
 
 # Kube version.
 #
-KUBE_DPKG_VERSION=1.18.3-00
-KUBE_VERSION=1.18.3
+KUBE_DPKG_VERSION=1.20.7-00
+KUBE_VERSION=1.20.7
 
 # Wait for 5 minutes for the cluster to be ready.
 #
@@ -46,7 +46,7 @@ RETRY_INTERVAL=5
 #
 export DOCKER_REGISTRY="mcr.microsoft.com"
 export DOCKER_REPOSITORY="mssql/bdc"
-export DOCKER_TAG="2019-CU8-ubuntu-16.04"
+export DOCKER_TAG="2019-CU13-ubuntu-20.04"
 
 # Variables used for azdata cluster creation.
 #
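The hunk context also mentions waiting five minutes for the cluster to become ready (RETRY_INTERVAL=5). The wait logic itself is not part of the diff; a minimal sketch of such a readiness loop, assuming a kubectl-based check (the TIMEOUT variable and the exact condition are illustrative):

```bash
# Illustrative readiness loop: poll until a node reports Ready, give up after 5 minutes.
TIMEOUT=300
RETRY_INTERVAL=5
elapsed=0
until kubectl get nodes --no-headers 2>/dev/null | grep -qw Ready; do
    if [ "$elapsed" -ge "$TIMEOUT" ]; then
        echo "Cluster did not become ready within ${TIMEOUT}s" >&2
        exit 1
    fi
    sleep "$RETRY_INTERVAL"
    elapsed=$((elapsed + RETRY_INTERVAL))
done
```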

samples/features/sql-big-data-cluster/deployment/kubeadm/ubuntu/README.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
-# Create a Kubernetes cluster using Kubeadm on Ubuntu 16.04 LTS or 18.04 LTS
+# Create a Kubernetes cluster using Kubeadm on Ubuntu 20.04 LTS
 
-In this example, we will deploy Kubernetes over multiple Linux machines (physical or virtualized) using kubeadm utility. These instructions have been tested primarily with Ubuntu 16.04 LTS & 18.04 LTS versions.
+In this example, we will deploy Kubernetes over multiple Linux machines (physical or virtualized) using the kubeadm utility. These instructions have been tested primarily with Ubuntu 20.04 LTS.
 
 ## Pre-requisites

samples/features/sql-big-data-cluster/deployment/kubeadm/ubuntu/setup-k8s-master.sh

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 
 # Initialize a kubernetes cluster on the current node.
 #
-KUBE_VERSION=1.16.2
+KUBE_VERSION=1.20.7
 sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=$KUBE_VERSION
 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
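kubeadm init only brings up the control plane. The remaining steps are not shown in this diff; a sketch of the typical follow-up, assuming a flannel-style pod network (the manifest filename below is a placeholder) since the script pins --pod-network-cidr to flannel's default 10.244.0.0/16:

```bash
# 1. Install a pod network add-on whose CIDR matches --pod-network-cidr=10.244.0.0/16.
#    (kube-flannel.yml is a placeholder path; fetch the manifest for your flannel release.)
kubectl apply -f kube-flannel.yml
# 2. Print the join command to run with sudo on each worker node.
kubeadm token create --print-join-command
```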

samples/features/sql-big-data-cluster/deployment/kubeadm/ubuntu/setup-k8s-prereqs.sh

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
 deb http://apt.kubernetes.io/ kubernetes-xenial main
 EOF
 
-KUBE_DPKG_VERSION=1.16.2-00
+KUBE_DPKG_VERSION=1.20.7-00
 apt-get update
 apt-get install -y ebtables ethtool
 apt-get install -y docker.io
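The hunk ends before the Kubernetes packages themselves are installed. A likely continuation, assuming the script installs the pinned versions and holds them (standard kubeadm practice, not shown in the diff):

```bash
# Install kubelet/kubeadm/kubectl at the pinned version and hold them so a later
# apt-get upgrade does not move the node off 1.20.7.
apt-get install -y kubelet=$KUBE_DPKG_VERSION kubeadm=$KUBE_DPKG_VERSION kubectl=$KUBE_DPKG_VERSION
apt-mark hold kubelet kubeadm kubectl
```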

samples/features/sql-big-data-cluster/deployment/platform-ops/bdc-aks-ops/deploy-bdc-aks.sh

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ done
 azdata bdc config init --source aks-dev-test --target bdc-aks --force
 
 #Configurations for BDC deployment
-azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.docker.imageTag=2019-CU12-ubuntu-20.04"
+azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.docker.imageTag=2019-CU13-ubuntu-20.04"
 azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.storage.data.className=default"
 azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.storage.logs.className=default"
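The same imageTag bump appears in the two deployment scripts below. Before running azdata bdc create, the replacement can be sanity-checked in the generated profile; a quick check, assuming jq is available on the deployment machine:

```bash
# Confirm the CU13 tag landed in the generated control.json before deploying.
jq '.spec.docker.imageTag' private-bdc-aks/control.json
# expected output: "2019-CU13-ubuntu-20.04"
```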

samples/features/sql-big-data-cluster/deployment/platform-ops/deploy-bdc-aks.sh

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ done
 azdata bdc config init --source aks-dev-test --target bdc-aks --force
 
 #Configurations for BDC deployment
-azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.docker.imageTag=2019-CU12-ubuntu-20.04"
+azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.docker.imageTag=2019-CU13-ubuntu-20.04"
 azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.storage.data.className=default"
 azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.storage.logs.className=default"

samples/features/sql-big-data-cluster/deployment/private-aks/scripts/deploy-bdc-private-aks.sh

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ done
 azdata bdc config init --source aks-dev-test --target private-bdc-aks --force
 
 #Configurations for BDC deployment
-azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.docker.imageTag=2019-CU12-ubuntu-20.04"
+azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.docker.imageTag=2019-CU13-ubuntu-20.04"
 azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.storage.data.className=default"
 azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.storage.logs.className=default"

samples/features/sql-big-data-cluster/spark/config-install/installpackage_Spark.ipynb

Lines changed: 24 additions & 29 deletions
@@ -17,19 +17,21 @@
 {
 "cell_type": "markdown",
 "source": [
-"<p align=\"center\">\n",
-"<img src =\"https://raw.githubusercontent.com/microsoft/azuredatastudio/master/src/sql/media/microsoft_logo_gray.svg?sanitize=true\" width=\"250\" align=\"center\">\n",
-"</p>\n",
-"\n",
-"# **Spark Package Management in SQL Server 2019 Big Data Clusters**\n",
-"This guide covers installing packages and submitting jobs to a SQL Server 2019 Big Data Cluster using Spark.\n",
-"* Built-In Tools\n",
-"* Install Packages from a Maven Repository onto the Spark Cluster at Runtime\n",
-"* Import .jar from HDFS for use at runtime\n",
-"* Import .jar at runtime through Azure Data Studio notebook cell configuration\n",
-"* Install Python Packages at Runtime for use with PySpark \n",
-"* Submit local .jar or python file\n",
-"<!-- <span style=\"color:red\"><font size=\"3\">Please press the \"Run Cells\" button to run the notebook</font></span> -->"
+"<p align=\"center\">\r\n",
+"<img src =\"https://raw.githubusercontent.com/microsoft/azuredatastudio/master/src/sql/media/microsoft_logo_gray.svg?sanitize=true\" width=\"250\" align=\"center\">\r\n",
+"</p>\r\n",
+"\r\n",
+"# **Spark Package Management in SQL Server 2019 Big Data Clusters**\r\n",
+"This guide covers installing packages and submitting jobs to a SQL Server 2019 Big Data Cluster using Spark.\r\n",
+"* Built-In Tools\r\n",
+"* Install Packages from a Maven Repository onto the Spark Cluster at Runtime\r\n",
+"* Import .jar from HDFS for use at runtime\r\n",
+"* Import .jar at runtime through Azure Data Studio notebook cell configuration\r\n",
+"* Install Python Packages at Runtime for use with PySpark \r\n",
+"* Submit local .jar or python file\r\n",
+"<!-- <span style=\"color:red\"><font size=\"3\">Please press the \"Run Cells\" button to run the notebook</font></span> -->\r\n",
+"\r\n",
+"For more information on package management, refer to [Spark library management](https://docs.microsoft.com/sql/big-data-cluster/spark-install-packages?view=sql-server-ver15)"
 ],
 "metadata": {
 "azdata_cell_guid": "cbc8ced8-8931-4302-b252-7e7e478a16d4"
@@ -38,13 +40,10 @@
 {
 "cell_type": "markdown",
 "source": [
-"# Built-in Tools\n",
-"* Spark and Hadoop base packages\n",
-"* Python 3.5 and Python 2.7\n",
-"* Pandas, Sklearn, Numpy, and other data processing packages.\n",
-"* R and MRO packages\n",
-"* Sparklyr\n",
-""
+"# Built-in Tools\r\n",
+"* Spark and Hadoop base packages\r\n",
+"* Python 3.8 with PySpark and Pandas, Sklearn, Numpy, and other data processing libraries.\r\n",
+"* R 3.5 with Spark.R, sparklyr and MRO packages\r\n"
 ],
 "metadata": {
 "azdata_cell_guid": "2fc8a069-115e-4d9b-bedc-5c55f79466b1"
@@ -59,8 +58,7 @@
 "```\r\n",
 "%%configure -f \\\r\n",
 "{\"conf\": {\"spark.jars.packages\": \"com.microsoft.azure:azure-eventhubs-spark_2.11:2.3.1\"}}\r\n",
-"```\r\n",
-""
+"```\r\n"
 ],
 "metadata": {
 "azdata_cell_guid": "a0fecc05-f094-4dda-9afe-0de8ddad87eb"
@@ -76,8 +74,7 @@
 "```\n",
 "%%configure -f\n",
 "{\"conf\": {\"spark.jars\": \"/jar/mycodeJar.jar\"}}\n",
-"```\n",
-""
+"```\n"
 ],
 "metadata": {
 "azdata_cell_guid": "c5e65fa2-faf0-4e22-aac1-69d7ff8c9989"
@@ -91,8 +88,7 @@
 "```\n",
 "%%configure -f\n",
 "{\"conf\": {\"spark.jars\": \"/jar/mycodeJar.jar\"}}\n",
-"```\n",
-""
+"```\n"
 ],
 "metadata": {
 "azdata_cell_guid": "6fc4085f-e142-4355-b215-148dbf6c5b86"
@@ -140,12 +136,11 @@
 "\r\n",
 "* [Submit Spark jobs on SQL Server Big Data Clusters in Azure Data Studio](https://docs.microsoft.com/en-us/sql/big-data-cluster/spark-submit-job?view=sqlallproducts-allversions)\r\n",
 "* [Submit Spark jobs on SQL Server Big Data Clusters in IntelliJ](https://docs.microsoft.com/en-us/sql/big-data-cluster/spark-submit-job-intellij-tool-plugin?view=sqlallproducts-allversions)\r\n",
-"* [Submit Spark jobs on SQL Server big data cluster in Visual Studio Code](https://docs.microsoft.com/en-us/sql/big-data-cluster/spark-hive-tools-vscode?view=sqlallproducts-allversions)\r\n",
-""
+"* [Submit Spark jobs on SQL Server big data cluster in Visual Studio Code](https://docs.microsoft.com/en-us/sql/big-data-cluster/spark-hive-tools-vscode?view=sqlallproducts-allversions)\r\n"
 ],
 "metadata": {
 "azdata_cell_guid": "7d1b55c0-1961-45f7-8449-a24a913106e4"
 }
 }
 ]
-}
+}
