Commit d3a13a6

Merge pull request #959 from cloudmelon/bdc-platform-ops
Refresh deployment scripts with the tested configurations on BDC release notes
2 parents a82e76f + 796e135 commit d3a13a6

9 files changed

Lines changed: 336 additions & 9 deletions

Lines changed: 91 additions & 0 deletions
@@ -0,0 +1,91 @@
# Deploy BDC on Azure Kubernetes Service cluster

SQL Server Big Data Clusters allow you to deploy scalable clusters of SQL Server, Spark, and HDFS containers running on Kubernetes, so you can easily combine and analyze your high-value relational data with high-volume big data.

This repository contains the scripts that you can use to deploy a BDC cluster on an Azure Kubernetes Service (AKS) cluster with basic networking ( Kubenet ) and advanced networking ( CNI ).

This repository contains 3 bash scripts :

- **deploy-cni-aks.sh** : Deploys an AKS cluster with the CNI networking plugin. It fits the use case where you need BDC on an AKS cluster that integrates with existing virtual networks in Azure; this network model allows greater separation of resources and controls in an enterprise environment.

- **deploy-kubenet-aks.sh** : Deploys an AKS cluster with kubenet networking, the AKS default. Kubenet is a basic network plugin, available on Linux only. Provisioning the cluster also creates an Azure virtual network and a subnet: your nodes get an IP address from the subnet, while all pods receive an IP address from a logically different address space.

- **deploy-bdc-aks.sh** : Deploys Big Data Clusters ( BDC ) on the AKS cluster. Please see the inline comments for the deployment steps and the configurations you can customize.
## Above all

SQL Server Big Data Clusters is a fully containerized solution orchestrated by Kubernetes. Starting with CU12, each release of SQL Server Big Data Clusters is tested against a fixed configuration of components. The configuration is evaluated with each release, and adjustments are made to stay in line with the ecosystem as Kubernetes continues to evolve. For further information, see [Tested Configurations from SQL Server Big Data Clusters platform release notes](https://docs.microsoft.com/en-us/sql/big-data-cluster/release-notes-big-data-cluster?view=sql-server-ver15#tested-configurations).

Please note that Kubernetes server version 1.13 or later is required to deploy big data clusters. Therefore you may need to use the `--kubernetes-version` parameter to specify a version different from the AKS default.
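Before passing a version to `--kubernetes-version`, it can help to guard against anything below that 1.13 minimum. A minimal sketch (the `version_ok` helper is illustrative, and assumes a `sort` implementation that supports version sort via `-V`):

```shell
# Succeed if the given version meets the 1.13 minimum noted above.
version_ok() {
  min="1.13.0"
  # sort -V orders versions numerically; if min sorts first, $1 >= min
  [ "$(printf '%s\n%s\n' "$min" "$1" | sort -V | head -n 1)" = "$min" ]
}

version_ok "1.15.7" && echo "supported" || echo "below minimum"
```

You can also list the versions actually offered in your region with `az aks get-versions --location <region> --output table` before choosing one.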
## Prerequisites

You can run these scripts from a client environment running Linux or WSL/WSL2.

The following link lists the common big data cluster tools and how to install them:

https://docs.microsoft.com/en-us/sql/big-data-cluster/deploy-big-data-tools?view=sql-server-ver15
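The scripts below call the Azure CLI (`az`) and `azdata`, and you will typically want `kubectl` as well. A quick sketch to confirm the client tools are on `PATH` before starting (the `check_tools` helper is illustrative; the tool list follows the linked page):

```shell
# Print any required client tool that is not found on PATH.
check_tools() {
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
  done
}

check_tools az kubectl azdata
```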
## Instructions

### deploy-cni-aks.sh

1. Download the script to the location you plan to use for the deployment

``` bash
curl --output deploy-cni-aks.sh https://raw.githubusercontent.com/microsoft/sql-server-samples/master/samples/features/sql-big-data-cluster/deployment/platform-ops/scripts/deploy-cni-aks.sh
```

2. Make the script executable

``` bash
chmod +x deploy-cni-aks.sh
```

3. Run the script (make sure you are running with sudo)

``` bash
sudo ./deploy-cni-aks.sh
```

### deploy-kubenet-aks.sh

1. Download the script to the location you plan to use for the deployment

``` bash
curl --output deploy-kubenet-aks.sh https://raw.githubusercontent.com/microsoft/sql-server-samples/master/samples/features/sql-big-data-cluster/deployment/platform-ops/scripts/deploy-kubenet-aks.sh
```

2. Make the script executable

``` bash
chmod +x deploy-kubenet-aks.sh
```

3. Run the script (make sure you are running with sudo)

``` bash
sudo ./deploy-kubenet-aks.sh
```

### deploy-bdc-aks.sh

1. Download the script to the location you plan to use for the deployment

``` bash
curl --output deploy-bdc-aks.sh https://raw.githubusercontent.com/microsoft/sql-server-samples/master/samples/features/sql-big-data-cluster/deployment/platform-ops/scripts/deploy-bdc-aks.sh
```

2. Make the script executable

``` bash
chmod +x deploy-bdc-aks.sh
```

3. Run the script (make sure you are running with sudo)

``` bash
sudo ./deploy-bdc-aks.sh
```
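After `deploy-bdc-aks.sh` completes, you can sanity-check the deployment with `azdata bdc status show`, or by counting running pods in the `mssql-cluster` namespace (the default cluster name these scripts log in to). A small sketch, with the parsing helper being illustrative:

```shell
# Count pods reported as Running in `kubectl get pods` output.
# Pipe real output in, e.g.: kubectl get pods -n mssql-cluster | count_running
count_running() {
  grep -c ' Running ' || true
}

# Example against captured output (illustrative sample, not real cluster state):
sample_output='NAME           READY   STATUS    RESTARTS   AGE
control-abcd   2/2     Running   0          10m
master-0       4/4     Running   0          8m'
printf '%s\n' "$sample_output" | count_running
```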
Lines changed: 63 additions & 0 deletions
@@ -0,0 +1,63 @@
#!/bin/bash
#Get Subscription ID, resource group, region and Kubernetes version. They are used as defaults for resource creation.
#

read -p "Your Azure Subscription: " subscription
echo
read -p "Your Resource Group Name: " resourcegroup
echo
read -p "In which region you're deploying: " region
echo
read -p "Kubernetes version for the AKS cluster: " version
echo

#Define a set of environment variables to be used in resource creations.
export SUBID=$subscription

export REGION_NAME=$region
export RESOURCE_GROUP=$resourcegroup
export KUBERNETES_VERSION=$version
export SUBNET_NAME=aks-subnet
export VNET_NAME=bdc-vnet
export AKS_NAME=bdcakscluster

#Set the Azure subscription currently in use
az account set --subscription $SUBID

#Create Azure Resource Group
az group create -n $RESOURCE_GROUP -l $REGION_NAME

#Create Azure Virtual Network to host your AKS cluster
az network vnet create \
--resource-group $RESOURCE_GROUP \
--location $REGION_NAME \
--name $VNET_NAME \
--address-prefixes 10.0.0.0/8 \
--subnet-name $SUBNET_NAME \
--subnet-prefix 10.1.0.0/16

SUBNET_ID=$(az network vnet subnet show \
--resource-group $RESOURCE_GROUP \
--vnet-name $VNET_NAME \
--name $SUBNET_NAME \
--query id -o tsv)

#Create AKS Cluster
az aks create \
--resource-group $RESOURCE_GROUP \
--name $AKS_NAME \
--kubernetes-version $KUBERNETES_VERSION \
--load-balancer-sku standard \
--network-plugin azure \
--vnet-subnet-id $SUBNET_ID \
--docker-bridge-address 172.17.0.1/16 \
--dns-service-ip 10.2.0.10 \
--service-cidr 10.2.0.0/24 \
--node-vm-size Standard_D13_v2 \
--node-count 2 \
--generate-ssh-keys

az aks get-credentials -g $RESOURCE_GROUP -n $AKS_NAME
Lines changed: 63 additions & 0 deletions
@@ -0,0 +1,63 @@
#!/bin/bash
#Get Subscription ID, resource group, region and Kubernetes version. They are used as defaults for resource creation.
#

read -p "Your Azure Subscription: " subscription
echo
read -p "Your Resource Group Name: " resourcegroup
echo
read -p "In which region you're deploying: " region
echo
read -p "Kubernetes version for the AKS cluster: " version
echo

#Define a set of environment variables to be used in resource creations.
export SUBID=$subscription

export REGION_NAME=$region
export RESOURCE_GROUP=$resourcegroup
export KUBERNETES_VERSION=$version
export SUBNET_NAME=aks-subnet
export VNET_NAME=bdc-vnet
export AKS_NAME=bdcakscluster

#Set the Azure subscription currently in use
az account set --subscription $SUBID

#Create Azure Resource Group
az group create -n $RESOURCE_GROUP -l $REGION_NAME

#Create Azure Virtual Network to host your AKS cluster
az network vnet create \
--resource-group $RESOURCE_GROUP \
--location $REGION_NAME \
--name $VNET_NAME \
--address-prefixes 10.0.0.0/8 \
--subnet-name $SUBNET_NAME \
--subnet-prefix 10.1.0.0/16

SUBNET_ID=$(az network vnet subnet show \
--resource-group $RESOURCE_GROUP \
--vnet-name $VNET_NAME \
--name $SUBNET_NAME \
--query id -o tsv)

#Create AKS Cluster
az aks create \
--resource-group $RESOURCE_GROUP \
--name $AKS_NAME \
--kubernetes-version $KUBERNETES_VERSION \
--load-balancer-sku standard \
--network-plugin azure \
--vnet-subnet-id $SUBNET_ID \
--docker-bridge-address 172.17.0.1/16 \
--dns-service-ip 10.2.0.10 \
--service-cidr 10.2.0.0/24 \
--node-vm-size Standard_D13_v2 \
--node-count 2 \
--generate-ssh-keys

az aks get-credentials -g $RESOURCE_GROUP -n $AKS_NAME
Lines changed: 33 additions & 0 deletions
@@ -0,0 +1,33 @@
#!/bin/bash

#Get admin username and password as input. They are used as defaults for the controller, SQL Server master instance (sa account) and Knox.
#

while true; do
read -p "Create Admin username for Big Data Cluster: " bdcadmin
echo
read -s -p "Create Password for Big Data Cluster: " password
echo
read -s -p "Confirm your Password: " password2
echo
[ "$password" = "$password2" ] && break
echo "Password mismatch. Please try again."
done

#Export the credentials that azdata reads during deployment
export AZDATA_USERNAME=$bdcadmin
export AZDATA_PASSWORD=$password

#Create BDC custom profile
azdata bdc config init --source aks-dev-test --target bdc-aks --force

#Configurations for BDC deployment
azdata bdc config replace -p bdc-aks/control.json -j "$.spec.docker.imageTag=2019-CU12-ubuntu-20.04"
azdata bdc config replace -p bdc-aks/control.json -j "$.spec.storage.data.className=default"
azdata bdc config replace -p bdc-aks/control.json -j "$.spec.storage.logs.className=default"

azdata bdc create --config-profile bdc-aks --accept-eula yes

#Login and get endpoint list for the cluster.
#
azdata login -n mssql-cluster

azdata bdc endpoint list --output table
Lines changed: 33 additions & 0 deletions
@@ -0,0 +1,33 @@
#!/bin/bash

#Get admin username and password as input. They are used as defaults for the controller, SQL Server master instance (sa account) and Knox.
#

while true; do
read -p "Create Admin username for Big Data Cluster: " bdcadmin
echo
read -s -p "Create Password for Big Data Cluster: " password
echo
read -s -p "Confirm your Password: " password2
echo
[ "$password" = "$password2" ] && break
echo "Password mismatch. Please try again."
done

#Export the credentials that azdata reads during deployment
export AZDATA_USERNAME=$bdcadmin
export AZDATA_PASSWORD=$password

#Create BDC custom profile
azdata bdc config init --source aks-dev-test --target bdc-aks --force

#Configurations for BDC deployment
azdata bdc config replace -p bdc-aks/control.json -j "$.spec.docker.imageTag=2019-CU12-ubuntu-20.04"
azdata bdc config replace -p bdc-aks/control.json -j "$.spec.storage.data.className=default"
azdata bdc config replace -p bdc-aks/control.json -j "$.spec.storage.logs.className=default"

azdata bdc create --config-profile bdc-aks --accept-eula yes

#Login and get endpoint list for the cluster.
#
azdata login -n mssql-cluster

azdata bdc endpoint list --output table
Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
#!/bin/bash
#Get Subscription ID, resource group, region and Kubernetes version. They are used as defaults for resource creation.
#

read -p "Your Azure Subscription: " subscription
echo
read -p "Your Resource Group Name: " resourcegroup
echo
read -p "In which region you're deploying: " region
echo
read -p "Kubernetes version for the AKS cluster: " version
echo

#Define a set of environment variables to be used in resource creations.
export SUBID=$subscription

export REGION_NAME=$region
export RESOURCE_GROUP=$resourcegroup
export KUBERNETES_VERSION=$version
export AKS_NAME=bdcakscluster

#Set the Azure subscription currently in use
az account set --subscription $SUBID

#Create Azure Resource Group
az group create -n $RESOURCE_GROUP -l $REGION_NAME

#Create AKS Cluster (kubenet is the default network plugin)
az aks create \
--resource-group $RESOURCE_GROUP \
--name $AKS_NAME \
--kubernetes-version $KUBERNETES_VERSION \
--node-vm-size Standard_D13_v2 \
--node-count 2 \
--generate-ssh-keys

az aks get-credentials -g $RESOURCE_GROUP -n $AKS_NAME

samples/features/sql-big-data-cluster/deployment/private-aks/README.md

Lines changed: 3 additions & 1 deletion
@@ -1,13 +1,15 @@
 # Deploy BDC in private AKS cluster with Advanced Networking (CNI)
 
+SQL Server Big Data Clusters allow you to deploy scalable clusters of SQL Server, Spark, and HDFS containers running on Kubernetes, so you can easily combine and analyze your high-value relational data with high-volume big data.
+
 This repository contains the scripts that you can use to deploy a BDC cluster in Azure Kubernetes Service (AKS) private cluster with advanced networking ( CNI ).
 
 This repository contains 3 bash scripts :
 - **deploy-private-aks.sh** : You can use it to deploy private AKS cluster with private endpoint, it fits the use case that you need to deploy BDC with AKS private cluster.
 
 - **deploy-private-aks-udr.sh** : You can use it to deploy private AKS cluster with private endpoint, it fits the use case that you need to deploy BDC with AKS private cluster and limit egress traffic with UDR ( User-defined Routes ).
 
-- **deploy-bdc.sh** : You can use it to deploy Big Data Clusters ( BDC ) in private deployment mode on private AKS cluster with or without User-defined routes based on your project requirements. **Note** : Please use this script in the Azure VM which manages your AKS private cluster.
+- **deploy-bdc.sh** : You can use it to deploy Big Data Clusters ( BDC ) in private deployment mode on private AKS cluster with or without user-defined routes based on your project requirements. **Note** : Please use this script in the Azure VM or Azure Bastion instance which manages your AKS private cluster.
 
 
 ## Prerequisites

samples/features/sql-big-data-cluster/deployment/private-aks/scripts/deploy-bdc.sh renamed to samples/features/sql-big-data-cluster/deployment/private-aks/scripts/deploy-bdc-private-aks.sh

Lines changed: 8 additions & 8 deletions
@@ -18,16 +18,16 @@ done
 azdata bdc config init --source aks-dev-test --target private-bdc-aks --force
 
 #Configurations for BDC deployment
-azdata bdc config replace -c private-bdc-aks/control.json -j "$.spec.docker.imageTag=2019-CU6-ubuntu-16.04"
-azdata bdc config replace -c private-bdc-aks/control.json -j "$.spec.storage.data.className=default"
-azdata bdc config replace -c private-bdc-aks/control.json -j "$.spec.storage.logs.className=default"
+azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.docker.imageTag=2019-CU12-ubuntu-20.04"
+azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.storage.data.className=default"
+azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.storage.logs.className=default"
 
-azdata bdc config replace -c private-bdc-aks/control.json -j "$.spec.endpoints[0].serviceType=NodePort"
-azdata bdc config replace -c private-bdc-aks/control.json -j "$.spec.endpoints[1].serviceType=NodePort"
+azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.endpoints[0].serviceType=NodePort"
+azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.endpoints[1].serviceType=NodePort"
 
-azdata bdc config replace -c private-bdc-aks/bdc.json -j "$.spec.resources.master.spec.endpoints[0].serviceType=NodePort"
-azdata bdc config replace -c private-bdc-aks/bdc.json -j "$.spec.resources.gateway.spec.endpoints[0].serviceType=NodePort"
-azdata bdc config replace -c private-bdc-aks/bdc.json -j "$.spec.resources.appproxy.spec.endpoints[0].serviceType=NodePort"
+azdata bdc config replace -p private-bdc-aks/bdc.json -j "$.spec.resources.master.spec.endpoints[0].serviceType=NodePort"
+azdata bdc config replace -p private-bdc-aks/bdc.json -j "$.spec.resources.gateway.spec.endpoints[0].serviceType=NodePort"
+azdata bdc config replace -p private-bdc-aks/bdc.json -j "$.spec.resources.appproxy.spec.endpoints[0].serviceType=NodePort"
 
 #In case you're deploying BDC in HA mode ( aks-dev-test-ha profile ) please also use the following command
 #azdata bdc config replace -c private-bdc-aks /bdc.json -j "$.spec.resources.master.spec.endpoints[1].serviceType=NodePort"

samples/features/sql-big-data-cluster/deployment/private-aks/scripts/deploy-private-aks.sh

Lines changed: 2 additions & 0 deletions
@@ -19,6 +19,7 @@ export SUBID=$subscription
 
 export REGION_NAME=$region
 export RESOURCE_GROUP=$resourcegroup
+export KUBERNETES_VERSION=$version
 export SUBNET_NAME=aks-subnet
 export VNET_NAME=bdc-vnet
 export AKS_NAME=bdcaksprivatecluster
@@ -50,6 +51,7 @@ az aks create \
 --name $AKS_NAME \
 --load-balancer-sku standard \
 --enable-private-cluster \
+--kubernetes-version $version \
 --network-plugin azure \
 --vnet-subnet-id $SUBNET_ID \
 --docker-bridge-address 172.17.0.1/16 \
