Commit 26801b9 (merge of parents 4dca48c and 828d451)

12 files changed: 24 additions & 24 deletions
samples/features/sql-big-data-cluster/README.md
Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ Installation instructions for SQL Server 2019 big data clusters can be found [he
 ## Executing the sample scripts
 The scripts should be executed in a specific order to test the various features. Execute the scripts from each folder in the order below:
-1. __[spark/dataloading/transform-csv-files.ipynb](spark/dataloading/transform-csv-files.ipynb)__
+1. __[spark/data-loading/transform-csv-files.ipynb](spark/data-loading/transform-csv-files.ipynb)__
 1. __[data-virtualization/generic-odbc](data-virtualization/generic-odbc)__
 1. __[data-virtualization/hadoop](data-virtualization/hadoop)__
 1. __[data-virtualization/storage-pool](data-virtualization/storage-pool)__

samples/features/sql-big-data-cluster/app-deploy/RollDice/README.md
Lines changed: 2 additions & 2 deletions

@@ -41,10 +41,10 @@ To run this sample, you need the following prerequisites.
 ## Run this sample
 1. Clone or download this sample on your computer.
-2. Log in to the SQL Server big data cluster using the command below with the IP address of the `mgmtproxy-svc-external` service in your cluster. If you are not familiar with `azdata`, refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.
+2. Log in to the SQL Server big data cluster using the command below with the IP address of the `controller-svc-external` service in your cluster. If you are not familiar with `azdata`, refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.
 ```bash
-azdata login -e https://<ip-address-of-mgmtproxy-svc-external>:30777 -u <user-name> -p <password>
+azdata login -e https://<ip-address-of-controller-svc-external>:30080 -u <user-name> -p <password>
 ```
 3. Deploy the application by running the following command, specifying the folder where your `spec.yaml` and `roll-dice.R` files are located:

samples/features/sql-big-data-cluster/app-deploy/SSIS/README.md
Lines changed: 2 additions & 2 deletions

@@ -32,10 +32,10 @@ To run this sample, you need the following prerequisites.
 ## Run this sample
 1. Clone or download this sample on your computer.
-2. Log in to the SQL Server big data cluster using the command below with the IP address of the `mgmtproxy-svc-external` service in your cluster. If you are not familiar with `azdata`, refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.
+2. Log in to the SQL Server big data cluster using the command below with the IP address of the `controller-svc-external` service in your cluster. If you are not familiar with `azdata`, refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.
 ```bash
-azdata login -e https://<ip-address-of-mgmtproxy-svc-external>:30777 -u <user-name> -p <password>
+azdata login -e https://<ip-address-of-controller-svc-external>:30080 -u <user-name> -p <password>
 ```
 3. Replace `[SA_PASSWORD]` in the `spec.yaml` file with the password for SQL user `sa`.
 4. Deploy the application by running the following command, specifying the folder where your `spec.yaml` and `back-up-db.dtsx` files are located:
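Step 3 swaps the `[SA_PASSWORD]` placeholder into `spec.yaml` by hand; the same substitution can be scripted. A minimal sketch, assuming the file contains the literal placeholder; the file contents and the `SA_PWD` variable here are illustrative, not taken from the sample:

```shell
# Create a stand-in spec.yaml containing the placeholder (illustrative only),
# then substitute the sa password from a hypothetical SA_PWD variable.
# Note: -i is GNU sed syntax; BSD/macOS sed needs -i ''.
printf 'env:\n  SA_PASSWORD: "[SA_PASSWORD]"\n' > spec.yaml
SA_PWD='MyS3cret'
sed -i "s/\[SA_PASSWORD\]/${SA_PWD}/" spec.yaml
cat spec.yaml
```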

samples/features/sql-big-data-cluster/app-deploy/addpy/README.md
Lines changed: 2 additions & 2 deletions

@@ -42,10 +42,10 @@ To run this sample, you need the following prerequisites.
 ## Run this sample
 1. Clone or download this sample on your computer.
-2. Log in to the SQL Server big data cluster using the command below with the IP address of the `mgmtproxy-svc-external` service in your cluster. If you are not familiar with `azdata`, refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.
+2. Log in to the SQL Server big data cluster using the command below with the IP address of the `controller-svc-external` service in your cluster. If you are not familiar with `azdata`, refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.
 ```bash
-azdata login -e https://<ip-address-of-mgmtproxy-svc-external>:30777 -u <user-name> -p <password>
+azdata login -e https://<ip-address-of-controller-svc-external>:30080 -u <user-name> -p <password>
 ```
 3. Deploy the application by running the following command, specifying the folder where your `spec.yaml` and `add.py` files are located:

samples/features/sql-big-data-cluster/app-deploy/magic8ball/README.md
Lines changed: 2 additions & 2 deletions

@@ -41,10 +41,10 @@ To run this sample, you need the following prerequisites.
 ## Run this sample
 1. Clone or download this sample on your computer.
-2. Log in to the SQL Server big data cluster using the command below with the IP address of the `mgmtproxy-svc-external` service in your cluster. If you are not familiar with `azdata`, refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.
+2. Log in to the SQL Server big data cluster using the command below with the IP address of the `controller-svc-external` service in your cluster. If you are not familiar with `azdata`, refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.
 ```bash
-azdata login -e https://<ip-address-of-mgmtproxy-svc-external>:30777 -u <user-name> -p <password>
+azdata login -e https://<ip-address-of-controller-svc-external>:30080 -u <user-name> -p <password>
 ```
 3. Deploy the application by running the following command, specifying the folder where your `spec.yaml` and `magic8ball.py` files are located:

samples/features/sql-big-data-cluster/app-deploy/mleap/README.md
Lines changed: 2 additions & 2 deletions

@@ -38,10 +38,10 @@ To run this sample, you need the following prerequisites.
 ## Run this sample
 1. Clone or download this sample on your computer.
-2. Log in to the SQL Server big data cluster using the command below with the IP address of the `mgmtproxy-svc-external` service in your cluster. If you are not familiar with `azdata`, refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.
+2. Log in to the SQL Server big data cluster using the command below with the IP address of the `controller-svc-external` service in your cluster. If you are not familiar with `azdata`, refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.
 ```bash
-azdata login -e https://<ip-address-of-mgmtproxy-svc-external>:30777 -u <user-name> -p <password>
+azdata login -e https://<ip-address-of-controller-svc-external>:30080 -u <user-name> -p <password>
 ```
 3. This example uses a machine learning model that uses public US Census data to predict income. [More details and information on the example are here](https://docs.microsoft.com/en-us/sql/big-data-cluster/train-and-create-machinelearning-models-with-spark?view=sqlallproducts-allversions). The application you will be deploying as part of this sample is a random forest model that was built in Spark and has been [serialized as an MLeap bundle](https://docs.microsoft.com/en-us/sql/big-data-cluster/export-model-with-spark-mleap?view=sqlallproducts-allversions).
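The updated login commands in these samples all target port 30080 on the controller endpoint rather than port 30777 on the management proxy. A minimal sketch of assembling that endpoint URL; the IP address below is a placeholder, in a real cluster you would look up the external address of `controller-svc-external` (for example with `kubectl get svc`):

```shell
# Hypothetical IP for illustration only.
CONTROLLER_IP=203.0.113.10
# The controller endpoint uses HTTPS on port 30080.
ENDPOINT="https://${CONTROLLER_IP}:30080"
echo "$ENDPOINT"
```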

samples/features/sql-big-data-cluster/app-deploy/sentiment-analysis/README.md
Lines changed: 2 additions & 2 deletions

@@ -42,10 +42,10 @@ To run this sample, you need the following prerequisites.
 ## Run this sample
 1. Clone or download this sample on your computer.
-2. Log in to the SQL Server big data cluster using the command below with the IP address of the `mgmtproxy-svc-external` service in your cluster. If you are not familiar with `azdata`, refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.
+2. Log in to the SQL Server big data cluster using the command below with the IP address of the `controller-svc-external` service in your cluster. If you are not familiar with `azdata`, refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.
 ```bash
-azdata login -e https://<ip-address-of-mgmtproxy-svc-external>:30777 -u <user-name> -p <password>
+azdata login -e https://<ip-address-of-controller-svc-external>:30080 -u <user-name> -p <password>
 ```
 3. Deploy the application by running the following command, specifying the folder where your `spec.yaml`, `sentiment.rds` and `sentiment.R` files are located:

samples/features/sql-big-data-cluster/app-deploy/sumofsq/README.md
Lines changed: 2 additions & 2 deletions

@@ -42,10 +42,10 @@ To run this sample, you need the following prerequisites.
 ## Run this sample
 1. Clone or download this sample on your computer.
-2. Log in to the SQL Server big data cluster using the command below with the IP address of the `mgmtproxy-svc-external` service in your cluster. If you are not familiar with `azdata`, refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.
+2. Log in to the SQL Server big data cluster using the command below with the IP address of the `controller-svc-external` service in your cluster. If you are not familiar with `azdata`, refer to the [documentation](https://docs.microsoft.com/en-us/sql/big-data-cluster/big-data-cluster-create-apps?view=sqlallproducts-allversions) and then return to this sample.
 ```bash
-azdata login -e https://<ip-address-of-mgmtproxy-svc-external>:30777 -u <user-name> -p <password>
+azdata login -e https://<ip-address-of-controller-svc-external>:30080 -u <user-name> -p <password>
 ```
 3. Deploy the application by running the following command, specifying the folder where your `spec.yaml` and `sum_of_squares.R` files are located:

samples/features/sql-big-data-cluster/data-virtualization/hadoop/README.md
Lines changed: 2 additions & 2 deletions

@@ -12,10 +12,10 @@ In SQL Server 2019 big data cluster, the storage pool consists of HDFS data node
 1. Connect to the SQL Server Master instance.
-1. Run the [../../spark/dataloading/transform-csv-files.ipynb](../../spark/dataloading/transform-csv-files.ipynb/) notebook to generate the sample parquet file(s).
+1. Run the [../../spark/data-loading/transform-csv-files.ipynb](../../spark/data-loading/transform-csv-files.ipynb/) notebook to generate the sample parquet file(s).
 1. Execute [web-clickstreams-hdfs-orc.sql](web-clickstreams-hdfs-orc.sql). This script demonstrates how to read ORC file(s) stored in HDFS.
 1. Execute [product-reviews-hdfs-orc.sql](product-reviews-hdfs-orc.sql). This script demonstrates how to read ORC file(s) stored in HDFS.
-1. Execute [inventory-hdfs-rcfile.sql](inventory-hdfs-rcfile.sql). This script demonstrates how to export data from SQL Server into HDFS using PolyBase v1 syntax. This script will export data from SQL Server into RCFILE format.
+1. Execute [inventory-hdfs-rcfile.sql](inventory-hdfs-rcfile.sql). This script demonstrates how to export data from SQL Server into HDFS using PolyBase v1 syntax. This script will export data from SQL Server into RCFILE format.

samples/features/sql-big-data-cluster/data-virtualization/storage-pool/README.md
Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ In SQL Server 2019 big data cluster, the storage pool consists of HDFS data node
 1. Connect to the SQL Server Master instance.
-1. Run the [../../spark/dataloading/transform-csv-files.ipynb](../../spark/dataloading/transform-csv-files.ipynb/) notebook to generate the sample parquet file(s).
+1. Run the [../../spark/data-loading/transform-csv-files.ipynb](../../spark/data-loading/transform-csv-files.ipynb/) notebook to generate the sample parquet file(s).
 1. Execute [web-clickstreams-hdfs-csv.sql](web-clickstreams-hdfs-csv.sql). This script demonstrates how to read CSV file(s) stored in HDFS.
1818
