Commit 9593298

Merge branch 'main' into lint-additional-old-data-refs
2 parents 6662e02 + c9293f4 commit 9593298

20 files changed: 342 additions & 198 deletions
Lines changed: 44 additions & 0 deletions
@@ -0,0 +1,44 @@

```yaml
name: Algolia Sync Single English Index

on:
  pull_request:
    types:
      - opened
      - reopened
      - synchronize
      - ready_for_review
      - unlocked

# This workflow requires a label in the format `sync-english-index-for-<PLAN@RELEASE>`
jobs:
  updateIndices:
    name: Update English index for single version based on a label's version
    if: github.repository == 'github/docs-internal' && startsWith(github.event.label.name, 'sync-english-index-for-')
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f
      - uses: actions/setup-node@56899e050abffc08c2b3b61f3ec6a79a9dc3223d
        with:
          node-version: 14.x
      - name: cache node modules
        uses: actions/cache@0781355a23dac32fd3bac414512f4b903437991a
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-
      - name: npm ci
        run: npm ci
      - name: Get version from label
        id: getVersion
        run: |
          echo "::set-output name=version::$(github.event.label.name.split('sync-english-index-for-')[1])"
      - name: Sync English index for single version
        env:
          VERSION: ${{ steps.getVersion.outputs.version }}
          LANGUAGE: 'en'
          ALGOLIA_APPLICATION_ID: ${{ secrets.ALGOLIA_APPLICATION_ID }}
          ALGOLIA_API_KEY: ${{ secrets.ALGOLIA_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: npm run sync-search
```
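The `Get version from label` step above embeds a JavaScript-style expression, `github.event.label.name.split(...)`, inside a shell `$(...)` substitution; a bash runner would try to execute that as a command rather than evaluate it, since bare `github.event.*` references are only expanded inside `${{ }}`. A shell-native way to strip the label prefix (a sketch with a hypothetical label value, not the committed code) is POSIX parameter expansion:

```shell
# Hypothetical label value; in the workflow it would come from ${{ github.event.label.name }}.
label="sync-english-index-for-enterprise-server@3.0"

# Strip the known prefix with POSIX parameter expansion instead of JavaScript's split().
version="${label#sync-english-index-for-}"

echo "version=$version"
```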

CODE_OF_CONDUCT.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -22,6 +22,7 @@ Examples of unacceptable behavior include:
 * Trolling, insulting or derogatory comments, and personal or political attacks
 * Public or private harassment
 * Publishing others' private information, such as a physical or email address, without their explicit permission
+* Contacting individual members, contributors, or leaders privately, outside designated community mechanisms, without their explicit permission
 * Other conduct which could reasonably be considered inappropriate in a professional setting

 ## Enforcement Responsibilities
```

content/actions/learn-github-actions/migrating-from-travis-ci-to-github-actions.md

Lines changed: 6 additions & 0 deletions
```diff
@@ -164,6 +164,12 @@ git:
 </tr>
 </table>

+#### Using environment variables in a matrix
+
+Travis CI and {% data variables.product.prodname_actions %} can both add custom environment variables to a test matrix, which allows you to refer to the variable in a later step.
+
+In {% data variables.product.prodname_actions %}, you can use the `include` key to add custom environment variables to a matrix. {% data reusables.github-actions.matrix-variable-example %}
+
 ### Key features in {% data variables.product.prodname_actions %}

 When migrating from Travis CI, consider the following key features in {% data variables.product.prodname_actions %}:
```
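The `include` pattern the added paragraph refers to can be sketched minimally as follows (illustrative values only, assuming the `matrix-variable-example` reusable demonstrates this same `include`-with-env approach):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - node-version: 10.x
            site: "prod"   # custom variable attached to this matrix combination
    steps:
      - name: Echo site details
        env:
          SITE: ${{ matrix.site }}
        run: echo $SITE
```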

content/actions/reference/workflow-syntax-for-github-actions.md

Lines changed: 2 additions & 27 deletions
````diff
@@ -878,34 +878,9 @@ strategy:

 ##### Using environment variables in a matrix

-You can add custom environment variables for each test combination by using `include` with `env`. You can then refer to the custom environment variables in a later step.
+You can add custom environment variables for each test combination by using the `include` key. You can then refer to the custom environment variables in a later step.

-In this example, the matrix entries for `node-version` are each configured to use different values for the `site` and `datacenter` environment variables. The `Echo site details` step then uses {% raw %}`env: ${{ matrix.env }}`{% endraw %} to refer to the custom variables:
-
-{% raw %}
-```yaml
-name: Node.js CI
-on: [push]
-jobs:
-  build:
-    runs-on: ubuntu-latest
-    strategy:
-      matrix:
-        include:
-          - node-version: 10.x
-            site: "prod"
-            datacenter: "site-a"
-          - node-version: 12.x
-            site: "dev"
-            datacenter: "site-b"
-    steps:
-      - name: Echo site details
-        env:
-          SITE: ${{ matrix.site }}
-          DATACENTER: ${{ matrix.datacenter }}
-        run: echo $SITE $DATACENTER
-```
-{% endraw %}
+{% data reusables.github-actions.matrix-variable-example %}

 ### **`jobs.<job_id>.strategy.fail-fast`**
````

content/admin/enterprise-management/configuring-high-availability-replication-for-a-cluster.md

Lines changed: 61 additions & 43 deletions
````diff
@@ -57,32 +57,36 @@ Before you define a secondary datacenter for your passive nodes, ensure that you
    mysql-master = <em>HOSTNAME</em>
    redis-master = <em>HOSTNAME</em>
    <strong>primary-datacenter = default</strong>
-  ```
+   ```

    - Optionally, change the name of the primary datacenter to something more descriptive or accurate by editing the value of `primary-datacenter`.

 4. {% data reusables.enterprise_clustering.configuration-file-heading %} Under each node's heading, add a new key-value pair to assign the node to a datacenter. Use the same value as `primary-datacenter` from step 3 above. For example, if you want to use the default name (`default`), add the following key-value pair to the section for each node.

-   datacenter = default
+   ```
+   datacenter = default
+   ```

    When you're done, the section for each node in the cluster configuration file should look like the following example. {% data reusables.enterprise_clustering.key-value-pair-order-irrelevant %}

-  ```shell
-  [cluster "<em>HOSTNAME</em>"]
-    <strong>datacenter = default</strong>
-    hostname = <em>HOSTNAME</em>
-    ipv4 = <em>IP ADDRESS</em>
+   ```shell
+   [cluster "<em>HOSTNAME</em>"]
+     <strong>datacenter = default</strong>
+     hostname = <em>HOSTNAME</em>
+     ipv4 = <em>IP ADDRESS</em>
+   ...
    ...
-  ...
-  ```
+   ```

-  {% note %}
+   {% note %}

-  **Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node.
+   **Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node.

-  consul-datacenter = primary
+   ```
+   consul-datacenter = primary
+   ```

-  {% endnote %}
+   {% endnote %}

 {% data reusables.enterprise_clustering.apply-configuration %}

@@ -103,31 +107,37 @@ For an example configuration, see "[Example configuration](#example-configuratio

 1. For each node in your cluster, provision a matching virtual machine with identical specifications, running the same version of {% data variables.product.prodname_ghe_server %}. Note the IPv4 address and hostname for each new cluster node. For more information, see "[Prerequisites](#prerequisites)."

-  {% note %}
+   {% note %}

-  **Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.
+   **Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.

-  {% endnote %}
+   {% endnote %}

 {% data reusables.enterprise_clustering.ssh-to-a-node %}

 3. Back up your existing cluster configuration.

-  cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup
+   ```
+   cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup
+   ```

 4. Create a copy of your existing cluster configuration file in a temporary location, like _/home/admin/cluster-passive.conf_. Delete unique key-value pairs for IP addresses (`ipv*`), UUIDs (`uuid`), and public keys for WireGuard (`wireguard-pubkey`).

-  grep -Ev "(?:|ipv|uuid|vpn|wireguard\-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf
+   ```
+   grep -Ev "(?:|ipv|uuid|vpn|wireguard\-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf
+   ```

 5. Remove the `[cluster]` section from the temporary cluster configuration file that you copied in the previous step.

-  git config -f ~/cluster-passive.conf --remove-section cluster
+   ```
+   git config -f ~/cluster-passive.conf --remove-section cluster
+   ```

 6. Decide on a name for the secondary datacenter where you provisioned your passive nodes, then update the temporary cluster configuration file with the new datacenter name. Replace `SECONDARY` with the name you choose.

    ```shell
-  sed -i 's/datacenter = default/datacenter = <em>SECONDARY</em>/g' ~/cluster-passive.conf
-  ```
+   sed -i 's/datacenter = default/datacenter = <em>SECONDARY</em>/g' ~/cluster-passive.conf
+   ```

 7. Decide on a pattern for the passive nodes' hostnames.

@@ -140,7 +150,7 @@ For an example configuration, see "[Example configuration](#example-configuratio
 8. Open the temporary cluster configuration file from step 3 in a text editor. For example, you can use Vim.

    ```shell
-  sudo vim ~/cluster-passive.conf
+   sudo vim ~/cluster-passive.conf
    ```

 9. In each section within the temporary cluster configuration file, update the node's configuration. {% data reusables.enterprise_clustering.configuration-file-heading %}

@@ -150,37 +160,37 @@ For an example configuration, see "[Example configuration](#example-configuratio
    - Add a new key-value pair, `replica = enabled`.

    ```shell
-  [cluster "<em>NEW PASSIVE NODE HOSTNAME</em>"]
-    ...
-    hostname = <em>NEW PASSIVE NODE HOSTNAME</em>
-    ipv4 = <em>NEW PASSIVE NODE IPV4 ADDRESS</em>
-    <strong>replica = enabled</strong>
+   [cluster "<em>NEW PASSIVE NODE HOSTNAME</em>"]
+     ...
+     hostname = <em>NEW PASSIVE NODE HOSTNAME</em>
+     ipv4 = <em>NEW PASSIVE NODE IPV4 ADDRESS</em>
+     <strong>replica = enabled</strong>
+   ...
    ...
-  ...
    ```

 10. Append the contents of the temporary cluster configuration file that you created in step 4 to the active configuration file.

    ```shell
-  cat ~/cluster-passive.conf >> /data/user/common/cluster.conf
-  ```
+   cat ~/cluster-passive.conf >> /data/user/common/cluster.conf
+   ```

 11. Designate the primary MySQL and Redis nodes in the secondary datacenter. Replace `REPLICA MYSQL PRIMARY HOSTNAME` and `REPLICA REDIS PRIMARY HOSTNAME` with the hostnames of the passives node that you provisioned to match your existing MySQL and Redis primaries.

    ```shell
-  git config -f /data/user/common/cluster.conf cluster.mysql-master-replica <em>REPLICA MYSQL PRIMARY HOSTNAME</em>
-  git config -f /data/user/common/cluster.conf cluster.redis-master-replica <em>REPLICA REDIS PRIMARY HOSTNAME</em>
-  ```
+   git config -f /data/user/common/cluster.conf cluster.mysql-master-replica <em>REPLICA MYSQL PRIMARY HOSTNAME</em>
+   git config -f /data/user/common/cluster.conf cluster.redis-master-replica <em>REPLICA REDIS PRIMARY HOSTNAME</em>
+   ```

 12. Enable MySQL to fail over automatically when you fail over to the passive replica nodes.

    ```shell
-  git config -f /data/user/common/cluster.conf cluster.mysql-auto-failover true
+   git config -f /data/user/common/cluster.conf cluster.mysql-auto-failover true
    ```

-  {% warning %}
+   {% warning %}

-  **Warning**: Review your cluster configuration file before proceeding.
+   **Warning**: Review your cluster configuration file before proceeding.

    - In the top-level `[cluster]` section, ensure that the values for `mysql-master-replica` and `redis-master-replica` are the correct hostnames for the passive nodes in the secondary datacenter that will serve as the MySQL and Redis primaries after a failover.
    - In each section for an active node named <code>[cluster "<em>ACTIVE NODE HOSTNAME</em>"]</code>, double-check the following key-value pairs.

@@ -194,9 +204,9 @@ For an example configuration, see "[Example configuration](#example-configuratio
      - `replica` should be configured as `enabled`.
    - Take the opportunity to remove sections for offline nodes that are no longer in use.

-  To review an example configuration, see "[Example configuration](#example-configuration)."
+   To review an example configuration, see "[Example configuration](#example-configuration)."

-  {% endwarning %}
+   {% endwarning %}

 13. Initialize the new cluster configuration. {% data reusables.enterprise.use-a-multiplexer %}

@@ -207,7 +217,7 @@ For an example configuration, see "[Example configuration](#example-configuratio
 14. After the initialization finishes, {% data variables.product.prodname_ghe_server %} displays the following message.

    ```shell
-  Finished cluster initialization
+   Finished cluster initialization
    ```

 {% data reusables.enterprise_clustering.apply-configuration %}

@@ -294,19 +304,27 @@ You can monitor the progress on any node in the cluster, using command-line tool

 - Monitor replication of databases:

-  /usr/local/share/enterprise/ghe-cluster-status-mysql
+  ```
+  /usr/local/share/enterprise/ghe-cluster-status-mysql
+  ```

 - Monitor replication of repository and Gist data:

-  ghe-spokes status
+  ```
+  ghe-spokes status
+  ```

 - Monitor replication of attachment and LFS data:

-  ghe-storage replication-status
+  ```
+  ghe-storage replication-status
+  ```

 - Monitor replication of Pages data:

-  ghe-dpages replication-status
+  ```
+  ghe-dpages replication-status
+  ```

 You can use `ghe-cluster-status` to review the overall health of your cluster. For more information, see "[Command-line utilities](/enterprise/admin/configuration/command-line-utilities#ghe-cluster-status)."
````
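Steps 4 and 5 of this file filter node-unique keys out of the copied configuration and then drop its top-level `[cluster]` section. The following runnable sketch reproduces both steps on a throwaway file (the sample contents are hypothetical; note also that the committed `grep` pattern uses a Perl-style `(?:...)` group with an empty alternative, which POSIX ERE does not support, so this sketch uses a plain alternation instead):

```shell
# Build a throwaway cluster.conf with made-up values (hypothetical sample data).
cat > /tmp/cluster.conf <<'EOF'
[cluster]
  mysql-master = node1
[cluster "node1"]
  hostname = node1
  ipv4 = 10.0.0.1
  uuid = 1d4d5f82
  wireguard-pubkey = abc123=
EOF

# Step 4: drop node-unique keys (IP addresses, UUIDs, WireGuard public keys).
grep -Ev "(ipv|uuid|vpn|wireguard-pubkey)" /tmp/cluster.conf > /tmp/cluster-passive.conf

# Step 5: remove the top-level [cluster] section; the [cluster "node1"]
# subsection is a distinct section and survives.
git config -f /tmp/cluster-passive.conf --remove-section cluster

cat /tmp/cluster-passive.conf
```

The surviving file contains only the per-node section with its hostname, ready to be relabeled with a secondary datacenter name and appended to the active configuration.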

content/admin/installation/installing-github-enterprise-server-on-google-cloud-platform.md

Lines changed: 25 additions & 25 deletions
````diff
@@ -27,7 +27,7 @@ Before launching {% data variables.product.product_location %} on Google Cloud P
 {% data variables.product.prodname_ghe_server %} is supported on the following Google Compute Engine (GCE) machine types. For more information, see [the Google Cloud Platform machine types article](https://cloud.google.com/compute/docs/machine-types).

 | High-memory |
-------------- |
+| ------------- |
 | n1-highmem-4 |
 | n1-highmem-8 |
 | n1-highmem-16 |

@@ -54,7 +54,7 @@ Based on your user license count, we recommend these machine types.
 1. Using the [gcloud compute](https://cloud.google.com/compute/docs/gcloud-compute/) command-line tool, list the public {% data variables.product.prodname_ghe_server %} images:
    ```shell
    $ gcloud compute images list --project github-enterprise-public --no-standard-images
-  ```
+   ```

 2. Take note of the image name for the latest GCE image of {% data variables.product.prodname_ghe_server %}.

@@ -63,18 +63,18 @@ Based on your user license count, we recommend these machine types.
 GCE virtual machines are created as a member of a network, which has a firewall. For the network associated with the {% data variables.product.prodname_ghe_server %} VM, you'll need to configure the firewall to allow the required ports listed in the table below. For more information about firewall rules on Google Cloud Platform, see the Google guide "[Firewall Rules Overview](https://cloud.google.com/vpc/docs/firewalls)."

 1. Using the gcloud compute command-line tool, create the network. For more information, see "[gcloud compute networks create](https://cloud.google.com/sdk/gcloud/reference/compute/networks/create)" in the Google documentation.
-  ```shell
-  $ gcloud compute networks create <em>NETWORK-NAME</em> --subnet-mode auto
-  ```
+   ```shell
+   $ gcloud compute networks create <em>NETWORK-NAME</em> --subnet-mode auto
+   ```
 2. Create a firewall rule for each of the ports in the table below. For more information, see "[gcloud compute firewall-rules](https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/)" in the Google documentation.
-  ```shell
-  $ gcloud compute firewall-rules create <em>RULE-NAME</em> \
-  --network <em>NETWORK-NAME</em> \
-  --allow tcp:22,tcp:25,tcp:80,tcp:122,udp:161,tcp:443,udp:1194,tcp:8080,tcp:8443,tcp:9418,icmp
-  ```
-  This table identifies the required ports and what each port is used for.
+   ```shell
+   $ gcloud compute firewall-rules create <em>RULE-NAME</em> \
+   --network <em>NETWORK-NAME</em> \
+   --allow tcp:22,tcp:25,tcp:80,tcp:122,udp:161,tcp:443,udp:1194,tcp:8080,tcp:8443,tcp:9418,icmp
+   ```
+   This table identifies the required ports and what each port is used for.

-  {% data reusables.enterprise_installation.necessary_ports %}
+   {% data reusables.enterprise_installation.necessary_ports %}

 ### Allocating a static IP and assigning it to the VM

@@ -87,21 +87,21 @@ In production High Availability configurations, both primary and replica applian
 To create the {% data variables.product.prodname_ghe_server %} instance, you'll need to create a GCE instance with your {% data variables.product.prodname_ghe_server %} image and attach an additional storage volume for your instance data. For more information, see "[Hardware considerations](#hardware-considerations)."

 1. Using the gcloud compute command-line tool, create a data disk to use as an attached storage volume for your instance data, and configure the size based on your user license count. For more information, see "[gcloud compute disks create](https://cloud.google.com/sdk/gcloud/reference/compute/disks/create)" in the Google documentation.
-  ```shell
-  $ gcloud compute disks create <em>DATA-DISK-NAME</em> --size <em>DATA-DISK-SIZE</em> --type <em>DATA-DISK-TYPE</em> --zone <em>ZONE</em>
-  ```
+   ```shell
+   $ gcloud compute disks create <em>DATA-DISK-NAME</em> --size <em>DATA-DISK-SIZE</em> --type <em>DATA-DISK-TYPE</em> --zone <em>ZONE</em>
+   ```

 2. Then create an instance using the name of the {% data variables.product.prodname_ghe_server %} image you selected, and attach the data disk. For more information, see "[gcloud compute instances create](https://cloud.google.com/sdk/gcloud/reference/compute/instances/create)" in the Google documentation.
-  ```shell
-  $ gcloud compute instances create <em>INSTANCE-NAME</em> \
-  --machine-type n1-standard-8 \
-  --image <em>GITHUB-ENTERPRISE-IMAGE-NAME</em> \
-  --disk name=<em>DATA-DISK-NAME</em> \
-  --metadata serial-port-enable=1 \
-  --zone <em>ZONE</em> \
-  --network <em>NETWORK-NAME</em> \
-  --image-project github-enterprise-public
-  ```
+   ```shell
+   $ gcloud compute instances create <em>INSTANCE-NAME</em> \
+   --machine-type n1-standard-8 \
+   --image <em>GITHUB-ENTERPRISE-IMAGE-NAME</em> \
+   --disk name=<em>DATA-DISK-NAME</em> \
+   --metadata serial-port-enable=1 \
+   --zone <em>ZONE</em> \
+   --network <em>NETWORK-NAME</em> \
+   --image-project github-enterprise-public
+   ```

 ### Configuring the instance
````
