
Commit c5b8bc2

Merge branch 'main' into find-page-in-version-redux

2 parents a090963 + b07b5a8

20 files changed: 238 additions & 193 deletions

.github/actions-scripts/enterprise-algolia-label.js

Lines changed: 36 additions & 0 deletions

```diff
@@ -0,0 +1,36 @@
+#!/usr/bin/env node
+
+const fs = require('fs')
+const core = require('@actions/core')
+const eventPayload = JSON.parse(fs.readFileSync(process.env.GITHUB_EVENT_PATH, 'utf8'))
+
+// This workflow-run script does the following:
+// 1. Gets an array of labels on a PR.
+// 2. Finds one with the relevant Algolia text; if none found, exits early.
+// 3. Gets the version substring from the label string.
+
+const labelText = 'sync-english-index-for-'
+const labelsArray = eventPayload.pull_request.labels
+
+// Exit early if no labels are on this PR
+if (!(labelsArray && labelsArray.length)) {
+  process.exit(0)
+}
+
+// Find the relevant label
+const algoliaLabel = labelsArray
+  .map(label => label.name)
+  .find(label => label.startsWith(labelText))
+
+// Exit early if no relevant label is found
+if (!algoliaLabel) {
+  process.exit(0)
+}
+
+// Given: sync-english-index-for-enterprise-server@3.0
+// Returns: enterprise-server@3.0
+const versionToSync = algoliaLabel.split(labelText)[1]
+
+// Store the version so we can access it later in the workflow
+core.setOutput('versionToSync', versionToSync)
+process.exit(0)
```
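The script's extraction logic is easy to sanity-check in isolation. A minimal standalone sketch, assuming a labels array shaped like the `pull_request.labels` payload (the label names here are illustrative):

```javascript
// Standalone sketch of the label-to-version extraction in the new script,
// using a hand-built labels array instead of a real event payload.
const labelText = 'sync-english-index-for-'

// Illustrative labels, mimicking eventPayload.pull_request.labels
const labels = [
  { name: 'engineering' },
  { name: 'sync-english-index-for-enterprise-server@3.0' }
]

const algoliaLabel = labels
  .map(label => label.name)
  .find(label => label.startsWith(labelText))

// split(labelText) yields ['', 'enterprise-server@3.0'], so index 1 is the version
const versionToSync = algoliaLabel ? algoliaLabel.split(labelText)[1] : undefined

console.log(versionToSync) // → enterprise-server@3.0
```

If no label starts with the prefix, `algoliaLabel` is `undefined` and the script exits before setting any output.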

.github/workflows/sync-single-english-algolia-index.yml

Lines changed: 8 additions & 6 deletions

```diff
@@ -3,6 +3,8 @@ name: Algolia Sync Single English Index
 on:
   pull_request:
     types:
+      - labeled
+      - unlabeled
       - opened
       - reopened
       - synchronize
@@ -13,7 +15,7 @@
 jobs:
   updateIndices:
     name: Update English index for single version based on a label's version
-    if: github.repository == 'github/docs-internal' && startsWith(github.event.label.name, 'sync-english-index-for-')
+    if: github.repository == 'github/docs-internal'
     runs-on: ubuntu-latest
     steps:
       - name: checkout
@@ -30,13 +32,13 @@
             ${{ runner.os }}-node-
       - name: npm ci
         run: npm ci
-      - name: Get version from label
+      - name: Get version from Algolia label if present; only continue if the label is found.
         id: getVersion
-        run: |
-          echo "::set-output name=version::$(github.event.label.name.split('sync-english-index-for-')[1])"
-      - name: Sync English index for single version
+        run: $GITHUB_WORKSPACE/.github/actions-scripts/enterprise-algolia-label.js
+      - if: ${{ steps.getVersion.outputs.versionToSync }}
+        name: Sync English index for single version
         env:
-          VERSION: ${{ steps.getVersion.outputs.version }}
+          VERSION: ${{ steps.getVersion.outputs.versionToSync }}
           LANGUAGE: 'en'
           ALGOLIA_APPLICATION_ID: ${{ secrets.ALGOLIA_APPLICATION_ID }}
           ALGOLIA_API_KEY: ${{ secrets.ALGOLIA_API_KEY }}
```
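The rewritten `getVersion` step delegates to the Node script, which uses `core.setOutput` instead of an inline `echo`. At the time of this commit, setting a step output worked by emitting the `::set-output` workflow command on stdout; a rough sketch of that format (hypothetical helper, not the actual `@actions/core` source):

```javascript
// Hypothetical helper illustrating the ::set-output workflow command
// format that step outputs used when this commit was written.
function formatSetOutput(name, value) {
  return `::set-output name=${name}::${value}`
}

// The later step's `if: ${{ steps.getVersion.outputs.versionToSync }}`
// only runs when this output carries a non-empty value.
const command = formatSetOutput('versionToSync', 'enterprise-server@3.0')
console.log(command) // → ::set-output name=versionToSync::enterprise-server@3.0
```

Because the script exits early without setting the output when no matching label exists, the sync step is skipped on unlabeled PRs even though the workflow now also triggers on `labeled`/`unlabeled` events.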

content/actions/learn-github-actions/migrating-from-travis-ci-to-github-actions.md

Lines changed: 6 additions & 0 deletions

```diff
@@ -164,6 +164,12 @@ git:
   </tr>
 </table>
 
+#### Using environment variables in a matrix
+
+Travis CI and {% data variables.product.prodname_actions %} can both add custom environment variables to a test matrix, which allows you to refer to the variable in a later step.
+
+In {% data variables.product.prodname_actions %}, you can use the `include` key to add custom environment variables to a matrix. {% data reusables.github-actions.matrix-variable-example %}
+
 ### Key features in {% data variables.product.prodname_actions %}
 
 When migrating from Travis CI, consider the following key features in {% data variables.product.prodname_actions %}:
```

content/actions/reference/workflow-syntax-for-github-actions.md

Lines changed: 2 additions & 27 deletions

````diff
@@ -878,34 +878,9 @@ strategy:
 
 ##### Using environment variables in a matrix
 
-You can add custom environment variables for each test combination by using `include` with `env`. You can then refer to the custom environment variables in a later step.
+You can add custom environment variables for each test combination by using the `include` key. You can then refer to the custom environment variables in a later step.
 
-In this example, the matrix entries for `node-version` are each configured to use different values for the `site` and `datacenter` environment variables. The `Echo site details` step then uses {% raw %}`env: ${{ matrix.env }}`{% endraw %} to refer to the custom variables:
-
-{% raw %}
-```yaml
-name: Node.js CI
-on: [push]
-jobs:
-  build:
-    runs-on: ubuntu-latest
-    strategy:
-      matrix:
-        include:
-          - node-version: 10.x
-            site: "prod"
-            datacenter: "site-a"
-          - node-version: 12.x
-            site: "dev"
-            datacenter: "site-b"
-    steps:
-      - name: Echo site details
-        env:
-          SITE: ${{ matrix.site }}
-          DATACENTER: ${{ matrix.datacenter }}
-        run: echo $SITE $DATACENTER
-```
-{% endraw %}
+{% data reusables.github-actions.matrix-variable-example %}
 
 ### **`jobs.<job_id>.strategy.fail-fast`**
````
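The inline YAML example removed from this file paired each `node-version` with `site` and `datacenter` values via the matrix `include` key. The same pairing can be sketched as plain data (values taken from the removed example):

```javascript
// Sketch of the matrix `include` entries from the removed example:
// each combination carries its own custom environment variables.
const include = [
  { nodeVersion: '10.x', site: 'prod', datacenter: 'site-a' },
  { nodeVersion: '12.x', site: 'dev', datacenter: 'site-b' }
]

// A later step reads the combination's values, as the removed
// `Echo site details` step did with $SITE and $DATACENTER.
const lines = include.map(m => `${m.site} ${m.datacenter}`)
console.log(lines.join('\n'))
// → prod site-a
//   dev site-b
```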

content/admin/enterprise-management/configuring-high-availability-replication-for-a-cluster.md

Lines changed: 61 additions & 43 deletions
````diff
@@ -57,32 +57,36 @@ Before you define a secondary datacenter for your passive nodes, ensure that you
    mysql-master = <em>HOSTNAME</em>
    redis-master = <em>HOSTNAME</em>
    <strong>primary-datacenter = default</strong>
-  ```
+   ```
 
    - Optionally, change the name of the primary datacenter to something more descriptive or accurate by editing the value of `primary-datacenter`.
 
 4. {% data reusables.enterprise_clustering.configuration-file-heading %} Under each node's heading, add a new key-value pair to assign the node to a datacenter. Use the same value as `primary-datacenter` from step 3 above. For example, if you want to use the default name (`default`), add the following key-value pair to the section for each node.
 
-  datacenter = default
+   ```
+   datacenter = default
+   ```
 
    When you're done, the section for each node in the cluster configuration file should look like the following example. {% data reusables.enterprise_clustering.key-value-pair-order-irrelevant %}
 
-  ```shell
-  [cluster "<em>HOSTNAME</em>"]
-    <strong>datacenter = default</strong>
-    hostname = <em>HOSTNAME</em>
-    ipv4 = <em>IP ADDRESS</em>
+   ```shell
+   [cluster "<em>HOSTNAME</em>"]
+     <strong>datacenter = default</strong>
+     hostname = <em>HOSTNAME</em>
+     ipv4 = <em>IP ADDRESS</em>
+     ...
     ...
-    ...
-  ```
+   ```
 
-  {% note %}
+   {% note %}
 
-  **Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node.
+   **Note**: If you changed the name of the primary datacenter in step 3, find the `consul-datacenter` key-value pair in the section for each node and change the value to the renamed primary datacenter. For example, if you named the primary datacenter `primary`, use the following key-value pair for each node.
 
-  consul-datacenter = primary
+   ```
+   consul-datacenter = primary
+   ```
 
-  {% endnote %}
+   {% endnote %}
 
 {% data reusables.enterprise_clustering.apply-configuration %}
````

````diff
@@ -103,31 +107,37 @@ For an example configuration, see "[Example configuration](#example-configuratio
 
 1. For each node in your cluster, provision a matching virtual machine with identical specifications, running the same version of {% data variables.product.prodname_ghe_server %}. Note the IPv4 address and hostname for each new cluster node. For more information, see "[Prerequisites](#prerequisites)."
 
-  {% note %}
+   {% note %}
 
-  **Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.
+   **Note**: If you're reconfiguring high availability after a failover, you can use the old nodes from the primary datacenter instead.
 
-  {% endnote %}
+   {% endnote %}
 
 {% data reusables.enterprise_clustering.ssh-to-a-node %}
 
 3. Back up your existing cluster configuration.
 
-  cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup
+   ```
+   cp /data/user/common/cluster.conf ~/$(date +%Y-%m-%d)-cluster.conf.backup
+   ```
 
 4. Create a copy of your existing cluster configuration file in a temporary location, like _/home/admin/cluster-passive.conf_. Delete unique key-value pairs for IP addresses (`ipv*`), UUIDs (`uuid`), and public keys for WireGuard (`wireguard-pubkey`).
 
-  grep -Ev "(?:|ipv|uuid|vpn|wireguard\-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf
+   ```
+   grep -Ev "(?:|ipv|uuid|vpn|wireguard\-pubkey)" /data/user/common/cluster.conf > ~/cluster-passive.conf
+   ```
 
 5. Remove the `[cluster]` section from the temporary cluster configuration file that you copied in the previous step.
 
-  git config -f ~/cluster-passive.conf --remove-section cluster
+   ```
+   git config -f ~/cluster-passive.conf --remove-section cluster
+   ```
 
 6. Decide on a name for the secondary datacenter where you provisioned your passive nodes, then update the temporary cluster configuration file with the new datacenter name. Replace `SECONDARY` with the name you choose.
 
    ```shell
-  sed -i 's/datacenter = default/datacenter = <em>SECONDARY</em>/g' ~/cluster-passive.conf
-  ```
+   sed -i 's/datacenter = default/datacenter = <em>SECONDARY</em>/g' ~/cluster-passive.conf
+   ```
 
 7. Decide on a pattern for the passive nodes' hostnames.
````
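Step 4's `grep -Ev` strips node-unique keys before the copied configuration is reused for the passive nodes. A rough JavaScript sketch of that filtering (the sample config lines are illustrative; the empty alternative in the original grep pattern is dropped here, since in a plain regex it would match every line):

```javascript
// Illustrative sketch of the filtering grep -Ev performs in step 4:
// drop lines carrying node-unique keys (ipv*, uuid, vpn, wireguard-pubkey).
const clusterConf = [
  '[cluster "ghe-data-node-1"]',
  '  hostname = ghe-data-node-1',
  '  ipv4 = 192.0.2.10',
  '  uuid = 1d1716b2-ab45-4b66-b4b6-00c35e330c0e',
  '  wireguard-pubkey = abc123'
]

const kept = clusterConf.filter(
  line => !/(ipv|uuid|vpn|wireguard-pubkey)/.test(line)
)

console.log(kept.join('\n'))
// → [cluster "ghe-data-node-1"]
//     hostname = ghe-data-node-1
```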

````diff
@@ -140,7 +150,7 @@ For an example configuration, see "[Example configuration](#example-configuratio
 8. Open the temporary cluster configuration file from step 3 in a text editor. For example, you can use Vim.
 
    ```shell
-  sudo vim ~/cluster-passive.conf
+   sudo vim ~/cluster-passive.conf
    ```
 
 9. In each section within the temporary cluster configuration file, update the node's configuration. {% data reusables.enterprise_clustering.configuration-file-heading %}
````
````diff
@@ -150,37 +160,37 @@ For an example configuration, see "[Example configuration](#example-configuratio
    - Add a new key-value pair, `replica = enabled`.
 
    ```shell
-  [cluster "<em>NEW PASSIVE NODE HOSTNAME</em>"]
-    ...
-    hostname = <em>NEW PASSIVE NODE HOSTNAME</em>
-    ipv4 = <em>NEW PASSIVE NODE IPV4 ADDRESS</em>
-    <strong>replica = enabled</strong>
+   [cluster "<em>NEW PASSIVE NODE HOSTNAME</em>"]
+     ...
+     hostname = <em>NEW PASSIVE NODE HOSTNAME</em>
+     ipv4 = <em>NEW PASSIVE NODE IPV4 ADDRESS</em>
+     <strong>replica = enabled</strong>
+     ...
     ...
-    ...
    ```
 
 10. Append the contents of the temporary cluster configuration file that you created in step 4 to the active configuration file.
 
    ```shell
-  cat ~/cluster-passive.conf >> /data/user/common/cluster.conf
-  ```
+   cat ~/cluster-passive.conf >> /data/user/common/cluster.conf
+   ```
 
 11. Designate the primary MySQL and Redis nodes in the secondary datacenter. Replace `REPLICA MYSQL PRIMARY HOSTNAME` and `REPLICA REDIS PRIMARY HOSTNAME` with the hostnames of the passive nodes that you provisioned to match your existing MySQL and Redis primaries.
 
    ```shell
-  git config -f /data/user/common/cluster.conf cluster.mysql-master-replica <em>REPLICA MYSQL PRIMARY HOSTNAME</em>
-  git config -f /data/user/common/cluster.conf cluster.redis-master-replica <em>REPLICA REDIS PRIMARY HOSTNAME</em>
-  ```
+   git config -f /data/user/common/cluster.conf cluster.mysql-master-replica <em>REPLICA MYSQL PRIMARY HOSTNAME</em>
+   git config -f /data/user/common/cluster.conf cluster.redis-master-replica <em>REPLICA REDIS PRIMARY HOSTNAME</em>
+   ```
 
 12. Enable MySQL to fail over automatically when you fail over to the passive replica nodes.
 
    ```shell
-  git config -f /data/user/common/cluster.conf cluster.mysql-auto-failover true
+   git config -f /data/user/common/cluster.conf cluster.mysql-auto-failover true
    ```
 
-  {% warning %}
+   {% warning %}
 
-  **Warning**: Review your cluster configuration file before proceeding.
+   **Warning**: Review your cluster configuration file before proceeding.
 
    - In the top-level `[cluster]` section, ensure that the values for `mysql-master-replica` and `redis-master-replica` are the correct hostnames for the passive nodes in the secondary datacenter that will serve as the MySQL and Redis primaries after a failover.
    - In each section for an active node named <code>[cluster "<em>ACTIVE NODE HOSTNAME</em>"]</code>, double-check the following key-value pairs.
````
````diff
@@ -194,9 +204,9 @@ For an example configuration, see "[Example configuration](#example-configuratio
    - `replica` should be configured as `enabled`.
    - Take the opportunity to remove sections for offline nodes that are no longer in use.
 
-  To review an example configuration, see "[Example configuration](#example-configuration)."
+   To review an example configuration, see "[Example configuration](#example-configuration)."
 
-  {% endwarning %}
+   {% endwarning %}
 
 13. Initialize the new cluster configuration. {% data reusables.enterprise.use-a-multiplexer %}
````
````diff
@@ -207,7 +217,7 @@ For an example configuration, see "[Example configuration](#example-configuratio
 14. After the initialization finishes, {% data variables.product.prodname_ghe_server %} displays the following message.
 
    ```shell
-  Finished cluster initialization
+   Finished cluster initialization
    ```
 
 {% data reusables.enterprise_clustering.apply-configuration %}
````
````diff
@@ -294,19 +304,27 @@ You can monitor the progress on any node in the cluster, using command-line tool
 
 - Monitor replication of databases:
 
-  /usr/local/share/enterprise/ghe-cluster-status-mysql
+  ```
+  /usr/local/share/enterprise/ghe-cluster-status-mysql
+  ```
 
 - Monitor replication of repository and Gist data:
 
-  ghe-spokes status
+  ```
+  ghe-spokes status
+  ```
 
 - Monitor replication of attachment and LFS data:
 
-  ghe-storage replication-status
+  ```
+  ghe-storage replication-status
+  ```
 
 - Monitor replication of Pages data:
 
-  ghe-dpages replication-status
+  ```
+  ghe-dpages replication-status
+  ```
 
 You can use `ghe-cluster-status` to review the overall health of your cluster. For more information, see "[Command-line utilities](/enterprise/admin/configuration/command-line-utilities#ghe-cluster-status)."
````
