
Commit 835f4c6

Merge branch 'main' into connor4312/eager-activeclient

2 parents: 3a3037e + 334c5b7

72 files changed: 3335 additions & 2253 deletions


.github/instructions/sessions.instructions.md

Lines changed: 7 additions & 0 deletions
@@ -11,3 +11,10 @@ When working on files under `src/vs/sessions/`, use these skills for detailed guidance

 - **`sessions`** skill — covers the full architecture: layering, folder structure, chat widget, menus, contributions, entry points, and development guidelines
 - **`agent-sessions-layout`** skill — covers the fixed layout structure, grid configuration, part visibility, editor modal, titlebar, sidebar footer, and implementation requirements
+
+## Touch & iOS Compatibility
+
+The Agents window can run on touch-capable platforms (notably iOS). Follow these rules for all DOM interaction code:
+
+- Do not use `EventType.MOUSE_DOWN`, `EventType.MOUSE_UP`, or `EventType.MOUSE_MOVE` with `addDisposableListener` directly — on iOS these events don't fire because the platform uses pointer events. Use `addDisposableGenericMouseDownListener`, `addDisposableGenericMouseUpListener`, or `addDisposableGenericMouseMoveListener` instead; they automatically select the correct event type per platform.
+- Add `touch-action: manipulation` in CSS on custom clickable elements (e.g. picker triggers, title bar pills, or other `<div>`/`<span>` elements styled as buttons) to eliminate the 300ms tap delay on touch devices. This is not needed for native `<button>` elements or standard VS Code widgets (quick picks, context menus, action bar items), which already handle touch behavior.
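The per-platform event selection that those generic listeners perform can be sketched roughly as follows. This is a simplified illustration, not VS Code's actual implementation; `genericMouseDownEventName` and `addGenericMouseDownListener` are hypothetical names:

```javascript
// Hypothetical sketch of why the generic listeners exist: pick the event name
// that actually fires on the current platform. On iOS, 'mousedown' never fires
// for touch input, so pointer events must be used instead.
function genericMouseDownEventName(win) {
	return 'PointerEvent' in win ? 'pointerdown' : 'mousedown';
}

// A generic listener helper then registers for the selected event and returns
// a disposable, mirroring the addDisposable* naming convention.
function addGenericMouseDownListener(win, element, handler) {
	const eventName = genericMouseDownEventName(win);
	element.addEventListener(eventName, handler);
	return { dispose: () => element.removeEventListener(eventName, handler) };
}
```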

.github/skills/heap-snapshot-analysis/SKILL.md

Lines changed: 41 additions & 0 deletions
@@ -33,13 +33,53 @@ If the user needs the agent to launch VS Code, drive a scenario, and capture snapshots

 Use the helpers in [parseSnapshot.ts](./helpers/parseSnapshot.ts) to load snapshots. The files are often >500MB and too large for `JSON.parse` as a string — the helpers use Buffer-based extraction. In scratchpad scripts, import helpers from `../helpers/*.ts`.

+For very large snapshots, the helper may still be too eager. Node cannot create a Buffer larger than roughly 2 GiB, so snapshots above that size can fail with `ERR_FS_FILE_TOO_LARGE` before parsing even begins. In that case, do not raise `--max-old-space-size` and retry the same full-file read; switch to a streaming script.
+
 ```typescript
 import { parseSnapshot, buildGraph } from '../helpers/parseSnapshot.ts';

 const data = parseSnapshot('/path/to/snapshot.heapsnapshot');
 const graph = buildGraph(data);
 ```
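The full-read-versus-stream decision above can be made up front by checking the file size before attempting a read. A minimal sketch, using the ~2 GiB threshold described in the text (the exact single-Buffer ceiling varies by Node version and platform; `chooseLoadStrategy` is a hypothetical helper):

```javascript
import { statSync } from 'node:fs';

// Roughly 2 GiB, per the guidance above; the real ceiling depends on the
// Node.js version and platform.
const MAX_SINGLE_BUFFER = 2 * 1024 ** 3;

// Hypothetical helper: decide whether a snapshot can be read whole or must
// be processed with streaming scratchpad scripts.
function chooseLoadStrategy(sizeInBytes) {
	return sizeInBytes >= MAX_SINGLE_BUFFER ? 'stream' : 'full-read';
}

// Typical use: chooseLoadStrategy(statSync(path).size)
```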

+#### Snapshots Larger Than 2 GiB
+
+When a snapshot is too large to load into a single Buffer, write scratchpad scripts that scan and parse only the sections needed for the question. Use [streamSnapshot.mjs](./helpers/streamSnapshot.mjs) for the common streaming primitives instead of copying them between scratch scripts.
+
+Useful tricks:
+
+- Find top-level section offsets first. Scan the file as bytes for markers like `"nodes":`, `"edges":`, `"strings":`, and `"trace_function_infos":`. This lets follow-up scripts jump directly to the large arrays instead of searching the whole file repeatedly.
+- Parse `snapshot.meta` separately from the small header at the start of the file. Use `meta.node_fields`, `meta.node_types`, `meta.edge_fields`, and `meta.edge_types` to avoid hard-coding tuple widths.
+- Stream numeric arrays in chunks. For `nodes` and `edges`, keep a small carryover string between chunks, split on commas, and process complete numeric tokens as they arrive.
+- Avoid materializing the full `strings` table unless the investigation truly needs it. If you only need suspicious names, collect string indexes from matching nodes/edges first, then resolve only those indexes in a second streaming pass.
+- If you do need many strings, store only short previews and category counters. Full source strings, ref-listing strings, and prompt payloads can dominate memory and make the analyzer itself become the leak.
+- Write intermediate outputs to files in the scratchpad. Large heap analysis is iterative and slow; cached node ids, offsets, and retainer traces save repeated multi-minute passes.
+- Prefer self-size attribution and field-level ownership for huge graphs. Full retained-size walks can wildly overcount shared services, roots, maps, and singleton caches.
+- When quantifying a suspected owner, count obvious owned fields separately: the wrapper object, key arrays, array elements, direct strings, and parent strings of sliced/concatenated strings. This often gives a better lower bound than a single direct-string bucket.
+- Be explicit about approximation boundaries. A field-level subtotal usually undercounts listeners/watchers/back-references, but it avoids the much worse problem of attributing the whole runtime to one object.
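The chunk-carryover trick in the list above can be sketched as a small generator. This is an illustrative simplification (it handles only non-negative integers separated by commas, as in the snapshot's `nodes`/`edges` arrays), not the helper's actual implementation:

```javascript
// Keep the trailing partial token between chunks so a number that straddles
// a chunk boundary is never split in two.
function* streamNumbers(chunks) {
	let carry = '';
	for (const chunk of chunks) {
		const parts = (carry + chunk).split(',');
		carry = parts.pop(); // possibly incomplete final token
		for (const part of parts) {
			if (part.trim() !== '') {
				yield Number(part);
			}
		}
	}
	if (carry.trim() !== '') {
		yield Number(carry); // flush the last complete token
	}
}
```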
+Example large-snapshot workflow:
+
+```javascript
+import { findArrayStart, findTokenOffsets, parseMeta, streamNumberTuples } from '../../helpers/streamSnapshot.mjs';
+
+const { size, offsets } = findTokenOffsets(snapshotPath);
+const meta = parseMeta(snapshotPath);
+const nodeFieldCount = meta.node_fields.length;
+const nodesStart = findArrayStart(snapshotPath, offsets.get('"nodes"'));
+
+streamNumberTuples(snapshotPath, nodesStart, offsets.get('"edges"'), nodeFieldCount, (node, nodeIndex) => {
+	// node is reused for speed; copy it before storing.
+});
+```
+
+```bash
+cd .github/skills/heap-snapshot-analysis
+node --max-old-space-size=24576 scratchpad/YYYY-MM-DD-topic/findOffsets.mjs /path/to/Heap.heapsnapshot
+node --max-old-space-size=24576 scratchpad/YYYY-MM-DD-topic/streamAnalyze.mjs /path/to/Heap.heapsnapshot > scratchpad/YYYY-MM-DD-topic/streamAnalyze.out
+node --max-old-space-size=24576 scratchpad/YYYY-MM-DD-topic/traceNodes.mjs /path/to/Heap.heapsnapshot 12345 67890 > scratchpad/YYYY-MM-DD-topic/traceNodes.out
+```

 ### 2. Compare Before/After

 Use [compareSnapshots.ts](./helpers/compareSnapshots.ts) to diff two snapshots:
@@ -134,6 +174,7 @@ override dispose() {

 ### False Retainers to Watch For

+- **DevTools debugger global handles**: If the snapshot was captured after opening DevTools, large source strings, compiled scripts, preview data, inspected objects, or debugger bookkeeping can be retained by paths like `DevTools debugger (internal)` → `synthetic::(Global handles)` → GC roots. Treat these as debugger-induced until proven otherwise. They may not exist in the app before DevTools opens, and they should not be confused with application-owned leaks.
 - **`DevToolsLogger._aliveInstances`** (Map): Enabled by `VSCODE_DEV_DEBUG_OBSERVABLES` env var. Retains ALL observed observables. Check if this is active before investigating observable-rooted paths.
 - **`GCBasedDisposableTracker` (FinalizationRegistry)**: If `register(target, held, target)` is used (target === unregister token), creates a strong self-reference preventing GC. Currently commented out in production.
 - **WeakMap backing arrays**: Show up in retainer paths but don't prevent collection.
.github/skills/heap-snapshot-analysis/helpers/streamSnapshot.mjs

Lines changed: 260 additions & 0 deletions
@@ -0,0 +1,260 @@
+/*---------------------------------------------------------------------------------------------
+ * Copyright (c) Microsoft Corporation. All rights reserved.
+ * Licensed under the MIT License. See License.txt in the project root for license information.
+ *--------------------------------------------------------------------------------------------*/
+
+import { closeSync, openSync, readSync, statSync } from 'fs';
+
+export const defaultTopLevelTokens = [
+	'"meta"',
+	'"nodes"',
+	'"edges"',
+	'"trace_function_infos"',
+	'"trace_tree"',
+	'"samples"',
+	'"locations"',
+	'"strings"'
+];
+
+export function formatBytes(bytes) {
+	if (Math.abs(bytes) < 1024) {
+		return `${bytes} B`;
+	}
+	if (Math.abs(bytes) < 1024 * 1024) {
+		return `${(bytes / 1024).toFixed(1)} KB`;
+	}
+	return `${(bytes / 1024 / 1024).toFixed(1)} MB`;
+}
+
+export function findTokenOffsets(path, tokens = defaultTopLevelTokens, options = {}) {
+	const stat = statSync(path);
+	const fd = openSync(path, 'r');
+	const chunkSize = options.chunkSize ?? 8 * 1024 * 1024;
+	const overlap = options.overlap ?? 256;
+	const found = new Map();
+	let previous = Buffer.alloc(0);
+	let position = 0;
+
+	try {
+		while (position < stat.size && found.size < tokens.length) {
+			const toRead = Math.min(chunkSize, stat.size - position);
+			const chunk = Buffer.allocUnsafe(toRead);
+			const bytesRead = readSync(fd, chunk, 0, toRead, position);
+			if (bytesRead <= 0) {
+				break;
+			}
+
+			const combined = Buffer.concat([previous, chunk.subarray(0, bytesRead)]);
+
+			for (const token of tokens) {
+				if (found.has(token)) {
+					continue;
+				}
+
+				const index = combined.indexOf(token);
+				if (index !== -1) {
+					found.set(token, position - previous.length + index);
+				}
+			}
+
+			previous = combined.subarray(Math.max(0, combined.length - overlap));
+			position += bytesRead;
+		}
+	} finally {
+		closeSync(fd);
+	}
+
+	return { size: stat.size, offsets: found };
+}
+
+export function readRange(path, start, length) {
+	const fd = openSync(path, 'r');
+	const buffer = Buffer.allocUnsafe(length);
+	let offset = 0;
+
+	try {
+		while (offset < length) {
+			const bytesRead = readSync(fd, buffer, offset, length - offset, start + offset);
+			if (bytesRead === 0) {
+				return buffer.subarray(0, offset);
+			}
+			offset += bytesRead;
+		}
+		return buffer;
+	} finally {
+		closeSync(fd);
+	}
+}
+
+export function parseMeta(path, options = {}) {
+	const maxBytes = options.maxBytes ?? 1024 * 1024;
+	const buffer = readRange(path, 0, maxBytes);
+	const metaPosition = buffer.indexOf(Buffer.from('"meta"'));
+	if (metaPosition === -1) {
+		throw new Error('Unable to find snapshot meta section');
+	}
+
+	const start = buffer.indexOf(Buffer.from('{'), metaPosition);
+	if (start === -1) {
+		throw new Error('Unable to find snapshot meta object start');
+	}
+
+	let depth = 0;
+	for (let i = start; i < buffer.length; i++) {
+		if (buffer[i] === 0x22) {
+			i++;
+			while (i < buffer.length) {
+				if (buffer[i] === 0x5c) {
+					i += 2;
+					continue;
+				}
+				if (buffer[i] === 0x22) {
+					break;
+				}
+				i++;
+			}
+			continue;
+		}
+
+		if (buffer[i] === 0x7b) {
+			depth++;
+		} else if (buffer[i] === 0x7d) {
+			depth--;
+			if (depth === 0) {
+				return JSON.parse(buffer.subarray(start, i + 1).toString('utf8'));
+			}
+		}
+	}
+
+	throw new Error(`Unable to parse snapshot meta within first ${formatBytes(maxBytes)}`);
+}
+
+export function findArrayStart(path, tokenOffset, options = {}) {
+	const windowSize = options.windowSize ?? 4096;
+	const buffer = readRange(path, tokenOffset, windowSize);
+	const bracket = buffer.indexOf(Buffer.from('['));
+	if (bracket === -1) {
+		throw new Error(`Unable to find array start near offset ${tokenOffset}`);
+	}
+	return tokenOffset + bracket + 1;
+}
+
+export function streamNumberArray(path, start, end, onNumber, options = {}) {
+	const fd = openSync(path, 'r');
+	const chunkSize = options.chunkSize ?? 16 * 1024 * 1024;
+	const buffer = Buffer.allocUnsafe(chunkSize);
+	let position = start;
+	let number = 0;
+	let inNumber = false;
+	let numberIndex = 0;
+
+	try {
+		while (position < end) {
+			const toRead = Math.min(chunkSize, end - position);
+			const bytesRead = readSync(fd, buffer, 0, toRead, position);
+			if (bytesRead <= 0) {
+				break;
+			}
+
+			for (let i = 0; i < bytesRead; i++) {
+				const code = buffer[i];
+				if (code >= 0x30 && code <= 0x39) {
+					number = number * 10 + code - 0x30;
+					inNumber = true;
+				} else if (inNumber) {
+					onNumber(number, numberIndex++);
+					number = 0;
+					inNumber = false;
+					if (code === 0x5d) {
+						return numberIndex;
+					}
+				} else if (code === 0x5d) {
+					return numberIndex;
+				}
+			}
+
+			position += bytesRead;
+		}
+
+		if (inNumber) {
+			onNumber(number, numberIndex++);
+		}
+		return numberIndex;
+	} finally {
+		closeSync(fd);
+	}
+}
+
+/**
+ * Streams fixed-size tuples from a number array.
+ *
+ * By default, the same mutable tuple array instance is reused for each callback
+ * invocation to avoid per-tuple allocations. Callers must not retain that array
+ * reference after onTuple returns unless options.copyTuple is enabled.
+ */
+export function streamNumberTuples(path, start, end, tupleSize, onTuple, options = {}) {
+	const tuple = new Array(tupleSize);
+	const copyTuple = options.copyTuple === true;
+	let tupleIndex = 0;
+	let fieldIndex = 0;
+
+	const numberCount = streamNumberArray(path, start, end, value => {
+		tuple[fieldIndex++] = value;
+		if (fieldIndex === tupleSize) {
+			onTuple(copyTuple ? tuple.slice() : tuple, tupleIndex++);
+			fieldIndex = 0;
+		}
+	}, options);
+
+	if (fieldIndex !== 0) {
+		throw new Error(`Number array ended with an incomplete tuple: ${fieldIndex}/${tupleSize}`);
+	}
+
+	return { numberCount, tupleCount: tupleIndex };
+}
+
+export function parseStrings(path, stringsTokenOffset, options = {}) {
+	const normalizedOptions = typeof options === 'number' ? { fileSize: options } : options;
+	const fileSize = normalizedOptions.fileSize ?? statSync(path).size;
+	const length = fileSize - stringsTokenOffset;
+	const maxBytes = normalizedOptions.maxBytes ?? 512 * 1024 * 1024;
+
+	if (length > maxBytes) {
+		throw new Error(`Refusing to parse ${formatBytes(length)} strings section into one Buffer. Pass a larger maxBytes value only if this is intentional.`);
+	}
+
+	const buffer = readRange(path, stringsTokenOffset, length);
+	const start = buffer.indexOf(Buffer.from('['));
+	if (start === -1) {
+		throw new Error(`Unable to find strings array near offset ${stringsTokenOffset}`);
+	}
+
+	let depth = 0;
+	for (let i = start; i < buffer.length; i++) {
+		if (buffer[i] === 0x22) {
+			i++;
+			while (i < buffer.length) {
+				if (buffer[i] === 0x5c) {
+					i += 2;
+					continue;
+				}
+				if (buffer[i] === 0x22) {
+					break;
+				}
+				i++;
+			}
+			continue;
+		}
+
+		if (buffer[i] === 0x5b) {
+			depth++;
+		} else if (buffer[i] === 0x5d) {
+			depth--;
+			if (depth === 0) {
+				return JSON.parse(buffer.subarray(start, i + 1).toString('utf8'));
+			}
+		}
+	}
+
+	throw new Error('Unable to parse strings array');
+}
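As a standalone illustration of the brace-matching scan that `parseMeta` performs, the following sketch extracts the first balanced JSON object from a buffer while skipping string literals. It is a simplified reimplementation for demonstration, not the helper itself:

```javascript
// Scan forward from an opening '{', counting brace depth. Bytes inside string
// literals (0x22 = '"', 0x5c = '\') are skipped so braces in string values do
// not confuse the depth counter.
function extractJsonObject(buffer, start) {
	let depth = 0;
	for (let i = start; i < buffer.length; i++) {
		if (buffer[i] === 0x22) { // '"' — skip the string's contents
			i++;
			while (i < buffer.length && buffer[i] !== 0x22) {
				if (buffer[i] === 0x5c) { i++; } // skip the escaped character
				i++;
			}
			continue;
		}
		if (buffer[i] === 0x7b) { // '{'
			depth++;
		} else if (buffer[i] === 0x7d) { // '}'
			depth--;
			if (depth === 0) {
				return JSON.parse(buffer.subarray(start, i + 1).toString('utf8'));
			}
		}
	}
	throw new Error('Unbalanced braces');
}
```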

build/lib/i18n.resources.json

Lines changed: 4 additions & 4 deletions
@@ -636,6 +636,10 @@
 		"name": "vs/sessions/contrib/agentFeedback",
 		"project": "vscode-sessions"
 	},
+	{
+		"name": "vs/sessions/contrib/agentHost",
+		"project": "vscode-sessions"
+	},
 	{
 		"name": "vs/sessions/contrib/aiCustomizationTreeView",
 		"project": "vscode-sessions"
@@ -684,10 +688,6 @@
 		"name": "vs/sessions/contrib/logs",
 		"project": "vscode-sessions"
 	},
-	{
-		"name": "vs/sessions/contrib/localAgentHost",
-		"project": "vscode-sessions"
-	},
 	{
 		"name": "vs/sessions/contrib/remoteAgentHost",
 		"project": "vscode-sessions"

extensions/copilot/package-lock.json

Lines changed: 3 additions & 3 deletions
Some generated files are not rendered by default.
