Commit f92015a

fix: address review feedback on runtime detection and codex version pin
- Replace non-ASCII em dashes with ASCII "--" per repo encoding rules
- Fix false-positive runtime detection: use word-boundary regex instead of substring checks in _detect_runtime(), is_runtime_cmd, and _transform_runtime_command(); also handle .exe/.cmd extensions
- Route _transform_runtime_command via _detect_runtime() to prevent model names like "gpt-5.3-codex" from triggering the codex branch
- Add .cmd/.bat candidates to Windows executable resolution in _execute_runtime_command, matching how setup-llm.ps1 installs wrappers
- Include llm in APM runtimes dir detection (setup-llm installs llm/llm.cmd to ~/.apm/runtimes/)
- Correct codex version pin: wire_api="chat" was removed in v0.95.0, not v0.116/v0.118; pin to rust-v0.94.0 (last compatible release)
- Add ensure_path_within() containment checks in _resolve_prompt_file and _discover_prompt_file to prevent symlinks escaping project or apm_modules boundaries
- Update docs (runtime-compatibility, agent-workflows, cli-commands) to reflect codex version pinning and updated runtime preference order
- Update tests: mock Path.exists in Windows resolution tests; adjust symlink containment test to expect PathTraversalError
1 parent 3a35477 commit f92015a

File tree

8 files changed, +73 -42 lines changed


docs/src/content/docs/guides/agent-workflows.md

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ apm runtime list
 | Runtime | Requirements | Notes |
 |---------|-------------|-------|
 | Copilot CLI | Node.js v22+, npm v10+ | Recommended. MCP config at `~/.copilot/` |
-| Codex | Node.js | Set `GITHUB_TOKEN` for GitHub Models support |
+| Codex | Node.js | Pinned to v0.94.0 for GitHub Models compatibility. Set `GITHUB_TOKEN` |
 | LLM | Python 3.10+ | Supports multiple model providers |
 
 **Copilot CLI** is the recommended runtime — it requires no API keys for installation and integrates with GitHub Copilot directly.

docs/src/content/docs/integrations/runtime-compatibility.md

Lines changed: 4 additions & 2 deletions
@@ -15,7 +15,7 @@ APM acts as a runtime package manager, downloading and configuring LLM runtimes
 | Runtime | Description | Best For | Configuration |
 |---------|-------------|----------|---------------|
 | [**GitHub Copilot CLI**](https://github.com/github/copilot-cli) | GitHub's Copilot CLI (Recommended) | Advanced AI coding, native MCP support | Auto-configured, no auth needed |
-| [**OpenAI Codex**](https://github.com/openai/codex) | OpenAI's Codex CLI | Code tasks, GitHub Models API | Auto-configured with GitHub Models |
+| [**OpenAI Codex**](https://github.com/openai/codex) | OpenAI's Codex CLI (pinned to v0.94.0) | Code tasks, GitHub Models API | Auto-configured with GitHub Models |
 | [**LLM Library**](https://llm.datasette.io/en/stable/index.html) | Simon Willison's `llm` CLI | General use, many providers | Manual API key setup |
 
 ## Quick Setup
@@ -75,6 +75,8 @@ scripts:
 
 APM automatically downloads, installs, and configures the Codex CLI with GitHub Models for free usage.
 
+> **Note:** Codex v0.95.0+ dropped Chat Completions API support (`wire_api="chat"`) in favor of the Responses API, which GitHub Models does not support. APM pins the default Codex version to `rust-v0.94.0` (the last compatible release). If you need a newer Codex version, pass `--version` explicitly and configure a compatible API provider.
+
 ### Setup
 
 #### 1. Install via APM
@@ -83,7 +85,7 @@ apm runtime setup codex
 ```
 
 This automatically:
-- Downloads the latest Codex binary for your platform
+- Downloads Codex binary (v0.94.0) for your platform
 - Installs to `~/.apm/runtimes/codex`
 - Creates configuration for GitHub Models (`github/gpt-4o`)
- Updates your PATH
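The generated provider configuration is not shown in this commit; as a purely hypothetical sketch (all keys and the endpoint URL below are assumptions, not taken from this diff), a Codex `config.toml` using the Chat Completions wire protocol that works up to v0.94.0 might look like:

```toml
# Hypothetical sketch only -- the exact file APM writes is not part of this commit.
model = "gpt-4o"
model_provider = "github-models"

[model_providers.github-models]
name = "GitHub Models"
base_url = "https://models.inference.ai.azure.com"  # assumed endpoint
env_key = "GITHUB_TOKEN"
wire_api = "chat"  # removed in Codex v0.95.0+, hence the rust-v0.94.0 pin
```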

docs/src/content/docs/reference/cli-commands.md

Lines changed: 2 additions & 2 deletions
@@ -1416,7 +1416,7 @@ apm runtime COMMAND [OPTIONS]
 
 **Supported Runtimes:**
 - **`copilot`** - GitHub Copilot coding agent
-- **`codex`** - OpenAI Codex CLI with GitHub Models support
+- **`codex`** - OpenAI Codex CLI with GitHub Models support (pinned to v0.94.0 for compatibility)
 - **`llm`** - Simon Willison's LLM library with multiple providers
 
 #### `apm runtime setup` - Install AI runtime
@@ -1497,6 +1497,6 @@ apm runtime status
 ```
 
 **Output includes:**
-- Runtime preference order (copilot → codex → llm)
+- Runtime preference order (APM runtimes: copilot, llm; then PATH: llm > copilot > codex)
 - Currently active runtime
- Next steps if no runtime is available
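The documented preference order can be sketched as a small resolver. This is a simplified illustration of the logic in `_detect_installed_runtime` in `script_runner.py`, with the `which` lookup injectable for testing; the function name and signature here are illustrative, not APM's API:

```python
import shutil
from pathlib import Path


def detect_installed_runtime(
    apm_runtimes: Path = Path.home() / ".apm" / "runtimes",
    which=shutil.which,
) -> str:
    """Simplified sketch of APM's runtime preference order."""
    # 1. APM-managed runtimes dir: copilot, then llm. Codex is skipped here --
    #    an APM-installed codex would be v0.95.0+, which is incompatible with
    #    GitHub Models' Chat Completions API.
    for name in ("copilot", "llm"):
        for candidate in (apm_runtimes / name, apm_runtimes / f"{name}.exe"):
            if candidate.exists() and candidate.stat().st_size > 0:
                return name
    # 2. PATH fallback: llm > copilot > codex (a PATH codex may be an
    #    older, still-compatible binary, so it stays as last resort).
    for name in ("llm", "copilot", "codex"):
        if which(name):
            return name
    return "none"
```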

scripts/runtime/setup-codex.ps1

Lines changed: 2 additions & 2 deletions
@@ -4,8 +4,8 @@
 param(
     [switch]$Vanilla,
     # Pinned to last version compatible with GitHub Models (wire_api="chat").
-    # rust-v0.118.0+ requires wire_api="responses" which GitHub Models does not support.
-    [string]$Version = "rust-v0.117.0"
+    # rust-v0.95.0+ requires wire_api="responses" which GitHub Models does not support.
+    [string]$Version = "rust-v0.94.0"
 )
 
 $ErrorActionPreference = "Stop"

scripts/runtime/setup-codex.sh

Lines changed: 2 additions & 2 deletions
@@ -24,8 +24,8 @@ source "$SCRIPT_DIR/setup-common.sh"
 # Configuration
 CODEX_REPO="openai/codex"
 # Pinned to last version compatible with GitHub Models (wire_api="chat").
-# rust-v0.118.0+ requires wire_api="responses" which GitHub Models does not support.
-CODEX_VERSION="rust-v0.117.0" # Default version
+# rust-v0.95.0+ requires wire_api="responses" which GitHub Models does not support.
+CODEX_VERSION="rust-v0.94.0" # Default version
 VANILLA_MODE=false
 
 # Parse command line arguments

src/apm_cli/core/script_runner.py

Lines changed: 57 additions & 30 deletions
@@ -12,6 +12,7 @@
 
 from .token_manager import setup_runtime_environment
 from ..output.script_formatters import ScriptExecutionFormatter
+from ..utils.path_security import ensure_path_within
 
 
 class ScriptRunner:
@@ -110,7 +111,7 @@ def run_script(self, script_name: str, params: Dict[str, str]) -> bool:
             error_msg += f"Available scripts in apm.yml: {available}\n"
             error_msg += f"\nTo get started, create a prompt file first:\n"
             error_msg += f"  echo '# My agent prompt' > {script_name}.prompt.md\n"
-            error_msg += f"\nThen run again APM will auto-discover it.\n"
+            error_msg += f"\nThen run again -- APM will auto-discover it.\n"
             error_msg += f"\nOr define a script explicitly in apm.yml:\n"
             error_msg += f"  scripts:\n"
             error_msg += f"    {script_name}: copilot {script_name}.prompt.md\n"
@@ -264,7 +265,8 @@ def _auto_compile_prompts(
 
         # Check if this is a runtime command (copilot, codex, llm) before transformation
         is_runtime_cmd = any(
-            runtime in command for runtime in ["copilot", "codex", "llm"]
+            re.search(r"(?:^|[\s/\\])" + runtime + r"(?:\.exe|\.cmd)?(?:\s|$)", command)
+            for runtime in ["copilot", "codex", "llm"]
         ) and re.search(re.escape(prompt_file), command)
 
         # Transform command based on runtime pattern
# Transform command based on runtime pattern
@@ -343,12 +345,17 @@ def _transform_runtime_command(
343345
result += f" {args_after_file}"
344346
return result
345347

346-
# Handle individual runtime patterns without environment variables
348+
# Handle individual runtime patterns without environment variables.
349+
# Detect the runtime from the executable (first token) to avoid
350+
# false positives when a runtime name appears in flags or model names.
351+
detected = self._detect_runtime(command)
352+
_ext = r"(?:\.exe|\.cmd)?"
353+
_pf = re.escape(prompt_file)
347354

348355
# Handle "codex [args] file.prompt.md [more_args]" -> "codex exec [args] [more_args]"
349-
if re.search(r"codex\s+.*" + re.escape(prompt_file), command):
356+
if detected == "codex" and re.search(r"codex" + _ext + r"\s+.*" + _pf, command):
350357
match = re.search(
351-
r"codex\s+(.*?)(" + re.escape(prompt_file) + r")(.*?)$", command
358+
r"codex" + _ext + r"\s+(.*?)(" + _pf + r")(.*?)$", command
352359
)
353360
if match:
354361
args_before_file = match.group(1).strip()
@@ -362,9 +369,9 @@ def _transform_runtime_command(
362369
return result
363370

364371
# Handle "copilot [args] file.prompt.md [more_args]" -> "copilot [args] [more_args]"
365-
elif re.search(r"copilot\s+.*" + re.escape(prompt_file), command):
372+
elif detected == "copilot" and re.search(r"copilot" + _ext + r"\s+.*" + _pf, command):
366373
match = re.search(
367-
r"copilot\s+(.*?)(" + re.escape(prompt_file) + r")(.*?)$", command
374+
r"copilot" + _ext + r"\s+(.*?)(" + _pf + r")(.*?)$", command
368375
)
369376
if match:
370377
args_before_file = match.group(1).strip()
@@ -381,9 +388,9 @@ def _transform_runtime_command(
381388
return result
382389

383390
# Handle "llm [args] file.prompt.md [more_args]" -> "llm [args] [more_args]"
384-
elif re.search(r"llm\s+.*" + re.escape(prompt_file), command):
391+
elif detected == "llm" and re.search(r"llm" + _ext + r"\s+.*" + _pf, command):
385392
match = re.search(
386-
r"llm\s+(.*?)(" + re.escape(prompt_file) + r")(.*?)$", command
393+
r"llm" + _ext + r"\s+(.*?)(" + _pf + r")(.*?)$", command
387394
)
388395
if match:
389396
args_before_file = match.group(1).strip()
@@ -413,15 +420,15 @@ def _detect_runtime(self, command: str) -> str:
             Name of the detected runtime (copilot, codex, llm, or unknown)
         """
         command_lower = command.lower().strip()
-        # Check for runtime keywords anywhere in the command, not just at the start
-        if "copilot" in command_lower:
-            return "copilot"
-        elif "codex" in command_lower:
-            return "codex"
-        elif "llm" in command_lower:
-            return "llm"
-        else:
-            return "unknown"
+        # Check for runtime keywords as standalone tokens or at end of a path
+        # (e.g. /path/to/copilot, copilot.exe, copilot.cmd)
+        for runtime in ["copilot", "codex", "llm"]:
+            if re.search(
+                r"(?:^|[\s/\\])" + runtime + r"(?:\.exe|\.cmd)?(?:\s|$)",
+                command_lower,
+            ):
+                return runtime
+        return "unknown"
 
     def _execute_runtime_command(
         self, command: str, content: str, env: dict
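The word-boundary pattern can be exercised standalone. This demo mirrors the patched `_detect_runtime` token matching (as a free function rather than a method) and shows why a model name like `gpt-5.3-codex` no longer triggers the codex branch:

```python
import re


def detect_runtime(command: str) -> str:
    """Mirror of the patched _detect_runtime token matching."""
    command_lower = command.lower().strip()
    for runtime in ["copilot", "codex", "llm"]:
        # Match the runtime name only as a standalone token or as a path
        # basename, optionally with a Windows .exe/.cmd extension.
        if re.search(
            r"(?:^|[\s/\\])" + runtime + r"(?:\.exe|\.cmd)?(?:\s|$)",
            command_lower,
        ):
            return runtime
    return "unknown"


print(detect_runtime("codex exec task.prompt.md"))        # codex
print(detect_runtime(r"C:\tools\copilot.exe -p hi"))      # copilot
print(detect_runtime("llm -m github/gpt-5.3-codex run"))  # llm, not codex
print(detect_runtime("echo gpt-5.3-codex"))               # unknown
```

The old substring check (`"codex" in command`) would have returned `codex` for the last two commands; the anchored pattern only fires when the name is the executable token itself.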
@@ -505,8 +512,13 @@ def _execute_runtime_command(
         if sys.platform == "win32" and actual_command_args:
             exe_name = actual_command_args[0]
             apm_runtimes = Path.home() / ".apm" / "runtimes"
-            # Check APM runtimes directory first
-            apm_candidates = [apm_runtimes / exe_name, apm_runtimes / f"{exe_name}.exe"]
+            # Check APM runtimes directory first (include .exe, .cmd, .bat wrappers)
+            apm_candidates = [
+                apm_runtimes / exe_name,
+                apm_runtimes / f"{exe_name}.exe",
+                apm_runtimes / f"{exe_name}.cmd",
+                apm_runtimes / f"{exe_name}.bat",
+            ]
             apm_resolved = next((str(c) for c in apm_candidates if c.exists()), None)
             if apm_resolved:
                 actual_command_args[0] = apm_resolved
@@ -574,6 +586,17 @@ def _discover_prompt_file(self, name: str) -> Optional[Path]:
             if skill_file.exists():
                 matches.append(skill_file)
 
+        # Filter out matches that escape the apm_modules directory
+        # (e.g. symlinks pointing outside the project)
+        safe_matches = []
+        for m in matches:
+            try:
+                ensure_path_within(m, apm_modules)
+                safe_matches.append(m)
+            except ValueError:
+                pass
+        matches = safe_matches
+
         if len(matches) == 0:
             return None
         elif len(matches) == 1:
@@ -861,10 +884,10 @@ def _detect_installed_runtime(self) -> str:
         system-level stubs (e.g. GitHub CLI copilot extensions).
 
         Priority:
-        1. APM runtimes dir: copilot (codex excluded v0.116+ is
+        1. APM runtimes dir: copilot, llm (codex excluded -- v0.95.0+ is
            incompatible with GitHub Models' Chat Completions API)
-        2. PATH: llm > copilot > codex (llm uses Chat Completions, works
-           with GitHub Models even when codex dropped that API)
+        2. PATH: llm > copilot > codex (codex v0.95.0+ dropped Chat
+           Completions, so PATH codex is last resort for older binaries)
 
         Returns:
             Name of detected runtime
@@ -877,11 +900,10 @@ def _detect_installed_runtime(self) -> str:
         apm_runtimes = Path.home() / ".apm" / "runtimes"
 
         # 1. Check APM-managed runtimes directory first (highest priority).
-        # Only copilot is checked here — codex installed via APM runtimes
-        # will be v0.116+ which dropped Chat Completions support and is
-        # incompatible with GitHub Models.
-        # llm is checked via PATH only (installed as a Python package).
-        for name in ("copilot",):
+        # copilot and llm are checked here -- codex installed via APM
+        # runtimes will be v0.95.0+ which dropped Chat Completions support
+        # and is incompatible with GitHub Models.
+        for name in ("copilot", "llm"):
             candidates = [
                 apm_runtimes / name,
                 apm_runtimes / f"{name}.exe",
@@ -893,7 +915,7 @@ def _detect_installed_runtime(self) -> str:
                 if exe.stat().st_size > 0:
                     return name
 
-        # 2. Fall back to PATH prefer llm (uses Chat Completions, works with
+        # 2. Fall back to PATH -- prefer llm (uses Chat Completions, works with
         #    GitHub Models even when codex has dropped that API format)
         if shutil.which("llm"):
             return "llm"
@@ -927,7 +949,7 @@ def _generate_runtime_command(self, runtime: str, prompt_file: Path) -> str:
             # Codex CLI with default sandbox and git repo check skip
             return f"codex -s workspace-write --skip-git-repo-check {prompt_file}"
         elif runtime == "llm":
-            # llm CLI uses Chat Completions, compatible with GitHub Models
+            # llm CLI -- uses Chat Completions, compatible with GitHub Models
             return f"llm -m github/gpt-4o {prompt_file}"
         else:
             raise ValueError(f"Unsupported runtime: {runtime}")
@@ -999,16 +1021,19 @@ def _resolve_prompt_file(self, prompt_file: str) -> Path:
             FileNotFoundError: If prompt file is not found in local or dependency modules
         """
         prompt_path = Path(prompt_file)
+        project_root = Path.cwd()
 
         # First check if it exists in current directory (local)
         if prompt_path.exists():
+            ensure_path_within(prompt_path, project_root)
             return prompt_path
 
         # Check in common project directories
         common_dirs = [".github/prompts", ".apm/prompts"]
         for common_dir in common_dirs:
             common_path = Path(common_dir) / prompt_file
             if common_path.exists():
+                ensure_path_within(common_path, project_root)
                 return common_path
 
         # If not found locally, search in dependency modules
@@ -1024,12 +1049,14 @@ def _resolve_prompt_file(self, prompt_file: str) -> Path:
             # Check in the root of the repository
             dep_prompt_path = repo_dir / prompt_file
             if dep_prompt_path.exists():
+                ensure_path_within(dep_prompt_path, apm_modules_dir)
                 return dep_prompt_path
 
             # Also check in common subdirectories
             for subdir in ["prompts", ".", "workflows"]:
                 sub_prompt_path = repo_dir / subdir / prompt_file
                 if sub_prompt_path.exists():
+                    ensure_path_within(sub_prompt_path, apm_modules_dir)
                     return sub_prompt_path
 
         # If still not found, raise an error with helpful message

tests/unit/test_runtime_windows.py

Lines changed: 2 additions & 0 deletions
@@ -193,6 +193,7 @@ def test_execute_runtime_command_uses_shlex_on_windows(self):
         env = {"PATH": "/usr/bin"}
 
         with patch("sys.platform", "win32"), \
+             patch("pathlib.Path.exists", return_value=False), \
              patch("apm_cli.core.script_runner.shutil.which", return_value=None), \
              patch("subprocess.run", return_value=MagicMock(returncode=0)) as mock_run:
             runner._execute_runtime_command("codex --quiet", "prompt content", env)
@@ -206,6 +207,7 @@ def test_execute_runtime_command_preserves_quotes_on_windows(self):
         env = {"PATH": "/usr/bin"}
 
         with patch("sys.platform", "win32"), \
+             patch("pathlib.Path.exists", return_value=False), \
              patch("apm_cli.core.script_runner.shutil.which", return_value=None), \
              patch("subprocess.run", return_value=MagicMock(returncode=0)) as mock_run:
             runner._execute_runtime_command(

tests/unit/test_symlink_containment.py

Lines changed: 3 additions & 3 deletions
@@ -41,8 +41,9 @@ def tearDown(self):
         shutil.rmtree(self.tmpdir, ignore_errors=True)
 
     def test_symlinked_prompt_outside_project_rejected(self):
-        """Symlinked .prompt.md is rejected with clear error message."""
+        """Symlinked .prompt.md pointing outside project is rejected."""
         from apm_cli.core.script_runner import PromptCompiler
+        from apm_cli.utils.path_security import PathTraversalError
 
         prompts_dir = self.project / ".apm" / "prompts"
         prompts_dir.mkdir(parents=True)
@@ -53,9 +54,8 @@ def test_symlinked_prompt_outside_project_rejected(self):
         old_cwd = os.getcwd()
         try:
             os.chdir(self.project)
-            with self.assertRaises(FileNotFoundError) as ctx:
+            with self.assertRaises(PathTraversalError):
                 compiler._resolve_prompt_file(".apm/prompts/evil.prompt.md")
-            self.assertIn("symlink", str(ctx.exception).lower())
         finally:
             os.chdir(old_cwd)
