
Commit 7da7095: Structure for Platform / EULA

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>

1 parent 6e7b1ed

File tree: 9 files changed, +380 −12 lines

docs/index.md

Lines changed: 2 additions & 0 deletions
@@ -5,6 +5,8 @@ SlicerVM gives you **real Linux, in milliseconds**.
 Full VMs with systemd and a real kernel, on your Mac, your servers, or your cloud.
 Slicer is built for teams that need isolation and control without moving code and data to third-party infrastructure.
 
+By installing and starting Slicer, you agree to the [End User License Agreement (EULA)](https://slicervm.com/eula/).
+
 ## Where Slicer fits
 
 Slicer is useful for both one-off/ephemeral workloads and long-running Linux services.

docs/mac/overview.md

Lines changed: 2 additions & 0 deletions
@@ -4,6 +4,8 @@ Slicer for Mac was built from the ground up to run Linux microVMs on Apple Silic
 
 It uses the same familiar API and CLI from Slicer for Linux, using Apple's native [Virtualization framework](https://developer.apple.com/documentation/virtualization) instead of KVM. Only Arm64 hosts and guests are supported, however Rosetta support can be enabled to run Intel/AMD binaries.
 
+By installing and starting Slicer, you agree to the [End User License Agreement (EULA)](https://slicervm.com/eula/).
+
 Typical use-cases include:
 
 * Real Linux with systemd instead of POSIX compatibility

docs/mac/sandboxes.md

Lines changed: 16 additions & 0 deletions
@@ -62,6 +62,22 @@ Stop and remove the sandbox when you are done:
 slicer vm delete sbox-1
 ```
 
+## Persistent sandboxes
+
+By default, sandboxes are ephemeral - they are deleted when the daemon stops. To create a sandbox that retains its disk and survives restarts, use `--persistent`:
+
+```bash
+slicer vm launch sbox --persistent
+```
+
+If the daemon restarts (e.g. after a reboot or sleep), persistent sandboxes are not automatically re-launched. To bring one back:
+
+```bash
+slicer relaunch sbox-1
+```
+
+This boots the VM from its existing disk. Any data written to the filesystem is still there.
+
 ## Use cases
 
 Sandboxes are designed for short-lived workloads and experimentation:
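
Since persistent sandboxes are not re-launched automatically after a daemon restart, a supervisor could compute which ones still need `slicer relaunch`. A minimal sketch in Python (the helper and hostnames are illustrative, not part of Slicer):

```python
def to_relaunch(persistent_vms, running):
    """Return the persistent sandboxes that are defined but not currently running."""
    running_set = set(running)
    return [vm for vm in persistent_vms if vm not in running_set]

# After a reboot, only sbox-2 was brought back by hand;
# sbox-1 still needs `slicer relaunch sbox-1`.
print(to_relaunch(["sbox-1", "sbox-2"], ["sbox-2"]))  # → ['sbox-1']
```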

docs/platform/ephemeral-tasks.md

Lines changed: 10 additions & 11 deletions
@@ -1,17 +1,16 @@
-# Ephemeral Tasks
+# Run a Task in Slicer
 
-This page shows how to launch short-lived VMs for one-shot tasks via the API. The CLI can also act as a client to the API during testing.
+If your product needs to run background jobs, cron-style tasks, or headless AI agents in isolated environments, you can launch microVMs on demand through the API and tear them down when the work is done. The CLI can also act as a client to the API during testing.
 
-Use-cases could include:
+Use-cases include:
 
-* Running an AI coding agent in a contained environment without risking your whole workstation
-* Starting on-demand IDEs for pull request development or review
-* Autoscaling Kubernetes nodes - added and removed on demand
-* Running a CI build or compiling untrusted customer code
-* Starting a temporary service such as a database for end to end testing
-* Cron jobs, batch jobs, and serverless functions
+* Background jobs and batch processing as part of a SaaS product
+* Running headless AI coding agents in isolated microVMs
+* Cron jobs, scheduled tasks, and serverless-style functions
+* CI builds or compiling untrusted customer code
+* On-demand IDEs for pull request development or review
 
-One-shot tasks are VMs that are launched on demand for a specific purposes. But there's no limit on the lifetime of these VMs, they can run for any period of time - be that 250ms to process a webhook, 48 hours to run some fine-tuning, or several weeks. Just bear in mind that if you shut down or close Slicer, they will also be shut down and destroyed.
+There is no limit on how long a task VM runs - it could be 250ms to process a webhook, 48 hours for fine-tuning, or several weeks. Ephemeral VMs are cleaned up when Slicer shuts down. Use `"persistent": true` if the VM needs to survive restarts.
 
 <iframe width="560" height="315" src="https://www.youtube.com/embed/5RjtVM4bvp0?si=2TPpSKn9YXFw_Nnt" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
 
@@ -32,7 +31,7 @@ If you don't have ZFS set up yet, you can simply replace the storage flags with
 Create `tasks.yaml` slicer config:
 
 ```bash
-slicer new buildkit \
+slicer new task \
   --cpu 1 \
   --ram 2 \
   --count 0 \
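
The create request for a task VM carries a small JSON body; the fields these docs document are `tags` and `persistent`. A sketch of building that body in Python (the helper name is hypothetical, and only documented fields are included):

```python
import json

def create_body(tags=None, persistent=False):
    """Build the JSON body for a VM create request, using only documented fields."""
    body = {}
    if tags:
        body["tags"] = tags
    if persistent:
        body["persistent"] = True
    return json.dumps(body)

print(create_body(tags=["job=123"], persistent=True))
# → {"tags": ["job=123"], "persistent": true}
```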
Lines changed: 163 additions & 0 deletions
@@ -0,0 +1,163 @@
+# Instance Per Tenant
+
+For stronger isolation, run a separate Slicer daemon per tenant. Each gets its own UNIX socket, its own network range, and its own VM namespace. Tenant A cannot see, manage, or reach tenant B's VMs.
+
+```
+┌───────────────────────────────────────────────────────┐
+│                   Your Application                    │
+│                                                       │
+│  Tenant A request ──► /run/slicer/a3cf.sock           │
+│  Tenant B request ──► /run/slicer/e7d1.sock           │
+└───────────┬───────────────────────────┬───────────────┘
+            ▼                           ▼
+ ┌─────────────────────┐     ┌─────────────────────┐
+ │   Slicer (a3cf)     │     │   Slicer (e7d1)     │
+ │  169.254.100.0/22   │     │  169.254.104.0/22   │
+ │                     │     │                     │
+ │ ┌────────┐┌────────┐│     │ ┌────────┐┌────────┐│
+ │ │ a3cf-1 ││ a3cf-2 ││     │ │ e7d1-1 ││ e7d1-2 ││
+ │ └────────┘└────────┘│     │ └────────┘└────────┘│
+ └─────────────────────┘     └─────────────────────┘
+```
+
+## When to use this
+
+Use a separate instance per tenant when:
+
+* You need API-level isolation - one tenant's requests cannot access another's VMs
+* You need network isolation between tenants
+* You want independent failure domains - one tenant's daemon crashing does not affect others
+* Compliance or security requirements demand full separation
+
+## Configuration
+
+Generate a config per tenant with isolated networking, a UNIX socket, and a non-overlapping IP range. Use [isolated mode networking](/reference/networking/#isolated-mode-networking) so VMs from different tenants cannot communicate.
+
+Tenant A:
+
+```bash
+slicer new a3cf \
+  --net=isolated \
+  --isolated-range 169.254.100.0/22 \
+  --socket /run/slicer/a3cf.sock \
+  --count=0 \
+  --graceful-shutdown=false \
+  --drop 192.168.1.0/24 \
+  > tenant-a.yaml
+```
+
+This produces:
+
+```yaml
+config:
+  host_groups:
+  - name: a3cf
+    storage: image
+    storage_size: 25G
+    count: 0
+    vcpu: 2
+    ram_gb: 4
+    network:
+      mode: "isolated"
+      range: "169.254.100.0/22"
+      drop: ["192.168.1.0/24"]
+      allow: ["0.0.0.0/0"]
+    image: "ghcr.io/openfaasltd/slicer-systemd:6.1.90-x86_64-latest"
+    hypervisor: firecracker
+    graceful_shutdown: false
+  api:
+    bind_address: "/run/slicer/a3cf.sock"
+```
+
+Tenant B:
+
+```bash
+slicer new e7d1 \
+  --net=isolated \
+  --isolated-range 169.254.104.0/22 \
+  --socket /run/slicer/e7d1.sock \
+  --count=0 \
+  --graceful-shutdown=false \
+  --drop 192.168.1.0/24 \
+  > tenant-b.yaml
+```
+
+```yaml
+config:
+  host_groups:
+  - name: e7d1
+    storage: image
+    storage_size: 25G
+    count: 0
+    vcpu: 2
+    ram_gb: 4
+    network:
+      mode: "isolated"
+      range: "169.254.104.0/22"
+      drop: ["192.168.1.0/24"]
+      allow: ["0.0.0.0/0"]
+    image: "ghcr.io/openfaasltd/slicer-systemd:6.1.90-x86_64-latest"
+    hypervisor: firecracker
+    graceful_shutdown: false
+  api:
+    bind_address: "/run/slicer/e7d1.sock"
+```
+
+Each `/22` range provides 256 usable VM slots. Use non-overlapping ranges when running multiple daemons on the same host (e.g. `169.254.100.0/22`, `169.254.104.0/22`, `169.254.108.0/22`).
+
+In isolated mode, each VM gets its own network namespace. VMs cannot communicate with each other, with the host, or with the LAN. The `drop` list blocks specific CIDRs. Auth is disabled by default for UNIX sockets since access is controlled by filesystem permissions.
+
+## Start each daemon
+
+Start each daemon in its own terminal or tmux window:
+
+```bash
+sudo slicer up tenant-a.yaml
+```
+
+```bash
+sudo slicer up tenant-b.yaml
+```
+
+Each daemon manages its own VMs. Your application routes requests to the correct socket based on which tenant is making the request.
+
+For production, run each daemon as a systemd service - one unit per tenant. See [running in the background](/getting-started/daemon/) for setup.
+
+## API isolation
+
+Each daemon has its own VM namespace. Listing nodes on tenant A's socket returns only tenant A's VMs:
+
+```bash
+TOKEN=$(sudo cat /var/lib/slicer/auth/token)
+
+# Tenant A: create a VM
+sudo curl -sf --unix-socket /run/slicer/a3cf.sock \
+  -H "Authorization: Bearer $TOKEN" \
+  -H "Content-Type: application/json" \
+  -X POST http://localhost/hostgroup/a3cf/nodes \
+  -d '{"tags":["user=alice","job=123"]}'
+
+# Tenant B: create a VM
+sudo curl -sf --unix-socket /run/slicer/e7d1.sock \
+  -H "Authorization: Bearer $TOKEN" \
+  -H "Content-Type: application/json" \
+  -X POST http://localhost/hostgroup/e7d1/nodes \
+  -d '{"tags":["user=bob","job=456"]}'
+
+# Tenant A sees only its own VMs
+sudo curl -sf --unix-socket /run/slicer/a3cf.sock \
+  -H "Authorization: Bearer $TOKEN" http://localhost/nodes
+```
+
+```json
+[{"hostname":"a3cf-1","hostgroup":"a3cf","ip":"169.254.100.2",
+  "tags":["user=alice","job=123"],"status":"Running"}]
+```
+
+Tenant A cannot manage, exec into, or copy files to tenant B's VMs through its socket.
+
+## See also
+
+* [Single Slicer instance](/platform/single-instance/) - simpler deployment for trusted environments
+* [Networking](/reference/networking/) - CIDR configuration and bridge setup
+* [REST API reference](/reference/api/) - full endpoint documentation
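
The CIDR arithmetic behind these per-tenant ranges can be checked with Python's `ipaddress` module. Note that a `/22` spans 1024 addresses; the 256-slot figure implies roughly 4 addresses per VM, which is an assumption about Slicer's internal allocation, not something stated in these docs:

```python
import ipaddress

# Per-tenant ranges suggested above for multiple daemons on one host
ranges = ["169.254.100.0/22", "169.254.104.0/22", "169.254.108.0/22"]
nets = [ipaddress.ip_network(r) for r in ranges]

# A /22 spans 1024 addresses; at an assumed 4 addresses per VM, 256 slots
slots = nets[0].num_addresses // 4

# Tenant daemons must not share address space
overlap = any(a.overlaps(b) for i, a in enumerate(nets) for b in nets[i + 1:])
print(slots, overlap)  # → 256 False
```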

docs/platform/overview.md

Lines changed: 41 additions & 0 deletions
@@ -4,6 +4,8 @@ Slicer gives you on-demand Linux VMs through a REST API and Go SDK. You can crea
 
 This section assumes [Slicer for Linux](/getting-started/install/), but many of the REST API examples will also work on [Slicer for Mac](/mac/overview/) if you use the `sbox` host group.
 
+By installing and starting Slicer, you agree to the [End User License Agreement (EULA)](https://slicervm.com/eula/).
+
 ## Self-hosted, real microVMs
 
 Slicer runs on your hardware. Every VM is a real microVM with its own kernel - not a container dressed up as one. Unlike hosted platforms like Modal, Daytona, or Fly, there are no artificial timeouts, no per-second metering, no rate limits on the API, and no mandatory scale-to-zero. Your VMs run for as long as you need them, using as many resources as the host has available.
@@ -21,6 +23,8 @@ Each VM runs a real Linux kernel with systemd. It is not a container.
 
 VMs launched through the API are called **sandboxes**. They are ephemeral by default: shut down Slicer and they are cleaned up automatically. Sandboxes can also be configured to persist, which is how multi-tenant platforms and the [Kubernetes autoscaler](/examples/autoscaling-k3s/) work - VMs stay running until you explicitly delete them.
 
+To create a persistent sandbox, pass `"persistent": true` in the create request (or `--persistent` via the CLI). Persistent VMs survive Slicer restarts and their disk is retained until you delete the VM.
+
 ## Common use-cases
 
 * **Code execution**: run untrusted or user-submitted code in a VM that gets destroyed afterwards
@@ -40,6 +44,43 @@ VMs launched through the API are called **sandboxes**. They are ephemeral by def
 
 The host group sets defaults (CPU, RAM, image, storage backend). Individual VMs can override CPU and RAM at creation time.
 
+VM hostnames are auto-assigned with an incrementing integer based on the host group name (e.g. `sandbox-1`, `sandbox-2`, `sandbox-3`). If you need to correlate a VM with a user, job, or request in your system, pass `tags` when creating the VM. Tags are returned in list responses and can be used to look up VMs later.
+
+## Reference architecture
+
+The difference between "Slicer for Linux" and "Slicer Platform" is who is driving. With Slicer for Linux, a person runs CLI commands. With Slicer Platform, your application drives Slicer through the API.
+
+```
+┌──────────────┐       ┌──────────────┐       ┌──────────────┐
+│  Your Users  │ ────► │   Your App   │ ────► │  Slicer API  │
+└──────────────┘       └──────────────┘       └──────┬───────┘
+                                                     │
+                                      create / exec / cp / delete
+                                                     │
+                                                     ▼
+                                              ┌──────────────┐
+                                              │   microVMs   │
+                                              └──────────────┘
+```
+
+There are two deployment models depending on your isolation requirements:
+
+* [Single Slicer instance](/platform/single-instance/) - one daemon, all tenants share it, use tags to track ownership
+* [Instance per tenant](/platform/instance-per-tenant/) - one daemon per tenant with its own UNIX socket and isolated network
+
+A typical request flow for a code execution platform:
+
+1. User submits code through your frontend
+2. Your backend calls `POST /hostgroup/sandbox/nodes` to create a VM
+3. Poll `GET /vm/HOSTNAME/health` until the agent is ready
+4. `POST /vm/HOSTNAME/cp` to copy the code into the VM
+5. `POST /vm/HOSTNAME/exec` to run it
+6. `GET /vm/HOSTNAME/cp` to copy results back
+7. `DELETE /hostgroup/sandbox/nodes/HOSTNAME` to clean up
+8. Return the result to the user
+
+The entire flow takes seconds. Each user gets a dedicated VM with its own kernel - no shared state, no container escapes.
+
 ## API surfaces
 
 Slicer exposes three ways to manage VMs programmatically:
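
The request flow in this commit's new "Reference architecture" section maps onto a handful of REST endpoints. A sketch of building those requests with Python's standard library (requests are constructed but not sent here; the base address and wrapper functions are assumptions for illustration, not the official Go SDK):

```python
from urllib import request

BASE = "http://127.0.0.1:8080"  # assumed local Slicer API address

def create_vm(hostgroup):
    # Step 2: POST /hostgroup/NAME/nodes
    return request.Request(f"{BASE}/hostgroup/{hostgroup}/nodes",
                           data=b"{}", method="POST")

def health(hostname):
    # Step 3: GET /vm/HOSTNAME/health
    return request.Request(f"{BASE}/vm/{hostname}/health")

def delete_vm(hostgroup, hostname):
    # Step 7: DELETE /hostgroup/NAME/nodes/HOSTNAME
    return request.Request(f"{BASE}/hostgroup/{hostgroup}/nodes/{hostname}",
                           method="DELETE")

req = create_vm("sandbox")
print(req.get_method(), req.full_url)
# → POST http://127.0.0.1:8080/hostgroup/sandbox/nodes
```

In a real client you would pass each request to `urllib.request.urlopen` with the bearer token header added, and poll the health endpoint until the agent reports ready.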

docs/platform/quickstart.md

Lines changed: 24 additions & 0 deletions
@@ -67,6 +67,16 @@ To override the host group defaults for CPU or RAM:
 }
 ```
 
+Hostnames are auto-assigned (`sandbox-1`, `sandbox-2`, etc.). To track which VM belongs to which user or job in your system, pass `tags`:
+
+```json
+{
+  "tags": ["user=alice", "job=convert-video-123"]
+}
+```
+
+Tags are returned when you list VMs, so your application can match VMs back to its own records.
+
 ## Wait for the agent
 
 The guest agent needs to start before you can run commands or copy files. Poll the health endpoint:
@@ -164,6 +174,20 @@ curl -sf -H "Authorization: Bearer $TOKEN" \
 
 Poll `/vm/HOSTNAME/health` and check `userdata_ran` to know when the script has finished.
 
+## Create a persistent sandbox
+
+By default, sandboxes are ephemeral - they are destroyed when Slicer shuts down. To create a VM that survives restarts and retains its disk, pass `"persistent": true`:
+
+```bash
+curl -sf -H "Authorization: Bearer $TOKEN" \
+  -H "Content-Type: application/json" \
+  -X POST \
+  http://127.0.0.1:8080/hostgroup/sandbox/nodes \
+  -d '{"persistent": true}'
+```
+
+The `persistent` field is returned in list responses so your application can distinguish between ephemeral and persistent VMs.
+
 ## Delete the sandbox
 
 ```bash
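
Since tags come back in list responses, matching VMs to your own records is a simple filter. A sketch in Python (the record shape mirrors the list response shown in these docs; the helper itself is hypothetical):

```python
def find_by_tag(nodes, key, value):
    """Return VM records whose tags contain the pair key=value."""
    want = f"{key}={value}"
    return [n for n in nodes if want in n.get("tags", [])]

# Shaped like the /nodes list response in the docs
nodes = [
    {"hostname": "sandbox-1", "tags": ["user=alice", "job=convert-video-123"]},
    {"hostname": "sandbox-2", "tags": ["user=bob"]},
]
print([n["hostname"] for n in find_by_tag(nodes, "user", "alice")])  # → ['sandbox-1']
```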
