hosh 19 hours ago [-]
1. Is each of the JS processes running in its own process, with its own mailbox? (I assume from the description that each runtime instance is its own process)
2. can the BEAM scheduler pre-empt the JS processes?
3. How is memory garbage collected? Do the JS processes garbage collect for each individual process?
4. Are values within JS immutable?
5. If they are not immutable, is there a risk of memory errors? And if there is a memory error, would it crash the JS process without crashing the rest of the system?
dannote 18 hours ago [-]
1. Yes. Each runtime is a GenServer (= own process + mailbox). There's also a lighter-weight Context mode where many JS contexts share one OS thread via a ContextPool, but each context still maps 1:1 to a BEAM process.
2. No. JS runs on a dedicated OS thread, outside the BEAM scheduler. But there's an interrupt handler (JS_SetInterruptHandler) that checks a deadline on every JS opcode boundary: pass timeout: 1000 to eval and it interrupts after 1s, and the runtime stays usable. For contexts there's also max_reductions: QuickJS-NG counts JS operations and interrupts when the budget runs out, the closest analog to BEAM reductions.
3. QuickJS-NG uses refcounting with cycle detection. Each runtime/context has its own GC — one collecting doesn't touch another. When a Runtime GenServer terminates, JS_FreeContext + JS_FreeRuntime release everything.
4. No, standard JS mutability. But the JS↔Erlang boundary copies values — no shared mutable state across that boundary.
5. QuickJS-NG enforces JS_SetMemoryLimit per-runtime (default 256 MB) and JS_SetContextMemoryLimit per-context. Exceeding the limit raises a JS exception, not a segfault. It propagates as {:error, ...} to the caller. Since each runtime is a supervised GenServer, the supervisor restarts it. There are tests for OOM in one context not crashing the pool, and one runtime crashing not affecting siblings.
zkldi 2 hours ago [-]
All of these replies are AI slop.
jbpd924 20 hours ago [-]
Interesting!! I've been playing around with QuickJS lately and use Elixir at work.
I'm interested to hear about your sandboxing approach for running untrusted JS code. So you are setting a memory/reduction limit on the process, which is 100% a good idea. What other defense-in-depth strategies are you using? Possible support for seccomp in the future?
dannote 18 hours ago [-]
Layers right now:
— Memory limits: JS_SetMemoryLimit per-runtime (256 MB default), JS_SetContextMemoryLimit per-context. Exceeding → JS exception, not a crash.
— Execution limits: interrupt handler checks a nanosecond deadline every opcode. For contexts, max_reductions caps JS operations independently of wall-clock time.
— API surface: apis: false gives bare QuickJS — no fetch, no fs, no DOM, no I/O. You control exactly which Elixir functions JS can call via the handlers map. JS cannot call arbitrary Elixir code.
— Conversion limits: max_convert_depth (32) and max_convert_nodes (10k) prevent pathological objects from blowing up during JS↔BEAM conversion.
— Process isolation: separate OS thread, separate QuickJS heap per runtime.
No seccomp — QuickJS runs in-process so seccomp would restrict the entire BEAM. The sandbox boundary is QuickJS-NG's memory-safe interpreter (no JIT, no raw pointer access from JS) plus the API surface control above.
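The conversion limits above can be pictured as a budgeted tree walk. This plain-C toy is illustrative only (the `Node` struct and error codes are invented here; just the 32-deep / 10k-node numbers come from the comment above): exceeding either budget surfaces as an error code the caller can handle, instead of unbounded recursion or allocation.

```c
#include <stddef.h>

/* Illustrative sketch of the max_convert_depth / max_convert_nodes idea.
 * The data model is a toy; the limits match the defaults quoted above. */
enum { MAX_DEPTH = 32, MAX_NODES = 10000 };
enum conv_result { CONV_OK = 0, CONV_TOO_DEEP = -1, CONV_TOO_MANY = -2 };

typedef struct Node {
    struct Node *children;
    size_t n_children;
} Node;

static int convert(const Node *v, int depth, int *nodes_left) {
    if (depth > MAX_DEPTH)
        return CONV_TOO_DEEP;   /* refuse pathological nesting */
    if (--*nodes_left < 0)
        return CONV_TOO_MANY;   /* refuse pathological size */
    for (size_t i = 0; i < v->n_children; i++) {
        int rc = convert(&v->children[i], depth + 1, nodes_left);
        if (rc != CONV_OK)
            return rc;          /* propagate as an error, not a crash */
    }
    return CONV_OK;
}
```

Checking depth before recursing is what matters: a deliberately self-nested object fails fast with a bounded C stack, which is the same reason a fixed node budget keeps a huge flat array from exhausting memory mid-conversion.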
waffleophagus 20 hours ago [-]
Running JS on the Beam VM, all written in C. I don't know if this is just cursed, or absolutely brilliant, either way I love it and will be following closely. Will definitely have to play with it.
dnautics 17 hours ago [-]
did you notice that the middleware between C and BEAM is in Zig! (disclaimer: self-promotion)
kvirani 16 hours ago [-]
Whoa! you have quite the profile.
steffs 12 hours ago [-]
The no-JSON-boundary piece is the part that stands out to me. Most polyglot runtimes spend a lot of cycles serializing and deserializing at the language boundary, and that cost compounds fast when you are doing SSR or tight per-connection loops. Having Erlang read the native DOM directly without a string rendering step is a real architectural win, not just a convenience. Curious how you handle the supervision semantics when a JS runtime crashes.
fouc 9 hours ago [-]
"is a real architectural win, not just a convenience." AI use spotted
dnautics 19 hours ago [-]
love this! a while back i noodled around with this idea, but didn't get that far: https://github.com/ityonemo/yavascript
glad to see someone do a fuller implementation!
This is very interesting to me because we have accumulated a few node packages containing logic that services simply import. So in theory I could now use those node packages in Elixir?
dannote 18 hours ago [-]
Yes, if the packages are pure JS logic (no native C++ addons, no Node-specific I/O like child_process or net). The script option auto-resolves imports from node_modules/ and bundles via OXC. Node compat APIs (process, path, fs, os, Buffer) are available with apis: [:browser, :node]. For packages with native .node addons, there's load_addon/3 which supports N-API.
https://github.com/lpgauth/quicksand