Processes, IPC, and Async Architectures
Deep dive into system processes, communication patterns, and async performance
Processes & Memory Management
A process is a running program with its own isolated memory space. The isolation is enforced by hardware: the MMU (Memory Management Unit, the chip that controls memory access) translates virtual addresses (the addresses programs see) into physical addresses (real RAM locations), preventing processes from reading or writing each other's memory.
Process = Program Code + Execution Context + Memory Space + System Resources
Virtual memory gives each process the illusion of owning all memory (0x00000000 - 0xFFFFFFFF), but the hardware maps these to different physical locations for isolation.
Because processes are isolated, they need explicit mechanisms to talk to each other. Four common ones:
- Event Based (message passing)
- Shared Memory (fastest, but unsynchronized access invites races and deadlocks)
- Sockets (OS abstraction that lets programs communicate locally or over a network; a combination of IP:port and TCP/UDP)
- Pipes (simple data streams like shell | operator)
Desktop App Architectures: Electron vs Wails
Electron uses multi-process architecture (main process + renderer processes) for security isolation, where the main process has system access and renderers are sandboxed web environments communicating via IPC (Inter-Process Communication - how separate processes talk).
Wails uses a single-process architecture with an embedded WebView (a browser engine component) and direct Go-to-JavaScript function binding, trading some security isolation for lower memory usage and simpler communication; within that one process, the UI and the Go backend run on separate threads.
Threading Models & Resource Costs
Traditional threading (one thread per connection) is expensive: each thread reserves ~8MB of stack (the typical Linux default, mostly virtual memory until touched) plus context switching (CPU overhead when switching between threads), so 10,000 connections reserve on the order of ~80GB of memory.
Threads vs processes: Threads share memory space within a process (lighter), while processes have isolated memory spaces (heavier but more secure).
IPC Mechanisms
Pipes are simple unidirectional data streams (like the shell | operator), while sockets are bidirectional network-style communication that can work locally or over networks.
Message passing in Electron uses event-driven IPC between main and renderer processes, similar to client-server communication but within the same application.
Async I/O Revolution
Async I/O eliminates waiting time, not work time - when a thread would block on I/O, the kernel notifies when data is ready via epoll/kqueue (kernel mechanisms that monitor multiple sockets), allowing one thread to handle thousands of connections.
The performance gain comes from resource utilization: async uses 4 threads + 32MB memory vs traditional 10,000 threads + 80GB memory for the same I/O-bound workload.
Event Loops & Non-Blocking Architecture
Event loops use kernel event notification (epoll/kqueue) to know exactly which sockets have data ready, eliminating the need for threads to sit idle waiting for I/O.
Async excels at I/O-bound work (90% waiting time) but provides no advantage for CPU-bound work (0% waiting time) - the performance formula is: Async Advantage = (Time Spent Waiting) / (Total Time).
The Async Misconception
Async doesn’t make individual requests faster - a 100ms request still takes 100ms, but async increases server throughput (requests handled per second) by letting one thread start many requests while others wait for I/O.
Production systems use hybrid approaches: async event loops for I/O orchestration + worker thread pools for CPU-intensive tasks to get benefits of both models.
Go’s Goroutine Model
Go hides async complexity by automatically transforming blocking-style code into async operations through syscall interception (Go runtime catches system calls), but this requires a massive runtime (500k+ lines) with garbage collection and work-stealing schedulers (algorithms that balance work across threads).
Go’s “simple” M:N threading (M goroutines : N OS threads - many lightweight goroutines mapped to fewer heavy OS threads) involves incredibly complex work-stealing algorithms, preemptive scheduling (forcibly interrupting running code), and automatic thread creation that other languages can’t easily adopt due to ecosystem constraints.
CPU vs I/O in Go
I/O operations in Go automatically park goroutines (put them to sleep) when they would block (via syscall interception), freeing the thread for other work, while CPU-intensive loops monopolize threads because they contain no syscalls (system calls like network/file operations) where Go can intervene.
Go's scheduler is largely cooperative for CPU work - it only creates new threads when existing ones block on syscalls, not for pure computation, so before Go 1.14 a CPU-heavy goroutine could starve other goroutines even with GOMAXPROCS (max OS threads running Go code) > 1; since Go 1.14 the runtime can asynchronously preempt long-running loops, though tight loops still can't use more cores than exist.
Goroutine Limitations
Calling CPU-heavy work in a goroutine often doesn’t help - it just moves the blocking from one place to another within the same thread, unless you have multiple independent CPU tasks that can run on separate cores.
Goroutines are excellent for I/O multiplexing (handling many network connections efficiently) but not magic for CPU work - they can't create CPU cores that don't exist, and on older Go versions (pre-1.14) tight loops required explicit yielding (runtime.Gosched() - manually giving up CPU time) for cooperative multitasking (voluntarily sharing CPU time).