Coroutine at Operating System Level
- Author: Bowen Y
Processes
- Separate instances of programs.
- Independent execution units.
- Heavyweight: more memory and higher creation/scheduling overhead.
- Isolated memory space.
- Require inter-process communication (IPC) to exchange data; a minimal sketch follows this list.
- Good when strong isolation or parallelism is needed.
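To make the IPC point concrete, here is a minimal Python sketch (the `square` worker and the four-item workload are invented for illustration, not from the original text). Each child process gets its own address space, so results have to travel back through an explicit channel such as a `multiprocessing.Queue`:

```python
# A child process has isolated memory, so results must come back over an
# explicit IPC channel (here, a multiprocessing.Queue).
from multiprocessing import Process, Queue

def square(n, out):
    # Runs in a separate process with its own address space.
    out.put(n * n)

if __name__ == "__main__":
    out = Queue()
    workers = [Process(target=square, args=(n, out)) for n in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    results = [out.get() for _ in range(4)]
    print(sorted(results))  # [0, 1, 4, 9]; arrival order is not guaranteed
```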
Threads
- Units of execution within a process.
- Preemptive multitasking: the operating system’s scheduler decides when a thread is paused and resumed.
- Lighter weight than processes, with less overhead.
- Share the owning process’s memory space.
- Communicate directly through shared data; a short sketch follows this list.
- Good for concurrency over shared data.
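A hedged sketch of the shared-memory point (the counter and worker function are made up for illustration): every thread reads and writes the same `counter` directly, and because the OS can preempt a thread in the middle of an update, a lock is needed around the increment:

```python
# Threads share the process's memory, so they can communicate through
# ordinary variables; a Lock guards against the preemptive scheduler
# interleaving the read-modify-write.
import threading

counter = 0
lock = threading.Lock()

def bump(times):
    global counter
    for _ in range(times):
        with lock:          # protects the shared read-modify-write
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: every thread updated the same shared variable
```

Without the lock, increments can be lost when the scheduler interleaves the read-modify-write, which is exactly the price of preemptive scheduling over shared state.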
Coroutines
- Cooperative units of execution within a thread.
- Cooperative multitasking: they decide when to yield control back to the scheduler or event loop voluntarily.
- Very lightweight.
- Share the host thread’s memory space.
- Efficient suspension and resumption; a minimal asyncio sketch follows this list.
- Good for asynchronous tasks and I/O-bound operations.
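A minimal asyncio sketch of suspension and resumption (the worker names and delays are arbitrary): each `await` is an explicit point where the coroutine hands control back to the event loop so another coroutine can run on the same thread:

```python
# Two coroutines interleave on one thread; each "await asyncio.sleep(...)"
# suspends the coroutine and lets the event loop resume something else.
import asyncio

async def worker(name, delay):
    for i in range(3):
        print(f"{name}: step {i}")
        await asyncio.sleep(delay)   # explicit suspension point

async def main():
    await asyncio.gather(worker("a", 0.1), worker("b", 0.1))

asyncio.run(main())
```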
Coroutines vs Threads
— Concurrency Model:
- Coroutines: Coroutines are cooperative, meaning they decide when to yield control back to the scheduler or event loop voluntarily. They explicitly define the points at which they can be paused and resumed using constructs like await (in languages like Python) or similar keywords.
- Threads: Threads are preemptive, which means the operating system’s scheduler determines when a thread is paused and resumed. Threads can be interrupted at any time, and the scheduler switches between them based on a predefined time slice (time-sharing). The sketch below contrasts the two behaviours.
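A deliberately artificial sketch of the difference (the `greedy`/`polite` names are invented for illustration): a coroutine only gives up control at an `await`, so a busy loop with no await monopolizes the event loop, whereas an OS thread doing the same busy loop would simply be preempted:

```python
# A coroutine with no await never yields, so "polite" below cannot even
# start until "greedy" finishes; OS threads would have been preempted.
import asyncio
import time

async def greedy():
    t0 = time.monotonic()
    while time.monotonic() - t0 < 0.5:
        pass                      # busy loop, no await: never yields control

async def polite():
    print("polite: started")
    await asyncio.sleep(0)        # a cooperative yield point
    print("polite: resumed")

async def main():
    await asyncio.gather(greedy(), polite())

asyncio.run(main())
```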
— Blocking vs. Non-blocking:
- Coroutines: Coroutines are generally non-blocking by design. When a coroutine encounters a blocking operation (e.g., I/O), it yields control back to the event loop, allowing other coroutines to execute in the meantime. This lets coroutines handle I/O-bound tasks efficiently without creating many threads; see the sketch after this comparison.
- Threads: Threads can block due to various reasons, such as waiting for I/O, synchronization primitives, or other resource locks. Blocking threads can lead to inefficient resource utilization if not managed carefully.
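A sketch of that non-blocking pattern, with `asyncio.sleep` standing in for real network or disk I/O (the count of 100 and the 0.2 s delay are arbitrary): all the waits overlap on a single thread because each coroutine yields while it waits:

```python
# 100 simulated I/O waits complete in roughly the time of one, because
# every coroutine yields to the event loop while "waiting".
import asyncio
import time

async def fake_io(i):
    await asyncio.sleep(0.2)      # stand-in for a network or disk wait
    return i

async def main():
    t0 = time.monotonic()
    results = await asyncio.gather(*(fake_io(i) for i in range(100)))
    print(f"{len(results)} waits finished in {time.monotonic() - t0:.2f}s on one thread")

asyncio.run(main())
```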
— Context Switching:
- Coroutines: Context switching between coroutines is typically less expensive than context switching between threads. This is because coroutines are explicitly designed to be paused and resumed at defined points, and switching between them often involves less overhead.
- Threads: Context switching between threads can be more expensive due to the need to save and restore the entire thread’s execution context, including its stack and registers, typically with the kernel’s scheduler involved. A rough timing sketch follows.
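A rough, machine-dependent micro-benchmark sketch (not from the original text): it times resuming a plain Python generator, a user-space coroutine-style switch, against a thread-to-thread handoff forced through two Events. The handoff is only a loose proxy for an OS context switch, but the gap it exposes is usually large:

```python
# Compare the cost of resuming a generator (a coroutine-style switch in
# user space) with a thread-to-thread handoff via two Events.
# Absolute numbers vary widely by machine.
import threading
import time

N = 10_000

# --- coroutine-style switch: suspend/resume a generator N times ---
def ticker():
    while True:
        yield

gen = ticker()
t0 = time.perf_counter()
for _ in range(N):
    next(gen)                     # resume, run to the next yield, suspend
coro_time = time.perf_counter() - t0

# --- thread switch: lock-step ping-pong between two threads ---
ping, pong = threading.Event(), threading.Event()

def partner():
    for _ in range(N):
        ping.wait(); ping.clear()
        pong.set()

t = threading.Thread(target=partner)
t.start()
t0 = time.perf_counter()
for _ in range(N):
    ping.set()
    pong.wait(); pong.clear()
thread_time = time.perf_counter() - t0
t.join()

print(f"{N} generator resumes: {coro_time:.4f}s")
print(f"{N} thread handoffs:   {thread_time:.4f}s")
```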
— Parallelism:
- Coroutines: Coroutines don’t inherently provide parallelism, as they often run within a single thread. However, they can still achieve concurrency by interleaving the execution of multiple tasks.
- Threads: Threads can achieve true parallelism when executed on multi-core processors, as multiple threads can run simultaneously on different cores. This makes threads suitable for CPU-bound tasks that can be divided into parallel subtasks; a sketch of that fan-out pattern follows.
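A hedged Python sketch of dividing a CPU-bound job into parallel subtasks (the chunk sizes and the squared-sum workload are invented for illustration). One Python-specific caveat: CPython’s GIL keeps pure-Python threads from running bytecode in parallel, so the sketch uses a process pool; in runtimes without that constraint the same fan-out/fan-in pattern works directly with threads:

```python
# Divide a CPU-bound sum into chunks and run them on separate cores.
# A process pool is used because CPython's GIL prevents pure-Python
# threads from executing bytecode in parallel; the fan-out/fan-in
# pattern itself is the same one threads use in other languages.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds               # sum of squares over [lo, hi)
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    chunks = [(i * 250_000, (i + 1) * 250_000) for i in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same result as the sequential sum, computed on 4 workers
```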
— Resource Consumption:
- Coroutines: Coroutines generally consume fewer system resources compared to threads, as they can be managed within a single thread. This makes them more suitable for scenarios with a large number of concurrent tasks.
- Threads: Threads consume more resources due to the overhead of maintaining separate stacks and execution contexts for each thread.
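To illustrate the resource gap (the count of 1,000 is arbitrary, and absolute numbers depend on the machine): a rough sketch that times starting and joining 1,000 OS threads against creating and awaiting 1,000 trivial asyncio tasks. Each thread needs its own stack and kernel-level bookkeeping, while each coroutine is just a small Python object with a suspended frame, so the thread-based version is typically much slower:

```python
# Rough sketch of per-unit overhead: starting and joining 1,000 OS threads
# vs creating and awaiting 1,000 asyncio tasks. Numbers are machine-dependent.
import asyncio
import threading
import time

N = 1_000

def noop():
    pass

t0 = time.perf_counter()
threads = [threading.Thread(target=noop) for _ in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
thread_time = time.perf_counter() - t0

async def anoop():
    pass

async def run_tasks():
    await asyncio.gather(*(anoop() for _ in range(N)))

t0 = time.perf_counter()
asyncio.run(run_tasks())
task_time = time.perf_counter() - t0

print(f"{N} threads:    {thread_time:.4f}s")
print(f"{N} coroutines: {task_time:.4f}s")
```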