In this post, we explore cooperative multitasking and the async/await feature of Rust. We take a detailed look at how async/await works in Rust, including the design of the Future trait, the state machine transformation, and pinning. We then add basic support for async/await to our kernel by creating an asynchronous keyboard task and a basic executor.

This blog is openly developed on GitHub. If you have any problems or questions, please open an issue there. You can also leave comments at the bottom. The complete source code for this post can be found in the post- branch.
Multitasking

A fundamental feature of most operating systems is multitasking, which is the ability to execute multiple tasks concurrently. For example, you probably have other programs open while looking at this post, such as a text editor or a terminal window. Even if you have only a single browser window open, there are probably various background tasks for managing your desktop windows, checking for updates, or indexing files. While it seems like all tasks run in parallel, only a single task can be executed on a CPU core at a time. To create the illusion that the tasks run in parallel, the operating system rapidly switches between active tasks so that each one can make a bit of progress. Since computers are fast, we don't notice these switches most of the time.
There are two forms of multitasking: Cooperative multitasking requires tasks to regularly give up control of the CPU so that other tasks can make progress. Preemptive multitasking uses operating system functionality to switch threads at arbitrary points in time by forcibly pausing them. In the following, we will explore the two forms of multitasking in more detail and discuss their respective advantages and drawbacks.

Preemptive Multitasking

The idea behind preemptive multitasking is that the operating system controls when to switch tasks. For that, it utilizes the fact that it regains control of the CPU on each interrupt. This makes it possible to switch tasks whenever new input is available to the system. For example, it would be possible to switch tasks when the mouse is moved or a network packet arrives. The operating system can also determine the exact time that a task is allowed to run by configuring a hardware timer to send an interrupt after that time.

The following graphic illustrates the task switching process on a hardware interrupt: In the first row, the CPU is executing task A1 of program A. All other tasks are paused. In the second row, a hardware interrupt arrives at the CPU. As described in the post on Hardware Interrupts
, the CPU immediately stops the execution of task A1 and jumps to the interrupt handler defined in the interrupt descriptor table (IDT). Through this interrupt handler, the operating system now has control of the CPU again, which allows it to switch to task B1 instead of continuing task A1.

Saving State

Since tasks are interrupted at arbitrary points in time, they might be in the middle of some calculations. In order to be able to resume them later, the operating system must back up the whole state of the task, including its call stack and the values of all CPU registers. This process is called a context switch.

Since the call stack can be very large, the operating system typically sets up a separate call stack for each task instead of backing up the call stack content on each task switch. Such a task with its own stack is called a thread of execution, or thread for short. By using a separate stack for each task, only the register contents need to be saved on a context switch (including the program counter and stack pointer). This approach minimizes the performance overhead of a context switch, which is very important since context switches often occur up to 100 times per second.

Discussion
Futures

A future represents a value that might not be available yet. This could be, for example, an integer that is computed by another task or a file that is downloaded from the network. Instead of waiting until the value is available, futures make it possible to continue execution until the value is needed.
Example

The concept of futures is best illustrated with a small example: This sequence diagram shows a main function that reads a file from the file system and then calls a function foo. This process is repeated two times: once with a synchronous read_file call and once with an asynchronous async_read_file call. With the asynchronous async_read_file call, the file system directly returns a future and loads the file asynchronously in the background. This allows the main function to call foo much earlier, which then runs in parallel with the file load. In this example, the file load even finishes before foo returns, so main can directly work with the file without further waiting after foo returns.

Futures in Rust

In Rust, futures are represented by the Future trait, which looks like this:

```rust
pub trait Future {
    type Output;
    fn poll(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Self::Output>;
}
```

The associated Output type specifies the type of the asynchronous value. The poll method allows checking whether the value is already available. It returns a Poll enum, which looks like this:

```rust
pub enum Poll<T> {
    Ready(T),
    Pending,
}
```

When the value is already available (e.g., the file was fully read from disk), it is returned wrapped in the Ready variant. Otherwise, the Pending variant is returned, which signals the caller that the value is not yet available.

The poll method takes two arguments: self: Pin<&mut Self> and cx: &mut Context. The former behaves like a normal &mut self reference, with the difference that the Self value is pinned to its memory location. Understanding Pin and why it is needed is difficult without understanding how async/await works first. We will therefore explain it later in this post. The purpose of the cx: &mut Context parameter is to pass a Waker instance to the asynchronous task, e.g., the file system load. This Waker allows the asynchronous task to signal that it (or a part of it) is finished, e.g., that the file was loaded from disk.
Since the main task knows that it will be notified when the Future is ready, it does not need to call poll over and over again. We will explain this process in more detail later in this post when we implement our own waker type.

Working with Futures

We now know how futures are defined and understand the basic idea behind the poll method.