
The Adventures of OS: Starting a Process

This is chapter 8 of a multi-part series on writing a RISC-V OS in Rust.

Table of Contents Chapter 7 → (Chapter 8) → Chapter 9

March 2020 (Patreon only)

March 2020 (Public)

Video & Reference Material

I have taught operating systems at my university, so I will link my notes from that course here regarding processes.

https://www.youtube.com/watch?v=eB3dkJ2tBK8

OS Course Notes: Processes

The notes above are for a general overview of processes as a concept. The OS we’re building here will probably do things differently. Most of that is because it’s written in Rust – insert jokes here.

Overview

Starting a process is what we’ve all been waiting for. The operating system’s job is essentially to support running processes. In this post, we will look at a process from the OS’s perspective as well as the CPU’s perspective.

We looked at the process memory in the last chapter, but some of that has been modified so that we have a resident memory space (on the heap). Also, I will show you how to go from kernel mode into user mode. Right now, we’ve erased supervisor mode, but we will fix that when we revisit system calls in order to support processes.

Process Structure

The process structure is more or less the same, but in terms of the CPU, we only care about the TrapFrame structure.

 
#[repr(C)]
#[derive(Clone, Copy)]
pub struct TrapFrame {
    pub regs:       [usize; 32], // 0 - 255
    pub fregs:      [usize; 32], // 256 - 511
    pub satp:       usize,       // 512 - 519
    pub trap_stack: *mut u8,     // 520
    pub hartid:     usize,       // 528
}
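Each process gets its own trap frame. As a hedged sketch (the actual constructor in the repository may be spelled differently), a zeroed frame can be built with a small const helper when a process is created:

impl TrapFrame {
    // Hypothetical helper: an all-zero trap frame for a brand-new process.
    // The trap_stack pointer gets filled in once the process' kernel
    // stack has been allocated.
    pub const fn zero() -> Self {
        TrapFrame {
            regs:       [0; 32],
            fregs:      [0; 32],
            satp:       0,
            trap_stack: core::ptr::null_mut(),
            hartid:     0,
        }
    }
}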
  

We won't be using all of these fields yet; for now, we only care about the register context (regs). When we take a trap, we store the registers of the currently executing process into the trap frame. This preserves the process, freezing it while we handle the trap. The tail of the trap vector, once the registers have been saved, looks like this:

 
csrr  a0, mepc
csrr  a1, mtval
csrr  a2, mcause
csrr  a3, mhartid
csrr  a4, mstatus
csrr  a5, mscratch
la    t0, KERNEL_STACK_END
ld    sp, 0(t0)
call  m_trap
  

In the trap vector, after we've saved the context, we start handing information over to the Rust trap handler, m_trap. These arguments must match the parameter order of m_trap in Rust. Finally, notice that we load KERNEL_STACK_END into the stack pointer. None of the registers have actually changed since we saved them (except a0-a5, t0, and now sp), but we need a kernel stack before we jump into Rust.
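For reference, here is a sketch of how the Rust side might be declared, with each parameter lining up with the registers loaded above; the exact signature in the repository may differ:

extern "C" fn m_trap(epc: usize,    // a0: mepc, where we trapped
                     tval: usize,   // a1: mtval
                     cause: usize,  // a2: mcause
                     hart: usize,   // a3: mhartid
                     status: usize, // a4: mstatus
                     frame: *mut TrapFrame) // a5: mscratch (saved context)
                     -> usize
{
    // Dispatch on `cause` and return the program counter to resume at.
    epc
}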

Scheduling

I have added a very simple scheduler that just rotates the process list and then checks the front. There is no way to change process states, yet, but whenever we find a running process, we grab its data and then place it on the CPU.

 
pub fn schedule() -> (usize, usize, usize) {
  unsafe {
    if let Some(mut pl) = PROCESS_LIST.take() {
      pl.rotate_left(1);
      let mut frame_addr: usize = 0;
      let mut mepc: usize = 0;
      let mut satp: usize = 0;
      let mut pid: usize = 0;
      if let Some(prc) = pl.front() {
        match prc.get_state() {
          ProcessState::Running => {
            frame_addr = prc.get_frame_address();
            mepc = prc.get_program_counter();
            satp = prc.get_table_address() >> 12;
            pid = prc.get_pid() as usize;
          },
          ProcessState::Sleeping => {
          },
          _ => {},
        }
      }
      println!("Scheduling {}", pid);
      PROCESS_LIST.replace(pl);
      if frame_addr != 0 {
        // MODE 8 is the 39-bit virtual address (Sv39) MMU.
        // I'm using the PID as the address space identifier to hopefully
        // help with (not?) flushing the TLB whenever we switch processes.
        if satp != 0 {
          return (frame_addr, mepc, (8 << 60) | (pid << 44) | satp);
        }
        else {
          return (frame_addr, mepc, 0);
        }
      }
    }
  }
  (0, 0, 0)
}

This is not a good scheduler, but it does what we need. All the scheduler returns is the information necessary to run a process. Whenever we execute a context switch, we consult the scheduler to get a new process. It IS possible to get the very same process back.

You will notice that if we don't find a process, we return (0, 0, 0). This is actually an error state for this OS, since we are going to require at least one process (init). Eventually init will be able to yield, but for now it just loops and periodically prints a message to the screen via a system call.

 
/// We will eventually move this function out of here, but its
/// job is just to take a slot in the process list.
fn init_process() {
  // We can't do much here until we have system calls because
  // we're running in User space.
  let mut i: usize = 0;
  loop {
    i += 1;
    if i > 70_000_000 {
      unsafe {
        make_syscall(1);
      }
      i = 0;
    }
  }
}
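The make_syscall function itself belongs to the system-call machinery from the earlier chapter. As a rough, hypothetical sketch of the idea (the register used for the call number and the clobber are assumptions here, not necessarily what the repository does), it places the call number in a register and executes ecall, which traps into the kernel:

unsafe fn make_syscall(sysno: usize) {
    // Assumption: the call number travels in a7, following the common
    // RISC-V convention. `ecall` raises an environment-call exception,
    // which our trap handler then services. a0 is treated as clobbered
    // in case the kernel writes a return value there.
    core::arch::asm!("ecall", in("a7") sysno, lateout("a0") _, options(nostack));
}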
  

Switch To User

 
.global switch_to_user
switch_to_user:
  # a0 - Frame address
  # a1 - Program counter
  # a2 - SATP Register
  csrw  mscratch, a0
  # ... (the rest of the routine, cut short here, sets up mstatus for
  # user mode, writes mepc and satp, restores the register context from
  # the trap frame, and finishes with mret) ...

When we call this function, we cannot expect to get control back. That's because we load the next process we want to run (through its trap frame context) and then we jump to that code via mepc when we execute the mret instruction.
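On the Rust side, switch_to_user is just an external function. A sketch of its declaration might look like the following (the never type reflects that control does not come back to the caller; the exact spelling in the repository may differ):

extern "C" {
    // Defined in assembly: loads the given trap frame, program counter,
    // and SATP value, then mret's into the process.
    fn switch_to_user(frame: usize, mepc: usize, satp: usize) -> !;
}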

Putting It Together

So, how does this all go together? Well, we set a context-switch timer to fire sometime in the future. When we take that trap, we call the scheduler to get a new process and then we switch to that process, thus restarting the CPU and exiting the trap.

 
7 => unsafe {
  // This is the context-switch timer.
  // We would typically invoke the scheduler here to pick another
  // process to run.
  // Machine timer
  // println!("CTX");
  let (frame, mepc, satp) = schedule();
  let mtimecmp = 0x0200_4000 as *mut u64;
  let mtime = 0x0200_bff8 as *const u64;
  // The frequency given by QEMU is 10_000_000 Hz, so this sets
  // the next interrupt to fire one second from now.
  // This is much too slow for normal operations, but it gives us
  // a visual of what's happening behind the scenes.
  mtimecmp.write_volatile(mtime.read_volatile() + 10_000_000);
  switch_to_user(frame, mepc, satp);
},
  

Once again, we cut the m_trap function short. However, take a look at the trap handler. We reset the kernel stack each time. This is fine for a single hart system, but we'll have to update it when we get to multiprocessing.

Conclusion

Starting a process isn't that big of a deal. However, it requires us to suspend how we normally think about programming. We're calling a function (switch_to_user) from which Rust never gets control back, yet it works?! Why? Well, we're using the CPU itself to change where we want to go, with Rust none the wiser.

Right now, our operating system handles interrupts and schedules processes. We should see the following when we run!

[Screenshot: console output repeating "Scheduling 1" about once per second, with "Test syscall" printed by the init process.]

We see a "Scheduling 1" whenever we execute a context-switch timer, which right now fires once per second. This is waaaay too slow for a normal OS, but it gives us enough time to see what's happening. Then the process itself, init_process, makes a system call after 70,000,000 iterations, which prints "Test syscall" to the screen.

We know our process scheduler is functioning and we know our process itself is being executed on the CPU. So, there we have it!

Table of Contents Chapter 7 → (Chapter 8) → Chapter 9
