
multithreading - Steps in Context Switching

I am asked to describe the steps involved in a context switch (1) between two different processes and (2) between two different threads in the same process.

  1. During a context switch, the kernel will save the context of the old process in its PCB and then load the saved context of the new process scheduled to run.
  2. Two threads in the same process can be scheduled by the operating system so that they appear to execute in parallel; because they share the same address space, a context switch between them is usually faster than a context switch between two different processes (see the sketch below).

Is this too general, or what would you add to explain the process more clearly?
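
To make (1) and (2) concrete, here is a minimal, hypothetical sketch of the two kinds of control block involved; the field names are illustrative only, not taken from any particular kernel:

    /* Hypothetical, simplified control blocks -- names are illustrative. */

    typedef struct tcb {             /* per-thread state (Thread Control Block)   */
        void       *saved_sp;        /* stack pointer saved at the last switch    */
        int         priority;
        struct tcb *next;            /* link in a ready/wait queue                */
    } tcb_t;

    typedef struct pcb {             /* per-process state (Process Control Block) */
        void  *page_table_base;      /* address-space root (e.g. CR3 on x86)      */
        tcb_t *threads;              /* the process's threads                     */
        /* open files, credentials, accounting, ... */
    } pcb_t;

    /* A thread switch only swaps tcb_t state; a process switch additionally
     * has to load page_table_base, which is what makes it more expensive.   */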



1 Reply


It's much easier to explain those in reverse order because a process-switch always involves a thread-switch.

A typical thread context switch on a single-core CPU happens like this:

  1. All context switches are initiated by an 'interrupt'. This could be an actual hardware interrupt that runs a driver (e.g. from a network card, keyboard, memory-management or timer hardware), or a software call (system call) that performs a hardware-interrupt-like call sequence to enter the OS. In the case of a driver interrupt, the OS provides an entry point that the driver can call instead of performing the 'normal' direct interrupt return, and so allows a driver to exit via the OS scheduler if it needs the OS to set a thread ready (e.g. because it has signaled a semaphore).

  2. Non-trivial systems will have to initiate a hardware-protection-level change to enter a kernel-state so that the kernel code/data etc. can be accessed.

  3. Core state for the interrupted thread has to be saved. On a simple embedded system, this might just be pushing all registers onto the thread stack and saving the stack pointer in its Thread Control Block (TCB).

  4. Many systems switch to an OS-dedicated stack at this stage so that the bulk of OS-internal stack requirements are not inflicted on the stack of every thread.

  5. It may be necessary to mark the thread stack position where the change to interrupt-state occurred to allow for nested interrupts.

  6. The driver/system call runs and may change the set of ready threads by adding or removing TCBs from the internal queues for the different thread priorities. For example, the network card driver may have set an event or signaled a semaphore that another thread was waiting on, so that thread is added to the ready set; or the running thread may have called sleep() and so elected to remove itself from the ready set.

  7. The OS scheduler algorithm runs to decide which thread to run next: typically the highest-priority ready thread that is at the front of the queue for that priority (see the sketch after this list). If the next thread to run belongs to a different process from the previously running thread, some extra work is needed here (see the process-switch steps later).

  8. The saved stack pointer from the TCB for that thread is retrieved and loaded into the hardware stack pointer.

  9. The core state for the selected thread is restored. On the simple embedded system above, the registers would be popped from the stack of the selected thread. More complex systems will also have to handle a return to user-level protection.

  10. An interrupt return is performed, transferring execution to the selected thread.
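
For concreteness, below is a minimal, hypothetical C sketch of the bookkeeping behind steps 3 and 6-10 on a simple single-core system. All names (tcb_t, make_ready(), schedule(), the save/restore helpers) are illustrative rather than taken from any real kernel, and the actual register save/restore and interrupt return would be a few lines of architecture-specific assembly rather than C:

    #define NUM_PRIORITIES 8            /* illustrative; priority 0 is highest */

    typedef struct tcb {                /* Thread Control Block (hypothetical) */
        void       *saved_sp;           /* stack pointer saved in step 3       */
        int         priority;           /* 0 .. NUM_PRIORITIES-1               */
        struct tcb *next;               /* link in a FIFO ready queue          */
    } tcb_t;

    static tcb_t *ready_head[NUM_PRIORITIES];   /* one ready queue per priority    */
    static tcb_t *ready_tail[NUM_PRIORITIES];
    static tcb_t *current;                      /* the thread that was interrupted */

    /* Step 6: a driver or system call makes a waiting thread ready by
     * appending its TCB to the queue for its priority. */
    void make_ready(tcb_t *t)
    {
        t->next = NULL;
        if (ready_tail[t->priority])
            ready_tail[t->priority]->next = t;
        else
            ready_head[t->priority] = t;
        ready_tail[t->priority] = t;
    }

    /* Step 7: pick the highest-priority ready thread, i.e. the one at the
     * front of the highest non-empty queue. */
    tcb_t *schedule(void)
    {
        for (int p = 0; p < NUM_PRIORITIES; p++) {
            if (ready_head[p]) {
                tcb_t *t = ready_head[p];
                ready_head[p] = t->next;
                if (!ready_head[p])
                    ready_tail[p] = NULL;
                return t;
            }
        }
        return current;     /* nothing else is ready; resume the old thread */
    }

    /* Steps 3, 8, 9 and 10 are architecture-specific and would really be
     * assembly; these externs are hypothetical stand-ins. */
    extern void *save_registers_and_get_sp(void);         /* push registers, return SP */
    extern void  load_sp_and_restore_registers(void *sp); /* pop registers, int-return */

    void context_switch(void)
    {
        current->saved_sp = save_registers_and_get_sp();   /* step 3                */
        /* a real kernel would re-queue `current` here if it is still runnable */
        tcb_t *next = schedule();                          /* step 7                */
        current = next;
        load_sp_and_restore_registers(next->saved_sp);     /* steps 8, 9 and 10     */
    }

A driver would call make_ready() for a thread it has just woken (step 6), and the interrupt-exit path would then call context_switch() instead of returning directly.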

In the case of a multicore CPU, things are more complex. The scheduler may decide that a thread currently running on another core needs to be stopped and replaced by a thread that has just become ready. It can do this by using its interprocessor driver to hardware-interrupt the core running the thread that has to be stopped. The complexities of this operation, on top of all the other stuff, are a good reason to avoid writing OS kernels :)
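
A rough sketch of that cross-core case, again with hypothetical names: send_ipi() stands in for whatever inter-processor-interrupt mechanism the platform provides (e.g. the local APIC on x86), and the interrupted core then runs its own context-switch path from the handler:

    #define RESCHEDULE_VECTOR 0xF0      /* illustrative interrupt vector number */

    /* Hypothetical wrapper around the platform's inter-processor interrupt. */
    extern void send_ipi(int core_id, int vector);

    /* Ask another core to stop its current thread: the IPI handler on that
     * core enters the scheduler and performs the thread switch itself. */
    void preempt_remote(int core_id)
    {
        send_ipi(core_id, RESCHEDULE_VECTOR);
    }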

A typical process context switch happens like this:

  1. Process context switches are initiated by a thread context switch, so all of the above, steps 1-10, are going to need to happen.

  2. At step 7 above, the scheduler decides to run a thread belonging to a different process from the one that owned the previously running thread.

  3. The memory-management hardware has to be loaded with the address space for the new process, i.e. whatever selectors/segments/flags allow the threads of the new process to access its memory (see the sketch after this list).

  4. The context of any FPU hardware needs to be saved to, and restored from, the PCB.

  5. There may be other process-dedicated hardware that needs to be saved/restored.
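
A hypothetical sketch of the extra work in steps 3 and 4; the helpers are stand-ins, with comments naming the real mechanisms they would typically wrap on common architectures:

    typedef struct pcb {                    /* Process Control Block (hypothetical)          */
        void          *page_table_base;     /* address-space root: CR3 on x86, TTBR0 on ARM  */
        unsigned char  fpu_state[512];      /* FPU/SIMD context, e.g. an FXSAVE area         */
    } pcb_t;

    /* Hypothetical stand-ins for architecture-specific operations. */
    extern void load_page_table_base(void *root);         /* e.g. writes CR3        */
    extern void save_fpu(unsigned char *area);            /* e.g. FXSAVE / XSAVE    */
    extern void restore_fpu(const unsigned char *area);   /* e.g. FXRSTOR / XRSTOR  */

    void switch_process_context(pcb_t *old_proc, pcb_t *new_proc)
    {
        save_fpu(old_proc->fpu_state);                    /* step 4: save FPU state    */
        load_page_table_base(new_proc->page_table_base);  /* step 3: new address space */
        restore_fpu(new_proc->fpu_state);                 /* step 4: restore FPU state */
    }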

On any real system the mechanisms are architecture-dependent, and the above is a rough and incomplete guide to the implications of either kind of context switch. There are other overheads generated by a process switch that are not strictly part of the switch itself: there may be extra cache flushes and page faults after a process switch, since some of the new process's memory may have been paged out in favour of pages belonging to the process that owned the previously running thread.


