Tuesday 15 April 2014

operating system - Context Switch questions: What part of the OS is involved in managing the Context Switch?


I was asked to answer these questions about OS context switching. I find them very difficult, and I cannot find the answers in my textbook:

  1. How many PCBs exist in the system at a particular time?
  2. What are two situations that could cause a context switch to occur? (I think one happens in the middle of a process and one when a process ends, but I'm not sure.)
  3. Hardware support can make a difference in the amount of time the switch takes. What are two different approaches?
  4. What part of the OS is involved in managing the context switch?

3: A non-exhaustive list of potential hardware optimizations:

  * Small register set (hence less to save and restore on a context switch).
  * Floating-point / vector-processor 'dirty' flags for the register set - these let the kernel skip saving that context if nothing has touched it since the thread was switched in. FP/VP contexts are usually very large, and a great many threads never use them at all. Some RTOSs provide an API for a thread to declare that it never uses FP/VP, eliminating even more context that need not be saved - useful especially when an ISR readies a thread that runs briefly and completes, after which the kernel switches straight back to the original thread. (A lazy-save sketch appears after this list.)
  * Shadow register banks: multiple CPU register banks, seen on small embedded CPUs whose registers live in on-board single-cycle SRAM. Switching context is then just a matter of switching which bank the registers are addressed from, which usually takes only a few instructions and is very cheap. The number of contexts is normally severely limited on such systems. (A bank-switch sketch appears after this list.)
  * Shadow interrupt registers: a shadow register bank reserved for use inside ISRs. For example, ARM CPUs shadow 6 or 7 registers for the fast interrupt handler (FIQ) and a few less for regular interrupts (IRQ). While this is not strictly a speed-up of the context switch itself, it can reduce the cost of a context switch performed on the back of an ISR.
  * Physically mapped versus virtually mapped caches: a virtually mapped cache has to be flushed on a context switch if the MMU mappings have changed - which they will have in any multi-process environment with memory protection. A physically mapped cache avoids that flush, but it makes virtual-to-physical address translation a critical-path activity on every load and store, and many gates are spent on making that translation fast enough. Virtually mapped caches were nonetheless a design choice on some CPUs intended for embedded systems. (A sketch of the flush cost appears after this list.)
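
To make question 4 concrete: the part of the OS that actually performs the switch is usually the dispatcher, invoked by the scheduler after, for example, a timer interrupt or a blocking system call. Below is a toy sketch of a process control block (PCB) and the dispatcher's save/restore step; it is not taken from any real kernel, all names are illustrative, and the two hw_* routines would be assembly in practice.

    #include <stdint.h>

    enum proc_state { READY, RUNNING, BLOCKED, TERMINATED };

    struct cpu_context {
        uint64_t gpr[16];   /* general-purpose registers         */
        uint64_t pc;        /* program counter                   */
        uint64_t sp;        /* stack pointer                     */
        uint64_t flags;     /* condition codes / status register */
    };

    struct pcb {
        int                pid;
        enum proc_state    state;
        struct cpu_context ctx;   /* saved CPU state for this process */
        /* a real PCB also holds memory-management info, open files, etc. */
    };

    /* Assembly stubs: copy the live CPU registers into / out of a cpu_context. */
    void hw_save_context(struct cpu_context *ctx);
    void hw_restore_context(const struct cpu_context *ctx);

    /* The dispatcher: save the outgoing process's state into its PCB,
     * then restore the incoming process's state from its PCB. */
    void context_switch(struct pcb *prev, struct pcb *next)
    {
        hw_save_context(&prev->ctx);
        prev->state = READY;

        next->state = RUNNING;
        hw_restore_context(&next->ctx);   /* execution resumes in 'next' */
    }

One PCB of this kind exists per process currently in the system, which is the usual way to think about question 1.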
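
One common way to exploit the FP/VP 'dirty flag' idea is a lazy save: on a switch the kernel simply disables the FP unit, and the large FP register file is saved and restored only when the new thread actually executes an FP instruction and traps. A rough sketch, assuming hypothetical fpu_disable()/fpu_enable()/fpu_save()/fpu_restore() primitives rather than any particular CPU's mechanism:

    struct fpu_state { unsigned char regs[512]; };   /* FP/vector state is large */

    struct task { struct fpu_state fpu; };

    static struct task *fpu_owner;   /* task whose state is live in the FP unit */

    /* Hypothetical hardware primitives. */
    void fpu_disable(void);          /* make the next FP instruction trap */
    void fpu_enable(void);
    void fpu_save(struct fpu_state *s);
    void fpu_restore(const struct fpu_state *s);

    /* On a context switch, don't touch the FP registers at all. */
    void switch_fpu_lazy(void)
    {
        fpu_disable();
    }

    /* Trap handler: an FP instruction ran while the FP unit was disabled. */
    void fpu_unavailable_trap(struct task *current)
    {
        if (fpu_owner != current) {
            if (fpu_owner)
                fpu_save(&fpu_owner->fpu);   /* pay the cost only when needed */
            fpu_restore(&current->fpu);
            fpu_owner = current;
        }
        fpu_enable();                        /* the faulting instruction is retried */
    }

A thread that never touches the FP unit never traps, so its FP context is never saved or restored at all.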
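
The shadow-register-bank item, sketched: if the register file lives in on-chip SRAM and the current bank is selected by a field in a control register, the "switch" is a single store. The register address and layout below are invented for illustration; the real mechanism is CPU-specific.

    #include <stdint.h>

    #define NUM_BANKS 8u   /* such CPUs typically support only a handful of contexts */

    /* Hypothetical memory-mapped bank-select register. */
    #define BANK_SELECT_REG (*(volatile uint32_t *)0x4000F000u)

    /* Instead of copying registers out to memory, point the CPU at another bank. */
    static inline void switch_to_bank(uint32_t bank)
    {
        BANK_SELECT_REG = bank % NUM_BANKS;   /* one store, a handful of cycles */
    }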
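
Finally, the cache item: with a virtually mapped cache, the switch path itself has to flush whenever the address space changes, a cost that can dwarf the register save/restore. A sketch with hypothetical names:

    struct address_space;                         /* opaque: page tables, ASID, ... */

    void flush_virtually_indexed_cache(void);     /* hypothetical and expensive */
    void mmu_switch(struct address_space *next);  /* hypothetical */

    /* Called by the dispatcher when the switch crosses a process boundary. */
    void switch_address_space(struct address_space *prev, struct address_space *next)
    {
        if (prev != next) {
            flush_virtually_indexed_cache();      /* needed only for virtually mapped caches */
            mmu_switch(next);
        }
    }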
