- * A CPU takes an AST when it is about to return to user code.
- * Instead of going back to user code, it calls ast_taken.
- * Machine-dependent code is responsible for maintaining
- * a set of reasons for an AST, and passing this set to ast_taken.
+ * A processor takes an AST when it is about to return from
+ * interrupt context, at which point it calls ast_taken.
+ *
+ * Machine-dependent code is responsible for maintaining
+ * a set of reasons for an AST, and passing this set to ast_taken.
+ */
+typedef uint32_t ast_t;
+
+/*
+ * When returning from interrupt/trap context to kernel mode,
+ * the pending ASTs are masked with AST_URGENT to determine if
+ * ast_taken(AST_PREEMPTION) should be called, for instance to
+ * effect preemption of a kernel thread by a realtime thread.
+ * This is also done when re-enabling preemption or re-enabling
+ * interrupts, since an AST may have been set while preemption
+ * was disabled, and it should take effect as soon as possible.
+ *
+ * When returning from interrupt/trap/syscall context to user
+ * mode, any and all ASTs that are pending should be handled.
+ *
+ * If a thread context switches, only ASTs not in AST_PER_THREAD
+ * remain active. The per-thread ASTs are stored in the thread_t
+ * and re-enabled when the thread is context-switched back in.
+ *
+ * Typically the preemption ASTs are set as a result of threads
+ * becoming runnable, threads changing priority, or quantum
+ * expiration. If a thread becomes runnable and is chosen
+ * to run on another processor, cause_ast_check() may be called
+ * to IPI that processor and request csw_check() be run there.
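To make the ast_t convention above concrete, here is a minimal, self-contained sketch of the bit-vector idea. All of the bit values, and the AST_NONE/AST_PREEMPTION/AST_PER_THREAD groupings, are illustrative assumptions rather than what this header ultimately defines:

    #include <stdint.h>

    typedef uint32_t ast_t;

    /* Illustrative bit assignments; the real header defines its own. */
    #define AST_NONE        ((ast_t)0x00)
    #define AST_PREEMPT     ((ast_t)0x01)  /* a preemption reason          */
    #define AST_URGENT      ((ast_t)0x02)  /* may preempt kernel code      */
    #define AST_QUANTUM     ((ast_t)0x04)  /* quantum expired              */
    #define AST_BSD         ((ast_t)0x80)  /* an example per-thread reason */

    #define AST_PREEMPTION  (AST_PREEMPT | AST_QUANTUM | AST_URGENT)
    #define AST_PER_THREAD  (AST_BSD)

    /* The processor's pending set; machine-dependent code ORs reasons in. */
    static ast_t cpu_pending = AST_NONE;

    static void
    ast_on(ast_t reasons)
    {
        cpu_pending |= reasons;
    }

    static void
    ast_off(ast_t reasons)
    {
        cpu_pending &= ~reasons;
    }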
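Continuing that sketch (and reusing its definitions), the two return paths the comment describes might look as follows. The helper bodies are assumptions, but the kernel/user distinction mirrors the text: in kernel mode only an urgent AST may cause preemption, while on return to user mode every pending reason is handled:

    /* Hypothetical ast_taken(): act on, then clear, the given reasons. */
    static void
    ast_taken(ast_t reasons)
    {
        ast_t handled = cpu_pending & reasons;
        ast_off(handled);
        /* ... dispatch each reason in 'handled': preempt, signals, ... */
    }

    /*
     * Return from interrupt/trap to kernel mode, or re-enable
     * preemption/interrupts: only an urgent AST may preempt kernel code.
     */
    static void
    kernel_mode_return(void)
    {
        if (cpu_pending & AST_URGENT) {
            ast_taken(AST_PREEMPTION);
        }
    }

    /* Return from interrupt/trap/syscall to user mode: handle it all. */
    static void
    user_mode_return(void)
    {
        if (cpu_pending != AST_NONE) {
            ast_taken(cpu_pending);
        }
    }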
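The AST_PER_THREAD rule can likewise be sketched as a save/restore pair at context-switch time. The thread_s structure below stands in for thread_t, and its field name is an assumption:

    /* Hypothetical stand-in for thread_t. */
    typedef struct thread {
        ast_t ast;      /* per-thread reasons parked while switched out */
    } thread_s;

    /* Outgoing thread: only ASTs not in AST_PER_THREAD stay pending. */
    static void
    context_switch_out(thread_s *outgoing)
    {
        outgoing->ast = cpu_pending & AST_PER_THREAD;
        ast_off(AST_PER_THREAD);
    }

    /* Incoming thread: its parked per-thread reasons become pending again. */
    static void
    context_switch_in(thread_s *incoming)
    {
        ast_on(incoming->ast);
        incoming->ast = AST_NONE;
    }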
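Finally, a rough sketch of the cross-processor case: cause_ast_check() marks the remote processor's pending set and IPIs it, so that csw_check() runs there on the next interrupt return. Both helpers below are assumptions, not real primitives:

    typedef struct processor *processor_t;

    /* Assumed helpers: mark a remote pending set and interrupt that CPU. */
    extern void remote_ast_on(processor_t p, ast_t reasons);
    extern void send_ipi(processor_t p);

    /*
     * A thread became runnable and was chosen for another processor,
     * so prod that processor; its interrupt-return path will run
     * csw_check() and notice the new thread.
     */
    static void
    cause_ast_check(processor_t p)
    {
        remote_ast_on(p, AST_PREEMPT);
        send_ipi(p);
    }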