Paper Notes: Chimera
Introduction
Chimera: Collaborative Preemption for Multitasking on a Shared GPU
Keywords:
Graphics Processing Unit;
Preemptive Multitasking;
Context Switch;
Idempotence
Problem Statement
Preemptive multitasking on CPUs has been primarily supported through context switching. However, the same preemption strategy incurs substantial overhead due to the large context in GPUs.
Because a GPU's context is very large, CPU-style context switching by itself is a poor fit for GPU preemption.
Proposed Solution
we propose Chimera, a collaborative preemption approach that can precisely control the overhead for multitasking on GPUs
The paper proposes Chimera, a collaborative preemption approach for multitasking GPUs that can precisely control preemption overhead.
Chimera can achieve a specified preemption latency while minimizing throughput overhead
Here, "precisely controlling the overhead" means meeting a specified preemption latency while minimizing the throughput overhead.
Chimera achieves the goal by intelligently selecting which SMs to preempt and how each thread block will be preempted.
Concretely, Chimera intelligently selects which SMs to preempt and how each thread block on them will be preempted.
It combines three techniques: flushing, context switching, and draining.
Contributions
An analysis of GPU flushing conditions, relaxing the semantic definition of idempotence.
A quantitative analysis of how the cost of each preemption technique (context switching, draining, and flushing) varies with a thread block's execution progress.
The Chimera design: intelligently selecting which SMs to preempt, and how to preempt their thread blocks, based on the estimated cost of each technique.
Terminology
overhead comes in two dimensions: a preempting kernel suffers from a long preemption latency, and the system throughput is wasted during the switch
The overhead here has two dimensions: the preempting kernel suffers a long preemption latency, and system throughput is wasted during the switch.
- Context switching
Context switching stores the context of currently running thread blocks, and preempts an SM with a new kernel.
Context switching saves the context of the currently running thread blocks and hands the SM over to the new kernel. (The middle-of-the-road option.)
- Draining
Draining stops issuing new thread blocks to the SM and waits until the SM finishes its currently running thread blocks.
Draining stops issuing new thread blocks to the SM and waits for the currently running thread blocks to finish before preempting it. (The polite option.)
- Flushing
Flushing drops the execution of running thread blocks and preempts the SM almost instantly.
Flushing drops the execution of the running thread blocks and preempts the SM almost instantly. (The brute-force option.)
Why the Three Techniques Have Different Costs
- the estimated preemption latency for each preemption technique.
- the estimated throughput overhead for each preemption technique.
The theoretical cost of each preemption technique when a thread block at a given point in its execution is preempted:
- The cost of context switching is dependent on the context size and the available bandwidth for an SM, which is almost constant across thread block execution
- The cost of draining, which is primarily preemption latency, is dependent on the remaining execution time of a thread block. It decreases toward the end of the thread block progress.
- The cost of flushing, on the other hand, is primarily throughput overhead, which is dependent on the work thrown away by flushing.
Conclusion: which technique is best depends largely on how far the thread block has progressed. If it has just started, use flushing; around the middle, use context switching; near the end, use draining.
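This decision rule can be sketched minimally as follows (the function names, scalar time units, and latency-budget parameter are my own illustration, not Chimera's actual interface or estimator):

```python
def preemption_cost(progress, block_time, ctx_switch_time):
    """Estimate (latency, throughput_overhead) for each technique.

    progress: fraction of the thread block already executed (0..1)
    block_time: estimated total execution time of the thread block
    ctx_switch_time: time to save/restore an SM's context (roughly constant)
    All numbers are illustrative; the real estimator uses per-SM counters.
    """
    remaining = (1.0 - progress) * block_time
    executed = progress * block_time
    return {
        # context switching: constant latency, context traffic costs throughput
        "context_switch": (ctx_switch_time, ctx_switch_time),
        # draining: latency equals the remaining work, nothing is thrown away
        "draining": (remaining, 0.0),
        # flushing: near-instant, but all executed work is discarded
        "flushing": (0.0, executed),
    }

def choose_technique(progress, block_time, ctx_switch_time, latency_budget):
    """Pick the technique with the least throughput overhead that still
    meets the preemption-latency budget (flushing always meets it here)."""
    costs = preemption_cost(progress, block_time, ctx_switch_time)
    feasible = {t: c for t, c in costs.items() if c[0] <= latency_budget}
    return min(feasible, key=lambda t: feasible[t][1])
```

With a budget of 25 time units and a 20-unit context, a block at 5% progress is flushed, one at 50% is context-switched, and one at 90% is drained, matching the conclusion above.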
Implementation (Follow-up)
3.1 GPU Scheduler with Preemptive Multitasking
An SM partitioning policy in the kernel scheduler determines how many SMs each kernel will run on. Chimera consists of two parts: estimating the cost of each preemption technique, and selecting SMs to preempt with the corresponding technique. Chimera can then directly compare the estimated cost of each preemption technique.
3.2 Cost Estimation
Chimera estimates the cost of each preemption technique precisely for each SM.
First, Chimera measures the total number of executed instructions for each thread block to determine the progress of each thread block
Second, Chimera also measures the progress of each thread block in cycles
From these counters it derives each thread block's instructions-per-cycle (IPC), or equivalently cycles-per-instruction (CPI).
The preemption latency of context switching is estimated with the same method.
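The progress-based estimate above can be sketched as follows (the counter names and the assumption that a block's total instruction count is known in advance are mine, not the paper's):

```python
def estimate_drain_latency(executed_insts, elapsed_cycles, total_insts):
    """Estimate the remaining cycles of a thread block from its measured CPI.

    executed_insts and elapsed_cycles would come from hardware counters;
    total_insts is assumed known (e.g., observed from earlier blocks of
    the same kernel). An illustrative sketch, not Chimera's exact logic.
    """
    cpi = elapsed_cycles / executed_insts      # measured cycles per instruction
    remaining = total_insts - executed_insts   # instructions left to run
    return remaining * cpi                     # predicted drain latency in cycles
```

A block that has run 1000 of 1500 instructions in 2000 cycles (CPI of 2) is predicted to need about 1000 more cycles to drain.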
3.3 Preemption Selection
How Chimera selects a subset of SMs to preempt and the technique for each.
The time complexity of Algorithm 1 is O(NT log T + N log N).
Thus, the selection algorithm's impact on preemption latency is negligible.
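A toy version of victim selection might look like this (the data layout and the single scalar cost per technique are simplifications I introduce; Chimera's Algorithm 1 weighs preemption latency and throughput overhead separately):

```python
def select_victims(sms, n_needed):
    """Greedy victim-selection sketch: for each SM, pick the cheapest
    preemption technique per resident thread block, sum those to get the
    SM's total cost, then preempt the n_needed cheapest SMs.

    sms: list of SMs; each SM is a list of per-block cost dicts, e.g.
         {"context_switch": 5, "draining": 2, "flushing": 8}.
    """
    scored = []
    for sm_id, blocks in enumerate(sms):
        # cheapest technique for every resident thread block on this SM
        plan = [min(b, key=b.get) for b in blocks]
        cost = sum(b[t] for b, t in zip(blocks, plan))
        scored.append((cost, sm_id, plan))
    scored.sort()                      # O(N log N) over the SMs
    return [(sm_id, plan) for _, sm_id, plan in scored[:n_needed]]
```

Since only sorts over N SMs and T thread blocks are involved, the work stays small next to the preemption itself, consistent with the note above.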
3.4 SM Flushing
The paper relaxes the idempotence condition by examining thread blocks individually, with a notion of time.
Evaluation
Experimental data
Various tables
Conclusion
Evaluations show that Chimera violates the deadline for only 0.2% of preemption requests when a 15µs preemption latency constraint is used. For multi-programmed workloads, Chimera can improve the average normalized turnaround time by 5.5x, and system throughput by 12.2%.
Chimera improves average turnaround time and system throughput.
Personal Thoughts
- How does this combined preemption approach handle starvation?
- Does the time complexity of deciding which preemption technique to use eat into Chimera's own preemption advantage?
- The paper touches on these points, but there still seems to be room for further discussion.
……