To realize the vision of intelligent interconnection of everything, real-time computing services need to be provided at the network edge. With the development of autonomous driving and vehicular networks, vehicles will be equipped with powerful computing and storage capabilities and interconnected through wireless networks. While meeting their own driving demands, some vehicles can act as edge computing nodes, sharing their computing resources and providing computing services; this paradigm is called vehicular cloud computing. However, the high mobility of vehicles makes the network topology highly dynamic, so state information such as wireless channel conditions and computing loads is difficult to acquire accurately, which makes it very challenging to optimize task offloading decisions to meet real-time requirements. On the other hand, compared with base stations and roadside units, vehicles are deployed more densely and their computing resources are more abundant, and thus should be fully and efficiently utilized. Focusing on task offloading in vehicular cloud computing, this thesis investigates delay-optimized task scheduling and efficient resource utilization, including:

1. Online learning-based distributed adaptive task offloading in vehicular networks. Considering task offloading among vehicles, with the objective of minimizing the offloading delay, a distributed learning-while-offloading framework is proposed to address the challenge that task-requesting vehicles lack global state information, and an online learning-based distributed adaptive offloading policy is designed. The proposed policy is aware of the vehicle topology and task sizes, and can adapt to dynamic topologies and time-varying workloads. Theoretical analysis proves that its performance loss is upper bounded compared with an ideal policy that knows the global state.

2. Distributed energy-aware task offloading under terminal energy constraints. For the problem of minimizing delay when energy-constrained passenger or pedestrian terminals offload tasks to vehicles, the task offloading policy in the absence of global state information is studied. To address the coupling between offloading decisions and the long-term energy constraint, an online decision function that balances delay against energy consumption is designed by combining Lyapunov optimization and multi-armed bandit theory, and on this basis an online learning, energy-aware offloading policy is proposed. Compared with the offline optimal policy, the deviations of both delay and energy consumption of the proposed policy are upper bounded.

3. Task replication-based highly reliable offloading. Task replication is proposed to effectively utilize the redundant computing resources of vehicles. An approximate closed-form expression of the optimal number of task replicas that minimizes the average offloading delay is derived, which is inversely proportional to the task arrival rate and proportional to the service rate. Furthermore, based on combinatorial multi-armed bandit theory, an online learning-based distributed replicated offloading policy is proposed. Simulation results based on real road environments show that, compared with offloading without replication, offloading with the optimized number of replicas reduces the offloading delay by 33% and improves the proportion of tasks completed within the delay bound from 97% to 99.6%.

4. Coded computing-based task offloading and resource allocation. For matrix multiplication tasks, optimal node and load allocation policies based on coded computing are proposed to minimize the computation delay. For the dedicated service scenario, the optimal load allocation is derived using the Lagrange multiplier method, the node allocation is then transformed into a max-min fair allocation problem, and a greedy policy is proposed. For the probabilistic service scenario, the non-convex optimization problem is solved iteratively via successive convex approximation. Simulation results show that, compared with uncoded computing, the proposed policies reduce the delay by about 80%.
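As a minimal illustration of the first contribution (online learning-based distributed offloading), the following Python sketch shows how a task-requesting vehicle could pick an offloading target with a UCB-style multi-armed bandit, using the empirical delay per unit workload so that tasks of different sizes remain comparable. The class name, the exploration weight, and the normalization are illustrative assumptions, not the exact policy derived in the thesis.

```python
import math

class LearningWhileOffloading:
    """Sketch of a UCB-style bandit for choosing which neighbor vehicle to offload to.

    Each candidate vehicle is an "arm"; the observed reward is the offloading delay
    normalized by the task workload, so delays of differently sized tasks are comparable.
    """

    def __init__(self, beta=1.0):
        self.beta = beta                 # exploration weight (illustrative value)
        self.count = {}                  # times each candidate has been selected
        self.avg_delay_per_bit = {}      # empirical mean of delay / workload
        self.t = 0                       # number of tasks offloaded so far

    def select(self, candidates):
        """Pick a candidate for the current task; the candidate set may change
        over time as the topology evolves, and unseen vehicles are tried first."""
        self.t += 1
        for v in candidates:             # explore newly appearing vehicles
            if v not in self.count:
                return v
        def score(v):                    # low estimated delay minus exploration bonus
            bonus = math.sqrt(self.beta * math.log(self.t) / self.count[v])
            return self.avg_delay_per_bit[v] - bonus
        return min(candidates, key=score)

    def update(self, v, workload_bits, observed_delay):
        """Feed back the delay observed after offloading a task of the given size."""
        n = self.count.get(v, 0)
        mean = self.avg_delay_per_bit.get(v, 0.0)
        sample = observed_delay / workload_bits
        self.count[v] = n + 1
        self.avg_delay_per_bit[v] = (mean * n + sample) / (n + 1)
```

A requesting vehicle would call select() when a task arrives and update() once the result returns; the logarithmic exploration bonus is what typically yields the bounded performance loss (sublinear regret) mentioned above.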
To realize the vision of intelligent interconnection of everything, real-time computing services are required at the network edge. Meanwhile, with the development of autonomous driving and vehicular networks, vehicles will be equipped with powerful computing and storage resources and interconnected through wireless networks. While meeting their own driving demands, some vehicles can serve as edge computing nodes by sharing their computing resources and providing computing services, which is called Vehicular Cloud Computing (VCC). However, the high mobility of vehicles makes the network topology highly dynamic, which makes it difficult to accurately acquire state information such as wireless channel gains and computing loads. It is thus challenging to optimize task offloading decisions to meet the real-time requirements of tasks. On the other hand, the density of vehicles is higher than that of base stations and roadside units, so their computing resources are abundant and should be fully and efficiently exploited. This thesis focuses on task offloading in the VCC system, and investigates the task scheduling and efficient resource utilization problem for delay optimization. The main contributions are summarized as follows:

1. Online learning-based distributed adaptive task offloading in vehicular networks. Task offloading among vehicles is considered with the objective of minimizing the offloading delay. To overcome the challenge that task-requesting vehicles lack global state information, a distributed learning-while-offloading framework is proposed, and an online learning-based distributed adaptive task offloading policy is designed. With occurrence-awareness and input-awareness, the proposed policy can adapt to the dynamic topology of vehicles and the time-varying workloads of tasks. Theoretical analysis proves that its performance loss is upper bounded compared to the optimal policy with global state information.

2. Distributed energy-aware task offloading under the terminal energy constraint. The user equipment of an on-board passenger or a pedestrian offloads tasks to vehicles, and the task offloading policy is investigated in the absence of global state information, in order to minimize the average offloading delay under a long-term energy constraint. To overcome the challenge that the offloading decisions are coupled with the long-term energy constraint, an online decision function that balances the delay-energy tradeoff is designed based on Lyapunov optimization and multi-armed bandit theory, and an online learning-based, energy-aware task offloading policy is proposed. Theoretical analysis proves that, compared to the offline optimal policy, the deviations of both offloading delay and energy consumption of the proposed policy are upper bounded.
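To make the delay-energy tradeoff of the second contribution concrete, the sketch below maintains a Lyapunov virtual queue for the long-term energy budget and scores each candidate vehicle by a drift-plus-penalty style combination of its estimated delay (with a UCB exploration term) and its energy cost. The class, the parameter V, and the per-task energy budget are assumptions for illustration; the exact decision function in the thesis may differ.

```python
import math

class EnergyAwareOffloader:
    """Illustrative per-task decision combining a Lyapunov virtual energy queue
    with bandit delay estimates.  A larger V favors delay; a larger virtual
    queue Q pushes decisions toward saving energy."""

    def __init__(self, energy_budget_per_task, V=10.0):
        self.e_budget = energy_budget_per_task   # long-term average energy budget
        self.V = V                               # Lyapunov tradeoff parameter
        self.Q = 0.0                             # virtual energy deficit queue
        self.t = 0
        self.stats = {}                          # vehicle -> (count, mean observed delay)

    def decide(self, candidates, energy_cost):
        """candidates: iterable of vehicle ids; energy_cost[v]: transmit energy
        the terminal would spend to offload the current task to vehicle v."""
        self.t += 1
        def score(v):
            n, mean_delay = self.stats.get(v, (0, 0.0))
            if n == 0:
                return -math.inf                 # force one try of unseen vehicles
            bonus = math.sqrt(2.0 * math.log(self.t) / n)
            # drift-plus-penalty style weight: V * delay estimate + Q * energy cost
            return self.V * (mean_delay - bonus) + self.Q * energy_cost[v]
        return min(candidates, key=score)

    def feedback(self, v, observed_delay, spent_energy):
        n, mean_delay = self.stats.get(v, (0, 0.0))
        self.stats[v] = (n + 1, (mean_delay * n + observed_delay) / (n + 1))
        # virtual queue update: accumulate deficit whenever the budget is exceeded
        self.Q = max(self.Q + spent_energy - self.e_budget, 0.0)
```

The virtual queue update is what enforces the long-term energy constraint on average, while the bandit term keeps learning the delays of candidate vehicles online.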
3. Task replication for highly reliable task offloading. The task replication technique is used to further exploit the abundant computing resources of vehicles. Under the reliability constraint, an approximate closed-form expression of the optimal number of task replicas that minimizes the average offloading delay is derived; it is inversely proportional to the task arrival rate and proportional to the service rate. Based on combinatorial multi-armed bandit theory, a distributed online learning-based task replication policy is then proposed. Simulation results based on a real road environment show that, compared with offloading without replication, offloading with the optimized number of task replicas reduces the offloading delay by 33%, while improving the reliability, i.e., the proportion of tasks completed within the delay bound, from 97% to 99.6%.

4. Coded computing-based task offloading and resource allocation. For matrix multiplication tasks, a delay-optimized node and load allocation policy based on coded computing is proposed. For the dedicated service mode, the optimal load allocation is derived with the Lagrange multiplier method; the node allocation problem is then transformed into a max-min fair allocation problem, and greedy policies are proposed. For the probabilistic service mode, the non-convex optimization problem is solved iteratively via the successive convex approximation method. Simulation results show that the proposed policy reduces the delay by about 80% compared to the case without coding.
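As a toy illustration of the coded-computing idea behind the fourth contribution, the snippet below encodes the row blocks of A with an MDS-like random generator so that the product A @ x can be recovered from any k of the n worker results, which is what mitigates straggling vehicles. The random real-valued code, the function names, and the block split are assumptions for illustration; the code construction and load allocation in the thesis may differ.

```python
import numpy as np

def encode_blocks(A, n, k, seed=0):
    """Split A into k row blocks and produce n coded blocks sum_j G[i, j] * A_j.
    A random real-valued generator G makes every k x k submatrix invertible
    with probability 1, so any k coded results suffice for decoding."""
    rng = np.random.default_rng(seed)
    blocks = np.split(A, k)                   # assumes the row count is divisible by k
    G = rng.standard_normal((n, k))           # illustrative MDS-like generator matrix
    coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]
    return G, coded

def decode(G, results):
    """results: dict mapping worker index -> its coded partial product.
    Only the first k workers to finish are needed; stragglers are ignored."""
    k = G.shape[1]
    idx = sorted(results)[:k]                 # any k completed workers
    Gk = G[idx, :]                            # corresponding k x k sub-generator
    cols = results[idx[0]].shape[1]
    Y = np.stack([results[i].ravel() for i in idx])
    Z = np.linalg.solve(Gk, Y)                # rows = uncoded block products A_j @ x
    return Z.reshape(-1, cols)

# Toy usage: 6 worker vehicles, the product A @ x is recovered from any 4 results.
A = np.arange(32.0).reshape(8, 4)
x = np.ones((4, 1))
G, coded = encode_blocks(A, n=6, k=4)
results = {i: coded[i] @ x for i in (0, 2, 3, 5)}   # workers 1 and 4 straggle
assert np.allclose(decode(G, results), A @ x)
```

Splitting A into more blocks reduces the per-worker load, while the redundancy n - k determines how many slow vehicles can be tolerated; choosing these numbers and the workers themselves is the node and load allocation problem addressed in the fourth contribution.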