eng
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Leibniz Transactions on Embedded Systems
2199-2002
2018-05-30
01:1
01:30
10.4230/LITES-v005-i001-a001
article
Risk-Aware Scheduling of Dual Criticality Job Systems Using Demand Distributions
Alahmad, Bader Naim
1
https://orcid.org/0000-0002-6409-1277
Gopalakrishnan, Sathish
2
The University of British Columbia, 2366 Main Mall, Vancouver, BC, Canada V6T 1Z4
The University of British Columbia, 2332 Main Mall, Vancouver, BC, Canada V6T 1Z4
We pose the problem of scheduling Mixed Criticality (MC) job systems with only two criticality levels, Lo and Hi (referred to as Dual Criticality job systems), on a single processing platform when job demands are probabilistic and their distributions are known. Current MC models require that the scheduling policy allocate as little execution time as possible to Lo-criticality jobs if the scenario of execution is of Hi criticality, and drop Lo-criticality jobs entirely as soon as the execution scenario's criticality level can be inferred and is Hi. The work incurred by "incorrectly" scheduling Lo-criticality jobs when the realized scenario is Hi might affect the feasibility of Hi-criticality jobs; we quantify this work and call it Work Threatening Feasibility (WTF). Our objective is to construct online scheduling policies that minimize the expected WTF for the given instance, and under which the instance is feasible in a probabilistic sense that is consistent with the traditional deterministic definition of MC feasibility. We develop a probabilistic framework for MC scheduling, where feasibility is defined in terms of (chance) constraints on the probabilities that Lo and Hi jobs meet their deadlines. The probabilities are computed over the set of sample paths, or trajectories, induced by executing the policy, and those paths depend on the set of execution scenarios and the given demand distributions. Our goal is to exploit the information provided by job distributions to compute the minimum expected WTF below which the given instance is not feasible in probability, and to compute a (randomized) "efficiently implementable" scheduling policy that realizes the latter quantity. We model the problem as a Constrained Markov Decision Process (CMDP) over a suitable state space and a finite planning horizon, and show that an optimal (non-stationary) Markov randomized scheduling policy exists.
We derive an optimal policy by solving a Linear Program (LP). We also carry out quantitative evaluations on select probabilistic MC instances to demonstrate that our approach potentially outperforms current MC scheduling policies.
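The occupancy-measure LP formulation behind the abstract's CMDP-to-LP reduction can be illustrated on a toy instance. The sketch below is not the paper's construction: the state space, costs, and budget are hypothetical, and the code only shows the generic pattern of solving a finite-horizon CMDP with one constraint via linear programming and recovering a randomized Markov policy from the optimal occupancy measures.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny illustrative CMDP (all numbers hypothetical, not from the paper):
# 2 states, 2 actions, horizon T = 2.
T, S, A = 2, 2, 2
P = np.array([  # P[s, a, s'] transition probabilities
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.5, 0.5], [0.1, 0.9]],
])
c = np.array([[1.0, 2.0], [4.0, 0.5]])  # primary cost c(s, a) to minimize
d = np.array([[0.0, 1.0], [1.0, 0.0]])  # constrained cost d(s, a)
mu = np.array([1.0, 0.0])               # initial state distribution
D = 0.8                                 # budget on expected cumulative d-cost

n = T * S * A  # one occupancy variable rho_t(s, a) per (t, s, a)
idx = lambda t, s, a: (t * S + s) * A + a

# Flow-conservation equalities defining valid occupancy measures.
A_eq, b_eq = [], []
for s in range(S):                      # sum_a rho_0(s, a) = mu(s)
    row = np.zeros(n)
    for a in range(A):
        row[idx(0, s, a)] = 1.0
    A_eq.append(row); b_eq.append(mu[s])
for t in range(T - 1):                  # mass into s' at t+1 = mass out at t
    for s2 in range(S):
        row = np.zeros(n)
        for a in range(A):
            row[idx(t + 1, s2, a)] = 1.0
        for s in range(S):
            for a in range(A):
                row[idx(t, s, a)] -= P[s, a, s2]
        A_eq.append(row); b_eq.append(0.0)

# One chance-style inequality: expected cumulative d-cost <= D.
A_ub = [np.array([d[s, a] for t in range(T) for s in range(S) for a in range(A)])]
obj = np.array([c[s, a] for t in range(T) for s in range(S) for a in range(A)])

res = linprog(obj, A_ub=A_ub, b_ub=[D], A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
rho = res.x.reshape(T, S, A)

# Recover the (possibly randomized) non-stationary Markov policy pi_t(a | s);
# entries are NaN at (t, s) pairs the optimal occupancy never visits.
with np.errstate(invalid="ignore"):
    pi = rho / rho.sum(axis=2, keepdims=True)
print("optimal expected primary cost:", res.fun)
```

Any basic optimal solution of this LP yields a Markov policy that randomizes in at most as many (t, s) pairs as there are constraints, which is what makes such policies "efficiently implementable".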
https://drops.dagstuhl.de/storage/07lites/lites_vol005/lites_vol005_issue001/LITES-v005-i001-a001/LITES-v005-i001-a001.pdf
Mixed criticalities
Probability distribution
Real time systems
Scheduling
Chance constrained Markov decision process
Linear programming
Randomized policy