We consider the verification of multiple expected reward objectives at once on Markov decision processes (MDPs).
<br>This enables a trade-off analysis among multiple objectives by obtaining a Pareto front.
<br>We focus on strategies that are easy to employ and implement.
<br>That is, we consider strategies that are pure (no randomization) and use only bounded memory.
<br>We show that checking whether a point is achievable by a pure stationary strategy is NP-complete, even for two objectives, and we provide a mixed-integer linear programming (MILP) encoding to solve this problem.
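<br>To illustrate the flavor of such an encoding, the following is a minimal, hedged sketch (using gurobipy, the Python interface of Gurobi) of one way an achievability check for a pure stationary strategy could be phrased as a mixed-integer program. The toy MDP, the big-M linearisation, and all variable names are our own illustrative assumptions, not the encoding used in the paper or artifact (which, in particular, must also deal with well-definedness issues such as end components).

```python
# Hedged sketch: a generic big-M MILP for deciding whether a point
# (t1, t2) is achievable by a pure stationary strategy on a small MDP
# with expected total-reward objectives. All names and constants below
# are illustrative assumptions, not the paper's encoding.
import gurobipy as gp
from gurobipy import GRB

# Toy MDP: states 0..2, state 2 absorbing. trans[(s, a)] is a list of
# (successor, probability); rew[i][(s, a)] is the reward of objective i.
trans = {
    (0, 'a'): [(1, 1.0)],
    (0, 'b'): [(2, 1.0)],
    (1, 'a'): [(2, 1.0)],
}
rew = [
    {(0, 'a'): 2.0, (0, 'b'): 0.0, (1, 'a'): 1.0},  # objective 1
    {(0, 'a'): 0.0, (0, 'b'): 3.0, (1, 'a'): 1.0},  # objective 2
]
states, absorbing, init = [0, 1, 2], {2}, 0
point = [2.5, 0.5]   # the point whose achievability we check
M = 100.0            # big-M constant; must bound all reward values

m = gp.Model("pure_stationary_achievability")
# Binary choice variables: sigma[s, a] == 1 iff the strategy picks a in s.
sigma = {(s, a): m.addVar(vtype=GRB.BINARY, name=f"sigma_{s}_{a}")
         for (s, a) in trans}
# x[i, s]: expected total reward of objective i when starting in s.
x = {(i, s): m.addVar(lb=0.0, ub=M, name=f"x_{i}_{s}")
     for i in range(len(rew)) for s in states}

for s in states:
    if s in absorbing:
        for i in range(len(rew)):
            m.addConstr(x[i, s] == 0.0)
        continue
    acts = [a for (s2, a) in trans if s2 == s]
    # Exactly one action is chosen in every non-absorbing state.
    m.addConstr(gp.quicksum(sigma[s, a] for a in acts) == 1)
    for a in acts:
        for i in range(len(rew)):
            bellman = rew[i][(s, a)] + gp.quicksum(p * x[i, t] for (t, p) in trans[(s, a)])
            # Big-M linearisation of "if sigma[s, a] = 1 then x[i, s] = bellman".
            m.addConstr(x[i, s] <= bellman + M * (1 - sigma[s, a]))
            m.addConstr(x[i, s] >= bellman - M * (1 - sigma[s, a]))

# The point is achievable iff every objective meets its threshold.
for i, t in enumerate(point):
    m.addConstr(x[i, init] >= t)

m.optimize()
print("achievable" if m.status == GRB.OPTIMAL else "not achievable")
```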
<br>The bounded-memory case is handled via a product construction.
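<br>As a rough illustration of the idea behind such a reduction, the sketch below builds the product of an MDP with a deterministic finite-memory structure; a pure stationary strategy on the product then corresponds to a pure strategy with that amount of memory on the original MDP. The data layout and the memory-update signature are illustrative assumptions, not the construction from the paper.

```python
# Hedged sketch of a product construction (MDP x memory structure).
# trans[(s, a)] = [(s', p), ...];  update(node, s, a, s') -> node'.
def product_mdp(trans, memory_nodes, update, init_state, init_node):
    prod = {}
    for (s, a), succs in trans.items():
        for n in memory_nodes:
            # The memory node is updated deterministically along every
            # transition of the original MDP.
            prod[((s, n), a)] = [((s2, update(n, s, a, s2)), p) for (s2, p) in succs]
    return prod, (init_state, init_node)

# Example: two memory nodes remembering whether action 'b' was ever taken.
trans = {(0, 'a'): [(1, 0.5), (2, 0.5)], (0, 'b'): [(2, 1.0)], (1, 'a'): [(2, 1.0)]}
update = lambda n, s, a, s2: 1 if (n == 1 or a == 'b') else 0
prod, prod_init = product_mdp(trans, [0, 1], update, 0, 0)
for k, v in sorted(prod.items(), key=str):
    print(k, "->", v)
```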
<br>Experimental results using Storm and Gurobi show the feasibility of our algorithms.
<br>
<br>This artifact contains the source code of the model checker Storm (cf. stormchecker.org) as well as all required dependencies.
<br>Moreover, we include model files and scripts for replicating the experiments as conducted for the TACAS 2020 paper.
<br>Finally, the original log files that were used to produce the tables and figures in the evaluation section are included.