This document describes the command-line options and interface commands of the Markov Reward Model Checker (below referenced as MRMC) version 1.2.2.

-----------------------------------------------------------------
Contents
-----------------------------------------------------------------

  0.   General information
  1.   Getting MRMC models
  2.   Command line options
  2.1  An example of .tra file
  2.2  An example of .lab file
  2.3  An example of .rew file
  2.4  An example of .rewi file
  3.   Interface commands
  3.1  PCTL logic formulas
  3.2  PRCTL logic formulas
  3.3  CSL logic formulas
  3.4  CSRL logic formulas

-----------------------------------------------------------------
0. General information
-----------------------------------------------------------------

MRMC is a command-line model checker for discrete-time and continuous-time
Markov reward models, implemented in C. It supports reward extensions of PCTL
and CSL (PRCTL and CSRL), and allows for the automated verification of
properties concerning long-run and instantaneous rewards as well as cumulative
rewards. In particular, it supports checking the reachability of a set of goal
states (visiting only legal states on the way) under a time constraint and an
accumulated-reward constraint.

MRMC has an interactive interface, i.e. a command line. Section
"2. Command line options" explains how to start MRMC; section
"3. Interface commands" explains how to interact with it.

In the MRMC output, probabilities (except for the Qureshi-Sanders
uniformization for the CSRL logic) are printed according to the desired error
bound.

Example: For the error bound set to 1.0e-4, the probability 0.63845249 will be
printed up to the 5'th digit (4+1) after the decimal point, i.e.: 0.63845

-----------------------------------------------------------------
1. Getting MRMC models
-----------------------------------------------------------------

MRMC models can be generated from Prism models, using the command-line Prism
starting from version 3.0:

  http://www.cs.bham.ac.uk/~dxp/prism/download.php

The required options of "prism" are listed here and were obtained by running
"prism -help":

  -exportmrmc .................... When exporting matrices/vectors/labels, use MRMC format
  -exportlabels ........... Export the list of labels and satisfying states to a file
  -exporttrans ............ Export the transition matrix to a file
  -exportstaterewards ..... Export the state rewards vector to a file
  -exporttransrewards ..... Export the transition rewards matrix to a file

NOTE: The "transition rewards" are what we refer to as "impulse rewards".

A typical example of generating an MRMC model from a Prism model would be:

  $ prism model.sm model.csl -exportmrmc -exportlabels model.lab -exporttrans model.tra -exportstaterewards model.rew -exporttransrewards model.rewi

The resulting model.tra, model.lab, model.rew and model.rewi files can be
immediately consumed by MRMC.

Some more information on generating MRMC models using Prism can be found here:

  http://www.cs.bham.ac.uk/~dxp/prism/manual/RunningPRISM/ExportingTheModel

-----------------------------------------------------------------
2. Command line options
-----------------------------------------------------------------

Starting MRMC without parameters will give the following output:

  Usage: mrmc <logic> <options> <.tra file> <.lab file> <.rew file> <.rewi file>
  <logic>      - could be one of {csl, pctl, prctl, csrl}.
  <options>    - could be one of {-ilump, -flump}, optional.
  <.tra file>  - is the file with the matrix of transitions.
  <.lab file>  - contains labeling.
  <.rew file>  - contains state rewards (for PRCTL/CSRL).
  <.rewi file> - contains impulse rewards (for CSRL, optional).

Note: In the '.tra' file transitions should be ordered by rows and columns!

The order of input files, options and other parameters is not strict. The
options -ilump and -flump enable formula-independent and formula-dependent
lumping, respectively. For more information on lumping, please consider
reading:

  http://www.cs.utwente.nl/~zapreevis/downloads/BibTex/KatoenKZJ_TACAS07.pdf
  http://www.cs.utwente.nl/~zapreevis/downloads/BibTex/KatoenKZJ_TACAS07.bib

The format of the .tra and .lab files is similar to the format used by ETMCC.

NOTE: State indexes in the input files should start from 1.
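For example, to check CSRL properties of the model exported in section 1, with
formula-independent lumping enabled, the invocation would look like:

  $ mrmc csrl -ilump model.tra model.lab model.rew model.rewi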
-----------------------------------------------------------------
2.1 An example of .tra file
-----------------------------------------------------------------

Contains the matrix with rates or probabilities:

  STATES 5
  TRANSITIONS 11
  1 2 0.0004
  1 5 0.0001
  2 1 0.05
  2 3 0.0004
  2 5 0.0001
  3 2 0.05
  3 4 0.0004
  3 5 0.0001
  4 3 0.05
  4 5 0.0001
  5 1 0.06

The header defines the number of states and transitions in the system. The
rest are transitions in the format:

  <from state> <to state> <rate/probability>

Note that transitions are (and should be) ordered by row and column index!

-----------------------------------------------------------------
2.2 An example of .lab file
-----------------------------------------------------------------

Contains the labeling of states with atomic propositions:

  #DECLARATION
  Thup Twup Onup Zeup Sup vdown failed
  #END
  1 Thup Sup
  2 Twup Sup
  3 Onup failed
  4 Zeup failed
  5 vdown failed

Here in the header section we define all atomic propositions (labels), such as
"Thup" and "Twup". Then atomic propositions are assigned to states in the
format:

  <state index> <atomic proposition> <atomic proposition> ...

Note that we allow quite complicated names for atomic propositions, such as:

  p1+p0<=p2+p3

To be more precise, a state label (any atomic proposition) should fit the
following regular expression: "{let}{alnum}*" where

  let    [_a-zA-Z]
  alnum  [_a-zA-Z0-9<>_^*+-=]

For more details see MRMC/src/io/parser/la_etmcc2.l

-----------------------------------------------------------------
2.3 An example of .rew file
-----------------------------------------------------------------

Defines state rewards:

  1 8
  2 9
  3 10
  4 11
  5 13

This is just a list of the form:

  <state index> <reward>

WARNING: Only natural reward values are allowed! Note that any rational
rewards can be turned into natural numbers, e.g. by scaling all rewards with a
common factor (rewards 0.5 and 1.25 scaled by 4 become 2 and 5).

-----------------------------------------------------------------
2.4 An example of .rewi file
-----------------------------------------------------------------

Defines impulse (transition) rewards:

  TRANSITIONS 11
  1 2 3
  1 5 3
  2 1 1
  2 3 3
  2 5 3
  3 2 1
  3 4 3
  3 5 3
  4 3 1
  4 5 3
  5 1 5

In the header we define the number of transitions, then assign rewards to
transitions in the format:

  <from state> <to state> <reward>

WARNING: Only natural reward values are allowed! Note that any rational
rewards can be turned into natural numbers (see the remark in section 2.3).

-----------------------------------------------------------------
3. Interface commands
-----------------------------------------------------------------

Once MRMC is up and running you may start interacting with it using commands.
The first one you should learn is:

  help

which produces output like the following:

  help  - display this help
  quit  - exit the program
  print - print run-time settings.
  set * - Where * is one of the following:
      print L                 - Turn on/off most of the resulting output,
                                see '$RESULT[I]' and '$STATE[I]' commands.
      ssd L                   - Turn on/off the steady-state detection for
                                time bounded until (CSL logic).
      error_bound R           - Error Bound for all methods.
      max_iter I              - Number of Max Iterations for all methods.
      overflow R              - Overflow for the Fox-Glynn algorithm.
      underflow R             - Underflow for the Fox-Glynn algorithm.
      method_path M           - Method for path formulas.
      method_steady M         - Method for steady-state formulas.
      method_until_rewards MU - Method for time- and reward-bounded until formulas.
      method_bscc MB          - Search method for BSCCs.
      w R                     - The probability threshold for the Qureshi-Sanders uniformization.
      d R                     - The discretization factor for state rewards.
  $RESULT[I] - access the computed results of U, X, L, S, E, C, Y operators
               by a state index.
  $STATE[I]  - access the state-formula satisfiability set by a state index.

Here:
  I  is a natural number.
  R  is a real value.
  M  is one of {gauss_jacobi, gauss_seidel}.
  MU is one of {uniformization_sericola, uniformization_qureshi_sanders, discretization_tijms_veldman}.
  MB is one of {recursive, non_recursive}.
  L  is one of {on, off}.

All commands are quite clear, but there are some important notes we should
make:

  1. "set error_bound R" - sets the error bound for the analysis; for example,
     it defines the overall error for the time-bounded reachability of the CSL
     logic (such as the property P{>0.1}[ Sup U[0,50] failed ]).
  2. "set method_until_rewards uniformization_sericola" - is currently
     unsupported.
  3. "set ssd on" - turns on precise on-the-fly steady-state detection for the
     time-bounded reachability of the CSL logic.
  4. "print" - allows you to check the run-time values of the error bound, the
     default methods set for the analysis, etc.
  5. "set print off" - mostly disables the model-checking results output. It
     is handy in case of huge Markov chains, where probability or
     formula-satisfiability results should be known only for some states. Use
     in combination with $RESULT[I] and $STATE[I].
  6. "$RESULT[I]" - allows you to access the resulting probability (reward
     rate) array in case the top-most formula is a P{ OP R }[ PFL ] formula or
     another formula that presumes the computation of probabilities (reward
     rates), such as L{ OP R }[ SFL ] or C [n][ R, R] [ SFL ].
  7. "$STATE[I]" - allows you to access the satisfiability set of the top-most
     state formula by a state index.
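As a small sketch of a session (assuming MRMC was started on the model from
sections 2.1 and 2.2 with the csl logic; the error bound, probability bound,
time interval and state index are chosen arbitrarily for illustration):

  set error_bound 1.0e-6
  set print off
  P{ >0.5 }[ tt U[0, 100.0] failed ]
  $RESULT[1]
  $STATE[1]
  quit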
-----------------------------------------------------------------
3.1 PCTL logic formulas
-----------------------------------------------------------------

Here is the E-BNF notation for the PCTL logic formulas accepted by MRMC:

  PCTL logic formulas:
  CONST = ff | tt
  R     = 'real number'
  N     = 'natural number'
  OP    = > | < | <= | >=
  SFL   = CONST | LABEL | ! SFL | SFL && SFL | SFL || SFL | ( SFL )
          | L{ OP R }[ SFL ] | P{ OP R }[ PFL ]
  PFL   = X SFL | SFL U SFL | SFL U[ N, N ] SFL

Here are some details:

  tt                - atomic proposition TRUE
  ff                - atomic proposition FALSE
  SFL               - is a state formula
  PFL               - is a path formula
  X SFL             - Next operator
  SFL U SFL         - Unbounded until operator
  SFL U[ N, N ] SFL - Time-bounded until operator
  L{ OP R }[ SFL ]  - Long-run operator, the probability bound for being in
                      SFL-states in the long run
  P{ OP R }[ PFL ]  - Checks the probability bound for any path formula

The probabilistic bounds are checked taking into account the error bound. For
example, if computations were done with the error bound E and we check for
probabilities to be "OP R", then we check the computed values to be
"OP (R + E)" for OP in {<, <=} and "OP (R - E)" for OP in {>, >=}.
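For example, with the labels from section 2.2 (for a DTMC version of the
model), the following formulas fit the grammar above; the probability bounds
and step bounds are arbitrary:

  P{ >0.99 }[ X Sup ]
  P{ <0.01 }[ Sup U failed ]
  P{ >=0.5 }[ Sup U[0, 10] failed ]
  L{ >0.9 }[ !failed ]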
-----------------------------------------------------------------
3.2 PRCTL logic formulas
-----------------------------------------------------------------

Here is the E-BNF notation for the PRCTL logic formulas accepted by MRMC:

  PRCTL logic formulas:
  CONST = ff | tt
  R     = 'real number'
  N     = 'natural number'
  OP    = > | < | <= | >=
  SFL   = CONST | LABEL | ! SFL | SFL && SFL | SFL || SFL | ( SFL )
          | E [ R, R] [ SFL ] | E [N][ R, R] [ SFL ] | C [N][ R, R] [ SFL ]
          | Y [N][ R, R] [ SFL ] | L{ OP R }[ SFL ] | P{ OP R }[ PFL ]
  PFL   = X SFL | SFL U SFL | SFL U[ N, N ][ R, R ] SFL | SFL U[ R, R ] SFL

Here are some details, in addition to those in section "3.1 PCTL logic
formulas":

  E [ R, R] [ SFL ]         - the long-run expected reward rate per time-unit
                              for SFL-states meets the bounds [ R, R]
  E [N][ R, R] [ SFL ]      - asserts that the expected reward rate in
                              SFL-states up to n transitions is within the
                              interval [ R, R]
  C [N][ R, R] [ SFL ]      - asserts that the instantaneous reward in
                              SFL-states at the n'th epoch meets the bounds of
                              [ R, R]
  Y [N][ R, R] [ SFL ]      - asserts that the expected accumulated reward in
                              SFL-states until the n'th transition meets the
                              bounds of [ R, R]
  SFL U[ N, N ][ R, R ] SFL - Bounded until operator with time and reward
                              bounds; the first [ N, N ] defines the time
                              bounds, the second [ R, R ] defines the reward
                              bounds
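For example, with the labels from section 2.2 and the state rewards from
section 2.3, the following formulas fit the grammar above; all bounds are
arbitrary:

  E [ 0, 10 ] [ Sup ]
  C [5][ 0, 20 ] [ !failed ]
  Y [5][ 0, 50 ] [ Sup ]
  P{ >0.5 }[ Sup U[0, 10][0, 40] failed ]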
For details on PRCTL see:

  1. S. Andova, H. Hermanns, and J.-P. Katoen. Discrete-time rewards
     model-checked. In Formal Modeling and Analysis of Timed Systems
     (FORMATS 2003), Marseille, France, 2003. Lecture Notes in Computer
     Science, Springer-Verlag.

-----------------------------------------------------------------
3.3 CSL logic formulas
-----------------------------------------------------------------

Here is the E-BNF notation for the CSL logic formulas accepted by MRMC:

  CSL logic formulas:
  CONST = ff | tt
  R     = 'real number'
  OP    = > | < | <= | >=
  SFL   = CONST | LABEL | ! SFL | SFL && SFL | SFL || SFL | ( SFL )
          | S{ OP R }[ SFL ] | P{ OP R }[ PFL ]
  PFL   = X SFL | X[ R, R ] SFL | SFL U SFL | SFL U[ R, R ] SFL

This is similar to the PCTL logic, but just remember that we work with CTMCs,
and thus:

  1. Instead of the Long-run operator L{ OP R }[ SFL ] we have a steady-state
     operator S{ OP R }[ SFL ].
  2. There is a time-bounded Next operator X[ R, R ] SFL.

For details on CSL see:

  1. C. Baier, B. Haverkort, H. Hermanns, and J.-P. Katoen. Model-checking
     algorithms for continuous-time Markov chains. IEEE Transactions on
     Software Engineering, 29(7), 2003.

-----------------------------------------------------------------
3.4 CSRL logic formulas
-----------------------------------------------------------------

Here is the E-BNF notation for the CSRL logic formulas accepted by MRMC:

  CSRL logic formulas:
  CONST = ff | tt
  R     = 'real number'
  OP    = > | < | <= | >=
  SFL   = CONST | LABEL | ! SFL | SFL && SFL | SFL || SFL | ( SFL )
          | S{ OP R }[ SFL ] | P{ OP R }[ PFL ]
  PFL   = X SFL | X [R, R] SFL | X [R, R][R, R] SFL | SFL U SFL
          | SFL U[ R, R ] SFL | SFL U[ R, R ] [ R, R ] SFL

Here are some details, in addition to those in section "3.3 CSL logic
formulas":

  X [R, R][R, R] SFL         - Next operator with time and reward bounds; the
                               first [R, R] defines the time bounds, the
                               second [R, R] defines the reward bounds
  SFL U[ R, R ] [ R, R ] SFL - Bounded until operator with time and reward
                               bounds; the first [R, R] defines the time
                               bounds, the second [R, R] defines the reward
                               bounds

Some example CSL and CSRL formulas are given at the end of this document.

For details on CSRL see:

  1. C. Baier, B. R. Haverkort, H. Hermanns, and J.-P. Katoen. On the logical
     characterization of performability properties. In ICALP '00: Proceedings
     of the 27th International Colloquium on Automata, Languages and
     Programming, pages 780-792, London, UK, 2000. Springer-Verlag.
  2. B. Haverkort, L. Cloth, H. Hermanns, J.-P. Katoen, and C. Baier.
     Model-checking performability properties. In International Conference on
     Dependable Systems and Networks (DSN), pages 103-112. IEEE CS Press,
     2002.
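Finally, as an illustration of the CSL and CSRL syntax, the following formulas
fit the grammars of sections 3.3 and 3.4, using the labels from section 2.2;
all probability, time and reward bounds are arbitrary:

  CSL:  S{ <0.1 }[ failed ]
  CSL:  P{ >0.9 }[ !failed U[0, 100.0] Sup ]
  CSRL: P{ >0.5 }[ X [0, 10.0][0, 5.0] failed ]
  CSRL: P{ >=0.5 }[ Sup U[0, 100.0] [0, 50.0] failed ]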