Mustard (Multiple-Use Scenario Test and Refusal Description)


See the download page for information about obtaining Mustard under licence


The current version of Mustard is 2.0, dated 28th May 2012.


Mustard is a language and a tool for defining and validating test scenarios. Although Mustard was designed for use with Cress, it can be used independently of Cress. See the Mustard home page for an overview of Mustard. See the Cress home page for an overview of Cress.

Mustard has been used to validate services in the following domains:

Mustard validates specifications/implementations using:


Mustard creates and runs tests expressed using a scenario notation. It relies on the Perl module borrowed from Cress.


To run Mustard requires Perl 5 or similar. Mustard has been run on Unix (NextStep 3.3/OpenStep 4.2) and Windows (XP and 7, under CygWin).

A Unix-like installation is assumed in the following, though it should be possible to install and run on other platforms where Perl runs. It is assumed that the files are extracted to $HOME/bin/mustard.

In a few places in the code, a Unix-like environment is assumed. For example, Macintosh end-of-line may not be correctly handled. Paths and filenames are assumed to have '/' separators. Search paths are assumed to have ':' separators. On a Windows system, it is suggested that CygWin be used; ActivePerl may be suitable but has not been tried.

The following environment variables should be set up (e.g. in your .profile, .cshrc or Windows XP environment variables):

Variable Meaning
M4PATH (used by M4) a colon-separated, Unix-like directory path used to locate M4 macro files, e.g. .:$HOME/bin/mustard/m4; if you are using Windows, you will not be able to cite drive letters unless you use CygWin references such as /C/home/me/bin/mustard.
PATH (used by command-line) a directory path used by a shell to locate executables, e.g. to include $HOME/bin/mustard/bin
PERLLIB (used by Perl) a directory path used by Perl to locate modules (cress_*.pm), e.g. $HOME/bin/cress/bin.
telelogic (used by Mustard) for SDL with Unix only, Telelogic Tau directory (e.g. /usr/local/tau, no default)
TMP (used by Mustard) a directory for temporary files, /tmp by default
WINDIR (used by Mustard) for SDL with Windows only, Windows system directory; used to determine if running on MS Windows (e.g. C:\Windows, no default).
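As an illustration, under a Unix-like shell the variables above might be set as follows (the directories shown are the illustrative ones from the table, not fixed defaults; adjust them to match where Mustard and Cress are actually installed):

```shell
# Illustrative settings for a Unix-like shell (e.g. in .profile);
# the directories below assume the suggested installation locations
export M4PATH=.:$HOME/bin/mustard/m4       # M4 macro files
export PATH=$PATH:$HOME/bin/mustard/bin    # Mustard executables
export PERLLIB=$HOME/bin/cress/bin         # Cress Perl modules (cress_*.pm)
export TMP=/tmp                            # temporary files (the default)
```

The SDL-specific variables (telelogic, WINDIR) are needed only on the platforms noted in the table.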


This script takes a filename on the command line. "file[.bpel|.lot|.pr]" should contain a Bpel/Lotos/SDL specification generated by Cress. The feature names in this are automatically extracted.

The main file can optionally be followed by feature or service names to restrict testing to these. A partner "feature" has the form <service>.<partner>.

For each such feature (e.g. CONFIRM), the location of its diagram is found (e.g. directory /home/kjt/bin/cress/vxml/confirm). If an XML file exists (e.g. confirm.xml), it is converted to Mustard form. The XML form of Mustard is defined by an XML schema. If neither an XML file nor a converted Mustard file (e.g. confirm.mustard) exists, tests of this feature are not performed.

If there is such a test file, it must contain tests defined in the Mustard scenario language. Tests are generated from this and combined with the original specification into a temporary file. The tests are then run automatically, with output/diagnosis of success, failure or inconclusive results.

Do not define the same tests in both XML and Mustard format, since the XML file will be converted to Mustard format and will overwrite the existing Mustard file.

Command-line options are:

Option Meaning
-a append to test log <system>.log (default create new test log)
-b memory bit state hash memory size in MB (default 5) - Lotos
-d depth maximum depth of exploration (default 100) - Lotos
-e level use the given error reporting level (3 - panics, 2 - these plus errors, 1 (default) - these plus notes, 0 - these plus diagnostics)
-h print help
-k key key for server authorisation (BPEL, user:password@host)
-l library Lotos library (default stir) - Lotos
-m manual (run tests manually, default automatic)
-p mode[runs] performance test: c concurrent or s sequential, optionally followed by the number of test runs (default 20) - Bpel
-q qualifier qualify visible macros to disambiguate them by using the given qualifier as a prefix (e.g. 'must_')
-v vocabulary use the named vocabulary (converted to lower case), default as follows: BPEL - ws; Lotos - the basic filename ignoring any '_' suffix, e.g. in.lot has vocabulary in, gs_matcher_scorer.lot has vocabulary gs; SDL - the basic filename ignoring any 'System' suffix, e.g. IvrSystem.pr has vocabulary ivr
-w when waiting for a specified signal and parameters, do not allow other instances of the signal with different parameters - SDL

For manual testing, test files are created as follows:

For automatic testing, a complete log file is created in <file>.log and test files are deleted as follows:

The script assumes certain formats for a Bpel specification (these rules are respected by Cress):

Bpel Meaning
features: SERVICE1/PARTNER1,PARTNER2 SERVICE2 SERVICE3/PARTNER3 services and associated partners

The script assumes certain formats for a Lotos specification (these rules are respected by Cress):

Lotos Meaning
features: FEATURE1 FEATURE2 names of included features
features: SERVICE1/PARTNER1,PARTNER2 SERVICE2 SERVICE3/PARTNER3 services and associated partners
specification SpecName [GateNames] format of specification header
PutStatus(Feature,Number,...,StatusResult(Value)) profile feature and details
process TestXXX format of test process header

The script assumes certain formats for an SDL specification (these rules are respected by Cress):

SDL Meaning
features: FEATURE1 FEATURE2 names of included features
EndSystem SpecName; format of system specification end
StatusXXX((. Feature,Number,... .)) := Value profile feature and details

Mustard Literals

Mustard literals are as follows:

Mustard Meaning
?Type any value
list(type,value,...) array
true, false booleans
£7, L13.25, $67.12, D25.00 currency ('£'/'$' converted to 'L'/'D' internally)
@2005 10 04, @10-04, @04 date ('-', '/', space removed)
=mark.dta/mark_check.dta, =merge.spss file to be checked (expect 'mark.dta'/'merge.spss', check contents against 'mark_check.dta'/'merge.spss')
0, +9, -3.14 number
operation:sort, operation:constructor(...) operation
:6091, :467 000 x 7423, :801-9134 phone ('-', '+', space removed)
'What arrival date?, '45 string (forbidden characters " $ ' ( ) , : ; ? ! [ ] _ ` | removed)
!x, !inputCount variable
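To illustrate how literals appear in practice, the following sketch uses the literal forms from the table above in scenario events (the signal names Dial, Deposit, Announce and Balance, and the type Number, are hypothetical examples, not part of any fixed vocabulary):

    send(Dial,1,:801-9134),              phone number literal
    send(Deposit,1,$67.12),              currency literal
    read(Announce,1,'Please hold),       string literal
    read(Balance,1,?Number)              any value of the given type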

Mustard Notation

The core Mustard notation is as follows:

Mustard Meaning
// text explanatory comment (removed during translation, use '%%' for literal '//')
« text » quoted text (not subject to translation)
call(feature,scenario) invokes behaviour of another scenario; the feature name is optional, defaulting to the current feature
decide(behaviour) non-deterministic (scenario-decided) choice of alternative behaviours
depend(condition,behaviour,...) behaviour depends on whether the condition holds; an optional final behaviour acts as an `otherwise' case
exit(behaviour) sequential behaviour with normal termination
interleave(behaviour) concurrent execution of behaviours
offer(behaviour) deterministic (system-decided) choice of alternative behaviours
present(feature) holds if the feature is present in the system
read(signal,parameters) inputs a signal from the system; the variant Read absorbs other signals before the desired one is input
refuse(behaviour) sequential behaviour with abrupt termination if the final behaviour occurs, or successful termination if not
send(signal,parameters) outputs a signal to the system; the variant Send absorbs other signals before the desired one is output
sequence(behaviour) sequential behaviour with abrupt termination
succeed(behaviour) sequential behaviour with successful termination
test(name,behaviour) defines a test for the given name and behaviour

Scenario Examples

A few examples are provided with Mustard:

sample tests in Mustard format for the SIP feature CFBL (Call Forward on Busy Line) for a SIP User Agent
sample tests in Mustard format for the WS service BROKER (car broker)
mixed.mustard, mixed.xml
sample tests in Mustard XML format for the IN feature POTS and for the IVR feature PIN. These are translated into Mustard format files.

A scenario has a name (which is automatically qualified by the current feature). As an example, the following is a test for a SIP proxy. It defines a simple sequence of events: address 1 goes off-hook, receives dial tone, and goes on-hook. If the specification respects this sequence, a pass verdict is recorded for the scenario.

  test(No_Dial,                          test of call without dialling
  succeed(                               successful sequence
    send(OffHook,1),                     1 goes off-hook
    read(Announce,1,DialTone),           1 gets dial tone
    send(OnHook,1)))                     1 goes on-hook

The following scenario introduces a choice. After address 1 dials address 4, the outcome depends on the status of 4. If this is a valid number, it will start ringing from 1; but if it is an invalid number, then 1 will receive an 'unobtainable' message. The offer combinator allows a deterministic (system-decided) choice. This is appropriate here, as only the system knows whether the callee can be rung.

  test(Ring_Or_Unobtainable,             test of ring or unobtainable
  succeed(                               successful sequence
    send(OffHook,1),                     1 goes off-hook
    read(Announce,1,DialTone),           1 gets dial tone
    send(Dial,1,4),                      1 dials 4
    offer(                               system choice
      read(StartRing,4,1),               4 rings from 1
      read(Announce,1,Unobtainable))))   1 gets unobtainable

It is sometimes desirable to use the decide combinator for a non-deterministic (scenario-decided) choice. This ensures that all alternatives are explored.
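As a sketch of decide (reusing the signal names from the earlier examples; the scenario name and the choice of addresses are illustrative), the following scenario itself chooses which neighbour to dial, so both alternatives are explored:

  test(Call_Either,                      test of calling either neighbour
  succeed(                               successful sequence
    send(OffHook,1),                     1 goes off-hook
    read(Announce,1,DialTone),           1 gets dial tone
    decide(                              scenario choice
      sequence(                          sequence
        send(Dial,1,2),                  1 dials 2
        read(StartRing,2,1)),            2 rings from 1
      sequence(                          or sequence
        send(Dial,1,3),                  1 dials 3
        read(StartRing,3,1)))))          3 rings from 1

Unlike offer, where the system selects the branch, decide forces each branch to be validated in turn.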

The depend combinator makes a scenario conditional; the dependency is evaluated when the scenario is defined, not when it is executed. Dependencies are most commonly used when the specification has features. For example, a scenario can be varied according to the features that have been provisioned. The following tests the feature CFBL (Call Forward on Busy Line), deployed in either a user agent or a proxy. Call forwarding is handled differently in each case: a called user agent will announce a temporary change of number to the caller, while a proxy will automatically re-route the call. The following scenario depends on which feature has been deployed.

  test(Busy_Forward,                     test of forward on busy
  succeed(                               successful sequence
    send(OffHook,1),                     1 goes off-hook
    read(Announce,1,DialTone),           1 gets dial tone
    send(OffHook,3),                     3 goes off-hook
    read(Announce,3,DialTone),           3 gets dial tone
    send(Dial,1,3),                      1 dials 3
    depend(                              feature dependency
      present(AGENT_CFBL),               agent forwarding present?
      ..,                                behaviour for agent forwarding
      present(PROXY_CFBL),               proxy forwarding present?
      ...)))                             behaviour for proxy forwarding

Scenarios can also be made finer-grained, depending on whether a feature applies to a particular pair of users.

Scenarios often begin in similar ways. It is undesirable if the same opening behaviour has to be repeated. Instead, opening sequences can be defined as scenarios that are called by other scenarios.
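For instance (a sketch with illustrative scenario and signal names), a common opening can be factored out and reused via call; the feature name is omitted here, so it defaults to the current feature:

  test(Get_Dial_Tone,                    common opening sequence
  succeed(                               successful sequence
    send(OffHook,1),                     1 goes off-hook
    read(Announce,1,DialTone)))          1 gets dial tone

  test(Call_Then_Hang_Up,                test reusing the opening
  succeed(                               successful sequence
    call(Get_Dial_Tone),                 behaviour of the opening scenario
    send(Dial,1,2),                      1 dials 2
    read(StartRing,2,1),                 2 rings from 1
    send(OnHook,1)))                     1 goes on-hook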

Many specification problems arise due to concurrency, for example race conditions. It is desirable to check if a specification suffers from these kinds of problems. This requires scenarios that independently execute multiple behaviours in parallel (i.e. through interleaving). In the following example, addresses 1 and 2 concurrently go off-hook and dial address 3. The outcome depends on the system, so there is a system-defined choice: 3 starts ringing from 1, and 2 receives busy; or 3 starts ringing from 2, and 1 receives busy. Note how sequence is used to group the interleaved and alternative behaviours.

  test(Raced_Calls,                      test of simultaneous calls
  succeed(                               successful sequence
    interleave(                          concurrent behaviour
      sequence(                          sequence
        send(OffHook,1),                 1 goes off-hook
        read(Announce,1,DialTone),       1 gets dial tone
        send(Dial,1,3)),                 1 dials 3
      sequence(                          plus sequence
        send(OffHook,2),                 2 goes off-hook
        read(Announce,2,DialTone),       2 gets dial tone
        send(Dial,2,3))),                2 dials 3
    offer(                               system choice
      sequence(                          sequence
        read(StartRing,3,1),             3 rings from 1
        read(Announce,2,BusyHere)),      2 gets busy
      sequence(                          or sequence
        read(StartRing,3,2),             3 rings from 2
        read(Announce,1,BusyHere)))))    1 gets busy

Refusal tests express what a system must not do. Since the purpose of call screening is to block calls, it might be preferable to state this explicitly in a refusal test. If it is known that address 2 screens calls from address 1, then calls from 1 must not ring 2. The behaviour to be refused is the last parameter of refuse, here the act of ringing 2 from 1.

  test(Screen_Caller_Refusal,            test of caller refusal
  refuse(                                refusal sequence
    Send(OffHook,1),                     1 eventually goes off-hook
    read(Announce,1,DialTone),           1 gets dial tone
    send(Dial,1,2),                      1 dials 2
    Read(StartRing,2,1)))                2 must not ring from 1 (refusal)


This is not open-source software. The author (Kenneth J. Turner, University of Stirling) retains copyright in it. Nonetheless, the author will normally approve its use by others subject to the following conditions:


Version 0.0 - 0.4: Ken Turner, 28th October 2004 - 30th August 2005

Version 1.0: Ken Turner, 1st October 2005

Version 1.1: Ken Turner, 22nd November 2005

Version 1.2: Ken Turner, 1st September 2006

Version 1.3: Ken Turner, 22nd August 2007

Version 1.4: Ken Turner, 12th April 2008

Version 1.5: Ken Turner, 17th July 2008

Version 1.6: Ken Turner and Larry Tan, 7th April 2009

Version 1.7: Ken Turner, 31st January 2010

Version 1.8: Ken Turner, 4th October 2010

Version 1.9: Ken Turner, 19th October 2010

Version 2.0: Ken Turner, 28th May 2012



Last Update: 18th July 2016