shUnit2 is an xUnit unit test framework for Bourne-based shell scripts, designed to work in a similar manner to JUnit, PyUnit, etc. If you have ever had the desire to write a unit test for a shell script, shUnit2 can do the job.
shUnit2 was originally developed to provide a consistent testing solution for log4sh, a shell-based logging framework similar to log4j. During the development of that product, the same problem kept coming up: things worked just fine under one shell (/bin/bash on Linux, to be specific) and then failed under another shell (/bin/sh on Solaris). Although several simple tests were run, they were not adequate and did not catch some corner cases. After multiple brown-bag releases, the decision was finally made to write a proper unit test framework. Research was done to look for an existing product that met the testing requirements, but no adequate product was found.
Tested Operating Systems (varies over time)
OS | Support | Verified |
---|---|---|
Ubuntu Linux (14.04.05 LTS) | Travis CI | continuous |
macOS High Sierra (10.13.3) | Travis CI | continuous |
FreeBSD | user | unknown |
Solaris 8, 9, 10 (inc. OpenSolaris) | user | unknown |
Cygwin | user | unknown |
Tested Shells
See the appropriate Release Notes for this release (doc/RELEASE_NOTES-X.X.X.txt) for the list of actual versions tested.
A list of contributors to shUnit2 can be found in doc/contributors.md. Many thanks go out to all those who have contributed to make this a better tool.
shUnit2 is the original product of many hours of work by Kate Ward, the primary author of the code. For related software, check out https://github.com/kward.
Feedback is most certainly welcome for this document. Send your questions, comments, and criticisms via the shunit2-users forum (created 2018-12-09), or file an issue via https://github.com/kward/shunit2/issues.
This section will give a very quick start to running unit tests with shUnit2. More information is located in later sections.
Here is a quick sample script to show how easy it is to write a unit test in shell. Note: the script as it stands expects that you are running it from the "examples" directory.
#! /bin/sh
# file: examples/equality_test.sh

testEquality() {
  assertEquals 1 1
}

# Load shUnit2.
. ../shunit2
Running the unit test should give results similar to the following.
$ cd examples
$ ./equality_test.sh
testEquality
Ran 1 test.
OK
W00t! You've just run your first successful unit test. So, what just happened?
Quite a bit really, and it all happened simply by sourcing the shunit2
library. The basic functionality for the script above goes like this:
1. When shunit2 is sourced, it looks through the parent script for all functions whose names begin with the word test, and adds those to an internal list of tests to execute. Once the list of test functions to be run has been determined, shunit2 will go to work.
2. Before any tests are run, shUnit2 again looks for a function, this time one named oneTimeSetUp(). If it exists, it will be run. This function is normally used to set up the environment for all tests to be run. Things like creating directories for output or setting environment variables are good to place here. You can also declare a corresponding function named oneTimeTearDown() that does the same thing, but once all the tests have been completed. It is good for removing temporary directories, etc.
3. shUnit2 is now ready to run tests. Before doing so, though, it looks for yet another function that might be declared, one named setUp(). If the function exists, it will be run before each test. It is good for resetting the environment so that each test starts with a clean slate. At this stage, the first test is finally run. The success of the test is recorded for a report that will be generated later. After the test is run, shUnit2 looks for a final function that might be declared, one named tearDown(). If it exists, it will be run after each test. It is a good place for cleaning up after each test, maybe doing things like removing files that were created, or removing directories. This set of steps, setUp() > test() > tearDown(), is repeated for all of the available tests.

We should now try adding a test that fails. Change your unit test to look like this.
#! /bin/sh
# file: examples/party_test.sh

testEquality() {
  assertEquals 1 1
}

testPartyLikeItIs1999() {
  year=`date '+%Y'`
  assertEquals "It's not 1999 :-(" '1999' "${year}"
}

# Load shUnit2.
. ../shunit2
So, what did you get? I guess it told you that this isn't 1999. Bummer, eh? Hopefully, you noticed a couple of things that were different about the second test. First, we added an optional message that the user will see if the assert fails. Second, we did comparisons of strings instead of integers as in the first test. It doesn't matter whether you are testing for equality of strings or integers. Both work equally well with shUnit2.
Hopefully, this is enough to get you started with unit testing. If you want a ton more examples, take a look at the tests provided with log4sh or shFlags. Both provide excellent examples of more advanced usage. shUnit2 was after all written to meet the unit testing need that log4sh had.
If you are using a distribution-packaged shUnit2 that is installed as /usr/bin/shunit2 (as on Debian, for example), you can load shUnit2 without specifying its path. The last two lines in the script above can then be replaced by:
# Load shUnit2.
. shunit2
Any string values passed should be properly quoted -- they should be surrounded by single-quote (') or double-quote (") characters -- so that the shell will properly parse them.
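For instance, quoting matters when a value contains spaces. A small hypothetical snippet (the variable name is made up for illustration):
value='hello world'
assertEquals 'values with spaces must be quoted' 'hello world' "${value}"
If ${value} were left unquoted, the shell would split it into two separate arguments and the assert would not behave as intended.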
assertEquals [message] expected actual
Asserts that expected and actual are equal to one another. The expected and actual values can be either strings or integer values as both will be treated as strings. The message is optional, and must be quoted.
assertNotEquals [message] unexpected actual
Asserts that unexpected and actual are not equal to one another. The unexpected and actual values can be either strings or integer values as both will be treated as strings. The message is optional, and must be quoted.
assertSame [message] expected actual
This function is functionally equivalent to assertEquals.
assertNotSame [message] unexpected actual
This function is functionally equivalent to assertNotEquals.
assertContains [message] container content
Asserts that container contains content. The container and content values can be either strings or integer values as both will be treated as strings. The message is optional, and must be quoted.
assertNotContains [message] container content
Asserts that container does not contain content. The container and content values can be either strings or integer values as both will be treated as strings. The message is optional, and must be quoted.
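As a quick illustration (not one of the shipped examples), both of the following asserts pass:
assertContains 'substring should be present' 'abcdef' 'bcd'
assertNotContains 'substring should be absent' 'abcdef' 'xyz'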
assertNull [message] value
Asserts that value is null, or in shell terms, a zero-length string. The value must be a string as an integer value does not translate into a zero-length string. The message is optional, and must be quoted.
assertNotNull [message] value
Asserts that value is not null, or in shell terms, a non-empty string. The value may be a string or an integer as the latter will be parsed as a non-empty string value. The message is optional, and must be quoted.
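For example (a hypothetical snippet; the variable names are made up), both of the following asserts pass:
emptyVar=''
filledVar='something'
assertNull 'expected an empty value' "${emptyVar}"
assertNotNull 'expected a non-empty value' "${filledVar}"
Note that quoting the value is important here; an unquoted empty variable would expand to nothing and leave the assert without a value argument.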
assertTrue [message] condition
Asserts that a given shell test condition is true. The condition can be as simple as a shell true value (the value 0 -- equivalent to ${SHUNIT_TRUE}), or a more sophisticated shell conditional expression. The message is optional, and must be quoted.
A sophisticated shell conditional expression is equivalent to what the if or while shell built-ins would use (more specifically, what the test command would use). Testing for example whether some value is greater than another value can be done this way.
assertTrue "[ 34 -gt 23 ]"
Testing for the ability to read a file can also be done. This particular test will fail.
assertTrue 'test failed' "[ -r /some/non-existant/file ]"
As the expressions are standard shell test expressions, it is possible to string multiple expressions together with -a and -o in the standard fashion. This test will succeed as the entire expression evaluates to true.
assertTrue 'test failed' '[ 1 -eq 1 -a 2 -eq 2 ]'
One word of warning: be very careful with your quoting as shell is not the most forgiving of bad quoting, and things will fail in strange ways.
assertFalse [message] condition
Asserts that a given shell test condition is false. The condition can be as simple as a shell false value (the value 1 -- equivalent to ${SHUNIT_FALSE}), or a more sophisticated shell conditional expression. The message is optional, and must be quoted.
For examples of more sophisticated expressions, see assertTrue.
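For instance, a couple of illustrative examples mirroring the assertTrue ones above; both of these asserts pass:
assertFalse "[ 23 -gt 34 ]"
assertFalse 'file should not be readable' "[ -r /some/non-existant/file ]"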
Just to clarify, failures do not test the various arguments against one another. Failures simply fail, optionally with a message, and that is all they do. If you need to test arguments against one another, use asserts.
If all failures do is fail, why might one use them? There are times when you may have some very complicated logic that you need to test, and the simple asserts provided are simply not adequate. You can do your own validation of the code, call assertTrue ${SHUNIT_TRUE} if your own checks succeeded, and use a failure function to record a failure.
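As a minimal sketch of that pattern (the command being validated is made up for illustration):
testComplicatedThing() {
  # Hypothetical command whose behavior needs custom validation.
  output=`my_complicated_command 2>&1`
  rtrn=$?
  if [ ${rtrn} -eq ${SHUNIT_TRUE} ]; then
    assertTrue ${SHUNIT_TRUE}
  else
    fail "my_complicated_command exited with ${rtrn}: ${output}"
  fi
}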
fail [message]
Fails the test immediately. The message is optional, and must be quoted.
failNotEquals [message] unexpected actual
Fails the test immediately, reporting that the unexpected and actual values are not equal to one another. The message is optional, and must be quoted.
Note: no actual comparison of unexpected and actual is done.
failSame [message] expected actual
Fails the test immediately, reporting that the expected and actual values are the same. The message is optional, and must be quoted.
Note: no actual comparison of expected and actual is done.
failNotSame [message] expected actual
Fails the test immediately, reporting that the expected and actual values are not the same. The message is optional, and must be quoted.
Note: no actual comparison of expected and actual is done.
failFound [message] content
Fails the test immediately, reporting that the content was found. The message is optional, and must be quoted.
Note: no actual search of content is done.
failNotFound [message] content
Fails the test immediately, reporting that the content was not found. The message is optional, and must be quoted.
Note: no actual search of content is done.
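These reporting functions can also be handy when building your own helper asserts. Here is a hypothetical helper (not part of shUnit2) that does its own comparison and only uses failNotEquals to report the mismatch:
# Hypothetical helper: assert that a value starts with a given prefix.
assertStartsWith() {
  prefix=$1
  value=$2
  case "${value}" in
    "${prefix}"*) ;;  # Prefix matches; nothing to report.
    *) failNotEquals 'wrong prefix' "${prefix}" "${value}" ;;
  esac
}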
oneTimeSetUp
This function can be optionally overridden by the user in their test suite.
If this function exists, it will be called once before any tests are run. It is useful to prepare a common environment for all tests.
oneTimeTearDown
This function can be optionally overridden by the user in their test suite.
If this function exists, it will be called once after all tests are completed. It is useful to clean up the environment after all tests.
setUp
This function can be optionally overridden by the user in their test suite.
If this function exists, it will be called before each test is run. It is useful to reset the environment before each test.
tearDown
This function can be optionally overridden by the user in their test suite.
If this function exists, it will be called after each test completes. It is useful to clean up the environment after each test.
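The following sketch (not one of the shipped examples; file and variable names are illustrative) shows how these lifecycle hooks typically fit together, using the SHUNIT_TMPDIR constant described later:
#! /bin/sh

oneTimeSetUp() {
  # Runs once, before any test: create a shared output directory.
  outputDir="${SHUNIT_TMPDIR}/output"
  mkdir "${outputDir}"
}

setUp() {
  # Runs before each test: start with an empty work file.
  workFile="${outputDir}/work.txt"
  : >"${workFile}"
}

tearDown() {
  # Runs after each test: clean up the work file.
  rm -f "${workFile}"
}

testWorkFileStartsEmpty() {
  assertFalse 'work file should start empty' "[ -s '${workFile}' ]"
}

# Load shUnit2 (adjust the path as appropriate).
. ./shunit2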
startSkipping
This function forces the remaining assert and fail functions to be "skipped", i.e. they will have no effect. Each function skipped will be recorded so that the total of asserts and fails will not be altered.
endSkipping
This function returns calls to the assert and fail functions to their default behavior, i.e. they will be called.
isSkipping
This function returns the current state of skipping. It can be compared against ${SHUNIT_TRUE} or ${SHUNIT_FALSE} if desired.
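For example, a minimal sketch (this assumes isSkipping() writes the current state to standard output; verify the exact behavior against your shUnit2 version):
if [ "`isSkipping`" = "${SHUNIT_TRUE}" ]; then
  echo 'asserts are currently being skipped'
fi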
The default behavior of shUnit2 is that all tests will be found dynamically. If
you have a specific set of tests you want to run, or you don't want to use the
standard naming scheme of prefixing your tests with test, these functions are for you. Most users will never use them, though.
suite
This function can be optionally overridden by the user in their test suite.
If this function exists, it will be called when shunit2 is sourced. If it does not exist, shUnit2 will search the parent script for all functions beginning with the word test, and they will be added dynamically to the test suite.
suite_addTest name
This function adds a function named name to the list of tests scheduled for execution as part of this test suite. This function should only be called from within the suite() function.
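A minimal sketch of an explicit suite (function and file names are made up for illustration):
#! /bin/sh

checkFoo() {
  assertEquals 'foo' 'foo'
}

checkBar() {
  assertNotNull "${HOME}"
}

suite() {
  # Only the functions added here are run; the automatic test* discovery is bypassed.
  suite_addTest checkFoo
  suite_addTest checkBar
}

# Load shUnit2 (adjust the path as appropriate).
. ./shunit2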
There are several constants provided by shUnit2 as variables that might be of use to you.
Predefined
Constant | Value |
---|---|
SHUNIT_TRUE | Standard shell true value (the integer value 0). |
SHUNIT_FALSE | Standard shell false value (the integer value 1). |
SHUNIT_ERROR | The integer value 2. |
SHUNIT_TMPDIR | Path to temporary directory that will be automatically cleaned up upon exit of shUnit2. |
SHUNIT_VERSION | The version of shUnit2 you are running. |
User defined
Constant | Value |
---|---|
SHUNIT_CMD_EXPR | Override which expr command is used. By default expr is used, except on BSD systems where gexpr is used. |
SHUNIT_COLOR | Enable colorized output. Options are 'auto', 'always', or 'none', with 'auto' being the default. |
SHUNIT_PARENT | The filename of the shell script containing the tests. This is needed specifically for Zsh support. |
SHUNIT_TEST_PREFIX | Define this variable to add a prefix in front of each test name that is output in the test report. |
The constant values SHUNIT_TRUE, SHUNIT_FALSE, and SHUNIT_ERROR are returned from nearly every function to indicate the success or failure of the function. Additionally, the variable flags_error is filled with a detailed error message if any function returns with a SHUNIT_ERROR value.
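To illustrate, the user-defined constants are normally set either in the environment when invoking the test script or in the script itself before sourcing shunit2. A couple of hedged examples (script names are illustrative):
# Disable colorized output for a single run.
SHUNIT_COLOR='none' ./equality_test.sh

# Inside a test script, add a prefix to each reported test name.
SHUNIT_TEST_PREFIX='math: '
. ./shunit2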
If you include lots of assert statements in an individual test function, it can become difficult to determine exactly which assert was thrown unless your messages are unique. To help somewhat, line numbers can be included in the assert messages. To enable this, a special shell "macro" must be used rather than the standard assert calls. Shell doesn't actually have macros; the name is used here as the operation is similar to a standard macro.
For example, to include line numbers for an assertEquals() function call, replace the assertEquals() with ${_ASSERT_EQUALS_}.
Example -- Asserts with and without line numbers
#! /bin/sh
# file: examples/lineno_test.sh

testLineNo() {
  # This assert will have line numbers included (e.g. "ASSERT:[123] ...").
  echo "ae: ${_ASSERT_EQUALS_}"
  ${_ASSERT_EQUALS_} 'not equal' 1 2

  # This assert will not have line numbers included (e.g. "ASSERT: ...").
  assertEquals 'not equal' 1 2
}

# Load shUnit2.
. ../shunit2
Notes:
Due to how shell parses command-line arguments, all strings used with
macros should be quoted twice. Namely, single-quotes must be converted to single-double-quotes, and vice-versa.
Normal assertEquals call:
assertEquals 'some message' 'x' ''
Macro _ASSERT_EQUALS_ call. Note the extra quoting around the message and the null value:
_ASSERT_EQUALS_ '"some message"' 'x' '""'
There are times where the test code you have written is just not applicable to the system you are running on. This section describes how to skip these tests but maintain the total test count.
Probably the easiest example would be shell code that is meant to run under the bash shell, but the unit test is running under the Bourne shell. There are things that just won't work. The following test code demonstrates two sample functions, one that will run under any shell, and another that will run only under the bash shell.
Example -- math include
# file: examples/math.inc.

add_generic() {
  num_a=$1
  num_b=$2

  expr $1 + $2
}

add_bash() {
  num_a=$1
  num_b=$2

  echo $(($1 + $2))
}
And here is a corresponding unit test that correctly skips the add_bash() function when the unit test is not running under the bash shell.
Example -- math unit test
#! /bin/sh
# file: examples/math_test.sh

testAdding() {
  result=`add_generic 1 2`
  assertEquals \
      "the result of '${result}' was wrong" \
      3 "${result}"

  # Disable non-generic tests.
  [ -z "${BASH_VERSION:-}" ] && startSkipping

  result=`add_bash 1 2`
  assertEquals \
      "the result of '${result}' was wrong" \
      3 "${result}"
}

oneTimeSetUp() {
  # Load include to test.
  . ./math.inc
}

# Load and run shUnit2.
. ../shunit2
Running the above test under the bash shell will result in the following output.
$ /bin/bash math_test.sh
testAdding
Ran 1 test.
OK
But, running the test under any other Unix shell will result in the following output.
$ /bin/ksh math_test.sh
testAdding
Ran 1 test.
OK (skipped=1)
As you can see, the total number of tests has not changed, but the report indicates that some tests were skipped.
Skipping can be controlled with the following functions: startSkipping(), endSkipping(), and isSkipping(). Once skipping is enabled, it will remain enabled until the end of the current test function call, after which skipping is disabled.
When running a test script, you may override the default set of tests, or the suite-specified set of tests, by providing additional arguments on the command line. Each additional argument after the -- marker is assumed to be the name of a test function to be run in the order specified. e.g.
test-script.sh -- testOne testTwo otherFunction
or
shunit2 test-script.sh testOne testTwo otherFunction
In either case, three functions will be run as tests: testOne, testTwo, and otherFunction. Note that the function otherFunction would not normally be run by shunit2 as part of the implicit collection of tests, as its function name does not match the test function name pattern test*.
If a specified test function does not exist, shunit2 will still attempt to run that function and thereby cause a failure, which shunit2 will catch and mark as a failed test. All other tests will run normally.
The specification of tests does not affect how shunit2 looks for and executes the setup and tear down functions, which will still run as expected.
Most continuous integration tools, such as CircleCI, are capable of interpreting test results in JUnit format, giving you specialized report sections and triggers tailored to help you identify a failing test faster. This functionality is still unreleased, but you can try it right away by installing shUnit2 from source.
Given that you execute your test script in the following way:
test-script.sh
you can generate the JUnit report like this:
mkdir -p results
test-script.sh -- --output-junit-xml=results/test-script.xml
It will generate something like
<?xml version="1.0" encoding="UTF-8"?>
<testsuite
    failures="0"
    name="test-script.sh"
    tests="1"
    assertions="1"
>
  <testcase
      classname="test-script.sh"
      name="testOne"
      assertions="1"
  >
  </testcase>
</testsuite>
You can also specify a more verbose suite name
test-script.sh -- --output-junit-xml=results/test-script.xml --suite-name=Test_Script
Then tell your CI tool where the results are. In the case of CircleCI, the configuration looks like the following:
- store_test_results:
path: results
For help, please send requests to either the shunit2-users@forestent.com mailing list (archives available on the web at https://groups.google.com/a/forestent.com/forum/#!forum/shunit2-users) or directly to Kate Ward.
For compatibility with Zsh, there is one requirement that must be met -- the shwordsplit option must be set. There are three ways to accomplish this.
1. In the unit-test script, add the following shell code snippet before sourcing the shunit2 library.
setopt shwordsplit
2. When invoking zsh from either the command-line or as a script with #!, add the -y parameter.
#! /bin/zsh -y
3. When invoking zsh from the command-line, add -o shwordsplit -- as parameters before the script name.
$ zsh -o shwordsplit -- some_script