Automated Tests
The "test" sub-directory contains a collection of test scripts which can be run by a NST developer to help assure that a new build is behaving properly.
The design of the test framework is as follows:
- Tests are run from the development machine against a remote probe (ssh is used to transfer and run test scripts).
- Tests may be DESTRUCTIVE to the target machine! Run tests against an NST system BEFORE deployment or final installation.
- The automated tests are good baseline checks, but they are not exhaustive. They are useful for determining whether the current development environment is ready for human testing, but they do not replace human testing.
- Running one, many or all tests is a simple process.
- Creating tests is a simple process.
- Automated tests do not need to check for missing libraries, unknown owners or broken symbolic links, as separate make targets already exist for this purpose.
Running Tests
To run tests, you must have the following two systems at your disposal:
- A Fedora-based NST development system with the NST source code checked out and configured (the same system used to build the NST ISO image).
- A running NST system booted from either the NST ISO image OR a hard disk installation of NST. ***WARNING*** The tests are run against this system and will change its state/configuration - you will likely need to reboot, re-configure, and/or re-install after running tests against this NST system.
Hanging Tests
If a test hangs, you may need to ssh to the NST system being tested and kill the hanging processes by hand.
The nessus test has sometimes triggered this situation.
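For example, a hung nessus run could be cleaned up along the following lines. This is only a rough sketch: the IP address is a sample and "nessusd" is an illustrative process name, so adjust both to your situation.

  # Log into the NST system under test and kill the stuck process by hand.
  ssh root@192.168.22.13
  ps ax | grep nessus                 # find whatever is actually hung
  pkill -f nessusd                    # "nessusd" is only an example pattern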
Running The Full "probe-check"
To run the full test suite against a running NST system, use the probe-check target on the development system as shown below (change the IP address shown to the IP address of the NST system to be checked):
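A sketch of the invocation is shown below. The name of the make variable that carries the probe's IP address is an assumption here (check the top-level Makefile for the real name), and the address is just a sample:

  # Run from the top of the checked-out NST source tree on the development system.
  # "PROBE" is a hypothetical variable name for the target NST system's IP address.
  make probe-check PROBE=192.168.22.13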
The probe-check target performs all of the testing and takes a long time to complete. It:
- Runs "ldd" on files looking for missing modules.
- Checks if there are any obvious ownership issues.
- Checks for symbolic links which point nowhere.
- Runs each of the test scripts under the "test" directory.
NOTE: By passing the "-R" option to less, you should see color while reviewing the log file.
Running All Tests
To run all of the tests found under the: "test" sub-directory without running the ldd, ownership, and symbolic link tests, use the test target on the development system as shown below (change the IP address shown to the IP address of the NST system to be checked):
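A sketch of the invocation follows; as above, the make variable used for the probe's IP address is an assumption, and the address is a sample:

  # Skips the ldd, ownership and symbolic link checks; runs only the test scripts.
  # "PROBE" is a hypothetical variable name for the target NST system's IP address.
  make test PROBE=192.168.22.13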
The test target runs each of the test scripts under the "test" directory and also takes a long time to complete.
NOTE: By passing the "-R" option to less, you should see color while reviewing the log file.
Running A Single Test
To quickly run a specific test found under the: "test" sub-directory, include the TEST=NAME option when invoking the test target (change the IP address shown to the IP address of the NST system to be checked):
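A sketch of such an invocation is shown here; the TEST=NAME option comes from the text above, while the "PROBE" variable name and the address are assumptions:

  # Run only the "mbrowse" test against the NST system at 192.168.22.13.
  make test TEST=mbrowse PROBE=192.168.22.13

The output of such a run looks like this: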
Running test: "mbrowse" on: "192.168.22.13" Script: "/root/nst/tmp/test/mbrowse/runtest" Log: "/root/nst/tmp/test/mbrowse/runtest.log" Locating: "mbrowse" ........................................ [ OK ] Verify "mbrowse -help" returns expected version ............ [ OK ] Success! All 2 tests passed ................................ [ OK ] SUCCESS! All of the tests passed - congratulations.
Running Multiple Tests
To quickly run a specific set of tests found under the: "test" sub-directory, include the TEST="NAME0 NAME1 ..." option when invoking the test target (change the IP address shown to the IP address of the NST system to be checked):
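A sketch of such an invocation is shown here; as before, the "PROBE" variable name and the address are assumptions:

  # Quote the list so the space-separated test names stay in one TEST value.
  make test TEST="getipaddr mbrowse nfs" PROBE=192.168.22.13

The output of such a run looks like this: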
Running test: "getipaddr" on: "192.168.22.13" Script: "/root/nst/tmp/test/getipaddr/runtest" Log: "/root/nst/tmp/test/getipaddr/runtest.log" Locating: "getipaddr" ...................................... [ OK ] Locating: "wget" ........................................... [ OK ] Getting public IP address .................................. [ OK ] Checking: "http://nst.sourceforge.net/nst/cgi-bin/ip.cgi" .. [ OK ] Checking: "http://www.networksecuritytoolkit.org/nst/cgi-bin [ OK ] Checking: "http://whatismyip.org/" ......................... [ OK ] Success! All 6 tests passed ................................ [ OK ] Running test: "mbrowse" on: "192.168.22.13" Script: "/root/nst/tmp/test/mbrowse/runtest" Log: "/root/nst/tmp/test/mbrowse/runtest.log" Locating: "mbrowse" ........................................ [ OK ] Verify "mbrowse -help" returns expected version ............ [ OK ] Success! All 2 tests passed ................................ [ OK ] Running test: "nfs" on: "192.168.22.13" Script: "/root/nst/tmp/test/nfs/runtest" Log: "/root/nst/tmp/test/nfs/runtest.log" Locating: "mount" .......................................... [ OK ] Locating: "service" ........................................ [ OK ] Locating: "showmount" ...................................... [ OK ] Locating: "sleep" .......................................... [ OK ] Locating: "sync" ........................................... [ OK ] Locating: "umount" ......................................... [ OK ] Saving: "/etc/exports" ..................................... [ OK ] Installing custom: "/etc/exports" .......................... [ OK ] Starting service: rpcbind .................................. [ OK ] Starting service: nfslock .................................. [ OK ] Starting service: rpcidmapd ................................ [ OK ] Starting service: rstatd ................................... [ OK ] Starting service: nfs ...................................... [ OK ] Testing /usr/local/sbin/showmount output ................... [ OK ] Mounting exported filesystem to: "/root/runtest/nfs/nfs" ... [ OK ] Verifying that we can see file on NFS mount ................ [ OK ] Verify that we see removal of file on NFS mount ............ [ OK ] Umount of: "/root/runtest/nfs/nfs" ......................... [ OK ] Restoring: "/etc/exports" .................................. [ OK ] Stopping serivce: nfs ...................................... [ OK ] Stopping serivce: rstatd ................................... [ OK ] Stopping serivce: rpcidmapd ................................ [ OK ] Stopping serivce: nfslock .................................. [ OK ] Stopping serivce: rpcbind .................................. [ OK ] Success! All 24 tests passed ............................... [ OK ] SUCCESS! All of the tests passed - congratulations.
When Tests Fail
Should a test fail, it is often desirable to dig a bit deeper into the log files produced by running the test to see what happened.
Log Files Left On The Development System
When each test is run, a temporary directory having the form: "tmp/test/TEST_NAME" is created on the development system. NOTE: The "tmp" directory may be in a different location if you did not use the default configuration options when configuring your development system.
For example, after running the getipaddr test, one should find the: "tmp/test/getipaddr" directory on the development system. At a minimum, one should find the following files under this directory:
- runtest.log - Verbose output from the running of the test.
- runtest - The fully formed test script (supporting bash functions joined with test.bash).
- test.bash - The source test script.
These files should ALWAYS be present after running a test - regardless of whether the test passes or fails.
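For example, to review the verbose log of the getipaddr test on the development system (the path assumes the default "tmp" location):

  # The -R option preserves the color escape sequences in the log.
  less -R tmp/test/getipaddr/runtest.log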
Log Files Left On The NST System
When each test is run, a temporary directory is created on the NST system having the form: "/root/runtest/TEST_NAME".
For example, after running the getipaddr test, one should find the: "/root/runtest/getipaddr" directory on the NST system. At a minimum, one should find the following files under this directory:
- runtest.log - Verbose output from the running of the test.
- runtest - The fully formed test script (supporting bash functions joined with test.bash).
- test.bash - The source test script.
In addition to the above files, one may also find temporary files generated by running the test under this directory. For example, the clamav test makes requests to the NST WUI and saves the HTML responses to temporary files.
NOTE: These files will only remain on the NST system if the test fails.
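For example, after a failed getipaddr run, one could inspect the copy left on the NST system (the IP address is a sample):

  # Log into the NST system under test and page through the verbose log.
  ssh root@192.168.22.13
  less -R /root/runtest/getipaddr/runtest.log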
Creating Tests
Creating new automated tests is fairly easy once one gets started. The most difficult part is learning the core test functions found in: test/include/test_functions.bash.
Files Making Up The Test
To create a new test:
- Create a directory under the: "test" directory with a name related to the test to be performed. For example, "test/snort" is the directory containing the test for snort.
- Create a bash script under the new directory with the name: test.bash. For example, "test/snort/test.bash" is the bash script used to test snort.
- Optionally, place any other files your test may require under the same directory. The "test/Image_Graph" test is an example of a script having additional files used during the test.
NOTE: You may use any language available on the NST system when creating tests (bash, PHP, Python, ...). HOWEVER, you MUST create the file: test.bash as the initial launch point for your test.
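For instance, adding a test for a hypothetical utility called "foo" could look like this; the "foo" name is made up, and the cvs commands are only an assumption based on the SourceForge CVS layout referenced above:

  # Create the directory and the mandatory test.bash launch point.
  mkdir test/foo
  vi test/foo/test.bash                  # the script MUST be named test.bash
  # Add the new files to source control (assumed CVS workflow).
  cvs add test/foo test/foo/test.bash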
Functions Which Can Be Used
Simple Example
The following shows the entire script making up the original mbrowse test (test/mbrowse/test.bash).
  # ${Id}
  #
  # Make sure mbrowse is installed
  test_require GREP grep;
  test_require MBROWSE mbrowse;

  EXPECT_TEXT="^mbrowse ${mbrowse_VER}";

  test_start "Verify \"mbrowse -help\" returns expected version";
  ${MBROWSE} -help 2>&1 | ${GREP} "${EXPECT_TEXT}" &>/dev/null;
  test_passed_or_exit "${PIPESTATUS[1]}" \
    "Failed to find \"${EXPECT_TEXT}\" in \"mbrowse -help\" output";

  # End of all tests
  test_exit;
The above is an example of a very simple test; it performs the following checks:
- It runs: "test_require MBROWSE mbrowse" to verify that the mbrowse executable can be found on the NST system (if successful, it sets the variable MBROWSE to the location of the executable).
- It then checks to see if the short help output from the mbrowse executable contains the expected version number.
- Finally, the "test_exit" function is called to produce a summary report.
Complex Example
The clamav test (see: "test/clamav/test.bash") is an example of a more complex bash test script.
- It uses the: "test_require" function to make sure the necessary executables are present on the test system.
- It uses the: "test_require_wui" function to make sure the NST WUI is up and running.
- It defines a couple of helper bash functions.
- It uses the: "test_start" and "test_results" functions to help report on the success/failure of running tests.
- It redirects output to the: "test_log" function (this will appear in the: "runtest.log" output file).
- It uses the: "${PIPESTATUS[0]}" array to pick out the appropriate exit code when multiple commands are piped together.
- It uses the: "test_wui_get" function to "drive" the NST WUI. By using this function, the test script is able to: Perform a virus scan, check scan results, download an infected file, peform another scan and verify the virus is detected.