The test driver script offers a couple of options making the testing more rigorous or more relaxed.  Most of them are designed specifically for Jenkins, but developers might want to make occasional use of them as well (see the example invocation after this list):
  ? ''%%--validate%%''
  :: Every data file involved in tests as input or expected result is validated against the general datafile schema as well as against a type-specific schema; then a copy of the data is saved in a scratch file, loaded again, and the two objects are checked for identity.  This ensures that no test is executed on outdated or corrupted data.
  ? ''%%--shuffle%%''
  :: The test suites are executed in a random order.  The random seed value is displayed at the beginning of the test series execution.
  ? ''%%--random-failures=ignore%%''
  :: Report failures of certain unit tests (those marked as //random//) separately and do not consider them for the overall test success status.  Specifying ''hide'' instead of ''ignore'' will completely suppress the notifications about such test failures.
-  ? ''%%--no-new-glue-code%%'' +  ? ''%%--cpperl-root=PATH%%'' 
-  :: Prohibit automatic generation of C++/perl glue code (aka wrappers).  This option should only be deployed on the Jenkins server.  You as a developer are responsible for creation and submission into the git repository of all necessary wrappers generated on behalf of your new application code.+  :: Redirects new C++/perl glue code (aka wrappers) generated during test execution to the specified location.  If set to ''/dev/null'', new wrapper generation is forbidden.  By default, all wrapper updates are stored directly in the source code tree in your workspace.
  ? ''%%--allow-exec-time=SEC%%''
  :: Raise the execution time limit for monitored tests.  This option is useful for slowly running builds (debug, coverage, sanitizer).
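
For example, a more rigorous run combining several of these options might look like the following sketch.  The driver script name ''run_testcases'' is a placeholder for whatever test driver your checkout provides, and the time limit value is made up; the options themselves are the ones documented above:

  run_testcases --validate --shuffle --random-failures=ignore --allow-exec-time=120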
simulates the F1 context help function of the interactive shell.  Expected help topic headers should be listed in the order they would appear in the real session, that is, describing incomplete expressions from right to left.
  
==== Testing Data Upgrade Rules ====
  
Each time you introduce an incompatible data model change and write upgrade rules for it, you should add testcases for them in the testgroup ''upgrade'' of the affected application.  Preparing such a testcase is extremely simple: take a data file in an old (pre-conversion) format, store it in two copies, e.g. ''Name-OldVersion.poly'' and ''Name-OldVersion-in.poly'', and load ''Name-OldVersion.poly'' into polymake, which automatically applies the upgrade rules and stores it in the updated form.  Once you have verified the correctness of the data transformation, add the following line to ''test.pl'':
  compare_transformed_object('Name-OldVersion');
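
As a concrete sketch of this procedure, with a hypothetical data file ''cube-3.0.poly'' in the old format (the name is made up for illustration):

  cp cube-3.0.poly cube-3.0-in.poly        # the untouched copy in the old format serves as test input
  polymake '$p = load("cube-3.0.poly");'   # loading applies the upgrade rules; the upgraded
                                           # file becomes the expected result

and in ''test.pl'':

  compare_transformed_object('cube-3.0');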
  
      
  
  
===== Testing core library C++ components =====
Unit tests for core library components are kept in ''testscenarios/core_lib_tests''.  They are based on the [[https://github.com/google/googletest/blob/master/docs/index.md | googletest framework]], which should be installed on your computer separately.
  
Please be aware that these tests must not involve any components depending on perl, which for the time being includes BigObjects.
  
1. Put the unit tests in a C++ file in ''testscenarios/core_lib_tests/src/''.
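
A minimal sketch of such a file, assuming ''build_test.sh'' links the tests against ''gtest_main'' so that no ''main'' function is needed; the suite name, the test names, and the commented-out header are placeholders, not actual polymake components:

  #include <gtest/gtest.h>
  // #include "polymake/Integer.h"   // pull in the PTL headers of the component under test

  // every TEST belongs to a suite; the suite name can be referenced later via --gtest_filter
  TEST(CoreLibExample, IntegerArithmetic) {
     EXPECT_EQ(42, 6 * 7);            // verifies equality of expected and actual value
  }

  TEST(CoreLibExample, StringComparison) {
     EXPECT_STRNE("hello", "world");  // verifies that two C strings differ
  }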
  
2. Change into ''testscenarios/core_lib_tests'', then build and run the entire unit test suite once by issuing ''./build_test.sh''.
  
3. Add more unit tests to the same C++ file and run them exclusively:
  
  ninja -C work/build/Opt
  work/build/Opt/all_tests --gtest_filter=SUITABLE_PATTERN

You can still use the script ''./build_test.sh'' for repeated test runs; it will just take longer because it starts a full clean build every time.
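
''%%--gtest_filter%%'' accepts shell-style wildcard patterns: for instance, ''%%--gtest_filter=CoreLibExample.*%%'' would run all tests of the (hypothetical) suite sketched above; several patterns can be combined with '':'' as separator or excluded with a leading ''-''.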
  
4. If you want to debug failing tests, build them in Debug mode:

  ./build_test.sh --build-mode=Debug
  gdb -args work/build/Debug/all_tests --gtest_filter=SUITABLE_PATTERN

Again, to avoid repeated full clean builds after fixing the library code or the tests, you can use ''ninja -C work/build/Debug''.
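
In addition, googletest itself can assist the debugging session: passing the flag ''%%--gtest_break_on_failure%%'' to the test binary makes it trap into the debugger at the first failing assertion, landing you directly at the failure site:

  gdb -args work/build/Debug/all_tests --gtest_filter=SUITABLE_PATTERN --gtest_break_on_failure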
  
===== Investigating Failures =====