How to run tempest tests, "testr" vs "nosetests"?

asked 2014-07-25 10:01:56 -0500

danno

What is the difference between using "testr run --parallel" and "nosetests -v tempest" to run the tests?

The total number of tests run differs between the two methods.


"testr run --parallel"

Ran 1973 (+1972) tests in 3679.819s (+3679.582s)

FAILED (id=2, failures=2 (+2), skips=233)

"nosetests -v tempest"

Ran 2160 tests in 5648.261s

FAILED (SKIP=233, errors=125, failures=4)

Also, in the testr results, what do (+1972) and (+2) mean?


2 answers


answered 2016-01-06 07:30:58 -0500

Jagan Prakash

Using testr run, you can't stop the run part-way; only after the whole run finishes does it report something like "Failed tests = 100".

Using nosetests, we can stop the run as soon as a test fails. That can be done by executing a command like the one below:

nosetests -vx tempest.api.object_storage.test_container_services:ContainerTest.test_create_container

Note the flags (-v, -x, -l):

v ---> Verbose output.

x ---> Stop running tests after the first error or failure.

l ---> Run tests in debug mode, so we can see better error messages.

For this purpose, we go for nosetests.


answered 2016-11-23 14:21:40 -0500

Both are just different applications for running tests; each has its own rules for discovering tests and its own way of executing them.

The difference you are seeing in the number of tests executed by testr vs nosetests comes down to the way each one discovers, or identifies, what counts as a test. To make this easier to explain, let's use an example.

Let's say we want to run all tests in this package: tempest/api/compute/flavors

First using nosetests:

ad_cjmarti2@cas-devstack:/opt/stack/tempest$ nosetests -v tempest.api.compute.flavors
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.create_test_server ... ok
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.create_test_server_group ... ok
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_get_flavor[id-1f12046b-753d-40d2-abb6-d8eb8b30cb2f,smoke] ... ok
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors[id-e36c0eaa-dff5-4082-ad1f-3f9a80aa3f59,smoke] ... ok
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_detailed_filter_by_min_disk[id-3df2743e-3034-4e57-a4cb-b6527f6eac79] ... ok
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_detailed_filter_by_min_ram[id-09fe7509-b4ee-4b34-bf8b-39532dc47292] ... ok
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_detailed_limit_results[id-b26f6327-2886-467a-82be-cef7a27709cb] ... ok
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_detailed_using_marker[id-6db2f0c0-ddee-4162-9c84-0703d3dd1107] ... ok
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_filter_by_min_disk[id-10645a4d-96f5-443f-831b-730711e11dd4] ... ok
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_filter_by_min_ram[id-935cf550-e7c8-4da6-8002-00f92d5edfaa] ... ok
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_limit_results[id-8d7691b3-6ed4-411a-abc9-2839a765adab] ... ok
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_using_marker[id-e800f879-9828-4bd0-8eae-4f17189951fb] ... ok
tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_with_detail[id-6e85fde4-b3cd-4137-ab72-ed5f418e8c24] ... ok

Ran 13 tests in 16.461s


And now using testr:

ad_cjmarti2@cas-devstack:/opt/stack/tempest$ testr run tempest.api.compute.flavors
Ran 11 tests in 2.366s (+0.587s)
PASSED (id=9)

To make the output of testr look similar to that of nosetests, let's use the subunit output and pipe it to the subunit formatter:

 ad_cjmarti2@cas-devstack:/opt/stack/tempest$ testr run --subunit tempest.api.compute.flavors | subunit-trace -n -f
{0} tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_get_flavor [0.549721s] ... ok
{0} tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors [0.248651s] ... ok
{0} tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_detailed_filter_by_min_disk [0.165985s] ... ok
{0} tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_detailed_filter_by_min_ram [0.156681s] ... ok
{0} tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_detailed_limit_results [0.066102s] ... ok
{0} tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_detailed_using_marker [0.164474s] ... ok
{0} tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_filter_by_min_disk [0.195904s] ... ok
{0} tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_filter_by_min_ram [0.176698s] ... ok
{0} tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_limit_results [0.076589s] ... ok
{0} tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_using_marker [0.191877s] ... ok
{0} tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.test_list_flavors_with_detail [0.203010s] ... ok

Ran: 11 tests in 9.0000 sec.
 - Passed: 11
 - Skipped: 0
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 2.1957 sec.

Worker Balance
 - Worker 0 (11 tests) => 0:00:02.201494

Viewing both executions, we can clearly see that nosetests ran 13 tests while testr ran 11. Taking a closer look at the results, we can see that these two tests were run by nosetests but not by testr:

tempest.api.compute.flavors.test_flavors.FlavorsV2TestJSON.create_test_server ... ok
tempest.api.compute ...
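The likely cause of the gap: nose's default test-matching regex collects any name that merely contains "test" at a word-ish boundary (so a fixture-style name like create_test_server qualifies), while the stock unittest loader that testr builds on only collects methods whose names start with the prefix "test". A sketch of the two selection rules (the regex below is modelled on nose's documented default testMatch and is an approximation):

```python
import re
import unittest

# Approximation of nose's default testMatch regex: a name only has to
# *contain* "test"/"Test" after the start, "_", "." or "-".
NOSE_TEST_MATCH = re.compile(r'(?:^|[\b_\.\-])[Tt]est')

# The stock unittest loader (which testr relies on) matches on a prefix only.
loader = unittest.TestLoader()  # loader.testMethodPrefix == "test"

for name in ("test_get_flavor", "create_test_server"):
    collected_by_nose = bool(NOSE_TEST_MATCH.search(name))
    collected_by_testr = name.startswith(loader.testMethodPrefix)
    print(name, collected_by_nose, collected_by_testr)

# test_get_flavor     True True
# create_test_server  True False  <- nose runs it, testr does not
```

Under this rule, create_test_server is picked up by nose but skipped by testr, which is consistent with the 13-vs-11 counts in the runs above.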


Seen: 1,892 times

Last updated: Nov 23 '16