
Is there any way to graphically represent the Tempest test cases which are executed?

asked 2014-02-18 11:42:06 -0600

vinayd

Hi All,

I am using the Tempest framework to automate some of our application-specific REST APIs. I am able to execute the test cases using nosetests, but it does not present the results in a useful manner, e.g. a graphical view of how many test cases passed, failed, etc.

Is there any way to do the above? Please let me know if it is available.

Thanks, Vinay


2 answers


answered 2014-02-23 12:20:47 -0600

rahmu

Running tests with nosetests is no longer supported in Tempest. Instead, it uses Testrepository (testr) to launch the tests.

I don't know of any graphical way to represent the test results, but Tempest provides a CLI tool, run_tempest.sh, that enhances the output of a standard testr run. For instance, to run all the network tests, you would run something like this:

$ ./run_tempest.sh tempest.api.network

The output will then have 4 parts:

Each individual test

Note that if your terminal emulator supports it, it will use a color code to help you identify slower tests.

tempest.api.network.admin.test_agent_management.AgentManagementTestJSON
    test_list_agent[gate,smoke]                                       OK  0.14
    test_list_agents_non_admin[gate,smoke]                            OK  0.16
    test_show_agent[gate,smoke]                                       OK  0.03
    test_update_agent_description[gate,smoke]                         OK  0.13
    test_update_agent_status[gate,smoke]                              OK  0.03
tempest.api.network.admin.test_dhcp_agent_scheduler.DHCPAgentSchedulersTestJSON
    test_list_dhcp_agent_hosting_network[gate,smoke]                  OK  0.31
    test_list_networks_hosted_by_one_dhcp[gate,smoke]                 OK  0.06
    test_remove_network_from_dhcp_agent[gate,smoke]                   OK  0.08
tempest.api.network.admin.test_agent_management.AgentManagementTestXML
    test_list_agent[gate,smoke]                                       OK  0.09
    test_list_agents_non_admin[gate,smoke]                            OK  0.31
...
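If you want to post-process this per-test output yourself (for example, to feed it into a chart), the result lines are regular enough to parse. Here is a minimal sketch; the sample lines are taken from the output above, but the parsing code is my own and not part of Tempest:

```python
import re

# Matches indented result lines such as:
#     test_list_agent[gate,smoke]    OK  0.14
RESULT_LINE = re.compile(
    r'^\s+(?P<name>\S+)\s+(?P<status>OK|FAIL|SKIP)\s+(?P<secs>[\d.]+)\s*$'
)

def parse_results(output):
    """Return a list of (test_name, status, seconds) tuples."""
    results = []
    for line in output.splitlines():
        m = RESULT_LINE.match(line)
        if m:
            results.append((m.group('name'), m.group('status'),
                            float(m.group('secs'))))
    return results

sample = """\
tempest.api.network.admin.test_agent_management.AgentManagementTestJSON
    test_list_agent[gate,smoke]                                       OK  0.14
    test_show_agent[gate,smoke]                                       OK  0.03
"""
for name, status, secs in parse_results(sample):
    print(name, status, secs)
```

Class names (the unindented lines) are skipped by the regex, so only actual test results are collected; from there you can count statuses or plot durations with whatever charting tool you prefer.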

List of slowest tests

Slowest 10 tests took 20.12 secs:
tempest.api.network.test_floating_ips.FloatingIPTestJSON
    test_floating_ip_update_different_router[gate,smoke]                  1.81
tempest.api.network.test_floating_ips.FloatingIPTestXML
    test_floating_ip_delete_port[gate,smoke]                              1.49
    test_floating_ip_update_different_router[gate,smoke]                  2.62
tempest.api.network.test_load_balancer.LoadBalancerTestJSON
    test_create_update_delete_pool_vip                                    2.47
tempest.api.network.test_load_balancer.LoadBalancerTestXML
    test_create_update_delete_pool_vip                                    2.31
tempest.api.network.test_networks.BulkNetworkOpsTestXML
    test_bulk_create_delete_subnet[gate,smoke]                            1.50
tempest.api.network.test_networks.NetworksIpV6TestJSON
    test_port_list_filter_by_router_id[gate,smoke]                        2.06
tempest.api.network.test_networks.NetworksIpV6TestXML
    test_port_list_filter_by_router_id[gate,smoke]                        1.83
tempest.api.network.test_routers.RoutersTest
    test_add_remove_router_interface_with_port_id[gate,smoke]             2.01
    test_update_extra_route[gate,smoke]                                   2.01

Error tracebacks of failing tests

======================================================================
FAIL: setUpClass (tempest.api.network.test_metering_extensions.MeteringJSON)
----------------------------------------------------------------------
Traceback (most recent call last):
_StringException: Traceback (most recent call last):
  File "tempest/api/network/test_metering_extensions.py", line 46, in setUpClass
    cls.metering_label = cls.create_metering_label(name, description)
  File "tempest/api/network/base.py", line 327, in create_metering_label
    name=data_utils.rand_name("metering-label"))
  File "tempest/services/network/network_client_base.py", line 135, in _create
    resp, body = self.post(uri, post_data)
  File "tempest/services/network/network_client_base.py", line 62, in post
    return self.rest_client.post(uri, body, headers)
  File "tempest/common/rest_client.py", line 177, in post
    return self.request('POST', url, headers, body)
  File "tempest/common/rest_client.py", line 352, in request
    resp, resp_body)
  File "tempest/common/rest_client.py", line 396, in _error_checker
    raise exceptions.NotFound(resp_body)
NotFound: Object not found
Details: 404 Not Found

The resource could not be found.

Summary

Ran 176 tests in 82.424s

FAILED (failures=3)
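If all you want is pass/fail counts for a simple chart, you can pull them out of the summary lines. A rough sketch; the regexes are my assumptions based on the output format shown above, and for simplicity everything that did not fail is counted as passed:

```python
import re

def summarize(summary):
    """Extract (total, failures, passed) from a testr-style summary.

    Treats every test that did not fail as passed (skips/errors are
    not broken out separately in this sketch).
    """
    total = int(re.search(r'Ran (\d+) tests', summary).group(1))
    fail_match = re.search(r'failures=(\d+)', summary)
    failures = int(fail_match.group(1)) if fail_match else 0
    return total, failures, total - failures

summary = "Ran 176 tests in 82.424s\n\nFAILED (failures=3)"
total, failures, passed = summarize(summary)
print('passed: %d, failed: %d' % (passed, failures))  # passed: 173, failed: 3
```

A run that ends in "OK" instead of "FAILED (failures=N)" simply reports zero failures.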

answered 2014-10-04 23:52:58 -0600

updated 2014-11-26 07:53:28 -0600

Hi, try taking a look at the ELK stack:

Elasticsearch, Logstash, and Kibana.

When these three are combined, you can get a graphical view of the logs and outputs, and you can customize the dashboards.

http://www.elasticsearch.org/

Let me know if you need more information, as I have included just the basics to get you started.
