Debugging tempest

Introduction

One of the common tasks when you’re working on TripleO is debugging tempest failures. This is very important because once you find an error, you need to provide all the information needed to file a bug, or you may even find the root cause of the problem and fix it yourself!
It’s also important to know when a failure in tempest is actually a bug, when it’s a misconfiguration in some .conf file, a timeout, etc.
In this post I’ll try to dig into all these aspects of debugging tempest. But first, we need to take a few steps: set up our OpenStack environment, install tempest, configure it, and then get hands-on with some cases.

Installing TripleO

I’ll not cover how to install TripleO here; you can follow the official documentation.

Preparing the environment

Once you have TripleO installed, we need to access the undercloud and, from there, set up the tempest environment. You can use either upstream tempest or the tempest RDO package. We will use the latter.

Usually, TripleO installs everything under the stack user, and in its home directory we can find all the scripts and images used to install our environment.

ssh stack@undercloud

Now, we need to source the credentials that TripleO provides to us:

source /home/stack/overcloudrc.v3

There’s also an overcloudrc for the v2 API; however, it’s going to be deprecated soon.

You also need to have a public network available. If you don’t have one, the following script can create it for you:

#!/bin/bash

# Clean up any existing floating IPs, routers, ports and networks
for i in $(neutron floatingip-list -c id -f value); do
    neutron floatingip-disassociate $i
    neutron floatingip-delete $i
done
for i in $(neutron router-list -c id -f value); do
    neutron router-gateway-clear $i
done
for r in $(neutron router-list -c id -f value); do
    for p in $(neutron router-port-list $r -c id -f value); do
        neutron router-interface-delete $r port=$p || true
    done
done
for i in $(neutron router-list -c id -f value); do neutron router-delete $i; done
for i in $(neutron port-list -c id -f value); do neutron port-delete $i; done
for i in $(neutron net-list -c id -f value); do neutron net-delete $i; done

# Create the external (public) network and its subnet
neutron net-create public --router:external=True \
    --provider:network_type flat \
    --provider:physical_network datacentre

public_net_id=$(neutron net-show public -f value -c id)

neutron subnet-create --name ext-subnet \
    --allocation-pool start=192.168.24.100,end=192.168.24.120 \
    --disable-dhcp \
    --gateway 192.168.24.1 \
    public 192.168.24.0/24

Set the start and end IPs of your external subnet according to your network environment.
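Before creating the subnet, it’s worth double-checking that the pool you picked actually fits your CIDR. A quick sketch with Python’s ipaddress module (my addition, not part of the TripleO tooling; the module is stdlib in Python 3, while on the Python 2 hosts of this era you’d need the pip-installable ipaddress backport):

```python
# Quick sanity check that the allocation pool and gateway fit the subnet.
import ipaddress

subnet = ipaddress.ip_network(u'192.168.24.0/24')
start = ipaddress.ip_address(u'192.168.24.100')
end = ipaddress.ip_address(u'192.168.24.120')
gateway = ipaddress.ip_address(u'192.168.24.1')

# Pool and gateway must be inside the subnet, and the pool must not
# swallow the gateway address.
assert start in subnet and end in subnet and gateway in subnet
assert start <= end
assert not (start <= gateway <= end)
print('allocation pool looks sane')
```

If any assertion fires, fix the start/end values before running the script above.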

Install the openstack-tempest, python-junitxml and python-tempestconf packages:

sudo yum -y install openstack-tempest python-junitxml python-tempestconf

Initialize tempest in a new directory (you don’t need to mkdir it, tempest will do that for you):

tempest init /home/stack/tempest
cd /home/stack/tempest

The output should be something very straightforward:

2017-09-21 08:01:07.215 1049 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf

Tempestconf

Tempestconf is a python script created to make it easy to generate a tempest.conf file. If you have ever worked with tempest, you know it’s pretty hard to set everything up properly by hand, and tempestconf tries to detect as much as it can: which services are running in your environment, what you have enabled and what you don’t, etc.
Assuming you have sourced overcloudrc.v3 and executed the script to create your public network, you can run the following command:

discover-tempest-config --out etc/tempest.conf \
    --network-id $public_net_id \
    --deployer-input ~/tempest-deployer-input.conf \
    --debug --create \
    identity.uri $OS_AUTH_URL \
    auth.admin_password $OS_PASSWORD \
    auth.admin_username $OS_USERNAME \
    auth.use_dynamic_credentials true \
    compute-feature-enabled.attach_encrypted_volume False \
    network.tenant_network_cidr 192.168.0.0/24 \
    compute.build_timeout 500 \
    volume-feature-enabled.api_v1 False \
    validation.image_ssh_user cirros \
    validation.ssh_user cirros \
    network.build_timeout 500 \
    volume.build_timeout 500 \
    orchestration.stack_owner_role heat_stack_owner

As you can see, tempest can get information from other files, for example tempest-deployer-input.conf, which is created by TripleO with information like the region, the volume backend being used, etc.
You can also set some configs manually, following the pattern ‘section.config_name value’. For example, in the validation section we are setting the ssh_user option to cirros. In tempest.conf it will show up like this:

[validation]
ssh_user = cirros
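Just to make the mapping concrete, here is a toy Python sketch of how a ‘section.config_name value’ override translates into an ini section. This illustrates the idea only; it is not python-tempestconf’s actual code:

```python
# Toy translation of 'section.config_name value' overrides into ini
# sections; NOT python-tempestconf's real implementation.
try:
    import configparser                      # Python 3
except ImportError:
    import ConfigParser as configparser      # Python 2

overrides = [
    'validation.ssh_user cirros',
    'compute.build_timeout 500',
]

cfg = configparser.ConfigParser()
for override in overrides:
    key, value = override.split(' ', 1)
    section, option = key.split('.', 1)
    if not cfg.has_section(section):
        cfg.add_section(section)
    cfg.set(section, option, value)

print(cfg.get('validation', 'ssh_user'))   # -> cirros
```

Every override on the discover-tempest-config command line ends up as one such section/option pair in the generated file.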

Once the script finishes, you will have a tempest.conf in your /home/stack/tempest/etc/ directory, with content something like this:

[DEFAULT]
debug = true
use_stderr = false
log_file = tempest.log

[auth]
tempest_roles = _member_
admin_username = admin
admin_project_name = admin
admin_domain_name = Default
use_dynamic_credentials = true
admin_password = Tudv4RGAhwnJq8UPvgbamUAkY

[compute]
image_ssh_user = cirros
region = regionOne
build_timeout = 500
flavor_ref = d8668b47-5d68-4729-b3f6-5eed67eb50fe
flavor_ref_alt = 313dfd3d-059a-4dc2-a6da-6ebc32cfe546
image_ref = 31edf0e4-eed4-4b09-96cd-1f8c9c78887d
image_ref_alt = 7943a836-218c-4902-bf4a-1a36ca52010c

[identity]
username = demo
tenant_name = demo
password = secrete
alt_username = alt_demo
alt_tenant_name = alt_demo
alt_password = secrete
admin_username = admin
admin_tenant_name = admin
admin_domain_name = Default
disable_ssl_certificate_validation = true
region = regionOne
admin_password = Tudv4RGAhwnJq8UPvgbamUAkY
uri = http://192.168.24.7:5000/v2.0
auth_version = v3
uri_v3 = http://192.168.24.7:5000/v3
admin_tenant_id = a1e4e1ee88804a8cacc0796757a5a0c1

[object-storage]
operator_role = SwiftOperator
region = regionOne

[data-processing]

[orchestration]
stack_owner_role = heat_stack_owner
region = regionOne

[scenario]
img_dir = etc

[oslo_concurrency]
lock_path = /tmp

[volume-feature-enabled]
bootable = true
volume_services = true
api_v1 = False
api_v2 = True
api_v3 = True
api_extensions =

[compute-feature-enabled]
live_migration = false
live_migrate_paused_instances = true
preserve_ports = true
console_output = false
attach_encrypted_volume = False
api_extensions = NMN,OS-DCF,OS-EXT-AZ,OS-EXT-IMG-SIZE,OS-EXT-IPS,OS-EXT-IPS-MAC,OS-EXT-SRV-ATTR,OS-EXT-STS,OS-FLV-DISABLED,OS-FLV-EXT-DATA,OS-SCH-HNT,OS-SRV-USG,os-access-ips,os-admin-actions,os-admin-password,os-agents,os-aggregates,os-assisted-volume-snapshots,os-attach-interfaces,os-availability-zone,os-baremetal-ext-status,os-baremetal-nodes,os-block-device-mapping,os-block-device-mapping-v2-boot,os-cell-capacities,os-cells,os-certificates,os-cloudpipe,os-cloudpipe-update,os-config-drive,os-console-auth-tokens,os-console-output,os-consoles,os-create-backup,os-create-server-ext,os-deferred-delete,os-evacuate,os-extended-evacuate-find-host,os-extended-floating-ips,os-extended-hypervisors,os-extended-networks,os-extended-quotas,os-extended-rescue-with-image,os-extended-services,os-extended-services-delete,os-extended-status,os-extended-volumes,os-fixed-ips,os-flavor-access,os-flavor-extra-specs,os-flavor-manage,os-flavor-rxtx,os-flavor-swap,os-floating-ip-dns,os-floating-ip-pools,os-floating-ips,os-floating-ips-bulk,os-fping,os-hide-server-addresses,os-hosts,os-hypervisor-status,os-hypervisors,os-instance-actions,os-instance_usage_audit_log,os-keypairs,os-lock-server,os-migrate-server,os-migrations,os-multiple-create,os-networks,os-networks-associate,os-pause-server,os-personality,os-preserve-ephemeral-rebuild,os-quota-class-sets,os-quota-sets,os-rescue,os-security-group-default-rules,os-security-groups,os-server-diagnostics,os-server-external-events,os-server-group-quotas,os-server-groups,os-server-list-multi-status,os-server-password,os-server-sort-keys,os-server-start-stop,os-services,os-shelve,os-simple-tenant-usage,os-suspend-server,os-tenant-networks,os-used-limits,os-used-limits-for-admin,os-user-data,os-user-quotas,os-virtual-interfaces,os-volume-attachment-update,os-volumes

[network-feature-enabled]
ipv6_subnet_attributes = true
api_extensions = default-subnetpools,qos,network-ip-availability,network_availability_zone,auto-allocated-topology,ext-gw-mode,binding,agent,subnet_allocation,l3_agent_scheduler,tag,external-net,standard-attr-tag,flavors,net-mtu,availability_zone,qos-default,quotas,revision-if-match,l3-ha,provider,multi-provider,quota_details,address-scope,trunk,extraroute,net-mtu-writable,subnet-service-types,standard-attr-timestamp,service-type,qos-rule-type-details,l3-flavors,port-security,extra_dhcp_opt,standard-attr-revisions,pagination,sorting,security-group,dhcp_agent_scheduler,router_availability_zone,rbac-policies,qos-bw-limit-direction,tag-ext,standard-attr-description,router,allowed-address-pairs,project-id,trunk-details

[validation]
image_ssh_user = cirros
ssh_user = cirros

[image]
region = regionOne

[network]
region = regionOne
tenant_network_cidr = 192.168.0.0/24
build_timeout = 500
public_network_id = 9c5f91dd-118e-4637-a3cf-809f672a3c38
floating_network_name = public

[volume]
backend1_name = tripleo_iscsi
region = regionOne
build_timeout = 500

[service_available]
glance = True
manila = False
cinder = True
swift = True
sahara = False
panko = True
nova = True
neutron = True
trove = False
ceilometer = True
ironic = False
heat = True
zaqar = True
gnocchi = True
aodh = True
aodh_plugin = True
horizon = False

[image-feature-enabled]
api_v1 = False
api_v2 = True

[identity-feature-enabled]
api_v2 = True
api_v3 = True
api_extensions = https:,OS-REVOKE,OS-FEDERATION,OS-KSCRUD,OS-SIMPLE-CERT,OS-OAUTH1,OS-EC2
forbid_global_implied_dsr = True

[object-storage-feature-enabled]
discoverable_apis = formpost,bulk_delete,versioned_writes,container_quotas,tempurl,bulk_upload,ratelimit,slo,account_quotas,staticweb

[dashboard]
dashboard_url = http://192.168.24.7/dashboard/
login_url = http://192.168.24.7/dashboard/auth/login/

Testing if everything is working properly

Once we have finished setting up tempest, let’s see if everything works properly. You can check whether any tempest plugins are installed:

tempest list-plugins

+------------------+---------------------------------------------------------+
| Name | EntryPoint |
+------------------+---------------------------------------------------------+
| barbican_tests | barbican_tempest_plugin.plugin:BarbicanTempestPlugin |
| zaqar_tests | zaqar.tests.tempest_plugin.plugin:ZaqarTempestPlugin |
| ceilometer_tests | ceilometer.tests.tempest.plugin:CeilometerTempestPlugin |
| gnocchi_tests | gnocchi.tempest.plugin:GnocchiTempestPlugin |
| aodh_tests | aodh.tests.tempest.plugin:AodhTempestPlugin |
| aws_tests | ec2api.tests.functional.plugin:AWSTempestPlugin |
+------------------+---------------------------------------------------------+

And also list the tests:

ostestr -l
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-500} \
${PYTHON:-python} -m subunit.run discover -t /usr/lib/python2.7/site-packages/tempest /usr/lib/python2.7/site-packages/tempest/test_discover --list
tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_create_agent[id-1fc6bdc8-0b6d-4cc7-9f30-9b04fabe5b90]
tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_delete_agent[id-470e0b89-386f-407b-91fd-819737d0b335]
tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_list_agents[id-6a326c69-654b-438a-80a3-34bcc454e138]
tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_list_agents_with_filter[id-eabadde4-3cd7-4ec4-a4b5-5a936d2d4408]
tempest.api.compute.admin.test_agents.AgentsAdminTestJSON.test_update_agent[id-dc9ffd51-1c50-4f0e-a820-ae6d2a568a9e]
tempest.api.compute.admin.test_aggregates.AggregatesAdminTestJSON.test_aggregate_add_host_create_server_with_az[id-96be03c7-570d-409c-90f8-e4db3c646996]
tempest.api.compute.admin.test_aggregates.AggregatesAdminTestJSON.test_aggregate_add_host_get_details[id-eeef473c-7c52-494d-9f09-2ed7fc8fc036]
tempest.api.compute.admin.test_aggregates.AggregatesAdminTestJSON.test_aggregate_add_host_list[id-7f6a1cc5-2446-4cdb-9baa-b6ae0a919b72]
...

It’s a huge list!

Running tempest

Well, in order to have something to debug, we must run some tests. But first, let’s save the current state of the cloud:

tempest cleanup --init-saved-state

This will create a saved_state.json file with information about how our environment was before we ran tempest. This is useful because if some test fails in the tearDownClass step, it might leave leftovers behind, and we can clean them up later.

Smoke

Tempest has a set of tests tagged smoke. These tests are supposed to be fast and verify the sanity of your system, and that’s what we are going to run here:

ostestr smoke

Debugging tempest

And now what everybody was expecting!
Now that we have run tempest and have our results, we can start debugging the errors.

Type of errors

In my opinion, tempest failures come in several types: some are actual bugs that you need to report, others are caused by misconfiguration in some file, and others by timeouts. We will try to cover most of these types here.

Getting the list of errors

When tempest finishes running, it shows you a report with the number of tests executed, the successes and failures, like this one:

======
Totals
======
Ran: 1522 tests in 3417.0000 sec.
- Passed: 1408
- Skipped: 87
- Expected Fail: 0
- Unexpected Success: 0
- Failed: 27
Sum of execute time for each test: 7235.4187 sec.

==============
Worker Balance
==============
- Worker 0 (367 tests) => 0:45:38.887316
- Worker 1 (384 tests) => 0:38:19.733741
- Worker 2 (325 tests) => 0:56:47.206368
- Worker 3 (446 tests) => 0:46:33.747285

But that’s not enough for us; we need the list of failing tests. For that, we can use the testr tool. For example, when you execute the following command:

testr failing |grep FAIL | sed -e 's/\[.*\].*//g' -e 's/FAIL\: //'

We will see something like this:

tempest.api.compute.certificates.test_certificates.CertificatesV2TestJSON.test_create_root_certificate
tempest.api.compute.certificates.test_certificates.CertificatesV2TestJSON.test_get_root_certificate
setUpClass (tempest.api.orchestration.stacks.test_nova_keypair_resources.NovaKeyPairResourcesYAMLTest)
setUpClass (tempest.api.orchestration.stacks.test_swift_resources.SwiftResourcesTestJSON)
tempest.api.object_storage.test_object_version.ContainerTest.test_versioned_container
setUpClass (tempest.api.orchestration.stacks.test_templates.TemplateYAMLTestJSON)
setUpClass (tempest.api.orchestration.stacks.test_templates_negative.TemplateAWSNegativeTestJSON)
setUpClass (tempest.api.orchestration.stacks.test_volumes.CinderResourcesTest)
setUpClass (tempest.api.orchestration.stacks.test_limits.TestServerStackLimits)
setUpClass (tempest.api.orchestration.stacks.test_non_empty_stack.StacksTestJSON)
setUpClass (tempest.api.orchestration.stacks.test_resource_types.ResourceTypesTest)
setUpClass (tempest.api.orchestration.stacks.test_soft_conf.TestSoftwareConfig)
setUpClass (tempest.api.orchestration.stacks.test_templates_negative.TemplateYAMLNegativeTestJSON)
tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV1Test.test_volume_backup_create_get_detailed_list_restore_delete
tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV1Test.test_volume_backup_export_import
tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_cryptsetup
tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV2Test.test_volume_backup_create_get_detailed_list_restore_delete
tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV2Test.test_volume_backup_export_import
setUpClass (tempest.api.orchestration.stacks.test_environment.StackEnvironmentTest)
setUpClass (tempest.api.orchestration.stacks.test_nova_keypair_resources.NovaKeyPairResourcesAWSTest)
setUpClass (tempest.api.orchestration.stacks.test_stacks.StacksTestJSON)
setUpClass (tempest.api.orchestration.stacks.test_templates.TemplateAWSTestJSON)
tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_luks
tempest.scenario.test_network_advanced_server_ops.TestNetworkAdvancedServerOps.test_server_connectivity_reboot
tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario
tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_cross_tenant_traffic
tempest.scenario.test_security_groups_basic_ops.TestSecurityGroupsBasicOps.test_multiple_security_groups
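By the way, the sed expressions in that pipeline just strip the ‘FAIL: ’ prefix and the trailing ‘[id-...]’ tag; you can verify them against a sample line without any cloud at hand:

```shell
# Exercise the filtering expressions from the pipeline on one sample line.
# No OpenStack needed; this only tests the text processing.
sample='FAIL: tempest.api.compute.certificates.test_certificates.CertificatesV2TestJSON.test_create_root_certificate[id-c070a441-b08e-447e-a733-905909535b1b]'
echo "$sample" | sed -e 's/\[.*\].*//g' -e 's/FAIL\: //'
```

This prints just the bare test id, ready to be passed to a test runner.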

setUpClass

You may have noticed that some failures start with setUpClass. This means the test hasn’t even started: some error was found while the test was setting up whatever it needs (creating VMs, networks, etc.).

tempest.something.something_else.MyClassV2.test_that_fails

When you see a failure like that, it means the test passed the setUpClass phase and actually ran, but failed for some reason. Don’t worry, we will debug it.

tearDownClass

Some tests fail when they try to clean up: for example, the setUpClass phase creates a VM, but in tearDownClass the VM can’t be deleted. It’s rare, but it can happen.

The debug!

Let’s try to debug something.

Re-running one particular test

The easiest way to do that is using python testtools with the following command:

python -m testtools.run tempest.test.that.i.want.to.run

Why not use ostestr to run that particular test? Because we are going to use a tool called pudb, and you can’t use it with ostestr.
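As a preview, a pudb breakpoint is dropped with pudb.set_trace(), just like with pdb. Here is a minimal, self-contained sketch of where it goes inside a test method; the class and test names are made up, and the environment-variable guard is my addition so the snippet also runs non-interactively (in a real tempest test you would just insert the two pudb lines before the failing call):

```python
# Stand-in test class showing pudb.set_trace() placement; this is NOT a
# real tempest test. pudb is assumed installed (pip install pudb).
import os
import unittest


class ContainerSketchTest(unittest.TestCase):

    def test_versioned_container(self):
        # Guarded so the test still runs when no debugger session is wanted.
        if os.environ.get('TEMPEST_PUDB'):
            import pudb
            pudb.set_trace()  # execution pauses here in the pudb TUI
        # ...the real request/assertion steps of the tempest test go here...
        self.assertTrue(True)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(ContainerSketchTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Run with TEMPEST_PUDB=1 to actually land in the debugger.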

Was it a timeout?

The first thing we do when debugging is to re-run the test, to make sure the failure wasn’t caused by a timeout; sometimes the machine or network is struggling and we hit one. Tempest has timeouts configured in etc/tempest.conf that can, for example, mark a test as failed if a VM doesn’t reach the ACTIVE status within 60 seconds. But if you’re running tests in parallel, the machine might take more time to reach ACTIVE because all resources are busy. So it’s always a good idea to re-run the test, and the re-run will also show the output of the failure, which is a good source of information.

So, let’s see a few examples of failures and how to debug them:

Let’s take the first failing test, tempest.api.compute.certificates.test_certificates.CertificatesV2TestJSON.test_create_root_certificate

python -m testtools.run tempest.api.compute.certificates.test_certificates.CertificatesV2TestJSON.test_create_root_certificate

And after a while, here’s the output:

Tests running...
======================================================================
FAIL: tempest.api.compute.certificates.test_certificates.CertificatesV2TestJSON.test_create_root_certificate[id-c070a441-b08e-447e-a733-905909535b1b]
----------------------------------------------------------------------
pythonlogging:'': {{{
2016-06-15 10:18:15,733 28198 INFO [tempest.lib.common.rest_client] Request (CertificatesV2TestJSON:test_create_root_certificate): 504 POST http://10.0.0.101:8774/v2.1/832848e6175148aa874dbb2a891cd02a/os-certificates 60.004s
2016-06-15 10:18:15,734 28198 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
Body: None
Response - Headers: {'status': '504', 'connection': 'close', 'content-type': 'text/html', 'content-location': 'http://10.0.0.101:8774/v2.1/832848e6175148aa874dbb2a891cd02a/os-certificates', 'cache-control': 'no-cache'}
Body:

<h1>504 Gateway Time-out</h1>

The server didn't respond in time.

}}}

Traceback (most recent call last):
File "tempest/api/compute/certificates/test_certificates.py", line 34, in test_create_root_certificate
body = self.certificates_client.create_certificate()['certificate']
File "tempest/lib/services/compute/certificates_client.py", line 35, in create_certificate
resp, body = self.post(url, None)
File "tempest/lib/common/rest_client.py", line 270, in post
return self.request('POST', url, extra_headers, headers, body, chunked)
File "tempest/lib/services/compute/base_compute_client.py", line 53, in request
method, url, extra_headers, headers, body, chunked)
File "tempest/lib/common/rest_client.py", line 664, in request
resp, resp_body)
File "tempest/lib/common/rest_client.py", line 832, in _error_checker
resp=resp)
tempest.lib.exceptions.UnexpectedResponseCode: Unexpected response code received
Details: 504

Ran 1 test in 61.744s
FAILED (failures=1)

Notice lines 9-16 of the output. The first thing we see is that we got a Gateway Time-out.
Checking the name of the test, it’s trying to create a root certificate; also check the content-location: http://10.0.0.101:8774/v2.1/832848e6175148aa874dbb2a891cd02a/os-certificates
It’s not obvious, but in this case os-certificates belongs to a service, openstack-nova-cert. So the first thing we need to check is whether this service is up and running on our controller; we need to ssh into the controller machine and check it.
You already know the IP address, 10.0.0.101, and the user is heat-admin:

ssh heat-admin@10.0.0.101

For ssh to work, you need to be logged in as the ‘stack’ user.

Now, on the controller machine, let’s see the status of the services:

systemctl status openstack-*

This will show a list of all services, and you will notice this:

● openstack-nova-cert.service - OpenStack Nova Cert Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-cert.service; enabled; vendor preset: disabled)

As you can see, the openstack-nova-cert service is INACTIVE!
Let’s start it:

sudo systemctl start openstack-nova-cert

And check if the service was successfully started:

systemctl status openstack-nova-cert

And here’s the output:

● openstack-nova-cert.service - OpenStack Nova Cert Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-cert.service; disabled; vendor preset: disabled)
Active: active (running) since Qua 2017-09-21 14:46:43 UTC; 51s ago
Main PID: 6985 (nova-cert)
CGroup: /system.slice/openstack-nova-cert.service
└─6985 /usr/bin/python2 /usr/bin/nova-cert

Now, let’s go back to our undercloud and try running the test again. Please remember to activate the virtualenv again.

python -m testtools.run tempest.api.compute.certificates.test_certificates.CertificatesV2TestJSON.test_create_root_certificate

And now we have a different output:

Tests running...

Ran 1 test in 1.153s
OK

And congratulations! You have debugged your first tempest test!
But this was an easy one; let’s go for something more challenging.

More challenging stuff

Let’s take the next failure, tempest.api.orchestration.stacks.test_nova_keypair_resources.NovaKeyPairResourcesYAMLTest

python -m testtools.run tempest.api.orchestration.stacks.test_nova_keypair_resources.NovaKeyPairResourcesYAMLTest

Output:

Tests running...
======================================================================
ERROR: setUpClass (tempest.api.orchestration.stacks.test_nova_keypair_resources.NovaKeyPairResourcesYAMLTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tempest/test.py", line 278, in setUpClass
six.reraise(etype, value, trace)
File "tempest/test.py", line 266, in setUpClass
cls.setup_credentials()
File "tempest/api/orchestration/base.py", line 40, in setup_credentials
cls.os = cls.get_client_manager(roles=[stack_owner_role])
File "tempest/test.py", line 536, in get_client_manager
creds = cred_provider.get_creds_by_roles(**params)
File "tempest/common/dynamic_creds.py", line 323, in get_creds_by_roles
return self.get_credentials(roles)
File "tempest/common/dynamic_creds.py", line 286, in get_credentials
credentials = self._create_creds(roles=credential_type)
File "tempest/common/dynamic_creds.py", line 157, in _create_creds
self.creds_client.assign_user_role(user, project, role)
File "tempest/common/cred_client.py", line 72, in assign_user_role
raise lib_exc.NotFound(msg)
tempest.lib.exceptions.NotFound: Object not found
Details: No "heat_stack_owner" role found

Ran 0 tests in 4.887s
FAILED (failures=1)

Now things are getting interesting: it wasn’t a timeout, and it wasn’t a service that was disabled, but you can see in the details that a role wasn’t found.
So, let’s first verify which roles we have in keystone:

source /home/stack/overcloudrc.v3
openstack role list
+----------------------------------+------------------+
| ID | Name |
+----------------------------------+------------------+
| 3d7ddc928c3044988b204730a3087d0d | swiftoperator |
| 7608609651af49369aa68510715a3507 | admin |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
| cc2203d86a2a496c9b62876bbe8ffaa0 | heat_stack_user |
| cfd9df98de374b678f756d1ed8dd3662 | ResellerAdmin |
+----------------------------------+------------------+

So, as you can see, the role is heat_stack_user instead of heat_stack_owner. But why is tempest trying to use heat_stack_owner?
Let’s see:

cd /usr/lib/python2.7/site-packages/tempest
grep heat_stack_owner * -R
config.py: cfg.StrOpt('stack_owner_role', default='heat_stack_owner',

Now, that’s interesting: in tempest/config.py, the default role is heat_stack_owner.
Tempest has default values for everything you can set in etc/tempest.conf. If nothing is set in the tempest.conf file, tempest will use the default one, so we have two options here: create a heat_stack_owner role, or change/add the stack_owner_role option in our etc/tempest.conf. I believe the second option is better, so let’s do it.
How do we know which section to add it under?
If you open tempest/config.py and look for heat_stack_owner, you will see that it’s under OrchestrationGroup:

orchestration_group = cfg.OptGroup(name='orchestration',
                                   title='Orchestration Service Options')

OrchestrationGroup = [
    cfg.StrOpt('catalog_type',
               default='orchestration',
               help="Catalog type of the Orchestration service."),
    cfg.StrOpt('region',
               default='',
               help="The orchestration region name to use. If empty, the "
                    "value of identity.region is used instead. If no such "
                    "region is found in the service catalog, the first found "
                    "one is used."),
    cfg.StrOpt('endpoint_type',
               default='publicURL',
               choices=['public', 'admin', 'internal',
                        'publicURL', 'adminURL', 'internalURL'],
               help="The endpoint type to use for the orchestration service."),
    cfg.StrOpt('stack_owner_role', default='heat_stack_owner',
    ...
    (orchestration_group, OrchestrationGroup),

OrchestrationGroup becomes the [orchestration] section in etc/tempest.conf. So, let’s go there and add the following line under [orchestration] in etc/tempest.conf:

stack_owner_role = heat_stack_user
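If you prefer to script this instead of opening an editor, a sed one-liner can append the option right after the section header. A demo on a throwaway file follows (point conf at your real /home/stack/tempest/etc/tempest.conf in practice; GNU sed syntax):

```shell
# Demo on a temp copy; in practice use conf=/home/stack/tempest/etc/tempest.conf
conf=$(mktemp)
printf '[orchestration]\nregion = regionOne\n' > "$conf"

# Insert the option immediately after the [orchestration] header (GNU sed).
sed -i '/^\[orchestration\]/a stack_owner_role = heat_stack_user' "$conf"

cat "$conf"
```

The same pattern works for any other section/option pair you need to tweak.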

Let’s try again?

python -m testtools.run tempest.api.orchestration.stacks.test_nova_keypair_resources.NovaKeyPairResourcesYAMLTest

And the output:

Tests running...
======================================================================
ERROR: setUpClass (tempest.api.orchestration.stacks.test_nova_keypair_resources.NovaKeyPairResourcesYAMLTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tempest/test.py", line 278, in setUpClass
six.reraise(etype, value, trace)
File "tempest/test.py", line 271, in setUpClass
cls.resource_setup()
File "tempest/api/orchestration/stacks/test_nova_keypair_resources.py", line 35, in resource_setup
'KeyPairName2': cls.stack_name + '_2'
File "tempest/api/orchestration/base.py", line 77, in create_stack
files=files)
File "tempest/services/orchestration/json/orchestration_client.py", line 56, in create_stack
resp, body = self.post(uri, headers=headers, body=body)
File "tempest/lib/common/rest_client.py", line 270, in post
return self.request('POST', url, extra_headers, headers, body, chunked)
File "tempest/lib/common/rest_client.py", line 664, in request
resp, resp_body)
File "tempest/lib/common/rest_client.py", line 757, in _error_checker
raise exceptions.Forbidden(resp_body, resp=resp)
tempest.lib.exceptions.Forbidden: Forbidden
Details: {u'title': u'Forbidden', u'explanation': u'Access was denied to this resource.', u'code': 403, u'error': {u'message': u'You are not authorized to use create.', u'traceback': None, u'type': u'Forbidden'}}

Ran 0 tests in 9.718s
FAILED (failures=1)

And now we have a different failure: in this case, the role doesn’t have the rights to create these resources.
Well, we can then use the _member_ role, since this was the default role for everything in tempest before roles started being split out. Let’s change our etc/tempest.conf again, in the [orchestration] section:

stack_owner_role = _member_

Running again…

python -m testtools.run tempest.api.orchestration.stacks.test_nova_keypair_resources.NovaKeyPairResourcesYAMLTest

And the output:

Tests running...

Ran 2 tests in 16.784s
OK

The pudb in action

Here is where things get interesting. Let’s check the next failure in our list:

python -m testtools.run tempest.api.object_storage.test_object_version.ContainerTest.test_versioned_container
Tests running...

======================================================================
FAIL: tempest.api.object_storage.test_object_version.ContainerTest.test_versioned_container[id-a151e158-dcbf-4a1f-a1e7-46cd65895a6f]
----------------------------------------------------------------------
pythonlogging:'': {{{
2016-06-16 07:15:36,531 15907 INFO [tempest.lib.common.rest_client] Request (ContainerTest:test_versioned_container): 200 POST http://10.0.0.101:5000/v2.0/tokens
2016-06-16 07:15:36,531 15907 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json'}
Body:
Response - Headers: {'status': '200', 'content-length': '4540', 'content-location': 'http://10.0.0.101:5000/v2.0/tokens', 'vary': 'X-Auth-Token,Accept-Encoding', 'server': 'Apache/2.4.6 (Red Hat Enterprise Linux)', 'connection': 'close', 'date': 'Thu, 16 Jun 2016 11:15:36 GMT', 'content-type': 'application/json', 'x-openstack-request-id': 'req-f79820a7-d01a-4ac2-a6ea-ac76f63f6070'}
Body: {"access": {"token": {"issued_at": "2016-06-16T11:15:36.517581Z", "expires": "2016-06-16T12:15:36Z", "id": "28fba048ed524fdcb5309a6cd2e6975c", "tenant": {"description": "tempest-ContainerTest-272815497-desc", "enabled": true, "id": "65f22b448e6c4781bcd5d3eb1796e8b9", "name": "tempest-ContainerTest-272815497"}, "audit_ids": ["Mm8FfDIARqmJw6dx0meMew"]}, "serviceCatalog": [{"endpoints": [{"adminURL": "http://172.16.2.4:8774/v2.1/65f22b448e6c4781bcd5d3eb1796e8b9", "region": "regionOne", "internalURL": "http://172.16.2.4:8774/v2.1/65f22b448e6c4781bcd5d3eb1796e8b9", "id": "564951d5f8bd4f26b0049b2738c4ebb5", "publicURL": "http://10.0.0.101:8774/v2.1/65f22b448e6c4781bcd5d3eb1796e8b9"}], "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": "http://172.16.2.4:9696/", "region": "regionOne", "internalURL": "http://172.16.2.4:9696/", "id": "05590b4f4975433287c757c2d5c0d4b9", "publicURL": "http://10.0.0.101:9696/"}], "endpoints_links": [], "type": "network", "name": "neutron"}, {"endpoints": [{"adminURL": "http://172.16.2.4:8776/v2/65f22b448e6c4781bcd5d3eb1796e8b9", "region": "regionOne", "internalURL": "http://172.16.2.4:8776/v2/65f22b448e6c4781bcd5d3eb1796e8b9", "id": "4f9bdc13e0bc4939a2870b4877d59388", "publicURL": "http://10.0.0.101:8776/v2/65f22b448e6c4781bcd5d3eb1796e8b9"}], "endpoints_links": [], "type": "volumev2", "name": "cinderv2"}, {"endpoints": [{"adminURL": "http://172.16.2.4:8042/", "region": "regionOne", "internalURL": "http://172.16.2.4:8042/", "id": "0310c7596c1f45859f5162c58348df71", "publicURL": "http://10.0.0.101:8042/"}], "endpoints_links": [], "type": "alarming", "name": "aodh"}, {"endpoints": [{"adminURL": "http://172.16.2.4:8041/", "region": "regionOne", "internalURL": "http://172.16.2.4:8041/", "id": "042098a261774ce9ac5da1b703877cb9", "publicURL": "http://10.0.0.101:8041/"}], "endpoints_links": [], "type": "metric", "name": "gnocchi"}, {"endpoints": [{"adminURL": "http://172.16.2.4:8777/", "region": "regionOne", 
"internalURL": "http://172.16.2.4:8777/", "id": "01e56f52214a41efb3e3913e7e7d69d2", "publicURL": "http://10.0.0.101:8777/"}], "endpoints_links": [], "type": "metering", "name": "ceilometer"}, {"endpoints": [{"adminURL": "http://172.16.2.4:8776/v1/65f22b448e6c4781bcd5d3eb1796e8b9", "region": "regionOne", "internalURL": "http://172.16.2.4:8776/v1/65f22b448e6c4781bcd5d3eb1796e8b9", "id": "4af0f4336cd045b3ba28581e4101dc81", "publicURL": "http://10.0.0.101:8776/v1/65f22b448e6c4781bcd5d3eb1796e8b9"}], "endpoints_links": [], "type": "volume", "name": "cinder"}, {"endpoints": [{"adminURL": "http://172.16.2.4:8004/v1/65f22b448e6c4781bcd5d3eb1796e8b9", "region": "regionOne", "internalURL": "http://172.16.2.4:8004/v1/65f22b448e6c4781bcd5d3eb1796e8b9", "id": "0b9f10914c7640288f4520e9d7e6a3ab", "publicURL": "http://10.0.0.101:8004/v1/65f22b448e6c4781bcd5d3eb1796e8b9"}], "endpoints_links": [], "type": "orchestration", "name": "heat"}, {"endpoints": [{"adminURL": "http://172.16.1.4:8080/v1", "region": "regionOne", "internalURL": "http://172.16.1.4:8080/v1/AUTH_65f22b448e6c4781bcd5d3eb1796e8b9", "id": "02eab80fd77c45078559cd02a66bb60c", "publicURL": "http://10.0.0.101:8080/v1/AUTH_65f22b448e6c4781bcd5d3eb1796e8b9"}], "endpoints_links": [], "type": "object-store", "name": "swift"}, {"endpoints": [{"adminURL": "http://172.16.2.4:8386/v1.1/65f22b448e6c4781bcd5d3eb1796e8b9", "region": "regionOne", "internalURL": "http://172.16.2.4:8386/v1.1/65f22b448e6c4781bcd5d3eb1796e8b9", "id": "92e773dde49c49f2adfe9a5993ba42ee", "publicURL": "http://10.0.0.101:8386/v1.1/65f22b448e6c4781bcd5d3eb1796e8b9"}], "endpoints_links": [], "type": "data-processing", "name": "sahara"}, {"endpoints": [{"adminURL": "http://172.16.1.4:9292/", "region": "regionOne", "internalURL": "http://172.16.1.4:9292/", "id": "45af05b43f5f4ba88c8b378bbbde10b4", "publicURL": "http://10.0.0.101:9292/"}], "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": "http://192.0.2.6:35357/v2.0", 
"region": "regionOne", "internalURL": "http://172.16.2.4:5000/v2.0", "id": "4076dec3f8c943baa84ad5671983613e",
2016-06-16 07:15:36,708 15907 INFO [tempest.lib.common.rest_client] Request (ContainerTest:test_versioned_container): 201 PUT http://10.0.0.101:8080/v1/AUTH_65f22b448e6c4781bcd5d3eb1796e8b9/tempest-TestVersionContainer-734473029 0.177s
2016-06-16 07:15:36,709 15907 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': ''}
Body: None
Response - Headers: {'status': '201', 'content-length': '0', 'content-location': 'http://10.0.0.101:8080/v1/AUTH_65f22b448e6c4781bcd5d3eb1796e8b9/tempest-TestVersionContainer-734473029', 'x-trans-id': 'tx39e5ffb952fc42909a818-0057628a58', 'date': 'Thu, 16 Jun 2016 11:15:36 GMT', 'content-type': 'text/html; charset=UTF-8', 'connection': 'close'}
Body:
2016-06-16 07:15:36,718 15907 INFO [tempest.lib.common.rest_client] Request (ContainerTest:test_versioned_container): 204 HEAD http://10.0.0.101:8080/v1/AUTH_65f22b448e6c4781bcd5d3eb1796e8b9/tempest-TestVersionContainer-734473029 0.009s
2016-06-16 07:15:36,719 15907 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
Body: None
Response - Headers: {'status': '204', 'content-length': '0', 'x-container-object-count': '0', 'content-location': 'http://10.0.0.101:8080/v1/AUTH_65f22b448e6c4781bcd5d3eb1796e8b9/tempest-TestVersionContainer-734473029', 'accept-ranges': 'bytes', 'x-storage-policy': 'Policy-0', 'x-container-bytes-used': '0', 'connection': 'close', 'x-timestamp': '1466075736.69614', 'x-trans-id': 'tx88dca78b4ccd4be0a9eb2-0057628a58', 'date': 'Thu, 16 Jun 2016 11:15:36 GMT', 'content-type': 'application/json; charset=utf-8'}
Body:
2016-06-16 07:15:36,742 15907 INFO [tempest.lib.common.rest_client] Request (ContainerTest:test_versioned_container): 201 PUT http://10.0.0.101:8080/v1/AUTH_65f22b448e6c4781bcd5d3eb1796e8b9/tempest-TestBaseContainer-179915470 0.023s
2016-06-16 07:15:36,743 15907 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'X-versions-Location': 'tempest-TestVersionContainer-734473029', 'X-Auth-Token': ''}
Body: None
Response - Headers: {'status': '201', 'content-length': '0', 'content-location': 'http://10.0.0.101:8080/v1/AUTH_65f22b448e6c4781bcd5d3eb1796e8b9/tempest-TestBaseContainer-179915470', 'x-trans-id': 'tx320c91518c554a5c958aa-0057628a58', 'date': 'Thu, 16 Jun 2016 11:15:36 GMT', 'content-type': 'text/html; charset=UTF-8', 'connection': 'close'}
Body:
2016-06-16 07:15:36,753 15907 INFO [tempest.lib.common.rest_client] Request (ContainerTest:test_versioned_container): 204 HEAD http://10.0.0.101:8080/v1/AUTH_65f22b448e6c4781bcd5d3eb1796e8b9/tempest-TestBaseContainer-179915470 0.010s
2016-06-16 07:15:36,753 15907 DEBUG [tempest.lib.common.rest_client] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
Body: None
Response - Headers: {'status': '204', 'content-length': '0', 'x-container-object-count': '0', 'content-location': 'http://10.0.0.101:8080/v1/AUTH_65f22b448e6c4781bcd5d3eb1796e8b9/tempest-TestBaseContainer-179915470', 'accept-ranges': 'bytes', 'x-storage-policy': 'Policy-0', 'x-container-bytes-used': '0', 'connection': 'close', 'x-timestamp': '1466075736.72735', 'x-trans-id': 'tx81323faab1994f1c97d98-0057628a58', 'date': 'Thu, 16 Jun 2016 11:15:36 GMT', 'content-type': 'application/json; charset=utf-8'}
Body:
}}}

Traceback (most recent call last):
File "tempest/api/object_storage/test_object_version.py", line 69, in test_versioned_container
vers_container_name)
File "tempest/api/object_storage/test_object_version.py", line 45, in assertContainer
self.assertEqual(header_value, versioned)
File "/home/stack/tempest/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
self.assertThat(observed, matcher, message)
File "/home/stack/tempest/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 'Missing Header' != 'tempest-TestVersionContainer-734473029'

Ran 1 test in 1.537s
FAILED (failures=1)

Okay, it seems to be a real bug! See, there’s an assertion error!
First we need to find the most important line. For me, it’s the Missing Header line. Let’s see if we can find something on Google.
I usually remove the quotation marks and random numbers, since these are generated locally; if you run the same test again, the numbers at the end will change. So I searched for:

impl.MismatchError: Missing Header != tempest-TestVersionContainer

But Google didn’t bring up anything useful, so it’s time to debug!

Installing pudb

Pudb is a nice Python debugger that gives you an ncurses-like interface where you can run your code line by line and see where the error appears.
Installing it is pretty straightforward:

pip install pudb

Debugging with pudb

In order to debug, you need to edit the Python file that you want to debug, import pudb, and add a pudb.set_trace() where you want to start debugging. In our case, we want to debug tempest.api.object_storage.test_object_version.ContainerTest.test_versioned_container, so let me explain how it works:
The test id is separated by dots, where the last part is the name of the test itself, the second to last is the class name, the one before that is the file, and the rest is the path. So in our example, test_versioned_container is the name of the test, ContainerTest is the class the test lives in, test_object_version is the file (as in test_object_version.py), and tempest.api.object_storage is the path. This test is in tempest/api/object_storage, in the test_object_version.py file. See, simple!
And one more tip: who is responsible for object_storage? Swift! So we already know where to look for clues.
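The naming rule above can be checked with a few lines of plain string handling (just an illustration; nothing here touches tempest itself):

```python
# Map a dotted test id to its place in the source tree.
test_id = ('tempest.api.object_storage.'
           'test_object_version.ContainerTest.test_versioned_container')

parts = test_id.split('.')
test_name = parts[-1]                 # the test itself
class_name = parts[-2]                # the class holding the test
file_name = parts[-3] + '.py'         # the file, with .py added back
package_path = '/'.join(parts[:-3])   # the directory path

print(package_path + '/' + file_name)  # tempest/api/object_storage/test_object_version.py
```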

So, now we know that we need to edit tempest/api/object_storage/test_object_version.py. Let’s do it: at the beginning of the file, together with the other imports, import pudb:

Now let’s find our test test_versioned_container in this same file, and add a pudb.set_trace() in the beginning of the test:
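As a reference, here is a minimal sketch of what the edit looks like. The class body is an abridged stand-in, not the real tempest code (the real method has much more in it), and the import is guarded here only so the sketch runs even where pudb isn’t installed; in the real file you simply write `import pudb` with the other imports:

```python
try:
    import pudb          # in the real file, just `import pudb` at the top
except ImportError:      # guarded only so this sketch runs without pudb
    pudb = None


class ContainerTest(object):           # abridged stand-in for the real class
    def test_versioned_container(self):
        if pudb:
            pudb.set_trace()           # execution pauses here; the pudb UI opens
        # ... the rest of the original test body stays unchanged ...
```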

Well, you can see that pudb.set_trace() was inserted at the beginning of test_versioned_container. Also, please notice that we found something that might interest us: this test should only run if object_versioning is enabled in Swift.
So, let’s run this test now with our pudb and you will see what happens:

python -m testtools.run tempest.api.object_storage.test_object_version.ContainerTest.test_versioned_container

The first time you run it, pudb will open and show some information; just press enter:

Next it will ask about some settings. Usually, I just check the option to show line numbers, then press the right arrow key and enter:

Now you will see the main window:

Now you can debug line by line. For example, in the next window you will see that I pressed n (next line), n again, and then pressed control + x on my keyboard and typed the name of the vers_container_name variable:

If you press control + x again, you’ll be back in your code and you can use n for the next line.
Here I was executing line by line until I reached the exception:

Now that you know where your code is breaking, we can go deeper into the self.assertContainer method. I’ll press b to add a breakpoint and rerun our test. To exit, just press c (continue) and pudb will exit because of the exception.
On the next run, it will start at the line containing the pudb.set_trace(). You can then press c to go straight to the line with the breakpoint, where you can press s (step into), and then next, next and so on, until you see where it fails:

So we see that the x-versions-location header wasn’t found, which means this feature isn’t enabled for us. But for some reason it’s enabled in CONF.object_storage_feature_enabled.object_versioning. Let’s check whether it’s set in our tempest.conf, and we can see that it’s not (sorry, tools/config_tempest.py isn’t perfect). So let’s see if it is enabled by default:

grep "object_versioning" * -R
api/object_storage/test_object_version.py: not CONF.object_storage_feature_enabled.object_versioning,
config.py: cfg.BoolOpt('object_versioning',

So, in tempest/config.py we see that the default is set to True. We have two options: the first is to create a section [object-storage-feature-enabled] in tempest.conf and add object_versioning = False, and then this test will be skipped; or we can log into our controller, enable it in /etc/swift/container-server.conf, restart the Swift service, and of course open a bug, because this misconfiguration is an error!
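For the first option, the tempest.conf fragment would look like this (section and option names as they appear in tempest/config.py):

```
[object-storage-feature-enabled]
object_versioning = False
```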

In your controller:

sudo crudini --set /etc/swift/container-server.conf DEFAULT allow_versions true
sudo systemctl restart openstack-swift-container*

Now let’s go back to the undercloud and see if it works.

python -m testtools.run tempest.api.object_storage.test_object_version.ContainerTest.test_versioned_container
Tests running...

Ran 1 test in 4.140s
OK

And congratulations! One more test debugged!
And that’s it! If you have comments, doubts, or want to add more information, feel free to contact me.

Testing http requests with mock or HTTMock?

So, these days I was working on a project that uses requests to make some http/https calls:

import requests


URL = 'https://us.api.battle.net/wow/auction/data/{}?locale=en_US'
API = 'myapikey'

def request_with_api(url):
    apikey = {'apikey': API}
    return requests.get(url, params=apikey)


def get_auctions_url(realm):
    url = URL.format(realm)
    r = request_with_api(url)
    if r.status_code == 200:
        data = r.json()
        data_url = data['files'][0]['url']
        return data_url
    return None


def get_auctions(realm):
    url = get_auctions_url(realm)
    if url:
        r = request_with_api(url)
        if r.status_code == 200:
            return r.json()
        else:
            raise Exception('Error in return code: %s' % r.status_code)
    else:
        raise Exception('Auction data not found')


if __name__ == '__main__':
    auctions = get_auctions('medivh')
    print auctions

And as usual, I wanted to have my unit tests. requests is very tricky, because you don’t want to keep making requests to the real server every time you run your unit tests. Here are a couple of reasons:

  1. I’d depend on internet access to reach the server all the time
  2. The server has a request limit that I don’t want to waste on testing
  3. Some servers may block your account if you make too many requests in a short amount of time.

Mocking

So, the easiest way to test would be using mock, and then do something like this:

import unittest
import mock

import wow

class TestWow(unittest.TestCase):

    @mock.patch('wow.request_with_api')
    def test_get_auctions_url(self, wow_patch):
        wow_patch.return_value.status_code = 200
        wow_patch.return_value.json.return_value = {'files': [
            {'url': ('http://auction-api-us.worldofwarcraft.com/'
                     'auction-data')}
        ]}
        data = wow.get_auctions_url('medivh')
        wow_patch.assert_called_once()
        self.assertEquals(data,
                'http://auction-api-us.worldofwarcraft.com/auction-data')

        wow_patch.reset_mock()
        wow_patch.return_value.status_code = 403
        data = wow.get_auctions_url('medivh')
        wow_patch.assert_called_once()
        self.assertIsNone(data)


if __name__ == '__main__':
    unittest.main()

This looks good: all I have to do is set the mock return values, reset the mock, set the status_code and the return of the r.json() call, all inside our tests. Of course, I could create a setUp method and configure the values there:

import unittest
import mock

import wow

class TestWow(unittest.TestCase):
    def setUp(self):
        self.ok_return = ('http://auction-api-us.worldofwarcraft.com/'
                          'auction-data')
        self.json_ok_return = {
            'files': [
                {
                    'url': self.ok_return
                }
            ]
        }

    @mock.patch('wow.request_with_api')
    def test_get_auctions_url(self, wow_patch):
        wow_patch.return_value.status_code = 200
        wow_patch.return_value.json.return_value = self.json_ok_return
        data = wow.get_auctions_url('medivh')
        wow_patch.assert_called_once()
        self.assertEquals(data, self.ok_return)

        wow_patch.reset_mock()

        wow_patch.return_value.status_code = 403
        data = wow.get_auctions_url('medivh')
        wow_patch.assert_called_once()
        self.assertIsNone(data)


if __name__ == '__main__':
    unittest.main()

Now, so far we are only testing the get_auctions_url function. Let’s add a test for the get_auctions function, which calls get_auctions_url as well:

import unittest
import mock

import wow


class TestWow(unittest.TestCase):
    def setUp(self):
        self.ok_return = ('http://auction-api-us.worldofwarcraft.com/'
                          'auction-data')
        self.json_ok_return = {
            'files': [
                {
                    'url': self.ok_return
                }
            ]
        }

        self.json_auction_data = {
            'content': {
                "realms": [
                    {"name": "Medivh", "slug": "medivh"},
                    {"name": "Exodar", "slug": "exodar"}
                ],
                "auctions": [
                    {
                        "auc": 1283349179, "item": 129158,
                        "owner": "Toiletcow",
                        "ownerRealm": "Medivh", "bid": 8360, "buyout": 8800,
                        "quantity": 1, "timeLeft": "LONG", "rand": 0,
                        "seed": 0, "context": 0
                    }
                ]
            }
        }

    @mock.patch('wow.request_with_api')
    def test_get_auctions_url(self, wow_patch):
        wow_patch.return_value.status_code = 200
        wow_patch.return_value.json.return_value = self.json_ok_return
        data = wow.get_auctions_url('medivh')
        wow_patch.assert_called_once()
        self.assertEquals(data, self.ok_return)

        wow_patch.reset_mock()

        wow_patch.return_value.status_code = 403
        data = wow.get_auctions_url('medivh')
        wow_patch.assert_called_once()
        self.assertIsNone(data)

    @mock.patch('wow.get_auctions_url')
    @mock.patch('wow.request_with_api')
    def test_get_auctions(self, request_with_api_mock,
                          get_auctions_url_mock):
        get_auctions_url_mock.return_value = None
        self.assertRaises(Exception, wow.get_auctions, 'medivh')
        get_auctions_url_mock.assert_called_once()

        get_auctions_url_mock.reset_mock()

        request_with_api_mock.return_value.status_code = 200
        request_with_api_mock.return_value.json.return_value = (
                self.json_auction_data)
        get_auctions_url_mock.return_value = self.ok_return
        get_auctions_url_mock.json.return_value = self.json_auction_data
        data = wow.get_auctions('medivh')
        self.assertDictEqual(data, self.json_auction_data)


if __name__ == '__main__':
    unittest.main()

So, now things are starting to get a little bit messy: here we need to mock both wow.get_auctions_url and wow.request_with_api, reset the mocks, and assert that each mock was called once. It works, but still, it’s a mess for the next person who will read this code.

HTTMock

The coolest part of HTTMock is that it returns a requests response object for you, which means you can call other requests methods if you need them, without having to mock those methods yourself. In our example we only mocked json() and status_code, but imagine if we had to mock the headers, responses, etc. HTTMock does that for you.

In my opinion, HTTMock keeps your test methods cleaner:

import unittest
from httmock import urlmatch, HTTMock

import wow


@urlmatch(netloc=r'(.*\.)?us\.api\.battle\.net.*$')
def auctions_url(url, request):
    not_found = ('http://auction-api-us.worldofwarcraft.com/'
                 'auction-data/notfound')
    found = 'http://auction-api-us.worldofwarcraft.com/auction-data'

    if 'fail' in url.path:
        return {'status_code': 403}

    return {
           'status_code': 200,
           'content':
           {
               'files': [
                   {'url': not_found if 'notfound' in url.path else found}
               ]
           }
       }


class TestAuctions(unittest.TestCase):

    def test_get_auctions_url(self):
        with HTTMock(auctions_url):
            data = wow.get_auctions_url('medivh')
            self.assertEquals(data, ('http://auction-api-us.worldofwarcraft.'
                                     'com/auction-data'))

        with HTTMock(auctions_url):
            data = wow.get_auctions_url('fail')
            self.assertIsNone(data)

if __name__ == '__main__':
    unittest.main()

As you can see, we can mark a function with the @urlmatch decorator and all requests that match that particular regular expression will be handled by it.
Inside that function we can also add some conditionals, as I did to test the ‘not found’ case.
Of course, there are other decorators. You can use, for example, the @all_requests decorator:

import unittest
from httmock import all_requests, HTTMock

import wow


@all_requests
def auctions_url_req(url, request):
    not_found = ('http://auction-api-us.worldofwarcraft.com/'
                 'auction-data/notfound')
    found = 'http://auction-api-us.worldofwarcraft.com/auction-data'
    if 'fail' in url.path:
        return {'status_code': 403}

    return {
           'status_code': 200,
           'content':
           {
               'files': [
                   {'url': not_found if 'notfound' in url.path else found}
               ]
           }
       }


class TestAuctions(unittest.TestCase):

    def test_get_auctions_url(self):
        with HTTMock(auctions_url_req):
            data = wow.get_auctions_url('medivh')
            self.assertEquals(data, ('http://auction-api-us.worldofwarcraft.'
                                     'com/auction-data'))

        with HTTMock(auctions_url_req):
            data = wow.get_auctions_url('fail')
            self.assertIsNone(data)

if __name__ == '__main__':
    unittest.main()

In this case, all the requests you make, no matter the URL, will be handled by auctions_url_req:

import requests

with HTTMock(auctions_url_req):
    data = requests.get('https://www.google.com')
    print(data.status_code)
    print(data.json())

And you get this output:

200
{u'files': [{u'url': u'http://auction-api-us.worldofwarcraft.com/auction-data'}]}

Bottom line

Whether or not to use HTTMock is up to you. I believe it can make your tests cleaner: it’s much easier to look at a with HTTMock block and say: oh, hey, so this will return whatever the handler returns, cool!
While using mock only, I have to be like, okay… this object is a mock, and the return value of this method here is 200, and oh, it’s resetting the mock here, so let’s forget about all of this, and start over with the… well, you get the idea.
And of course, you can use more than one handler for the requests, as you can see in the test_get_auctions test:

import unittest
from httmock import urlmatch, HTTMock, all_requests

import wow

AUCTION_CONTENT = {
    "realms": [
        {"name": "Medivh", "slug": "medivh"},
        {"name": "Exodar", "slug": "exodar"}
    ],
    "auctions": [
        {
            "auc": 1283349179, "item": 129158,
            "owner": "Toiletcow",
            "ownerRealm": "Medivh", "bid": 8360, "buyout": 8800,
            "quantity": 1, "timeLeft": "LONG", "rand": 0,
            "seed": 0, "context": 0
         }
    ]
}


@urlmatch(netloc=r'(.*\.)?us\.api\.battle\.net.*$')
def auctions_url(url, request):
    not_found = ('http://auction-api-us.worldofwarcraft.com/'
                 'auction-data/notfound')
    found = 'http://auction-api-us.worldofwarcraft.com/auction-data'

    if 'fail' in url.path:
        return {'status_code': 403}

    return {
           'status_code': 200,
           'content':
           {
               'files': [
                   {'url': not_found if 'notfound' in url.path else found}
               ]
           }
       }


@urlmatch(path=r'^\/auction\-data.*$')
def auctions(url, request):
    if 'notfound' in url.path:
        return {'status_code': 404}

    return {'status_code': 200,
            'content': AUCTION_CONTENT}


class TestAuctions(unittest.TestCase):

    def test_get_auctions_url(self):
        with HTTMock(auctions_url):
            data = wow.get_auctions_url('medivh')
            self.assertEquals(data, ('http://auction-api-us.worldofwarcraft.'
                                     'com/auction-data'))

        with HTTMock(auctions_url):
            data = wow.get_auctions_url('fail')
            self.assertIsNone(data)

    def test_get_auctions(self):
        with HTTMock(auctions, auctions_url):
            data = wow.get_auctions('medivh')
            self.assertDictEqual(data, AUCTION_CONTENT)

        with HTTMock(auctions, auctions_url):
            self.assertRaises(Exception, wow.get_auctions, 'notfound')

if __name__ == '__main__':
    unittest.main()

And that’s it! Now you have two choices to test your python code that uses requests.

Writing tests in python for beginners – Part 1 – doctest

So, I just started taking an interest in Python testing and I would like to share what I’ve learned so far.

Let’s skip all that theory about how tests are important for the code, bla bla bla; if you want to learn testing, you probably already have your own reasons, so let’s jump to the part where you actually write code!

Our application

Here’s a simple function that calculates Fibonacci numbers. Let’s save it in the file fibonacci.py:

def fibonacci(number):
    if number < 2:
        return number
    return fibonacci(number - 1) + fibonacci(number - 2)

So, as you can see, it’s a very simple function that calculates a Fibonacci number. How can we ensure that it’s working? Well, the easiest way is just to invoke the code with different values and check that the result is what we expect. In other words, you’re doing exactly what doctest is intended to do!
So, you would probably write a Python file, let’s say test_fibonacci.py, with the following content:

from fibonacci import fibonacci

def test_fibonacci():
    expected_result = [0, 1, 1, 2, 3, 5]
    for x in range(6):
        result = fibonacci(x)
        if result != expected_result[x]:
            print "Failing on calculate fibonacci %s" % x

if __name__ == '__main__':
    print "Testing fibonacci"
    test_fibonacci()

Well, it might work, until you want to calculate fibonacci(200); then you’re going to need to add a new list, alter the test code, etc.
What if we now only accept positive numbers, and negative numbers throw an exception? A try/except block? Well, soon you’re writing more code and adding more complexity than you were expecting. Maybe you’ll even need to test your test code itself to ensure it’s working fine (are you feeling the inception here?)

Doctest

So, doctest can help you with some of these problems! Let’s see. First of all, you’re going to need to create a simple text file, fibonacci.txt, with the following content:

>>> from fibonacci import fibonacci

>>> fibonacci(0)
0

>>> fibonacci(1)
1

>>> fibonacci(2)
1

>>> fibonacci(3)
2

>>> fibonacci(4)
3

>>> fibonacci(7)
12

So, basically, doctest expects every line starting with “>>>” to be Python code (we are importing the fibonacci function in the first line, for example), and any other line to be the result, looking just like the Python shell.

Assuming that both fibonacci.py and fibonacci.txt are in the same directory, you can run the doctest with the command:

$ python -m doctest fibonacci.txt
**********************************************************************
File "fibonacci.txt", line 18, in fibonacci.txt
Failed example:
    fibonacci(7)
Expected:
    12
Got:
    13
**********************************************************************
1 items had failures:
   1 of   7 in fibonacci.txt
***Test Failed*** 1 failures.

Oops! Something went wrong! Don’t worry, that was expected. As you can see, doctest shows very good output when an error occurs: it shows you the line where the error happened, what the expected result was, and what result doctest actually got. So, after fixing it and running again, we get no error:

>>> from fibonacci import fibonacci

>>> fibonacci(0)
0

>>> fibonacci(1)
1

>>> fibonacci(2)
1

>>> fibonacci(3)
2

>>> fibonacci(4)
3

>>> fibonacci(7)
13
$ python -m doctest fibonacci.txt

Now let’s go further and say: hey, I don’t want negative numbers to be allowed, I want them to throw an exception. In this case, we just need to add a new line in our fibonacci.txt testing fibonacci(-1) and, of course, checking that the result is an exception.
If we did that in our test_fibonacci.py code, we would have to use a try/except block, call fibonacci(-1) inside it (outside the initial loop, of course), and if something went wrong, we wouldn’t have the nice output from doctest.

So, let’s change our fibonacci.py to not accept negative numbers:

def fibonacci(number):
    if number == 1 or number == 0:
        return number
    if number < 0:
        raise ValueError('Number is negative')
    return fibonacci(number - 1) + fibonacci(number - 2)

Even if we run our doctest after these changes, we get no errors, but we are not testing the negative number yet. So, let’s add a fibonacci(-1) to our fibonacci.txt test.

Just a heads up: if we run fibonacci(-1) we will get an exception like this:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "fibonacci.py", line 5, in fibonacci
    raise ValueError('Number is negative')
ValueError: Number is negative

And you’re probably thinking that you can copy and paste this error into our fibonacci.txt and it will work. But what if the file changes? You can add more code to the file, and the line where the error is raised (in this case, line 5) might change. Luckily, doctest only cares about the first and last lines, so our code in the fibonacci.txt file will look like this:

>>> fibonacci(-1)
Traceback (most recent call last):
ValueError: Number is negative

And we are done! Now we have a set of tests for our fibonacci code!

But what if you don’t want to have a txt file and still want your tests? You can write doctests directly in your code, in the docstring:

def fibonacci(number):
    '''
    This is a fibonacci function.
    Example of usage:
    >>> fibonacci(0)
    0
    >>> fibonacci(1)
    1
    >>> fibonacci(2)
    1
    >>> fibonacci(3)
    2
    >>> fibonacci(4)
    3
    >>> fibonacci(7)
    13
    >>> fibonacci(-1)
    Traceback (most recent call last):
    ValueError: Number is negative
    '''
    if number == 1 or number == 0:
        return number
    if number < 0:
        raise ValueError('Number is negative')
    return fibonacci(number - 1) + fibonacci(number - 2)

and you can run python -m doctest fibonacci.py.

More

Doctest has more features, as you can see below:

ELLIPSIS

According to our fibonacci code, if you pass a negative number it will raise a ValueError exception with the message ‘Number is negative’. In some cases we don’t need to know which message is returned; we just want to know that an exception is being raised. Another example is when the output is the default __repr__ output: every time it’s a different memory address and you can’t control it. For these kinds of situations, we can use ELLIPSIS, as in the example below.

>>> fibonacci(-1) #doctest: +ELLIPSIS
Traceback (most recent call last):
ValueError: ...

The line containing #doctest: +ELLIPSIS tells doctest that the ‘…’ can match anything after ‘ValueError: ‘
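The __repr__ case mentioned above works the same way. Here’s a sketch with a made-up Thing class (not part of the fibonacci example):

```python
class Thing(object):
    """
    The default __repr__ includes a memory address that changes on
    every run, so we let '...' match the module prefix and the address:

    >>> Thing()  #doctest: +ELLIPSIS
    <...Thing object at 0x...>
    """
```

Running python -m doctest against a file containing this class passes, even though the address is different each time.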

SKIP

As the name says, it skips a test. This can be useful when you know that some test is failing due to some known bug that isn’t fixed yet.

>>> fibonacci(-1) #doctest: +ELLIPSIS +SKIP
Traceback (most recent call last):
ValueError: ...

As you can see, you can use the two options, ELLIPSIS and SKIP together.

Conclusion

Tests can be fun and easy, and they help keep your code working properly through the whole development cycle.
For more information about doctest, visit the doctest website.

C cedilla on Fedora 22

After an upgrade, it’s common to run into the problem of ç showing up as ć for those of us who use the standard US keyboard. On Fedora 22 I found this solution, which for me was much simpler than editing a bunch of files.

Note that this can also be done graphically with the GUI counterparts (dconf-editor and im-chooser).

Install the gtk3-immodules and gtk3-immodule-xim packages:

sudo dnf install -y gtk3-immodules gtk3-immodule-xim

Change the default input method:

imsettings-switch im-cedilla

If for some reason you see a message like this:

GDBus.Error:org.gtk.GDBus.UnmappedGError.Quark._imsettings_2derror_2dquark.Code5: Current desktop isn't targeted by IMSettings.

it’s because GNOME’s default is to not accept the keyboard immodule, so we have to change that in dconf:

dconf write /org/gnome/settings-daemon/plugins/keyboard/active false

Restart your session and be happy! Or you can always use Alt Gr + c and you’ll get the ç that way. It’s a matter of getting used to it.

New small project: stackquery-dashboard

I’ve started a new project called stackquery-dashboard. Basically, it’s a web interface that shows reports about user contributions to OpenStack, using the stackalytics.com API and part of the code from launchpadstats by Martina Kollarova.

The first reason I started this project is that stackalytics.com only shows reports for each user, and sometimes managers at companies working with OpenStack need an overview of all their employees at once, and it’s kind of boring to look up each user and then build a spreadsheet.

The project has a web interface where you can create teams, assign users to those teams, and generate reports based on several OpenStack projects (actually, right now it’s only OpenStack and Stackforge).

The second reason is that for a long time I had wanted to use Python for the web, and this was a good opportunity to start.

Flask is really amazing and very easy to learn. I read this book and in one day I was already doing some cool stuff.
Also, since I’m not a web designer, I had the opportunity to learn about Twitter Bootstrap, which is very good for people who have no idea how CSS and JavaScript work and don’t want to spend a lot of time learning before starting a project with these technologies.

The project is very simple and still in its early stages. I’m not sure it will be useful for anyone, but if it is, please let me know. I would be glad to hear about it.

Next steps
  • I need to work on the unit tests (yes, I started the project before creating the unit tests, but I’ll fix that soon)
  • Documentation
  • More APIs
  • More types of reports
  • Make the project work with modules, so you can enable modules for other types of reports
  • Create a config file
  • Save the form state between posts
  • Better messages when deleting, updating, and creating teams, users, projects, etc.
  • Can’t think of anything else for now.

And if you’re interested, here are a few screenshots:
