
qase-python's Issues

Problem to recognize the API token suggested

Hello everyone,
I'm having a problem trying to integrate Qase with Robot Framework on Windows.
I have already installed the required dependency, but although the Qase reporter is installed, it does not recognize the suggested API token at execution time.

It always returns:
QASE_API_TOKEN=631bf2fe3087c04875eb646a30744e911edb8e44 : The term 'QASE_API_TOKEN=631bf2fe3087c04875eb646a30744e911edb8e44' is not
recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path
was included, verify that the path is correct and try again.
At line:1 char:1

  • QASE_API_TOKEN=631bf2fe3087c04875eb646a30744e911edb8e44 QASE_PROJECT= ...
  •   + CategoryInfo          : ObjectNotFound: (QASE_API_TOKEN=...0744e911edb8e44:String) [], CommandNotFoundException
      + FullyQualifiedErrorId : CommandNotFoundException


Note: some of my dependencies include:
qase-robotframework 1.1.0
qaseio 2.2.4
reportlab 3.6.2
requests 2.24.0
requests-oauthlib 1.3.0
robotframework 4.0.1
robotframework-appiumlibrary 1.5.0.6
robotframework-jsonlibrary 0.3.1
robotframework-pabot 2.1.0
robotframework-requests 0.9.2
robotframework-stacktrace 0.4.1
rsa 4.7.2
selenium 3.141.0

Could someone help me with this issue?
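For context, the `VAR=value command` prefix shown in the error is POSIX-shell syntax; PowerShell treats the whole prefix as a command name, which is exactly the CommandNotFoundException above. A minimal sketch of both forms, using a placeholder token instead of a real one:

```shell
# POSIX sh/bash: the prefix sets the variable for that one command only.
QASE_API_TOKEN="example-token" sh -c 'echo "$QASE_API_TOKEN"'

# PowerShell has no prefix form; set the variables first, then run robot:
#   $env:QASE_API_TOKEN = "example-token"
#   $env:QASE_PROJECT   = "PROJ"
#   robot --listener qaseio.robotframework.Listener tests/
```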

[qase-robotframework] Make tags case-insensitive

Hello,

We are using the qase-robotframework listener to feed our runs' test results into Qase, and we use certain formatting tools in our project that make all tags on test cases lowercase. Recently I found out that this leaves the test runs empty, because no link is made between the test cases in Qase and the lowercase tags.

Would it be possible to make these tags case-insensitive when parsing them?
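For illustration, a case-insensitive match could look like the sketch below. The `Q-123` tag format and the function name are assumptions for this example, not the listener's actual code:

```python
import re

def extract_case_ids(tags):
    """Return Qase case IDs from tags like 'Q-123', accepting any case."""
    ids = []
    for tag in tags:
        # re.IGNORECASE makes 'q-12' and 'Q-12' equivalent
        m = re.fullmatch(r"q-(\d+)", tag.strip(), flags=re.IGNORECASE)
        if m:
            ids.append(int(m.group(1)))
    return ids

print(extract_case_ids(["q-12", "Q-7", "smoke"]))  # → [12, 7]
```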

Generate test cases automatically for pytest

Hi,

I noticed the qaseio library already has full support for test case creation/deletion, but it doesn't seem to be used in the pytest hooks. This means that, for pytest, users still have to create the test case in Qase and link the test ID in their code manually. Is there any plan to implement this feature for pytest anytime soon?

qase-pytest slows down test execution

With qase-pytest, the execution time is 7 min 33 s; without qase-pytest it is ~45 s.

Would it be possible to bulk-update the test results once all the tests have executed?

Qase is amazing, but this issue is holding me back from integrating it with my e2e suite.

Looking forward to a quick fix here.
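The bulk idea suggested above can be sketched generically: buffer results during the run and flush them in a single request at session end, instead of one HTTP call per test. The class and callable names here are illustrative, not qase-pytest internals:

```python
class BufferedReporter:
    """Collect per-test results and send them in one bulk request."""

    def __init__(self, send_bulk):
        self._send_bulk = send_bulk   # callable taking a list of results
        self._buffer = []

    def report(self, result):
        self._buffer.append(result)   # no network call per test

    def flush(self):
        # One request for the whole session, called e.g. at sessionfinish
        if self._buffer:
            self._send_bulk(self._buffer)
            self._buffer = []

sent = []
r = BufferedReporter(sent.append)
r.report({"case_id": 1, "status": "passed"})
r.report({"case_id": 2, "status": "failed"})
r.flush()
print(len(sent[0]))  # → 2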

Python Client. The attachments uploaded during a result submission are not shown in the test run

STR:

  • use client v2.2.4 or later
  • report the results into a run using the Python client
  • send attachments with a result

AR:

  • in the UI of the test run, attachments are not shown
  • attachments are uploaded using Qase.attachments.upload(self.project_code, file_path)
  • the request is successful
  • but the run shows no attachment for the case that was submitted

ER:

  • attachments passed with a result should be shown in the run UI
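For what it's worth, my understanding of the expected flow is that the hash returned by the upload call must end up in the result payload for the run UI to show anything. The names below are illustrative stand-ins, not the real qaseio client API:

```python
def upload_attachment(path):
    # Stand-in for Qase.attachments.upload(); the real call returns a hash.
    return "a1b2c3"

def build_result(case_id, status, attachment_paths):
    # The returned hashes are attached to the result payload itself.
    hashes = [upload_attachment(p) for p in attachment_paths]
    return {"case_id": case_id, "status": status, "attachments": hashes}

print(build_result(42, "passed", ["screenshot.png"]))
```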

qaseio.exceptions.ApiTypeError: Invalid type for variable 'id'. Required value type is int and passed type was str at ['id']

I get the following error when running pytest with the following command:

Launching pytest with arguments --qase-mode=testops --qase-to-api-token=<> --qase-to-project=<> --qase-to-run=12

Using the following:

qase-pytest==4.0.0
qaseio==3.0.0
Traceback (most recent call last):
  File "/Users/krishnayadav/Library/Application Support/JetBrains/PyCharm2022.1/plugins/evaluate-async-code/_pydevd_async_debug.py", line 627, in <module>
    run_path(sys.argv.pop(1), {}, "__main__")  # pragma: no cover
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 263, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 96, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py", line 2181, in <module>
    main()
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py", line 2172, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py", line 1484, in run
    return self._exec(is_module, entry_point_fn, module_name, file, globals, locals)
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py", line 1491, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/_jb_pytest_runner.py", line 51, in <module>
    sys.exit(pytest.main(args, plugins_to_load + [Plugin]))
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/_pytest/config/__init__.py", line 167, in main
    ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
    return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
    return outcome.get_result()
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
    res = hook_impl.function(*args)
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/_pytest/main.py", line 317, in pytest_cmdline_main
    return wrap_session(config, _main)
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/_pytest/main.py", line 305, in wrap_session
    config.hook.pytest_sessionfinish(
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
    return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_callers.py", line 55, in _multicall
    gen.send(outcome)
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/_pytest/terminal.py", line 808, in pytest_sessionfinish
    outcome.get_result()
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
    res = hook_impl.function(*args)
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/qaseio/pytest/plugin.py", line 99, in pytest_sessionfinish
    self.reporter.complete_run()
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/qaseio/pytest/testops.py", line 262, in complete_run
    self._complete_run()
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/qaseio/pytest/testops.py", line 113, in _complete_run
    res = api_runs.get_run(self.project_code, self.run_id).result
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/qaseio/api/runs_api.py", line 805, in get_run
    return self.get_run_endpoint.call_with_http_info(**kwargs)
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/qaseio/api_client.py", line 859, in call_with_http_info
    self.__validate_inputs(kwargs)
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/qaseio/api_client.py", line 753, in __validate_inputs
    fixed_val = validate_and_convert_types(
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/qaseio/model_utils.py", line 1583, in validate_and_convert_types
    converted_instance = attempt_convert_item(
  File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/qaseio/model_utils.py", line 1458, in attempt_convert_item
    raise get_type_error(input_value, path_to_item, valid_classes,
qaseio.exceptions.ApiTypeError: Invalid type for variable 'id'. Required value type is int and passed type was str at ['id']
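A minimal reproduction of the type check that raises here: the generated qaseio client validates `id` as int, but `--qase-to-run=12` arrives from the CLI as a string. Casting before the call avoids the error (this is a sketch of the check, not the client's actual code):

```python
def get_run(project_code, run_id):
    # Mirrors the client-side validation that raised ApiTypeError above
    if not isinstance(run_id, int):
        raise TypeError(
            "Invalid type for variable 'id'. Required value type is int "
            f"and passed type was {type(run_id).__name__} at ['id']"
        )
    return {"project": project_code, "id": run_id}

run_id = "12"                              # as parsed from --qase-to-run=12
print(get_run("DEMO", int(run_id))["id"])  # → 12
```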

Support auto creating test cases for all reporters

We need to add the ability to create test cases when sending results via the bulk method (when there is no binding to a test case ID in Qase TMS). Several scenarios need to be supported:

Test result status in test run stays 'in process' after updating the status to PASSED

The test broke during testing, but no ending action was performed (no Qase API call to end the test).

The test result details page displays all steps as passed, and the test result status is passed:

TestRunResultInfo(hash='ad18416977acea13f4f4acd0a62954284ac7f490', comment='', stacktrace=None, run_id=258, case_id=1172, steps=None, status='Passed', is_api_result=False, time_spent=None, end_time='2022-02-09 09:02:37', attachments=[])

But the Test result page shows it as in process.

DeprecationWarning: HTTPResponse.getheader

qaseio\rest.py:43: DeprecationWarning: HTTPResponse.getheader() is deprecated and will be removed in urllib3 v2.1.0. Instead use HTTPResponse.headers.get(name, default).
return self.urllib3_response.getheader(name, default)

Please fix this warning; it is printed to the console after Qase reports results.
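The warning itself names the replacement. A hedged sketch of what the fix in qaseio/rest.py would look like, using a free function and a stub response here since the real class isn't reproduced:

```python
def getheader(urllib3_response, name, default=None):
    # old: return urllib3_response.getheader(name, default)  # deprecated
    # new: the headers mapping works on urllib3 1.x and 2.x alike
    return urllib3_response.headers.get(name, default)

class FakeResponse:
    """Minimal stand-in for a urllib3 HTTPResponse."""
    headers = {"Content-Type": "application/json"}

print(getheader(FakeResponse(), "Content-Type"))  # → application/json
```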

pytest-xdist incompatibility

I have an issue with qase-pytest == 3.0.0 and pytest-xdist == 2.5.0.
When I install xdist, even without using it, the qase-pytest plugin creates a test run but does not send bulk results at all; the function send_bulk_results is never executed.

If I run tests with the parameter -n=2, the qase-pytest plugin tries to send bulk results but fails with the error:

Sending results to test run ...
Error at sending results for run : Invalid type for variable 'id'. Required value type is int and passed type was str

I looked inside with a debugger and found that in https://github.com/qase-tms/qase-python/blob/master/qase-pytest/src/qaseio/pytest/plugin.py the function send_bulk_results has self.testrun_id = '', which is why it cannot send results: it does not know where to send them.
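One hypothetical way to avoid the empty testrun_id under xdist is to create the run once in the controller process and share the id with workers through an environment variable; everything below (variable name, helper) is an assumption for illustration, not plugin code:

```python
import os

def ensure_run_id(create_run):
    """Return the shared run id, creating the run only once."""
    run_id = os.environ.get("QASE_TESTRUN_ID", "")
    if not run_id:                       # controller: create the run once
        run_id = str(create_run())
        os.environ["QASE_TESTRUN_ID"] = run_id
    return int(run_id)                   # int, as the API requires

print(ensure_run_id(lambda: 258))
```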

Set loglevel in qase-robotframework

Hello,

We're using the qase-robotframework listener to feed test results into Qase, but we're not utilising the mapping of step names in code to the step names in Qase. The listener library now floods our logs with warning messages saying that no step was found for each keyword, and I'd like to control the loglevel so I can keep our project's other logs.
Currently, as a workaround, I'm setting the loglevel of the whole project to ERROR, but this makes all our logs disappear, not just the Qase warnings.
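If the listener uses the standard logging module, a narrower workaround may be to raise the level only on its logger rather than the root logger. The logger name "qaseio" below is an assumption; check the name that actually appears in your log records:

```python
import logging

# Silence only the listener's warnings (logger name is an assumption)
logging.getLogger("qaseio").setLevel(logging.ERROR)

# Keep the rest of the project's logs untouched
logging.getLogger("myproject").setLevel(logging.INFO)
```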

results.create fails if defect is not provided as boolean in req_data

If data (TestRunResultCreate) does not have defect set to either True or False, the request (results.create) fails with:

Got unexpected status code: 422 b'{"status":false,"errorMessage":"Data is invalid.","errorFields":[{"field":"defect","error":"The defect field must be true or false."}]}'

This also makes the qase-robotframework listener fail, as it expects defect to be set to either True or False.
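Until the API accepts an omitted field, a client-side workaround is to default defect to False before sending. A sketch, assuming the result is built as a plain dict (the helper name is illustrative):

```python
def normalize_result(payload):
    """Ensure 'defect' is a real boolean, since the API rejects anything else."""
    payload = dict(payload)  # don't mutate the caller's dict
    if not isinstance(payload.get("defect"), bool):
        payload["defect"] = False
    return payload

print(normalize_result({"status": "failed"})["defect"])  # → False
```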

Robot Framework test runs not flagging steps in QASE test runs.

When executing tests one by one in RF, hooked into QASE, the test steps in the test run are flagged as expected, be they passed or failed.

As soon as all the tests are executed in RF in one run, the steps under the test run in QASE are NOT flagged at all. Only the RF status (Passed || Failed) is set on the QASE test run.

Example Outputs:
Executing a single test case: clear && robot -d assets/results/ --listener qaseio.robotframework.Listener -i q-2 code/tests

Results in:
(screenshot omitted)

Console Output:

==============================================================================
Tests :: This section sets up and tears down all tests.                       
==============================================================================
Tests.Fox                                                                     
==============================================================================
Tests.Fox.Privacy Notice :: Ensure that clients has access to and can view ...
==============================================================================
[ WARN ] Missing step for name Set Selenium Timeout                           
[ WARN ] Missing step for name Evaluate                                       
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Evaluate                                       
[ WARN ] Missing step for name '${IN_PIPELINES}' == 'true'                    
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Fail                                           
[ WARN ] Missing step for name '${ENV}' not in ${ENVIRONMENTS}                
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name '${ENV}' == 'dev' or '${ENV}' == 'qa'          
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Log To Console                                 
[ WARN ] Missing step for name Pause Execution                                
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Set Log Level                                  
[ WARN ] Missing step for name Set Log Level                                  
[ WARN ] Missing step for name Evaluate                                       
[ WARN ] Missing step for name Run And Return Rc And Output                   
[ WARN ] Missing step for name "${USE_AWS_PROFILE}" == "not_set"              
[ WARN ] Missing step for name Run And Return Rc And Output                   
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Fail                                           
[ WARN ] Missing step for name ${exit_code} > 0                               
[ WARN ] Missing step for name Convert String To Json                         
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Set Log Level                                  
[ WARN ] Missing step for name Retrieve AWS Secrets                           
[ WARN ] Missing step for name Set Log Level                                  
[ WARN ] Missing step for name Start Virtual Display                          
[ WARN ] Missing step for name '${SCREEN}' == 'headless'                      
[ WARN ] Missing step for name Open Browser                                   
[ WARN ] Missing step for name '${BROWSER_TYPE}' == 'local'                   
[ WARN ] Missing step for name Open Browser                                   
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Maximize Browser Window                        
[ WARN ] Missing step for name Wait Until Location Contains                   
[ WARN ] Missing step for name Begin Session                                  
[ WARN ] Missing step for name Click Element                                  
[ WARN ] Missing step for name Switch Window                                  
[ WARN ] Missing step for name Wait Until Element Is Enabled                  
[ WARN ] Missing step for name Location Should Contain                        
[ WARN ] Missing step for name Page Should Contain                            
[ WARN ] Missing step for name Close All Browsers                             
Privacy Notice :: Validate that clients can view the privacy policy.  | PASS |
------------------------------------------------------------------------------
Tests.Fox.Privacy Notice :: Ensure that clients has access to and ... | PASS |
1 test, 1 passed, 0 failed
==============================================================================
Tests.Fox                                                             | PASS |
1 test, 1 passed, 0 failed
==============================================================================
Tests :: This section sets up and tears down all tests.               | PASS |
1 test, 1 passed, 0 failed
==============================================================================
Output:  /media/veracrypt1/sandbox/tx/rox/assets/results/output.xml
Log:     /media/veracrypt1/sandbox/tx/rox/assets/results/log.html
Report:  /media/veracrypt1/sandbox/tx/rox/assets/results/report.html 

Executing multiple test cases in RF: clear && robot -d assets/results/ --listener qaseio.robotframework.Listener code/tests

Results In:
(screenshot omitted)

Console Output:

==============================================================================
Tests :: This section sets up and tears down all tests.                       
==============================================================================
Tests.Fox                                                                     
==============================================================================
Tests.Fox.Forgot Password Request :: Validate that users can request a new ...
==============================================================================
[ WARN ] Missing step for name Set Selenium Timeout                           
[ WARN ] Missing step for name Evaluate                                       
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Evaluate                                       
[ WARN ] Missing step for name '${IN_PIPELINES}' == 'true'                    
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Fail                                           
[ WARN ] Missing step for name '${ENV}' not in ${ENVIRONMENTS}                
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name '${ENV}' == 'dev' or '${ENV}' == 'qa'          
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Log To Console                                 
[ WARN ] Missing step for name Pause Execution                                
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Set Log Level                                  
[ WARN ] Missing step for name Set Log Level                                  
[ WARN ] Missing step for name Evaluate                                       
[ WARN ] Missing step for name Run And Return Rc And Output                   
[ WARN ] Missing step for name "${USE_AWS_PROFILE}" == "not_set"              
[ WARN ] Missing step for name Run And Return Rc And Output                   
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Fail                                           
[ WARN ] Missing step for name ${exit_code} > 0                               
[ WARN ] Missing step for name Convert String To Json                         
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Set Log Level                                  
[ WARN ] Missing step for name Retrieve AWS Secrets                           
[ WARN ] Missing step for name Set Log Level                                  
[ WARN ] Missing step for name Start Virtual Display                          
[ WARN ] Missing step for name '${SCREEN}' == 'headless'                      
[ WARN ] Missing step for name Open Browser                                   
[ WARN ] Missing step for name '${BROWSER_TYPE}' == 'local'                   
[ WARN ] Missing step for name Open Browser                                   
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Maximize Browser Window                        
[ WARN ] Missing step for name Wait Until Location Contains                   
[ WARN ] Missing step for name Begin Session                                  
[ WARN ] Missing step for name Click Element                                  
[ WARN ] Missing step for name Wait Until Element Is Enabled                  
[ WARN ] Missing step for name Input Text                                     
[ WARN ] Missing step for name Wait Until Element Is Enabled                  
[ WARN ] Missing step for name Click Element                                  
[ WARN ] Missing step for name Wait Until Page Contains                       
Forgot Password Request :: Ensure users can request a new password.   | PASS |
------------------------------------------------------------------------------
Tests.Fox.Forgot Password Request :: Validate that users can reque... | PASS |
1 test, 1 passed, 0 failed
==============================================================================
Tests.Fox.Privacy Notice :: Ensure that clients has access to and can view ...
==============================================================================
[ WARN ] Missing step for name Set Selenium Timeout                           
[ WARN ] Missing step for name Evaluate                                       
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Evaluate                                       
[ WARN ] Missing step for name '${IN_PIPELINES}' == 'true'                    
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Fail                                           
[ WARN ] Missing step for name '${ENV}' not in ${ENVIRONMENTS}                
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name '${ENV}' == 'dev' or '${ENV}' == 'qa'          
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Log To Console                                 
[ WARN ] Missing step for name Pause Execution                                
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Set Log Level                                  
[ WARN ] Missing step for name Set Log Level                                  
[ WARN ] Missing step for name Evaluate                                       
[ WARN ] Missing step for name Run And Return Rc And Output                   
[ WARN ] Missing step for name "${USE_AWS_PROFILE}" == "not_set"              
[ WARN ] Missing step for name Run And Return Rc And Output                   
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Fail                                           
[ WARN ] Missing step for name ${exit_code} > 0                               
[ WARN ] Missing step for name Convert String To Json                         
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Set Log Level                                  
[ WARN ] Missing step for name Retrieve AWS Secrets                           
[ WARN ] Missing step for name Set Log Level                                  
[ WARN ] Missing step for name Start Virtual Display                          
[ WARN ] Missing step for name '${SCREEN}' == 'headless'                      
[ WARN ] Missing step for name Open Browser                                   
[ WARN ] Missing step for name '${BROWSER_TYPE}' == 'local'                   
[ WARN ] Missing step for name Open Browser                                   
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Maximize Browser Window                        
[ WARN ] Missing step for name Wait Until Location Contains                   
[ WARN ] Missing step for name Begin Session                                  
[ WARN ] Missing step for name Click Element                                  
[ WARN ] Missing step for name Switch Window                                  
[ WARN ] Missing step for name Wait Until Element Is Enabled                  
[ WARN ] Missing step for name Location Should Contain                        
[ WARN ] Missing step for name Page Should Contain                            
Privacy Notice :: Validate that clients can view the privacy policy.  | PASS |
------------------------------------------------------------------------------
Tests.Fox.Privacy Notice :: Ensure that clients has access to and ... | PASS |
1 test, 1 passed, 0 failed
==============================================================================
Tests.Fox.Sign In :: Validate login functionality.                            
==============================================================================
[ WARN ] Missing step for name Set Selenium Timeout                           
[ WARN ] Missing step for name Evaluate                                       
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Evaluate                                       
[ WARN ] Missing step for name '${IN_PIPELINES}' == 'true'                    
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Fail                                           
[ WARN ] Missing step for name '${ENV}' not in ${ENVIRONMENTS}                
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name '${ENV}' == 'dev' or '${ENV}' == 'qa'          
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Log To Console                                 
[ WARN ] Missing step for name Pause Execution                                
[ ERROR ] Calling method 'end_keyword' of listener 'qaseio.robotframework.Listener' failed: KeyError: 'NOT RUN'
[ WARN ] Missing step for name Set Log Level                                  
[ WARN ] Missing step for name Set Log Level                                  
[ WARN ] Missing step for name Evaluate                                       
[ WARN ] Missing step for name Run And Return Rc And Output                   
[ WARN ] Missing step for name "${USE_AWS_PROFILE}" == "not_set"              
[ WARN ] Missing step for name Run And Return Rc And Output                   
[ ERROR ] Calling method 'end_keyword' of listener 'qaseio.robotframework.Listener' failed: KeyError: 'NOT RUN'
[ WARN ] Missing step for name Fail                                           
[ WARN ] Missing step for name ${exit_code} > 0                               
[ WARN ] Missing step for name Convert String To Json                         
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name Set Global Variable                            
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Set Log Level                                  
[ WARN ] Missing step for name Retrieve AWS Secrets                           
[ WARN ] Missing step for name Set Log Level                                  
[ WARN ] Missing step for name Start Virtual Display                          
[ WARN ] Missing step for name '${SCREEN}' == 'headless'                      
[ WARN ] Missing step for name Open Browser                                   
[ WARN ] Missing step for name '${BROWSER_TYPE}' == 'local'                   
[ WARN ] Missing step for name Open Browser                                   
[ WARN ] Missing step for name                                                
[ WARN ] Missing step for name Maximize Browser Window                        
[ WARN ] Missing step for name Wait Until Location Contains                   
[ WARN ] Missing step for name Begin Session                                  
[ WARN ] Missing step for name Email                                          
[ WARN ] Missing step for name Password                                       
[ WARN ] Missing step for name Input Text                                     
[ WARN ] Missing step for name Input Password                                 
[ WARN ] Missing step for name Submit Form                                    
[ WARN ] Missing step for name Wait Until Page Contains                       
[ WARN ] Missing step for name Reload Page                                    
[ WARN ] Missing step for name Input Text                                     
[ WARN ] Missing step for name Input Password                                 
[ WARN ] Missing step for name Submit Form                                    
[ WARN ] Missing step for name Wait Until Page Contains                       
[ WARN ] Missing step for name Reload Page                                    
[ WARN ] Missing step for name Input Text                                     
[ WARN ] Missing step for name Input Password                                 
[ WARN ] Missing step for name Submit Form                                    
[ WARN ] Missing step for name Wait Until Page Contains                       
[ WARN ] Missing step for name Reload Page                                    
[ WARN ] Missing step for name Input Text                                     
[ WARN ] Missing step for name Input Password                                 
[ WARN ] Missing step for name Submit Form                                    
[ WARN ] Missing step for name Wait Until Element Is Visible                  
[ WARN ] Missing step for name Wait Until Element Is Enabled                  
Credentials Validation :: Test that combinations of credentials fa... | PASS |
------------------------------------------------------------------------------
Tests.Fox.Sign In :: Validate login functionality.                    | PASS |
1 test, 1 passed, 0 failed
==============================================================================
Tests.Fox                                                             | PASS |
3 tests, 3 passed, 0 failed
==============================================================================
Tests :: This section sets up and tears down all tests.               | PASS |
3 tests, 3 passed, 0 failed
==============================================================================
Output:  /media/veracrypt1/sandbox/tx/rox/assets/results/output.xml
Log:     /media/veracrypt1/sandbox/tx/rox/assets/results/log.html
Report:  /media/veracrypt1/sandbox/tx/rox/assets/results/report.html

I'm not aware of a way to suppress the warnings, but I don't intend to log those steps in Qase anyway, as they come from setup scripts that must run before any tests can start.

Those steps are specific to our environment.

The following scripts will not execute in your environment and only serves as information as to my setup:

init.robot (base script)

*** Settings ***
Documentation
...                 This section sets up and tears down all tests.

Resource            ../core/resources.resource

Suite Setup         Prep Environment
Suite Teardown      Clear Environment


*** Keywords ***
Prep Environment
    [Documentation]
    ...    Setup environment, getting required information from AWS.

    Clear Files
    Begin Session

Clear Environment
    [Documentation]
    ...    Clear environment of all variables as well as close any opened browsers.

    Close All Browsers

Privacy Notice Test Script:

*** Settings ***
Documentation
...    Ensure that clients have access to and can view the privacy policy.

Resource
...    ../../../code/core/resources.resource

Default Tags
...    fox
...    privacy_notice
...    q-2


*** Test Cases ***
Privacy Notice
    [Documentation]
    ...    Validate that clients can view the privacy policy.

    Click Element    data: pq-selector:privacy-notice

    Switch Window    ProQuo AI Privacy Notice
    Wait Until Element Is Enabled    xpath: (//*[contains(text(),"Book a demo")])[1]    ${TIMEOUT}
    Location Should Contain    privacy-notice
    Page Should Contain    CONTACT US

Each test is isolated and only does one thing; nothing is shared across tests aside from the setup script.
They are all coded the same, except that the tags and tests differ for each.

QaseApi.suites.get_all() limits returned entities to 10

When issuing a QaseApi call like below:

suites = QaseApi(token).suites.get_all(project_code)
len(suites.entities) # returns 10

even though the get_all() call didn't specify any limit and the default limit is None.

Is it possible to get all of the suites and how?
My current workaround is to specify a limit of 100 but this will fail in the face of an increasing number of suites.
Many thanks for your support!
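Until the client exposes automatic pagination, one possible workaround (a sketch, not an official recipe) is to page through the endpoint manually. This assumes get_all() accepts limit and offset arguments and that the response exposes a total field alongside entities:

```python
def fetch_all_suites(client, project_code, page_size=100):
    """Collect every suite by paging until `total` entries are fetched."""
    suites = []
    offset = 0
    while True:
        page = client.suites.get_all(project_code, limit=page_size, offset=offset)
        suites.extend(page.entities)
        offset += page_size
        if offset >= page.total:
            break
    return suites
```

With the client from the snippet above, this might be called as fetch_all_suites(QaseApi(token), project_code), provided the assumptions about the response shape hold.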

behave support request

hello

I want to create a test run in Qase using Behave and update it with results. Can you create a library?

thank you

method end_test failing with robot framework

Hello,

I was executing some test scenarios with the Qase integration, and every test was failing because of this error:

Calling method 'end_test' of listener 'qaseio.robotframework.Listener' failed: BadRequestException: Got error during response: b'{"status":false,"errorMessage":"The given route params are invalid","errorFields":{"hash":["The hash field is required."]}}'

My tox.ini file look like this:

[qase]
qase_api_token=some_token
qase_project=EC
qase_run_id=49
qase_run_complete=True

In my test run in Qase, the scenarios are marked as "In progress" and stay that way forever.

I'm not uploading attachments when tests fail or pass.

P.S.: I didn't change anything in my Qase repository or in my robot test scenarios, and it was working fine until last week.

with qase.step - Error at sending results for run

qase-pytest == 3.0.1
qaseio == 3.1.0

We have an issue with using qase.step like this:

from qaseio.pytest import qase

with qase.step(1): 
    ...

It worked something like a week ago, but now we are getting this error:

Error at sending results for run 1028: Invalid type for variable '0'. Required value type is TestStepResultCreate and passed type was ResultCreateStepsInner at ['results'][0]['steps'][0]

And test runs don't mark as completed automatically.

Qase cloud returned 502 when run 10 parallel workers for single test run

Steps:
1. Create a test run for a test plan via curl
2. Run tests in parallel (10 threads)

Expected result:
test results are uploaded to the cloud
Actual result:
test results are not uploaded to the cloud

INTERNALERROR> def worker_internal_error(self, node, formatted_error):
INTERNALERROR>     """
INTERNALERROR>     pytest_internalerror() was called on the worker.
INTERNALERROR>
INTERNALERROR>     pytest_internalerror() arguments are an excinfo and an excrepr, which can't
INTERNALERROR>     be serialized, so we go with a poor man's solution of raising an exception
INTERNALERROR>     here ourselves using the formatted message.
INTERNALERROR>     """
INTERNALERROR>     self._active_nodes.remove(node)
INTERNALERROR>     try:
INTERNALERROR> >       assert False, formatted_error
INTERNALERROR> E       AssertionError: Traceback (most recent call last):
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/_pytest/main.py", line 268, in wrap_session
INTERNALERROR> E           session.exitstatus = doit(config, session) or 0
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/_pytest/main.py", line 321, in _main
INTERNALERROR> E           config.hook.pytest_collection(session=session)
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/pluggy/_hooks.py", line 265, in __call__
INTERNALERROR> E           return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/pluggy/_manager.py", line 80, in _hookexec
INTERNALERROR> E           return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/pluggy/_callers.py", line 60, in _multicall
INTERNALERROR> E           return outcome.get_result()
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/pluggy/_result.py", line 60, in get_result
INTERNALERROR> E           raise ex[1].with_traceback(ex[2])
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/pluggy/_callers.py", line 39, in _multicall
INTERNALERROR> E           res = hook_impl.function(*args)
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/_pytest/main.py", line 332, in pytest_collection
INTERNALERROR> E           session.perform_collect()
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/_pytest/main.py", line 661, in perform_collect
INTERNALERROR> E           session=self, config=self.config, items=items
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/pluggy/_hooks.py", line 265, in __call__
INTERNALERROR> E           return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/pluggy/_manager.py", line 80, in _hookexec
INTERNALERROR> E           return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/pluggy/_callers.py", line 60, in _multicall
INTERNALERROR> E           return outcome.get_result()
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/pluggy/_result.py", line 60, in get_result
INTERNALERROR> E           raise ex[1].with_traceback(ex[2])
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/pluggy/_callers.py", line 39, in _multicall
INTERNALERROR> E           res = hook_impl.function(*args)
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/qaseio/pytest/plugin.py", line 232, in pytest_collection_modifyitems
INTERNALERROR> E           exist_ids, missing_ids = self.check_case_ids(data)
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/qaseio/pytest/plugin.py", line 110, in check_case_ids
INTERNALERROR> E           case = self.client.cases.exists(self.project_code, _id)
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/qaseio/client/services/cases.py", line 37, in exists
INTERNALERROR> E           return self.get(code, case_id)
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/qaseio/client/services/cases.py", line 26, in get
INTERNALERROR> E           to_type=TestCaseInfo,
INTERNALERROR> E         File "/usr/local/lib/python3.7/site-packages/qaseio/client/services/__init__.py", line 67, in validate_response
INTERNALERROR> E           res.status_code, res.content
INTERNALERROR> E       ValueError: Got unexpected status code: 502 b'<html>\r\n<head><title>502 Bad Gateway</title></head>\r\n<body>\r\n<center><h1>502 Bad Gateway</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n'
INTERNALERROR> E       assert False
INTERNALERROR>
INTERNALERROR> /usr/local/lib/python3.7/site-packages/xdist/dsession.py:192: AssertionError
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/usr/local/lib/python3.7/site-packages/_pytest/main.py", line 268, in wrap_session
INTERNALERROR>     session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File "/usr/local/lib/python3.7/site-packages/_pytest/main.py", line 322, in _main
INTERNALERROR>     config.hook.pytest_runtestloop(session=session)
INTERNALERROR>   File "/usr/local/lib/python3.7/site-packages/pluggy/_hooks.py", line 265, in __call__
INTERNALERROR>     return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR>   File "/usr/local/lib/python3.7/site-packages/pluggy/_manager.py", line 80, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR>   File "/usr/local/lib/python3.7/site-packages/pluggy/_callers.py", line 60, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "/usr/local/lib/python3.7/site-packages/pluggy/_result.py", line 60, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "/usr/local/lib/python3.7/site-packages/pluggy/_callers.py", line 39, in _multicall
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "/usr/local/lib/python3.7/site-packages/xdist/dsession.py", line 117, in pytest_runtestloop
INTERNALERROR>     self.loop_once()
INTERNALERROR>   File "/usr/local/lib/python3.7/site-packages/xdist/dsession.py", line 140, in loop_once
INTERNALERROR>     call(**kwargs)
INTERNALERROR>   File "/usr/local/lib/python3.7/site-packages/xdist/dsession.py", line 180, in worker_workerfinished
INTERNALERROR>     self._active_nodes.remove(node)
INTERNALERROR> KeyError: <WorkerController gw5>

Invalid type for qs_testplan_id in pytest.ini

In examples for INI file parameters I see

qs_testplan_id (string):
                        default value for --qase-testplan

If I use qs_testplan_id = 1, I get this error:

INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/_pytest/main.py", line 264, in wrap_session
INTERNALERROR>     config._do_configure()
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/_pytest/config/__init__.py", line 981, in _do_configure
INTERNALERROR>     self.hook.pytest_configure.call_historic(kwargs=dict(config=self))
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/pluggy/_hooks.py", line 277, in call_historic
INTERNALERROR>     res = self._hookexec(self.name, self.get_hookimpls(), kwargs, False)
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/pluggy/_manager.py", line 80, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 60, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/pluggy/_result.py", line 60, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/pluggy/_callers.py", line 39, in _multicall
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/qaseio/pytest/conftest.py", line 84, in pytest_configure
INTERNALERROR>     QasePytestPluginSingleton.init(
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/qaseio/pytest/plugin.py", line 472, in init
INTERNALERROR>     QasePytestPluginSingleton._instance = QasePytestPlugin(**kwargs)
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/qaseio/pytest/plugin.py", line 143, in __init__
INTERNALERROR>     self.check_testrun()
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/qaseio/pytest/plugin.py", line 195, in check_testrun
INTERNALERROR>     test_plan = api_plans.get_plan(
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/qaseio/api/plans_api.py", line 637, in get_plan
INTERNALERROR>     return self.get_plan_endpoint.call_with_http_info(**kwargs)
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/qaseio/api_client.py", line 859, in call_with_http_info
INTERNALERROR>     self.__validate_inputs(kwargs)
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/qaseio/api_client.py", line 753, in __validate_inputs
INTERNALERROR>     fixed_val = validate_and_convert_types(
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/qaseio/model_utils.py", line 1583, in validate_and_convert_types
INTERNALERROR>     converted_instance = attempt_convert_item(
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/qaseio/model_utils.py", line 1458, in attempt_convert_item
INTERNALERROR>     raise get_type_error(input_value, path_to_item, valid_classes,
INTERNALERROR> qaseio.exceptions.ApiTypeError: Invalid type for variable 'id'. Required value type is int and passed type was str at ['id']

Process finished with exit code 3

The debugger shows id = '1', but an int type is expected.
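As a stopgap on the caller side, the ini value can be cast before it reaches the API. This is a hypothetical helper, not part of the plugin; the point is simply that ini values always arrive as strings:

```python
def coerce_testplan_id(raw):
    """Cast a testplan id read from pytest.ini (always a str) to int.

    Returns None when the option is unset; raises ValueError for
    non-numeric input so a misconfiguration fails loudly.
    """
    if raw in (None, ""):
        return None
    return int(raw)
```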

Can't change test run's title or set environment_id

It would be great to have functionality for changing the test run's title and setting environment_id "out of the box", for example in pytest.ini.

I can do this now by monkey patching the QasePytestPlugin.create_testrun function:

run_create=RunCreate(
    title=f"{name} {datetime.now()}",
    cases=cases,
    is_autotest=True,
    environment_id=environment_id
),

But this will only keep working until the next package update.

Often get internal server error

Description

Since we started using qase-python to integrate pytest with Qase.io, we sometimes get an internal server error, as shown in the pictures below. This mostly happens during Indonesian working hours, when we run 4 test runs at the same time.

Screen Shot 2021-09-15 at 11 12 14
Screen Shot 2021-09-15 at 11 12 22

Details

Question: could you please check the error we got and explain why it is happening?
Also, which side does this issue come from (qase-tms or something else)?

...
INTERNALERROR>     raise ConnectionError(err, request=request)
INTERNALERROR> requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

Running time: 12.30 PM (Indonesian time, GMT+7).
This issue happens a lot during Indonesian working hours (noon),
when we run 4 test runs at the same time.

Tools version

Device: Mac mini (Mac os)
Jenkins: 2.289.3
qase-pytest: latest version 2.3.0

Run qase with pytest.mark.parametrize gives an error

For example:
Create a project and add a test case with id = 1.
Run this test:

@qase.id(1)
@pytest.mark.parametrize('a, b, c', [
                            [1, 2, 3],
                            [1, 3, 4]
                            ])
def test_sum(a, b, c):
    assert a + b == c

with settings in ini file

qs_enabled=true
qs_api_token=xxxxx
qs_project_code=YOUR CODE
qs_new_run=true

Result

INTERNALERROR> qaseio.client.services.BadRequestException: Got error during response: b'{"status":false,"errorMessage":"Test run is not active"}'

This happens because the test run is already closed when the second parameterized case starts. But I don't use the qs_complete_run flag, so why did the test run close?

pytest 7.1.2
qase-pytest 2.3.2

Qase ID Decorator Not Picking up Test Function

Hi!

I'm evaluating the Qase tool on some existing tests I have in our software product. I'm simply looking to send test results from pytest into Qase. I'm using the Demo project, and trying to send a test function's result into a case tagged as DEMO-12. I've decorated the function with @qase.id(12), and am running with the following command:

pytest --qase --qase-api-token=<TOKEN> --qase-project=DEMO --qase-new-run --qase-debug

of course, I've replaced the real token with <TOKEN>.

The tests run properly, but the Qase debug output reports that the test I've marked up does not have a Qase id (it's listed under "This tests does not have test case ids").

It's very possible that I've configured something improperly. Is this the case?

The other potential issue is that although I'm using pytest to execute the tests (and use the Qase plugin), they're defined using the python unittest library (And are defined under a TestCase object that inherits from unittest.TestCase object). Are tests of this type not supported by the plugin?

Thank you!

qase-pytest: Run not getting completed when running pytests in parallel

Used the following command:
pytest --workers auto --qase --qase-api-token=<token> --qase-project=<code> --qase-testplan=<plan> tests/

I was trying to run pytest in parallel & got the following exception:

..INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pytest_parallel/__init__.py", line 93, in run
INTERNALERROR>     run_test(self.session, item, None)
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pytest_parallel/__init__.py", line 54, in run_test
INTERNALERROR>     item.ihook.pytest_runtest_protocol(item=item, nextitem=nextitem)
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
INTERNALERROR>     return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_callers.py", line 34, in _multicall
INTERNALERROR>     next(gen)  # first yield
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/qaseio/pytest/plugin.py", line 296, in pytest_runtest_protocol
INTERNALERROR>     self.start_pytest_item(item)
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/qaseio/pytest/plugin.py", line 337, in start_pytest_item
INTERNALERROR>     result = self.client.results.create(
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/qaseio/client/services/results.py", line 40, in create
INTERNALERROR>     self.s.post(
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/requests/sessions.py", line 635, in post
INTERNALERROR>     return self.request("POST", url, data=data, json=json, **kwargs)
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/apitist/requests.py", line 252, in request
INTERNALERROR>     resp = ApitistResponse(self.send(prep, **send_kwargs))
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/requests/sessions.py", line 701, in send
INTERNALERROR>     r = adapter.send(request, **kwargs)
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/requests/adapters.py", line 563, in send
INTERNALERROR>     raise SSLError(e, request=request)
INTERNALERROR> requests.exceptions.SSLError: None: Max retries exceeded with url: /v1/result/WEBEDI/130 (Caused by None)
INTERNALERROR> 
INTERNALERROR> The above exception was the direct cause of the following exception:
INTERNALERROR> 
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/_pytest/main.py", line 268, in wrap_session
INTERNALERROR>     session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/_pytest/main.py", line 322, in _main
INTERNALERROR>     config.hook.pytest_runtestloop(session=session)
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
INTERNALERROR>     return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "/Users/krishnayadav/Krishna/repositorties/fastfood/venv38/lib/python3.8/site-packages/pytest_parallel/__init__.py", line 334, in pytest_runtestloop
INTERNALERROR>     six.raise_from(exc, err[1])
INTERNALERROR>   File "<string>", line 3, in raise_from
INTERNALERROR> RuntimeError: pytest-parallel got 8 errors, raising the first from Thread-2.

Terminal command to set environment doesn't work

Environment: MacOS, terminal

Terminal commands are not working for declaring environments for test runs.
Command shown in README: --qase-environment=QS_ENVIRONMENT
Commands tested:
--qase-environment=staging
--qase-environment=Staging
--qase-environment=STAGING
--qase-environment=QS_STAGING

The following terminal command runs the tests, creates and completes a test run, but doesn't fill out the environment.
Full terminal command used:
python3 -m pytest \
--qase-mode=testops --qase-to-api-token= \
--qase-to-project=FLOW --qase-environment=Staging


Screenshot 2022-11-18 at 10 02 19
Screenshot 2022-11-18 at 10 03 57
Screenshot 2022-11-18 at 10 04 27

qase-to-complete-run cl argument does not work correctly

qase-pytest/src/qaseio/pytest/conftest.py

add_option_ini(
        "--qase-to-complete-run",
        "qs_to_complete_run",
        type="bool",
        default=False,
        help="Complete run after all tests are finished",
        action="store_false",
    )

https://docs.python.org/3/library/argparse.html#action

'store_true' and 'store_false' - These are special cases of 'store_const' used for storing the values True and False respectively. In addition, they create default values of False and True respectively.

If the --qase-to-complete-run argument is not specified, the default value is set to True and the test run is completed. However, when --qase-to-complete-run is specified, the test run is not completed, which is the opposite of the documented behaviour.
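The mismatch is easy to reproduce with plain argparse (flag and dest names here are illustrative, and unlike the plugin's wrapper no explicit default is passed, so argparse's store_false default of True applies):

```python
import argparse

# With action="store_false" the default becomes True and passing the flag
# stores False, the opposite of what the help text implies.
broken = argparse.ArgumentParser()
broken.add_argument("--qase-to-complete-run", dest="complete_run",
                    action="store_false")
assert broken.parse_args([]).complete_run is True
assert broken.parse_args(["--qase-to-complete-run"]).complete_run is False

# A sketch of the intended behaviour: default False, flag turns it on.
fixed = argparse.ArgumentParser()
fixed.add_argument("--qase-to-complete-run", dest="complete_run",
                   action="store_true")
assert fixed.parse_args([]).complete_run is False
assert fixed.parse_args(["--qase-to-complete-run"]).complete_run is True
```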

Support passing headers for qase-robotframework / qase-xctest

Example from PHPUnit reporter:
X-Client: qaseapi=v1.0.0-alpha.3;qase-phpunit=v1.0.0-alpha.3;phpunit=9.5.13
X-Platform: os=Linux;arch=aarch64;php=8.1.3;composer=2.2.7
Reference example:
X-Client: qaseapi=[API_VERSION];qase-[REPORTER_NAME]=[REPORTER_VERSION];[FRAMEWORK_NAME]=[FRAMEWORK_VERSION]
X-Platform: os=[OS_NAME];arch=[ARCH];[LANG]=[LANG_VER]
Example implementation in qase-python:
https://github.com/qase-tms/qase-python/blob/master/qase-pytest/src/qaseio/pytest/plugin.py
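For reference, a minimal sketch of how such headers could be assembled in Python (function name and version strings below are placeholders, not the reporter's actual code):

```python
import platform
import sys

def build_client_headers(reporter_name, reporter_version,
                         framework_name, framework_version,
                         api_version="v1"):
    """Compose X-Client / X-Platform headers in the reference format."""
    python_version = "{}.{}.{}".format(*sys.version_info[:3])
    return {
        "X-Client": (
            f"qaseapi={api_version};"
            f"qase-{reporter_name}={reporter_version};"
            f"{framework_name}={framework_version}"
        ),
        "X-Platform": (
            f"os={platform.system()};"
            f"arch={platform.machine()};"
            f"python={python_version}"
        ),
    }

headers = build_client_headers("robotframework", "2.0.0",
                               "robotframework", "6.0")
```

The resulting dict could then be merged into the HTTP session's default headers before any API call is made.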

qase.step exception if qs_enabled = False in pytest.ini

If I disable the qase plugin for debugging, I get this exception:

core/test_runner.py:162: in run_test_case
    with qase.step(self.command_number):
/usr/lib/python3.8/contextlib.py:113: in __enter__
    return next(self.gen)
/usr/local/lib/python3.8/dist-packages/qaseio/pytest/__init__.py:120: in step
    raise e
/usr/local/lib/python3.8/dist-packages/qaseio/pytest/__init__.py:113: in step
    plugin = QasePytestPluginSingleton.get_instance()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

    @staticmethod
    def get_instance() -> QasePytestPlugin:
        """ Static access method"""
        if QasePytestPluginSingleton._instance is None:
>           raise Exception("Init plugin first")
E           Exception: Init plugin first

/usr/local/lib/python3.8/dist-packages/qaseio/pytest/plugin.py:478: Exception

The plugin was not initialized because qs_enabled = False, but qase.step ignores this and still tries to fetch the plugin instance.
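Until the plugin handles this itself, one workaround is to wrap qase.step so it degrades to a no-op context manager when the plugin singleton was never initialized. safe_step is a hypothetical helper; it assumes QasePytestPluginSingleton is importable from qaseio.pytest, as the traceback above suggests:

```python
from contextlib import nullcontext

def safe_step(position):
    """Return qase.step(position) when the plugin is live, else a no-op."""
    try:
        from qaseio.pytest import qase, QasePytestPluginSingleton
        QasePytestPluginSingleton.get_instance()  # raises if not initialized
        return qase.step(position)
    except Exception:
        return nullcontext()

with safe_step(1):
    pass  # step body runs whether or not the plugin is enabled
```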

qaseio - `get_results` failed

qaseio 3.1.1

filters = GetResultsFiltersParameter(
                    status="failed",
                    run=str(self.test_run_id),
)
result_list: ResultListResponse = results_api_instance.get_results(
                        self.project_code,
                        filters=filters,
                        limit=limit,
                        offset=offset
                    )

HTTP response body: {"status":false,"errorMessage":"Data is invalid.","errorFields":[{"field":"filters","error":"The filters must be an array."}]}

Problem with integration of Pytest with Qase

When running the test from the terminal with: "pytest test_base_page.py --qase --qase-api-token='_________________________________________' --qase-project=MWA --qase-testrun=1 --qase-debug"

the terminal throws this output: "Invalid type for variable 'id'. Required value type is int and passed type was str at ['id']"


Crash when running `pytest` whithout the `--qase` flag for test cases which contain `@qase.step()` decorators

Hello,

our test cases make use of the @qase.step() decorators. Pretty much our test cases follow a pattern like:

from qaseio.pytest import qase


@qase.step(1)
def step_1():
    """Descriprion."""


@qase.step(2)
def step_2():
    """Description."""


@qase.id(46)
def test_accessory_can_be_switched_onoff():
    """Description"""
    step_1()
    step_2()

When developing test cases, we don't yet want to publish test run results to our TMS, in order not to pollute it with "inaccurate" test results. Therefore, we first run the test cases without enabling the plugin, i.e. without specifying the --qase flag. However, this fails with the following:

================================================================================================ FAILURES ================================================================================================
__________________________________________________________________________________ test_accessory_can_be_switched_onoff __________________________________________________________________________________

    @qase.id(46)
    def test_accessory_can_be_switched_onoff():
        """Description"""
>       step_1()

test_plans/suite_demo_cc/test_accessory_can_be_switched_onoff.py:17: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/contextlib.py:74: in inner
    with self._recreate_cm():
/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/contextlib.py:113: in __enter__
    return next(self.gen)
env/lib/python3.8/site-packages/qaseio/pytest/__init__.py:120: in step
    raise e
env/lib/python3.8/site-packages/qaseio/pytest/__init__.py:113: in step
    plugin = QasePytestPluginSingleton.get_instance()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

    @staticmethod
    def get_instance() -> QasePytestPlugin:
        """ Static access method"""
        if QasePytestPluginSingleton._instance is None:
>           raise Exception("Init plugin first")
E           Exception: Init plugin first

env/lib/python3.8/site-packages/qaseio/pytest/plugin.py:416: Exception
======================================================================================== short test summary info =========================================================================================
FAILED test_plans/suite_demo_cc/test_accessory_can_be_switched_onoff.py::test_accessory_can_be_switched_onoff - Exception: Init plugin first
=========================================================================================== 1 failed in 0.11s ============================================================================================

On the other hand, when running the same pytest command without having decorated steps:

from qaseio.pytest import qase


def step_1():
    """Descriprion."""


def step_2():
    """Description."""


@qase.id(46)
def test_accessory_can_be_switched_onoff():
    """Description"""
    step_1()
    step_2()

Then the test runs locally without attempting to connect to the Qase TMS, even though it still uses other qase decorators such as @qase.id().

test_plans/suite_demo_cc/test_accessory_can_be_switched_onoff.py .                                                                                                                                 [100%]

=========================================================================================== 1 passed in 0.01s ============================================================================================

I think that this should be the expected behaviour when running test cases without the --qase flag passed to pytest even if qase decorators (step, id) are used in the test item.

Remove validation for number of params ResultCreate()

Hi,

Currently there's a limitation in qaseio that prevents us from creating a test result with more than 1 param.

See the validations dict in qaseio/model/result_create.py:84

validations = {
    ('time',): {
        'inclusive_maximum': 31536000,
        'inclusive_minimum': 0,
    },
    ('time_ms',): {
        'inclusive_maximum': 31536000000,
        'inclusive_minimum': 0,
    },
    ('param',): {
        'max_items': 1,
        'max_properties': 1,
    },
}

The API itself seems to accept as many parameters as I want in the param dict of the test result object, but the SDK throws an error if the dict holds more than one entry. I tried commenting out the ('param',) key of the dict and everything works fine. Can this limitation be removed?
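As a stopgap until the limit is removed upstream, here is a hedged sketch of relaxing the generated model's constraint before building results (the dict shape follows the snippet quoted above; the helper name is illustrative, not part of qaseio):

```python
def relax_param_limit(validations, cap=100):
    """Return a copy of the generated `validations` dict with the
    single-param cap raised to `cap`. Purely illustrative."""
    relaxed = dict(validations)
    rules = dict(relaxed.get(('param',), {}))
    rules['max_items'] = cap
    rules['max_properties'] = cap
    relaxed[('param',)] = rules
    return relaxed

# Example against the dict quoted from result_create.py:
validations = {
    ('param',): {'max_items': 1, 'max_properties': 1},
}
validations = relax_param_limit(validations)
```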

The results of checking test cases are not displayed in the created testrun

I start my automated test run using --qase --qase-api-token=______________ --qase-project=EXAMPLE --qase-testplan=5.
A test run is created in Qase, but it is empty and does not include any test cases. Even after all tests finish successfully in PyCharm, the test run in Qase remains empty.
This worked fine earlier.


Thank you!

Need to add support skip/xfail

    @pytest.mark.skip(reason="Reason")
    @qase.id(1)
    def test_test(self):
        pass

we will get

INTERNALERROR> AttributeError: 'str' object has no attribute 'value'

because a plain string is passed where an enum member is expected:

    def _unstructure_enum(self, obj = 'skipped'):
        """Convert an enum to its value."""
        return obj.value

python 3.10/pytest ^7.0.1
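A possible defensive fix (a sketch, not the library's actual patch): only dereference .value when the object really is an Enum member, and pass raw strings such as 'skipped' through unchanged:

```python
from enum import Enum


def unstructure_enum(obj):
    # Skipped/xfailed tests can reach this hook as plain strings,
    # so guard before dereferencing .value.
    return obj.value if isinstance(obj, Enum) else obj


# Illustrative enum standing in for the real status model:
class Status(Enum):
    PASSED = "passed"
```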

Calling end_test fails when status is SKIP

Whenever a test case in robotframework is skipped, it results in a SKIP status which is neither found in the TestRunResultStatus model of qaseio.client.models nor mapped in the STATUSES dict of the robotframework listener.

Since there is a skip status in test runs in Qase, I'd expect that it's also mapped.

Original error

Calling method 'end_test' of listener 'qaseio.robotframework.Listener' failed: KeyError: 'SKIP'
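One way the listener's mapping could be extended (the dict contents here are illustrative; the real STATUSES dict in qaseio.robotframework maps to the client's status model, not bare strings):

```python
# Illustrative status map standing in for the listener's STATUSES dict.
STATUSES = {
    "PASS": "passed",
    "FAIL": "failed",
}

# Adding Robot Framework's SKIP outcome avoids the KeyError in end_test:
STATUSES["SKIP"] = "skipped"


def map_status(rf_status):
    # Lookup used when reporting a finished test to Qase.
    return STATUSES[rf_status]
```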

[robotframework] Ignoring the results when sending to the linked gherkin test case

Steps to reproduce:

  • send a result to a gherkin test case

Actual result:

  • the result is assigned only to the first step, or not assigned at all

Expected result:

  • the result must be transferred in full

P.S. Example data:
*** Settings ***
Library    RequestsLibrary
Library    Collections

*** Test Cases ***
Quick Get A JSON Body Test    ## ----------------------
    [Tags]    Q-3
    ${response}=    When GET    https://jsonplaceholder.typicode.com/posts/1    ## First step in Qase TMS
    And Create Session    google    http://www.google.com    ## Second step in Qase TMS

*** Variables ***
&{info}

Created three run ids but only wrote result comments on the latest one

I tried to run the tests on multiple platforms at the same time with BrowserStack.

I noticed that qase-pytest successfully creates multiple run IDs but only
writes the results to the latest one.
The screenshot is an example of the result:
image

Is there a way to separate the results into their respective run IDs?

Invalid type for --qase-to-run

Running qase-pytest with the pytest options described in the README:

pytest \
    --qase-mode=testops \
    --qase-to-api-token=<your api token here> \
    --qase-to-project=TP \
    --qase-to-run=1
  File "/home/user/.virtualenvs/api-tests-j2XfIJfy/lib/python3.8/site-packages/qaseio/model_utils.py", line 1458, in attempt_convert_item
    raise get_type_error(input_value, path_to_item, valid_classes,
qaseio.exceptions.ApiTypeError: Invalid type for variable 'id'. Required value type is int and passed type was str at ['id']
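The traceback suggests the CLI value reaches the API model as a string. A hedged sketch of the coercion the plugin could perform (the helper name is hypothetical, not an existing qaseio function):

```python
def parse_run_id(raw):
    """Coerce the --qase-to-run CLI value (pytest delivers it as a
    string) into the int the API model requires for 'id'."""
    try:
        return int(raw)
    except (TypeError, ValueError):
        raise ValueError(
            f"--qase-to-run must be an integer run id, got {raw!r}"
        )
```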

Pytest results are not uploaded in real time

Citing from the documentation execution logic:

https://github.com/qase-tms/qase-python/tree/master/qase-pytest#execution-logic

Execute tests and publish results in a runtime, not waiting all run to finish

With the new OpenAPI qaseio client the results are not uploaded in real time.

At our company we rely on this functionality because we have long-running tests and we don't want to lose the entire test run's results if we are unable to upload them in bulk for some reason.

We would like to still be able to upload results in real time. How can we achieve this, other than sticking with the old non-OpenAPI-generated client?

Thanks!
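What we are after, roughly, is per-result delivery instead of a bulk flush at session end. A sketch under the assumption that some per-result send call exists (send_result here is a placeholder callable, not a real qaseio API):

```python
class RealTimeReporter:
    """Illustrative: push each result as it finishes instead of
    buffering everything until the session ends."""

    def __init__(self, send_result):
        self.send_result = send_result  # callable posting one result
        self.failed = []

    def report(self, result):
        try:
            self.send_result(result)
        except Exception:
            # Keep the result for a later retry instead of losing
            # the whole run when one upload fails.
            self.failed.append(result)
```

With this shape, a crash or network failure mid-run costs at most the results that could not be sent, not the entire run.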
