jimasp / behave-vsc

A vscode extension that provides a test runner/debugger and navigation for Python behave tests

Home Page: https://marketplace.visualstudio.com/items?itemName=jimasp.behave-vsc

License: Other

JavaScript 0.69% Gherkin 6.91% Python 5.61% TypeScript 86.79%
bdd behave debug debugging gherkin python testing visual-studio-code visual-studio-code-extension vscode vscode-extension

behave-vsc's People

Contributors

dependabot[bot] · jimasp


behave-vsc's Issues

Problem with multi-line scenario name

Describe the bug (required):
For multi-line scenario names where the first line is the same in two or more scenarios, Behave-VSC reports:
"Error: Attempted to insert a duplicate test item ID..."
Since our scenario names are long, we often split them into 2 lines.
Version 0.6.1

To Reproduce (required):
Just copy the following into a feature file:

    Scenario: Multi-line scenario name
    the second line of 1

    Scenario: Multi-line scenario name
    the second line of 2

Operating system (required):
Windows

Best regards

Test Status Displays Failed when test passes

NOTE: before posting an issue, please make sure you read through the Troubleshooting section of the readme: https://github.com/jimasp/behave-vsc/blob/main/README.md#troubleshooting
jimsisco: I did look here first.

Please note that I can only support issues with the latest release (https://github.com/jimasp/behave-vsc/releases) and the latest version of vscode. The supported behave version is 1.2.6.
jimsisco: I am running the latest version of behave and behave-vsc

Describe the bug
The test status icon displays as failed even though the test passes in the Behave VSC OUTPUT window, and also when I run the test via launch.json.

To Reproduce

  1. Select a test and click the run button.
  2. The test runs and completes, with all steps shown as passing in the Behave VSC output window.
  3. Go to the test in the Test Explorer and note that the icon is red.

Expected behavior
The icon should be "green"

Directory structure (optional, i.e. if relevant)

VS Code settings.json:

    {
        "python.defaultInterpreterPath": "python",
        "python.languageServer": "Pylance",
        "python.analysis.extraPaths": [
            "./libs/"
        ],
        "python.testing.unittestEnabled": false,
        "python.testing.pytestEnabled": false,
        "behave-vsc.featuresPath": "./",
        "behave-vsc.multiRootRunWorkspacesInParallel": false,
        "behave-vsc.showSettingsWarnings": true,
        "behave-vsc.runParallel": true,
        "behave-vsc.xRay": true
    }

Screenshots and GIFs (optional)
[screenshot]
Below is a screenshot of my directory structure:
[screenshot: directory structure]

Debug result (optional)
See above screenshots and settings.json

Pull request (optional)
If you have created a pull request in a fork to fix the issue, please link it here

Operating system (please complete the following information):
running in a cloned volume.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.4 LTS
Release: 20.04
Codename: focal

Additional context
I do not understand the magic and where the test results are being stored. Thanks for taking a look.

${workspaceFolder} in behave settings.json does not get resolved when running w/o debugger

Describe the bug (required):
I have a behave set-up where I need to set some environment variables from within the workspace's settings.json, like

    "behave-vsc.envVarOverrides": {
        "MY_PATH": "${workspaceFolder}/path/to/my/tool",
    }

This works fine when I run the tests from within the debugger.
If I just run the tests without the debugger, ${workspaceFolder} does not get resolved. I did some logging to confirm this. The environment variable ends up as:

assert os.environ.get("MY_PATH") == "${workspaceFolder}/path/to/my/tool"

I would expect that ${workspaceFolder} will get resolved, e.g.:

assert os.environ.get("MY_PATH") == "/path/to/workspace/path/to/my/tool"

To Reproduce (required):
Try passing a simple environment variable to behave using the envVarOverrides. Within the envVarOverrides, reference ${workspaceFolder}.

Unfortunately I do not have a public repo available; if it cannot be reproduced on your end, I'll try to make one.

Operating system (required):
Running inside a devcontainer (Ubuntu 20.04) inside WSL (Ubuntu 22.04).

Please let me know in case more information is required.
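
For illustration, a possible stopgap (a sketch only, assuming the extension launches behave from the workspace root, and reusing the MY_PATH variable from above) is to expand the placeholder manually in environment.py:

    # features/environment.py -- sketch of a manual workaround, not a fix
    import os
    from pathlib import Path

    def before_all(context):
        # Assumes behave's working directory is the workspace root when the
        # extension runs it without the debugger.
        raw = os.environ.get("MY_PATH", "")
        os.environ["MY_PATH"] = raw.replace("${workspaceFolder}", str(Path.cwd()))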

Bug in step detection

Describe the bug (required):
When defining steps for behave in Python, you define them in the features/steps directory. When I import behave and decorate a function like this:

    import behave

    @behave.given("some step")
    def step_impl(context):
        ...

this will not be detected as a step, while:

    from behave import given

    @given("some step")
    def step_impl(context):
        ...

is detected.

It would be nice if the first form were supported too; I was struggling to get a proper experience of the extension until I realised this.
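
For illustration, a pattern along these lines (a sketch using Python's re module, not the extension's actual parser) would match both decorator styles:

    import re

    # Match "@given(", "@when(", "@then(" and "@step(" with or without a
    # "behave." module prefix.
    STEP_DECORATOR_RE = re.compile(r'^\s*@(?:behave\.)?(given|when|then|step)\(', re.IGNORECASE)

    print(bool(STEP_DECORATOR_RE.match('@behave.given("we have behave installed")')))  # True
    print(bool(STEP_DECORATOR_RE.match('@given("we have behave installed")')))         # True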

To Reproduce (required):
See example above.

Operating system (required):
Running Ubuntu latest LTS with Python 3.10.

Directory structure (optional, but often relevant):
    project/
    └── features/
        ├── (all feature definitions)
        └── steps/
            └── (all step definitions in Python)

Debug result (optional):
The result is that you can't navigate to steps inside the editor if you're using the first method.

Official Release

I have a simple request: I'd like to have an official release of this great plugin. As it works pretty well for me, I don't see a blocker.
This would make things easier when it comes to sharing the workspaces with others.

"No module named 'features'" for `features` folder that is not in root

Hi there, my directory structure is similar to the example-projects/project A

But my environment.py needs to import a Python script (e.g. test_script1.py) in the same folder as environment.py:

from features import test_script1

It errors when I run any of the tests from the plugin:

    from features import test_script1
ModuleNotFoundError: No module named 'features'
Exception ModuleNotFoundError: No module named 'features'

I also have a folder (e.g. foobar) inside the features folder that contains multiple Python scripts used by the steps_*.py files inside the steps folder, which need to import them like so:

from features.foobar.something import *

Is such directory structure supported by this plugin? Thanks.
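
For illustration, a common workaround (a sketch only, assuming the directory layout described above) is to put the project root on sys.path from environment.py so that "from features import ..." resolves:

    # features/environment.py -- sketch of a workaround
    import os
    import sys

    # environment.py lives inside the "features" folder, so the parent of that
    # folder is the project root that contains the "features" package.
    PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    if PROJECT_ROOT not in sys.path:
        sys.path.insert(0, PROJECT_ROOT)

    from features import test_script1  # noqa: E402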

0.6.0 does not run a scenario if there is more than one scenario in a feature file and there are double quotes in the scenario title

Describe the bug (required):

Behave-VSC 0.6.0 does not run a scenario under the following conditions:

  1. There is more than one scenario in a feature file, and
  2. One of the scenarios has double quotes in its title, and
  3. That specific scenario, instead of the whole feature, is run from the Testing view

It looks like Behave-VSC 0.6.0 skips the scenario, and the scenario's test result icon in the Testing view shows a frozen, gray spinning icon. This issue did not happen in 0.5.0.

To Reproduce (required):

  1. Write the following feature file tutorial.feature in "features" folder:

     Feature: showing off behave
    
       Scenario: run a "simple" test
         Given we have behave installed
         When we implement a simple test
         Then behave will test it for us!
    
       Scenario: run a difficult test
         Given we have behave installed
         When we implement a difficult test
         Then behave will test it for us!
    
  2. Write the following step definition file tutorial.py in "features/steps" folder:

     from behave import *
    
     @given('we have behave installed')
     def step_impl(context):
         pass
    
     @when('we implement a simple test')
     def step_impl(context):
         assert True is not False
    
     @when('we implement a difficult test')
     def step_impl(context):
         assert True is not False
    
     @then('behave will test it for us!')
     def step_impl(context):
         assert context.failed is False        
    
  3. From the Testing view, hover over the first scenario title 'run a "simple" test', and click "Run Test" button to run that specific scenario.

  4. Check the Behave VSC output view; it shows that the scenario is skipped and not run:

     --- BehaveVSC tests started for run 537333 @2023-09-05T06:03:09.811Z ---
    
    
     powershell commands:
     cd "c:\BehaveVSC"
     & "C:\Python311\python.exe" -m behave --show-skipped --junit --junit-directory "c:\Users\pdguser\AppData\Local\Temp\behave-vsc\junit\537333\BehaveVSC" -i "features/tutorial.feature$" -n "^run a "simple" test$"
    
     Feature: showing off behave # features/tutorial.feature:1
    
     0 features passed, 0 failed, 1 skipped
     0 scenarios passed, 0 failed, 2 skipped
     0 steps passed, 0 failed, 6 skipped, 0 undefined
     Took 0m0.000s
    
     --- BehaveVSC tests completed for run 537333 @2023-09-05T06:03:10.213Z (0.4024400999993086 secs)---    
    
  5. From the Testing view, check the test result icon of that scenario. It displays a frozen, gray spinning icon.

Operating system (required):

Windows 10

Directory structure (optional, but often relevant):

    [BehaveVSC]
          [features]
                [steps]
                     tutorial.py
                tutorial.feature

Screenshots and GIFs (optional):

[GIF: behave-vsc 0.6.0 does not run scenario]

Debug result (optional):

N/A

Pull request with proposed fix (optional):

N/A

Feature file outline in VS Code outline view

Hey there,
It would be great to have the outline view displaying an outline of the currently opened feature file.
That would help me deal with larger feature files and increase acceptance in my testing team.

Thank you for all this and best regards
Tobias

Support for Scenario Outline

Is your feature request related to a problem? Please describe.
Today I tried to use the Scenario Outline feature of behave. Unfortunately this does not yet seem to be supported by the plug-in. When trying to execute those parameterized tests, it just hangs; it also hangs when scanning the available tests.

Describe the solution you'd like
Tests will be executed and each individual example's result will be shown as passed or failed.

Describe alternatives you've considered
So far the only alternative would be to run behave manually from the command line - which of course I'd like to avoid since I found this nice plug-in 👍

Additional context
Scenario outline example

    Scenario Outline: collect build writes build number <build number> into artifacts.properties
        Given CI environment variable is true
        And BUILD_NUMBER environment variable is <build number>
        When collect_build_info is called
        Then ~/.conan/artifacts.properties contains artifact_property_build.number=<build number>

        Examples:
            | build number |
            | 666          |
            | 999          |

Please let me know in case you need further information. I'd really love to see this plugin supporting that feature. Thanks!

Tests running in parallel despite `runParallel` = `false`

Describe the bug (required):
We have a bunch of tests configured which rely on opening some TCP port for testing.
When running all tests, I can see in VSCode that all tests seem to be triggered in parallel. Also, some of the tests would fail claiming that the port is already in use.

Therefore I believe the tests are still somehow running in parallel despite the setting in settings.json:

    "behave-vsc.runParallel": false

It does not seem to make any difference whether this is set to true or false or not set at all.

To Reproduce (required):
To reproduce the issue, please run some behave tests in VSCode which need to use the same resource (e.g. opening a file).
Try running all tests with one click, supplying different values to runParallel.

If it is hard to reproduce, I will try to come up with some example.
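
For illustration, a minimal step definition (a sketch; the step text and port number are made up) that binds a fixed TCP port and fails with "Address already in use" if two scenarios hold it at the same time:

    # features/steps/port_steps.py -- illustrative sketch only
    import socket
    from behave import given

    @given('the service port is open')
    def step_open_port(context):
        # A fixed port can only be bound once at a time, so concurrent
        # scenario runs collide here.
        context.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        context.sock.bind(("127.0.0.1", 8765))
        context.sock.listen(1)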

Operating system (required):
Ubuntu 20.04 (dev container) from within Ubuntu 20.04 running inside WSL2 on Win11.

Run/Debug config

Is your feature request related to a problem? Please describe.
It would be nice to have an interface for building custom runners, setting the feature(s) to run, passing parameters, and so on.

Describe the solution you'd like
Basically a copy of the run/debug configuration of PyCharm would be perfect.

Describe alternatives you've considered
If this can't be done, we could add more configuration options to the plugin itself to set the working directory and add the content roots and source roots to the PYTHONPATH.

Additional context
[Screenshot: 2023-08-01 08:51:50]

[Feature request] Setting breakpoints at the steps in the feature files

Currently, we can set breakpoints at step definition (implementation) files. Is it possible that we can also set breakpoints directly at the steps in the feature files?

[screenshot]

[screenshot]

With this feature, we could trace the test execution from the very beginning of each step description. This would help us grasp the test flow more clearly when debugging.

Thank you very much!

Add f string option to autocomplete feature

Is your feature request related to a problem? Please describe.
When defining a behave step in Python, we usually use an f-string to interpolate values. For example, we have:
@behave.given(f'the {regex} is set to "(.*?)"'). However, autocomplete cannot find steps whose pattern is expressed as an f-string; it only shows options that use a plain string, like @behave.given("regex is reset").
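
For illustration, a minimal pair of step definitions (a sketch with made-up step text) showing the two pattern styles the parser would need to recognise:

    from behave import given

    FIELD = "user name"

    # Plain string pattern -- currently picked up by autocomplete.
    @given('the settings are reset')
    def step_reset(context):
        pass

    # f-string pattern -- built at import time, here it becomes
    # 'the user name is set to "{value}"'. Not currently picked up.
    @given(f'the {FIELD} is set to "{{value}}"')
    def step_set(context, value):
        context.value = value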

Describe the solution you'd like
The autocomplete pop-up should also show patterns expressed as f-strings in the implementation.

Additional context
I suggest the implementation of stepsParser.ts could be the following:
replace

    const stepFileStepRe = new RegExp(`${stepFileStepStartStr}u?(?:"|')(.+)(?:"|').*\\).*$`, "i");

with

    const stepFileStepRe = new RegExp(`${stepFileStepStartStr}u?(?:"|f?')(.+)(?:"|').*\\).*$`, "i");

Initial release?

Hey!

Loving this extension. Could you cut a non-prerelease version so I can set it to be installed by default in devcontainers? As it's a pre-release, I can't install it by default 😢

Thanks!

recursive parsing from import file

Is your feature request related to a problem? Please describe.
When a step file has only one import line, the step parser just ignores the line and moves on to other files. However, if we have a file that is imported from another file in another repo, the step patterns in that file are not fed into the parser, so autocomplete and the step error check do not include these steps.

Describe the solution you'd like
If a step file in the current repo has only the single import line import other_repo.feature.steps.other_step, then any step pattern from other_repo.feature.steps.other_step should be parsed.
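
For illustration, a minimal pass-through step module of the kind described (the module path is the hypothetical one from above):

    # features/steps/shared_steps.py -- sketch; adjust the module path to the real shared repo
    # Importing the module executes its @given/@when/@then decorators, so behave
    # itself registers the steps, but the extension's parser never sees them.
    from other_repo.feature.steps.other_step import *  # noqa: F401,F403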

Additional context
Currently working on modifying code to accomplish this.

Run test with admin privileges

Thank you for this great extension!

Is there a possibility to run tests with admin privileges?

Starting VS Code as admin on Windows 10 has no effect.

Add ability to change CWD

Is your feature request related to a problem? Please describe.

Project directory structure looks like this:

    automation-tests
    selenium-aws
    └── tests
        ├── init.py
        ├── behave-parallel.py
        ├── drag_and_drop.js
        ├── driver
        ├── example_files
        ├── features
        └── utils

Changing the features path is not enough, because the project's imports use from features... import ... and that breaks things; changing the CWD fixes the issue.

Describe the solution you'd like
I'd like to point to selenium-aws/tests as entry point for behave, e.g. with configuration setting like: behave-vsc.cwd or behave-vsc.entryPoint

Describe alternatives you've considered
As a workaround I can open the project in the tests directory and select an interpreter from two directories up, or include the venv in the tests dir, but then I lose visibility of the rest of the project.

Can running a feature test invoke just one test run command?

It seems when running a feature test (by clicking the Run Test button right to the feature), behave-vsc will invoke multiple behave command line runs, each for a scenario in that feature. This results in before_all() and after_all() hooks being repeatedly run for each scenario, and will unnecessarily slow down the tests. As far as I know, the behave command line supports using just one command to run a whole feature test. Could behave-vsc invoke this one command instead?

For example, instead of running the following commands, each for a scenario of that feature:
python3.10.exe -m behave --show-skipped -i "feature_file_name.feature" -n "scenario_name_1"
python3.10.exe -m behave --show-skipped -i "feature_file_name.feature" -n "scenario_name_2"
python3.10.exe -m behave --show-skipped -i "feature_file_name.feature" -n "scenario_name_3"
...

Could it be running just the following one command only?
python3.10.exe -m behave --show-skipped -i "feature_file_name.feature"

This will automatically run all scenarios of the feature, and will run before_all() and after_all() hooks just once.
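
For reference, before_all() and after_all() are hooks in environment.py that run once per behave process, so a per-scenario invocation repeats any expensive setup; a minimal sketch:

    # features/environment.py -- sketch showing setup that repeats per invocation
    def before_all(context):
        # Expensive one-time setup (e.g. starting a service) goes here; it runs
        # again for every separate behave command the extension launches.
        context.session_started = True

    def after_all(context):
        # Matching one-time teardown.
        context.session_started = False
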
Thanks.

Scenarios names with <variable> of a scenario outline show <variable> in the test explorer

I have a scenario outline as such:

Scenario Outline: Schedule an Unplanned Default infrastructure team Change Request for < | Configuration Item |>"

Examples:
  | Configuration Item |
  | Google Chrome      |
  | Active Directory   |

Instead of seeing 2 tests in the test explorer, I see one:
"Schedule an Unplanned Default infrastructure team Change Request for < | Configuration Item |>"

I expect to see:
Schedule an Unplanned Default infrastructure team Change Request for Google Chrome
Schedule an Unplanned Default infrastructure team Change Request for Active Directory

FYI: I don't actually see "| Configuration Item |"; I see the placeholder in angle brackets (<Configuration Item>), but written plainly it wouldn't show as text here.

python 3.10.2
Behave VSC v0.6.4
Package Version


allure-behave 2.13.2
allure-python-commons 2.13.2
async-generator 1.10
atomicwrites 1.4.0
attrs 21.4.0
autopep8 1.6.0
behave 1.2.6
behavex 3.0.0
certifi 2021.10.8
cffi 1.15.0
charset-normalizer 2.0.12
colorama 0.4.4
configobj 5.0.6
cryptography 36.0.1
csscompressor 0.9.5
exceptiongroup 1.1.3
extras 1.0.0
fixtures 3.0.0
fuzzywuzzy 0.18.0
glob2 0.7
greenlet 1.1.2
h11 0.13.0
htmlmin 0.1.12
idna 3.3
iniconfig 1.1.1
Jinja2 3.0.3
Mako 1.1.6
MarkupSafe 2.0.1
mock 4.0.3
numpy 1.22.2
outcome 1.1.0
packaging 21.3
pandas 1.4.1
parse 1.19.0
parse-type 0.6.0
pbr 5.8.0
pip 23.3
pluggy 1.0.0
py 1.11.0
pycodestyle 2.8.0
pycparser 2.21
pymssql 2.2.4
pyOpenSSL 21.0.0
pyparsing 3.0.7
pypiwin32 223
PySocks 1.7.1
pytest 7.2.2
pytest-bdd 6.1.1
python-dateutil 2.8.2
python-subunit 1.4.0
pytz 2021.3
pywin32 303
requests 2.27.1
requests-negotiate-sspi 0.5.2
selenium 4.1.2
setuptools 60.5.0
six 1.16.0
sniffio 1.2.0
sortedcontainers 2.4.0
SQLAlchemy 1.4.31
sure 2.0.0
testtools 2.5.0
toml 0.10.2
tomli 2.0.0
trio 0.19.0
trio-websocket 0.9.2
typing_extensions 4.8.0
urllib3 1.26.8
wsproto 1.0.0

Allow passing environment variables from workspace

Hey there!

From what I can tell, the current behaviour is to only pass environment variables specified in envVarList. It would be great if this extension could pass through the variables already configured in my shell (so it has the same behaviour as if I called behave from the command line myself).

Thanks!

Output window support color

When we run behave directly from the terminal, it has colored output; can we support it in the output window?

No JUnit file was written for this test. Check output in Behave VSC output window.

Hi,
using the latest preview I get this error when running a test:

No JUnit file was written for this test. Check output in Behave VSC output window.

Looking at the Behave VSC output window, the test runs and passes. The command line the extension uses includes --junit --junit-directory "/tmp/behave-vsc/junit/185800/asset", and after the test completes that directory contains the JUnit file TESTS-connectivity.xml.

This was working before with the same extension version, so it is possible something else has changed in my system, but I am stuck...

Ability to switch command used to run behave.

Is your feature request related to a problem? Please describe.
When using the extension with runners that wrap the behave module or add additional hooks into setup, the extension fails.
I'm using [behave-django](https://github.com/behave/behave-django), which starts behave with a django management command and injects additional functionality at that point.

Describe the solution you'd like
An additional setting that lets me specify the behave module or script to use to start behave.

Describe alternatives you've considered
I've tried to provide a package script that overwrites the behave name inside the virtual environment, but as the extension runs the module directly with the vscode python environment, I'm unable to swap out the reference.

Add examples in test panel tree view

Is your feature request related to a problem? Please describe.
The test panel is lacking the ability to run one example only.

Describe the solution you'd like
Display the examples in the tree view as well, so they can be selected and run/debugged.

Describe alternatives you've considered
Run manually by editing launch.json

Configurable steps folder

Is your feature request related to a problem? Please describe.
According to its documentation, behave allows flexibility in terms of directory layout. In particular, it doesn't require nesting of steps under feature files and allows these two folders to have the same parent - behave will search for step definitions recursively. This is the configuration I successfully used in several repositories and would like to retain.

Describe the solution you'd like
I would like to have a configurable option (similar to behave-vsc.featuresPath) to set the location of my step definitions (alternatively, for steps to be searched recursively from behave root, just the way behave does it).

Describe alternatives you've considered
I considered using a plugin called "Cucumber (Gherkin) Full Support".

Additional context
