dsd-dbs / py-capellambse
A Python 3 headless implementation of the Capella modeling tool.
Home Page: https://dsd-dbs.github.io/py-capellambse/
License: Apache License 2.0
This was already briefly touched on in #25, but we never opened a formal issue for this.
Currently, a lot of calculations are duplicated between `capellambse.aird` and `capellambse.svg`. To make matters worse, the two modules have small but significant differences in their implementations. As an example, the `aird` module makes box sizing calculations without taking into account any icons, only the text label, whereas the `svg` module does account for icons. This has led to several hard-to-debug rendering issues in the past already, and it's safe to assume that similar issues will continue to crop up in the future.
The fact that the `aird` module has to do some calculations is unavoidable: the XML does not contain enough information on its own to render a complete diagram (which makes sense, as most of it can be relatively easily calculated from the information that is given). Examples include the physical extent of labels, i.e. their height and width when drawn; this can be (and already is) calculated based on the text, font and font size.
On the other hand, the `svg` module needs to do its own additional calculations because not all of the information that it needs is provided to it by the `aird` module. For example, to draw text, it needs to know not only the size of the bounding box, but also the height and vertical position of each line. This effectively results in doing (almost) all the text-related calculations a second time, and then doing some more on top.
A simple solution to this problem is for the `aird` module to provide all of the additional numbers that the `svg` module needs as part of the JSON document that is exchanged between the two. However, this approach makes the JSON format even more specific to the current SVG format converter, and it does not scale very well if other modules emerge that provide different output formats (and which do not just convert from SVG, as the current PNG converter does).
A viable alternative approach is for the `svg` module to consume the `aird.Diagram` instance directly. This would allow us to provide either methods on that object or functions in some standardized place for all the calculations that additionally need to be done, while also taking advantage of the information already calculated before. Such methods could also be leveraged by alternative format conversion modules, without having to calculate things that are not needed for the particular requested format.
Both proposals constitute a breaking change for the JSON interface (the first one because older documents would be lacking crucial information, the second one because the interface would be abandoned entirely). However, I don't think that this is a particularly big issue, as the only real use case for these JSON documents was this exchange between `capellambse` submodules anyway. Without a way to reconstruct an `aird.Diagram` instance, they are not very useful for persisting modified (or newly created) diagrams, and with the `svg` module being the only known consumer of that format, it makes more sense to just convert it to the well-known, standardized SVG format straight away.
No matter which route we choose, addressing this issue can be expected to provide some additional benefits.
* The `svg` module is, frankly, a mess. During this refactoring, there is a great opportunity to clean it up and to add some more documentation to it.
* `aird.parser` has grown to be a very complex beast as well. It's possible that some calculations can be moved out of it, reducing its overall complexity.

At the moment, the `PhysicalComponent` class has the following missing or not properly working features:
* the `kind` attribute is missing
* the `nature` attribute is missing
* the `components` attribute is not working properly (returns an empty list)
* the `deployed_components` attribute
* the `deploying_components` attribute

In PA we are missing the physical path object. We should fix that.
This is what we should do to close this issue:
* add an `.all_physical_paths` accessor to `PhysicalArchitecture`
* add the `PhysicalPath` class to `cs.py`
* add a `.physical_paths` accessor to `PhysicalLink`
As per Viktor's request I had a look at the project. My initial impression is that the code is of high quality. It may be worth adding some banners at the top, stating code coverage, code quality, etc. :)
This is the list of things I looked for and things I checked, being an Average Joe with some Python experience, giving this project a swing.
* `python setup.py install` works
* Are there command line entry points (`console_scripts`)? Or is it a Sphinx extension?
* There is a `pyproject.toml` file
* `pyproject.toml` contains a `[build-system]` section:

  [build-system]
  requires = ["setuptools", "wheel"]
README should provide a step-by-step example, e.g. using one of the models in this repo.
README should provide developer install instructions:
* create a virtual env: `python3 -m venv .venv`
* `source .venv/bin/activate`
* `pip install pre-commit`
* `pre-commit install`
README: Make a short title and a subtitle, instead of the line wrapping title it is now.
public CI build (e.g. GitHub Actions)
MyPy is not configured in pre-commit
* `python setup.py test` / `pytest` fails -> requires the modules `cssutils`, `sphinx`
* Set the `setup.py` test runner to `pytest`
* `pre-commit run --all-files` fails (fixed in 26056b9)
When you import dependencies only for type checking (`typing.TYPE_CHECKING`), make sure the following import is also made: `from __future__ import annotations` (as far as I know it's still required for Python 3.8).
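A minimal sketch of the pattern being described (the function and names are illustrative, not from the library):

```python
from __future__ import annotations

import typing as t

if t.TYPE_CHECKING:
    # Only imported while type checking; unavailable at runtime.
    from collections.abc import Sequence


def first_name(names: Sequence[str]) -> str:
    """Return the first of the given names (illustrative helper)."""
    return names[0]
```

Without the `__future__` import, the `Sequence[str]` annotation would be evaluated at function definition time and raise a `NameError`, because the guarded import never ran at runtime.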
* Remove the `ci` folder
* A copyright notice in a `.gitignore` and `.gitattributes` file is a bit too much :)
Although the tests are well written, it may be worth following the Arrange, Act, Assert pattern (a.k.a. Given, When, Then): separate the preparation, action and checks by blank lines. Then (most) assertions end up at the bottom of the test. Now they're sprinkled throughout the tests, which I tend to find confusing. Precondition checks can be performed in the fixtures themselves.
NB. One thing we should strive for is to make a new user enthusiastic within 5 minutes. You want to make a new user achieve something quickly, so they're hooked.
To improve consistency, we need to generalize the current implementation of functions as it is done in the Capella meta-model, and properly implement `AbstractFunction` in the `fa` package.
Our current documentation is a bit too technical. We need to improve the user experience there by providing at least layer-specific pages and an introduction to the API and supporting packages (aird parsing, rendering).
Under all `ArchitectureLayer`s we have `all_{functions,capabilities,...}` lookup `ElementList`s that made our lives easier regarding our other tools. By today, we have tools that make these items easily accessible anyway. For example:
import capellambse
all_logical_functions = model.search(capellambse.model.layers.la.LogicalFunction)
So, for all the look-ups that are simply derived via a `ProxyAccessor` (child relationship) and a concrete class type in the current layer, there is IMO no reason to define them.
Especially the following:
aren't working correctly. The `actor_exchanges` should catch all `ComponentExchange`s that have a `LogicalComponent` with `is_actor == True` as `source_port.owner` and/or `target_port.owner`... but this definition won't catch exchanges that were moved into `ComponentPackage`s.
There is an open question on how we want to make these two look-ups accessible.
We add an owner attribute on AbstractExchange in the fa crosslayer. For ComponentExchanges only Components and their Packages can be the owner. Then Option A is possible:
# Get all instances of LogicalComponents
all_logical_components = model.search("LogicalComponent")
# Get all instances of LogicalComponentPkgs
all_logical_component_pkgs = model.search("LogicalComponentPkg")
all_logical_component_exchanges = model.search("ComponentExchange").by_owner(
*all_logical_components, *all_logical_component_pkgs
)
all_logical_actor_exchanges = [
aex for aex in all_logical_component_exchanges
if aex.source_port.owner.is_actor or aex.target_port.owner.is_actor
]
We add a new attribute `containing_layer` on `GenericElement`s that gives the `ArchitectureLayer` instance beneath which the `GenericElement` is defined. This will need a new Accessor, which is more implementation work than option A, but leads to shorter code for usability.
all_logical_component_exchanges = model.search("ComponentExchange").by_containing_layer(model.la)
It would be nice to have a method that would allow finding all elements (within a model) that have a matching name (or contain a matching string fragment in the name). This ideally would result in a mixed element list.
The attributes (apart from username and password) can have a docstring. Some are already documented on the `FileHandler` ABC; we should add a link to it and explicitly mention which ones we support here.
The `open` method doesn't have a docstring yet.

It uses the same docstring as the base class. The NumPy style guide for docstrings doesn't explicitly say anything about this case, AFAICT. The Google style, however, says to insert a docstring à la """See base class""". I'll raise this in our next regular meeting, where we can discuss it properly with the entire team.
And `write_transaction` is referencing a private class' docstring (`_GitTransaction`).

It isn't referencing it, but rather copying it:

Or was your point that we shouldn't do this? (In which case, I'm curious as to why?)
Originally posted by @ewuerger in #99 (review)
We need a simple method, similar to pandas' `df.to_excel(filename)`, to dump requirement modules in ReqIF format.
Ideally, the call should look like this:
module = model.la.requirement_modules[0]
module.to_reqif("my_module.reqif")
In a Jupyter notebook, the following code:
import capellambse
model = capellambse.MelodyModel("tests/data/melodymodel/5_0/Melody Model Test.aird")
trajectory = model.search("Class").by_name("Trajectory")
assert len(trajectory.properties) > 0
trajectory.properties[0]
results in this error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File /tmp/tmp.QyReJLIMrO/lib/python3.10/site-packages/IPython/core/formatters.py:343, in BaseFormatter.__call__(self, obj)
341 method = get_real_method(obj, self.print_method)
342 if method is not None:
--> 343 return method()
344 return None
345 else:
File ~/git/capellambse/capellambse/model/common/element.py:309, in GenericElement._repr_html_(self)
308 def _repr_html_(self) -> str:
--> 309 return self.__html__()
File ~/git/capellambse/capellambse/model/common/element.py:282, in GenericElement.__html__(self)
279 fragments.append('</th><td style="text-align: left;">')
281 if hasattr(value, "_short_html_"):
--> 282 fragments.append(value._short_html_())
283 elif isinstance(value, str):
284 fragments.append(escape(value))
File ~/git/capellambse/capellambse/model/common/element.py:299, in GenericElement._short_html_(self)
296 def _short_html_(self) -> markupsafe.Markup:
297 return self._wrap_short_html(
298 f" "{markupsafe.Markup.escape(self.name)}""
--> 299 f"{(': ' + str(self.value)) if hasattr(self, 'value') else ''}"
300 )
File ~/git/capellambse/capellambse/loader/xmltools.py:96, in AttributeProperty.__get__(***failed resolving arguments***)
94 xml_element = getattr(obj, self.xmlattr)
95 try:
---> 96 return self.returntype(xml_element.attrib[self.attribute])
97 except KeyError:
98 if self.default is not self.NOT_OPTIONAL:
ValueError: could not convert string to float: '*'
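The crash comes from `AttributeProperty` feeding the raw XML string straight into `float()`. A hedged sketch of one possible fix, using a converter that understands Capella's `*` ("unbounded") multiplicity marker (the function name is illustrative, not the actual library code):

```python
import math


def parse_float(raw: str) -> float:
    """Convert an XML attribute value to a float.

    Capella writes the literal "*" for unbounded multiplicities, which
    plain float() rejects; map it to infinity instead.
    """
    if raw.strip() == "*":
        return math.inf
    return float(raw)
```

An `AttributeProperty` configured with such a converter as its `returntype` would render the property page instead of raising from the middle of `_repr_html_`.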
To reduce on-boarding effort, we should demonstrate how one could generate a simple document out of a model with Jinja templates. We'll use HTML as source and target, as this can be nicely visualized in Jupyter without any additional software. We should also add some hints on how to move from there in other directions, like Markdown, WeasyPrint or python-docx.
We cannot expect every user to put their models in Git LFS (Large File Storage).
Martin wrote (#22 (comment)):
I'd consider the hard requirement on Git LFS a bug. Although it has become quite common, we still can't just require everyone to install LFS even if they don't use it.
After a brief look at the affected code, it should be fairly straightforward to catch the produced error and pretend that `git lfs ls-files` had simply listed nothing. Getting test coverage on both cases in a single test run might become annoying though :)
Suggested solutions:
At the moment it is only possible to find object-to-object (Class) relationships via low-level API calls, which makes adoption of the library for interface spec generation a bit difficult.
We should provide a high level API to facilitate object relationships exploration.
In Capella metamodel, the object relationships are captured via Association and Property objects. The below set of class diagrams describes the Capella implementation:
Let's now apply this to the following practical example: a `Trajectory` object is made of an ordered list of `Waypoint` objects.
There are a few paths that we could take to get the end user from a `Trajectory` object to a `Waypoint` object and the other way around:
* `trajectory.properties.by_name("waypoints").type` --> `Waypoint`
* `model.search("Association").by_name("DataAssociation1").roles[1].type` --> `Waypoint`
Next actions:
In Capella it is possible to explore allocation traces between exchanges (via semantic browser).
This library should enable that kind of exploration too, so lets add the following object properties:
* `FunctionalExchange.allocating_component_exchange` --> `ComponentExchange` or `None`
* `FunctionalExchange.owner` --> `ComponentExchange` (shortcut)
* `ComponentExchange.allocating_physical_link` --> `PhysicalLink` or `None`
* `ComponentExchange.allocating_physical_path` --> `PhysicalPath` or `None`
* `ComponentExchange.owner` --> `PhysicalLink` or `PhysicalPath` or `None` (shortcut)

We are also missing `.all_functional_exchanges` at the physical layer.
The way we currently detect which files in a Git repo use LFS has two major flaws:
1. It only works if `git-lfs` is installed on the host. If it is not, all files will be treated as if they weren't using LFS, even if they are. When attempting to load a model from a Git repo under these conditions, a very non-obvious error is raised:
Traceback (most recent call last):
File "/home/martinlehmann/git/capellambse/./_modeltest.py", line 137, in <module>
model = capellambse.MelodyModel(**modelinfo)
File "/home/martinlehmann/git/capellambse/capellambse/model/__init__.py", line 177, in __init__
self._loader = loader.MelodyLoader(path, **kwargs)
File "/home/martinlehmann/git/capellambse/capellambse/loader/core.py", line 215, in __init__
self.__load_referenced_files(
File "/home/martinlehmann/git/capellambse/capellambse/loader/core.py", line 240, in __load_referenced_files
frag = ModelFile(filename, self.filehandler)
File "/home/martinlehmann/git/capellambse/capellambse/loader/core.py", line 102, in __init__
self.tree = etree.parse(
File "src/lxml/etree.pyx", line 3521, in lxml.etree.parse
File "src/lxml/parser.pxi", line 1876, in lxml.etree._parseDocument
File "src/lxml/parser.pxi", line 1896, in lxml.etree._parseMemoryDocument
File "src/lxml/parser.pxi", line 1784, in lxml.etree._parseDoc
File "src/lxml/parser.pxi", line 1141, in lxml.etree._BaseParser._parseDoc
File "src/lxml/parser.pxi", line 615, in lxml.etree._ParserContext._handleParseResultDoc
File "src/lxml/parser.pxi", line 725, in lxml.etree._handleParseResult
File "src/lxml/parser.pxi", line 654, in lxml.etree._raiseParseError
File "<string>", line 1
lxml.etree.XMLSyntaxError: Start tag expected, '<' not found, line 1, column 1
2. It doesn't work well with branches. `git lfs ls-files` always works on `HEAD`, which may not be related to the branch we're interested in. If a file is using LFS on the branch we're using, but is not marked with LFS in the `HEAD` commit, we will currently not apply the LFS filters. This leads to the same error as above. If, on the other hand, a file is marked LFS in `HEAD` but is not actually LFS on our branch, we will try to apply the filter. This produces a warning message, which gets demoted to "debug" level, as the operation succeeds anyway.
To address both points simultaneously, I propose switching away from `git lfs ls-files` and instead inspecting the `.gitattributes` files ourselves. This can either be done once for the entire repo with the results saved, similar to how it works now, or it could be done for each file whenever it is `open()`ed. For flexibility, I prefer the latter approach, but it may lead to performance issues, especially on Windows, due to repeated calls out to `git`.
We can take advantage of the fact that `git-lfs` always uses the standardized filter name `lfs` (lower-case), which greatly simplifies the operation and avoids the need to additionally parse any git configuration files and/or guess the actual filter name. However, our implementation does need to be aware that `.gitattributes` files can exist on any directory level, not just in the repository root.
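A simplified sketch of such an inspection for a single `.gitattributes` file (real attribute matching has more rules, e.g. negated patterns, macros, and per-directory files taking precedence, which this sketch ignores):

```python
from __future__ import annotations

import fnmatch
import posixpath


def uses_lfs_filter(filepath: str, gitattributes: str) -> bool:
    """Check whether *filepath* is assigned ``filter=lfs``.

    Later lines override earlier ones, mirroring gitattributes
    precedence.  Only plain glob patterns are handled here.
    """
    result = False
    for line in gitattributes.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        pattern, *attrs = line.split()
        # A pattern without a slash matches the basename anywhere in
        # the tree; one with a slash matches the full relative path.
        target = filepath if "/" in pattern else posixpath.basename(filepath)
        if fnmatch.fnmatch(target, pattern.lstrip("/")):
            if "filter=lfs" in attrs:
                result = True
            elif "-filter" in attrs:
                result = False
    return result
```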
Physical components are supposed to have a `nature` attribute; however, for the root physical component this attribute is undefined. The current implementation assumes it is always defined and expects the value to match an enum. This fails with a KeyError.
The issue can be reproduced on the test model `5_0` with the following code: `model.pa.all_components[0].nature`
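A hedged sketch of a more tolerant lookup; the enum members mirror Capella's nature values as I understand them, and the parsing function is illustrative rather than the library's actual accessor:

```python
from __future__ import annotations

import enum


class Nature(enum.Enum):
    """Physical component natures (illustrative)."""

    UNSET = "UNSET"
    NODE = "NODE"
    BEHAVIOR = "BEHAVIOR"


def parse_nature(raw: str | None) -> Nature:
    """Parse the XML ``nature`` attribute.

    The root physical component carries no ``nature`` attribute at all,
    so a missing value must fall back to UNSET instead of raising.
    """
    if raw is None:
        return Nature.UNSET
    return Nature(raw)
```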
Currently, we have a relatively large (13 lines) header in each Python source file. The Apache-2.0 license requires that such a notice be put there, and it provides this copy-pasta as an example (see the definition of "Work" in §1), but it does not mandate that this long notice must be used.
This is where SPDX comes into play. It offers a standardized, compact, but human-readable format for declaring licenses (among other things). Using SPDX, the license header could be condensed down to two lines. Omitting the year from it prevents mistakenly copying an older year into a new file, and avoids having to fix all affected files each year. (I'm assuming that this library will keep being maintained for a while. ;) )
# Copyright DB Netz AG and the capellambse contributors
# SPDX-License-Identifier: Apache-2.0
This is much easier on the eyes, and greatly increases the ratio of code to boilerplate, especially for small files.
I'll start with an example:
The `LogicalFunction.owner` attribute currently links to the `LogicalComponent` that this function is allocated to (it is in `component.allocated_functions`). This behaviour was introduced when we wanted to access the owners of functions displayed in diagrams. But in the explorer, the owner should be either of type `LogicalFunction` or `LogicalFunctionPkg`.
In Capella's semantic browser it says:
Here it is called "parent". We should be consistent with the naming of attributes on `ModelElement`s, so as not to confuse the user and, most importantly, ourselves.
In addition to the model itself, there can also exist auxiliary files next to it. These files are defined by users of the model. This will be especially useful in conjunction with the GitFileHandler or similar, where such files would be downloaded automatically and transparently to the user.
In order to implement this functionality, we first need to define an appropriate API:
* ... the `MelodyModel` instance.
* The `MelodyLoader` provides access to the underlying file handler.

The first two points can be solved in a very simple and effective manner by exposing the actual `FileHandler` object. This allows the `FileHandler` to implement any arbitrary API without having to worry about name collisions with attributes of the model object.
For the last point we can implement a `PathLike` API. This is both user-friendly and allows code that works with `pathlib.Path` objects to also transparently handle files in the file handler.
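A minimal sketch of what such a `PathLike` API could look like (`HandlerPath` and its fields are hypothetical names, not the library's):

```python
from __future__ import annotations

import os


class HandlerPath(os.PathLike):
    """A path-like handle to a file inside a file handler (sketch)."""

    def __init__(self, root: str, name: str) -> None:
        self.root = root
        self.name = name

    def __fspath__(self) -> str:
        # Implementing __fspath__ makes this object acceptable to
        # open(), pathlib.Path(), shutil, and anything else that
        # accepts an os.PathLike.
        return os.path.join(self.root, self.name)
```

With this, e.g. `pathlib.Path(HandlerPath(...))` works transparently.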
To enable further development of the diagramming engine, we should provide an overview page (documentation) that explains:
In Capella, a state can be realized by another state (of a higher level). We are missing the ability to check that realization attribute in the current API.
Currently, when a model is loaded that links to a library project, an exception like the following is raised:
FileNotFoundError: [Errno 2] No such file or directory: 'TestProject/platform:/resource/Test%20Library/TestLibrary.capella'
This occurs due to two new types of links in the model, which currently aren't handled correctly or at all:
1. `platform:` links, which look similar to `platform:/resource/Test%20Library/TestLibrary.capella`. `capellambse` does not recognize this special syntax and interprets them simply as relative paths, which leads to the above exception.
2. Relative links that reach outside of the project containing the `.aird` file. This is not handled correctly by `capellambse` due to the applied path normalization: if a relative link would go beyond the top of the hierarchy, it is cut off and constrained to within the hierarchy. However, if there are layers above this project root within the file handler (as can be the case e.g. in git, if the `.aird` file lives in a subdirectory of the repo), the file handler would attempt to find the mentioned file, fail, and raise a similar `FileNotFoundError` as above.

There currently are no publicly available practical examples on how to use the Property Value Management extension, and the documentation about it is also lacking some important details. We should improve the docs and add an example notebook, similar to how it was done for the Requirements extension.
Currently, getting Requirements by their RequirementType is not easy:
rationale_type = model.search(reqif.XT_REQ_TYPE).by_long_name("ReqType")[0]
req = fnc.requirements.by_type(rationale_type)[0]
req.type.long_name == "ReqType"
The more intuitive behaviour, which is also how it behaved earlier:
>>> rationale_reqs = fnc.requirements.by_type("ReqType")
>>> for req in rationale_reqs:
...     print(req.type)
ReqType
ReqType
ReqType
...
While modeling processes (operational, functional), people capture context information which is attached to exchanges (i.e. configuration/values of an exchange item, constraints, etc.). In the current API, via the `.involved` property, we skip the involvement object as technical and deliver a list of end elements (functions or operational activities, functional exchanges, etc.), leaving no means for working with the involvement context itself. What would help is a new property of a functional chain / operational process, like `.involvements`, that would deliver a list of involvement objects from which the end user could retrieve the involvement context or the involved element.
Capella metamodel analysis follows:
The below view is of particular interest for the use case, as it shows the `exchange_context` relationship with `Constraint`:
When a `MelodyModel` is being loaded with a filled `diagram_cache` param, we try to request diagrams from the so-called Capella diagram service.
When such a request does not return a diagram, the algorithm shall fall back to the internal diagram engine.
That kind of behaviour will also cover cases where we access synthetic context diagrams.
In the description of `GenericElement`s you can link to other `GenericElement`s, which leaves an `<a>GenericElement.name</a>` in the XML attribute value. In the case of a `Specification`, we convert these links, if present, to `#{uuid}`. This is inconsistent with the standard Capella hlink format: `hlink://{uuid}`.
In the current collection of test models we are missing a test model for Capella 5.1, so that we could see if any other API gets broken there (as it does at the moment for diagram rendering).
A really weird thing that happened during development of Association factory:
Normally we define a special factory function and try to reuse the generic_factory, since this already gets almost everything done. Most special factories just deal with the labels that should appear.
With associations in class diagrams it's a bit different:
Here we have to deal with multiple (I think at most 2) labels that specify the role names. I tried the following changes to `aird.diagram.Edge`:
Edges now have labels that default to an empty sequence.
This wasn't explicitly handed over in the generic factory and caused repetition of all preceding `edge.labels` in the newly constructed edge... almost like the edge wasn't constructed freshly, just reused.
I'd really like to understand how this could happen in my favourite programming language.
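One likely explanation, assuming the labels default was written as a default argument, is Python's classic mutable-default pitfall: the default list is created once, at function definition time, and shared across all calls that rely on it. A toy reproduction, with `Edge` standing in for the real `aird.diagram.Edge`:

```python
class Edge:
    """Toy stand-in for the real Edge class."""

    def __init__(self, labels=[]):  # BUG: one shared list for all edges
        self.labels = labels


class FixedEdge:
    """Same class with the conventional fix."""

    def __init__(self, labels=None):
        # Create a fresh list per instance instead of sharing one.
        self.labels = list(labels) if labels is not None else []


first = Edge()
first.labels.append("rolename1")
second = Edge()  # "freshly constructed", yet...
# second.labels already contains "rolename1": it IS first.labels.

fixed = FixedEdge()
fixed.labels.append("rolename1")
# FixedEdge instances do not leak labels into each other.
```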
Copied here for visibility from #43 (comment) ("Make RequirementType and EnumValue hashable")
There's a good reason why Python requires an explicit override of the `__hash__` method when overriding `__eq__`. Just keeping the parent class' `__hash__` in this case violates the one fundamental requirement on `__hash__`:
The only required property is that objects which compare equal have the same hash value
Since instances of these classes compare equal to the string value of their `.long_name`, their hash value must be equal to that of the `long_name`; in other words, `hash(some_enum_value) == hash(some_enum_value.long_name)` must always be `True`. Otherwise they cannot be looked up in hash collections (i.e. dictionaries) by their `long_name`, which is the exact property that we were after with having them compare equal in the first place. Try this with our 5.0 test model:
>>> dtd = model.by_uuid("637caf95-3229-4607-99a0-7d7b990bc97f")
>>> dtd.values
<CoupledElementList at 0x00007F2D8FB9BCB0 [<EnumValue 'enum_val1' (efd6e108-3461-43c6-ad86-24168339ed3c)>, <EnumValue 'enum_val2' (3c2390a4-ce9c-472c-9982-d0b825931978)>]>
>>> "enum_val1" in dtd.values
True
>>> "enum_val1" in set(dtd.values)
False
Because the `__eq__()` of these two classes delegates to `GenericElement` through `super()`, the hash value of `EnumValue`s must also be equal to the hash of `self._element` (see https://github.com/DSD-DBS/py-capellambse/blob/a501bdf2a77ea906f79cc79bae793283a805dff3/capellambse/model/common/element.py#L206..L207). Satisfying both conditions at once is not possible for obvious reasons, but it's safe to drop the super delegation.
However, it is not safe to base the `hash()` on the `.long_name`. Remember: the only reason we can get away with `GenericElement`s being hashable at all is that their `_element` is considered immutable over their lifetime. Everything else, specifically everything that accesses the XML, is mutable and can therefore not be used in hashes.
Therefore, the only possible course of action for this pull request (that I can come up with) is for it to be rejected (or reverted, as it has already found its way into master).
There are basically two ways how we can address the unhashability of `EnumValue` and `AbstractType`:
* ... the `.long_name` attribute

Both cause inconveniences for different use cases.
In my opinion, we should go with option 1 and advise users who unconditionally require the hashability of all model objects (independent of their type) to explore alternative solutions. To facilitate this, we can "open up" the `_element` a little bit. Currently we treat it as a private attribute; we could "publish" it as an opaque but hashable object. Then advanced users could use it to make hashes of every possible model object. In this case, to make the user experience around it more pleasant, we should also offer a function that takes such an object and converts it back into a "proper" model object, i.e. a `GenericElement` instance. This would most likely be a method on `MelodyModel`, analogous to `.search()` and `.by_uuid()`.
Of course, another solution, which should be even easier for everyone involved, would be to hash the `.uuid` instead of the object itself. That can already be looked up in constant time via `MelodyModel.by_uuid()`. Depending on your use case, this might just do the trick.
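A compact illustration of the invariant in question, with a toy class that (like `EnumValue`) compares equal to a plain string. This toy deliberately ignores the mutability concern discussed above; it only demonstrates the hash/eq contract:

```python
class Named:
    """Toy analogue of EnumValue: compares equal to its name string."""

    def __init__(self, name: str) -> None:
        self.name = name

    def __eq__(self, other: object) -> bool:
        if isinstance(other, str):
            return self.name == other
        return NotImplemented

    def __hash__(self) -> int:
        # Equal objects must hash equal, so delegate to the string.
        return hash(self.name)
```

With this pairing, `"enum_val1" in {Named("enum_val1")}` is `True`; with any other `__hash__`, the set lookup silently fails, exactly as in the 5.0 test model demo above.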
Summoning @amolenaar and @vik378, the people involved in #43.
Managing requirement objects is an important part of our MBSE workflow and one of the primary use cases for our library; hence, we should provide some common usage examples.
The following topics should be covered:
It would be helpful to indicate the number of classes, relationships and queries that we already cover vs. those discovered (based on Capella meta-model analysis). The implementation status is already collected via a PVMT attribute in the API model.
This includes the following actions:
IMO it would be better to add this to the 5.0 test model, instead of adding some cases to the 5.2 one. Keeping as much as possible in the same model will make it easier to migrate to a higher minimum supported version in the future, because there's only one model with actual useful content.
Originally posted by @Wuestengecko in #94 (comment)
Keeping support for Python 3.8 introduces a notable amount of developer overhead. This is primarily due to it being the last version without PEP 585. This PEP deprecates most of the stdlib-parallel classes in `typing` (e.g. `List`, `Set`, etc.) and allows type hints to be written with the stdlib classes themselves.
This works fine in a type annotation context; however, when we want to subclass something, we still need to go back to the `typing` classes for that instance. This is not only annoying, but also makes the code a little less clear to read, due to essentially the same class being used from different modules.
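A small demonstration of the asymmetry: with `from __future__ import annotations`, annotations using builtin generics are fine even on 3.8, but base classes are evaluated eagerly, so the snippet as written needs Python 3.9+:

```python
from __future__ import annotations


def unique_names(names: list[str]) -> list[str]:
    # Fine on 3.8: the annotation stays a string until it is inspected.
    return sorted(set(names))


class NameList(list[str]):  # Needs 3.9+; 3.8 must use typing.List[str]
    """A list subclass specialised for names."""
```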
Dropping support for Python 3.8 entirely is at this point a reasonable choice, in my opinion. However, we do need to analyze our dependency chain and make sure that all dependencies and dependents, as well as all relevant tool configuration, have been updated accordingly.
Using models with a space in the name results in a failure to render diagrams. We can then see the diagram name, uuid and description, but any call that involves the renderer (i.e. `.nodes` or the actual SVG preview) will fail with `ValueError: Malformed link: 'test 5.1.aird#_r-0YcAauEeydodL3xp60Ww'`.
The failure occurs in the regex of the `follow_link` method of `MelodyLoader` in the `loader.core` module.
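The root cause is presumably a link pattern that does not allow spaces in the file part. A hedged sketch of a more tolerant split (not the library's actual regex): everything before the last `#` is the fragment file, the rest is the element ID.

```python
from __future__ import annotations


def split_link(link: str) -> tuple[str, str]:
    """Split "<file>#<element-id>", tolerating spaces/dots in the file."""
    fragment, sep, element_id = link.rpartition("#")
    if not sep:
        # No "#": the link addresses an element in the current fragment.
        return "", link
    return fragment, element_id
```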
Steps to reproduce:
1. `model = capellambse.MelodyModel("test.aird")`
2. `model.diagrams`
3. `model.diagrams[idx]`, where `idx` is a number from the above list

As the filter names in the XML are quite cryptic and arbitrary at times, it might be a good idea to add an Enum or something similar that has all the filters that exist. Each member could have a short but ideally still readable name, and map to the internal XML name. Then each docstring could have the full name shown in the GUI.
I added the `_enum.py` module where all found filters are stored. Let's finalize this PR by using the filter names defined there. What do you think, @Wuestengecko?
Originally posted by @ewuerger in #84 (comment)
Using the `git://` protocol (specifically the `git+ssh://` variant of it) is not obvious for new users, especially if they're used to the `scp`-like short form. Additionally, the error message produced when trying to use the short form is confusing and suggests that Git-via-SSH is not supported at all.
Alternatively, a special case could be implemented in the URL parsing logic that handles the `scp`-like short form.
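A sketch of that special case: rewrite `[user@]host:path` into an explicit `git+ssh://` URL before handing it to the normal parser. The regex and corner-case handling are illustrative; real inputs also include ports, IPv6 hosts, and Windows drive letters that contain a colon.

```python
from __future__ import annotations

import re

# [user@]host:path, with no URL scheme and a single separating colon.
_SCP_RE = re.compile(r"^(?:(?P<user>[^@/]+)@)?(?P<host>[^:/]+):(?P<path>.+)$")


def normalize_git_url(url: str) -> str:
    """Expand git's scp-like short form into a git+ssh:// URL."""
    if "://" in url:
        return url  # Already an explicit URL; leave it alone.
    match = _SCP_RE.match(url)
    if match is None:
        return url
    user = f"{match['user']}@" if match["user"] else ""
    return f"git+ssh://{user}{match['host']}/{match['path']}"
```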
Remove all support functionality for pre-5.x Capella versions. It's probably broken anyway, since we don't test against the 1.x test models (these can therefore go too).
There are a few drawing issues in Class diagrams:
* the `Diamond-Marker`, since the aggregation kind of role trajectory is a composition
* `Box.NumericType` and `Box.StringType` for Class Diagram Blank
Actually:
Solved in branch code-generation.
When loading a model that links in a library, there is currently no straightforward way to access or search for elements defined in the library:
1. `capellambse` allows runtime modifications of objects defined in libraries, but it will not save those libraries during `MelodyModel.save()`. This can easily lead to inconsistencies, and therefore should be changed to a) disallow modification of objects in the first place if they are defined in a library with access policy `readOnly`, and b) save all libraries along with the base model.
2. `model.la.all_functions` etc. currently only search the base model, not linked libraries. (Whether or not we want to change this is up for debate, but either way this needs to be documented.)
3. `model.search()` finds elements from the base model and all linked libraries, but there is no easy way to figure out where an element is defined.

Apart from that, there are a few simple additions that we can make to the API to solve (2):
* `MelodyModel.libraries`, which acts as a dict-like and provides access to each library's layers. In other words, given a library `"testlib"` in a `model`, a possible call would be: `model.libraries["testlib"].la.all_functions`.
* An option on `MelodyModel.search()` which restricts the search to a certain subset of the model, e.g. "search only the base model" or "search only the linked `testlib`".

By default, a requirement object is described as a `GenericElement`; however, since it is closer to a ReqIF element, it needs to be described differently. The following attributes should be visible:
At the moment it is possible to get the list of requirements linked to a model element via `.requirements`. While working with this list in a document template, we frequently face the following challenges:
While working with requirement objects I also noticed that attribute selection and retrieval needs some improvement:
* `req.attributes[1].values[0].long_name` to get the value of attribute 1. And I'll need to somehow find out upfront that attribute 1 is what I'm after. It would work better if the attribute access looked like this: `req.attributes["ChangeStatus"] -> "Unmodified"`
* It would be nice to have `"ChangeStatus" in req.attributes` working. This will be implemented by providing a `keys()` function, due to the different semantics of `__contains__` in lists and dicts. Therefore, the actual check will be `"ChangeStatus" in req.attributes.keys()`.
It might be a little late for that discussion (although it can never be too late for improvements!), but do you think it makes sense to refactor the `aird.DiagramDescriptor`
to include a reference to the actual XML element instead of just its UUID? This would allow code in `aird` that actually uses that descriptor to avoid essentially the same lookup in several places, and therefore also avoid the possibility of these lookups being implemented slightly differently each time (see https://github.com/DSD-DBS/py-capellambse/blob/master/capellambse/aird/parser/__init__.py#L166-L168).
Aside from that point, it might make sense, for API consistency reasons, to pass the `DiagramDescriptor` instead of just the `uid` part here, even though the other parts of it aren't actually used in this function.
Originally posted by @Wuestengecko in #84 (comment)