
model-evaluation's People

Contributors

astante, benjaminbeichler, berndhekele, cyrilcornu, hm-teststudent, janwelte, jastram, jenkins-semiant, jonashelming, ladenberger, mahlmann, mariellepetitdoche, matthiaskuhn, mercebardot, mercementre, srieger, stanpintethesignallingcompany, uwesteinkefromsiemens


model-evaluation's Issues

ERTMSFormalSpecs assessment by Assessor 2 / semi-formal model / structure of the language

In the following note:

"For example, formal language dedicated to software design as Scade or Classical B, how to structure the model and to define its architecture is an important part of the language, to allow the design of e cient software. i do not see how to link the structure of an ERTMS Formal Spec model closed to SRS to the one of a software."

Could you please remove this comment? ERTMSFormalSpecs doesn't include a software generator, and therefore the structure of the model can be decoupled from the structure of the software.

Warning and Error in GNATprove

When I open the project in GPS 5.1.1 I get the following warning:

default.gpr:1:9 warning: object directory "obj/" not found

When trying to execute "Prove All" or "Prove Root Project", I get the following error message from gnatprove (version "2012 (20120509)"):

gnatprove -P/users/gudemann.home/tmp/model/tmp/model/GNATprove-MERCE/default.gpr --mode=prove --ide-progress-bar
Phase 1 of 3: frame condition computation ...
Phase 2 of 3: translation to intermediate language ...
+===========================GNAT BUG DETECTED==============================+
| GPL 2012 (20120509) (i686-pc-linux-gnu) Program_Error gnat2why-decls.adb:626 explicit raise|
| Error detected at a-chtgbk.adb:30:1 [a-cfhama.adb:83:4 [com_map.ads:31:4]]|
| Please submit a bug report by email to [email protected].               |
| GAP members can alternatively use GNAT Tracker:                          |
| http://www.adacore.com/ section 'send a report'.                         |
| See gnatinfo.txt for full info on procedure for submitting bugs.         |
| Use a subject line meaningful to you and us to track the bug.            |
| Include the entire contents of this bug box in the report.               |
| Include the exact gcc or gnatmake command that you entered.              |
| Also include sources listed below in gnatchop format                     |
| (concatenated together with no headers between files).                   |
| Use plain ASCII or MIME attachment.                                      |
+==========================================================================+

Please include these source files with error report
Note that list may not be accurate in some cases,
so please double check that the problem can still
be reproduced with the set of files listed.
Consider also -gnatd.n switch (see debug.adb).

/users/gudemann.home/tmp/model/tmp/model/GNATprove-MERCE/src/com_map.ads
/users/gudemann.home/tmp/model/tmp/model/GNATprove-MERCE/obj/gnatprove/GNAT-TEMP-000001.TMP
/users/gudemann.home/tmp/model/tmp/model/GNATprove-MERCE/src/data_types.ads

compilation abandoned

   compilation of com_map.ads failed

gprbuild: *** compilation phase failed
gnatprove: error during translation to intermediate language, aborting.

How can I proceed? Does the error depend on the first warning?
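
For the first warning, I assume the project simply lacks the object directory, either on disk or in the project file. A minimal sketch of what I would expect default.gpr to contain (the directory names "src" and "obj" are assumptions taken from the paths listed above; the actual file may differ):

--  Hypothetical default.gpr; directory names are assumed from the paths above.
project Default is
   for Source_Dirs use ("src");
   for Object_Dir use "obj";  --  the directory GPS reports as not found
end Default;

Creating obj/ by hand, or building once with gprbuild -p (which creates missing object directories), should at least silence the warning.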

Why3 benchmark

David,

You have decided to drop the Why3 model.
I think it would be nice to have a short explanation of why you think this tool is not suitable for the openETCS project.

Maybe this tool can be taken into consideration for V&V activities.

applicable area of track conditions

I have a question concerning §3.13.6.2.1.5.

According to certain special track conditions, several special brakes must not be used.

The start, length and end of the track conditions are described in the messages, but there is no information about the area over which they should be regarded as applicable. In my understanding, the whole train length should also be taken into account, as is done for example with gradients, because no assumptions are made about the position of the special brake.

Am I reading the SRS wrong, or is this a spec issue?

Kind regards,

Benjamin

ERTMSFormalSpecs assessment by Assessor 2 / Main usage of the approach / missing documentation items 2

In section C.2 Main usage of the approach, there is the following note:

"The tool allow the generation of some reports as data dictionary, specification
issues or coverage reports, but no element to understand how the system works in themodel. The current produced elements are not su�cient for a design process."

1/ Incorrect statement. The User Guide explains how the model is built, and has received positive feedback from industry, from a model-based design perspective.

2/ Please justify why 1 instead of 3 for Documentation. Please identify missing documentation items.

Evaluation matrix issue (quantification)

Criteria should be quantified

If we are to take a decision on methods and tools, it must be on an objective basis, i.e. compliance with the criteria.

That compliance must be quantified somehow (for instance, rated on a scale of x/100). That rating has to be justified, so that it can be assessed by others.

Of course, we can decide to give different weights to different criteria, and even to combine languages and tools to obtain a best-of-breed method (for instance, ERTMSFormalSpecs doesn't include formal proof; if combined with another approach enabling proof, it could score higher).
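
As an illustration only (not a proposal; the weights w_i and ratings r_i are placeholders), such a combination could be a simple weighted average:

   overall score = (w_1*r_1 + w_2*r_2 + ... + w_n*r_n) / (w_1 + w_2 + ... + w_n)

This keeps the overall score on the same 0-100 scale as the individual ratings, while each rating r_i keeps its own written justification that others can assess.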

Looking forward to your feedback, I wish you an excellent evening.

Very kind regards

Stan

ERTMSFormalSpecs assessment by Assessor 2 / use of the tool / cooperation of tools

Cooperation of tools (D2.6-02-076)

The rating of 1 is unjustified. The model is stored in XML file format, and interaction with the Eclipse toolchain is already implemented by exporting the full ERTMSFormalSpecs model to an EMF repository.

Could you please update your rating for:

  • Cooperation of tools (D2.6-02-076)
  • Tool chain integration

ERTMSFormalSpecs assessment by Assessor 2 / main usage / tests or model or both executed

In the following note:

"It is not clear if the test or the model are executed."

Both are executed. The tests are made up of actions on the model variables and expectations on those variables, and the model itself is executed while the tests run.

Could you please read the documentation regarding testing and point to any unclear statement or missing explanation?

How to handle scalability issues with Event B?

The current formal model covers only a small part of the SRS (e.g. §3.5.3). How do you plan to cover the whole SRS with Event B? Through several small models, each covering a subpart of the SRS? Through other means?

ERTMSFormalSpecs assessment by Assessor 2 / Main usage of the approach / model verification capabilities

In section C.2 Main usage of the approach, there is the following note:

"For verification, it is di�cult through the document to understand what is verified. It is
verifications are those provided by the tool as lexical or syntax verification. I have not find other kind of verification as type or coverage. How to verify the model in a more general way (indeed that the model specifies well the informal specification) is not describe in the documents."

This is an incorrect statement. ERTMSFormalSpecs provides verification through its User Interface, through traceability, and through testing and coverage analysis. The documentation also clearly covers how to verify the model in a general way.

Therefore, could you please:

1/ justify why 1 instead of 3 for Verification?

2/ justify why 1 instead of 3 in section C.8 Main usage of the tool, for model verification?

3/ identify any missing documentation items?

ERTMSFormalSpecs assessment by Assessor 2 / main usage / model checks

In the following note:

"The only verification supported is lexical and syntactical verification, no
means are provided to check for example types, domains, or properties."

Incorrect statement. Types and domains are verified by the ERTMSFormalSpecs tool.

Could you please remove that note?

ERTMSFormalSpecs assessment by Assessor 2 / Language / modular design

There is the following note:

"But this way do not allow to have a really modular design : all the data are stored in the same database, and not clear share of function is provide to allow the design of module in an independent way."

Incorrect statement. Modular design can be achieved by using Namespaces, functions, structures, etc. (all typical artifacts of modular languages). The fact that the whole model is stored in the same database is not related to modularity.

Please justify why 1 instead of 3 for Modular Language, and identify missing documentation items, if any.

Evaluation matrix issue (Open process, relationship to WP2, quantification)

Open, collaborative process to establish the evaluation matrix

I understand you have started to work on the matrix; however, all stakeholders of the project should be allowed to participate in the drafting and review of this matrix.
This matrix should not be presented at the Munich workshop without having been discussed on the WP2, WP3 and WP7 mailing lists.

I therefore think that this matrix should be put in Git in the https://github.com/openETCS/model-evaluation repository, as a version-controlled .tex document, so that we can collaboratively contribute, review and comment. (Maybe this is already the case; sorry if I missed it.)

Could you confirm that this will be the case?

ERTMSFormalSpecs assessment by Assessor 2 / semi-formal model / requirements justification + traceability

There is the following note:

"Concerning management of requirement justification and traceability to exported
requirements, the current model do not show how this can be systematically covered."

Incorrect statement. The ERTMSFormalSpecs toolchain allows for complete traceability and provides complete tooling support to enforce this traceability.

Justify why 1 instead of 3 for "Management of requirement justification" and "Traceability of exported requirements"

Evaluation matrix issue (relationship to WP2 reqs)

Criteria should come from the WP2 requirements, listed in https://github.com/openETCS/requirements/blob/master/D2.3/Synthesis/req_synthesis.tex

I don't understand why the criteria should come from other sources (the report on methodologies, for instance). Criteria should be based on requirements.
These requirements can come from the WP2 requirements list (req_synthesis.tex) or from other applicable standards like EN50128.

Especially after the collaborative work that has been done around the WP2 req_synthesis document, this should be the most important input for the matrix.

However, not all EN50128 requirements are applicable to the language and toolchain, and therefore not all of them should be taken into account in the evaluation matrix. The report from Merlin should provide a starting base for this: extract the EN50128 requirements that have been judged applicable to language and tools, and insert them into the matrix.

Readability issues in Event B model

When reviewing the Event B model, I found the textual explanations well done and necessary. However, I had trouble linking the textual explanations with the formal text, especially in the complex parts like §5.10.

Moreover, I think the formal model should be as self-explanatory as possible.

I would suggest giving explicit names to invariants and guards. Instead of inv1, inv2, ..., use references to SRS paragraphs and the covered properties (e.g. 3_5_3_7_part_of_start_of_mission).

I would also suggest using these explicit names in the body of the textual explanations surrounding the formal parts.

ERTMSFormalSpecs assessment by Assessor 2 / Language / standardization

There is the note:

"Standardization is poor due to no methodological guide to apply the approach"

Incorrect statement. Standardization means documentation support, public availability, etc.

Could you please remove the note, or provide due justification?

ERTMSFormalSpecs has good documentation support, including a methodology for its intended scope, which is modelling the Subset-026.

Could you please justify why 1 instead of 2 or 3?

how to handle spec issues

Dear all,

As my last question mentioned a possible spec issue, my question here is how to handle such things. What should the process be for getting information from domain specialists?

I know this is only an evaluation benchmark, but even at this point it is annoying to have wrong behavior in the models.

ERTMSFormalSpecs assessment by Assessor 2 / semi-formal model / timeouts and truth tables

There is the following note:

"The current model does not cover the § 3.5 on session management and the
§4, thus it is di cult to evaluate the capabilities of the language to treat time-outs and
truth tables."

This statement is incorrect, as a good deal of §3.5 is implemented, including timeouts (see for instance the screenshot below).

[Screenshot: timeouts in the ERTMSFormalSpecs model]

Also, the high-priority elements that should be modelled to demonstrate support for truth tables (HIGH PRIORITY §4.6.2 (Transitions Table) and §4.6.3 (Transitions Condition table)) have been fully and successfully modelled.

Therefore, could you please:

  • remove the note
  • mark truth tables and timeouts as supported

ERTMSFormalSpecs assessment by Assessor 2 / Language / formal or not

There is the following note:

"ERTMS Formal Spec is not a formal language : semantic is not clearly
defined thus no formal verification can be done."

1/ Incorrect statement. ERTMSFormalSpecs is a formal language, in the sense that its semantics are 100% defined. Please identify any undefined semantics.

2/ According to the widely accepted Wikipedia definition (http://en.wikipedia.org/wiki/Formal_language#Definition), ERTMSFormalSpecs is a formal language. Could you please remove this note?

3/ Could you please justify why 1 instead of 3 for Formal Language?

ERTMSFormalSpecs assessment by Assessor 2 / semi-formal model / translation to another language

In the following note:

"The current elements (document, GUI) do not allow to give a clear view of the structure and if this structure can be easily translated to the structure of an another language"

Complete translatability to SCADE has been validated as easily automatable by Esterel Technologies (confirmed by Esterel during the Munich workshop). Therefore, a 1 is unacceptable for "Extensible to strictly formal model", "Easy to refine towards strictly formal model" and "Extensible to software architecture and design".

Could you please provide better ratings for the three above-mentioned elements, or provide a compelling justification?

ERTMSFormalSpecs assessment by Assessor 2 / use of the tool / distributed development

Distributed software development (D2.6-02-078.03)

The current version of the tool has now split the single model and test XML file into one file per namespace and one file per test, allowing Git-based multi-user modelling in a scalable way.

Could you please:

1/ upgrade your rating for:

  • Simultaneous multi-users (D2.6-02-078.04)
  • Version management (D2.6-02-078.07)
  • Concurrent version development (D2.6-02-078.08)
  • Model-based version control (D2.6-02-078.09). (Moreover, model-based version control is now implemented in the ERTMSFormalSpecs Eclipse-based toolchain, on top of EMFStore.)

2/ remove the note "The current version of the tool does not allow to define how to have a multi-users development of large scale system."?

Committer vote for Thomas Bardot

Hello,

Do you agree to give committer rights to Thomas Bardot on the model-evaluation repository?

He works with me on making such models.

Best regards,
David Mentré
