
prophy's People

Contributors

aurzenligl, bronisze, florczakraf, jgrycz, kamichal


prophy's Issues

encode issue

We met an issue when using the Python version:

expected payload:
.... 0x20 0x00 0x00 0x00 0xff 0xff 0xff 0xff 0xff 0xff 0xff 0xff 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x01 0x00 0x00 0x00 0x80 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00

result:
.... 0x20 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0xff 0xff 0xff 0xff 0xff 0xff 0xff 0xff 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x01 0x00 0x00 0x00 0x80 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00

Here is our code:
class SApiBbbSendVsbDataReq(prophy.with_metaclass(prophy.struct_generator, prophy.struct)):
    _descriptor = [
        ('transactionId', prophy.u32),
        ('portId', EBbbPortId),
        ('vsbTxRegionId', prophy.u32),
        ('transmissionEnabled', prophy.u32),
        ('dataLength', prophy.u32),
        ('data', prophy.array(prophy.u64, size=32)),
    ]

msg_type = 'SApiBbbSendVsbDataReq'
msg_name = 'API_BBB_SEND_VSB_DATA_REQ_MSG'
hwapi_msg = fsp.syscom.get_hwapi_message(msg_type=msg_type, msg_name=msg_name)
msg_obj = hwapi_msg.get_msg_obj()
msg_obj.transactionId = 0x0000001b
msg_obj.portId = get_EBbbPortId(cell.cpri_link[0])
msg_obj.vsbTxRegionId = 0x00000000
msg_obj.transmissionEnabled = 0x00000001
msg_obj.dataLength = 0x00000020
msg_obj.data[0] = 0xffffffffffffffff
msg_obj.data[1] = 0x0000000000000000
msg_obj.data[2] = 0x0000000800000001

We found that between dataLength and data there are four extra bytes "00 00 00 00".
Tracing the code, we found this in site-packages\prophy\composite.py:
def encode(self, endianness):
    data = b""

    for field in self._descriptor:
        data += (self._get_padding(len(data), field.type._ALIGNMENT))
        data += field.encode_fcn(self, field.type, getattr(self, field.name, None), endianness)

        if field.type._PARTIAL_ALIGNMENT:
            data += self._get_padding(len(data), field.type._PARTIAL_ALIGNMENT)

    data += self._get_padding(len(data), self._ALIGNMENT)

    return data

The culprit is this line:

data += self._get_padding(len(data), field.type._ALIGNMENT)

Here len(data) = 20 and field.type._ALIGNMENT = 8 (the next field is an array of u64).

Why is padding added after the previous field (or, put differently, before the next field) based on the next field's field.type._ALIGNMENT?

This is why "00 00 00 00" was added to the payload between dataLength and data.

Is this the right behavior?

Can you help us check it?
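For context, this matches the natural-alignment rule C compilers use for structs: each field starts at an offset that is a multiple of its alignment, so a u64 array following 20 bytes of u32 fields starts at offset 24. A standalone sketch of the computation (this helper only mirrors what prophy's _get_padding does; it is not the library's code):

```python
def get_padding(offset, alignment):
    """Zero bytes needed so the next field starts at a multiple of
    its alignment (natural alignment, as in C structs)."""
    return b"\x00" * (-offset % alignment)

# Five u32 fields occupy 20 bytes; the u64 array requires 8-byte
# alignment, so 4 zero bytes are inserted (offset 20 -> 24).
assert get_padding(20, 8) == b"\x00" * 4
assert get_padding(24, 8) == b""
```

This suggests the extra "00 00 00 00" between dataLength and data is expected alignment padding rather than a bug.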

static const int2type<discriminator_id> objects are not defined

Hey,

I'll base on https://prophy.readthedocs.io/en/latest/examples.html code:

In generated .hpp file there are multiple static const objects declared:

static const prophy::detail::int2type<discriminator_id> discriminator_id_t;
static const prophy::detail::int2type<discriminator_keys> discriminator_keys_t;
static const prophy::detail::int2type<discriminator_nodes> discriminator_nodes_t;

But they are never defined in any .cpp file. The reasonable usage for constructor dispatch I've seen is e.g. Token t{Token::discriminator_keys_t, Keys{...}}, and it kind of works with GCC, as these discriminators are overall stateless. EDIT: Usage from the example: Object{{Token::discriminator_keys_t, {1, 2, 3}}, {1, 2, 3, 4, 5}, {'\x0e'}}

But it causes linkage errors when shared objects are built with clang++. I can come up with an example project if necessary, though it is enough to create a test.so file and list its missing symbols with nm -C -u test.so.

I've tried making them constexpr, but clang still expected their definitions. I guess they should be defined in the corresponding .cpp file.

Travis broken

The last failed job was #120
it ran: export TOXENV=something
New jobs do:
export env=TOXENV=pypy

And tox is not even started, because:
if [ $TOXENV ]; then tox -v; fi

It should rather be something like:
if [ $TOXENV ]; then tox -v; else exit 1; fi

Is it a good idea to list imported symbols explicitly?

Please help me decide whether the change I made in #20 is correct.
It was a huge PR and this change sneaked in somewhere before I asked this question.
The test that exposes it is here:

def test_includes_rendering():
    common_include = model.Include("foo", [
        model.Constant("symbol_1", 1),
        model.Constant("number_12", 12),
    ])
    nodes = [
        common_include,
        model.Include("root/ni_knights", [
            model.Include("../root/rabbit", [
                common_include,
                model.Constant("pi", "3.14159"),
                model.Typedef("definition", "things", "r32", "docstring"),
            ]),
            model.Constant("symbol_2", 2),
        ]),
        model.Include("../root/baz_bar", []),
        model.Include("many/numbers", [model.Constant("number_%s" % n, n) for n in reversed(range(20))]),
    ]
    ref = """\
from foo import number_12, symbol_1
from ni_knights import definition, pi, symbol_2
from numbers import (
    number_0, number_1, number_10, number_11, number_13, number_14, number_15,
    number_16, number_17, number_18, number_19, number_2, number_3, number_4,
    number_5, number_6, number_7, number_8, number_9
)
"""
    # call twice to check if 'duplication avoidance' machinery in
    # _PythonTranslator.translate_include works ok
    assert serialize(nodes) == ref
    assert serialize(nodes) == ref

Before that change, the Python generator created import statements like this:

from foo import *
from ni_knights import *
from numbers import *

Now the imported symbols are listed explicitly. The list contains everything defined by the Include and each of its sub-Includes, unless it has already been imported by a previous statement in the file.

from foo import number_12, symbol_1
from ni_knights import definition, pi, symbol_2
from numbers import (
    number_0, number_1, number_10, number_11, number_13, number_14, number_15,
    number_16, number_17, number_18, number_19, number_2, number_3, number_4,
    number_5, number_6, number_7, number_8, number_9
)

There is also a "duplication avoidance" mechanism, which raises most of my doubts.
The code responsible for that generation is here:

def translate_include(self, include):
    included = list(sorted(n.name for n in include.defined_symbols() if n.name not in self.included_symbols))
    self.included_symbols.update(included)
    if not included:
        return ""
    statement_begin = "from %s import " % include.name.split("/")[-1]
    symbols = ", ".join(included)
    if len(statement_begin) + len(symbols) <= 80:
        return statement_begin + symbols

    @import_breaker
    def importer():
        yield symbols

    return "%s(\n%s)" % (statement_begin, "".join(importer()))

Please let me know if you would like to bring it back to the original form.
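The 80-column wrapping can be illustrated standalone. This sketch reimplements the layout with stdlib textwrap; it is an assumption about the intended formatting, not prophy's code (import_breaker is the real helper):

```python
import textwrap

def render_import(module, symbols, limit=80):
    """Render one 'from X import ...' line, wrapping long symbol
    lists in parentheses with a 4-space hanging indent."""
    begin = "from %s import " % module
    joined = ", ".join(sorted(symbols))
    if len(begin) + len(joined) <= limit:
        return begin + joined
    body = textwrap.fill(joined, width=limit - 4,
                         initial_indent="    ", subsequent_indent="    ")
    return "%s(\n%s\n)" % (begin, body)

# Short lists stay on one line, in sorted order:
print(render_import("foo", ["symbol_1", "number_12"]))
# Long lists get the parenthesized multi-line form:
print(render_import("numbers", ["number_%d" % i for i in range(20)]))
```

Note that sorted() orders the names lexicographically (number_10 before number_2), matching the reference output in the test above.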

Outdated contact information

I need to discuss an implementation of a new feature in prophy with you, but the only contact I can find in the whole repo is hosted at nokia.com.
Please update it in the git and GitHub config, setup.py, the documentation, the authors file, and anywhere else it appears.

Decode dynamic structures in a sensible manner

Background

I'm often struggling with decoding structures that consist of dynamic arrays. I'm unable to carve out the exact payload to pass to the decode method because there are no extra markers in the blob I'm working on. When some bytes remain in the payload after decoding, prophy raises an enigmatic ProphyError (though its message is meaningful). I'd like to simplify the logic that a user of the library has to write in such a case. For example:

try:
    struct.decode(payload, endianness='<')
except ProphyError as e:
    if 'not all bytes of' not in str(e):
        raise

Simple error change

We could just subclass ProphyError with something more explicit to use in this particular case. For example:

class ProphyRemainingBytesError(ProphyError):
    pass

then the user's code could be just:

try:
    struct.decode(payload, endianness='<')
except ProphyRemainingBytesError:
    pass

Potential bug

I think there's a bug related to the behavior I'm observing. Fixing it will require a slight change in the library's public interface.

For some reason, the encode method can receive a terminal argument, but it's unused. On the other hand, decode lacks such an argument, yet it always calls the internal _decode_impl with terminal=True. If we added a terminal=True default argument to decode, the user could explicitly state (with terminal=False) that the structure being decoded is not the final one in the payload, and the mentioned error would not be raised at all. The above use case could be compressed to just:

struct.decode(payload, endianness='<', terminal=False)

Summary

I think we could implement the new error independently of the second part. As for the interface change, adding terminal=True to decode is just an extension of the interface, so it shouldn't be a big problem to introduce at any time. It turns out the encode case is being covered by @kamichal's #16 in the meantime.
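The shape of the proposal can be sketched standalone (this is a sketch of the suggested interface, not prophy's implementation; decode_checked and its arguments are hypothetical names):

```python
class ProphyError(Exception):
    pass

class ProphyRemainingBytesError(ProphyError):
    """Raised when trailing bytes remain after decoding a terminal struct."""

def decode_checked(size_consumed, payload, terminal=True):
    """Sketch: after decoding consumed `size_consumed` bytes, decide
    whether leftover bytes are an error (terminal struct) or expected
    (caller will keep decoding the rest of the payload)."""
    remaining = len(payload) - size_consumed
    if terminal and remaining:
        raise ProphyRemainingBytesError(
            "not all bytes of payload read: %d remaining" % remaining)
    return size_consumed

decode_checked(4, b"\x01\x02\x03\x04")                       # ok: nothing remains
decode_checked(4, b"\x01\x02\x03\x04\xff", terminal=False)   # ok: caller continues
```

With terminal=False the caller simply keeps the remaining bytes, which is exactly the dynamic-array use case described above.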

python3: decoding a byte stream containing non ascii data ends with exception

@pytest.fixture(scope = 'session')
def FixedBytes():
    class FixedBytes(prophy.with_metaclass(prophy.struct_generator, prophy.struct_packed)):
        _descriptor = [("value", prophy.bytes(size = 5))]
    return FixedBytes

def test_fixed_bytes_assignment(FixedBytes):
    x = FixedBytes()
    x.value = b"\x00\x00\x00\x00\xaa"
    str(x)

result:

x = b'\x00\x00\x00\x00\xaa'

def b(x):
  return x.decode()

E UnicodeDecodeError: 'utf-8' codec can't decode byte 0xaa in position 4: invalid start byte
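The exception comes from the b() helper in the traceback calling bytes.decode() with the default strict utf-8 codec. The general Python 3 techniques below avoid it (a sketch of options, not the actual fix in prophy):

```python
raw = b"\x00\x00\x00\x00\xaa"

# Strict utf-8 decoding raises, which is what str(x) runs into:
try:
    raw.decode()  # same as raw.decode('utf-8')
except UnicodeDecodeError as e:
    print("strict:", e)

# An error handler keeps the conversion exception-free:
print(raw.decode("utf-8", errors="replace"))  # 0xaa becomes U+FFFD

# latin-1 maps every byte value to a code point, so it can never fail:
print(raw.decode("latin-1"))
```

Which handler is right depends on whether the string form needs to round-trip back to bytes (latin-1 does; replace loses information).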

Prophy schema parser needs to parse comments.

In order to implement the babel feature, the schema parser needs to collect comments. These have to be written in the same format as the schema generator's output (to be seen in test_schema.py). I.e. each Struct, Union and Enum gets a docstring block and, separately, each of its members gets either a single-line comment (inline, after the member definition) or a block comment (before the member definition). Typedefs and constants could also get comments in both forms.

It's the last objective needed to get bidirectional compilation from schema to prophyc.model and vice versa. Schema is supposed to become babel's vault language. Oh yes, it has to be as epic as the names :D

Unfortunately I failed trying to implement that.
'ply' won this painful battle... I managed the LEX tokens, but YACC was blowing up with each little change I made. It's an ultra-fragile piece of software. Or maybe I'm an elephant in a china shop.
Can somebody help?
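Independent of ply, the comment-collection step itself can be sketched with the stdlib. This assumes C-style // line comments and /* */ block comments (an assumption about the schema syntax); the point is that the tokenizer keeps comments, with their positions, instead of discarding them:

```python
import re

# One alternation per lexeme class; comments are captured, not skipped.
TOKEN_RE = re.compile(r"""
    (?P<block_comment>/\*.*?\*/)   # block comment, possibly multi-line
  | (?P<line_comment>//[^\n]*)     # inline comment to end of line
  | (?P<word>\w+)                  # identifiers, numbers, keywords
  | (?P<punct>[{};=<>])            # structural punctuation
  | \s+                            # whitespace: skipped
""", re.VERBOSE | re.DOTALL)

def tokenize(text):
    """Yield (kind, value, line) triples, keeping comments as tokens."""
    for m in TOKEN_RE.finditer(text):
        kind = m.lastgroup
        if kind is None:
            continue  # whitespace-only match
        line = text.count("\n", 0, m.start()) + 1
        yield kind, m.group(), line

schema = """\
/* doc block for the struct */
struct X
{
    u32 a;  // inline comment for a
};
"""
comments = [(v, ln) for k, v, ln in tokenize(schema) if k.endswith("comment")]
print(comments)
```

A later pass can then attach each comment to the preceding or following member by comparing line numbers, which is the part ply's YACC made painful.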
