
Piston-Meta

A DSL parsing library for human readable text documents

Introduction

Piston-Meta makes it easy to write parsers for human readable text documents. It can be used for language design, custom formats and data driven development.

Meta parsing is a development technique that goes back to the first modern computer. The idea is to turn pieces of a computer program into a programmable pipeline, thereby accelerating development. One part that is surprisingly reusable across projects is generating structured data from text, since text is easy to modify and reason about.

Most programs that work with text use the following pipeline:

f : text -> data

The problem with this approach is that f changes from project to project, and the task of transforming text into a data structure can get very complex. For example, a parser for the syntax of a programming language might need several thousand lines of code. This slows down development and increases the chance of making errors.

Meta parsing is a technique where f gets split into two steps:

f <=> f2 . f1
f1 : text -> meta data
f2 : meta data -> data

The first step f1 takes text and converts it into meta data. A DSL (Domain Specific Language) is used to describe how this transformation happens. The second step f2 converts meta data into data, and this is often written as code.
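To make the split concrete, here is a minimal sketch in Rust. The types MetaValue and Config are hypothetical stand-ins, not piston_meta's API; only the shape of the pipeline is the point.

// A minimal sketch of f = f2 . f1 with hypothetical types.
struct MetaValue { name: String, value: f64 } // meta data
struct Config { width: f64 }                  // application data

// f1: text -> meta data. In piston_meta this step is driven by a DSL.
fn f1(text: &str) -> Option<MetaValue> {
    let mut parts = text.splitn(2, '=');
    let name = parts.next()?.trim().to_string();
    let value = parts.next()?.trim().parse().ok()?;
    Some(MetaValue { name, value })
}

// f2: meta data -> data. This step is usually written as code.
fn f2(meta: MetaValue) -> Option<Config> {
    if meta.name == "width" { Some(Config { width: meta.value }) } else { None }
}

fn main() {
    let config = f1("width = 640").and_then(f2);
    assert!(config.map(|c| c.width) == Some(640.0));
}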

Rules

The meta language is used to describe how to read other documents. First you define some strings to reuse, then some node rules. The last node is used to read the entire document.

20 document = [.l(string:"string") .l(node:"node") .w?]

Strings start with an underscore and can be reused among the rules:

_opt: "optional"

Nodes start with a number that gets multiplied by 1000 and used as a debug id. If you get an error #4003, it was caused by a rule in the node numbered 4.

Rule Description
.l(rule) Separates sub rule with lines.
.l+(rule) Separates sub rule with lines, with indentation (whitespace sensitive).
.r?(rule) Repeats sub rule until it fails, allows zero repetitions.
.r!(rule) Repeats sub rule until it fails, requires at least one repetition.
...any_characters?:name Reads a string until any of the characters is encountered, allows zero characters. Name is optional.
...any_characters!:name Reads a string until any of the characters is encountered, requires at least one character. Name is optional.
..any_characters?:name Reads a string until any of the characters or whitespace is encountered, allows zero characters. Name is optional.
..any_characters!:name Reads a string until any of the characters or whitespace is encountered, requires at least one character. Name is optional.
.w? Reads whitespace. The whitespace is optional.
.w! Reads whitespace. The whitespace is required.
?rule Makes the rule optional.
"token":name Expects a token, sets name to true. Name is optional.
"token":!name Expects a token, sets name to false. Name is required.
!"token":name Fails if token is read, sets name to true if it is not read. Name is optional.
!"token":!name Fails if token is read, sets name to false if it is not read. Name is required.
!rule Fails if rule is read.
.s?(by_rule rule) Separates rule by another rule, allows zero repetitions.
.s!(by_rule rule) Separates rule by another rule, requires at least one repetition.
.s?.(by_rule rule) Separates rule by another rule, allows trailing.
{rules} Selects a rule. Tries the first rule, then the second, etc. Rules are separated by whitespace.
[rules] A sequence of rules. Rules are separated by whitespace.
node Uses a node without a name. The read data is put in the current node.
node:name Uses a node with a name. The read data is put in a new node with the name.
.t?:name Reads a JSON string with a name. The string can be empty. Name is optional.
.t!:name Reads a JSON string with a name. The string can not be empty. Name is optional.
.$:name Reads a number with a name. The name is optional.
.$_:name Reads a number with underscore as visible separator, for example 10_000. The name is optional.
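For example, a small rule set combining several of the rules above might look like this (an illustrative sketch in the style of the "Hello world" example below, not taken from the library's documentation):

1 name = {"James":"james" "Peter":"peter"}
2 greeting = ["hi" .w! name:"name" "!"]
3 document = [.l(greeting:"greeting") .w?]

Node 3 is the last node, so it is used to read the entire document, one greeting per line.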

"Hello world" in Piston-Meta

extern crate piston_meta;

use piston_meta::*;

fn main() {
    // The document to parse.
    let text = r#"hi James!"#;
    // Meta rules: node 1 reads the greeting, node 2 (the last node)
    // reads the whole document.
    let rules = r#"
        1 say_hi = ["hi" .w? {"James":"james" "Peter":"peter"} "!"]
        2 document = say_hi
    "#;
    // Parse rules with the meta language and convert them to rules for parsing text.
    let rules = match syntax_errstr(rules) {
        Err(err) => {
            println!("{}", err);
            return;
        }
        Ok(rules) => rules
    };
    // Parse the document, collecting meta data.
    let mut data = vec![];
    match parse_errstr(&rules, text, &mut data) {
        Err(err) => {
            println!("{}", err);
            return;
        }
        Ok(()) => {}
    };
    // Print the collected meta data as JSON.
    json::print(&data);
}

Bootstrapping

When the meta language changes, bootstrapping is used to hoist the old meta syntax into the new meta syntax. Here is how it works:

  1. Piston-Meta contains composable rules that can parse many human readable text formats.
  2. Piston-Meta knows how to parse and convert to its own rules, known as "bootstrapping".
  3. Therefore, you can tell Piston-Meta how to parse other text formats using a meta language!
  4. Including the text format describing how to parse its own syntax, which generates equivalent rules to the ones hard coded in Rust.
  5. New versions of the meta language can keep backwards compatibility by describing older versions: the self-describing syntax is changed slightly so that it can read an older version of itself.


Issues

Validate node rules

This is to prevent typing errors in the meta language, for example a number that is set to the wrong property.

All properties must be listed in the node parent's list of properties, or else the rule is invalid.

Redesign using dynamic structure

Perhaps one could query for a dynamic structure with mutable references into the properties:

pub struct Struct<'a> {
    pub fields: &'a mut [Data<'a>],
}

pub enum Data<'a> {
    F64(&'a str, &'a mut f64),
    Bool(&'a str, &'a mut bool),
    String(&'a str, &'a mut String),
    Node(&'a str),
    MaybeF64(&'a str, &'a mut Option<f64>),
    MaybeBool(&'a str, &'a mut Option<bool>),
    MaybeString(&'a str, &'a mut Option<String>),
    MaybeNode(&'a str),
}

When reading a new node, one would first query for the structure, which the meta reader allocates either directly or on the stack and returns through a closure. The meta parser then fills in the properties or queries for a new node.

Add `Text::parse`

Should use the string and parse_string functions in read_token.

Might require wrapping ParseStringError in ParseError.

Add tokenizer implementing `MetaReader`

All data will be put into a Vec<(MetaData, Range)> to delay error reporting to a later stage. The state will simply be the number of tokens read; on a roll back the buffer is truncated.

This might not require any redesign.
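A minimal sketch of that idea, assuming the crate's MetaData and Range types (the method names here are assumptions, not the final API):

pub struct Tokenizer {
    /// All meta data read so far, with source ranges.
    pub tokens: Vec<(MetaData, Range)>,
}

impl Tokenizer {
    /// The state is simply the number of tokens read.
    pub fn state(&self) -> usize { self.tokens.len() }

    /// Receives meta data and returns the new state.
    pub fn data(&mut self, data: MetaData, range: Range) -> usize {
        self.tokens.push((data, range));
        self.tokens.len()
    }

    /// Rolls back to an earlier state by truncating the buffer.
    pub fn rollback(&mut self, state: usize) {
        self.tokens.truncate(state);
    }
}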

`CommandState`

See #40

Problem

  • The meta parser passes around a mutable reference to the meta reader
  • The meta reader receives a state for each command and returns a new state on success
  • The meta reader must resolve roll backs by analyzing the state
  • The meta reader might be a composed structure of generic and concrete meta readers
  • We want to use as little memory as possible to interpret the state

Semantics

Using the notation from PistonDevelopers/piston#736:

command_state(t, u) = { handle(t), forward(u) }
is_current_state(command_state(_, _)) -> bool
is_current_state [:] (forward(_)) -> true
is_current_state [:] (handle(_)) -> true | false

Suggestion

pub enum CommandState<T, U> {
    Handle(T),
    Forward(U),
}

The CommandState is used to decide whether commands are handled by a container meta reader or forwarded to a sub meta reader. For example, you could have CommandState<Foo, CommandState<Bar, Baz>>. Forwarding only happens to the "current branch" of the container meta reader, which is possible because the meta parser never rolls back to a mutually exclusive state.
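A sketch of how a container meta reader might dispatch on this type (the helper below is illustrative, not part of the design):

fn dispatch<T, U, H, F>(state: CommandState<T, U>, handle: H, forward: F)
    where H: FnOnce(T), F: FnOnce(U)
{
    match state {
        // Handled by the container meta reader.
        CommandState::Handle(t) => handle(t),
        // Forwarded to the current sub meta reader.
        CommandState::Forward(u) => forward(u),
    }
}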

Force node name everywhere

Alternative to #89

Instead of having a rule that defines how a node name should behave, the language could force using a node name everywhere.

Add `SeparatedBy`

This might be used instead of a while loop. The problem with a while loop is that you need to read the separator tokens inside the body. You also need to know the end token to break on, which is bad when there are multiple alternatives for an end token.

A SeparatedBy will run the body 0, 1 or more times. If it fails the first time, it breaks and rolls back, just like Optional (for example, when reading 0 arguments to a function). If it succeeds the first time, it runs the separated-by rule. If the separated-by rule succeeds, it runs the body again, but now errors will be reported. If the separated-by rule fails, parsing continues after the separated block.

Here is how it might be used:

arguments { val }:
    whitespace?
    separated (
        number as val
    ) by (
        whitespace?
        token(",")
        whitespace?
    )

One might want to decide whether errors should be reported when the rule fails the first time.

Trailing separators might be supported by rolling back when the body rule fails.
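A rough sketch of the control flow described above (the closures stand in for running the body and separated-by rules; roll back details are elided):

/// Runs the body 0, 1 or more times, separated by `sep`.
/// Returns how many times the body succeeded.
fn separated_by<B, S>(mut body: B, mut sep: S) -> usize
    where B: FnMut() -> bool, S: FnMut() -> bool
{
    // If the body fails the first time, break and roll back, like Optional.
    if !body() { return 0; }
    let mut count = 1;
    loop {
        // If the separated-by rule fails, continue after the separated block.
        if !sep() { break; }
        // The body runs again; from here on its errors would be reported.
        // If trailing separators are allowed, a failing body rolls back here.
        if !body() { break; }
        count += 1;
    }
    count
}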

Ignore data from sub rules that are not connected to a property

This was planned in accordance with #15. The idea is that the application code never receives the name of the rule currently used for parsing, so rules can be changed arbitrarily without affecting the application code. If a rule is not connected to a property, the rule is still evaluated, but no data is sent.

Add `ParseResult` type alias

See #96
See #97

A pattern of returning an optional error seems to be needed for picking the deepest error. The new signature would then become Result<(Range, M::State, Option<(Range, ParseError)>), (Range, ParseError)>, which could use a type alias.
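A sketch of the alias, assuming the crate's Range and ParseError types (parameterized on the state type):

pub type ParseResult<S> =
    Result<(Range, S, Option<(Range, ParseError)>), (Range, ParseError)>;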

Meta composing

Composing is the opposite of parsing: the output data is described through the same meta language. One starts with a tree structure implementing MetaWriter, and the meta encoder uses the meta rules to generate text that can be parsed unambiguously with the same rules.

The motivation for composing is to allow automatic translation from one document to another.

Node usage design

See #88

There might be cases where you want to use the name of the rule as the property name (perhaps this should be the default?). There might also be cases where you want to redirect the meta data to the parent and therefore ignore the start/end node.

pub enum NodeUsage {
    Default,
    Redirect,
    Silence,
    Override(Rc<String>),
}

Add `Lines::parse`

See #95

An idea to make this easier would be a kind of SeparatedBy, called Lines, that specializes on lines.

  • Allow sub rule to have Lines or SeparatedBy
  • Ignore empty lines
  • Ignore lines containing only whitespace

Parsing of `Select`

It might be an idea to keep the first error and check whether another rule succeeds. If another rule succeeds, it returns the new state of that rule, but if all fail, the error is handled.

Parsing of `Parameter`

I don't think the arguments are needed by Parameter::parse, because the safety check that sub rules use the arguments as property names can be separated from parsing.

Dealing with state

One of the reasons the current design is so simple is the assumption that dealing with state can be solved in some efficient way.

Basic design

It would be nice to make the trait to implement for meta parsing and composing as simple as possible. To achieve this one probably wants to eliminate the state. The overall idea of the design is separating the state such that roll backs are a natural side effect of performing an operation on an earlier state.

Concepts

  • Rule is an object that controls how a document is parsed and composed
  • A meta parser or composer uses sub rules and can be a rule itself
  • Meta reader is an object implementing MetaReader which is the receiver of meta data from a meta parser. The meta reader communicates with the meta parser, such that error messages are handled as early as possible.
  • Meta writer is an object implementing MetaWriter which is the sender of meta data to a meta composer. The meta writer receives output character ranges from the meta composer such that error messages can be linked to the original source through mapping.
  • A state is an object associated with the meta reader or writer. For a reader it tells where to put data in the output structure. For a writer it tells where to read from in the input structure. When jumping back to an earlier state, it means the parsing or composing was rolled back because a rule failed. This must be handled by the meta reader or writer.

There is no need for a centralized meta parser or composer if we can split it into objects that represent the rules and then pass along the state.

The state needed for parsing is chars: &[char], offset: usize.

Semantics

The semantics using the notation PistonDevelopers/piston#736 :

rule = { whitespace, parameter, token, select,
optional, until_whitespace, until_any_or_whitespace,
new_line, tab_indent, point_to, text, while, whole_line }

rule.parse(meta_reader, state, chars, offset) -> 
  meta_reader', result((range, state), (range, parse_error))

meta_reader(meta_data, state, range) -> result(state, parse_error)

update(range, chars, offset) -> chars', offset'

rule.compose(meta_writer, state, meta_data, output) ->
  meta_writer', output', result((range, state, meta_data), (range, compose_error))

meta_writer(range, state) -> (meta_data, state)
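A sketch of what the meta_reader signature above could look like as a Rust trait (names and shapes are assumptions based on the semantics, not the crate's final API):

pub trait MetaReader {
    /// Tells where to put data in the output structure.
    type State;

    /// Receives meta data with its source range and the current state;
    /// returns the new state on success.
    fn data(&mut self, data: MetaData, state: &Self::State, range: Range)
        -> Result<Self::State, ParseError>;
}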

Why Piston-Meta?

This issue is for explaining why meta parsing and composing is an interesting research topic for Piston.

The normal way to read a document format is to use a library to parse the document into a tree structure, and then you write algorithms for converting the tree structure into the data structure used by the application. This has drawbacks for several reasons:

  1. Changes in the document structure are entangled with the application code
  2. It requires one dependency for each document format
  3. Data validation is limited and hard
  4. Writing parsing logic manually is error prone and composing is duplicated work
  5. Error messages are bad

In the Piston project we did some research in 2014 on AI behavior trees, where we managed to describe building blocks for a language that deals with processing of states: https://github.com/pistondevelopers/ai_behavior. The general idea is to separate the language describing the behavior from an algorithm that converts the behavior into a state tree. The state tree is then connected to a finite state machine interface that carries out the operations. The syntax of AI behavior trees is far easier to understand than writing the state machine directly.

A new idea is the syntax for parallel semantics, which could be used to make parsing more powerful through self-referencing meta rules, while using a rule to control the termination of the sub rule. The finite state machine interface could be implemented through a trait in Rust, connecting the parsing mechanism directly to the application code. Error messages can be communicated back and forth between the meta parser and the application. Through high level parsing syntax, we might be able to make error messages clearer, allow composing with the same library, or detect when composing is unambiguous.

OMeta https://en.wikipedia.org/wiki/OMeta is a language that can transform arbitrary text into trees, and has proven itself as a rapid prototyping tool for domain specific languages. OMeta can parse itself and this can also be used to extend language syntax with new features. This expressiveness comes from the idea of pattern matching and choosing actions depending on the state, which is context sensitive. It is the ability to pick a context and send a message describing the state that drives the transformation. Since you can change the syntax arbitrarily, it is easy to make it look more advanced than what it actually does, which is the kind of cheating you want in rapid prototyping. In a way it is similar to regular expressions but more powerful. For a general programming language, context sensitive grammar might be a bad idea, but for data it means one can narrow down invalid states easier and build the data validation into the parsing logic.

The problem with OMeta is that it is not designed for understanding the parsing itself, so you cannot easily write a parser manually from the rules. In the Piston project we have the ability to design the rules so that they tell you how to use the parser library, making it easier to mentally grasp how it works. We can also design it for debugging, such that it is clear what it does when pausing at a particular line. It is also possible that prototyping a meta language syntax will help with writing the library, such that the seemingly difficult problem of meta parsing can be solved by better understanding it in the first place. We are also not that interested in low level syntax like regular expressions, as OMeta is, but more in data transformations typical of the building blocks of JSON. Unlike OMeta, we will design it in such a way that the transformed node always has a superset of possible states compared to the rule, so that if you know the meta rule, it can easily be detected whether the transformation can be reversed. This might make it possible to perform automatic conversions of data, which allows meta syntax and data to be iterated on during a project.

The first draft for a subset of the notation can be found here https://gist.github.com/bvssvni/18aa2ebac87081cafdd0. This is meant to demonstrate that a meta notation can be self-documenting, which is important as one would be able to implement the parsing manually in either Rust or another language.

Meta language described by itself

Syntax is not final. This serves both as a way to test the language and a way to check whether the source code is correctly implemented:

    whitespace { optional }:
        whitespace?
        token("whitespace")
        select(
            token("?") as optional
            token("!") as !optional
        )
        whitespace?

    node { name, args, value }:
        whitespace?
        until_any_or_whitespace("{")! as name
        whitespace?
        token("{")
        separated? (
            whitespace?
            until_any_or_whitespace(",)") as args
            whitespace?
        ) by trailing (
            token(",")
        )
        token("}")
        ?(
            whitespace!
            token("as")
            whitespace!
            until_whitespace as value
        )
        whitespace?
        token(":")
        whitespace?
        new_line
        select(
            tab_indent
            point_to(node, "This node lacks a body")
        )

    token { text_value, inverted, property }:
        whitespace?
        token("token")
        whitespace?
        token("(")
        whitespace?
        text! as text_value
        whitespace?
        token(")")
        ?(
            whitespace!
            token("as")
            whitespace!
            token("!") as inverted
            until_whitespace as property
        )
        whitespace?

    select { args }:
        whitespace?
        token("select")
        whitespace?
        token("(")
        new_line
        separated (
            whitespace?
            whole_line.push_to(args)
        ) by (
            new_line
        )
        whitespace?
        token(")")

    optional { args }:
        whitespace?
        token("?")
        whitespace?
        token("(")
        new_line
        separated (
            whitespace?
            whole_line.push_to(args)
        ) by (
            new_line
        )
        whitespace?
        token(")")

    until_whitespace { property }:
        whitespace?
        token("until_whitespace")
        ?(
            whitespace!
            token("as")
            whitespace!
            until_whitespace as property
        )
        whitespace?

    until_any_or_whitespace { any_characters, optional, property, push_value }:
        whitespace?
        token("until_any_or_whitespace")
        whitespace?
        token("(")
        whitespace?
        text! as any_characters
        whitespace?
        token(")")
        select(
            token("?") as optional
            token("!") as !optional
        )
        select(
            property { value } as property:
                whitespace!
                token("as")
                whitespace!
                until_whitespace as value
            push_value { value } as push_value:
                token(".push_to")
                whitespace?
                token("(")
                whitespace?
                until_whitespace as value
                whitespace?
                token(")")
        )
        whitespace?

    new_line { }:
        whitespace?
        token("new_line")
        whitespace?

    tab_indent { }:
        whitespace?
        token("tab_indent")
        whitespace?

    point_to { part, error_message }:
        whitespace?
        token("point_to")
        whitespace?
        token("(")
        until_any_or_whitespace(",") as part
        whitespace?
        token(",")
        whitespace?
        text! as error_message
        whitespace?
        token(")")
        whitespace?

    text { allow_empty, property }:
        whitespace?
        token("text")
        select(
            token("?") as allow_empty
            token("!") as !allow_empty
        )
        whitespace!
        token("as")
        whitespace!
        until_whitespace as property
        whitespace?

    while { end_token }:
        whitespace?
        token("while")
        whitespace?
        token("(")
        whitespace?
        token("!token")
        whitespace?
        token("(")
        whitespace?
        text! as end_token
        whitespace?
        token(")")
        whitespace?
        new_line
        select(
            tab_indent
            point_to(while, "This while block lacks a body")
        )

    whole_line { push_value }:
        whitespace?
        token("whole_line.push_to")
        whitespace?
        token("(")
        whitespace?
        until_whitespace as push_value
        whitespace?
        token(")")
        whitespace?

    separated_by { rules, by, optional, allow_trail }:
        whitespace?
        token("separated")
        select(
            token("?") as optional
            token("!") as !optional
        )
        whitespace?
        token("(")
        new_line
        separated (
            whitespace?
            rule as rules
        ) by (
            new_line
        )
        new_line
        whitespace?
        token(")")
        whitespace!
        token("by")
        whitespace?
        optional (
            token("trailing") as allow_trail
            whitespace?
        )
        token("(")
        new_line
        separated_by (
            whitespace?
            rule as by
        ) by (
            new_line
        )
        new_line
        whitespace?
        token(")")

Inheritance/replacement of properties with rules

Some patterns occur frequently in rules, and it would be nice to reuse those patterns to create a shorter rule. This could be done by declaring the pattern as a Node and then replacing the property parts, creating a new Node.

For example:

arguments { items }:
    token("(")
    separated? (
        whitespace?
        until_any_or_whitespace(",)") as item
        whitespace?
    ) by trailing (
        token(",")
    )
    whitespace?
    token(")")

number_constant { val }:
    number as val
text_constant { val }:
    text as val
variable { name }:
    until_any_or_whitespace(",)") as name

expression { name, args }:
    until_any_or_whitespace("(") as name
    whitespace?
    replace arguments (
        items with select (
            number_constant
            text_constant
            variable
            expression
        )
    )

Planning

  • Add Whitespace::parse
  • Add Token::parse
  • Add Select::parse
  • Add Node::parse
  • Add Optional::parse
  • Add Text::parse
  • Add Number::parse
  • Add UntilAnyOrWhitespace::parse
  • Add Sequence::parse
  • Add SeparatedBy::parse
  • Add Lines::parse
  • Add UntilAny::parse
  • Add parse unit tests for Whitespace
  • Figure out how to deal with state
  • Add Tokenizer that implements MetaReader
  • Add parse unit tests for Token
  • Add parse unit tests for Number
  • Add parse unit tests for Text
  • Add parse unit tests for UntilAnyOrWhitespace
  • Add parse unit tests for Select
  • Add parse unit tests for SeparatedBy
  • Add parse unit tests for Optional
  • Add parse unit tests for Node
  • Add parse unit tests for Lines
  • Add parse unit tests for UntilAny

Remove properties from `Node`

When parsing, there is no way to check the correctness of properties from the node; such errors are likely to be caught by the meta reader or in later stages using the Tokenizer. One might consider this check part of the meta language implementation, or an unwanted constraint when the application supports dynamic structures.

How Piston-Meta deals with types

Notice: This is outdated, since data is read without type inference. Type checking might be considered part of a meta language, which is outside the scope of this library, as it focuses on rules and error messages.

The meta language describing the rules in Piston-Meta is just high level enough that the type can be inferred directly from the rule used. You do not have to declare the types of the properties. Since types are programming language dependent, it also means that the meta language does not favor bool vs boolean etc.

Example:

    foo { optional }:
        whitespace?
        token("foo")
        select(
            token("?") as optional
            token("!") as !optional
        )
        whitespace?

A property is set only if the rule succeeds. The token rule tells the meta parser to look for a string and succeeds if that string comes next. token("?") as optional means that if the meta parser reads a "?", it will set optional to true. token("!") as !optional means that if the meta parser reads a "!", it will set optional to false. The select rule tries the first one and if it fails, tries another one.

Each rule, except rules like whitespace, has an implicit associated type of data it reads. For example, in the case of token, the type is a bool in Rust. The num rule reads an f64. You can choose to ignore the data or parse the data through different rules.
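For illustration, the foo rule above would infer a structure like this in Rust (a sketch: the field name follows from the property list, the type from the rule kind):

struct Foo {
    // token("?") as optional / token("!") as !optional both set a bool.
    optional: bool,
}

A num rule such as num as radius would similarly infer a field radius: f64.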

Sub rules

You can also use a sub rule to set a property, but this behaves differently. Instead of reading the data, it sends MetaData::StartNode with the name of the property. It then uses the sub rule and sets properties on the sub node. When the sub rule succeeds, it calls MetaData::EndNode.

If you are using a sub rule without connecting it to a property it will still parse according to the sub rule, but it will not send data.

The implementation of MetaReader never knows the name of the sub rule, because it is assumed that it knows the type to put data in. This design makes it possible to change the name of sub rules in the meta language without affecting how the application parses it.

If a sub rule fails, the meta reader falls back to an earlier state associated with the meta reader. The implementation of the meta reader must check the state and perform the necessary roll back, or overwrite the failed data.

Redesign to use iterator pattern

The problem is that you need to resolve a complicated state, which requires associated relations between meta readers and the data. I think this could be done more simply by having an iterator and passing it around while the state is on the stack.

Don't implement a meta language in this library

A meta language requires a meta parser that outputs rules. The rules drive parsing/composing, and therefore it is possible to describe how parsing/composing is done through the meta language. Since the rules are structures in Rust, they can be generated in other ways than through a meta language. For example, there could be a macro or an algorithm composing the rules. I think there are too many possibilities to settle on one approach.

This library should focus on making the rules compose and report clear error messages, but a syntax for a meta language should be outside the scope of the project.

Allow multiple passes in the meta language

A pseudo-parsing algorithm could work directly against a Tokenizer that forwards the ranges from the original document. This would allow the meta language to describe multiple passes without sacrificing error messages. To the end receiver, it will appear as if the transformation happens directly.

Change to `EndNode(Rc<String>)`

Sometimes you need to ignore nodes and treat their properties as part of the parent node. To make it easier to track the state for when ending the parent node, the MetaData::EndNode should pass the name of the ended node. This will also make it easier to refactor meta rules.
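A sketch of the proposed change (other MetaData variants elided):

use std::rc::Rc;

pub enum MetaData {
    StartNode(Rc<String>),
    // EndNode now carries the name of the ended node.
    EndNode(Rc<String>),
    // ...
}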

Change to `Rule::Parameter(ParameterRef)`

A parameter rule might be used in multiple places.

When parsing meta rules, self-referencing rules point by name. These must then be updated with real references. A ParameterVisit flag tells whether a parameter rule has been updated or not.

pub enum ParameterRef {
    Name(Rc<String>),
    Ref(Rc<RefCell<Parameter>>, ParameterVisit)
}

pub enum ParameterVisit {
    Unvisited,
    Visited,
}

Pick deepest error

Some rules, like SeparatedBy, might fail at a deep level and roll back partially, then trigger a failure in the following rule.

For example, if whitespace is required after "," between arguments:

Error: Expected `)`
0: foo(a,b)
0:      ^

Here, the deepest error occurs inside SeparatedBy (showing correct input):

foo(a, b)
      ^

versus the location the reported error expects (showing correct input):

foo(a)
     ^

The idea is that the deepest error message is likely the most useful.
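A sketch of picking the deepest error, assuming the crate's Range and ParseError types and that Range exposes its starting offset:

/// Returns the error that occurred deepest in the source.
fn pick_deepest(a: (Range, ParseError), b: (Range, ParseError))
    -> (Range, ParseError)
{
    if b.0.offset > a.0.offset { b } else { a }
}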

Use rules directly

By replacing meta_reader with a dynamic structure like #47 one could use the rules directly.

The problem is handling rollbacks nicely, which is hard when the meta reader is passed among the rules. If the rules are restricted to a single structure at a time, sub nodes can be read by calling a closure, creating the state for that node on the stack. This makes it easier to roll back the state in user code, since the state lives on the stack.
