
Links: Linking Theory to Practice for the Web

Links helps to build modern Ajax-style applications: those with significant client- and server-side components.

A typical, modern web program involves many "tiers": part of the program runs in the web browser, part runs on a web server, and part runs in specialized systems such as a relational database. To create such a program, the programmer must master a myriad of languages: the logic is written in a mixture of Java, Python, and Perl; the presentation in HTML; the GUI behavior in JavaScript; and the queries are written in SQL or XQuery. There is no easy way to link these: to be sure, for example, that an HTML form or an SQL query produces the type of data that the Java code expects. This is called the impedance mismatch problem.

Links eases the impedance mismatch problem by providing a single language for all three tiers. The system is responsible for translating the code into suitable languages for each tier: for instance, translating some code into JavaScript for the browser, some into Java for the server, and some into SQL to use the database.

Links incorporates ideas proven in other programming languages: database-query support from Kleisli, web-interaction proposals from PLT Scheme, and distributed-computing support from Erlang. On top of this, it adds some new web-centric features of its own.
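
To give a feel for this, here is a minimal sketch of a single Links program touching all three tiers. It is not taken from the Links distribution: the database name "mydb" and the "people" table are made up.

var db = database "mydb";
var people = table "people" with (name : String, age : Int) from db;

# Runs on the server; the comprehension inside query { } is compiled to SQL.
fun adultNames() server {
  query {
    for (p <-- people)
    where (p.age >= 18)
      [(name = p.name)]
  }
}

# Runs in the browser as compiled JavaScript; calling adultNames() here
# becomes a remote call back to the server.
fun countAdults() client {
  length(adultNames())
}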

FEATURES

  • Allows web programs to be written in a single programming language
  • Call-by-value functional language
  • Server / Client annotations
  • AJAX
  • Scalability through defunctionalised server continuations
  • Statically typed database access a la Kleisli
  • Concurrent processes on the client and the server
  • Statically typed Erlang-esque message passing
  • Polymorphic records and variants
  • An effect system for supporting abstraction over database queries whilst guaranteeing that they can be efficiently compiled to SQL
  • Handlers for algebraic effects on the server-side and the client-side
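
The Erlang-esque concurrency can be sketched as follows. This is not code from the repository; it assumes the usual spawn, self(), ! (send), and receive constructs.

# A counter process: "!" sends an asynchronous message,
# and receive pattern-matches on the process's mailbox.
fun counter(n) {
  receive {
    case Increment   -> counter(n + 1)
    case Report(pid) -> { pid ! Count(n); counter(n) }
  }
}

fun example() {
  var c = spawn { counter(0) };
  c ! Increment;
  c ! Report(self());
  receive { case Count(n) -> n }
}

Mailboxes are statically typed, so sending a process a message it cannot handle is rejected by the type checker.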

Contributors

chefyeum, corneliusbusch, dependabot[bot], dhil, djedr, elordin, emanon42, ezrakilty, fehrenbach, frank-emrich, frmi, jamescheney, jgbm, jstolarek, kit-ty-kate, kwanghoon, mseri, nikswamy, orbion-j, rudihorn, samo-novak, simonjf, slindley, squiddev, thierry-martinez, thwfhk, wricciot, yallop, yi-zhou-01


Issues

Implement Recursive Type Inlining for types with type variables

Currently, recursive type inlining is only done for types without type variables. Additionally, types aren't inlined when they themselves are type arguments.

Examples:

 typename MessageBody = String; 
 typename ReceiveMail(a) =
   [&|
     MAIL:?Address.
       [+|
         REJECT: ReceiveMail(a),
         ACCEPT: ReceiveBody(b)
       |+],
     QUIT:EndBang
   |&];

 typename ReceiveBody(b) =
   [&|
     RCPT: ?Address . [+|REJECT: ReceiveBody(b),
                         ACCEPT: ReceiveBody(b)|+],
     DATA: ?b . ?MessageBody . ReceiveMail(b)
 |&];
typename A = [| E | B:C |];
typename C = [| F | D:Maybe(A) |];

Stack overflow with mutually recursive types

Works:

typename A = mu a . [| E | B:[| F | D:a |] |];
typename C = [| F | D:A |];

Doesn't:

typename A = [| E | B:C |];
typename C = [| F | D:A |];

Error message:

*** Error: Stack overflow

Improving test harness: add "expected broken" feature

I would like to be able to mark some tests as expected to be broken. Such a test is expected not to pass, and if it does pass, this is reported as a failure (so the bug cannot be fixed accidentally without anyone noticing). This would be useful in situations like the one we have at the moment, where a known bug causes one of the tests to fail and therefore the whole Travis build is marked as failing.

TODO

  • investigate possibility of adapting GHC's testsuite to Links
  • decide what set of features we need for the testsuite

Replacing "deriving" with "ppx_deriving"

Links does not compile with the latest OCaml compiler (4.03), because the package "deriving" is not compatible with 4.03 (actually I think it is one of the dependencies of "deriving" that's causing the issue).

The package "deriving" appears to be obsolete, and for future projects with Links it may very well be worthwhile replacing "deriving" with "ppx_deriving". My own project would benefit from this change as well as the OCaml Multicore backend will be released for 4.03+.

According to Sam, switching from "deriving" to "ppx_deriving" is not an entirely mechanical process. Furthermore, the proposed change is likely to be far-reaching, and since it may benefit Links in general, James suggested that we implement it on the de facto master branch, "sessions".

The question remains: Which parts of Links are sensitive to this change?

Shallow handler typing bug

The following should type check, but sadly it doesn't:

links> fun h1(m) { 
   shallowhandle(m) { 
      case Op(k) -> h1(fun() { k(1) }) 
      case Return(x) -> x 
   } 
};
h1 = fun : (() {Op:Int|a}~> b) {Op{_}|a}~> b
links> fun h2(m) { 
    shallowhandle(m) { 
        case Op(k) -> h1(fun() { k(2) }) 
        case Return(x) -> x 
     } 
};
<stdin>:1: Type error: The codomain of continuation `k' has type
    `_'
but a type compatible with
    `_'
was expected.
In expression:  shallowhandle(m) { case Op(k) -> h1(fun() { k(2) }) case Return(x) -> x }.

Introducing git and code conventions

I would like to suggest introducing some working conventions for Links project:

  • Clean trailing whitespace from the source code. Trailing whitespace has no semantic meaning for humans or compilers but does have meaning for source control systems like git. I would be radical here and suggest that once all major branches are merged and the Gorgie release is made we should just remove all trailing whitespace in one go. Once we've done that we could install a git hook that prevents pushing trailing whitespace to the repository, to make sure it is not re-introduced.

    If you want Emacs to highlight trailing whitespace, add these two lines to your .emacs file:

(setq-default show-trailing-whitespace t)
(setq-default indicate-empty-lines t)
  • Stop using merge commits. Looking at the git history I see that the current convention is to use merge commits. I think this is a Bad Thing as it totally obscures commit history. Firstly, I don't think it is really possible to follow a history that looks like this:

    [screenshot of a merge-heavy commit graph]

Secondly, it is not possible to figure out what commits do. Take for example commit 5786aa1. It introduces a lot of changes in the source code, but the commit message gives me (or any of you in six months) no clue as to what these changes are about. It only mentions that some branch was merged, but that's not really useful - merged branches usually get deleted, and following what work was done on a deleted branch is non-trivial (at least to me). So I would propose that we stop using merge commits and start using git rebase + git merge --ff-only.

TODO

  • set "rebase merging" strategy as the default. This has to be done in repository settings to which I do not have access. In this way "rebase merging" will be the only way to merge pull requests. To change this go to Settings -> Options -> Merge button and untick "Allow merge commits" and "Allow rebase merging".

Possible bug in unification

I get this somewhat confusing error message.
Is this a bug in the unification code, or could someone explain to me what's wrong with my code?

Unification error: Rows
 |_
and
 |_
 could not be unified because one is closed and the other has a rigid row variable
Unification error: Couldn't match ((id:Int,parent:Int,schema:Int,value:String|_)) -> Bool against ((id:Int,parent:Int,schema:Int,value:String|_)) {}-@ _::Any
/home/stefan/src/dbwiki/xpath.links:287: Type error: The function
    `filter'
has type
    `((id:Int,parent:Int,schema:Int,value:String|_)) -> Bool'
while the arguments passed to it have types
    `(id:Int,parent:Int,schema:Int,value:String|_)'
and the currently allowed effects are
    `'.
In expression: filter(r).

Don't use "postgres" superuser for running tests

I just spent about an hour trying to figure out why the following line from the run-tests script does not work on my machine:

psql -v ON_ERROR_STOP=1 -q -U postgres -d links -f $s

It turns out that Debian (and presumably Ubuntu too) blocks access to the postgres superuser when connecting in the default way (peer authentication on the local Unix socket). The workaround that works on Debian is to connect explicitly to localhost, where the rules are less strict:

psql -v ON_ERROR_STOP=1 -q -U postgres -d links -h localhost -f $s

I believe the proper solution to this would be to stop using the superuser for operations on the database. Perhaps we could assume that a user that runs the test script has a links database that she has access to? Then we could just say:

psql -v ON_ERROR_STOP=1 -q -d links -f $s

What do others think?

Pattern matching failure in miskinded type application

Here's a buggy program that causes a pattern matching failure in instantiate.ml:

links> typename T(r::Row) = [|r|];
T = a::Row.[||a|]
links> typename X = T(Int);
*** Error: File "instantiate.ml", line 396, characters 15-20: Pattern matching failed

Type variable names are too localised

Type variable names are local to individual types rather than whole error messages. The current implementation threads a renaming environment through the pretty-printer. It may be easier to do something more imperative.

TODO

  • figure out why gripers are called several times even though only one error message is printed. Answer: this is caused by unification, which tries to unify types and backtracks in case of failure. Sam says that unification throws exceptions containing error messages; these exceptions get caught and backtracking is performed. Only the last one is printed. This is probably not what unification should be doing. Error messages should be generated lazily. See #85
  • all gripers should have their code refactored: calls to code/show_type should be assigned to let-bindings. This ensures proper evaluation order. Otherwise, subsequent calls to ^ are evaluated right-to-left, and if the calls are inlined then variables are generated backwards. UPDATE This does not seem to work as intended. Variable names are generated in reverse order anyway. The probable reason is that they are not generated lazily enough. Needs fixing.
  • figure out why hide_fresh_type_vars is enabled by default. This does not seem to me like a useful thing to do. Perhaps we should reconsider?
  • Documentation: describe each new function, explain motivation for refactoring gripers (proper evaluation order), describe mechanism of name generation
  • Fix the bug where an effect type variable is not printed in the error message.

Figure out how to do a release

There are several files in the repo related to preparing a release tarball: the checkfiles and make_release scripts, as well as the MANIFEST and COMANIFEST data files. These are woefully out of date. MANIFEST and COMANIFEST mention files that no longer exist and omit files that do exist. Files generated by checkfiles are not properly cleaned by make clean and are not ignored by git. These are all easy things to fix, but the questions are whether the upcoming release will be made using these scripts and whether it is worth investing time in fixing them. Perhaps it would make sense to make a release using OPAM?

Since this is about making a release I am assigning this to Gorgie milestone, but if anyone more knowledgeable than me thinks this is incorrect please remove the milestone.

TODO

  • remove old scripts and data files from the repo: checkfiles, MANIFEST, COMANIFEST, make_release
  • remove RELEASE-CHECKLIST, but make sure relevant things are placed on the wiki
  • update contents of INSTALL file. Perhaps point to wiki?
  • create OPAM description file
  • verify that the package installs correctly with all the required data files (most importantly, prelude.links, but also user documentation)
  • check how to enable/disable database backends during installation with opam
  • check what happens when libraries required by database backends are installed in the system. Is the package automatically reinstalled with the new backend enabled?
  • make sure the installation process and release process are described. The release process should be described on the wiki. Installation should be described in the INSTALL file (or perhaps on the wiki?). Also, the database backend wiki page needs to be updated.

Too many compiler warnings

Lots of compiler warnings are produced when Links is compiled. We should fix or suppress most of them.

Database information leaks to the client

We don't have a proper semantics for database and table values on the client. If js_hide_database_info is enabled then database and table statements are all interpreted as the unit value on the client. This happens to work for all of our examples, and in some cases prevents database information leaking to the client. It will fail if a function on the client tries to pass database or table information to the server. It won't prevent database and table information from being passed from the server to the client. If js_hide_database_info is disabled then all of the database information including the username and password is visible on the client.

Continuation typing

This test fails:

continuation typing [3]
{ escape y in { ("" == y(1), true == y(1)); 2 } }
stdout : 1 : Int

I have no clue what escape does. Someone have a look?

Removing dead code

In several places in the source code I have stumbled upon fragments of code that are commented out. This makes the code a lot less pleasant to read. git blame tells me that many of these fragments have been commented out for many years. This leads me to suspect that the code inside the comments might have bitrotted and will never be resurrected. I would like to remove that code. Are there any objections to doing so?

Flexible type variables

Example from the documentation:

links> (1, (2, ((3, fun (x) {x}), "a")), true) : (Int, ?a, Bool);
*** Parse error: <stdin>:1

  (1, (2, ((3, fun (x) {x}), "a")), true) : (Int, ?a, Bool);
                                                     ^

Were flexible type variables removed or did they break?

If they were removed intentionally, someone should update the documentation. Preferably with a better solution than just commenting out the signature.

errormsg branch

The small improvement to Links error message displaying that is done in the "errormsg" branch should be merged into a more mainstream branch, and "errormsg" should be retired.

There is no type checker for the IR

Transformations on the IR reconstruct the type of a term, but only inspect the types of subterms that are necessary for type reconstruction. It would be useful to be able to check that types of all subterms are well-formed in order to debug optimisation passes.

*** Error: Query.Eval.DbEvaluationError("Error projecting from record")

I have some code that triggers this (in sessions), but used to work (in cb0e214).
I tried bisecting and I think things start breaking with the closure conversion project at around 74cb2f4.

The exact error message keeps changing (bad pattern matches, bad assertions).
There are a couple of bug fix commits 13d7ffc, 59d863f, that mention queries, but don't fix this one.

There is probably a smaller test case than this, but this is what I got. Expected result [(name="b"), (name="d")] : [(name:String)].

create table xml (
  id int primary key,
  parent int,
  name text,
  pre int,
  post int
  );

insert into xml (id, parent, name, pre, post) values
  (0, -1, '#doc', 0, 13),
  (1, 0, 'a', 1, 12),
  (2, 1, 'b', 2, 5),
  (3, 2, 'c', 3, 4),
  (4, 1, 'd', 6, 11),
  (5, 4, 'e', 7, 8),
  (6, 4, 'f', 9, 10);
var db = database "stefan";

var xml = table ("xml") with
    (id : Int,
     parent : Int,
     name : String,
     pre : Int,
     post : Int) from db;

typename Axis = [| Self
                 | Child
                 | Descendant
                 | DescendantOrSelf
                 | Following
                 | FollowingSibling
                 | Rev:Axis
                 |];

typename Path = [| Axis:Axis
                 | Seq:(Path, Path)
                 | Name:String
                 | Filter:Path
                 |];

typename Node = (id:Int,name:String,parent:Int,post:Int,pre:Int);

fun axis(ax) {
  switch (ax) {
    case Self -> fun (s, t) { s.id == t.id }
    case Child -> fun (s, t) { s.id == t.parent }
    case Descendant -> fun (s, t) { s.pre < t.pre && t.post < s.post }
    case DescendantOrSelf -> fun (s, t) { s.pre <= t.pre && t.post <= s.post }
    case Following -> fun (s, t) { s.post < t.pre }
    case FollowingSibling -> fun (s, t) { s.post < t.pre && s.parent == t.parent }
    case Rev(ax) -> var rev = axis(ax); fun (s, t) { rev(t, s) }
  }
}

sig path : (Path) ~> (Node, Node) -> Bool
fun path(p: Path) {
  switch (p) {
    case Seq(p, q) ->
      var p = path(p);
      var q = path(q);
      fun (s, u) {
        not(empty(for (t <-- xml)
                  where (p(s, t) && q(t, u))
                   [()]))
      }
    case Axis(ax) -> axis(ax)
    case Name(name) -> fun (s, t) { s.id == t.id && s.name == name }
    case Filter(p) ->
      var p = path(p);
      fun (s, t) {
        s.id == t.id && not(empty(for (u <-- xml)
                                  where (p(s, u))
                                    [()]))
      }
  }
}

# /*/*
var xp0 = Seq(Axis(Child), Axis(Child));

# //*/parent::*
var xp1 = Seq(Axis(DescendantOrSelf), Axis(Rev(Child)));

# //*[following-sibling::d]
var xp2 = Seq(Axis(DescendantOrSelf), Filter(Seq(Axis(FollowingSibling), Name("d"))));

# //f[ancestor::*/preceding::b]
var xp3 = Seq(Axis(DescendantOrSelf),
              Seq(Name("f"),
                  Filter(Seq(Axis(Rev(DescendantOrSelf)),
                             Seq(Axis(Rev(Following)),
                                 Name("b"))))));

var xpmin = Seq(Axis(DescendantOrSelf),
              Filter(Seq(Axis(Child),
                         Axis(Child))));

fun xpath(p) {
  var p = path(p);
  query {
    for (root <-- xml,
         s <-- xml)
    where (root.parent == -1 && p(root, s))
      [(name=s.name)]
  }
}

xpath(xp0)

Replacing OCamlMakefile with ocamlbuild?

The currently used build method - OCamlMakefile - seems to have two drawbacks:

  • it puts all the compilation artefacts in the same directory as the source files. I looked at the documentation and it seems that OCamlMakefile can't do better
  • file dependencies need to be figured out manually and listed explicitly in the Makefile

I was wondering if it would make sense to move from OCamlMakefile to ocamlbuild? As a quick and dirty attempt I managed to compile Links with:

ocamlbuild -use-ocamlfind -pkgs 'bigarray,num,str,deriving.syntax,deriving.syntax.classes,deriving.runtime,lwt,lwt.syntax,lwt.unix' -syntax camlp4o "links.native"

This is definitely not sufficient as it does not seem to compile the database backends. It would require a bit more investigation to figure out how to deal with that. Do others think this is worth doing? The benefits would be a clean source tree (all .o, .annot, .cmi and .cmx files would land in the _build directory) and there would be no need to manually figure out module dependencies.

See #78 for an up-to-date list of TODOs.

order by refers to "wrong" tuple variable

query {
 for (v <-- versions)
 orderby (v.id)
   [(id=v.id)]
}

compiles to

select (t1635."id") as "id",
       (t1634."id") as "order_1" -- <-- t1634 is not bound!
from TEST_version as t1635
order by order_1

Not sure what the resulting query is meant to be, but probably s/t1634/t1635/g.
Might or might not be related to #11.

Support types in modules

At the 1 August meeting, we discussed supporting types in modules.

One point of discussion was the overlap between session type dots and module syntax. We decided to require parentheses for types from modules in session types for the time being. So, rather than writing

?A.B.!C.D.End

We would require

 ?(A.B).!(C.D).End

This should hopefully resolve the conflict between session types and module access syntax.

Unified pattern matching representation

Links has three different pattern matching constructs (four if you include receive which is desugared into switch). These are

  • switch which destructs value patterns
  • choice which destructs choice patterns
  • handle which destructs effect patterns

Each pattern-matching construct has its own slightly distinct internal representation. Moreover, each construct has its own entry point to the pattern-matching compiler (compilePatterns.ml). The handle and choice constructs employ various hacks to make use of pattern matching compilation infrastructure for switch. This infrastructure is rather rigid, making it difficult to reuse and adapt for new experiments. It would be beneficial to have a common representation of pattern-matching internally. It would require a rewrite of the pattern-matching compiler. In addition, it would reduce the amount of boilerplate code and code duplication that we currently have in sugarTraversals.ml and transformSugar.ml.

I would like a design which partitions clauses into value, effect, and choice clauses.

Table column names with underscores

(in the shredding branch)

Underscores have special meaning in shredding/nested-records queries. Links gets confused when columns have underscores in their names.

var agencies =
  table "agencies"
  with (oid: Int,
        id: Int,
        name: String,
        based_in: String,
        phone: String)
  where oid readonly
  tablekeys [["oid"], ["id"]]
  from db;

Generated query:

select (0) as "1_1",(t1778."2") as "1_2",(t1776."based_in") as "2_a_data_based_in",(t1776."id") as "2_a_data_id",(t1776."name") as "2_a_data_name",(t1776."oid") as "2_a_data_oid",(t1776."phone") as "2_a_data_phone",(t1776."oid") as "2_a_prov_row",('agencies') as "2_a_prov_table" from (select (1) as "2") as t1778,agencies as t1776

*** Fatal error : Internal error: NotFound "based" (in Map.find) while interpreting.

Scoping of nested for comprehensions vs multiple binders

Are these two snippets meant to be equivalent?

for (a <- [[1], [2]])
  for (b <- a)
    [b]
for (a <- [[1], [2]],
     b <- a)
  [b]

They are not: in the second version, a is not visible in the binding for b.

Type error: Unknown variable a.
In expression: a.

Wrong tuple variable name in FROM clause vs. WHERE clause

Hi,

I have a query that doesn't work:

query { for (r <-- xml)
        where (r.id == 0
           && not(empty(for (ooc <-- xml,
                             oc <-- xml)
                        where (ooc.parent == r.id && oc.parent == ooc.id)
                            [oc])))
          [r]
}

Is this sensible at all?

It produces the following SQL:

select (t1759."id") as "id",(t1759."name") as "name",(t1759."parent") as "parent",(t1759."post") as "post",(t1759."pre") as "pre"
  from xml as t1759
  where ((t1759."id") = (0)) and
     (exists (select 0 as dummy
                from xml as t1768, -- t1760 used but t1768 bound!
                     xml as t1769
               where ((t1760."parent") = (t1759."id")) and ((t1769."parent") = (t1760."id"))))

which doesn't work, because it refers to tuple variable t1760, which is not bound in any FROM clause. Instead, there is a from xml as t1768, but t1768 is never used.

Any suggestions? (I'm not entirely sure the query is valid Links code. If it's not "Your query is wrong!" is a perfectly acceptable answer...)

PS: This is using the current sessions branch at 3477fee.

Nontermination in queries due to recursive types

The effect analysis used to determine whether something is convertible to a query fails to detect recursion that sneaks in via recursive types. For example,

 sig f : mu a . ((a) -> b)
 fun f(x) {x(x)}

 query {f(f)}

is accepted and loops for ever.

Set `sessions` branch as the default branch

Several people have told me "the sessions branch is the de facto default branch, we should set it as the default branch on GitHub". I would find that very convenient - switching manually from master to sessions all the time when working with GitHub is annoying. Is there any reason why this change has not been made? It should take less than a minute.

TODO

  • tag master as pre-session-types so that we can easily go back to the version of Links before development of session types started.
  • switch temporarily to sessions branch as the main branch
  • merge sessions branch into master. Should be a simple fast-forward merge, since master has not been developed since then.
  • revert to master as the default

Implement conditional assertions

In types.ml I stumbled upon:

  let combine (name, (flavour, kind, count)) (flavour', kind', scope) =
(*     assert (flavour = flavour'); *)
(*     assert (kind = kind'); *)

I assume that these are commented out for performance reasons. This gives me the idea that we could create assertion macros, which would compile conditionally. In a release version such assertions would be a no-op, while in a development version they would contain actual assertions with file name and line number. This would require a CPP pre-processor, e.g. CPPO. Is this a good idea? Is an extra dependency on CPPO acceptable?

TODO

  • add a flag to _oasis file to enable -noassert conditionally
  • make sure that when Links is distributed to users via OPAM, -noassert is enabled. For us as developers, assertions should remain enabled by default.

Pattern matching on lists fails in queries

Pattern matching is always typed as being compilable to the database, but pattern matching on lists compiles to the functions hd and tl, which are not yet compilable to the database.
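
As a purely hypothetical illustration (the "people" table is made up and this is a sketch, not a confirmed reproduction), a query along these lines is accepted by the type checker but cannot be compiled to SQL, because the list pattern desugars to hd and tl:

var db = database "mydb";
var people = table "people" with (name : String) from db;

query {
  for (p <-- people)
    switch ([p.name]) {
      case [n] -> [(name = n)]
      case _   -> []
    }
}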

Detecting redundant patterns

There are a couple of tests in tests/patterns.tests about redundant patterns that fail.
Did we at some point stop caring about detecting redundant patterns?

Add proper unicode support

Server-side Links supports Unicode via UTF-8, but this means that the reported length of non-ASCII strings is not always correct.
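
For example (illustrative only, assuming the prelude's strlen currently counts bytes of the UTF-8 encoding rather than code points):

strlen("café")  # one might expect 4, but a byte-based length reports 5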

Redefining hd and pattern matching

This test fails:

Case patterns (with redefined hd)
{ fun hd(_) { 1 } switch (['a']) { case [y] -> y }}
stdout : 'a' : Char

My best guess: pattern matching desugars to calls to hd and tl, but is not hygienic.

Value restriction and effect signatures

The type checker rejects the following program

links> sig foo : () {Bar:a|_}-> a
...... fun foo() { var x = do Bar; x };
<stdin>:2: Type error: Because of the value restriction there can be no
free rigid type variables at an ungeneralisable binding site,
but the type `_' has free rigid type variables.
In expression: var x = do Bar;.

Even though the type checker infers that type:

links> fun foo() { var x = do Bar; x };
foo = fun : () {Bar:a|_}-> a

If the type variable a first appears outside of the effect row, then it type checks:

links> sig foo : (a) {Bar:a|_}-> a
...... fun foo(y) { var x = do Bar; x };
foo = fun : (a) {Bar:a|_}-> a

But only if the dummy variable (y) is not a wildcard; with a wildcard (_) it fails again:

links> sig foo : (a) {Bar:a|_}-> a
...... fun foo(_) { var x = do Bar; x };
<stdin>:2: Type error: Because of the value restriction there can be no
free rigid type variables at an ungeneralisable binding site,
but the type `_' has free rigid type variables.
In expression: var x = do Bar;.
