
pyfunctional's Issues

fold_left and fold_right have incorrect order of arguments in fold function

While working to add aggregate I noticed a major bug which affects fold_left and fold_right. Referencing the scala documentation for foldLeft shows that given a sequence of type A, the passed function should have the type func(x: B, y: A) => B. This means that x should be the current folded value and y should be the next value to fold.

Currently, fold_left and fold_right behave with these arguments reversed, which is inconsistent with both Scala and the similar aggregate function defined in the LINQ documentation.

To confirm this behavior:

Scala REPL

List("a", "b", "c").foldLeft("")((current, next) => current + ":" + next)
res3: String = :a:b:c

Python Terminal

In [1]: seq('a', 'b', 'c').fold_left("", lambda current, next: current + ":" + next)
Out[1]: 'c:b:a:'

Correcting this bug introduces a breaking change against all previous versions of ScalaFunctional which contain fold_left, namely 0.2.0, 0.3.0, and 0.3.1. Since this fix is a breaking change, it will not be backported as patch releases (third number in the version), but will be introduced in 0.4.0.
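For reference, the corrected `fold_left` semantics line up with Python's built-in `functools.reduce`; below is a minimal sketch (standalone function, not the library's internal implementation):

```python
from functools import reduce

def fold_left(sequence, zero, func):
    # func takes (current_accumulated_value, next_element), matching
    # Scala's foldLeft signature func(x: B, y: A) => B
    return reduce(func, sequence, zero)

result = fold_left(['a', 'b', 'c'], "", lambda current, nxt: current + ":" + nxt)
# result == ':a:b:c', matching the Scala REPL output above
```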

Edit Documentation for LINQ

Add documentation, change summary, and pypi keywords to improve discoverability for users looking for LINQ-like features.

Child of #38

Implement data passing functions

So far, the only way to ingest data into ScalaFunctional is through Python-defined data structures. It would be helpful to be able to read directly from data formats such as json/sql/csv.

The target milestone for completing everything is 0.4.0.

This issue will serve as a parent issue for implementing each specific function.

Child issues:
#34 seq.open
#35 seq.range
#36 seq.csv
#37 seq.jsonl
#29 seq.json
#30 to_json
#31 to_csv
#32 to_file
#33 to_jsonl

Broken 0.4.0 on Python 3 due to enum34 bug in wheel distribution

Looking into this. I suspect the wheel is built using Python 2, which means that, unlike with the source distribution, the code in setup.py that selects version-specific requirements does not run on the installing interpreter. This didn't appear to be a problem before because part of this release fixed using the correct requirements list.

Doing my best to get a good fix out tonight and bump to 0.4.1 to avoid breaking things on pip.
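One common way to avoid baking Python-2-specific requirements into a wheel is to declare the enum34 backport with a PEP 508 environment marker, so the dependency is evaluated on the installing interpreter rather than the one that built the wheel. A sketch of the idea (illustrative only, not necessarily the exact fix that shipped):

```python
# Sketch: select version-specific dependencies at install time via
# environment markers, rather than in setup.py logic that runs only
# once, at wheel-build time.
setup_kwargs = {
    'name': 'scalafunctional',  # illustrative
    'extras_require': {
        # enum34 backport only for interpreters without stdlib enum (< 3.4)
        ':python_version < "3.4"': ['enum34'],
    },
}
```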

Using Generators Multiple Times

The general bug or undesirable behavior comes from using a generator from ScalaFunctional twice. This will, of course, keep returning nothing since the generator is exhausted after the first pass. While in general this is expected behavior, there are some functions where this could be prevented.

Specifically, for to_list, list, to_set, set, to_dict, and dict, since the sequence is getting expanded, it should also get stored for future calls. This increases memory use, but only by a constant factor.

In general, I need to look at the library and consider where else it makes sense to do this (sparingly).
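A minimal sketch of the caching idea (class and method names hypothetical): expand the backing generator once on the first collection call and reuse the stored list afterwards.

```python
class CachedSequence(object):
    """Hypothetical sketch: cache the expanded iterable so repeated
    collection calls (to_list, to_set, ...) do not hit an exhausted
    generator."""

    def __init__(self, iterable):
        self._iterable = iterable
        self._cache = None

    def to_list(self):
        if self._cache is None:
            # First call exhausts the generator and stores the result
            self._cache = list(self._iterable)
        # Return a copy so callers cannot mutate the cache
        return list(self._cache)

squares = CachedSequence(x * x for x in range(3))
first = squares.to_list()   # [0, 1, 4]
second = squares.to_list()  # still [0, 1, 4] instead of []
```

The memory cost is exactly the constant factor mentioned above: one stored copy of the expanded sequence.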

Investigate Integrating Toolz

It was suggested via Twitter to check out toolz. I think it's worth looking into as a way to power part of the backend. It may help clean up code, make certain things easier to extend, and improve performance.

I've posted to their mailing list expressing interest in collaboration.

Create `seq.json`

In this ticket, implement the seq.json function. This should be styled similar to other functions in functional.streams. The primary decision points are:

  • When given a list at the json root, seq.json will make each element in the json list an element in the sequence
  • When given a dictionary at the json root, seq.json will return a list of (Key, Value) pairs, where the keys are the root dictionary keys and the values are the corresponding values.

The second behavior is consistent with the fact that Sequence stores a list, not other collection types, and that in the context of functional a dictionary is best represented as a list of (Key, Value) pairs.
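The two decision points can be sketched with just the stdlib json module (helper name hypothetical):

```python
import json
import os
import tempfile

def seq_json_sketch(path):
    # Hypothetical helper illustrating the proposed seq.json semantics
    with open(path) as f:
        root = json.load(f)
    if isinstance(root, dict):
        # Dictionary root -> sequence of (Key, Value) pairs
        return sorted(root.items())
    # List root -> one sequence element per json list element
    return list(root)

with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as f:
    json.dump({'a': 1, 'b': 2}, f)
    path = f.name
pairs = seq_json_sketch(path)
os.unlink(path)
# pairs == [('a', 1), ('b', 2)]
```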

Child of #19

Add functionality to seq

Issue for bookkeeping; this is already implemented. seq has been modified to support this behavior:

>>> # Already supported
>>> seq([1, 2, 3])
[1, 2, 3]
>>> # Newly added
>>> seq(1, 2, 3)
[1, 2, 3]
>>> seq(1)
[1]
>>> # Behavior changed, used to expand string
>>> seq("abc")
["abc"]

[lineage] Potential performance problems

With this code, I traced all function calls for a simple operation.

import sys
from functional import seq

def tracefunc(frame, event, arg, indent=[0]):
    # The mutable default argument intentionally keeps indentation
    # state across calls
    if event == "call":
        indent[0] += 2
        print(" " * indent[0] + "|", frame.f_code.co_name)
    elif event == "return":
        indent[0] -= 2
    return tracefunc

sys.settrace(tracefunc)

def dummyPredicate(line):
    return True

list(seq([1, 2, 3, 4, 5]).filter(dummyPredicate))

Here are the results for master and lineage-rewrite branches: https://gist.github.com/adrian17/5daa0db38fb4340c9f6e

As you can see, dummyPredicate is called twice as often as it should be - it looks like the base collection is iterated twice.

Add SQLite3 Output Stream

Add a function to_sqlite3 that can write to sqlite3 databases.

Text from prior discussion

Potential API

Writing to sqlite won't be complex if we can supply insertion SQL like below.

the_seq.to_sqlite3("db_path", "insert into test_table (id, name) values (?, ?);")

However, an API without an explicit query, in the style of pandas' to_sql, needs some work.

the_seq.to_sqlite3("db_path", "table_name")

Potential API Description

For inserting, the first example seems fine to me. The reason Pandas can do the second one is that it works with structured data for which it knows the types/names. The second query would get translated to something like insert into table_name (col1,col2....) values (?.....); where the columns come from the DataFrame's columns.

The second call could be fairly useful and not too difficult to write. Since we don't keep track of columns, in order to do something like this we would have to enforce that the sequence is a sequence of Tuple/namedtuple/List/dict of the same length/form (for dict this would require a scan to determine all dict fields since that is more friendly than every dict having every column, for list it would require getting max length list). Following that, we could do our best to infer the names of the columns for insert into test_table (id, name) values (?, ?) (from namedtuple._fields or scanning for dict fields) or give up and use insert into test_table values (?, ?).

The second requirement would be to check the input string against a table name regex to determine what should be done.
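The column-inference idea described above could look roughly like this (helper name hypothetical; only the namedtuple case and the give-up fallback are sketched):

```python
from collections import namedtuple

def infer_insert_sql(table_name, rows):
    # Hypothetical helper: infer column names from namedtuple._fields,
    # otherwise fall back to positional placeholders only
    first = rows[0]
    placeholders = ', '.join('?' for _ in first)
    fields = getattr(first, '_fields', None)
    if fields is not None:
        columns = ', '.join(fields)
        return 'insert into %s (%s) values (%s)' % (table_name, columns, placeholders)
    return 'insert into %s values (%s)' % (table_name, placeholders)

User = namedtuple('User', 'id name')
sql = infer_insert_sql('test_table', [User(1, 'pedro')])
# sql == 'insert into test_table (id, name) values (?, ?)'
```

A full implementation would also need the dict-scanning pass and the table-name regex check described above.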

RFC: Name Change

I have long thought that the name ScalaFunctional is not that great, but so far I haven't done anything about it. I think this might be a good time to come up with a better name that suits what the project does and where it is headed. To be clear, the name change is for the distribution name (repository, website, PyPI); the import name will not change, because too many things would break.

I would like to detail where the name came from, why it's not that good, and what would be desired in a new name by explaining the overall goals for the project. I plan on posting some name ideas later this week after more thought, but would like to get ideas from others as well.

At the end of the issue, I will explain the logistics of the plan, which, in short, is designed to not break anything.

ScalaFunctional Name

Origin

  • API heavily inspired by/copies the Scala collections API
  • Second major source of ideas is Apache Spark, which is written in Scala
  • Library itself facilitates functional programming
  • Didn't think too much about the name when making the library, since I primarily wanted it on PyPI to use it easily at the company I worked for (i.e., not set up a private PyPI server to share code across projects)
  • Matched the import name functional
  • functional on PyPI had/has been dead for a long time, but its name cannot be reclaimed. Since it is dead, a name conflict from import functional is unlikely, but it is not possible to claim the dead project's distribution name

Why it's not good

  • This is a Python package, not a Scala package
  • Users may not care or know that it is Scala-inspired, so it's confusing
  • Package is focused on data pipelines. This is pretty clear from lots of work to support various input/output data forms (to python collections, files, sql, pandas, and probably more later). The name doesn't highlight that
  • Hurts discoverability by LINQ users, which is a fairly large segment

Overall Goals and Direction

  1. Support creating data pipelines using functional programming concepts
  2. Provide read/write interoperability with all common data sources in the domain of data pipelines
  3. Improve aspects of the functional programming experience in python that enhance the first goal
  4. Let LINQ users seamlessly use the same engine from a familiar API
  5. Provide the above with negligible impact on performance, and the possibility of a parallel execution engine

Currently, the project is doing very well in supporting the first two. The streams/actions API is very complete, and more or less all common data formats/sources are supported (file compression coming in the next release). The next possible targets would be SQL databases via SQLAlchemy and, similar to to_pandas, providing a way to make an SKLearn node (auto-generating a class that satisfies the node API).

I am not quite happy with progress on the third goal, namely making lambda calls more succinct. This is my motivation to at some point natively support something like _ from fn.py. This is paired with the exploratory work I have been doing on a SQL Parser/Compiler. With the code/understanding I have right now, something like below is looking pretty easy:

User = namedtuple('User', 'id name city')
seq([User(1, 'pedro', 'boulder'), User(2, 'fritz', 'seattle')]).filter('id < 1').select('name,city')

I am fairly confident that as time goes on, the fourth goal will be better and better met.

The last goal has a few things wrapped in:

  • Currently performance is good (tested the other day) amortized over larger collections
  • However, there is no parallel execution support. This hasn't been done because it requires quite a bit of work, and lots of testing.
  • Currently, seq forces an expansion of its input. I would like to provide a family of seq operations that behave slightly differently for particular use cases: seq will stay the default, sseq (stream sequence) will not force-expand its input, and pseq (parallel sequence) will provide a parallel execution engine. Of these, sseq is fairly low-hanging fruit

New Name Goals

  • Describes better what the package does: data pipelines, functional programming, chain functional programming...
  • Does not cause confusion with its inspiration sources, but still makes sense given them

New Name Requirements

  • Must be available on PyPI
  • Name is not too similar to existing python package

Names Taken on PyPI

  • functional
  • functionally
  • chainz
  • pipeline
  • linqpy
  • linqish
  • py-linq
  • asq
  • PyLINQ
  • chain
  • fn
  • pipe
  • datapipeline

Plan

  1. Reserve name on PyPI (don't release yet), rename repository to new name, change all references to new name, and place notices wherever needed
  2. Verify that old links redirect to new name (I plan on making a dummy repository to test this behavior)
  3. Make sure readthedocs, travis, and codecov work correctly with new name
  4. Dual release package under new name and ScalaFunctional as 0.6.0
  5. Current plan is to dual release under both names until 1.0, whenever that might be. The import name will not change, only the distribution/repository name. Open to comment on this or any part of the plan

Hopefully I didn't forget anything; open to comments on anything at all (including that the name change is not a good idea)

Fix `zip_with_index` Behavior

zip_with_index behavior is inconsistent with how it is defined in Spark/Scala, and redundant with enumerate. Specifically, it zips with the index on the left-hand side of the tuple instead of the right-hand side.
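The intended Scala/Spark behavior is equivalent to enumerate with the tuple flipped; a minimal sketch of the corrected semantics (standalone function for illustration):

```python
def zip_with_index(sequence):
    # Scala/Spark semantics: element on the left, index on the right
    return [(element, index) for index, element in enumerate(sequence)]

pairs = zip_with_index(['a', 'b', 'c'])
# pairs == [('a', 0), ('b', 1), ('c', 2)]
```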

Create library of common operators

There are quite a few common operators which are passed into functions such as map/filter/reduce. It might be a good idea to compile a library of common operators in functional.ops.

Fix count to match Scala docs instead of Spark docs

In the Spark docs, count returns the number of elements from all partitions without using a predicate. In Scala, count returns the number of elements which satisfy a predicate. In general, I think it's better to go with the Scala definitions over Spark's (although things like group_by_key are inspired from there). Additionally, len and size already do what the Spark-style count does.
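The Scala-style semantics can be sketched in one line (standalone function for illustration):

```python
def count(sequence, predicate):
    # Scala semantics: number of elements satisfying the predicate,
    # unlike Spark's count, which takes no predicate
    return sum(1 for element in sequence if predicate(element))

n = count([1, 2, 3, 4], lambda x: x > 2)
# n == 2
```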

Add SQLite3 Input Stream

As described in Pull Request #55, add seq.sqlite3(arg, sql_statement) to input streams API. arg can be any of

  1. Connection string
  2. SQLite connection
  3. SQLite cursor

The input stream comes from the sqlite3 execute(sql_statement) function which returns an iterable of tuple rows
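A rough sketch of how this could work with the stdlib sqlite3 module (helper name and exact dispatch rules are illustrative; a cursor argument would dispatch the same way via execute):

```python
import sqlite3

def sqlite3_rows(arg, sql):
    # Hypothetical helper: accept a connection string or an existing
    # connection object
    conn = sqlite3.connect(arg) if isinstance(arg, str) else arg
    # execute() returns an iterable of tuple rows
    return list(conn.execute(sql))

conn = sqlite3.connect(':memory:')
conn.execute('create table test_table (id integer, name text)')
conn.execute("insert into test_table values (1, 'pedro')")
rows = sqlite3_rows(conn, 'select id, name from test_table')
# rows == [(1, 'pedro')]
```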

Performance Regression with seq.open

Thanks to @adrian17 for finding this.

Using the file here: http://norvig.com/ngrams/enable1.txt, and the output/images below, it's easy to see that LazyFile creates a 2x overhead. Currently, the culprit seems to be a combination of additional call overhead for next and the fact that next in builtins.open is implemented in C.

$ python3 -m timeit -s "from functional import seq;" "lines = list(open('/Users/pedro/Desktop/enable1.txt')); seq(lines).select(str.rstrip).count(lambda line: len(line) > 20)"
10 loops, best of 3: 101 msec per loop

$ python3 -m timeit -s "from functional import seq" "seq.open('/Users/pedro/Desktop/enable1.txt').select(str.rstrip).count(lambda line: len(line) > 20)"
10 loops, best of 3: 195 msec per loop

Putting these in files and using pygraphviz produces these call graphs (look at the far right; the rest is not relevant):
normal-callgraph
special-callgraph

product() should return value on empty lists

In Scala, .product() on an empty list returns 1, 1.0 etc... depending on the type of List's values.

Here, currently seq([]).product() throws.

I think product should take an optional initializer parameter (in case someone used classes with overloaded multiplication) with default value 1 or 1.0... I don't know which though.
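The suggestion maps directly onto functools.reduce with an initial value; a sketch with a default of 1 (parameter name hypothetical):

```python
from functools import reduce
import operator

def product(sequence, initial=1):
    # reduce with an initial value handles the empty sequence, matching
    # Scala where List().product == 1
    return reduce(operator.mul, sequence, initial)

empty = product([])            # 1 instead of raising
nonempty = product([2, 3, 4])  # 24
```

Callers using classes with overloaded multiplication could pass their own identity element as `initial`.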

Underscore similar to fn.py _

Creating to discuss possibly implementing and better integrating something similar to _ in fn.py. From the 0.5.0 milestones:

Another idea is to implement the _ operator from fn.py. It is quite useful, but it's overkill to require the library as a dependency and gimmicky to check whether it exists just to import it. This might open doors to integrate it more deeply as well.

on_error Functionality

The functionality from https://github.com/jagill/python-chainz#errors would be useful for certain use cases. Wrapping this into Lineage seems like a fairly clean way to accomplish this.

On a side note, this might be a good time to look at making the exceptions raised from evaluating an incorrect user function in a PyFunctional pipeline cleaner. Currently, there is quite a bit of noise when it is very unlikely that the core issue is coming from PyFunctional.

Implement functions with generators

Re-implement/change all the functions in the library to be compatible with generators. Currently, sequential calls to transformations produce a new list between each transformation even if it is only used for the next transformation, not in the result. This is wasteful and could be eliminated using generators.

Targeting this to be the large (potentially breaking, hopefully not though) feature of 0.2.0 while 0.1.7 will be used to add more utility functions.
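The difference can be sketched with plain generator expressions: chained lazy transformations build no intermediate lists, and nothing is evaluated until the final collection call.

```python
def lazy_map(func, iterable):
    # Generator expression: evaluated lazily, no intermediate list
    return (func(x) for x in iterable)

def lazy_filter(predicate, iterable):
    return (x for x in iterable if predicate(x))

# Only list() forces evaluation of the whole chain
pipeline = lazy_filter(lambda x: x % 2 == 0, lazy_map(lambda x: x * x, range(5)))
result = list(pipeline)
# result == [0, 4, 16]
```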

Parallel Execution Engine

Creating issue to discuss potential of implementing a parallel execution engine. From the 0.5.0 milestone this might include:

The first possibility is to abstract the execution engine away so that ScalaFunctional can use either a sequential or parallel execution engine. This would need to be done through a combination of multiprocessing and determining where it could be used without creating massive code duplication. Additionally, this would require writing completely new tests and infrastructure since order is not guaranteed, but expected in the current sequential tests.

Create `to_json`

Implement to_json

This should give the option to write values as an array at the json root, or if the sequence is a list of (Key, Value) pairs to write it as a dictionary at the json root.

Child of #19

Regression using iterators using PyPy

First reported here: #24

The core issue is when running code like this in pypy

>>> l = seq([1, 2, 3]).union([4, 5])
>>> set(l)
set([])

The result in standard python is different:

>>> l = seq([1, 2, 3]).union([4, 5])
>>> set(l)
set([1, 2, 3, 4, 5])

Looking into this further, I had a suspicion that at the heart of the issue is that on master, union and many other operators return iterators. If they are iterated over once, they will then return nothing. So it seemed like something was iterating over them before set() got to them. A common culprit for this type of problem is len, so I added some debugging statements and confirmed this is the problem.

For demonstration purposes, below is minimal code to replicate the same behavior, followed by the terminal session for it and for scalafunctional with print statements on iter, getitem, and len

from collections import Iterable

class A(object):
    def __init__(self, seq):
        self.l = seq
    def __getitem__(self, item):
        print "DEBUG:getitem called"
        return self.l[item]
    def __iter__(self):
        print "DEBUG:iter called"
        return iter(self.l)
    def __len__(self):
        print "DEBUG:len called"
        if isinstance(self.l, Iterable):
            self.l = list(self.l)
        return len(self.l)

class B(object):
    def __init__(self, seq):
        self.l = seq
    def __iter__(self):
        print "DEBUG:iter called"
        return iter(self.l)


print "Calling set(A([1, 2]))"
a = A([1, 2])
print set(a)


print "Calling set(B([1, 2]))"
b = B([1, 2])
print set(b)

print "Calling union"
s = set([1, 2, 3]).union([4, 5])
c = A(iter(s))
print set(c)

Output

$ pypy iterable.py
Calling set(A([1, 2]))
DEBUG:iter called
DEBUG:len called
set([1, 2])
Calling set(B([1, 2]))
DEBUG:iter called
set([1, 2])
Calling union
DEBUG:iter called
DEBUG:len called
set([])
$ python iterable.py
Calling set(A([1, 2]))
DEBUG:iter called
set([1, 2])
Calling set(B([1, 2]))
DEBUG:iter called
set([1, 2])
Calling union
DEBUG:iter called
set([1, 2, 3, 4, 5])

Terminal sessions for scalafunctional

$ python
Python 2.7.9 (default, Jan  7 2015, 11:49:12)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.56)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from functional import seq
>>> l = seq([1, 1, 2, 3, 3]).union([1, 4, 5])
>>> set(l)
DEBUG:iter
set([1, 2, 3, 4, 5])
$ pypy
Python 2.7.9 (9c4588d731b7fe0b08669bd732c2b676cb0a8233, Mar 31 2015, 07:55:22)
[PyPy 2.5.1 with GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>> from functional import seq
>>>> l = seq([1, 1, 2, 3, 3]).union([1, 4, 5])
>>>> set(l)
DEBUG:iter
DEBUG:len
DEBUG:iterable expanded in len via list()
set([])

Basically what is happening is that in both examples:

  1. set() results in a call to iter. Since the return type of union is an iterator, the final return value of that call is like iter(iter(resultOfUnion)).
  2. However the object holds a reference to the inner iter(resultOfUnion). When len gets called, it evaluates iter(resultOfUnion) and saves it to l. scalafunctional does this in order to reduce many evaluations of a generator which can cause problems.
  3. This causes a problem when the outer iter is finally called though because there are no elements to call it on.

I am unsure of the best way to fix this, but some considerations

  1. Is pypy correct to be calling len when standard python doesn't? Is there a good reason for this (probably)? Moreover, even if I disagree, it seems unlikely that this would be changed.
  2. The current work on lineage-rewrite, #20, and #17 will fix this I think without any specific attention to it.

Given that, I am inclined to followup with pypy devs to see if this is expected "correct" behavior or something needing fixing. I will also finish up the work on the lineage rewrite, then include the tests using set() and dict(). I am presuming the problems are due to similar issues with iterators. If it is still a problem, then I will have to think more about what to do.

Edit documentation and add alias methods for LINQ

Based on a reddit thread, this package would be helpful for users looking for features of LINQ (from .NET) in Python. This is a parent issue for editing documentation to talk about this use case and add new/alias methods (like where and select) common in LINQ.
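A minimal sketch of what the aliases could look like (class and method names are illustrative, not the package's actual API):

```python
class Query(object):
    """Hypothetical wrapper showing LINQ-style aliases: where behaves
    like filter, and select behaves like map."""

    def __init__(self, items):
        self._items = list(items)

    def where(self, predicate):
        return Query(x for x in self._items if predicate(x))

    def select(self, selector):
        return Query(selector(x) for x in self._items)

    def to_list(self):
        return list(self._items)

result = Query([1, 2, 3, 4]).where(lambda x: x % 2 == 0).select(lambda x: x * 10).to_list()
# result == [20, 40]
```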

Improve to_file for string writing

In using seq.to_file, I have found a common case is writing a collection to a file as a string. I think the right way to expose this is through a delimiter option in to_file. If it is None, the default is to write str(self.to_list()); if it is defined, then it will write self.make_string(delimiter).

Better LINQ Integration

Creating to discuss potentially better LINQ integration for 0.6.0. From the milestone summary:

Another possible focus is on LINQ. This could take the form of implementing a limited SQL parser and optimizer using pyparsing. This might also mean giving select, where, and related methods more definition. For example, if the LINQ functions are invoked with calls like select("atr").filter("atr == 1"), be smarter about how they are executed. This is a wide open door; looking for thoughts and suggestions on what is of value. The basic concept is to start working on smarter ways of reading data, although this might tread into the territory of much more mature libraries like pandas and its DataFrames.

`tail` differing from the Scala version

From documentation: "Selects all elements except the first."

Your version: "get last element".

Any good reason behind it? I can change it to the former, and also implement stuff like inits and tails.
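For comparison, the Scala semantics in plain Python (standalone sketch):

```python
def tail(sequence):
    # Scala semantics: all elements except the first
    return list(sequence)[1:]

rest = tail([1, 2, 3])
# rest == [2, 3]
```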

support compression in File I/O functions

It would be useful if stream functions like seq.open, seq.csv, etc. could read compressed files, as Spark's sc.textFile does.

Writing compressed files via to_file, to_csv, etc. would also be great.
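A sketch of extension-based detection using the stdlib gzip module (helper name hypothetical; Spark's sc.textFile similarly dispatches on the file extension):

```python
import gzip
import os
import tempfile

def open_maybe_compressed(path):
    # Hypothetical helper: pick the opener based on the file extension
    if path.endswith('.gz'):
        return gzip.open(path, 'rt')
    return open(path)

with tempfile.NamedTemporaryFile(suffix='.gz', delete=False) as f:
    path = f.name
with gzip.open(path, 'wt') as f:
    f.write('line1\nline2\n')
with open_maybe_compressed(path) as f:
    lines = [line.rstrip() for line in f]
os.unlink(path)
# lines == ['line1', 'line2']
```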

Create `to_csv`

Implement to_csv with similar interface to python module csv.writer

Child of #19

Create `to_file`

Implement to_file with similar options to builtins.open in write mode

Child of #19

Implement join function

Issue to track implementing a join function. The implementation should take two sequences with tuples (K, V) and (K, W). The return value is the sequence joined on K, producing a sequence of (K, (V, W)) tuples.

Additionally, should implement join_on which creates the keys via the result of a passed function.
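A sketch of the described inner-join semantics (standalone function for illustration):

```python
from collections import defaultdict

def join(left, right):
    # Inner join of (K, V) and (K, W) sequences into (K, (V, W)) tuples
    right_by_key = defaultdict(list)
    for key, w in right:
        right_by_key[key].append(w)
    return [(key, (v, w)) for key, v in left for w in right_by_key[key]]

result = join([(1, 'a'), (2, 'b')], [(1, 'x'), (3, 'y')])
# result == [(1, ('a', 'x'))]
```

A join_on variant would derive the keys by first applying the passed function to each element.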

Create `to_jsonl`

Implement to_jsonl which matches the implementation of functional.streams.jsonl

Child of #19
