
pydal's People

Contributors

alfonsodg, buhtigithub, carpaidea, charleslaw, cwinebrinner, dokime7, flavour, gi0baro, ilvalle, jasonphillips, jmistx, jvanbraekel, leonelcamara, mdipierro, nextghost, nexusbla18, nicozanf, niphlod, nursix, omartrinidad, ortgit, reingart, robertop23, rpedroso, samuelbonilla, spametki, stephenrauch, timrichardson, valq7711, viniciusban


pydal's Issues

Replace project description

Replace the current description: the web2py abstraction layer without web2py
with the more descriptive: This is web2py's Database Abstraction Layer. It does not require web2py and can be used with any Python program.

import_from_csv() may delete uploaded files

Dear all,
The import_from_csv() code contains the following lines:

   # create new id until we get the same as old_id+offset
   while curr_id < csv_id+id_offset[self._tablename]:
         self._db(self._db[self][colnames[cid]] == curr_id).delete()

Inserted records can thus be deleted and re-inserted several times before they reach their final id. If the record contains any upload field, each delete triggers delete_uploaded_files(), removing the referenced files when autodelete is set.
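
A minimal sketch of the exposure, with hypothetical table and field names (any upload field with autodelete=True is affected):

from pydal import DAL, Field

db = DAL('sqlite:memory')
# With autodelete=True, deleting a row also removes the referenced file
# from disk via delete_uploaded_files().
db.define_table('doc', Field('attachment', 'upload', autodelete=True))
# An import_from_csv_file() run that deletes and re-inserts rows to realign
# ids would therefore remove the attachment files along the way.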

DAL runtime debug=True

Hi All,

I tried to set debug=True in the DAL at runtime, but it doesn't seem to work. I think it could be really useful to have switchable debugging in the DAL, especially in those situations where you can't tell whether the bug is in your code or in your data.
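
In the meantime, a hedged inspection trick (assuming the db._lastsql and db._timings attributes pydal inherited from web2py's DAL are still present): looking at the SQL actually sent to the backend often helps tell a code bug from a data bug.

rows = db(db.mytable.id > 0).select()  # db.mytable is a hypothetical table
print(db._lastsql)   # the last SQL statement executed
print(db._timings)   # recent (sql, seconds) pairs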

Thank you for your time!

'unique' attribute for tables

As @massimo noticed, we missed this: web2py/web2py#395

I think we should start a discussion about this. My points:

  • I'm not sure about the use case of this feature
  • I don't really like that implementation: if we want this, let's write it from scratch.

What do you think?

boolean in MSSQL2Adapter

The boolean data type is mapped to bit or boolean for all MSSQL adapters except MSSQL2Adapter.
Is there a special reason?

fix allowed table and field names

As discussed in web2py-developers (such as here ), we should avoid any table and field name that is not a valid Python identifier. The cleanup() function needs a revision, and tests should be added to avoid regressions. We have rname for legacy and/or funny names.
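
A hedged sketch of the rule being proposed (illustrative, not pydal's actual cleanup()):

import re
import keyword

def is_valid_name(name):
    # Accept only valid Python identifiers that are not keywords.
    return (re.match(r'^[A-Za-z_][A-Za-z0-9_]*$', name) is not None
            and not keyword.iskeyword(name))

assert is_valid_name('first_name')
assert not is_valid_name('first name')  # legacy names belong in rname instead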

Supported python versions

I'm wondering whether Python 2.5 is still considered supported by pydal.
The readme should explicitly state which Python versions are considered stable and 'supported', which are marked 'experimental' (I guess Jython?), and which are not supported at all (Python 3.x?).

Typo in pydal/adapters/postgres.py line 208

In the postgres.py adapter, the ILIKE definition has a typo on line 208.
Instead of returning

return '(%s ILIKE %s)' % (
                self.CAST(args[0], 'CHAR(%s)' % first.length), args[1])

it is written as

return '(%s LIKE %s)' % (
                self.CAST(args[0], 'CHAR(%s)' % first.length), args[1])

This causes ILIKE queries on some field types to behave as case-sensitive LIKE queries instead.

As a result, when searching (for example) inside a field of type list:string, the results are not as expected.

query = (db.metadatatable.pl1_host_list.ilike('%Amanda%'))
rows = db(query).select()

Returns

ENBT_15004_C    ['Amanda Salas', 'AJ Gibson']

and

query = (db.metadatatable.pl1_host_list.ilike('%amanda%'))
rows = db(query).select()

Returns

HTDL_14125_D    ['amanda salas']

both queries should return

ENBT_15004_C    ['Amanda Salas', 'AJ Gibson']
HTDL_14125_D    ['amanda salas']

Having a problem with `with_alias`

Hi there. I'm trying to use pydal with an existing set of tables in my database, and I'm hitting a wall over how to deal with two instances of the same table appearing in the same query.

First, a working example. Imagine I have two tables representing a directed graph:

from pydal import * 

db = DAL('mysql+mysqldb://someuser:somepassword@somehost:3306/graph')

db.define_table(
    'node',
    Field('name', type="string")
)

db.define_table(
    'edge',
    Field('node_from', type=db.node),
    Field('node_to', type=db.node)
)

Then I want the simplest query, just to print the names of the nodes that are connected by an edge:

db.node.with_alias('start')
db.node.with_alias('end')
db.edge.with_alias('hop')

hop_join = (db.start.id == db.hop.node_from) & (db.end.id == db.hop.node_to)
print db(hop_join)._select(db.start.name, db.end.name)

The obtained query is:

SELECT 
    start.name, end.name
FROM
    graph.some_rname_for_node AS start,
    graph.some_rname_for_node AS end,
    graph.some_rname_for_edges AS hop
WHERE
    ((start.id = hop.node_from)
        AND (end.id = hop.node_to));

Which is exactly what I want. This works perfectly.

But for some reason, when I try this with my real database this doesn't work at all:

db.define_table(
    'position',
    Field('position_id', rname='cargo_id', type='id'),
    Field('title', rname="titulo", type="string"),
    rname='cargo'
)

db.define_table(
    'similar_positions',
    Field('similar_id', rname='cargo_similar_id', type='id'),
    Field('position_from', rname='cargo_id1', type=db.position),
    Field('position_to', rname='cargo_id2', type=db.position),
    rname='cargo_similar'
)

db.position.with_alias("position_from")
db.position.with_alias("position_to")
db.similar_positions.with_alias("hop")

hop = (
    (db.position_from.id == db.hop.position_from) & 
    (db.position_to.id == db.hop.position_to)
)

print db(hop)._select(db.position_from.title, db.position_to.title)

The sql generated is:

SELECT 
    position_from.titulo, position_to.titulo
FROM
    cargo,
    cargo_similar AS hop,
    cargo AS position_from,
    cargo AS position_to
WHERE
    ((cargo.cargo_id = hop.cargo_id1)
        AND (cargo.cargo_id = hop.cargo_id2));

which fails to use the aliases and ends up not producing the inner join I want (it actually returns all pairs of position_from and position_to, regardless of whether there is a hop connecting them or not).

I can't spot the difference between the two snippets except for the names of the tables and fields. What am I doing wrong?

Thanks for your time.
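
A hedged side note, not an official answer: with_alias() returns the aliased Table object, so capturing the returned objects and building the query from them (rather than re-reading db.position_from from db) may keep the rname-based table from leaking into the FROM clause:

position_from = db.position.with_alias('position_from')
position_to = db.position.with_alias('position_to')
hop = db.similar_positions.with_alias('hop')

query = ((position_from.position_id == hop.position_from) &
         (position_to.position_id == hop.position_to))
print db(query)._select(position_from.title, position_to.title)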

Using pydal to connect to an app's database in standalone mode, but it can't open the database file

from pydal import DAL, Field
db = DAL('sqlite://storage.sqlite', folder=r'C:...\web2py_new\web2py_new\applications\my_application_name\databases', auto_import=True)
db.commit()


Gives the following error:

Traceback (most recent call last):
  File "C:\Users\Ron\AppData\Local\Enthought\Canopy\User\lib\site-packages\pydal-15.02.27-py2.7.egg\pydal\base.py", line 434, in __init__
    self._adapter = ADAPTERS[self._dbname](**kwargs)
  File "C:\Users\Ron\AppData\Local\Enthought\Canopy\User\lib\site-packages\pydal-15.02.27-py2.7.egg\pydal\adapters\base.py", line 54, in __call__
    obj = super(AdapterMeta, cls).__call__(*args, **kwargs)
  File "C:\Users\Ron\AppData\Local\Enthought\Canopy\User\lib\site-packages\pydal-15.02.27-py2.7.egg\pydal\adapters\sqlite.py", line 78, in __init__
    if do_connect: self.reconnect()
  File "C:\Users\Ron\AppData\Local\Enthought\Canopy\User\lib\site-packages\pydal-15.02.27-py2.7.egg\pydal\connection.py", line 99, in reconnect
    self.connection = f()
  File "C:\Users\Ron\AppData\Local\Enthought\Canopy\User\lib\site-packages\pydal-15.02.27-py2.7.egg\pydal\adapters\sqlite.py", line 76, in connector
    return self.driver.Connection(dbpath, **driver_args)
OperationalError: unable to open database file
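
Two hedged things to check with this setup: in a plain (non-raw) string, Windows backslashes can be swallowed as escape sequences (e.g. \a in \applications), and sqlite reports exactly this OperationalError when the folder does not exist. A minimal sketch:

import os
from pydal import DAL

# hypothetical path; use a raw string so the backslashes survive
folder = r'C:\web2py_new\applications\my_application_name\databases'
assert os.path.isdir(folder), 'the databases folder must already exist'
db = DAL('sqlite://storage.sqlite', folder=folder, auto_import=True)
db.commit()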

rname support for import_from_csv_file

I'd like to import a CSV containing the real database field names (dumped outside web2py).

db.define_table('test_1',
    Field('name', rname='firstname'))

import os
file_path = os.path.join(request.folder, 'test_1.csv')
db.test_1.import_from_csv_file(open(file_path, 'r'))
print db(db.test_1).select()

which prints:

test_1.id,test_1.name
1,<NULL>
2,<NULL>

The content of test_1.csv is

"firstname"
Paolo
Luca

Unless the field names are explicitly mapped, rname is not supported.
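
Until rname is honored, a hedged workaround sketch: rewrite the CSV header from the database (rname) column names back to the pydal field names before importing. The mapping below is hypothetical; adjust it to your schema.

import csv
import tempfile

rname_to_field = {'firstname': 'name'}

with open(file_path, 'r') as src:
    reader = csv.reader(src)
    header = [rname_to_field.get(col, col) for col in next(reader)]
    rows = list(reader)

with tempfile.TemporaryFile(mode='w+') as dst:
    writer = csv.writer(dst)
    writer.writerow(header)
    writer.writerows(rows)
    dst.seek(0)
    db.test_1.import_from_csv_file(dst)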

complex orderby doesn't work on datastore

db(db.tt.id > 0).select(orderby=db.tt.aa | db.tt.id)

raises

Traceback (most recent call last):
  File "tests/nosql.py", line 300, in testRun
    self.assertEqual(db(db.tt.id > 0).select(orderby=db.tt.aa | db.tt.id)[0].aa, '3')
  File "pydal/objects.py", line 2093, in select
    return adapter.select(self.query,fields,attributes)
  File "pydal/adapters/google_adapters.py", line 462, in select
    (items, tablename, fields) = self.select_raw(query,fields,attributes)
  File "pydal/adapters/google_adapters.py", line 420, in select_raw
    _order = orders.get(order, make_order(order))
  File "pydal/adapters/google_adapters.py", line 418, in make_order
    return  (desc and  -getattr(tableobj, s)) or getattr(tableobj, s)
AttributeError: type object 'tt' has no attribute 'id'

db(db.tt.id > 0).select(orderby=~db.tt.aa | db.tt.id)

raises

Traceback (most recent call last):
  File "tests/nosql.py", line 300, in testRun
    self.assertEqual(db(db.tt.id > 0).select(orderby=~db.tt.aa | db.tt.id)[0].aa, '3')
  File "pydal/objects.py", line 2093, in select
    return adapter.select(self.query,fields,attributes)
  File "pydal/adapters/google_adapters.py", line 462, in select
    (items, tablename, fields) = self.select_raw(query,fields,attributes)
  File "pydal/adapters/google_adapters.py", line 410, in select_raw
    orderby = self.expand(orderby)
  File "pydal/adapters/google_adapters.py", line 237, in expand
    return expression.op(expression.first, expression.second)
  File "pydal/adapters/google_adapters.py", line 304, in COMMA
    return '%s, %s' % (first.name,second.name)
AttributeError: 'Expression' object has no attribute 'name'

backport storage

I'd like to discuss how Storage can be backported into pydal.
The main reason is to use it as a superclass for Row.
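
For reference, a minimal sketch of web2py's Storage idea (illustrative, not necessarily the exact class to backport): a dict whose keys are also reachable as attributes, with missing keys yielding None.

class Storage(dict):
    # Attribute access falls back to dict access.
    def __getattr__(self, key):
        return self.get(key)
    def __setattr__(self, key, value):
        self[key] = value
    def __delattr__(self, key):
        del self[key]

row = Storage(name='Paolo')
assert row.name == 'Paolo'
assert row.missing is None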

datetime not json serializable

Datetime objects in pyDAL are not JSON serializable. In a previous version of web2py's DAL, JSON serialization worked fine.

I use version 15.3 of pyDAL with Python 2.7 on Ubuntu 14.04.2 LTS.

Here is a small program which demonstrates the issue:

from pydal import DAL, Field
from datetime import datetime

db = DAL('sqlite://storage.db', folder="test_databases")
db.define_table('test_table', Field('date_field', 'datetime'))

db.test_table.insert(date_field=datetime.now())

rows = db().select(db.test_table.ALL)
print(rows.as_json())

Which gives this output on my machine:

Traceback (most recent call last):
  File "db_text.py", line 13, in <module>
    print(rows.as_json())
  File "/usr/local/lib/python2.7/dist-packages/pydal/objects.py", line 2741, in as_json
    return json.dumps(items)
  File "/usr/lib/python2.7/json/__init__.py", line 243, in dumps
    return _default_encoder.encode(obj)
  File "/usr/lib/python2.7/json/encoder.py", line 207, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode
    return _iterencode(o, 0)
  File "/usr/lib/python2.7/json/encoder.py", line 184, in default
    raise TypeError(repr(o) + " is not JSON serializable")
TypeError: datetime.datetime(2015, 3, 27, 13, 37, 7) is not JSON serializable
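
Until this is fixed, a hedged workaround using the standard json module directly (rows.as_list() is existing pydal API; the default hook below is ours):

import json
import datetime

def jsonify(obj):
    # Stringify the date/time values the stock encoder cannot handle.
    if isinstance(obj, (datetime.datetime, datetime.date, datetime.time)):
        return obj.isoformat()
    raise TypeError(repr(obj) + " is not JSON serializable")

print(json.dumps(rows.as_list(), default=jsonify))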

Rows.as_json failing with default JSON

Rows.as_json fails with the default json module (Python 2.7) because it doesn't convert datetimes into strings.

There is currently no option to tell Rows.as_json() to convert datetimes into strings, like:

jsonstr = row.as_json(datetime_to_str=True)

...and it defaults to False:

def as_dict(self, datetime_to_str=False, custom_types=None):

That won't work with standard json.dumps.

However, even if that were overridable: datetime_to_str is not propagated through the recursion in Row.as_dict when v is a Row (e.g. as the result of a join):

        elif isinstance(v,Row):
            d[k]=v.as_dict()

Should be:

        elif isinstance(v,Row):
            d[k]=v.as_dict(datetime_to_str=datetime_to_str,
                           custom_types=custom_types,
                           )

So, with the standard json.dumps as serializer, Rows.as_json currently does not work.

mongo db warning

On travis-ci I noticed the following:

pydal/adapters/mongo.py:365: DeprecationWarning: The safe parameter is deprecated. Please use write concern options instead.
  ctable.insert(values, safe=safe)

test on appveyor

To my knowledge, appveyor is the only online CI service that has a Windows environment readily available, and it ships with several versions of MSSQL installed.

I'd like to set up CI on that too; after a bit of trial and error I got it working (see https://github.com/niphlod/pydal/tree/tests/appveyor). The rationale behind this is that I can't keep up with being the only one testing pydal commits on MSSQL, and I'd really like tests to keep passing and MSSQL to stay among the "officially supported and battle-tested" backends.
Of course, integration with coveralls is problematic, but until something better comes along (travis-ci started looking into Windows environments more than a year ago and still has no ETA), at least we'd have MSSQL tested.
appveyor isn't bad at all, but of course it isn't as fast as dockerized travis-ci.
Results look like https://ci.appveyor.com/project/niphlod/pydal/build/1.0.7, and badges are available, with notifications too.
What do you think?

Running tests on the local machine

Initially reported on the forum: https://groups.google.com/forum/#!topic/web2py-developers/tVZ-jYKgEF0
I recently cloned the pydal repository and imported it into Eclipse (Luna).
I went ahead, right-clicked on the "tests" folder, and selected the "Run As > Python unit-test" option.
The output I see in the Eclipse console is below.

Finding files... done.
Importing test modules ... Testing against sqlite engine (sqlite:memory)
Testing against sqlite engine (sqlite:memory)
done.


======================================================================
ERROR: testRun (tests.nosql.TestImportExportUuidFields)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\code\git\pydal\tests\nosql.py", line 774, in testRun
    db.import_from_csv_file(stream)
  File "D:\code\git\pydal\pydal\base.py", line 1125, in import_from_csv_file
    *args, **kwargs)
  File "D:\code\git\pydal\pydal\objects.py", line 969, in import_from_csv_file
    curr_id = self.insert(**dict(items))
  File "D:\code\git\pydal\pydal\objects.py", line 736, in insert
    ret =  self._db._adapter.insert(self, self._listify(fields))
  File "D:\code\git\pydal\pydal\adapters\base.py", line 714, in insert
    raise e
IntegrityError: foreign key constraint failed

======================================================================
ERROR: testInsert (tests.nosql.TestRNameFields)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\code\git\pydal\tests\nosql.py", line 1186, in testInsert
    self.assertEqual(isinstance(db.tt.insert(aa='1'), long), True)
  File "D:\code\git\pydal\pydal\objects.py", line 736, in insert
    ret =  self._db._adapter.insert(self, self._listify(fields))
  File "D:\code\git\pydal\pydal\adapters\base.py", line 714, in insert
    raise e
OperationalError: near "very": syntax error

======================================================================
ERROR: testRun (tests.nosql.TestRNameFields)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\code\git\pydal\tests\nosql.py", line 1117, in testRun
    self.assertEqual(isinstance(db.tt.insert(aa='x'), long), True)
  File "D:\code\git\pydal\pydal\objects.py", line 736, in insert
    ret =  self._db._adapter.insert(self, self._listify(fields))
  File "D:\code\git\pydal\pydal\adapters\base.py", line 714, in insert
    raise e
OperationalError: near "very": syntax error

======================================================================
ERROR: testSelect (tests.nosql.TestRNameFields)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\code\git\pydal\tests\nosql.py", line 1003, in testSelect
    Field('rating', 'integer', rname=rname2, default=2)
  File "D:\code\git\pydal\pydal\base.py", line 821, in define_table
    table = self.lazy_define_table(tablename,*fields,**args)
  File "D:\code\git\pydal\pydal\base.py", line 858, in lazy_define_table
    polymodel=polymodel)
  File "D:\code\git\pydal\pydal\adapters\base.py", line 458, in create_table
    self.create_sequence_and_triggers(query,table)
  File "D:\code\git\pydal\pydal\adapters\base.py", line 1299, in create_sequence_and_triggers
    self.execute(query)
  File "D:\code\git\pydal\pydal\adapters\base.py", line 1318, in execute
    return self.log_execute(*a, **b)
  File "D:\code\git\pydal\pydal\adapters\base.py", line 1312, in log_execute
    ret = self.cursor.execute(command, *a[1:], **b)
OperationalError: near "from": syntax error



======================================================================
ERROR: testSelect (tests.nosql.TestRNameTable)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\code\git\pydal\tests\nosql.py", line 907, in testSelect
    rname=rname
  File "D:\code\git\pydal\pydal\base.py", line 821, in define_table
    table = self.lazy_define_table(tablename,*fields,**args)
  File "D:\code\git\pydal\pydal\base.py", line 858, in lazy_define_table
    polymodel=polymodel)
  File "D:\code\git\pydal\pydal\adapters\base.py", line 458, in create_table
    self.create_sequence_and_triggers(query,table)
  File "D:\code\git\pydal\pydal\adapters\base.py", line 1299, in create_sequence_and_triggers
    self.execute(query)
  File "D:\code\git\pydal\pydal\adapters\base.py", line 1318, in execute
    return self.log_execute(*a, **b)
  File "D:\code\git\pydal\pydal\adapters\base.py", line 1312, in log_execute
    ret = self.cursor.execute(command, *a[1:], **b)
OperationalError: near "very": syntax error



----------------------------------------------------------------------
Ran 88 tests in 1.907s

FAILED (errors=5, skipped=3)
---------------------------------------------------------------------------------------

smart_query() "in" doesn't work with the ID field type

Why?

Are there any considerations behind this (different backends not working the same), or is it just an oversight?

I can do this in PostgreSQL:

SELECT id 
   FROM table_name
 WHERE id in (1, 2, 3)

And even with the DAL:

db(db.table_name.id.belongs([1, 2, 3])).select()
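
For reference, a hedged sketch of the failing call next to the working equivalent (smart_query takes a list of tables/fields and a free-text query string; table_name is from the example above):

# reported to fail for the id field type:
rows = db.smart_query([db.table_name], 'table_name.id in (1, 2, 3)').select()

# works:
rows = db(db.table_name.id.belongs([1, 2, 3])).select()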

nosql not operator

On GAE the not operator is not implemented; on MongoDB it doesn't work for expressions such as ~(db.tt.aa > 2) (it throws cmd failed: unknown top level operator: $not).
However, under some constraints it should be feasible to convert the expression into (db.tt.aa <= 2) for both engines. Can De Morgan's laws help us in such situations? Do you see any drawbacks? A rewriting sketch follows.
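
A minimal rewriting sketch (illustrative, not pydal's implementation; a query is modeled here as a simple (op, left, right) triple):

def negate(node):
    op, left, right = node
    flipped = {'gt': 'le', 'le': 'gt', 'lt': 'ge', 'ge': 'lt',
               'eq': 'ne', 'ne': 'eq'}
    if op == 'and':
        # De Morgan: NOT (a AND b) -> (NOT a) OR (NOT b)
        return ('or', negate(left), negate(right))
    if op == 'or':
        # De Morgan: NOT (a OR b) -> (NOT a) AND (NOT b)
        return ('and', negate(left), negate(right))
    # comparison leaf: NOT (aa > 2) -> (aa <= 2)
    return (flipped[op], left, right)

assert negate(('gt', 'tt.aa', 2)) == ('le', 'tt.aa', 2)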

Table redefine with lazy_table

The following script:

from pydal import DAL, Field
db = DAL("sqlite:memory", lazy_tables=True,migrate=False)
db.define_table('t_a', Field('code'))
print 'code' in db.t_a
db.define_table('t_a', Field('code_a'), redefine=True)
print 'code' in db.t_a
print 'code_a' in db.t_a

prints:

True
True
False

while with lazy_tables=False it prints:

True
False
True

Use domains for fields

I don't think it would be very difficult to implement the use of domains in field definitions.

A domain is a user-defined custom data type. It is used to define the format and range of a field.

Right now it can be emulated with a dictionary:

person_name = dict(
    type='string',
    length=100,
    requires=IS_NOT_EMPTY(),
    # etc.
)

db.define_table('my_table',
    Field('name', **person_name)
)

but the proposal is to make it more explicit:

person_name = Domain(
    type='string',
    length=100,
    requires=IS_NOT_EMPTY(),
    # etc.
)

db.define_table('my_table',
    Field('name', domain=person_name)
)

db.define_table('other_table',
    Field('other_name', domain=person_name)
)

This is very convenient when you want to change the data type of many fields at once.
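
A hedged sketch of how a Domain could be implemented on top of today's Field (hypothetical API, not existing pydal; requires-style validators omitted since they live in web2py):

from pydal import Field

class Domain(object):
    """A reusable bundle of Field keyword arguments."""
    def __init__(self, **attributes):
        self.attributes = attributes

    def field(self, name, **overrides):
        # Build a Field from the domain, allowing per-field overrides.
        kwargs = dict(self.attributes)
        kwargs.update(overrides)
        return Field(name, **kwargs)

person_name = Domain(type='string', length=100)
db.define_table('my_table', person_name.field('name'))
db.define_table('other_table', person_name.field('other_name'))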

Migrations as a separate engine

Originally proposed by @michele-comitini

Some goals in chronological order could be:

  • .table files with more attributes and hints (for indexes and other constraints).
  • history
  • versioning and conflict management (to help manage merging diverging histories)

Update AUTHORS

It would be nice to move the thank-you notes out of base.py and use AUTHORS for this purpose.

row representation

print row

used to print:

<Row {field: value, ...}>

and now prints:

<Row <pydal.objects.Row object at 0x108f0ca50>>

Avoid code duplication in tests

Right now a lot of tests are duplicated between:

  • tests/sql.py
  • tests/nosql.py

Would be nice to re-organize the tests to have:

  • Tests that we run on all the drivers, in a single file like base.py
  • SQL specific tests into sql.py
  • noSQL specific tests into nosql.py

It would also be nice to separate tests by the specific things they cover; for example, I moved validation tests into validation.py and will do the same for caching. A sketch of the shared-base idea follows.
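
A hedged sketch of that structure (module and class names are illustrative):

import unittest
from pydal import DAL, Field

class CommonTests(object):
    # Driver-agnostic tests live here; concrete classes supply DB_URI.
    DB_URI = None

    def test_insert_and_select(self):
        db = DAL(self.DB_URI)
        db.define_table('tt', Field('aa'))
        db.tt.insert(aa='x')
        self.assertEqual(db(db.tt).select().first().aa, 'x')

class TestSqlite(CommonTests, unittest.TestCase):
    DB_URI = 'sqlite:memory'

if __name__ == '__main__':
    unittest.main()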

google datastore error with NoneType in query

Query: db(db.tt.id).count()

raises

Traceback (most recent call last):
  File "tests/nosql.py", line 355, in testRun
    db(db.tt.id).count()
  File "pydal/objects.py", line 2083, in count
    return db._adapter.count(self.query,distinct)
  File "pydal/adapters/google_adapters.py", line 481, in count
    (items, tablename, fields) = self.select_raw(query,count_only=True)
  File "pydal/adapters/google_adapters.py", line 353, in select_raw
    filters = self.expand(query)
  File "pydal/adapters/google_adapters.py", line 239, in expand
    return expression.op(expression.first)
  File "pydal/adapters/google_adapters.py", line 286, in NE
    return self.gaef(first,'!=',second)
  File "pydal/adapters/google_adapters.py", line 274, in gaef
    value = self.represent(second,first.type,first._tablename)
  File "pydal/adapters/google_adapters.py", line 157, in represent
    return ndb.Key(tablename, long(obj))
TypeError: long() argument must be a string or a number, not 'NoneType'

Allow working with encodings other than UTF-8

I have an application that uses a SQL Server database, which is part of another system. The database is latin1.

When I create the DAL object I pass the parameter db_codec='latin1'.

This solves the conversion to UTF-8 when I read data, but writes still go out in UTF-8.

I worked around it as follows: when I create a new record from a SQLFORM I use dbio=False, then convert the form's dictionary to latin1:

for k in form.vars:
    if isinstance(form.vars[k], (unicode, str)):
        form.vars[k] = form.vars[k].decode('utf8').encode('latin1')

db.mytable.insert(**dict(form.vars))

So, for a very simple task, you have to write a lot of code.

The proposal is to let each boundary apply the encoding it needs:

Database <--- encoding1 (default utf8) ---> DAL (utf8) <--- encoding2 (default utf8) ---> Application
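
Until then, a hedged helper that generalizes the workaround above (Python 2, as in the report; the function name and parameters are ours):

def recode_vars(vars, app_codec='utf8', db_codec='latin1'):
    # Convert every string value from the application encoding to the
    # database encoding before an insert with dbio=False.
    for k in vars:
        v = vars[k]
        if isinstance(v, unicode):
            vars[k] = v.encode(db_codec)
        elif isinstance(v, str):
            vars[k] = v.decode(app_codec).encode(db_codec)
    return vars

db.mytable.insert(**dict(recode_vars(form.vars)))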
