web2py / pydal
A pure Python Database Abstraction Layer
License: BSD 3-Clause "New" or "Revised" License
Replace the current description: "the web2py abstraction layer without web2py"
with the more descriptive: "This is web2py's Database Abstraction Layer. It does not require web2py and can be used with any Python program."
Dear all,
The import_from_csv() code contains the following lines:

# create new id until we get the same as old_id+offset
while curr_id < csv_id+id_offset[self._tablename]:
    self._db(self._db[self][colnames[cid]] == curr_id).delete()

Inserted records can thus be removed several times before they reach their final id. If the record contains any upload field, each removal triggers delete_uploaded_files(), which deletes the files when autodelete is set.
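A toy model of the loop above (my simplification, not pydal code) shows how a single CSV row can be deleted several times before its id lines up with old_id+offset:

```python
# Toy model (assumption: simplified from the import loop above).
# Each pass deletes the freshly inserted record and re-inserts it,
# so the auto id advances by one until it reaches the target.
curr_id, target_id = 1, 5
delete_count = 0
while curr_id < target_id:
    delete_count += 1   # stands in for .delete(), which may also trigger
                        # delete_uploaded_files() on upload fields
    curr_id += 1        # the re-insert gets the next auto id
```

With a gap of 4 between the inserted id and the target id, the record (and any autodelete uploads) would be removed four times before settling.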
There are references to sqlite- and mysql-specific checks; those checks should be delegated to the correct driver. While this could be good for performance reasons, it is very bad for code maintainability and should be corrected.
see for instance:
https://github.com/web2py/pydal/blob/master/pydal/adapters/base.py#L404
https://github.com/web2py/pydal/blob/master/pydal/adapters/base.py#L433
Hi All,
I tried to set debug=True in the DAL at runtime, but it doesn't seem to work. I think it could be really useful to have switchable debugging in the DAL, foremost in those situations in which you can't tell whether the bug is in your code or in your data.
Thank you for your time!
As @massimo noticed, we missed this: web2py/web2py#395
I think we should start a discussion about this. My points:
What do you think?
The boolean data type is mapped to bit or boolean for all MSSQL adapters except MSSQL2Adapter.
Is there a special reason?
As discussed in web2py-developers (such as here), we should avoid any table or field name that is not a valid Python identifier. cleanup() needs a revision, and tests should be added to avoid regressions. We have rname for legacy and/or funny names.
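A minimal sketch of what such a check could look like (a hypothetical helper, not the actual cleanup() implementation), assuming Python 3's str.isidentifier:

```python
import keyword

def check_name(name):
    # Hypothetical validation: accept only valid, non-keyword Python
    # identifiers; legacy/funny names would go through rname instead.
    if not name.isidentifier() or keyword.iskeyword(name):
        raise SyntaxError('invalid table/field name: %r' % name)
    return name
```

This rejects names like 'first name' or 'class' while leaving plain identifiers such as 'first_name' untouched.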
I'm wondering whether Python 2.5 is still considered supported by pydal.
The readme should explicitly state the Python versions considered stable and 'supported', the ones marked as 'experimental' (I guess Jython?), and those not supported at all (Python 3.x?).
Details in this discussion thread: https://groups.google.com/forum/#!topic/web2py/3hY4niQu-jY.
The request is to use Postgres's jsonb support via pydal.
Taking the effort of posting here what was reported on web2py. I feel that those DAL bugs are never treated as a priority unless posted here...
In the postgres.py adapter, the ILIKE definition has a typo at line 208:
instead of returning
return '(%s ILIKE %s)' % (
self.CAST(args[0], 'CHAR(%s)' % first.length), args[1])
it is written as
return '(%s LIKE %s)' % (
self.CAST(args[0], 'CHAR(%s)' % first.length), args[1])
This causes ILIKE queries on some field types to be executed as LIKE queries instead.
As a result, when searching (for example) inside a field of type list:string
the results are not as expected.
query = (db.metadatatable.pl1_host_list.ilike('%Amanda%'))
rows = db(query).select()
Returns
ENBT_15004_C ['Amanda Salas', 'AJ Gibson']
and
query = (db.metadatatable.pl1_host_list.ilike('%Amanda%'))
rows = db(query).select()
Returns
HTDL_14125_D ['amanda salas']
both queries should return
ENBT_15004_C ['Amanda Salas', 'AJ Gibson']
HTDL_14125_D ['amanda salas']
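To see why the typo matters, here is a rough Python model of LIKE vs ILIKE matching (illustrative only, not pydal code): the only difference between the two is case sensitivity, which is exactly what the list:string search loses.

```python
import re

def sql_like(pattern, value, case_sensitive=True):
    # Translate SQL wildcards to a regex: % -> .* and _ -> .
    parts = []
    for ch in pattern:
        if ch == '%':
            parts.append('.*')
        elif ch == '_':
            parts.append('.')
        else:
            parts.append(re.escape(ch))
    flags = 0 if case_sensitive else re.IGNORECASE
    return re.fullmatch(''.join(parts), value, flags) is not None

# LIKE semantics miss the lower-case row that ILIKE should find:
sql_like('%Amanda%', "['amanda salas']")                        # False
sql_like('%Amanda%', "['amanda salas']", case_sensitive=False)  # True
```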
Hi there. I'm trying to use pydal with an existing set of tables on my database, and I'm hitting a wall about how to deal with two instances of the same table appearing in the same query.
First a working example. Imagine I have two tables representing a directed graph:
from pydal import *
db = DAL('mysql+mysqldb://someuser:somepassword@somehost:3306/graph')
db.define_table(
'node',
Field('name', type="string")
)
db.define_table(
'edge',
Field('node_from', type=db.node),
Field('node_to', type=db.node)
)
Then I want the simplest query, just to print the names of the nodes that are connected by an edge:
db.node.with_alias('start')
db.node.with_alias('end')
db.edge.with_alias('hop')
hop_join = (db.start.id == db.hop.node_from) & (db.end.id == db.hop.node_to)
print db(hop_join)._select(db.start.name, db.end.name)
The obtained query is:
SELECT
start.name, end.name
FROM
graph.some_rname_for_node AS start,
graph.some_rname_for_node AS end,
graph.some_rname_for_edges AS hop
WHERE
((start.id = hop.node_from)
AND (end.id = hop.node_to));
Which is exactly what I want. This works perfectly.
But for some reason, when I try this with my real database this doesn't work at all:
db.define_table(
'position',
Field('position_id', rname='cargo_id', type='id'),
Field('title', rname="titulo", type="string"),
rname='cargo'
)
db.define_table(
'similar_positions',
Field('similar_id', rname='cargo_similar_id', type='id'),
Field('position_from', rname='cargo_id1', type=db.position),
Field('position_to', rname='cargo_id2', type=db.position),
rname='cargo_similar'
)
db.position.with_alias("position_from")
db.position.with_alias("position_to")
db.similar_positions.with_alias("hop")
hop = (
(db.position_from.id == db.hop.position_from) &
(db.position_to.id == db.hop.position_to)
)
print db(hop)._select(db.position_from.title, db.position_to.title)
The sql generated is:
SELECT
position_from.titulo, position_to.titulo
FROM
cargo,
cargo_similar AS hop,
cargo AS position_from,
cargo AS position_to
WHERE
((cargo.cargo_id = hop.cargo_id1)
AND (cargo.cargo_id = hop.cargo_id2));
which fails to use the aliases and ends up not producing the inner join I want (it actually returns all pairs of position_from and position_to, regardless of whether there is a hop connecting them or not).
I can't spot the difference between the two snippets except for the names of the tables and fields. What am I doing wrong?
Thanks for your time.
from pydal import DAL, Field
db = DAL('sqlite://storage.sqlite', folder='C:...\web2py_new\web2py_new\applications\my_application_name\databases', auto_import=True)
db.commit()
Traceback (most recent call last):
File "C:\Users\Ron\AppData\Local\Enthought\Canopy\User\lib\site-packages\pydal-15.02.27-py2.7.egg\pydal\base.py", line 434, in __init__
self._adapter = ADAPTERS[self._dbname](...)
File "C:\Users\Ron\AppData\Local\Enthought\Canopy\User\lib\site-packages\pydal-15.02.27-py2.7.egg\pydal\adapters\base.py", line 54, in __call__
obj = super(AdapterMeta, cls).__call__(*args, **kwargs)
File "C:\Users\Ron\AppData\Local\Enthought\Canopy\User\lib\site-packages\pydal-15.02.27-py2.7.egg\pydal\adapters\sqlite.py", line 78, in __init__
if do_connect: self.reconnect()
File "C:\Users\Ron\AppData\Local\Enthought\Canopy\User\lib\site-packages\pydal-15.02.27-py2.7.egg\pydal\connection.py", line 99, in reconnect
self.connection = f()
File "C:\Users\Ron\AppData\Local\Enthought\Canopy\User\lib\site-packages\pydal-15.02.27-py2.7.egg\pydal\adapters\sqlite.py", line 76, in connector
return self.driver.Connection(dbpath, **driver_args)
OperationalError: unable to open database file
following this thread https://groups.google.com/forum/#!topic/web2py/2i9zfAzG2s8
db.define_table('person',
Field('name'),
format='%(name)s')
db.define_table('thing',
Field('name'),
Field('owner_id', 'reference person'),
format='%(name)s')
there's no 'represent' function attached to db.thing.owner_id, so serializers (such as SQLTABLE) can't fetch the correct representation... where did it go?
I'd like to import a CSV containing the real database field names (dumped outside web2py).
db.define_table('test_1',
Field('name', rname='firstname'))
import os
file_path = os.path.join(request.folder, 'test_1.csv')
db.test_1.import_from_csv_file(open(file_path, 'r'))
print db(db.test_1).select()
test_1.id,test_1.name
1,<NULL>
2,<NULL>
The content of test_1.csv is
"firstname"
Paolo
Luca
Without explicitly mapping the field names, rname is not supported.
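Until that is fixed, one possible workaround is to rewrite the CSV header so that the rname columns carry the pydal field names before calling import_from_csv_file (the helper below is mine, not a pydal API):

```python
import csv
import io

def remap_csv_header(csv_text, rname_to_name):
    # Replace database column names (rnames) in the header row with the
    # pydal field names the importer matches on.
    rows = list(csv.reader(io.StringIO(csv_text)))
    rows[0] = [rname_to_name.get(col, col) for col in rows[0]]
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

fixed = remap_csv_header('"firstname"\nPaolo\nLuca\n', {'firstname': 'name'})
# db.test_1.import_from_csv_file(io.StringIO(fixed)) would then fill 'name'
```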
db(db.tt.id > 0).select(orderby=db.tt.aa | db.tt.id)
raises
Traceback (most recent call last):
File "tests/nosql.py", line 300, in testRun
self.assertEqual(db(db.tt.id > 0).select(orderby=db.tt.aa | db.tt.id)[0].aa, '3')
File "pydal/objects.py", line 2093, in select
return adapter.select(self.query,fields,attributes)
File "pydal/adapters/google_adapters.py", line 462, in select
(items, tablename, fields) = self.select_raw(query,fields,attributes)
File "pydal/adapters/google_adapters.py", line 420, in select_raw
_order = orders.get(order, make_order(order))
File "pydal/adapters/google_adapters.py", line 418, in make_order
return (desc and -getattr(tableobj, s)) or getattr(tableobj, s)
AttributeError: type object 'tt' has no attribute 'id'
db(db.tt.id > 0).select(orderby=~db.tt.aa | db.tt.id)
raises
Traceback (most recent call last):
File "tests/nosql.py", line 300, in testRun
self.assertEqual(db(db.tt.id > 0).select(orderby=~db.tt.aa | db.tt.id)[0].aa, '3')
File "pydal/objects.py", line 2093, in select
return adapter.select(self.query,fields,attributes)
File "pydal/adapters/google_adapters.py", line 462, in select
(items, tablename, fields) = self.select_raw(query,fields,attributes)
File "pydal/adapters/google_adapters.py", line 410, in select_raw
orderby = self.expand(orderby)
File "pydal/adapters/google_adapters.py", line 237, in expand
return expression.op(expression.first, expression.second)
File "pydal/adapters/google_adapters.py", line 304, in COMMA
return '%s, %s' % (first.name,second.name)
AttributeError: 'Expression' object has no attribute 'name'
I'd like to discuss how Storage can be backported into pydal.
The main reason is to use it as a superclass for Row.
For backward compatibility one should be able to do
db.Field alias for Field
db.Table alias for Table
The methods LIKE, STARTSWITH, ENDSWITH, CONTAINS are defined twice.
For each of them, which is the one to delete and which is the one to use?
To be published on readthedocs
Datetime objects in pyDAL are not JSON serializable. In a previous version of web2py's DAL, JSON serialization worked fine.
I use version 15.3 of pyDAL with Python 2.7 on Ubuntu 14.04.2 LTS.
Here is a small program which demonstrates the issue:
from pydal import DAL, Field
from datetime import datetime
db = DAL('sqlite://storage.db', folder="test_databases")
db.define_table('test_table', Field('date_field', 'datetime'))
db.test_table.insert(date_field=datetime.now())
rows = db().select(db.test_table.ALL)
print(rows.as_json())
Which gives this output on my machine:
Traceback (most recent call last):
File "db_text.py", line 13, in <module>
print(rows.as_json())
File "/usr/local/lib/python2.7/dist-packages/pydal/objects.py", line 2741, in as_json
return json.dumps(items)
File "/usr/lib/python2.7/json/__init__.py", line 243, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python2.7/json/encoder.py", line 184, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: datetime.datetime(2015, 3, 27, 13, 37, 7) is not JSON serializable
Rows.as_json fails with default JSON (Python 2.7) because it doesn't convert datetimes into strings.
There is currently no option to tell Rows.as_json() to convert datetimes into strings like:
jsonstr = row.as_json(datetime_to_str=True)
...and it defaults to False:
def as_dict(self, datetime_to_str=False, custom_types=None):
That won't work with standard json.dumps.
However, even if that were overridable, datetime_to_str is not propagated through the recursion in Row.as_dict when v is a Row (e.g. as the result of a join):
elif isinstance(v,Row):
d[k]=v.as_dict()
Should be:
elif isinstance(v,Row):
d[k]=v.as_dict(datetime_to_str=datetime_to_str,
custom_types=custom_types,
)
So, with the standard json.dumps as serializer, Rows.as_json currently does not work.
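A sketch of a stdlib-only workaround, assuming you serialize the as_dict/as_list output yourself: json.dumps accepts a `default` hook for any type the encoder rejects, so datetimes can be turned into ISO strings without touching pydal.

```python
import json
from datetime import date, datetime

def json_default(obj):
    # Fallback for values the stdlib encoder rejects; dates become ISO strings.
    if isinstance(obj, (datetime, date)):
        return obj.isoformat()
    raise TypeError('%r is not JSON serializable' % obj)

record = {'id': 1, 'date_field': datetime(2015, 3, 27, 13, 37, 7)}
encoded = json.dumps(record, default=json_default)
# '{"id": 1, "date_field": "2015-03-27T13:37:07"}'
```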
On travis-ci I noticed the following:
pydal/adapters/mongo.py:365: DeprecationWarning: The safe parameter is deprecated. Please use write concern options instead.
ctable.insert(values, safe=safe)
To my knowledge, appveyor is the only online service that has a Windows environment readily available, and it ships with several versions of MSSQL installed.
I'd set up CI on that too; after a bit of trial and error I got it working (look at https://github.com/niphlod/pydal/tree/tests/appveyor). The rationale is that I can't keep up with being the only one testing pydal commits on MSSQL, but I'd really like the tests to keep passing and MSSQL to stay among the "officially supported and battle-tested" backends.
Of course integration with coveralls is problematic, but until something better comes up (travis-ci started to look into Windows environments more than a year ago and still has no ETA), at least we'd have MSSQL tested.
appveyor isn't bad at all, but of course it isn't as fast as dockerized travis-ci.
Results are like https://ci.appveyor.com/project/niphlod/pydal/build/1.0.7 and badges are available, with notifications too.
What do you think ?
Initially reported on the forum: https://groups.google.com/forum/#!topic/web2py-developers/tVZ-jYKgEF0
I recently cloned the pydal repository, and imported it into Eclipse (Luna).
Went ahead and right-clicked on the "tests" folder, and selected the "Run As > Python unit-test" option.
The output I see in the Eclipse console is below.
Finding files... done.
Importing test modules ... Testing against sqlite engine (sqlite:memory)
Testing against sqlite engine (sqlite:memory)
done.
======================================================================
ERROR: testRun (tests.nosql.TestImportExportUuidFields)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\code\git\pydal\tests\nosql.py", line 774, in testRun
db.import_from_csv_file(stream)
File "D:\code\git\pydal\pydal\base.py", line 1125, in import_from_csv_file
*args, **kwargs)
File "D:\code\git\pydal\pydal\objects.py", line 969, in import_from_csv_file
curr_id = self.insert(**dict(items))
File "D:\code\git\pydal\pydal\objects.py", line 736, in insert
ret = self._db._adapter.insert(self, self._listify(fields))
File "D:\code\git\pydal\pydal\adapters\base.py", line 714, in insert
raise e
IntegrityError: foreign key constraint failed
======================================================================
ERROR: testInsert (tests.nosql.TestRNameFields)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\code\git\pydal\tests\nosql.py", line 1186, in testInsert
self.assertEqual(isinstance(db.tt.insert(aa='1'), long), True)
File "D:\code\git\pydal\pydal\objects.py", line 736, in insert
ret = self._db._adapter.insert(self, self._listify(fields))
File "D:\code\git\pydal\pydal\adapters\base.py", line 714, in insert
raise e
OperationalError: near "very": syntax error
======================================================================
ERROR: testRun (tests.nosql.TestRNameFields)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\code\git\pydal\tests\nosql.py", line 1117, in testRun
self.assertEqual(isinstance(db.tt.insert(aa='x'), long), True)
File "D:\code\git\pydal\pydal\objects.py", line 736, in insert
ret = self._db._adapter.insert(self, self._listify(fields))
File "D:\code\git\pydal\pydal\adapters\base.py", line 714, in insert
raise e
OperationalError: near "very": syntax error
======================================================================
ERROR: testSelect (tests.nosql.TestRNameFields)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\code\git\pydal\tests\nosql.py", line 1003, in testSelect
Field('rating', 'integer', rname=rname2, default=2)
File "D:\code\git\pydal\pydal\base.py", line 821, in define_table
table = self.lazy_define_table(tablename,*fields,**args)
File "D:\code\git\pydal\pydal\base.py", line 858, in lazy_define_table
polymodel=polymodel)
File "D:\code\git\pydal\pydal\adapters\base.py", line 458, in create_table
self.create_sequence_and_triggers(query,table)
File "D:\code\git\pydal\pydal\adapters\base.py", line 1299, in create_sequence_and_triggers
self.execute(query)
File "D:\code\git\pydal\pydal\adapters\base.py", line 1318, in execute
return self.log_execute(*a, **b)
File "D:\code\git\pydal\pydal\adapters\base.py", line 1312, in log_execute
ret = self.cursor.execute(command, *a[1:], **b)
OperationalError: near "from": syntax error
======================================================================
ERROR: testSelect (tests.nosql.TestRNameTable)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\code\git\pydal\tests\nosql.py", line 907, in testSelect
rname=rname
File "D:\code\git\pydal\pydal\base.py", line 821, in define_table
table = self.lazy_define_table(tablename,*fields,**args)
File "D:\code\git\pydal\pydal\base.py", line 858, in lazy_define_table
polymodel=polymodel)
File "D:\code\git\pydal\pydal\adapters\base.py", line 458, in create_table
self.create_sequence_and_triggers(query,table)
File "D:\code\git\pydal\pydal\adapters\base.py", line 1299, in create_sequence_and_triggers
self.execute(query)
File "D:\code\git\pydal\pydal\adapters\base.py", line 1318, in execute
return self.log_execute(*a, **b)
File "D:\code\git\pydal\pydal\adapters\base.py", line 1312, in log_execute
ret = self.cursor.execute(command, *a[1:], **b)
OperationalError: near "very": syntax error
----------------------------------------------------------------------
Ran 88 tests in 1.907s
FAILED (errors=5, skipped=3)
---------------------------------------------------------------------------------------
Why?
Are there any considerations behind this (different backends don't work the same way), or is it just an oversight?
I can do that in PostgreSQL
SELECT id
FROM table_name
WHERE id in (1, 2, 3)
And even with DAL
db(db.table_name.id.belongs([1, 2, 3])).select()
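For reference, the SQL that belongs should generate works on SQLite as well (an illustrative stdlib check; the table name is taken from the example above):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE table_name (id INTEGER PRIMARY KEY)')
con.executemany('INSERT INTO table_name (id) VALUES (?)',
                [(i,) for i in range(1, 6)])
# The query db(db.table_name.id.belongs([1, 2, 3])).select() boils down to:
ids = [row[0] for row in
       con.execute('SELECT id FROM table_name WHERE id IN (1, 2, 3)')]
```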
On GAE the not operator is not implemented; on MongoDB it doesn't work for expressions such as ~(db.tt.aa > 2)
(it throws cmd failed: unknown top level operator: $not).
However, under some constraints it should be feasible to convert the expression into (db.tt.aa <= 2)
for both engines. Can De Morgan's laws help us in such situations? Do you see any drawbacks?
Seems quite dangerous.
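A sketch of the rewrite over a hypothetical tuple representation (not pydal's expression objects): comparisons flip to their complements, and AND/OR recurse via De Morgan's laws. One concrete danger for SQL backends: with NULLs and three-valued logic, NOT (a > 2) and (a <= 2) are not equivalent.

```python
# Hypothetical helper: push a NOT down into an expression tree instead of
# emitting NOT/$not. Expressions are tuples like ('>', field, value) or
# ('and', left, right).
COMPLEMENT = {'>': '<=', '>=': '<', '<': '>=', '<=': '>', '=': '!=', '!=': '='}

def negate(expr):
    op = expr[0]
    if op in COMPLEMENT:                      # flip the comparison
        return (COMPLEMENT[op],) + expr[1:]
    if op == 'and':                           # ~(a & b) == ~a | ~b
        return ('or', negate(expr[1]), negate(expr[2]))
    if op == 'or':                            # ~(a | b) == ~a & ~b
        return ('and', negate(expr[1]), negate(expr[2]))
    raise NotImplementedError(op)

negate(('>', 'tt.aa', 2))  # ('<=', 'tt.aa', 2)
```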
I would like to know if this is about DAL or mysql adapter only.
@mdipierro Any ideas?
The following:
from pydal import DAL, Field
db = DAL("sqlite:memory", lazy_tables=True,migrate=False)
db.define_table('t_a', Field('code'))
print 'code' in db.t_a
db.define_table('t_a', Field('code_a'), redefine=True)
print 'code' in db.t_a
print 'code_a' in db.t_a

prints:
True
True
False
while with lazy_tables=False it prints:
True
False
True
I do not think it's very difficult to implement the use of domains in the definition of fields.
A domain is a user-defined custom data type. It is used for defining the format and range of a field.
Currently it can be emulated with a dictionary:
person_name = dict(
type='string',
length=100,
requires=IS_NOT_EMPTY(),
etc,
etc
)
db.define_table('my_table',
Field('name', **dict(person_name))
)
but the proposal is to make it more explicit:
person_name = Domain(
type='string',
length=100,
requires=IS_NOT_EMPTY(),
etc,
etc
)
db.define_table('my_table',
Field('name', domain=person_name)
)
db.define_table('other_table',
Field('other_name', domain=person_name)
)
This is very convenient when you want to change the data types of many fields at once.
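Even before any pydal change, the idea can be approximated today; a Domain that subclasses dict (my sketch, not a pydal class) can be splatted into Field(...) and edited in one place:

```python
class Domain(dict):
    """A named, reusable bundle of Field keyword arguments (hypothetical)."""
    pass

person_name = Domain(type='string', length=100)

# Usage sketch: Field('name', **person_name) and Field('other_name', **person_name)
# would both pick up a single change to the domain:
person_name['length'] = 128
```

The explicit Domain class mainly buys readability over a plain dict; the splat-into-Field mechanics are identical.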
Using CASCADE or SET NULL does not matter: web2py does not remove the reference from the referencing collection or delete the reference...
More about this issue here: https://groups.google.com/forum/#!msg/web2py/0tVp5QVR1-8/8F4Xojdug7QJ
Originally proposed by @michele-comitini
Some goals in chronological order could be:
with the upgrade of web2py we lost this commit:
web2py/web2py@d00be02
It would be nice to move the thanks out of base.py
and use AUTHORS for this purpose.
print row
used to print <Row {field: value, ...}>
now prints:
<Row <pydal.objects.Row object at 0x108f0ca50>>
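The old behavior can be sketched in a few lines (illustrative, not the pydal source): repr should show the underlying mapping rather than the default object repr.

```python
class Row(dict):
    # Show the mapping, not <pydal.objects.Row object at 0x...>.
    def __repr__(self):
        return '<Row %s>' % dict.__repr__(self)

repr(Row(field='value'))  # "<Row {'field': 'value'}>"
```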
We need to add a caching class directly into tests.
Right now I see a lot of tests are duplicated in:
Would be nice to re-organize the tests to have:
base.py
sql.py
nosql.py
Would also be nice to separate tests depending on the specific things we test, for example I moved validation tests into validation.py
and will do the same for caching.
Query: db(db.tt.id).count()
raises
Traceback (most recent call last):
File "tests/nosql.py", line 355, in testRun
db(db.tt.id).count()
File "pydal/objects.py", line 2083, in count
return db._adapter.count(self.query,distinct)
File "pydal/adapters/google_adapters.py", line 481, in count
(items, tablename, fields) = self.select_raw(query,count_only=True)
File "pydal/adapters/google_adapters.py", line 353, in select_raw
filters = self.expand(query)
File "pydal/adapters/google_adapters.py", line 239, in expand
return expression.op(expression.first)
File "pydal/adapters/google_adapters.py", line 286, in NE
return self.gaef(first,'!=',second)
File "pydal/adapters/google_adapters.py", line 274, in gaef
value = self.represent(second,first.type,first._tablename)
File "pydal/adapters/google_adapters.py", line 157, in represent
return ndb.Key(tablename, long(obj))
TypeError: long() argument must be a string or a number, not 'NoneType'
I have an application that uses a SQL Server database which is part of another system. The database is latin1;
when I create the DAL object I pass the parameter db_codec='latin1'.
This solves the conversion to utf8 when I read data, but writes still happen in utf8.
I worked around it as follows: when creating a new record from a SQLFORM I use dbio=False, then convert the form dictionary to latin1:
for k in form.vars:
    if isinstance(form.vars[k], (unicode, str)):
        form.vars[k] = form.vars[k].decode('utf8').encode('latin1')
db.mytable.insert(**dict(form.vars))
So, for a very simple task you have to write a lot of code.
The proposal is to perform the encoding conversions at the edges, as necessary:
Database <---encoding1 (default utf8)---> DAL (utf8) <---encoding2 (default utf8)---> Application
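A sketch of the write-edge conversion (a hypothetical helper, using Python 3 types for illustration): text is re-encoded to the database's charset just before insert, so the application never has to do it by hand.

```python
def recode_record(record, db_encoding='latin1'):
    # Re-encode text values at the write edge for a legacy database.
    out = {}
    for key, value in record.items():
        if isinstance(value, bytes):      # raw utf-8 bytes, e.g. from a form
            value = value.decode('utf8').encode(db_encoding, 'replace')
        elif isinstance(value, str):      # decoded text
            value = value.encode(db_encoding, 'replace')
        out[key] = value
    return out

recode_record({'title': 'café'})  # {'title': b'caf\xe9'}
```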