berinhard / model_mommy
No longer maintained, please migrate to model_bakery
Home Page: http://model-bakery.readthedocs.org/
License: Other
mommy.make_recipe('dog', owner__name='John')
or
mommy.prepare_recipe('dog', owner__name='John')
Either call creates two owner objects: one for the query_lookup syntax and another for the one declared in the recipe.
I created a branch with a test case which reproduces the bug (topic/recipe_query_lookup)
With an integer field as primary key, model mommy sometimes generates negative numbers, and negative numbers seem to break Django's reverse() function.
Is this on purpose? If not, can we fix it? If so, is there a way to suppress negative integers for primary keys? Or is it actually a bug in Django's reverse() function?
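One possible workaround, sketched below, is to map a positive-only generator onto integer primary keys. This is a hedged sketch, not a model_mommy built-in; the function name and the type_mapping wiring follow the custom-generator workaround shown elsewhere in this thread.

```python
import random

# Assumption: a replacement generator that only emits positive integers, so
# Django's reverse() never receives a negative pk.
def gen_positive_integer(max_value=2 ** 31 - 1):
    """Return a random integer in [1, max_value]."""
    return random.randint(1, max_value)

# Hypothetical wiring, mirroring the type_mapping override used elsewhere:
#   mommy_obj.type_mapping[models.IntegerField] = gen_positive_integer
```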
Suppose that we have two models:
from django.db import models
class Bar(models.Model):
name = models.CharField(max_length=30)
class Foo(models.Model):
bar = models.ForeignKey(Bar)
Now, imagine that we have a single test method that needs an instance of Foo which references a Bar instance with a specific name value such as "Bob". In the current state of model_mommy, the way to achieve this is the code below:
bar = mommy.make_one(Bar, name='Bob')
foo = mommy.make_one(Foo, bar=bar)
I'm imagining an API similar to Django's queryset field lookups. Something like the following code:
foo = mommy.make_one(Foo, bar__name="Bob")
What do you think? I want to start working on this feature now...
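A minimal sketch of how such double-underscore kwargs could be separated follows. This mirrors Django's queryset kwarg convention; the helper name and behaviour are assumptions for illustration, not model_mommy internals.

```python
def split_lookups(attrs):
    """Separate plain attributes from related-field lookups like bar__name."""
    direct, related = {}, {}
    for key, value in attrs.items():
        if '__' in key:
            field, _, rest = key.partition('__')
            # Group nested lookups under their leading field name.
            related.setdefault(field, {})[rest] = value
        else:
            direct[key] = value
    return direct, related

# mommy could then build Bar from related['bar'] and assign it to Foo.
direct, related = split_lookups({'bar__name': 'Bob', 'other': 1})
```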
I've been thinking... why don't we use the same generator that we use for FileField for ImageField? I've implemented a proof of concept and it works, but the problem is that the project would gain a dependency on PIL. I have two points about this:
1 - This is a problem because PIL is a big Python package, and it's a dependency some users won't see the reason for at first glance;
2 - Since model_mommy is used only for development, I think this is acceptable.
Anyway, for the sake of documentation, I've used this workaround to prove my point:
from model_mommy import mommy, generators
mommy_obj = mommy.Mommy(ModelWithImageField)
mommy_obj.type_mapping[ImageField] = generators.gen_file_field
model_obj = mommy_obj.make_one()
assert model_obj.image_field
I have the following models
class UserProfile(Subject):
    """
    UserProfile class
    """
    # This field is required.
    user = models.OneToOneField(User)
    # Other fields here
    company = models.CharField(max_length=50, null=True, blank=True,
                               verbose_name=_("Company"))
    contact = models.CharField(max_length=50, null=True, blank=True,
                               verbose_name=_("Contact"))
    msg = models.TextField(null=True, blank=True, verbose_name=_("Message"))

    def __unicode__(self):
        return self.user.username
class ParentImportJob(models.Model):
    """
    Class to store importing jobs
    """
    STATUS_ACTIVE = u'A'
    STATUS_SUCCESS = u'S'
    STATUS_PARTIAL = u'P'
    STATUS_ERROR = u'E'
    STATUS_CHOICES = (
        (STATUS_ACTIVE, _(u'In Progress')),
        (STATUS_SUCCESS, _(u'Successfully Imported')),
        (STATUS_PARTIAL, _(u'Partially Imported')),
        (STATUS_ERROR, _(u'Aborted with error')),
    )
    status = models.CharField(max_length=1, choices=STATUS_CHOICES)
    user_profile = models.ForeignKey(UserProfile)
    errors = models.TextField(null=True, blank=True)
    start_date = models.DateTimeField(auto_now_add=True)
    end_date = models.DateTimeField(blank=True, null=True)
    instance_class = models.CharField(max_length=200)

class ImportJob(ParentImportJob):
    """
    Class to store jobs of files being imported.
    Extends ParentImportJob.
    ParentImportJob is not abstract! But I am interested in 2 separate tables.
    """
    _imported_file = models.TextField(null=True,
                                      blank=True,
                                      db_column='imported_file')
    import_result = models.TextField(null=True, blank=True)

    def set_import_file(self, imported_file):
        """Setter for the imported_file property."""
        self._imported_file = base64.encodestring(imported_file)

    def get_import_file(self):
        """Getter for the imported_file property."""
        return base64.decodestring(self._imported_file)

    imported_file = property(get_import_file, set_import_file)
The following recipe:
ob_mock = Recipe(ImportJob,
import_file=ofile.read(),
import_result=EXCEL_DICT)
When I ran self.job = mommy.make_recipe('excel2db.job_mock') inside the testcase I get IntegrityError: column user_id is not unique
Am I doing something wrong?
mock_file.txt is not included in the source distribution. You need to create it and specify it in a MANIFEST.in file.
It seems that even AutoField values are random.
Since Django delegates this value generation to the database, we should respect that behavior by not automatically generating a value for AutoField.
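The intended behaviour can be sketched in plain Python: skip AutoField when building the attribute dict and let the database assign the value on INSERT. The class names below are stand-ins for illustration, not Django's real field classes.

```python
class AutoField:                      # stand-in for django's AutoField
    name = 'id'

class CharField:                      # stand-in for a regular field
    name = 'name'

def attrs_to_generate(fields):
    """Return names of fields mommy should fill; auto fields are skipped
    because the database assigns their values on INSERT."""
    return [f.name for f in fields if not isinstance(f, AutoField)]

names = attrs_to_generate([AutoField(), CharField()])
```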
Implement generators for FileField and ImageField in mommy
Hi,
I just cloned the project and tried running the tests, and got:
$ python runtests.py
Traceback (most recent call last):
File "runtests.py", line 31, in <module>
runtests()
File "runtests.py", line 24, in runtests
from django.test.simple import DjangoTestSuiteRunner
File "/Users/hugo/.virtualenvs/novelas/lib/python2.6/site-packages/django/test/__init__.py", line 5, in <module>
from django.test.client import Client
File "/Users/hugo/.virtualenvs/novelas/lib/python2.6/site-packages/django/test/client.py", line 18, in <module>
from django.test import signals
ImportError: cannot import name signals
This error makes no sense at first, but I realized it was not finding the test_settings.py file. So I just created a dummy one and it worked.
I am using Django 1.1.1.
I am going to attach a pull request asap.
Since we already have a make_many method, why don't we have a prepare_many one? I think this will make the API more consistent.
(Pdb) mommy.make(Post)
*** DoesNotExist: Post matching query does not exist. Lookup parameters were {'pk': 1}
(Pdb) Post.objects.all()
[]
(Pdb) Article.objects.all()
[<Article: /article/3apmB5cyN0yDsyzZwmBhqhf0mDlsIA-BlNrdcJVul0zpiLIM-Y/HY1y3VDgxuERG5-a0Sys8RbzJaQpOl2hlKA9eZuGbF_S09o6mW>]
(Pdb)
Model Post extends model Article (not abstract).
mommy.make: the Article is created but the Post is not.
Has anyone tried unpinning the Django version?
I created a custom generator for Django's DateTimeField. I followed the instructions described in the basic usage documentation, but it didn't work: when I execute my tests, the custom generator I created doesn't run.
I installed model_mommy through pip install. I made a little test: I removed the pip version and installed the GitHub version. And (tcharamm!) it worked! So I suppose the pip version is a little outdated.
Model mommy enters an infinite loop if a self reference is present.
The current implementation of gen_url can generate a URL such as http://..com, which is invalid.
Mommy should provide support for predefined factories, to solve cases where the model has custom fields and to handle complex relations.
Also, this will give more controlled data generation, since it will be pre-defined. =)
We should be able to know the current model_mommy version being used. Something like requests does:
In [1]: requests.__version__
Out[1]: '0.14.2'
Today we have the following output:
In [1]: model_mommy.__version__
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-2-91f3c9664a1a> in <module>()
----> 1 model_mommy.__version__
AttributeError: 'module' object has no attribute '__version__'
Pre-defined factories should be possible with model_mommy, to make it possible to handle custom fields and the cases where random data is not welcome. =)
I was looking to solve issue #95 and found myself in front of a bigger one. The seq function does not provide atomicity to test methods. The problem is that it is not reset after a test method's execution or before a new one. So, let's take the following test method as an example (extracted from here):
def test_increment_for_numbers(self):
dummy = mommy.make_recipe('test.generic.serial_numbers')
self.assertEqual(dummy.default_int_field, 11)
self.assertEqual(dummy.default_decimal_field, Decimal('21.1'))
self.assertEqual(dummy.default_float_field, 2.23)
dummy = mommy.make_recipe('test.generic.serial_numbers')
self.assertEqual(dummy.default_int_field, 12)
self.assertEqual(dummy.default_decimal_field, Decimal('22.1'))
self.assertEqual(dummy.default_float_field, 3.23)
dummy = mommy.prepare_recipe('test.generic.serial_numbers')
self.assertEqual(dummy.default_int_field, 13)
self.assertEqual(dummy.default_decimal_field, Decimal('23.1'))
self.assertEqual(dummy.default_float_field, 4.23)
If we create a new test that has exactly the same code, but just another name so it could run, it will fail. The test could be something like:
def test_increment_for_numbers_2(self):
dummy = mommy.make_recipe('test.generic.serial_numbers')
self.assertEqual(dummy.default_int_field, 11)
self.assertEqual(dummy.default_decimal_field, Decimal('21.1'))
self.assertEqual(dummy.default_float_field, 2.23)
dummy = mommy.make_recipe('test.generic.serial_numbers')
self.assertEqual(dummy.default_int_field, 12)
self.assertEqual(dummy.default_decimal_field, Decimal('22.1'))
self.assertEqual(dummy.default_float_field, 3.23)
dummy = mommy.prepare_recipe('test.generic.serial_numbers')
self.assertEqual(dummy.default_int_field, 13)
self.assertEqual(dummy.default_decimal_field, Decimal('23.1'))
self.assertEqual(dummy.default_float_field, 4.23)
If we run the tests, we have the following output:
======================================================================
FAIL: test_increment_for_numbers2 (test.generic.tests.test_recipes.TestSequences)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/bernardo/virtualenvs/model_mommy/test/generic/tests/test_recipes.py", line 330, in test_increment_for_numbers2
self.assertEqual(dummy.default_int_field, 11)
AssertionError: 14 != 11
The problem is that the seq call used to define the field default_int_field in this recipe is not created again when executing a new test method; the same object is used instead, so it keeps its previous state.
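The state leak can be reproduced with a simplified stand-in for seq (an assumption for illustration, not model_mommy's exact code): the counter is created once, when the recipe module is imported, and never reset.

```python
import itertools

# Simplified stand-in for seq: a generator created at "recipe definition"
# time, whose state persists across test methods.
def seq(start):
    return (start + n for n in itertools.count(1))

serial = seq(10)                            # module import / recipe definition
first_test = [next(serial), next(serial)]   # a first test sees 11, 12
second_test = [next(serial)]                # a second test continues at 13, not 11
```

One fix on the user side is recreating the recipe (and thus the generator) in setUp; fixing it in model_mommy would mean resetting its sequences between test methods.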
I think the README alone doesn't cover and list many of the possibilities of model_mommy usage. I think we should use Read the Docs for this kind of documentation. We can do it in an easier and more organized way.
When a ManyToMany relation uses 'through', you cannot add an object with .add.
When you attempt to do:
mommy.make(Employee)
When Employee has a field like:
references = models.ManyToManyField(Person, through='References')
model_mommy throws the following traceback:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/dustin/.virtualenvs/lalo-django/local/lib/python2.7/site-packages/model_mommy/mommy.py", line 52, in make
return mommy.make(**attrs)
File "/home/dustin/.virtualenvs/lalo-django/local/lib/python2.7/site-packages/model_mommy/mommy.py", line 210, in make
return self._make(commit=True, **attrs)
File "/home/dustin/.virtualenvs/lalo-django/local/lib/python2.7/site-packages/model_mommy/mommy.py", line 248, in _make
return self.instance(model_attrs, _commit=commit)
File "/home/dustin/.virtualenvs/lalo-django/local/lib/python2.7/site-packages/model_mommy/mommy.py", line 270, in instance
m2m_relation.add(model_instance)
AttributeError: 'ManyRelatedManager' object has no attribute 'add'
I have a model like this:
from django.db import models
class MyModel(models.Model):
slug = models.SlugField(max_length=100)
duration = models.TimeField(_(u'Duração'))
But when I use model_mommy for this model, it gives me the following error message:
TypeError: <class 'django.db.models.fields.TimeField'> is not supported by mommy.
I'm using the following recipe definition:
documentcategory = Recipe(models.DocumentCategory,
name='An1I',
description='Analysis 1 für Informatiker',
)
document_summary = Recipe(models.Document,
dtype=models.Document.DTypes.SUMMARY,
category=foreign_key(documentcategory))
document_exam = Recipe(models.Document,
dtype=models.Document.DTypes.EXAM,
category=foreign_key(documentcategory))
document_software = Recipe(models.Document,
dtype=models.Document.DTypes.SOFTWARE,
category=foreign_key(documentcategory))
document_learning_aid = Recipe(models.Document,
dtype=models.Document.DTypes.LEARNING_AID,
category=foreign_key(documentcategory))
As you can see, there are recipes for documents with different types. The problem is that in this case the name attribute of the documentcategory recipe is a primary key. Therefore, when creating multiple documents, I get a violation of the primary key constraint of documentcategory.
Is there a way to create a foreign key that references a recipe with a fixed primary key, one that is created when it does not yet exist and referenced when it already exists?
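One workaround, sketched with a hypothetical helper (not part of model_mommy's API), is to memoize the related object by its natural key so every document reuses the same category instead of recreating it:

```python
# Hypothetical get-or-create style cache for recipe results.
_recipe_cache = {}

def get_or_make(key, factory):
    """Create the object with factory() the first time; reuse it afterwards."""
    if key not in _recipe_cache:
        _recipe_cache[key] = factory()
    return _recipe_cache[key]

# In a real test, factory would be documentcategory.make; a dict stands in here.
category = get_or_make('An1I', lambda: {'name': 'An1I'})
same = get_or_make('An1I', lambda: {'name': 'never called'})
```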
In our scenario we have a model Prescription that cannot be generated by mommy (for whatever reason we just need our special Prescription). We've got a recipe to create instances of Prescription. Also, we have got lots of other models that have foreign keys to prescription, we want these models to be generated by mommy.
So, I would like to do something like:
MOMMY_CUSTOM_RECIPES = { 'Prescription': prescription_recipe }
to get mommy to do prescription_recipe.make() whenever mommy.make() is called on the prescription model (as instance, related instance, many_to_many instance, ... always).
I have many custom field types; most are subclasses of Django's standard field types with some small modification (strip white space, use a different default form widget, etc.).
Trying to use model_mommy, first thing I get is an exception about my field type not being supported. So I check the docs and find I need to define a mapping for each custom field to a generator in my settings file.
Since my fields are mostly subclasses of standard Django fields, I think I should be able to map my fields to the built-in model_mommy generators. I try:
from model_mommy import generators
MOMMY_CUSTOM_FIELDS_GEN = {
'turbia.models.CharField': generators.gen_string,
}
But importing generators from inside my settings file gives me a surprising error about:
ImproperlyConfigured: The SECRET_KEY setting must not be empty.
I guess there's a circular import because generators does:
from django.contrib.contenttypes.models import ContentType
from django.db.models import get_models
model_mommy could fix this by moving these imports into the gen_content_type()
function, but I'd still need to define a mapping for every custom field.
Couldn't model mommy use isinstance() or issubclass() for each generator, passing a list of compatible Django fields instead of an explicit mapping? This should work for subclasses. Perhaps this could be a fallback, in addition to an explicit mapping if there is still a need for that.
I think that we could set make_m2m to False by default in the 1.0 release.
When using mommy.make() directly on a model class, I can override related (foreign key) fields with no problem.
But when I create a recipe for the same model and I don't use field=foreign_key(other_recipe), it doesn't work and the value I specify is ignored.
>>> from app1 import models as myapp
>>> from app2 import models as otherapp
>>> from model_mommy import mommy, recipe
>>> InviteResponse = recipe.Recipe('myapp.InviteResponse')
# Doesn't use the field value I pass in.
>>> ir = InviteResponse.make(subscriber__list=otherapp.List.objects.create(name='test-name-1'))
>>> ir.subscriber.list
<List: 2-381cbce7ae9; MpTZOopHRvPARKPxhWHbafQfjvpktCNFCrkGcPJlCOvfLAmiYj>
# Does use the field value I pass in.
>>> ir = mommy.make(myapp.InviteResponse, subscriber__list=otherapp.List.objects.create(name='test-name-2'))
>>> ir.subscriber.list
<List: 3-96f2324c273; test-name-2>
# If I use foreign_key() in the recipe and create a dummy recipe for the related model, it works.
>>> Subscriber = recipe.Recipe('otherapp.Subscriber')
>>> InviteResponse = recipe.Recipe('myapp.InviteResponse', subscriber=recipe.foreign_key(Subscriber))
>>> ir = InviteResponse.make(subscriber__list=otherapp.List.objects.create(name='test-name-3'))
>>> ir.subscriber.list
<List: 4-82e9b30d0e3; test-name-3>
Now it's not that hard to create a recipe for the related field and use foreign_key()
, but I shouldn't have to. If it works for models it should work for recipes. In this case app2
is a 3rd party app and I don't want to create empty recipes for 3rd party models when I don't care about their data, just so I can override related fields when creating objects in my app.
When the field has a unique constraint (e.g. an IntegerField code) and I do something like:
bobs_dog = mommy.make('family.Dog',)
I get random duplicate errors like:
IntegrityError: duplicate key value violates unique constraint "app_model_code_key"
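Until mommy tracks already-used values itself, one workaround is mapping a deterministic generator onto the unique field, since monotonically increasing values cannot collide the way random draws can. The generator below is an assumption for illustration; the wiring would follow the type_mapping override shown elsewhere in this thread.

```python
import itertools

# Monotonically increasing integers: collision-free within a test run.
_unique_int = itertools.count(1)

def gen_unique_integer():
    return next(_unique_int)

# Unlike random.randint draws, repeated calls never repeat a value.
codes = [gen_unique_integer() for _ in range(3)]
```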
I've found it quite useful to define recipes for individual test cases rather than the global case in the README. It took a bit of reading of the source to realize that the following was an option, and it might be worth mentioning among the other recipe documentation.
class EmployeeTest(TestCase):
    def setUp(self):
        self.employee_recipe = Recipe(Employee, name=seq('Employee '))

    def test_employee_list(self):
        self.employee_recipe.make(_quantity=3)
        # test stuff....

    def test_employee_tasks(self):
        employee1 = self.employee_recipe.make()
        task_recipe = Recipe(Task, employee=employee1)
        task_recipe.make(status='done')
        task_recipe.make(due_date=datetime(2014, 1, 1))
        # test stuff....
Right now I am installing commit f734decb017e537b42993ceb95768f489971b267
to have this feature but of course I'd rather pin model_mommy
to a version instead.
Hello,
I've noticed that (non-abstract) model inheritance causes some issues with setting fields on the parent when using make_one. For example:
class Foo(models.Model):
    baz = models.IntegerField(default=0)

class Bar(Foo):
    pass

bar = mommy.make_one(Bar, baz=1)
print bar.baz
>>> 0
(Please excuse my example if it's not 100% syntactically correct; I was just trying to illustrate what I'm seeing. :-) )
So, I dove into the code and I think I might have a solution. In mommy.py, line 181:
# This does not pass the attr dict, as it should, I think
model_attrs[field.name] = self.generate_value(field)
My proposed fix is:
if field in self.model._meta.parents.values():
    parent_attrs = {}
    for parent_model, auto_field in self.model._meta.parents.iteritems():
        if field == auto_field:
            parent_fields = [parent_field.name for parent_field in parent_model._meta.fields]
            parent_attrs = {key: value for key, value in attrs.iteritems() if key in parent_fields}
            break
    model_attrs[field.name] = self.generate_value(field, **parent_attrs)
else:
    model_attrs[field.name] = self.generate_value(field)
** EDIT - Hopefully no one started looking at this before the edit - I had to add the check to only pass through the fields that are on the parent model. I didn't get to really refactor it or anything, but it works in all my tests.
If someone else is seeing something differently, please let me know. This seems to be the correct behavior as Django doesn't currently allow you to override fields on child classes (and as far as I know, there isn't anything that they have slated to allow you to do so. In the future, if they do add that functionality, you could add another check to see if there are redefined fields in the child model). I should also say that with my local testing, this works. However, I haven't tested with two non-abstract models as parents, but it seems like it should still work.
Your feedback is greatly appreciated. Thanks.
...or I'm just dumb.
I can't seem to figure out a way to do this.
When I created a number of objects in a row (inside a loop) using make_recipe with the seq helper in the proper place, no exception was thrown, even though the objects have a unique constraint (thanks to seq).
However, the same can't be reproduced when using the _quantity argument. Is it a bug? For me, both approaches should be equivalent (and the second one is the more recommended).
Did I miss something?
I get the following when I have three or more ambiguous models.
>>> from model_mommy import mommy; mommy.make('GCAccount')
ambiguous_models: ['flatpage_sites', 'flatpage', 'testimonial_sites', 'testimonial', 'graduateapplied', 'aboutus_sites', 'aboutus', 'testimonial_sites', 'testimonial']
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/jonathan/workspace/gradcon4/venv/local/lib/python2.7/site-packages/model_mommy/mommy.py", line 68, in make
mommy = Mommy(model, make_m2m=make_m2m)
File "/home/jonathan/workspace/gradcon4/venv/local/lib/python2.7/site-packages/model_mommy/mommy.py", line 227, in __init__
self.model = self.finder.get_model(model)
File "/home/jonathan/workspace/gradcon4/venv/local/lib/python2.7/site-packages/model_mommy/mommy.py", line 165, in get_model
model = self.get_model_by_name(name)
File "/home/jonathan/workspace/gradcon4/venv/local/lib/python2.7/site-packages/model_mommy/mommy.py", line 182, in get_model_by_name
self._populate()
File "/home/jonathan/workspace/gradcon4/venv/local/lib/python2.7/site-packages/model_mommy/mommy.py", line 206, in _populate
unique_models.pop(name)
KeyError: 'testimonial_sites'
Hello guys,
This is a simple issue, I guess.
I downloaded model_mommy from PyPI using pip install model_mommy. The custom field support was not present (as the current docs state it should be).
It took me a while to figure this out, but when I did a pip install -e git+https.. the custom field generator support was in place.
We aren't accepting Django 1.5
I really only need to write a recipe to populate one field--not all of them. Is it possible to have the remaining fields auto-populate? If not, I'd be happy to add this functionality.
I was wondering if an API like the following isn't desirable:
from model_mommy import mommy
foo_obj, bar_obj = mommy.make_recipes(
'app.foo_recipe', 'app.bar_recipe'
)
This would avoid repeated calls to make_recipe
in more complex tests. What do you think?
When a choice list is defined for a model field, all generated values should come from there.
File "/home/john/venv/sop/local/lib/python2.7/site-packages/model_mommy/mommy.py", line 79, in make_recipe
return _recipe(mommy_recipe_name).make(_quantity=_quantity, **new_attrs)
File "/home/john/venv/sop/local/lib/python2.7/site-packages/model_mommy/recipe.py", line 30, in make
return mommy.make(self.model, **self._mapping(attrs))
File "/home/john/venv/sop/local/lib/python2.7/site-packages/model_mommy/mommy.py", line 52, in make
return mommy.make(**attrs)
File "/home/john/venv/sop/local/lib/python2.7/site-packages/model_mommy/mommy.py", line 210, in make
return self._make(commit=True, **attrs)
File "/home/john/venv/sop/local/lib/python2.7/site-packages/model_mommy/mommy.py", line 237, in _make
if not field.name in self.attr_mapping and (field.has_default() or field.blank):
AttributeError: 'TaggableManager' object has no attribute 'has_default'
This seems to be caused by django-taggit making the TaggableManager field look like a regular Django field when it actually is not. It seems Milkman had a similar issue - see here: ccollins/milkman#17
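A defensive check in the field loop would sidestep this: skip anything that doesn't expose the regular Field API. The classes below are simplified stand-ins for illustration, not the actual mommy.py code or taggit's classes.

```python
class FakeTaggableManager:            # stand-in for taggit's TaggableManager
    name = 'tags'

class RegularField:                   # stand-in for a normal Django field
    name = 'status'
    blank = True

    def has_default(self):
        return False

def fillable_fields(fields):
    """Yield only objects that behave like real model fields."""
    for field in fields:
        if not hasattr(field, 'has_default'):
            continue                  # e.g. taggit's TaggableManager
        yield field

names = [f.name for f in fillable_fields([FakeTaggableManager(), RegularField()])]
```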
During PythonNordeste I was talking with @fernandogrd and we came up with a feature that could be interesting to have.
In many applications we have central models that always need to exist. For example, a get for a specific instance in a context processor... This means that, even using recipes, the developer always needs to call make_recipe in every setUp method of the test cases. This is a little annoying.
We were talking and maybe it would be interesting to have another kind of recipe, a recipe that is created before every test execution. This would work like the initial data fixture that is always loaded. I think that we could use a test runner to do the job. What do you think?
I'll explain the idea given by @henriquebastos during our last lunch. Suppose that we have the following model:
class Person(models.Model):
name = models.CharField(max_length=60)
age = models.PositiveIntegerField()
Now, imagine some test situation where we need to create 4 objects with the same age but different, non-random names. Today, the way we can achieve this with model mommy is as follows:
person1 = mommy.make(Person, age=20, name='bob')
person2 = mommy.make(Person, age=20, name='alice')
person3 = mommy.make(Person, age=20, name='john')
person4 = mommy.make(Person, age=20, name='peter')
There is a lot of code repetition on the previous snippet and maybe we can improve this. This issue is to start this discussion to explore possibilities of how we can implement something better. The first suggestion was using something like this:
names = ['bob', 'alice', 'john', 'peter']
persons = mommy.make(Person, age=20, name=names)
So, we can instantiate an iterable object with the multiple values that we expect and pass it to the model attribute that we want to change. Although this is an easy approach, it could make the API more complex and confusing. I mean, in the previous code we have a model attribute receiving a list as a parameter, which is kind of odd if we think about model creation...
Well, let the discussion begin =)
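To make the discussion concrete, here is a plain-Python sketch of how the iterable-per-attribute idea could consume values, one item per created object. The make_many name and its semantics are assumptions for illustration, not a proposed final API.

```python
import itertools

def make_many(quantity, **field_values):
    """Build `quantity` attribute dicts, drawing one item per object from
    any iterable value and repeating scalar values for every object."""
    iterators = {}
    for name, value in field_values.items():
        if hasattr(value, '__iter__') and not isinstance(value, str):
            iterators[name] = iter(value)          # one item per object
        else:
            iterators[name] = itertools.repeat(value)  # same value for all
    return [{name: next(it) for name, it in iterators.items()}
            for _ in range(quantity)]

objs = make_many(4, age=20, name=['bob', 'alice', 'john', 'peter'])
```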
I tried to install via pip on my Ubuntu server and I got the error below:
copying model_mommy/__init__.py -> build/lib.linux-i686-2.7/model_mommy
copying model_mommy/mommy_recipes.py -> build/lib.linux-i686-2.7/model_mommy
copying model_mommy/recipe.py -> build/lib.linux-i686-2.7/model_mommy
error: can't copy 'model_mommy/models.py': doesn't exist or not a regular file
If I install from the project (git clone and run setup.py) it works.
What do you think about this? It would be useful for making the syntax simpler in some tests I'm doing. For example:
mommy.make(SomeModel, related_model=RelatedModel.objects.get(id=10))
This could be done with:
mommy.make(SomeModel, related_model_id=10)
setting the attribute related_model_id manually before calling save().
Currently, the way to make Model Mommy work with custom fields is the following:
from model_mommy.mommy import Mommy
from generic.models import Person
from generic.fields import CustomField

def gen_func():
    return 'random_value'

mommy_person = Mommy(Person)
mommy_person.type_mapping.update({CustomField: gen_func})
person = mommy_person.make()
assert person.custom_value == "random_value"
This current approach is very painful when we think about code maintenance. I mean, imagine if we have to deal with this custom field in a lot of test cases... We would need to repeat this code or come up with a refactoring using a base test case class that wraps this kind of hack... Well, this sounds strange to me and I caught myself thinking of ways to solve this custom field problem in a more sane way.
I'm creating this issue so we can discuss different approaches. I have three ideas for now:
The first idea uses Django's settings.py file. We could have something like:
from generic.fields import CustomField

def gen_func():
    return 'random_value'

MOMMY_CUSTOM_FIELDS_GEN = {
    CustomField: gen_func,
}
So, during Mommy object creation, we could look for this setting and update its type_mapping dict.
The second idea relies on letting the field define its own generator function. I'm not so sure about the API, but I was thinking about something like this:
from django.db import models

class CustomField(models.Field):
    ...  # some field code

    def mommy_generator(self):
        return 'random_value'
I tend not to like this approach because it requires changing production code for testing purposes, and that does not sound good to me.
The third idea is just an "evolution" of the explicit field config idea. To avoid, or at least mask, the problem of making production code talk to test code, we could use a decorator instead. Something like:
from django.db import models
from model_mommy import custom_field_gen

def gen_func():
    return "random_value"

@custom_field_gen("gen_func")
class CustomField(models.Field):
    ...  # some field code
Well, I'm not in love with any of them. I was just doing a brain dump about this issue to start the discussion. I think the first approach is the most advisable one, but I'd really like to hear your opinions on this topic.
If you have a model with a ManyToManyField using the 'through' parameter, mommy displays the following error:
AttributeError: 'ManyRelatedManager' object has no attribute 'add'
around the line 155 in mommy.py
I guess it should not be so hard to fix.
Hi, I'm using Model Mommy to generate my project's initial data. I'm finding it very useful, and it's way better than writing fixtures.
Is it a good idea to have a model mommy command to automatically load initial data? Something like what loaddata does with initial_data.json files. Maybe look for files like mommy_init_data.py.
Mommy is not detecting already-used values for unique fields.