torchbox / wagtail-experiments
A/B testing for Wagtail
License: BSD 3-Clause "New" or "Revised" License
If an experiment is set up with differences in inline child objects, the variant pages will fail to display them, and always show the children of the control page. This is because we fake the page ID of the variations to match the control page before the child objects have been fetched - as a result, the query to retrieve them will refer to the control page's ID.
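For illustration, here is roughly what that mechanism looks like with a hypothetical page model; the model, the carousel_items relation, and the import paths (Wagtail 2.x) are assumptions for the sketch, not code from the package:

from django.db import models
from modelcluster.fields import ParentalKey
from wagtail.core.models import Page


class ExperimentPage(Page):
    pass


class CarouselItem(models.Model):
    # inline children are attached to their parent page via the parent's id
    page = ParentalKey(ExperimentPage, related_name='carousel_items',
                       on_delete=models.CASCADE)
    caption = models.CharField(max_length=255)


# Before serving a variant, wagtail-experiments sets variant.pk = control.pk,
# so variant.carousel_items.all() filters CarouselItem on the control page's id
# and returns the control's children rather than the variant's.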
I read that the library works with Wagtail 1.7, but I'm hoping to use this framework once we upgrade our site to Wagtail 2.0. By asking this question, I'm also volunteering to work on this issue with some guidance from more experienced Wagtail/Django/Python devs (JS dev here) :D
0001_initial depends on a migration from Wagtail's master branch:
('wagtailcore', '0030_index_on_pagerevision_created_at'),
I see no reason to depend on this migration. It seems to me that we can specify an earlier migration, so we would be able to support a wider range of Wagtail versions.
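Something like the following change in experiments/migrations/0001_initial.py is what I have in mind; which earlier wagtailcore migration is actually the minimum safe dependency would still need checking:

from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        # instead of ('wagtailcore', '0030_index_on_pagerevision_created_at')
        ('wagtailcore', '0001_initial'),
    ]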
It would probably be cleaner to dump the report as JSON and build the chart in JS, instead of generating the JS call in the report.html template (see #4).
Are there additional docs anywhere that contain recipes/examples of how to set up both the "redirect to page" type A/B testing, as well as the "triggered by JS" version of A/B testing?
For example, rather than "real" pages, we're looking to do A/B testing on an in-model subroute using the RoutablePageMixin, and it would be useful to know what parts of wagtail-experiments we can tap into in order to decide "which render path inside the subroute" to pick for the current session, so we don't serve the same user different views on consecutive page interactions.
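Something along these lines is what we're after. The sketch below doesn't touch wagtail-experiments internals (I'm not sure which are safe to call for this case); the page model, session key and template names are made up, and import paths assume Wagtail 2.x:

import random

from django.shortcuts import render
from wagtail.contrib.routable_page.models import RoutablePageMixin, route
from wagtail.core.models import Page


class LandingPage(RoutablePageMixin, Page):

    @route(r'^offer/$')
    def offer(self, request):
        # Pin the choice to the session so the same visitor sees the same
        # render path on every subsequent interaction.
        variant = request.session.get('offer_variant')
        if variant is None:
            variant = random.choice(['a', 'b'])
            request.session['offer_variant'] = variant
        return render(request, 'landing/offer_%s.html' % variant,
                      {'page': self, 'self': self})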
I have multiple sites, of which one is an overview website. I use experiments to A/B test whether people use the menu of the overview website to reach the other site (defined in Sites and running well) or the link on the homepage of the overview website. It recognises the control and alternative page (shows one of them per session), but it doesn't count the visit to the page on the other site after clicking the link or using the menu. Is this a bug or something that isn't possible yet?
A good A/B test starts by setting the goals and criteria, so you know when you've got an actionable result. Currently wagtail-experiments does not provide a way for the user to set parameters that are important for reliability.
The following changes should significantly enhance the value of wagtail-experiments to both high and low traffic sites.
Possible new settings in UX:
The single biggest error in A/B testing appears to be deciding based on too small a sample. There's a lot of controversy over how small is too small. The consensus among pollsters is that 1,000 responses are enough to represent 300+ million people, so at least we have an upper limit. It's also a good idea to limit the time frame, to reduce the influence of changing conditions.
Minimum sample to recommend action: [ ]
Maximum time frame: [ ]
Goals reached after many intermediate pages usually aren't very relevant to the experiment.
[x] Goal must be reached directly from an experiment page
[ ] Intermediate pages before goal are accepted (not recommended, but may be current behavior)
Sometimes you're testing different titles. Sometimes you're testing body changes with the same title. Both are important. If there are many alternative pages, we can make it easy to use the control page title.
[x] Use control page title
Possible ways to reduce clutter in the UX:
Hard code good defaults. But if users can't change them it will invite controversy about our defaults.
Allow users to add preferences via settings.py (a rough sketch follows this list). Marketers are the primary target market for wagtail-experiments, so this isn't the best option.
Only show detailed settings if requested. The request could be on a Settings page.
Put detailed settings on a Settings page. All experiments would share the same settings.
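As a rough sketch of the settings.py option, something like this; none of these setting names exist in wagtail-experiments today, they just mirror the fields proposed above:

WAGTAIL_EXPERIMENTS = {
    'MINIMUM_SAMPLE_SIZE': 1000,       # don't recommend action below this many participants
    'MAXIMUM_TIME_FRAME_DAYS': 30,     # close the experiment after this window
    'DIRECT_GOALS_ONLY': True,         # goal must be reached directly from an experiment page
    'USE_CONTROL_PAGE_TITLE': False,   # reuse the control page title on alternatives
}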
It looks like wagtail-experiments relies on user id to pick alternatives and mark goal completions, but the session is used to track whether a user has entered or completed an experiment previously.
In many cases it's important to be able to run experiments on anonymous users. Would it be possible to add a token to the session, rather than relying on user id to select alternatives?
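A minimal sketch of what I mean, with an invented session key (this is not how the package currently works):

import uuid


def get_or_create_participant_token(request):
    # Pin anonymous visitors with a random token stored in the session,
    # instead of relying on a user id.
    token = request.session.get('experiment_token')
    if token is None:
        token = uuid.uuid4().hex
        request.session['experiment_token'] = token
    return token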
I have some fairly complex pages that couldn't be managed in Wagtail, but they are the goal of some of my experiments. I'm not sure how to go about this. My suggestion: instead of only accepting a page as the goal of an experiment, also accept a goal URL.
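Roughly what I have in mind; the field names here are invented and not part of the package:

from django.db import models


class Experiment(models.Model):
    # accept either a Wagtail page or a raw URL as the goal
    goal_page = models.ForeignKey(
        'wagtailcore.Page', null=True, blank=True,
        on_delete=models.SET_NULL, related_name='+')
    goal_url = models.URLField(
        blank=True, help_text="Used when the goal isn't a Wagtail page")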
I get a TemplateSyntaxError at /admin/experiments/experiment/report/1/:
'staticfiles' is not a registered tag library. Must be one of...
I checked the template and saw {% load staticfiles %}, but in recent Django versions that should be {% load static %}. django.contrib.staticfiles is installed.
This package is really great, thank you Torchbox.
Will there be support for Redis? I think it could be useful.
Because of this deprecated import, wagtail-experiments is currently not compatible with Django 3, even though Wagtail itself is:
from django.utils.encoding import python_2_unicode_compatible
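For reference, a minimal sketch of the fix: on Python 3 the decorator can simply be dropped and __str__ kept (the field name on Experiment is assumed here):

from django.db import models


class Experiment(models.Model):
    name = models.CharField(max_length=255)

    # no @python_2_unicode_compatible needed; __str__ is enough on Python 3
    def __str__(self):
        return self.name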
The following line of code in report.html does not format floats correctly in some locales:
'{{ history_entry.conversion_rate|floatformat:2|escapejs }}'{% if not forloop.last %},{% endif %}
For example, on my Dutch-language Django site this formats 12.3456 as 12,35, which breaks the report graph (the JavaScript chart expects a dot as the decimal separator).
I think the solution is to change that line of code to:
'{{ history_entry.conversion_rate|stringformat:".2f"|escapejs }}'{% if not forloop.last %},{% endif %}
Useful for testing and demos. Here's a rough script which could be converted into a management command:
import os
from datetime import datetime, timedelta
from random import randrange


def fake_experiment_data(slug, days=10, min_visits=100, max_visits=150, purge=False):
    from django.db.models import F

    from experiments.models import Experiment, ExperimentHistory

    experiment = Experiment.objects.get(slug=slug)
    variations = experiment.get_variations()
    control = experiment.control_page

    if purge:
        print("purging all history for %s" % experiment)
        ExperimentHistory.objects.filter(experiment=experiment).delete()
        return

    print("creating fake history data for %s" % experiment)
    for variation in variations:
        for day in range(days):
            date = datetime.now() - timedelta(days=day)
            for _ in range(1, randrange(min_visits, max_visits)):
                history, _created = ExperimentHistory.objects.get_or_create(
                    experiment=experiment, variation=variation, date=date)
                # increment the participant_count atomically
                ExperimentHistory.objects.filter(pk=history.pk).update(
                    participant_count=F('participant_count') + 1)
                if variation == control:
                    # make the control page less likely to convert
                    if randrange(0, 4) == 1:
                        ExperimentHistory.objects.filter(pk=history.pk).update(
                            completion_count=F('completion_count') + 1)
                else:
                    if randrange(0, 3) == 1:
                        ExperimentHistory.objects.filter(pk=history.pk).update(
                            completion_count=F('completion_count') + 1)


if __name__ == "__main__":
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_project.settings')
    import django
    django.setup()
    fake_experiment_data('which-logo', purge=True)
    fake_experiment_data('which-logo')