Split

📈 The Rack Based A/B testing framework https://libraries.io/rubygems/split

Split is a rack based A/B testing framework designed to work with Rails, Sinatra or any other rack based app.

Split is heavily inspired by the Abingo and Vanity Rails A/B testing plugins and Resque in its use of Redis.

Split is designed to be hacker friendly, allowing for maximum customisation and extensibility.

Install

Requirements

Split v4.0+ is currently tested with Ruby >= 2.5 and Rails >= 5.2.

If your project requires compatibility with Ruby 2.4.x or older Rails versions, you can try v3.0, or v0.8.0 (for Ruby 1.9.3).

Split uses Redis as a datastore.

Split only supports Redis 4.0 or greater.

If you're on OS X, Homebrew is the simplest way to install Redis:

brew install redis
redis-server /usr/local/etc/redis.conf

You now have a Redis daemon running on port 6379.

Setup

gem install split

Rails

Adding gem 'split' to your Gemfile will autoload it when Rails starts up. As long as you've configured Redis, it will 'just work'.

Sinatra

To configure Sinatra with Split you need to enable sessions and mix in the helper methods. Add the following lines at the top of your Sinatra app:

require 'split'

class MySinatraApp < Sinatra::Base
  enable :sessions
  helpers Split::Helper

  get '/' do
    ...
  end
end

Usage

To begin your A/B test use the ab_test method, naming your experiment with the first argument and then the different alternatives which you wish to test on as the other arguments.

ab_test returns one of the alternatives; if a user has already seen the test, they will get the same alternative as before, which you can use to branch your code on.

It can be used to render different templates, show different text or any other case based logic.

ab_finished is used to mark the completion of an experiment, also known as a conversion.

Example: View

<% ab_test(:login_button, "/images/button1.jpg", "/images/button2.jpg") do |button_file| %>
  <%= image_tag(button_file, alt: "Login!") %>
<% end %>

Example: Controller

def register_new_user
  # See what level of free points maximizes users' decision to buy replacement points.
  @starter_points = ab_test(:new_user_free_points, '100', '200', '300')
end

Example: Conversion tracking (in a controller!)

def buy_new_points
  # some business logic
  ab_finished(:new_user_free_points)
end

Example: Conversion tracking (in a view)

Thanks for signing up, dude! <% ab_finished(:signup_page_redesign) %>

You can find more examples, tutorials and guides on the wiki.

Statistical Validity

Split has two options for you to use to determine which alternative is the best.

The first option (the default on the dashboard) uses a z test (n>30) for the difference between your control and alternative conversion rates to calculate statistical significance. This test will tell you whether an alternative is better or worse than your control, but it will not tell you which alternative is best in an experiment with multiple alternatives. Split will only tell you whether your experiment is 90%, 95%, or 99% significant, and this test only works if you have more than 30 participants and 5 conversions for each branch.

As per this blog post on the pitfalls of A/B testing, it is highly recommended that you determine your requisite sample size for each branch before running the experiment. Otherwise, you'll have an increased rate of false positives (experiments which show a significant effect where really there is none).

Here is a sample size calculator for your convenience.
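As a back-of-the-envelope alternative to an online calculator, the per-branch sample size for comparing two conversion rates can be estimated in plain Ruby with the standard normal-approximation formula (a sketch, not part of Split; 1.96 and 0.84 are the z-values for 95% confidence and 80% power):

```ruby
# Estimate participants needed per branch to detect a lift from p1 to p2.
# z_alpha = 1.96 (95% confidence, two-sided), z_beta = 0.84 (80% power).
def sample_size_per_branch(p1, p2, z_alpha: 1.96, z_beta: 0.84)
  variance = p1 * (1 - p1) + p2 * (1 - p2)
  (((z_alpha + z_beta)**2 * variance) / (p1 - p2)**2).ceil
end

sample_size_per_branch(0.10, 0.12) # => 3834 participants per branch
```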

The second option uses simulations from a beta distribution to determine the probability that the given alternative is the winner compared to all other alternatives. You can view these probabilities by clicking on the drop-down menu labeled "Confidence." This option should be used when the experiment has more than just 1 control and 1 alternative. It can also be used for a simple, 2-alternative A/B test.

Calculating the beta-distribution simulations for a large number of experiments can be slow, so the results are cached. You can specify how often they should be recalculated (the default is once per day).

Split.configure do |config|
  config.winning_alternative_recalculation_interval = 3600 # 1 hour
end

Extras

Weighted alternatives

Perhaps you only want to show an alternative to 10% of your visitors because it is very experimental or not yet fully load tested.

To do this you can pass a weight with each alternative in the following ways:

ab_test(:homepage_design, {'Old' => 18}, {'New' => 2})

ab_test(:homepage_design, 'Old', {'New' => 1.0/9})

ab_test(:homepage_design, {'Old' => 9}, 'New')

This will only show the new alternative to 1 in 10 visitors; the default weight for an alternative is 1.

Overriding alternatives

For development and testing, you may wish to force your app to always return an alternative. You can do this by passing it as a parameter in the url.

If you have an experiment called button_color with alternatives called red and blue used on your homepage, a url such as:

http://myawesomesite.com?ab_test[button_color]=red

will always have red buttons. This won't be stored in your session or count towards the results, unless you set the store_override configuration option.

In the event you want to disable all tests without having to know the individual experiment names, add a SPLIT_DISABLE query parameter.

http://myawesomesite.com?SPLIT_DISABLE=true

It is not required to send SPLIT_DISABLE=false to activate Split.

Rspec Helper

To aid testing with RSpec, write spec/support/split_helper.rb and call use_ab_test(alternatives_by_experiment) in your specs as instructed below:

# Create a file with these contents at 'spec/support/split_helper.rb'
# and ensure it is `require`d in your rails_helper.rb or spec_helper.rb
module SplitHelper

  # Force a specific experiment alternative to always be returned:
  #   use_ab_test(signup_form: "single_page")
  #
  # Force alternatives for multiple experiments:
  #   use_ab_test(signup_form: "single_page", pricing: "show_enterprise_prices")
  #
  def use_ab_test(alternatives_by_experiment)
    allow_any_instance_of(Split::Helper).to receive(:ab_test) do |_receiver, experiment, &block|
      variant = alternatives_by_experiment.fetch(experiment) { |key| raise "Unknown experiment '#{key}'" }
      block.call(variant) unless block.nil?
      variant
    end
  end
end

# Make the `use_ab_test` method available to all specs:
RSpec.configure do |config|
  config.include SplitHelper
end

Now you can call use_ab_test(alternatives_by_experiment) in your specs, for example:

it "registers using experimental signup" do
  use_ab_test experiment_name: "alternative_name"
  post "/signups"
  ...
end

Starting experiments manually

By default, new A/B tests will be active right after deployment. If you would like to start a new test some time after the deploy, set the start_manually configuration option to true.

With this option enabled, tests won't start right after deploy, but only after you press the Start button in the Split admin dashboard. If a test is deleted from the Split dashboard, it can only be started again by pressing the Start button once it has been re-initialized.
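In an initializer, this might look like (a minimal sketch using the start_manually option named above):

```ruby
Split.configure do |config|
  config.start_manually = true # new tests wait for the Start button in the dashboard
end
```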

Reset after completion

When a user completes a test, their session is reset so that they may start the test again in the future.

To stop this behaviour you can pass the following option to the ab_finished method:

ab_finished(:experiment_name, reset: false)

The user will then always see the alternative they started with.

Any old unfinished experiment key will be deleted from the user's data storage if the experiment has been removed, or if it is over and a winner has been chosen. This allows a user to enroll in any new experiment when the allow_multiple_experiments config option is set to false.

Reset experiments manually

By default Split automatically resets the experiment whenever it detects the configuration for an experiment has changed (e.g. you call ab_test with different alternatives). You can prevent this by setting the option reset_manually to true.

You may want to do this when you want to change something, like the variants' names, the metadata about an experiment, etc. without resetting everything.
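In an initializer, this might look like (a minimal sketch using the reset_manually option named above):

```ruby
Split.configure do |config|
  config.reset_manually = true # keep experiment data even if the configuration changes
end
```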

Multiple experiments at once

By default Split will avoid users participating in multiple experiments at once. This means you are less likely to skew results by adding in more variation to your tests.

To stop this behaviour and allow users to participate in multiple experiments at once, set the allow_multiple_experiments config option to true like so:

Split.configure do |config|
  config.allow_multiple_experiments = true
end

This will allow the user to participate in any number of experiments and belong to any alternative in each experiment. This has the possible downside of a variation in one experiment influencing the outcome of another.

To address this, set the allow_multiple_experiments config option to 'control' like so:

Split.configure do |config|
  config.allow_multiple_experiments = 'control'
end

For this to work, each and every experiment you define must have an alternative named 'control'. This will allow the user to participate in multiple experiments as long as the user belongs to the alternative 'control' in each experiment. As soon as the user belongs to an alternative named something other than 'control' the user may not participate in any more experiments. Calling ab_test() will always return the first alternative without adding the user to that experiment.

Experiment Persistence

Split comes with three built-in persistence adapters for storing users and the alternatives they've been given for each experiment.

By default Split will store the tests for each user in the session.

You can optionally configure Split to use a cookie, Redis, or any custom adapter of your choosing.

Cookies

Split.configure do |config|
  config.persistence = :cookie
end

When using cookie persistence, Split stores data in an anonymous tracking cookie named 'split', which expires in 1 year. To change this, set persistence_cookie_length in the configuration (in seconds).

Split.configure do |config|
  config.persistence = :cookie
  config.persistence_cookie_length = 2592000 # 30 days
end

The data stored consists of the experiment name and the variants the user is in. Example: { "experiment_name" => "variant_a" }

Note: Using cookies depends on ActionDispatch::Cookies or an identical API.

Redis

Using Redis will allow ab_users to persist across sessions or machines.

Split.configure do |config|
  config.persistence = Split::Persistence::RedisAdapter.with_config(lookup_by: -> (context) { context.current_user_id })
  # Equivalent
  # config.persistence = Split::Persistence::RedisAdapter.with_config(lookup_by: :current_user_id)
end

Options:

  • lookup_by: method to invoke per request for uniquely identifying ab_users (mandatory configuration)
  • namespace: separate namespace to store these persisted values (default "persistence")
  • expire_seconds: sets the TTL for the user key (if a user is in multiple experiments, the most recent update resets the TTL for all their assignments)

Dual Adapter

The Dual Adapter allows the use of different persistence adapters for logged-in and logged-out users. A common use case is to use Redis for logged-in users and Cookies for logged-out users.

cookie_adapter = Split::Persistence::CookieAdapter
redis_adapter = Split::Persistence::RedisAdapter.with_config(
    lookup_by: -> (context) { context.send(:current_user).try(:id) },
    expire_seconds: 2592000)

Split.configure do |config|
  config.persistence = Split::Persistence::DualAdapter.with_config(
      logged_in: -> (context) { !context.send(:current_user).try(:id).nil? },
      logged_in_adapter: redis_adapter,
      logged_out_adapter: cookie_adapter)
  config.persistence_cookie_length = 2592000 # 30 days
end

Custom Adapter

Your custom adapter needs to implement the same API as existing adapters. See Split::Persistence::CookieAdapter or Split::Persistence::SessionAdapter for a starting point.
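As an illustration, here is a hypothetical in-memory adapter following the interface the built-in adapters expose (instantiated with the request context, responding to [], []=, delete and keys); the current_user_id context method is an assumption for this sketch, so check the adapters named above before relying on it:

```ruby
# A hypothetical adapter backed by a process-local Hash keyed by user id.
# Not suitable for production (data is lost on restart and not shared
# between processes); it only demonstrates the adapter interface.
class MyCustomAdapter
  STORE = Hash.new { |hash, key| hash[key] = {} }

  def initialize(context)
    # context is the object ab_test runs in (e.g. a controller).
    @user_key = context.current_user_id.to_s # hypothetical context method
  end

  def [](key)
    STORE[@user_key][key.to_s]
  end

  def []=(key, value)
    STORE[@user_key][key.to_s] = value
  end

  def delete(key)
    STORE[@user_key].delete(key.to_s)
  end

  def keys
    STORE[@user_key].keys
  end
end
```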

Split.configure do |config|
  config.persistence = YourCustomAdapterClass
end

Trial Event Hooks

You can define methods that will be called whenever a user is assigned an experiment alternative or completes a goal.

For example:

Split.configure do |config|
  config.on_trial  = :log_trial # run on every trial
  config.on_trial_choose   = :log_trial_choose # run on trials with new users only
  config.on_trial_complete = :log_trial_complete
end

Set these attributes to a method name available in the same context as the ab_test method. These methods should accept one argument, a Trial instance.

def log_trial(trial)
  logger.info "experiment=%s alternative=%s user=%s" %
    [ trial.experiment.name, trial.alternative, current_user.id ]
end

def log_trial_choose(trial)
  logger.info "[new user] experiment=%s alternative=%s user=%s" %
    [ trial.experiment.name, trial.alternative, current_user.id ]
end

def log_trial_complete(trial)
  logger.info "experiment=%s alternative=%s user=%s complete=true" %
    [ trial.experiment.name, trial.alternative, current_user.id ]
end

Views

If you are running ab_test from a view, you must define your event hook callback as a helper_method in the controller:

helper_method :log_trial_choose

def log_trial_choose(trial)
  logger.info "experiment=%s alternative=%s user=%s" %
    [ trial.experiment.name, trial.alternative, current_user.id ]
end

Experiment Hooks

You can assign a proc that will be called when an experiment is reset or deleted. You can use these hooks to call methods within your application to keep data related to experiments in sync with Split.

For example:

Split.configure do |config|
  # after experiment reset or delete
  config.on_experiment_reset  = -> (experiment) {
    # Do something on reset
  }
  config.on_experiment_delete = -> (experiment) {
    # Do something else on delete
  }
  # before experiment reset or delete
  config.on_before_experiment_reset  = -> (experiment) {
    # Do something before reset
  }
  config.on_before_experiment_delete = -> (experiment) {
    # Do something else before delete
  }
  # after the experiment winner has been set
  config.on_experiment_winner_choose = -> (experiment) {
    # Do something when a winner is chosen
  }
end

Web Interface

Split comes with a Sinatra-based front end to get an overview of how your experiments are doing.

If you are running Rails 2: You can mount this inside your app using Rack::URLMap in your config.ru

require 'split/dashboard'

run Rack::URLMap.new \
  "/"       => Your::App.new,
  "/split" => Split::Dashboard.new

However, if you are using Rails 3 or higher: You can mount this inside your app routes by first adding this to the Gemfile:

gem 'split', require: 'split/dashboard'

Then add this to config/routes.rb:

mount Split::Dashboard, at: 'split'

You may want to password protect that page. You can do so with Rack::Auth::Basic (in your split initializer file):

# Rails apps or apps that already depend on activesupport
Split::Dashboard.use Rack::Auth::Basic do |username, password|
  # Protect against timing attacks:
  # - Use & (do not use &&) so that it doesn't short circuit.
  # - Use digests to stop length information leaking
  ActiveSupport::SecurityUtils.secure_compare(::Digest::SHA256.hexdigest(username), ::Digest::SHA256.hexdigest(ENV["SPLIT_USERNAME"])) &
    ActiveSupport::SecurityUtils.secure_compare(::Digest::SHA256.hexdigest(password), ::Digest::SHA256.hexdigest(ENV["SPLIT_PASSWORD"]))
end

# Apps without activesupport
Split::Dashboard.use Rack::Auth::Basic do |username, password|
  # Protect against timing attacks:
  # - Use & (do not use &&) so that it doesn't short circuit.
  # - Use digests to stop length information leaking
  Rack::Utils.secure_compare(::Digest::SHA256.hexdigest(username), ::Digest::SHA256.hexdigest(ENV["SPLIT_USERNAME"])) &
    Rack::Utils.secure_compare(::Digest::SHA256.hexdigest(password), ::Digest::SHA256.hexdigest(ENV["SPLIT_PASSWORD"]))
end

You can even use Devise or any other Warden-based authentication method to authorize users. Just replace mount Split::Dashboard, at: 'split' in config/routes.rb with the following:

match "/split" => Split::Dashboard, anchor: false, via: [:get, :post, :delete], constraints: -> (request) do
  request.env['warden'].authenticated? # are we authenticated?
  request.env['warden'].authenticate! # authenticate if not already
  # or even check any other condition such as request.env['warden'].user.is_admin?
end

More information on this here

Screenshot

split_screenshot

Configuration

You can override the default configuration options of Split like so:

Split.configure do |config|
  config.db_failover = true # handle Redis errors gracefully
  config.db_failover_on_db_error = -> (error) { Rails.logger.error(error.message) }
  config.allow_multiple_experiments = true
  config.enabled = true
  config.persistence = Split::Persistence::SessionAdapter
  #config.start_manually = false ## new test will have to be started manually from the admin panel. default false
  #config.reset_manually = false ## if true, it never resets the experiment data, even if the configuration changes
  config.include_rails_helper = true
  config.redis = "redis://custom.redis.url:6380"
end

Split looks for the Redis host in the REDIS_URL environment variable, then defaults to redis://localhost:6379 if it is not specified in the configure block.

On platforms like Heroku, Split will use the value of REDIS_PROVIDER to determine which env variable key to use when retrieving the host config. This defaults to REDIS_URL.

Filtering

In most scenarios you don't want A/B testing enabled for web spiders, robots or special groups of users. Split provides functionality to filter these out based on a predefined, extensible list of bots, an IP list, or custom exclusion logic.

Split.configure do |config|
  # bot config
  config.robot_regex = /my_custom_robot_regex/ # or
  config.bots['newbot'] = "Description for bot with 'newbot' user agent, which will be added to config.robot_regex for exclusion"

  # IP config
  config.ignore_ip_addresses << '81.19.48.130' # or regex: /81\.19\.48\.[0-9]+/

  # or provide your own filter functionality, the default is proc{ |request| is_robot? || is_ignored_ip_address? || is_preview? }
  config.ignore_filter = -> (request) { CustomExcludeLogic.excludes?(request) }
end

Experiment configuration

Instead of providing the experiment options inline, you can store them in a hash. This hash can control your experiment's alternatives, weights, algorithm and if the experiment resets once finished:

Split.configure do |config|
  config.experiments = {
    my_first_experiment: {
      alternatives: ["a", "b"],
      resettable: false
    },
    :my_second_experiment => {
      algorithm: 'Split::Algorithms::Whiplash',
      alternatives: [
        { name: "a", percent: 67 },
        { name: "b", percent: 33 }
      ]
    }
  }
end

You can also store your experiments in a YAML file:

Split.configure do |config|
  config.experiments = YAML.load_file "config/experiments.yml"
end

You can then define the YAML file like:

my_first_experiment:
  alternatives:
    - a
    - b
my_second_experiment:
  alternatives:
    - name: a
      percent: 67
    - name: b
      percent: 33
  resettable: false

This simplifies the calls from your code:

ab_test(:my_first_experiment)

and:

ab_finished(:my_first_experiment)

You can also add metadata for each experiment, which is very useful when you need more than an alternative name to change behaviour:

Split.configure do |config|
  config.experiments = {
    my_first_experiment: {
      alternatives: ["a", "b"],
      metadata: {
        "a" => {"text" => "Have a fantastic day"},
        "b" => {"text" => "Don't get hit by a bus"}
      }
    }
  }
end

or in YAML:

my_first_experiment:
  alternatives:
    - a
    - b
  metadata:
    a:
      text: "Have a fantastic day"
    b:
      text: "Don't get hit by a bus"

This allows for some advanced experiment configuration using methods like:

trial.alternative.name # => "a"

trial.metadata['text'] # => "Have a fantastic day"

or in views:

<% ab_test("my_first_experiment") do |alternative, meta| %>
  <%= alternative %>
  <small><%= meta['text'] %></small>
<% end %>

The keys used in metadata should be Strings.

Metrics

You might wish to track generic metrics, such as conversions, and use those to complete multiple different experiments without adding more to your code. You can use the configuration hash to do this, thanks to the :metric option.

Split.configure do |config|
  config.experiments = {
    my_first_experiment: {
      alternatives: ["a", "b"],
      metric: :my_metric
    }
  }
end

Your code may then track a completion using the metric instead of the experiment name:

ab_finished(:my_metric)

You can also create a new metric by instantiating and saving a new Metric object.

Split::Metric.new(name: :my_metric).save

Goals

You might wish to allow an experiment to have multiple, distinguishable goals. The API to define goals for an experiment is this:

ab_test({link_color: ["purchase", "refund"]}, "red", "blue")

or you can define them in a configuration file:

Split.configure do |config|
  config.experiments = {
    link_color: {
      alternatives: ["red", "blue"],
      goals: ["purchase", "refund"]
    }
  }
end

To complete a goal conversion, you do it like:

ab_finished(link_color: "purchase")

Note that if you pass additional options, that should be a separate hash:

ab_finished({ link_color: "purchase" }, reset: false)

NOTE: This does not mean that a single experiment can complete more than one goal.

Once you finish one of the goals, the test is considered to be completed, and finishing the other goal will no longer register. (Assuming the test runs with reset: false.)

Good Example: Test if listing Plan A first results in more conversions to Plan A (goal: "plana_conversion") or Plan B (goal: "planb_conversion").

Bad Example: Test if button color increases conversion rate through multiple steps of a funnel. THIS WILL NOT WORK.

Bad Example: Test both how button color affects signup and how it affects login, at the same time. THIS WILL NOT WORK.

Combined Experiments

If you want to test how button color affects signup and how it affects login at the same time, use combined experiments. Configure like so:

Split.configuration.experiments = {
  button_color_experiment: {
    alternatives: ["blue", "green"],
    combined_experiments: ["button_color_on_signup", "button_color_on_login"]
  }
}

Starting the combined test starts all combined experiments:

ab_combined_test(:button_color_experiment)

Finish each combined test as normal:

ab_finished(:button_color_on_login)
ab_finished(:button_color_on_signup)

Additional Configuration:

  • Be sure to enable allow_multiple_experiments
  • In Sinatra, include the CombinedExperimentsHelper:

    helpers Split::CombinedExperimentsHelper

DB failover solution

Because Redis has no automatic failover mechanism, you can switch on the db_failover config option so that ab_test and ab_finished will not crash in case of a Redis failure. In that case, ab_test always delivers alternative A (the first one).

It's also possible to set a db_failover_on_db_error callback (proc) for example to log these errors via Rails.logger.
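Put together in an initializer, this might look like (a minimal sketch mirroring the db_failover options shown in the Configuration section):

```ruby
Split.configure do |config|
  config.db_failover = true # serve the first alternative when Redis is down
  config.db_failover_on_db_error = -> (error) { Rails.logger.error(error.message) }
end
```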

Redis

You may want to change the Redis host and port Split connects to, or set various other options at startup.

Split has a redis setter which can be given a string or a Redis object. This means if you're already using Redis in your app, Split can re-use the existing connection.

String: Split.redis = 'redis://localhost:6379'

Redis: Split.redis = $redis

For our Rails app, we have a config/initializers/split.rb file where we load config/split.yml by hand and set the Redis information appropriately.

Here's our config/split.yml:

development: redis://localhost:6379
test: redis://localhost:6379
staging: redis://redis1.example.com:6379
fi: redis://localhost:6379
production: redis://redis1.example.com:6379

And our initializer:

split_config = YAML.load_file(Rails.root.join('config', 'split.yml'))
Split.redis = split_config[Rails.env]

Redis Caching (v4.0+)

In some high-volume usage scenarios, Redis load can be incurred by repeated fetches for fairly static data. Enabling caching will reduce this load.

Split.configuration.cache = true

This currently caches:

  • Split::ExperimentCatalog.find
  • Split::Experiment.start_time
  • Split::Experiment.winner

Namespaces

If you're running multiple, separate instances of Split you may want to namespace the keyspaces so they do not overlap. This is not unlike the approach taken by many memcached clients.

This feature can be provided by the redis-namespace library. To configure Split to use Redis::Namespace, do the following:

  1. Add redis-namespace to your Gemfile:

     gem 'redis-namespace'

  2. Configure Split.redis to use a Redis::Namespace instance (for example, in an initializer):

     redis = Redis.new(url: ENV['REDIS_URL']) # or whatever config you want
     Split.redis = Redis::Namespace.new(:your_namespace, redis: redis)

Outside of a Web Session

Split provides the Helper module to facilitate running experiments inside web sessions.

Alternatively, you can access the underlying Metric, Trial, Experiment and Alternative objects to conduct experiments that are not tied to a web session.

# create a new experiment
experiment = Split::ExperimentCatalog.find_or_create('color', 'red', 'blue')

# find the user
user = Split::User.find(user_id, :redis)

# create a new trial
trial = Split::Trial.new(user: user, experiment: experiment)

# run trial
trial.choose!

# get the result, returns either red or blue
trial.alternative.name

# if the goal has been achieved, increment the successful completions for this alternative.
if goal_achieved?
  trial.complete!
end

Algorithms

By default, Split ships with Split::Algorithms::WeightedSample, which randomly selects from the possible alternatives for a traditional A/B test. It is possible to specify static weights to favor certain alternatives.

Split::Algorithms::Whiplash is an implementation of a multi-armed bandit algorithm. This algorithm will automatically weight the alternatives based on their relative performance, choosing the better-performing ones more often as trials are completed.

Split::Algorithms::BlockRandomization is an algorithm that ensures equal participation across all alternatives. This algorithm will choose the alternative with the fewest participants. In the event of multiple minimum participant alternatives (i.e. starting a new "Block") the algorithm will choose a random alternative from those minimum participant alternatives.

Users may also write their own algorithms. The default algorithm may be specified globally in the configuration file, or on a per experiment basis using the experiments hash of the configuration file.
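As an illustration, a custom algorithm is an object that responds to a class-level choose_alternative method, receiving the experiment and returning one of its alternatives; this mirrors how the built-in algorithms are invoked, but UniformSample below is a hypothetical sketch, not part of Split:

```ruby
# A hypothetical algorithm that picks a uniformly random alternative,
# ignoring weights and past performance. It exposes the same
# choose_alternative(experiment) entry point as the built-in algorithms.
module UniformSample
  def self.choose_alternative(experiment)
    experiment.alternatives.sample
  end
end
```

It could then be set per experiment via the algorithm key in the experiments hash, or globally as shown below.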

To change the algorithm globally for all experiments, use the following in your initializer:

Split.configure do |config|
  config.algorithm = Split::Algorithms::Whiplash
end

Extensions

Screencast

Ryan Bates has produced an excellent 10 minute screencast about Split on the RailsCasts site: A/B Testing with Split

Blogposts

Backers

Support us with a monthly donation and help us continue our activities. [Become a backer]

Sponsors

Become a sponsor and get your logo on our README on Github with a link to your site. [Become a sponsor]

Contribute

Please do! Over 70 different people have contributed to the project; you can see them all here: https://github.com/splitrb/split/graphs/contributors.

Development

The source code is hosted at GitHub.

Report issues and feature requests on GitHub Issues.

You can find a discussion forum on Google Groups.

Tests

Run the tests like this:

# Start a Redis server in another tab.
redis-server

bundle
rake spec

A Note on Patches and Pull Requests

  • Fork the project.
  • Make your feature addition or bug fix.
  • Add tests for it. This is important so I don't break it in a future version unintentionally.
  • Add documentation if necessary.
  • Commit. Do not mess with the rakefile, version, or history. (If you want to have your own version, that is fine. But bump the version in a commit by itself, which I can ignore when I pull.)
  • Send a pull request. Bonus points for topic branches.

Code of Conduct

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.

Copyright

MIT License © 2019 Andrew Nesbitt.

/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/split-0.4.5/lib/split/dashboard/views/index.erb in each
  <% @experiments.each do |experiment| %>
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/split-0.4.5/lib/split/dashboard/views/index.erb in evaluate_source
  <% @experiments.each do |experiment| %>
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/tilt-1.3.3/lib/tilt/template.rb in instance_eval
      scope.instance_eval(source, eval_file, line - offset)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/tilt-1.3.3/lib/tilt/template.rb in evaluate_source
      scope.instance_eval(source, eval_file, line - offset)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/tilt-1.3.3/lib/tilt/template.rb in cached_evaluate
      evaluate_source(scope, locals, &block)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/tilt-1.3.3/lib/tilt/template.rb in evaluate
      cached_evaluate(scope, locals, &block)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/tilt-1.3.3/lib/tilt/template.rb in render
      evaluate scope, locals || {}, &block
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in render
        output          = template.render(scope, locals, &block)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in erb
      render :erb, template, options, locals
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/split-0.4.5/lib/split/dashboard.rb in block in <class:Dashboard>
      erb :index
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in call
            proc { |a,p| unbound_method.bind(a).call } ]
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in block in compile!
            proc { |a,p| unbound_method.bind(a).call } ]
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in []
            route_eval { block[*args] }
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in block (3 levels) in route!
            route_eval { block[*args] }
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in route_eval
      throw :halt, yield
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in block (2 levels) in route!
            route_eval { block[*args] }
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in block in process_route
        block ? block[self, values] : yield(self, values)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in catch
      catch(:pass) do
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in process_route
      catch(:pass) do
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in block in route!
          pass_block = process_route(pattern, keys, conditions) do |*args|
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in each
        routes.each do |pattern, keys, conditions, block|
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in route!
        routes.each do |pattern, keys, conditions, block|
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in dispatch!
      route!
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in block in call!
      invoke { dispatch! }
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in block in invoke
      res = catch(:halt) { yield }
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in catch
      res = catch(:halt) { yield }
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in invoke
      res = catch(:halt) { yield }
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in call!
      invoke { dispatch! }
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in call
      dup.call!(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-1.4.1/lib/rack/auth/basic.rb in call
          return @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-protection-1.2.0/lib/rack/protection/xss_header.rb in call
        status, headers, body = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-protection-1.2.0/lib/rack/protection/path_traversal.rb in call
        app.call env
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-protection-1.2.0/lib/rack/protection/json_csrf.rb in call
        status, headers, body = app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-protection-1.2.0/lib/rack/protection/base.rb in call
        result or app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-protection-1.2.0/lib/rack/protection/xss_header.rb in call
        status, headers, body = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-1.4.1/lib/rack/nulllogger.rb in call
      @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-1.4.1/lib/rack/head.rb in call
    status, headers, body = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-1.4.1/lib/rack/methodoverride.rb in call
      @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/showexceptions.rb in call
      @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in call
      result, callback = app.call(env), env['async.callback']
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in block in call
        synchronize { prototype.call(env) }
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in synchronize
          yield
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sinatra-1.3.3/lib/sinatra/base.rb in call
        synchronize { prototype.call(env) }
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/journey-1.0.3/lib/journey/router.rb in block in call
        status, headers, body = route.app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/journey-1.0.3/lib/journey/router.rb in each
      find_routes(env).each do |match, parameters, route|
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/journey-1.0.3/lib/journey/router.rb in call
      find_routes(env).each do |match, parameters, route|
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/actionpack-3.2.1/lib/action_dispatch/routing/route_set.rb in call
        @router.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/omniauth-1.1.0/lib/omniauth/strategy.rb in call!
      @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/omniauth-1.1.0/lib/omniauth/strategy.rb in call
      dup.call!(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/omniauth-1.1.0/lib/omniauth/builder.rb in call
      to_app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/sass-3.1.15/lib/sass/plugin/rack.rb in call
        @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/warden-1.1.1/lib/warden/manager.rb in block in call
        @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/warden-1.1.1/lib/warden/manager.rb in catch
      result = catch(:warden) do
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/warden-1.1.1/lib/warden/manager.rb in call
      result = catch(:warden) do
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/actionpack-3.2.1/lib/action_dispatch/middleware/best_standards_support.rb in call
      status, headers, body = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-1.4.1/lib/rack/etag.rb in call
      status, headers, body = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-1.4.1/lib/rack/conditionalget.rb in call
        status, headers, body = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/actionpack-3.2.1/lib/action_dispatch/middleware/head.rb in call
        @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/actionpack-3.2.1/lib/action_dispatch/middleware/params_parser.rb in call
      @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/actionpack-3.2.1/lib/action_dispatch/middleware/flash.rb in call
      @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-1.4.1/lib/rack/session/abstract/id.rb in context
          status, headers, body = app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-1.4.1/lib/rack/session/abstract/id.rb in call
          context(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/actionpack-3.2.1/lib/action_dispatch/middleware/cookies.rb in call
      status, headers, body = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/activerecord-3.2.1/lib/active_record/query_cache.rb in call
      status, headers, body = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/activerecord-3.2.1/lib/active_record/connection_adapters/abstract/connection_pool.rb in call
        status, headers, body = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/actionpack-3.2.1/lib/action_dispatch/middleware/callbacks.rb in block in call
        @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/activesupport-3.2.1/lib/active_support/callbacks.rb in _run__810018673512189422__call__2626213606992483835__callbacks
        object.send(name, &blk)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/activesupport-3.2.1/lib/active_support/callbacks.rb in __run_callback
        object.send(name, &blk)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/activesupport-3.2.1/lib/active_support/callbacks.rb in _run_call_callbacks
              self.class.__run_callback(key, :#{symbol}, self, &blk)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/activesupport-3.2.1/lib/active_support/callbacks.rb in run_callbacks
      send("_run_#{kind}_callbacks", *args, &block)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/actionpack-3.2.1/lib/action_dispatch/middleware/callbacks.rb in call
      run_callbacks :call do
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/actionpack-3.2.1/lib/action_dispatch/middleware/reloader.rb in call
      response = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/actionpack-3.2.1/lib/action_dispatch/middleware/remote_ip.rb in call
      @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/actionpack-3.2.1/lib/action_dispatch/middleware/debug_exceptions.rb in call
        response = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/actionpack-3.2.1/lib/action_dispatch/middleware/show_exceptions.rb in call
        response  = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/railties-3.2.1/lib/rails/rack/logger.rb in call_app
        @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/railties-3.2.1/lib/rails/rack/logger.rb in call
          call_app(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/actionpack-3.2.1/lib/action_dispatch/middleware/request_id.rb in call
      status, headers, body = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-1.4.1/lib/rack/methodoverride.rb in call
      @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-1.4.1/lib/rack/runtime.rb in call
      status, headers, body = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/activesupport-3.2.1/lib/active_support/cache/strategy/local_cache.rb in call
            @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-1.4.1/lib/rack/lock.rb in call
      response = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/actionpack-3.2.1/lib/action_dispatch/middleware/static.rb in call
      @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/railties-3.2.1/lib/rails/engine.rb in call
      app.call(env.merge!(env_config))
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/railties-3.2.1/lib/rails/application.rb in call
      super(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-1.4.1/lib/rack/content_length.rb in call
      status, headers, body = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/railties-3.2.1/lib/rails/rack/log_tailer.rb in call
        response = @app.call(env)
/Users/gabeodess/.rvm/gems/ruby-1.9.2-p318@cityshare/gems/rack-1.4.1/lib/rack/handler/webrick.rb in service
        status, headers, body = @app.call(env)
/Users/gabeodess/.rvm/rubies/ruby-1.9.2-p318/lib/ruby/1.9.1/webrick/httpserver.rb in service
      si.service(req, res)
/Users/gabeodess/.rvm/rubies/ruby-1.9.2-p318/lib/ruby/1.9.1/webrick/httpserver.rb in run
          server.service(req, res)
/Users/gabeodess/.rvm/rubies/ruby-1.9.2-p318/lib/ruby/1.9.1/webrick/server.rb in block in start_thread
          block ? block.call(sock) : run(sock)
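
Not part of the original report: the backtrace shows Time.parse choking on whatever start-time string was stored in Redis. A defensive sketch (hypothetical helper, not Split's actual code) would rescue parse failures so the dashboard template can fall back to 'Unknown':

```ruby
require "time"

# Hypothetical guard: tolerate malformed start-time strings instead of
# letting Time.parse raise and take down the whole dashboard page.
def safe_start_time(raw)
  Time.parse(raw) if raw
rescue ArgumentError, RangeError
  nil # the _experiment.erb template then renders 'Unknown'
end
```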

Resetting an experiment doesn't reset existing participants

If you reset an experiment, it rolls the alternative counters back to zero, but it doesn't empty the session for users who already have an alternative set in their cookie. Those users can then finish the now-reset experiment without their participation being measured.

This could, in the extreme case, cause an infinite conversion rate if 0 users participated but 1 finished!

I suggest we put a version in the cookie and increment it when resetting the experiment. The more extreme alternative would be to store the user's session in Redis, but that adds more overhead.
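
The versioning suggestion could be sketched like this (hypothetical helper names, not Split's API):

```ruby
# Store a version counter per experiment; bumping it on reset makes
# keys saved in old session cookies no longer match the live key.
def experiment_key(name, version)
  version > 0 ? "#{name}:#{version}" : name
end

def reset_experiment(versions, name)
  versions[name] = versions.fetch(name, 0) + 1
end
```

A cookie still holding the unversioned key would then miss the current key after a reset, so the user is re-enrolled and counted as a fresh participant.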

undefined method `ab_test' for #<#<Class:0x007fcc0cfb19b0>:0x007fcc0e5f8de0>

If I call ab_test in a view I get the exception mentioned.

Including Split::Helper in my ApplicationHelper solves this, but I'm not sure of the cause.

I'm presuming it's something to do with the load order of Split and the `if defined?(Rails)` check not getting executed, but I can't seem to get to the bottom of it, so no failing test case at the moment.

Have you encountered this before?

Finished can be called multiple times

This is just a suggestion, not necessarily a bug. For example, say I have a header that I ab_test on the home page, and on the signup page I have a finished action. If the user reloads the signup page, it gets counted twice as finished and skews the numbers. Personally I would rather track the finish event per user so this couldn't happen, but maybe that's just me. I realize this adds a much higher data requirement.
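
One way to approximate the suggestion without extra server-side storage, assuming a session hash is available (hypothetical helper, not part of Split):

```ruby
# Flag the session on first completion so a page reload doesn't
# count the same conversion twice.
def finish_once(session, experiment_name)
  key = "split_finished_#{experiment_name}"
  return false if session[key] # already counted; skip
  session[key] = true
  true # caller should now record the conversion for real
end
```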

Thanks

Experiment version number gets crazy when resetting the experiment

Hi,

this is a bug that I have not been able to reproduce reliably because it does not happen every time. Basically, when I reset an experiment the version number keeps incrementing and never stops...
To fix the problem I had to delete the experiment and recreate it.

Any idea where this could come from?
Greg

Ability to change variants?

Hello,

I have the following scenario:

  • Previously I was testing experiment "landing_page" with variants "original" and "new_copy".
  • We decided to run a new test.
  • We replaced this test with a new one, same experiment name "landing_page" with variants "original" and "screencast_and_testimonials".
  • Split will now only serve variant "original"
  • If you visit http://domain.com/?landing_page=screencast_and_testimonials, the screencast_and_testimonials variant is displayed to the user. However, we use the following code to record the AB variant into KISSmetrics, and when we manually request screencast_and_testimonials, it was being recorded as new_copy (variant #2 from the prior experiment):
  <% Split::Experiment.all.each do |experiment| %>
    <% km.set(experiment.name, ab_test(experiment.name, *experiment.alternative_names)) %>
  <% end %>

Is this expected behavior? I.e., should we not reuse an experiment name with a new set of variants?

(As an aside, is there a better way to record the variants into KM other than that snippet?)

Thanks,
-Jason

Allow overriding of visitor share on a per experiment basis

At the moment every alternative is given an equal share of the users: with two alternatives each receives 50%, with three, 33.3% each, and so on.

If you are introducing a new feature and want to test whether it improves conversion, but you are unsure of its effects, you might want only 10% of the traffic to see that alternative while the other 90% go through the control. You should be able to specify the share when defining the test.
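
A weighted assignment along those lines might look like this sketch (hypothetical, not Split's internals):

```ruby
# Pick an alternative in proportion to its weight, e.g. a 90%/10%
# split between control and a risky new feature.
def pick_weighted(alternatives)
  point = rand * alternatives.values.sum
  alternatives.each do |name, weight|
    return name if point < weight
    point -= weight
  end
  alternatives.keys.last # guard against floating-point drift
end

# Usage: pick_weighted('control' => 0.9, 'new_feature' => 0.1)
```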

Custom session stores

You may wish to store the user's A/B session data somewhere other than the Rack session.

For example: against the currently logged-in user's database record, or in a cookie shared between subdomains.

Split the goal from the experiment name and allow goal tracking.

Say that I run an ecommerce site and the conversion is a purchase. Now assume that I'm using split to test a whole host of things, from which landing page to give the site visitor to which checkout form layout to which promotion to display.

The problem is that right now I need to do this on completion in my checkout controller:

 finished('select_lander')
 finished('checkout_form')
 finished('promotion_pick')

which is a bit insane. The DSL would be much cleaner if we could specify a NAME for an experiment and a GOAL, so, for example, we could do this:

ab_test('select_lander','purchase',option1, option2, etc) 

or

ab_test(name: 'select_lander', goal: 'purchase', alternatives: [option1, option2, ...])

and then simply

finished('purchase')

or better

finished!('purchase') to indicate that it was an obtained goal.

Unless this functionality already exists and I'm missing it. Thanks.
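
The proposed NAME/GOAL split could be sketched with a simple registry (entirely hypothetical, not Split's API):

```ruby
# Map a goal name to the experiments that share it, so one finish
# call can complete all of them at once.
GOALS = Hash.new { |h, k| h[k] = [] }

def register_goal(experiment, goal)
  GOALS[goal] << experiment
end

def experiments_for_goal(goal)
  GOALS[goal] # caller would invoke `finished` on each of these
end
```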

Split vs. Vanity?

Hi!
First, thank you very much for this really useful tool!
I'm currently evaluating whether I should choose Vanity or Split for a Rails 3 project. You mention that Split is heavily inspired by Vanity, but what exactly would be the reason to go with Split instead of Vanity? Is there any documentation describing the differences and advantages?

Thanks a lot!
Michael

db_failover fails using Redis 3.0.1

I'm using Redis 3.0.1. When a Redis connection is refused, it throws a Redis::CannotConnectError, not an Errno::ECONNREFUSED, so the exception still bubbles up and kills the app.

I suggest using a catch-all exception handler when db_failover is enabled, since it's quite important for production systems to "keep calm and carry on", and I don't want to chase down every exception this and future Redis versions could throw (timeout exceptions? peer-disconnected exceptions?).
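
The catch-all behaviour the reporter asks for could be sketched generically (hypothetical wrapper; this is not Split's actual db_failover implementation):

```ruby
# Serve the control whenever the datastore raises anything, so an
# unreachable Redis degrades the A/B test instead of killing the app.
def with_failover(control)
  yield
rescue StandardError => e
  warn "split: datastore error (#{e.class}), serving control"
  control
end
```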

ab_test only output the alternative, it does not process the given block

The ab_test code block does not work as advertised. In the doc it says that:

<% ab_test("login_button", "/images/button1.jpg", "/images/button2.jpg") do |button_file| %>
And this line will not be shown either...
<%= img_tag(button_file, :alt => "Login!") %>
<% end %>

should output something like:

<img src='/images/button1.jpg'> or <img src='/images/button2.jpg'>

but the only thing it outputs is either

/images/button1.jpg or /images/button2.jpg

It seems like it ignores the block content and only outputs the alternative string.

Overriding alternatives not working (includes fix)

I couldn't get the alternative override to work consistently, and I think I've found out why: when Split::Helper#ab_test looks for an override, it looks for a URL param corresponding to the experiment key rather than the experiment name, and the key sometimes contains the version:

https://github.com/jasonm/split/blob/edc42130ff7c64d9e5c0761a8e5abe4fa839a14b/lib/split/experiment.rb#L51-57

I've changed the override code to look for the experiment name instead of key:

jasonm@edc4213

I just spiked this out, and would like to provide test coverage, but wanted to make sure I understood the issue correctly first.

Thanks!

one type of ab testing per user

In recent Rails versions a session is created lazily, i.e. it is only created when first accessed.
If there is an ab_test in a view and no session is present, the same user could be presented with different A/B alternatives within the same "session"
(e.g. an e-commerce site where the basket is empty).

Put the dashboard in a separate gem

If I don't want to enable the dashboard then my project should not need to depend on Sinatra.

If the dashboard were moved into a separate gem, e.g. split-dashboard, it could be an optional install.

Only one experiment per session

This may be related to #5, but when I was reading through the code base it seemed that Split allows more than one experiment at the same time.

If this is correct, I would think it could allow for skewed results, as experiments may inadvertently affect each other. IMHO, the ab_test method should check whether an experiment is already in place, and return its control if so.

Just my two cents. Maybe a config option would be a good tool for this.

Cheers!

weighted averages not working?

I'm setting up A/B testing for the first time and I'm wondering if I am misunderstanding how it works.

I have a page set up where the current option is weighted 1 and the new option is weighted 10. If I reload the page repeatedly, I should see the new option 9 times out of 10, right? Currently, it seems to pick one option on initial load and stick with it forever. Also, as I reload, the number of non-finished experiments doesn't increment. Should it?

Where does the test get started?

I'm loading a page with jQuery's load() method. It still hits the Rails controller for the HTML, passes it back, and jQuery loads a particular div within the page.

In development, I can pass in the ab_test query param and get the proper layout. The test is 'product_cart_form' and my alternative is 'flipped'. The div being rendered is 'insertion', which contains my ab_test.

load(href + '?product_cart_form=flipped .insertion')

This hits the Rails controller and properly displays the right test HTML. However, the test never shows up unless I pass the parameter in, and it never finishes correctly, always going to the default.

Is the test not initialized in the controller? If not, can I manually start it, either via javascript or in the rails controller?

EDIT:

The ab_test was occurring in a partial one level deeper than the div being rendered by jQuery. When I switched to testing two different partials instead of using an ab_test block inside one partial, it worked flawlessly. Not sure what to make of the prior problem.

confidence_level helper raising an error for some numbers.

While testing Split I ran into an error when opening the dashboard, because the Complex class was not able to convert the number to a float. The number is 2.222670197524858e-18-0.03629895899899249i.

If you try on irb

n = Complex "2.222670197524858e-18-0.03629895899899249i"
n.to_f #error

this error is shown: RangeError: can't convert 2.222670197524858e-18-0.03629895899899249i into Float

although it doesn't happen if you convert it to a float from a string:

n = "2.222670197524858e-18-0.03629895899899249i"
n.to_f #2.222670197524858e-18
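
A minimal guard for the dashboard math, assuming the z-score computation can yield a Complex with a vanishing real part (hypothetical helper, not Split's code):

```ruby
# Take the real part before converting, so a near-zero Complex result
# doesn't raise RangeError in #to_f.
def to_float_safe(n)
  n.is_a?(Complex) ? n.real.to_f : n.to_f
end
```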

Alternatives' weights are reset to 50% after first user

I may be clueless about how weighting works, but this is what I have

    enable = ab_test( "enable_jbc", {'false' => 0.99}, {'true' => 0.01} )

I would expect enable to be 99% false, but it's always about 50%. I did some digging and found that the following line of code returns alternatives with equal weights (e.g. 1) after the experiment is created in Redis.

        experiment = Split::Experiment.find_or_create(experiment_name, *alternatives)

So only the first user gets alternatives with the specified weights; after that, all alternatives have equal weights.

Using symbols as experiment names

It doesn't look possible to use symbols as experiment names; is this correct? Would it be useful to be able to do that (perhaps to take advantage of terse Ruby 1.9 hash syntax)?

Some things I came to know by using split

These are a couple of things I found out (or so I may think) by using split for ab-testing. Please feel free to add or comment anything.

  • The old version should be given as the first argument in the ab_test call
    E.g. ab_test(experiment_name, old_version, new_version_1, new_version_2, ...)
    • This helps set it as the fallback (called the control) when the Redis connection fails
    • It also means every other alternative is compared statistically against it in the dashboard
  • How to stop an experiment
    • Stop the experiment but have the experiment code in place without it running?
      Simply select the alternative you want from the dashboard. Only the selected alternative will then be shown, and the experiment will stop counting participants.
    • Stop the experiment permanently?
      You will have to remove all the ab_test and finish code used for this experiment, then delete the experiment from the dashboard (to remove all its data from Redis).
    • Stop the experiment temporarily?
      There is currently (v0.4.1) no function to PAUSE an experiment, but you can always select an alternative to show temporarily (probably the one you have set as the control) without removing the ab_test code or the experiment.
      When you want to resume, simply click "Reset Data" or "Delete" in the dashboard. This removes all the data and restarts the experiment from scratch with a new version number.
  • Split Dashboard
    • Delete Button
      • It erases all the data assembled for the current experiment
      • Whenever it is clicked (whether an alternative is selected or not), if the ab_test code is still in use the test begins again with new data and a new version number
      • If you want to change the control alternative, deploy your code with that change and then delete the experiment from the dashboard. When the first user runs the test you will see the experiment with the correct order.
    • Reset Data Button
      • Pretty much what Delete does but without deleting the experiment board.
    • Use This Button
      • With this button you can select the option to be used and finalize the test
    • Control Attribute
      • The control alternative is the version that all the other alternatives are compared to
    • Version Attribute
      • It refers to how many times the experiment has been reset or deleted
    • Participants Column
      • Total users that have been given that alternative
    • Non-Finished Column
      • Users that have run the test but haven't finished it yet (Participants - Completed)
    • Completed Column
      • Total users that ran the test and followed every step until it was completed
    • Conversion Rate Column
      • The conversion rate has 2 parts
        • The percentage in black, calculated as the share of participants who finished the test.
        • The percentage in red or green, which is the comparison against the control alternative.
          This is calculated by: (Alternative's Conversion / Control's Conversion) - 1
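
The red/green comparison figure described above reduces to a one-liner (hypothetical helper name):

```ruby
# Relative change of an alternative's conversion rate vs. the control's:
# 0.2 means "20% better than control", -0.1 means "10% worse".
def relative_conversion(alt_rate, control_rate)
  (alt_rate / control_rate) - 1
end
```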

NameError: uninitialized constant Split::Experiment::InvalidArgument

When erroneously trying to use symbols or ints instead of strings as alternatives, I got

irb(main):017:0> ab_test('xyz', 1, 2, 3)
NameError: uninitialized constant Split::Experiment::InvalidArgument
    from /Users/przemek/.rbenv/versions/1.9.2-p290/lib/ruby/gems/1.9.1/gems/split-0.4.6/lib/split/experiment.rb:167:in `initialize_alternatives'
    from /Users/przemek/.rbenv/versions/1.9.2-p290/lib/ruby/gems/1.9.1/gems/split-0.4.6/lib/split/experiment.rb:143:in `find_or_create'
    from /Users/przemek/.rbenv/versions/1.9.2-p290/lib/ruby/gems/1.9.1/gems/split-0.4.6/lib/split/helper.rb:108:in `experiment_variable'
    from /Users/przemek/.rbenv/versions/1.9.2-p290/lib/ruby/gems/1.9.1/gems/split-0.4.6/lib/split/helper.rb:10:in `ab_test'
    from (irb):17
...

It looks like it's supposed to raise an InvalidArgument exception with a meaningful message, but instead a NameError is raised. InvalidArgument is used in two places in experiment.rb, lines 139 & 167.

Bug when having multiple experiments

I currently have 2 experiments running locally to test them.

What I did:

  1. I used one alternative for each one of them
  2. I kept using my application as a user
  3. I reset both of them
  4. I deleted both of them

Then, when I reran my application, only one of the experiments ran; the other appeared to be closed no matter how many times I visited it.

Has anybody encountered this before?

Rails AB testing with “split” gem: Negative numbers on non-finished…?

Usage is quite simple:

some_signin_view_file.erb:

<% signin_mode = ab_test( 'log in style','LogIn_ATest','LogIn_BTest' )%>
.
.
Do something according to signin_mode...
and
some_post_signin_controller_file.rb:

finished("log in style", :reset => false)
Did one simple test with no problems. However, my second test yielded negative numbers in the non-finished columns, and only in the first experiment (marked as control).

How can it be negative...? Am I missing something?

[EDIT: remove a non working workaround that was here...]

see also http://stackoverflow.com/questions/9964357/rails-ab-testing-with-split-gem-negative-numbers-on-non-finished/9972546#9972546

suggestion: have one ab_test active in more than one place

For a feature more important than the color of a button, the A/B test could intervene in more than one place (several controllers/views). It could be useful to have the same ab_test operate the same switch throughout the whole session.
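This is essentially what the helper's sticky behaviour already gives you, since repeated calls with the same experiment name return the stored alternative. A self-contained sketch of the idea, using a plain Hash in place of the real session store (`ab_test_sketch` is illustrative, not the gem's helper):

```ruby
# The first call picks an alternative; later calls anywhere in the app
# (another controller, another view) return the stored one, so every
# location sees the same switch for the whole session.
def ab_test_sketch(session, experiment, *alternatives)
  session[experiment] ||= alternatives.sample
end

session = {}
first  = ab_test_sketch(session, 'signup_flow', 'short', 'long')
second = ab_test_sketch(session, 'signup_flow', 'short', 'long')  # e.g. another controller
puts first == second  # => true
```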

Alternatives as hashes are not ordered in ruby 1.8

The order of keys in a hash in Ruby 1.8 is not always the same as when it was created (unlike in Ruby 1.9).

This means that when specifying alternatives as a single hash (for example when passing weights), the order of the alternatives, and therefore which one is treated as the control, can vary between runs.

The two fixes I can see are:

  • Using ActiveSupport::OrderedHash in ruby 1.8 to patch the hash ordering (not ideal)
  • Deprecating passing a single hash; instead pass multiple hashes for the alternatives, at least two, with the first as the control and the rest as the other alternatives
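The second option could look like this. The call shape is a sketch of the proposal, not the current API; the parsing below just shows that an array of single-entry hashes keeps the order and weights unambiguous:

```ruby
# Proposed shape: several single-entry hashes instead of one multi-key hash,
# so ordering (and therefore the control) is stable even on Ruby 1.8.
alternatives = [{ 'control' => 2 }, { 'variant_a' => 1 }, { 'variant_b' => 1 }]

names   = alternatives.map { |h| h.keys.first }
weights = alternatives.map { |h| h.values.first }
puts names.first      # => control  (the first hash is always the control)
puts weights.inspect  # => [2, 1, 1]
```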

Suggestion: lifetime split-data cookie

I'm using Split to test whether placing more ads on my Rails site results in fewer returning visitors.

To test this I need to save the split session data for a lifetime in a cookie (i.e. 1 year +). The session that is used now is easily erased when a user logs in/logs out, or closes the browser, or after a few days, depending on the parameters used for session expiration.

I created a fork of Split for my own use, which uses the Rails cookies.signed construct together with a custom expiration date. It was a quick hack without tests or documentation, and I'm pretty sure Sinatra doesn't have the same construct, so I'm not creating a pull request for it since I don't deem it gem-worthy, but I think others would benefit from having a similar feature.

Hence, this feature suggestion.

Here's how I'm doing it, note that in an earlier commit I added the ability to set cookie_domain and cookie_expiration in the Split config block: dv@b66348b

Gracefully handle unstarted experiments in #finish helper method

When you add a new experiment to your code and deploy, there is currently a race condition.

If a visitor triggers the finish helper for the experiment before any visitor triggers the creation of the experiment, the finish method raises an exception.

Split should handle this without raising an exception.
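Until that lands, a caller-side workaround can swallow the error. In this self-contained sketch, `ab_finished` is a stub that raises the way the real helper does in this race; `safe_ab_finished` is a hypothetical wrapper name:

```ruby
# Stand-in stub: simulates the real helper raising when the experiment
# has not been created by any ab_test call yet.
def ab_finished(name)
  raise "experiment #{name} not found"
end

# Caller-side workaround: treat a finish on an unstarted experiment as a
# no-op instead of letting the exception bubble up to the visitor.
def safe_ab_finished(name)
  ab_finished(name)
rescue StandardError
  nil
end

puts safe_ab_finished('new_experiment').inspect  # => nil, no exception raised
```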

option to disable split tests

I think it would be useful to have an option to disable split testing. This would help in continuous integration, where it's not necessary to always run split tests, and it would save on extra Redis calls.
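One possible shape for this is a configuration flag. Note this is a sketch of the proposal; check whether your Split version actually supports an `enabled` setting before relying on it:

```ruby
# Proposed/possible configuration flag; verify your Split version supports it.
Split.configure do |config|
  config.enabled = false  # ab_test would short-circuit, skipping Redis calls
end
```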

Extract database connection logic

I would like to be able to provide adapters for different databases, such as mongodb, mysql.

But first that will require extracting the Split.redis method and its calls out into a database API of some sort.

Retrieve ab_users based on a Proc as an alternative to storing in session data

We would like to be able to consider multiple client-side users as the same backend user (say, make it so one version of an experiment is shown for all users associated with a given account).

To make it as useful as possible, passing a Proc into Split may be the best way to accomplish this. If the Proc is passed in the configuration block, it replaces the #ab_user method's functionality. In our case, we would make a database call to see if user A is part of the same account as user B and return the same split session for both.
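A sketch of what that configuration could look like. Everything here is hypothetical: `resolve_identity` is not a current Split option, and the account lookup is an assumed app-side call:

```ruby
# Hypothetical setting: derive the split identity from the account, so every
# user on the same account is bucketed into the same alternatives.
Split.configure do |config|
  config.resolve_identity = lambda do |context|
    "account:#{context.current_user.account_id}"  # assumed app-side lookup
  end
end
```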

Incorrect Participant Count

I'm running split v0.4.0 with rails 3.2.3 on Heroku, and it seems to count each individual impression as a participant.

Any ideas on how to change this so it counts each session as a participant?

Detect Mobile

I'd like to be able to run certain tests only if the user is using a mobile device. Don't see this documented anywhere so I am assuming it would need to be added. I'd be glad to add Rack Mobile Detect and a convenience method in the right place, but want to run it by you all before I do so to make sure that is the approach you would suggest taking.

Thanks,
Doug

RangeError when trying to access dashboard

Hello,

I'm getting the following error when I try to access the dashboard at /split:

RangeError at /split
can't convert -8.659560562354933e-17+1.414213562373095i into Float
file: dashboard.rb location: to_f line: 32

Does this mean I've somehow setup the experiment wrong?

Thanks,
Ryan

Allow true/false ab_tests

It would be cool if ab_test returned true/false by default when no alternatives are given, so that a code block defining the experiment is either run or not.
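A self-contained sketch of the proposal; `ab_test_sketch` is illustrative, not the gem's helper. With no alternatives given, it falls back to `[true, false]`:

```ruby
# When no alternatives are passed, default to a true/false experiment.
# `session.key?` (rather than ||=) is needed so a stored `false` sticks.
def ab_test_sketch(session, experiment, *alternatives)
  alternatives = [true, false] if alternatives.empty?
  session[experiment] = alternatives.sample unless session.key?(experiment)
  session[experiment]
end

session = {}
if ab_test_sketch(session, 'new_signup_flow')
  puts 'running the experimental code block'
end
```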

Handling weights of alternatives

Is it possible to manage the weights of the alternatives from the dashboard, so as to avoid changing the code behind it?

Thanks in advance.

Bots and humans

It's great that we can filter on bot user agents, but most bots forge their user agent. :(

One feature I liked in ABingo was their human detection support. It would be great to track participants and conversions, but not really include the data in the totals until the participant is proven to be human. With ABingo I would create a javascript callback that would indicate that the js was executed (which was inferred as a human) and it worked great.

It would be good to add support for something like this:

Split.is_human?
Split.human!

I also may be able to help with this if others also feel like its useful.

Cheers!
