
champaign's Introduction

SumOfUs Development Wiki

A repository of ideas and documentation, for SumOfUs developers.

Please, read and contribute!

Open an issue if there's anything you'd like added, so we can prioritise.

champaign's People

Contributors

beathan, bmenant, ceciliarivero, chrislabarge, dependabot-preview[bot], dependabot-support, dependabot[bot], edubsky, ericboersma, eyko, greysteil, michaelt372, nealjmd, osahyoun, pascalbetz, rodrei, saraf22, shivashankar-f22labs, shivashankar-ror, subbiahsn, tahaf22labs, transifex-integration[bot], tuuleh, venkateshf22, vincemtnz, woodhull, yeseniamolinab


champaign's Issues

Updating and deleting Variants

I spent a little time exploring the SP API manually tonight. I realized the underlying assumption of the API is that if a field isn't referenced in the request, it shouldn't be updated or deleted. The same goes for variants (as they're fields of buttons) and the subfields of the variants.

What does this mean for us?

  • Independent update of Variants. If my_variant.save calls @button.save, it will save all the other Variants and the Button itself, which is potentially confusing. Instead, my_variant.save can make an API call directly to Button.update(id: @button.id, variants: {type => [serialize]}), which will only update my_variant on the server.
  • Variant errors. There's actually more validation on Variants than on anything else, and since a campaigner may be editing several variants at once, it's important to attach errors to the variant itself (not to the button, as they would be if we called @button.save). Making the call directly from the Variant save method lets us add the errors to the Variant itself.
  • Deleting variants means an API call. Right now, calling VariantCollection.remove(my_variant) just removes the variant from the collection. Instead, it should call my_variant.destroy, and we should add a destroy method to Variant instances that sets :_destroy to true, makes the save call, and calls @button.update_attributes(response) to remove the instance from the collection.
  • No required params for variants. After playing with it, I wasn't able to find any params that were required for any of the variant types; they all just default to null. In light of that, and because calling Variant.new without fields has been useful for testing, I reckon that new and update_attributes should just take a params hash as they do now, rather than having required params as they used to.
  • :_destroy and :errors are going to need getters and (for _destroy) a setter
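The proposed destroy flow can be sketched in plain Ruby against a stubbed button API. FakeButtonApi here is hypothetical stand-in code, not the real ShareProgress client:

```ruby
# Stubbed server: holds variants keyed by id, and mimics the SP
# convention that a variant sent with _destroy: true is removed.
class FakeButtonApi
  def initialize
    @variants = { 1 => { id: 1, type: 'facebook' }, 2 => { id: 2, type: 'facebook' } }
  end

  # Only the variants referenced in the request are touched;
  # siblings stay as they are. Returns the surviving variants.
  def update(variants:)
    variants.each do |v|
      if v[:_destroy]
        @variants.delete(v[:id])
      else
        @variants[v[:id]] = v
      end
    end
    { variants: @variants.values }
  end
end

class Variant
  attr_reader :id, :attrs

  def initialize(api, attrs)
    @api = api
    @id = attrs[:id]
    @attrs = attrs
  end

  # Sets _destroy and pushes only this variant to the server,
  # leaving sibling variants untouched.
  def destroy
    @attrs[:_destroy] = true
    @api.update(variants: [@attrs])
  end
end
```

The response from destroy could then be fed to @button.update_attributes to drop the instance from the collection, as described above.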

Redesigning the campaigner UI

Paul and I had a good session yesterday talking about the campaigner UI. Rather than several options, we came up with essentially one vision to implement and iterate on. There isn't a ton of flexibility in how we implement it. We need the campaigner to fill out a bunch of information sequentially, and we presumably want to break the filling-out into manageable chunks.

Right now, we're breaking it into manageable chunks according to what is saved together in the back end. However, that isn't the most intuitive distinction for campaigners. Instead, we want to break it into steps or chunks in the most intuitive way we can think of for campaigners.

Once we start grouping things that currently live in different Rails forms, the whole idea of separate forms stops making sense. Instead, we can have one Save button pinned to the corner; any time you hit it, it saves everything on the page. That also makes auto-save easy to implement, and I've come around on a toggle between auto-save and a save button.

The plan, then, is to have a wizard-like UI with all the steps laid out in the sidebar so you can click between steps at will. Logically distinct elements like body text or each plugin get their own card in the wizard, and you can flick between them. Paul has created a mockup with the basic idea.

I think that this would be a pretty ideal application of React and Flux. However, I really want to have something to play with quickly before getting back into the world of React. I'm going to start by just building a quick version with Backbone and existing Rails forms.

Javascript dependency management

As Omar and I have started to dive into Flux and React to manage our UI, we've gone bananas on the global namespace. We need to decide on a way to manage the namespacing and importing of all our javascript code. We could do it with or without an external tool, and some of the external tools rather assertively push us to manage our external javascript dependencies "properly" rather than pasting them all into vendor/assets. I'm going to lay out all the options as I see them.

  1. Do it all by hand
    There are plenty of well established ways we can do this ourselves. My inclination would be

    // flux/widgets.js
    var chmpgn = chmpgn || {};
    chmpgn.flux = chmpgn.flux || {};
    
    chmpgn.flux.FluxMixin = Fluxxor.FluxMixin(React);
    chmpgn.flux.StoreWatchMixin = Fluxxor.StoreWatchMixin;
    // components/text_body_widget.js.jsx
    var chmpgn = chmpgn || {};
    chmpgn.components = chmpgn.components || {};
    
    chmpgn.components.TextBodyWidgetForm = React.createClass({
    
      mixins: [chmpgn.flux.FluxMixin],
    
      // ...
    });
    
    chmpgn.components.TextBodyWidget = React.createClass({
      render: function(){
        // we'd probably end up making a shortcut like cmp = chmpgn.components
        return (
          <chmpgn.components.TextBodyWidgetForm>
          </chmpgn.components.TextBodyWidgetForm>
        )
      }
    });

    And then to make sure that the references are defined in the right order, we get manual in application.js

    //= require react
    //= require react_ujs
    //= require fluxxor
    //= require_tree ./flux
    //= require_tree ./components
  2. Use ES6 exports
    ES6 has native support for export and import of modules. Unfortunately, it appears that an ES6 module is defined by its file. The rails asset pipeline concatenates all the files together. There's a gem to hack around this, but it needs node anyway, and if we're going to go that route, we can do better.

  3. Use module.js
    This guy makes a compelling case for using a lightweight system called module.js to manage js in a rails app.

    Basically all module.js lets you do is define the order that things get loaded in your application. For us I think it would look like this:

    // vendor.js
    modulejs.define('react', React);
    modulejs.define('fluxxor', Fluxxor);
    modulejs.define('jquery', function() { return jQuery; });
    // flux/widgets.js
    modulejs.define('flux', ['react', 'fluxxor', 'jquery'], function(React, Fluxxor, $){
      return {
        FluxMixin: Fluxxor.FluxMixin(React),
        StoreWatchMixin: Fluxxor.StoreWatchMixin
      }
    });
    // components/text_body_widget.js.jsx
    modulejs.define('components/text_body_widget_form', ['react', 'flux', 'jquery'], function(React, flux, $){
      return React.createClass({
        mixins: [flux.FluxMixin],
        // ...
      });
    });
    
    modulejs.define('components/text_body_widget', ['components/text_body_widget_form', 'react', 'flux', 'jquery'], function(TextBodyWidgetForm, React, flux, $){
      return React.createClass({
        render: function(){
          // namespacing with JSX becomes less awkward
          return (
            <TextBodyWidgetForm>
            </TextBodyWidgetForm>
          )
        }
      })
    });

    And then for application.js we just do //= require_tree . and modulejs will sort out the rest.

  4. Use Webpack
    I found an extensive article about using Rails with React and ES6.
    It's interesting not just for his specific approach, but also for his attitude toward JS in a Rails app. He argues that JS packages should be treated as first-class citizens, like bundler packages; that the javascript ecosystem is too complex for copying files into vendor/; and that when a javascript package with dependencies you want isn't gemified, it's a bad situation. It's worth clicking through just for that discussion.

    I've seen this argument in many different places, and I find it reasonable. It involves adding node to our docker container so that it can use npm, but then all our JS dependencies can be expressed in a json file like the Gemfile. We could then also use Require.js or CommonJS style module loading, which looks pretty much like the module.js version above, but those libraries are much more popular and heavier (they recreate a lot of asset pipeline functionality, though without conflicting with the asset pipeline).

    Another big argument for getting on board with npm management of JS assets is that people use NPM to package react components, like react-bootstrap, which could give us lots of free functionality.

    However, his webpack setup seems pretty heavy. I think if we were to go with this, it would be worth trying to see if we could trim it down a smidgen.

  5. Use Browserify
    This article describes how to set up Rails and React to play nicely together with react-rails and the asset pipeline while using Browserify to manage modules.
    I think his priorities are closest to ours. His approach also takes the "serious" approach to managing JS dependencies through npm. I really like it because he gets everything working together properly - asset pipeline, sprockets, JSX, npm, react-rails, and CommonJS - and I think it's my favorite. With browserify, you just do var TextBodyWidget = require('TextBodyWidget') and then you don't have to do any weird namespacing in the JSX.

  6. Use rails-assets.org
    There is a third alternative to using npm or pasting javascript into vendor/. There's a site called rails-assets.org that will serve any Bower package as a gem. You put javascript dependencies straight into your Gemfile, and then it fetches them from rails-assets. Here are some people who do it.

    It seems super lightweight and would play nicely with approach 1 or 3 (manual or module.js). The only trouble is that we're dependent on this site staying up, though we're already dependent on rubygems.org and potentially the npm site.

So there we have a bazillion options. At this point I'm mostly deciding between 3 + 6 and 5, but I thought I would share everything I found cause this is basically all I did all day.

Associate liquid layout with campaign page

Add a standard Rails select tag on campaign_pages/edit.slim for associating a page with any layouts present in the DB. Presence shouldn't be enforced, since we can just use a default master template if none is chosen.

Drop JSON Schema

Move widget types away from json_schema model. Have attributes and validations declared within widget subclasses.

Javascript testing

I'm starting to think that Jest isn't the way to go for javascript testing. I read this recent discussion on discuss.reactjs.org and this Jest in Practice article that say that while Jest has cool ideas...

  • it's locked into an old version of node for complicated reasons
  • it's painfully slow
  • the article's author decided not to use autoMocking, which is Jest's main feature.
  • it "had serious problems when trying to setMock"
  • almost everyone responding in the discussion uses Mocha instead to test
  • React comes with its own react-specific TestUtils built in anyway

The current standard seems to be Mocha + JSDOM, with Chai for assertion syntax and React TestUtils for react stuff. Thoughts?

Champaign vs SumOfUs.org

From @NealJMD on September 8, 2015 0:12

As we're getting ready to move forward with the designs from agency, I've started thinking about how to manage our designs with Champaign. There are a couple tradeoffs we have to consider, and I want to start a conversation so that we're ready by the time we get the designs next Monday.

Making static pages editable

There are plenty of static pages that will be part of our website - homepage, about us, staff, media, contact, etc. We want some of these to be editable by non-tech people. For example, adding a new staff member to the site probably shouldn't require a code change and a deploy.

On the other hand, some of these pages need programmatic logic. For example, the homepage pulls in recent victories and recent signatures from AK.

To decide if a page should be implemented using liquid in the database or slim in the repo, I think we'd want to evaluate its logical complexity and its frequency of change.

  • Changes often, high complexity - staff page (interactive, includes photos of each staff member)
  • Changes often, low complexity - jobs or FAQ pages
  • Changes rarely, high complexity - homepage
  • Changes rarely, low complexity - contact

The only category of those enumerated above that I think should be handled as a slim template in the repo is high complexity that changes rarely. I can only think of the homepage as something that fits this description.

Proposal - using campaign pages for all types of pages

When a user creates a page, they would select from a dropdown of the type of page that it is. The options that occur to me now are 'campaign', 'share', and 'base'. The primary function of choosing the type would be to change the url at which it's accessed - either /campaigns/my-slug or /share/my-slug or /my-slug. They can pick a layout too, which for things like the staff page would probably be a custom layout just for that one. Logic like pulling in recent signatures would be bundled into plugins, which would be referenced in the partials used in the layout.

One issue with this approach is consistent internationalization. I think the easiest way to handle it would be to have three different page instances for each slug, one for each language, and for the controller to sort out which one to serve.
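The slug-to-URL part of this proposal is simple enough to sketch. The prefix table below is just my reading of the three options; the names are illustrative:

```ruby
# Hypothetical mapping from the proposed page types to URL prefixes;
# only the prefix changes, the slug stays the same.
PAGE_TYPE_PREFIXES = {
  'campaign' => '/campaigns',
  'share'    => '/share',
  'base'     => ''
}.freeze

def path_for(page_type, slug)
  "#{PAGE_TYPE_PREFIXES.fetch(page_type, '')}/#{slug}"
end
```

In the real app this would live in the router/controller, but the mapping itself is this small.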

Bundling assets outside of Champaign

Regardless of whether the pages of our design are implemented as liquid layouts or as Rails views, I think we'll want to have them in version control somewhere for development, along with the CSS and JS that makes it work. We don't want that stuff to exist in the main Champaign repo if we want our system to be usable by other people.

Up to this point, we've talked about bundling those things in the champaign-flute repo. However, I don't think that flute is the right place for that. Flute is a page-serving speed optimization. It's a bit complex and unconventional, and not something that I think we want to be a blocker for an MVP release. Furthermore, if we do get it working properly, it should be usable by anyone using Champaign, not bundled with the assets for SumOfUs.org.

Instead, I propose that we create a theme gem for the assets for SumOfUs.org. It would have a bunch of CSS and JS, and then it would have slim or liquid templates for each of the static pages. (The liquid layouts would get loaded into the db through a rake task like they are now). We could also give it the ability to override certain templates. For example, the campaign show page should probably have a slim template in the theme repo. Champaign would continue to hold the markup for the admin interface, and simple versions of the campaign pages, like it does now.

I've touched on a lot of different stuff here. I'd love to hear your thoughts. It could also be a case where what I've suggested here is a good enough first approach, but that as we move forward, we realize shortcomings that we want to change.

Copied from original issue: SumOfUs/sumofus#15

Image Cropping

  • Create a PaperclipProcessor to store cropped version.
  • Use ImageCropper.js for UI

Optimizely snippet

Eventually, when we want to refactor SOU out of Champaign, the Optimizely snippet should move with it.
Additionally, the Optimizely snippet is currently the same one used in ActionSweet, since that's a requirement for cross-domain tracking. Once we no longer want to test between the platforms, we might want to use a new snippet for Champaign / rename the old project.

Update seeds.rb and add documentation

I'm adding search back to the equation, which involves exploring the project structure as it is right now. There would be a couple of things that would make it easier for me to figure out what's going on.

  1. Some documentation on the basic vocabulary would be nice, and will definitely come in handy whenever we have someone researching the project or looking to contribute. This might also be helpful for campaigners learning to use the platform. Things I'd like to see explained are at least the following:
    • Liquid
    • Liquid layout
    • Liquid partial
    • Plugin
    • The form builder
  2. The seed file should be updated. It's much easier to see what's going on if you have some basic entries to play with

Update forms for remaining widgets

Presently, only TextBodyWidget is complete.

Note:

Old my_widget/_form.slim used all manual text_field_tag style forms. New ones get access to a proper form builder; see text_body_widget/_fields.slim.

Champaign as a Rails engine?

From @NealJMD on September 8, 2015 21:20

This is a long term idea, but it's one that I just wanted to throw at the wall cause it just occurred to me. The awkward thing about Champaign as it stands as an open source project now is that it's actually a Rails app. To use it, you clone the repo and then you start editing to your heart's content.

That's weird cause if we release a new version of Champaign, there's no obvious way for an existing user to get those changes without doing some kind of crazy git voodoo to update their repo from ours. That bites us even more cause it's all the less reason for anybody to ever contribute to Champaign development - instead, they make enhancements they want to their own local repo, which can't be contributed back.

Instead, if Champaign was an engine like Devise, then the functionality wouldn't be copied over into a user's repo. Instead, the user would just include Champaign, and they would get the campaigning software, and then they could override whatever routes they wanted and include whatever static pages or other features they want.

It would also dramatically focus the scope of Champaign. Champaign would exist only to provide the tools for creating campaign pages and sharing tools and reporting signatures to ActionKit. Things like login support could be handled by Devise - but as an engine on a parallel level with Champaign, not baggage that we bring along. Things like not supporting basic CMS functionality in Champaign make more sense to me in this context too. It would also be a nifty way of separating SumOfUs.org from Champaign.

The problem I see is that it means to use Champaign, an org still has to build a Rails app. I could see maintaining a starter repo that has things like Devise and a CMS engine already hooked up, perfect for another org to clone, customize, and deploy. That way they would continue to get updated versions of Champaign and might even make PRs.

Copied from original issue: SumOfUs/sumofus#16

Bogged down setting up JS tests

Today I tried to write front-end tests for the donation flow, because I wanted to make sure that experience is well tested and does not mess up. I did not get very far.

Early on in my research I came across Zombie.js. It's actively maintained and well liked, but what made it really stand out for me is that it focuses on user story testing rather than unit testing. Most js testing frameworks load each bit of JS in isolation and test one method at a time. Zombie loads a page from your localhost, and encourages a Capybara-like approach to filling in content, clicking things, and testing for ajax requests or the presence of tags or texts.

Zombie needs another JS framework like Mocha or Jasmine to run it. I spent a long time trying to get Jasmine or Konacha (a rails integration gem for Mocha) to load Zombie and run the test I wanted. However, their approach is much more about pulling in individual pieces of JS, and I couldn't get them to run Zombie (which is further complicated by the fact that Zombie uses ES6).

I backed off and just installed Mocha and Zombie through NPM and tried to run a simple test
./node_modules/.bin/mocha spec/javascripts/homepage_spec.js
where homepage_spec.js has

var Browser = require('zombie');

// We're going to make requests to http://example.com/signup
// Which will be routed to our test server localhost:3000
Browser.localhost('example.com', 3000);

describe('User visits signup page', function() {

  var browser = new Browser();

  before(function(done) {
    browser.visit('/', done);
  });

  it('should be successful', function() {
    browser.assert.success();
  });

  it('should have welcome text', function() {
    browser.assert.text('h1', 'Welcome to Champaign!');
  });

});

Two problems crushed me -

  1. When (outside the container) I tried to boot the local server and then connect to it through Zombie, the tests all failed with Error: connect ECONNREFUSED and no requests in the server log. I tried switching to port 80 with the server and the test, running both as sudo, but I couldn't get them to find each other.
  2. I tried to docker-build to see if they could find each other within the container. However, build fails because one of Zombie's dependencies needs node >= 0.10.40 and we have node 0.10.29. If I do apt-get install -y nodejs=0.10.40 or apt-get install -y nodejs=0.12 it says Version '0.12' for 'nodejs' was not found. Here it says you need to curl and run a setup script, but when I tried those the setup script failed for Ubuntu and Debian modes.

I'm stuck, and frustrated I haven't yet even gotten into the other potentially challenging parts of this like integrating it with the rest of the test suite. I would love some help or a suggestion of a different direction.

Will Champaign OSDI?

People and donations are covered well by OSDI.
Pages is something that could be proposed to OSDI for inclusion in the standard.

Rethink the way currency exchange rates are fetched.

Every time we visit a page, we make an HTTP request to check currency exchange rates. We should move this to a background job or cron instead of doing it for every incoming request.
Another alternative, especially if we want real-time data, is to move this to the front end.
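As an in-process illustration of the idea (the real fix would more likely be a scheduled job or cron writing rates to the DB), a TTL cache means the HTTP call only happens when the cached rates go stale. The fetcher lambda stands in for the real HTTP request; all names here are illustrative:

```ruby
# Cache exchange rates with a TTL so the upstream API is only hit
# when the cache is stale, instead of once per page view.
class CachedRates
  def initialize(fetcher, ttl: 3600)
    @fetcher = fetcher
    @ttl = ttl
    @rates = nil
    @fetched_at = nil
  end

  # `now` is injectable to make staleness easy to test.
  def rates(now = Time.now)
    if @rates.nil? || (now - @fetched_at) > @ttl
      @rates = @fetcher.call
      @fetched_at = now
    end
    @rates
  end
end
```

A cron/background job is still the better production shape, since it also survives multi-process deploys; this just shows the caching idea.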

GoCardless

I spent a long time today researching GoCardless to gear us up for our integration. I think it's going to be pretty straightforward. Except for the UI, it's really almost identical to Braintree.

Resources

GoCardless has a ruby gem. It's a really minimal wrapper around their API, to the point that I haven't found any actual docs for it, just for their API. Those docs are here. It's tricky because there are docs for their old API floating around (which had some features the new one doesn't have), so make sure you're looking at the right ones.

I also found it very helpful to look at their example Sinatra app. It uses the hosted integration, and pretty much all the relevant logic is in this one file.

User interface

I went into the research really hoping I would come out confident about doing a full integration without redirecting to their page. Unfortunately, I didn't. It's not that the requirements imposed are particularly onerous (you can read the three different requirement sets here, here, and here). Instead, after playing around with a GoCardless page for a while, it became clear that it would take a lot of javascript engineering time to produce a form that's nearly as adaptable as the one provided by GC. Each country has a different IBAN format, but also supports local bank details. Those are different fields for every country, and aren't consistent even between the UK and Germany (our main target markets). Whether IBAN is the default also depends on the country. I highly encourage you to play with the interfaces by switching country after clicking one of the buttons on this demo page.

The big goal on doing our own full integration would be to save users entering their name and email twice. However, I think that potential frustration pales in comparison to the potential frustration of us messing up the validation for one of the many countries, or of not supporting a country to keep our own validation simple. Furthermore, there's no reason that we can't revisit this down the line - none of the work using the hosted fields is work we wouldn't have to do if we did a full integration.

For now, I think that we should just add a GC-green button with the GC logo and a button that says "Direct Debit."

Controller layer

This part actually makes me really happy and proud - the only change that I think we need to make to make Api::BraintreeController work with both providers is to rename it to Api::PaymentsController and change the client method to:

  def client
    if params[:provider] == 'gocardless'
      PaymentProcessor::Clients::GoCardless
    else
      PaymentProcessor::Clients::Braintree
    end
  end

The rest will be handled by the new back end code that mirrors the existing logic for Braintree, but with different fields to shuffle around.

Data persistence

I think our existing tables for Braintree map very nicely to GoCardless:

  • Payment::GoCardlessCustomer
  • Payment::GoCardlessPaymentMethod
    • a payment method is known as a mandate in the GoCardless API
    • it's still a little unclear to me how long these are valid (for one-click in future)
  • Payment::GoCardlessTransaction
    • a transaction is known as a payment in the GoCardless API
  • Payment::GoCardlessSubscription

The fields that we store are different, but the models and the associations between them stay the same. I think the only major code that we need to write is the `PaymentProcessor::Clients::GoCardless::Subscription` and `PaymentProcessor::Clients::GoCardless::Transaction` classes to handle creating the subscription and transaction with GC (the stuff handled in [`app.rb`](https://github.com/gocardless/gocardless-pro-ruby-example/blob/master/app.rb) in the example app), and new GC versions of `BraintreeCustomerBuilder` and the other classes in `payment.rb` to record stuff to our own database.

Webhooks

GoCardless actually has a bunch of webhooks. At the beginning, I imagine we'll mostly just want to support the one that logs transactions when they fire from subscriptions.

Tidy up

  • Remove all references to WidgetType and CampaignPageWidget
  • Cut unused methods ( for views, specs, and models)

Image links broken after recent changes

Some images in some campaign pages aren't showing up. For example, the Trudeau: Don't sell out seniors' health care page links to this header image, which is broken.

After checking, the file does indeed exist, albeit with a different extension: jpeg instead of jpg, which leads me to believe that this bug was introduced by commit 61cb564

It appears that anything above our jpg size threshold (150K pixels) is saved as a jpg and the decide_format method returns a literal jpg extension, but any images uploaded before this update with a different extension are potentially broken now (e.g., any image with a jpeg extension).

Newly uploaded images seem to work, so if we rolled back, it could potentially break the latest uploads (if their original extension is not jpg).
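One possible direction, sketched here with entirely hypothetical logic (the real decide_format surely differs): keep a jpeg-family extension when the upload already had one, so re-encoded images don't change their URLs:

```ruby
# Hypothetical sketch of decide_format: large images are re-encoded as
# JPEG, but an upload that already had a jpeg-family extension keeps it,
# so previously generated links don't break.
JPEG_ALIASES = %w[jpeg jpe].freeze

def decide_format(original_filename, pixel_count, jpg_threshold: 150_000)
  ext = File.extname(original_filename).delete('.').downcase
  return ext if pixel_count <= jpg_threshold
  JPEG_ALIASES.include?(ext) ? ext : 'jpg'
end
```

Existing broken records would still need a one-off fix (either renaming the stored files or updating the stored extensions), whichever direction we pick.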

Pivotal Tracker #136346405

Action API

A campaign page with a form attached will POST to /api/campaign_pages/:campaign_page_id/actions

Presently, the actions controller looks like:

class Api::ActionsController < ApplicationController
  def create
    Action.create_action(params)
    render json: { success: true }
  end
end

create_action needs to take all the params sent to it and create a new action record, with a JSONB field for storing all custom fields stated in the form.
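A sketch of the param handling described above, splitting incoming params into the action's own columns and the catch-all hash destined for the JSONB field. The column list is illustrative, not our actual schema:

```ruby
# Split submitted form params into known Action columns and custom
# fields that go into the JSONB column. ACTION_COLUMNS is illustrative.
ACTION_COLUMNS = %i[campaign_page_id email name].freeze

def split_action_params(params)
  known, custom = params.partition { |key, _| ACTION_COLUMNS.include?(key) }
  [known.to_h, custom.to_h]
end
```

create_action would then pass the known attributes straight to the Action record and drop the custom hash into the JSONB field.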

Serving static assets is turned on

In config/environments/production.rb, Rails is configured by default not to serve static assets, because it's assumed that your nginx / apache front end will do that for you. I enabled serving static assets from the Rails application for the MVP.

What share analytics to display (and where and when)

About ShareProgress analytics:

We're currently using the share_tests analytics in the share panel in pages/:friendly_id/edit. What I noticed today is that this field in the response gets populated with data only if there is an active A/B test. An active A/B test exists only if there is more than one variant of the same share. Consequently, the data about the number of shares or conversion rates is not available from this field if there is only one variant of a particular share type. In other words, if there's only one facebook share, the number of shares is only available from ['response'][0]['share_types']['facebook'].
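Given that behaviour, reading a share count needs a fallback. A sketch follows; the response shape here is my guess from the observations above, not documented API structure:

```ruby
# Prefer aggregated A/B test data from share_tests when a test is
# active; otherwise fall back to the per-type count in the first
# response entry. The hash shape is assumed, not documented.
def share_count(response, share_type)
  from_test = response.dig('share_tests', share_type)
  return from_test if from_test
  response.dig('response', 0, 'share_types', share_type)
end
```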

Javascript Framework

Managing the campaign page and its widgets is a complicated affair. It was suggested a couple of weeks ago that we adopt a javascript structural framework for building an app-like UX and to help us write maintainable, testable javascript.

Last week I looked into React JS and tried out some code on our campaign page. I really enjoyed using it. The framework is small, beautifully declarative and quick to learn.

If you haven't already, please take a look to see what it's all about. As mentioned in a previous post, Facebook has a great tutorial on getting started: https://facebook.github.io/react/docs/tutorial.html

If we decide to go with React (or another JS framework) it'll mean reconsidering how we structure our views and how we handle resources. A framework will make this far less complicated. For example, we'd be able to drop nested attributes and complicated service objects.

With that in mind, I've quickly put together a wireframe of how a user would create a campaign page:

  1. It would start with the name of the campaign:
    https://wireframe.cc/prvy2m

  2. This would take the user to the edit view, where all content would be managed inline and updated asynchronously.
    https://wireframe.cc/NUxtGG

This is mainly what I'm implementing in the react branch on champaign.

We should decide on a framework (if any) as soon as we can, since it'll significantly influence how we develop champaign going forward.

Caching is turned off for production

I turned caching off for now because it was configured for Redis, and that would've required setting up a Redis container and a multi-container deploy on AWS, and there was no time for that for Friday's MVP.

Refactoring the Braintree payment processor code

Refactoring the payment processor code a tad bit (Braintree)

1. Associations

Over time, we've cleaned up the associations and there isn't so much to do there any more, but I would still associate transactions with actions instead of pages.

  • One-off transactions would belong to the action that is created when the transaction is created
  • Subscription transactions would either belong to the action their parent subscription belongs to, or would have a nil action_id, because the action and its page can be accessed through transaction.subscription.action.page_id
  • Subscriptions don't really need to be associated with pages because they're already associated with actions, and their page can be accessed through the action with subscription.action.page. There are 113 subscriptions on production that don't belong to actions; the first was created October 12th 2016 and the last 12 days later, so that must have been a temporary fluke. These will need to be fixed, though, as the payment data will not be recorded correctly on AK for them. (I fixed these today.)

Why?

One-off donations are a type of action, and an action is always created when a one-off transaction is made, so the association is pretty clear. The order of creation also supports this as it is right now, in PaymentProcessor::Braintree::Transaction#transact:

def transact
  @result = ::Braintree::Transaction.sale(options)
  if @result.success?
    @action = ManageBraintreeDonation.create(params: @user.merge(page_id: @page_id), braintree_result: @result, is_subscription: false, store_in_vault: @store_in_vault)
    Payment::Braintree.write_transaction(@result, @page_id, @action.member_id, existing_customer, store_in_vault: @store_in_vault)
  else
    Payment::Braintree.write_transaction(@result, @page_id, nil, existing_customer, store_in_vault: @store_in_vault)
  end
end

(ManageBraintreeDonation includes ActionBuilder and creates the action).

Some of the classes and methods handling BT transactions have very long parameter lists, and while passing an action instead of page_id and member_id would simplify things, that alone isn't enough to trim them down.

Associating transactions with their actions and dropping the page_id column

Actions to be associated with a customer's transactions on a given page can be found through Action.where(page_id: transaction.page_id, member_id: transaction.customer.member_id). In most cases this'll net just one action, as people tend to make just one transaction on a given page. When it nets more than one, transactions can be matched to actions using their created_at fields.
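The matching logic might look something like this - a pure-Ruby sketch using Structs instead of the real models, with a closest-in-time tie-break for members who acted more than once on a page. All names here are illustrative:

```ruby
require 'time'

# Stand-ins for the real ActiveRecord models, for illustration only.
Action = Struct.new(:id, :page_id, :member_id, :created_at)
Transaction = Struct.new(:id, :page_id, :member_id, :created_at, :action_id)

# Find the action by the same member on the same page whose created_at is
# closest to the transaction's - the tie-break when there are several.
def closest_action(transaction, actions)
  candidates = actions.select do |a|
    a.page_id == transaction.page_id && a.member_id == transaction.member_id
  end
  candidates.min_by { |a| (a.created_at - transaction.created_at).abs }
end

actions = [
  Action.new(1, 10, 99, Time.parse('2016-10-12 10:00')),
  Action.new(2, 10, 99, Time.parse('2016-10-20 10:00'))
]
txn = Transaction.new(7, 10, 99, Time.parse('2016-10-19 09:00'), nil)
txn.action_id = closest_action(txn, actions).id
puts txn.action_id # => 2
```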

@osahyoun you mentioned that you wanted a simple way of getting the transactions of a page, e.g. page.transactions. This could be done with a method that calls Payment::Braintree::Transaction.where(action_id: self.actions.donation.pluck(:id)), or possibly something less hideous with similar logic :) I'm not actually sure how we're doing this now, because there isn't a has_many association from pages to transactions.

The code to associate transactions and subscriptions with actions instead of pages needs to go live first with the page_id column still present. Afterwards we can run the script to associate the existing transactions and subscriptions, and then drop the column.

2. Other stuff

  • I'd like to rethink, or at least rename, ManageBraintreeDonation, which is used for creating an action off of a one-off Braintree donation. It looks mighty complicated considering that there are only 8 fields on Action, two of which seem to be duplicates (subscribe_member and subscribed_user).
  • There are some mighty long parameter lists that I think we should try to mitigate by seeing whether a given parameter is really necessary, and by passing parameter objects instead. Examples:
    Payment::Braintree.write_transaction:
    def write_transaction(bt_result, page_id, member_id, existing_customer, save_customer = true, store_in_vault: false)
      BraintreeTransactionBuilder.build(bt_result, page_id, member_id, existing_customer, save_customer, store_in_vault: store_in_vault)
    end

Payment::Braintree.write_subscription:

def write_subscription(payment_method_id, customer_id, subscription_result, page_id, action_id, currency)
  if subscription_result.success?
    Payment::Braintree::Subscription.create(
      payment_method_id:    payment_method_id,
      customer_id:          customer_id,
      subscription_id:      subscription_result.subscription.id,
      amount:               subscription_result.subscription.price,
      merchant_account_id:  subscription_result.subscription.merchant_account_id,
      billing_day_of_month: subscription_result.subscription.billing_day_of_month,
      action_id:            action_id,
      currency:             currency,
      page_id:              page_id
    )
  end
end
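As a sketch of the parameter-object idea, the six arguments of write_subscription could be bundled into a single value object. The name SubscriptionParams is hypothetical, not an existing class:

```ruby
# Hypothetical parameter object to replace the long argument list above.
SubscriptionParams = Struct.new(
  :payment_method_id, :customer_id, :subscription_result,
  :page_id, :action_id, :currency,
  keyword_init: true
)

params = SubscriptionParams.new(
  payment_method_id:   'pm_123',
  customer_id:         'cus_456',
  subscription_result: nil, # would be the Braintree result object
  page_id:             42,
  action_id:           7,
  currency:            'USD'
)

# write_subscription(params) would then take one argument instead of six.
puts params.currency # => "USD"
```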

There are lots of small improvements that could be made. It'd be great if someone else was up for going through this and brainstorming or pair programming together, and I'd appreciate it if you could have a look at the BT transaction and subscription handling classes to see whether you can identify areas you'd like to improve.

How to deal with static assets

Here's a recap of my research today on where and how to serve static assets. Would appreciate conversation on the topic under this GitHub issue.

PHUSION / PASSENGER DOCKER IMAGES

I had a look at the Phusion Passenger images. They contain a bunch of things, but they're definitely not as monstrous as I thought. They do have some bells and whistles we could do without, and much of their pitch about running things correctly (syslog, an SSH server, and more) comes from the fact that they build on Baseimage-Docker, which is nice, but also something we could do ourselves - and sourcing from something like Alpine appeals more to my internal neckbeard. Sourcing from a different base image should be trivial. If we do want the proxy for static files in our application container, I could consider one of these images, but I'm not sure I'm a big fan of the idea.

NGINX PROXY IN SAME CONTAINER WITH THE RAILS APP

  • goes a little against the principle of running a single logical process in a container
  • makes the already-large-and-monolithic-image even larger and more monolithic -> this'll increase deploy time on CircleCI, smaller images are generally much preferred
    • using containers wrong is honestly a little embarrassing
  • the fact that we have a single-container deploy doesn't mean our environment is any less crowded (we'd still go load balancer -> eb nginx proxy -> static asset nginx proxy -> app)
  • we still deploy only one image, and our single deploy image is available for anyone who wants to deploy Champaign

NGINX PROXY IN ITS OWN CONTAINER

  • generally cleaner and the more 'proper' way to do containerization
    • although the proxy and the application are so tightly coupled that one could argue they are part of the single logical process
  • doesn't bloat our image size
  • we'd need a multi-container deploy
  • we'd still go load balancer -> eb nginx proxy -> static asset nginx proxy -> app
  • we'd be deploying from several images, which is not as friendly for others who might want to deploy Champaign
    • the other image is more or less just vanilla nginx
    • we could always have a deploy script and give that to people who are interested in a single-step deploy despite having two images

MODIFYING THE EB NGINX PROXY THROUGH .ebextensions TO SERVE STATIC ASSETS

  • we'd only have one Nginx proxy as is appropriate
  • we wouldn't need to add a container of our own
  • hackish, but I've read posts from people who customize the AWS Nginx proxy through .ebextensions, so it's not totally unheard of
  • relies on the presumption that people deploy to AWS
  • it'd still need to actually access the static files so we would need to do something like:
    • move them elsewhere on deploy time and set Nginx to look into e.g. S3 for static assets
    • mount a volume to the application container and copy static files there on application start

WHY'S THIS A BIT TRICKY?

  • Because Rails apps aren't the greatest candidates for microservices, which make the best candidates for containers. They're just so damn monolithic and opinionated by default. That doesn't mean we can't use containers - it probably just means we need to carefully consider what should live in a container of its own, and how to extract it.
  • Because the proxy needs to sit in front of the application. We need a solution where requests to assets don't touch the application, and just having statics somewhere isn't enough if the application ends up serving them. For example, CloudFront is configured with an 'origin', which is where your distribution forwards requests if it doesn't have a cached copy of the file that was requested. After the first request for an asset, it's served out of the edge cache. This means that with just CloudFront, the Rails app would forward requests for static assets to CloudFront, which isn't as good as it could be.

Thread safety

We're running the app on Puma with a bunch of concurrent threads, but if you do that, you're supposed to make sure your app is thread-safe first. If it's not, the risk is that when you actually start to get serious traffic, you'll hit weird data-inconsistency issues that are godawful to debug. I started reading a primer here, and I think at some point it would be good for us to sit down and talk through some dos and don'ts about coding habits to stay (thread) safe.
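As a minimal illustration of the kind of bug at stake (not from our codebase): incrementing shared state from several threads without synchronization can lose updates, while wrapping the update in a Mutex makes the result deterministic:

```ruby
counter = 0
mutex = Mutex.new

threads = 10.times.map do
  Thread.new do
    1_000.times do
      # Without the synchronize block, the read-increment-write of
      # counter += 1 can interleave between threads and lose updates.
      mutex.synchronize { counter += 1 }
    end
  end
end
threads.each(&:join)

puts counter # => 10000
```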

ActionKit form fields and our form builder

To this point, our form builder has been fantastically flexible, and that's good news. Unfortunately, this has some negative side effects that we need to control for. Specifically, ActionKit only accepts specific field names when we submit an action to it, and for the user experience to behave the way our campaigners expect, we need to make sure campaigners can't create forms that fail to submit the data they expect.

The data that ActionKit can process:

  • Full Name: stored in a field called name
  • Email Address: required; stored in a field called email
  • Phone Number: stored in a field called phone
  • Address: stored in a field called address1
  • City: stored in a field called city
  • Non-US Political Region: stored in a field called id_region
  • US State: stored in a field called id_state
  • Australian State: also stored in a field called id_state
  • Language: stored in a field called lang. This should probably be included as a hidden field in every form, pulled from the language the user is currently viewing the page in.

Any other fields that the campaigner wishes to include need to be prepended with the string action_, so that ActionKit knows they should be stored as custom data.

This gets tricky in two ways. First, we don't want people to have to build a separate form for each type of custom data. The approach we talked about yesterday was a form builder that lets users create a form using the pre-populated field names listed above (but with custom labels), and then add custom action_ fields on the page creation itself. Second, we often want to query by those custom action fields, and currently there's no naming convention: one campaigner might create action_is_monsanto_shareholder, another action_monsanto_shareholder, and another action_shareholder_monsanto. So an autocomplete that suggests previously used custom fields, so that we reuse those fields when possible, is a likely future feature we might want to support.
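The naming rules above could be enforced with a small helper along these lines. The names ak_field_name and AK_FIELDS are hypothetical, but the field list and the action_ prefix come straight from the rules above:

```ruby
# Field names ActionKit accepts directly, per the list above.
AK_FIELDS = %w[name email phone address1 city id_region id_state lang].freeze

# Known ActionKit fields pass through unchanged; anything else is
# treated as custom data and gets the action_ prefix.
def ak_field_name(field)
  name = field.to_s
  AK_FIELDS.include?(name) ? name : "action_#{name}"
end

puts ak_field_name('email')                # => "email"
puts ak_field_name('monsanto_shareholder') # => "action_monsanto_shareholder"
```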

Please let me know if you have any questions!

Picking a CSS methodology

I've been reading a lot about CSS methodologies, trying to pick a direction for our team. The big three are SMACSS, BEM, and OOCSS. OOCSS has been criticized for pushing too much of the style into the templates, which could prove especially tricky for us given the existing Slim/Liquid duality. BEM is more of a naming convention, while SMACSS paints with broader and looser strokes, so the two can apparently work together. I find BEM the easiest to understand, especially with help from this article.

I also found a style guide called North, by the UI architect for IBM Watson. He criticizes all three of the big methodologies and then created another one with his friends. I like the North guide because it's so extensive and specific - it covers not just CSS, but everything about front-end dev. Though I've been consulting that guide and have picked up some good tips, I think it makes sense to stay more mainstream on CSS style.

For now, I'm going to try writing BEM with an organization that smacks of SMACSS. As I fall into it or struggle with it, I'll keep this up to date, but if anyone has an opinion on this stuff I'd love to hear it.
