Craft Ops
is a template that uses automation tools to build you a virtual
DevOps environment tailored for Craft CMS. Craft itself is already
incredibly easy to set up with tools like MAMP, and this project aims
to keep it that way. Its goal is to move you past dragging files over
FTP and toward running commands instead. Ideally you will learn a thing
or two about Unix-like systems along the way.
To start, the ops workflows will be built around the use of AWS and Bitbucket. These products both offer free options and can be fully automated.
Please also note that use of Craft is subject to its own license agreement.
The goal of this project is to keep everything in one place. This removes any mystery about how the project comes together as a whole: just by browsing a few files you can see how all of the services at play are configured. Likewise, the deployment command logic is laid out in one file, making it easy to understand how all of the operations tie together. At the end of the day this makes it easier to onboard new people and pass the project between teams. The less reverse engineering the better!
You only need these tools installed, and both have builds for most systems.
The best way to get started on Windows is by installing the Git for Windows toolset. This installer gives you all the bits and pieces required to run Git on Windows. As a bonus it comes with its own Bash shell, which lets you operate your Windows system from a Unix-like command prompt. There are many shell options for Windows, but this is likely your best choice.
Once you have built the dev VM you can also use an experimental web-based terminal at http://localhost:8000/wetty. You may find this a more enjoyable experience.
Getting started is easy: just clone this repo and vagrant up the dev box.
$ git clone https://github.com/stackstrap/craft-ops.git project_name
$ vagrant up dev
You can then hit the dev server at http://localhost:8000.
The Craft Ops dev VM runs the Harp static web server locally and uses
nginx to proxy its output to http://localhost:8000/static. Any file within
the assets folder will be served at this location and parsed accordingly.
This lets you write pure Sass or CoffeeScript without fiddling with various
Grunt or Gulp configurations. Harp is designed with a convention-over-configuration
philosophy, so as long as you understand how to lay out your files
it will just work.
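As a sketch of what that convention looks like (the exact file names here are illustrative assumptions, not taken from the repo, apart from _main.js and bundle.js which are mentioned below), a typical assets layout might be:

```
assets/
├── css/
│   └── main.scss      # served as /static/css/main.css
└── js/
    ├── _main.js       # entry point (Harp skips files prefixed with an underscore)
    └── bundle.js      # served as /static/js/bundle.js
```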
You can add all of your Bower components to the bower.json file at the root of the project. Just run bower install and anything within bower_components will be available at http://localhost:8000/static/vendor.
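For example, a minimal bower.json might look like the following. This is a hypothetical sketch: the jquery dependency and its version are example values, not something the template requires.

```shell
# Write an example bower.json at the project root
# (package name and version are illustrative assumptions)
cat > bower.json <<'EOF'
{
  "name": "project_name",
  "dependencies": {
    "jquery": "~2.1.0"
  }
}
EOF

# After running `bower install`, the package would land in bower_components/
# and be served at http://localhost:8000/static/vendor/jquery
grep '"jquery"' bower.json
```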
Browserify is an excellent way to make use of CommonJS to break your code
up into modules or use packages from npm. It is a nice improvement over the
complex API of RequireJS and AMD, while offering the same advantages. It also
handily bundles all of your code into a single file so that you cut down on
HTTP requests. By default, any changes to assets/js/_main.js will automatically
be output to assets/js/bundle.js. Since everything in the assets folder passes
through Harp, the file will be available at http://localhost:8000/static/js/bundle.js.
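As a sketch of the CommonJS style Browserify consumes, the following creates a tiny module and an entry point that requires it. The greet.js file and its contents are hypothetical examples; only the assets/js/_main.js path comes from the template.

```shell
mkdir -p assets/js

# A small CommonJS module (hypothetical example file)
cat > assets/js/greet.js <<'EOF'
module.exports = function (name) {
  return 'Hello, ' + name;
};
EOF

# The entry point; Browserify would bundle this (and anything it
# requires) into assets/js/bundle.js
cat > assets/js/_main.js <<'EOF'
var greet = require('./greet');
console.log(greet('Craft'));
EOF
```

If you have Node on the host you can sanity-check the entry with `node assets/js/_main.js` before Browserify bundles it.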
The ops setup is configured by sourcing data from a configuration object. The object is created by merging a series of YAML files on top of each other.
defaults.conf - This file is the base layer and is just for reference.
project.conf - This is the main file, where you should put custom properties.
private.conf (optional) - This file is where you store private project data like access keys. You should .gitignore this file, or encrypt it if you want to share it in the repo.
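To keep private.conf out of the repo, a one-liner like this from the project root does the trick (assuming you are not encrypting it instead):

```shell
# Ignore the private credentials file; run from the project root
echo "private.conf" >> .gitignore
grep "private.conf" .gitignore
```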
After you have set up your AWS account you will need to create a new user under IAM. As soon as you create this user you will be given two keys. Download these keys and save them somewhere, as they will not be available again.
You will also need to attach an administrator policy to the user. You can do this by clicking the user and going to its full edit view. After this you will never need to log into AWS again.
The best way to handle Bitbucket is to create a "team" for your repositories to live under. For teams, Bitbucket lets you generate an "API key" to use instead of your password. You can generate this key under "Manage team" in the top right corner. Make sure you have it handy along with the name of the team you created.
First off you will need to set your project's name in project.conf. This value will be used to name system-related things, so leave out special characters.

name: project_name
Once you have your AWS and Bitbucket keys you can put those values in the appropriate YAML file. Technically you can put them in any of them, but creating a private.conf or ~/ops.conf is your best bet.
aws:
  access_key: AJALDFJFNENNNKFDABKDBFE
  secret_key: dsjaf3jk4jl5kj9fjej3l3404353jlgjaglh303

bitbucket:
  user: teamname
  token: dsafdsfjdks93kjfaj2oj23kjfkjandfk
If you would like to use the same credentials for all projects, you can keep all of the above information in ~/ops.conf on your host machine. This is a global config file that is pulled in from your host system's $HOME directory when the dev box is provisioned. You will need to run vagrant provision dev if you change this file. This lets you kick off a new Craft Ops project without having to gather credentials each time.
For example you may want to keep your Bitbucket creds in the global config and keep individual AWS creds in private.conf for each project or client.
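That split might look like the following (the key values are the same placeholders used above, not real credentials):

```yaml
# ~/ops.conf — shared across every project on this machine
bitbucket:
  user: teamname
  token: dsafdsfjdks93kjfaj2oj23kjfkjandfk

# private.conf — per-project, kept out of the repo
aws:
  access_key: AJALDFJFNENNNKFDABKDBFE
  secret_key: dsjaf3jk4jl5kj9fjej3l3404353jlgjaglh303
```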
Make sure you are in the dev VM:

$ vagrant ssh dev

Run the fab command to ready your project on Bitbucket and AWS:

$ fab setup

Then up the web VM to build it:

$ vagrant up web
Craft Ops uses the tool Fabric to manage the execution of SSH commands. This lets us assemble super simple commands for deploying our project and performing common operations on it.
The Craft Ops setup automatically creates 3 "stages" on the web server. You have the option of deploying to production, staging, or preview.
To deploy your latest commit pushed to the bitbucket remote you would run...

$ fab production deploy
You can also easily perform operations on the database and move "dumps" around.
Let's say you wanted to dump your production database and use it for dev...

$ fab production db:dump
$ fab production db:down
$ fab dev db:import
Perhaps you want to sync your production uploads to your dev VM...

$ fab production uploads:down
Or maybe you want to sync your dev uploads to production...

$ fab production uploads:up