Here is a summary of some points discussed on IRC, for the record.
I'm starting with what I noticed... feel free to discuss or mark as "wontfix" ;)
As I understand it, this project (crappyspider) is meant to:
- maintain a list of URLs of a web application, for use in various "checks".
- quickly check that URLs return HTTP status 200 OK.
- not check every URL, but at least one URL per "URL pattern" (i.e. checking /user/123/ is enough; we do not have to check every /user/{id}/ such as /user/456/ and /user/789/).
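The "one live URL per pattern" idea above could be a tiny static mapping; here is a hedged sketch (the pattern names, sample values and `example.com` host are all invented for illustration, not taken from crappyspider):

```python
# Hypothetical sketch of a finite URL list: one representative, valid "live"
# URL per URL pattern. Pattern names and sample values are invented.
URL_SAMPLES = {
    "/user/{id}/": "/user/123/",
    "/article/{slug}/": "/article/hello-world/",
    "/": "/",
}

def urls_to_check(base="http://example.com"):
    """Return one absolute URL per pattern: checking each pattern once is enough."""
    return [base + sample for sample in URL_SAMPLES.values()]
```

The point is that the list is finite and lives next to the code that defines the patterns, so it stays small and reviewable.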
I think the URL list should be moved to the repositories of the web applications. Could be done with #12. Could also be done using some "sitemap" feature in the code of the web application. That is the best way to ensure the list of URLs is valid and kept up to date as part of the application's source code.
One argument for Scrapy VS selenium/casperjs is speed: we do not need a browser. With speed in mind, I think the list of URLs can be unique for a given version of the web application. I mean, we do not have to scan the application on every environment. We can run the scan once (in DEV or INTEGRATION?) and then reuse the list on other environments.
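Reusing one scanned list across environments could be as simple as swapping the host; a minimal sketch, assuming URLs only differ by hostname between environments (the hostnames below are illustrative):

```python
from urllib.parse import urlsplit, urlunsplit

# Sketch: scan once (e.g. in DEV), then replay the same URL list against
# another environment by swapping the host. Hostnames are illustrative.
def retarget(url, netloc):
    """Return `url` pointed at a different host, keeping path and query."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))
```

For example, `retarget("http://dev.example.com/user/123/", "prod.example.com")` targets the same path on PROD.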
But then, how to support dynamic URLs that contain slugs/PKs? Let's use the same tools as for healthchecks... Healthchecks read and use the live database, i.e. in PROD, real data is involved, not fake data as in tests. If we had to write a healthcheck that checks "a user can log in", then we would have to get_or_create() a known sample user: a real account, but used for healthchecks only. Then we make sure this data is available in every environment (a kind of fixtures). We can do the same in this crappyspider project.
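A framework-free sketch of that get_or_create() idea for healthcheck fixtures (the in-memory `accounts` store and field names are hypothetical; in Django this would be something like `User.objects.get_or_create(...)`):

```python
# Known sample account that every environment should end up with.
# The email value is invented for illustration.
HEALTHCHECK_EMAIL = "healthcheck@example.com"

def get_or_create(accounts, email):
    """Reuse the sample account if present, otherwise create it.

    Returns (account, created), mirroring Django's get_or_create().
    """
    for account in accounts:
        if account["email"] == email:
            return account, False
    account = {"email": email, "is_healthcheck": True}
    accounts.append(account)
    return account, True
```

Because the call is idempotent, running it on every deploy guarantees the fixture exists in every environment without duplicating it.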
As I explained above, I think the list of URLs could be an almost static list maintained as part of the code. So I wonder if Scrapy is the adequate tool for this purpose. It may be a bit overkill where, typically, we just want one valid "live" URL for each pattern in urls.py. The list is finite. The list is updated along with urls.py.
Then, about Scrapy itself as a way to check a web application, I would recommend casperjs, selenium, or any other tool that is more JS-developer or designer friendly:
- I think we need functional tests (with a browser).
- I think functional tests should be written by those who build the user interface, or by those who ask for the features (product owners). Typically, not the Django developers.
- I think front-end developers and designers will appreciate casperjs/selenium more than crappyspider.
Last but not least, I think we also have to check 405, 302 or 301 status codes, and POST/PUT/DELETE actions. Again, I think casperjs/selenium could be better suited than Scrapy for that.
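A sketch of what such a check could look like, going beyond "200 OK" (the table of expected statuses below is invented for illustration; the real expectations would come from the application):

```python
# Some endpoints should answer with a redirect (301/302) or reject
# unsupported methods (405). Map (method, url) to acceptable status codes.
EXPECTED_STATUSES = {
    ("GET", "/old-page/"): {301, 302},   # legacy URL should redirect
    ("POST", "/user/123/"): {405},       # read-only endpoint rejects POST
    ("DELETE", "/user/123/"): {405},
}

def status_ok(method, url, status):
    """True if the observed status code is expected for (method, url)."""
    return status in EXPECTED_STATUSES.get((method, url), {200})
```

With such a table, a 200 on an endpoint that should reject POST is correctly reported as a failure, which a plain "is it 200?" check would miss.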