Comments (12)
Like the others here, I have problems figuring out how to get started and integrate my workflow.
Regarding your question, @isms: I think a small sample project with all the necessary steps (from preprocessing to model) already implemented would be very beneficial for beginners. No need to do anything fancy; just derive some features and train a decision tree, for example using the Titanic Kaggle challenge most people should be familiar with: https://www.kaggle.com/c/titanic/data. Or even easier: integrate an example within the base project. If you're new, it helps you get started; if you're experienced, you'll have no problem deleting it.
Once I understand it well enough and am able to use the project, I'm going to give it a shot. A simple example repository should only take me a few hours.
Things that I could not figure out right away:
- how to run it all? Probably make. make shows possible commands, cool. Okay, let's implement some bogus logic and try it.
- all example projects (linked below) have no Makefile to check out how they use it.
- make data yields an error when executing make_dataset.py: Error: Missing argument "input_filepath". I cannot figure out how to pass the arguments; make data --input_filepath=x/y fails. Got away with deleting the Click arguments.
- how does this work? https://github.com/drivendata/cookiecutter-data-science/blob/master/%7B%7B%20cookiecutter.repo_name%20%7D%7D/src/data/make_dataset.py#L30
- why is make_dataset pre-filled and build_features empty?
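For what it's worth, the prefilled make_dataset.py is just a Click command-line script: the decorators turn main into a CLI that expects two positional arguments, which is why running it bare fails with "Missing argument". A minimal sketch of the same pattern (script body and message are illustrative, not the template's exact code):

```python
import click


@click.command()
@click.argument("input_filepath", type=click.Path(exists=True))
@click.argument("output_filepath", type=click.Path())
def main(input_filepath, output_filepath):
    """Turn raw data from input_filepath into cleaned data at output_filepath."""
    click.echo(f"making final data set: {input_filepath} -> {output_filepath}")


if __name__ == "__main__":
    main()
```

The fix for make data is therefore not to pass flags through make, but to hardcode the paths in the Makefile rule itself, e.g. `python src/data/make_dataset.py data/raw data/processed`.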
Proposed steps for a tutorial:
- overview that explains components (basically an improved file tree)
- download data
- build features
- train a model and predict data
- use visualize to generate some figures
- adapt
- adapt the Makefile to automatically build necessary files and tie it all together
- generate docs
(I'm still learning and editing on the fly)
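To tie steps like these together, the Makefile can grow one phony target per stage, each calling the corresponding script. A sketch under assumed paths (everything beyond the template's default data target is illustrative; recipe lines must be indented with a tab):

```make
.PHONY: data features train figures

data:
	python src/data/make_dataset.py data/raw data/processed

features: data
	python src/features/build_features.py data/processed data/processed

train: features
	python src/models/train_model.py data/processed models

figures: train
	python src/visualization/visualize.py models reports/figures
```

With this, make figures runs the whole chain in order, and each stage can still be invoked on its own.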
from cookiecutter-data-science.
Preface: Data scientist, not a software engineer.
I wrote up the first steps of using cookiecutter datascience here. If there's some way to make an open document (like a gist?), I wouldn't mind contributing the perspective of someone who has no idea what they're doing.
Some future steps I'd like to add are a git init, setting up some logins in .env, pip freezing requirements into requirements.txt, and using an S3 bucket. Maybe commands for Mac (all the other data science students used Macs) would be nice. Maybe some instructions for venv/conda people.
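Those bootstrap steps might look roughly like this on Mac/Linux (bucket name and paths are hypothetical; the S3 step needs the AWS CLI and configured credentials, so it's left commented out):

```shell
git init                          # put the generated project under version control
python3 -m venv .venv             # or: conda create -n myproject python=3.10
. .venv/bin/activate
pip freeze > requirements.txt     # pin the currently installed packages
# with AWS credentials configured (e.g. via .env or ~/.aws):
# aws s3 sync data/ s3://your-bucket/data/
```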
I have similar questions about make_dataset.py. This template is not simple enough for novice or even intermediate data scientists to figure out. Better documentation that uses all the features of this template would help a lot.
hey @isspek, great timing. I actually released a project containing a minimum working example last weekend (although it's extended/adapted for LaTeX generation, this shouldn't matter).
Should be quite easy to grasp along the example implemented. You can find it here: https://github.com/lorey/data-intensive-latex-documents
BTW: I have the sad feeling that this project has been neglected by the authors. I've been using this for the last two years on several occasions and there has not been any significant update since. I have found no better alternative though.
You can have a look at https://github.com/hackalog/bus_number/ which was from a tutorial we just gave at PyData NYC. There's a sizable framework in src/, but you should be able to see the basic linkage between the Makefile and the various scripts.
BTW: I have the sad feeling that this project has been neglected by the authors. I've been using this for the last two years on several occasions and there has not been any significant update since. I have found no better alternative though.
For context, there is a massive tension between most contributors' wish list ("Feature _______ should be added because in my work I do _____") and keeping the project general.
We tend to keep issues open to promote discussion, but there is a strong rationale for not adding complications, and we encourage people to fork the project for particular use cases.
@pgr-me @mnarayan Thanks for raising this — if you're finding it confusing, there are probably others who are too.
In terms of how to improve the documentation/comments and potentially add to the content about how to use this repo, it'd be helpful if you could share some specifics here about what you found confusing or difficult.
Sorry about the delay in responding - I've been on holiday the past two weeks.
I recommend generalizing this example so that it leverages all the functionality of the cookiecutter-data-science framework. I was able to use this example to meet my needs, but it may be useful for others if you provide step-by-step instructions showing how users can take the default cookiecutter-data-science template and turn it into said example. This could mean showing users how to:
- Use global variables in the Makefile
- Customize commands in the Makefile
- Make use of project rules in the Makefile
- Create and use .env files
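On the .env point: the template's make_dataset.py already calls python-dotenv's load_dotenv(find_dotenv()), so KEY=VALUE lines in a .env at the project root become environment variables at runtime (and .env is gitignored, keeping credentials out of version control). A stdlib-only sketch of what that loading amounts to (variable names are hypothetical):

```python
import os
from pathlib import Path


def load_env(path: str = ".env") -> None:
    """Minimal stand-in for python-dotenv's load_dotenv: read KEY=VALUE
    lines, skip blanks and '#' comments, don't overwrite existing vars."""
    env_file = Path(path)
    if not env_file.exists():
        return
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())
```

After loading, scripts read the values with os.environ.get("S3_BUCKET") or similar.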
With the make_dataset.py main function itself, I found that understanding the Click library was enough. This webcast by the developer of Click is quite good: https://youtu.be/kNke39OZ2k0.
However, the workflow with make is beyond me (not a Unix person yet, but getting there).
I remember you mentioned in the tutorial/presentation/documentation that the idea of using make was inspired by the necessity of building data pipelines. I am an avid user of Airflow, so pipelines are natural to me. So maybe a practical example would be enough, like taking a standard modelling tutorial and piping the analysis through. It could even be a completed or iconic DrivenData competition/tutorial.
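The Airflow analogy holds up well: make is also a DAG engine, just one whose nodes are files and whose edges are prerequisite lists, so a step only reruns when an input is newer than its output. A sketch with hypothetical paths (recipe lines must be indented with a tab):

```make
# Each rule reads: target file: prerequisite files. make walks this DAG
# and reruns a recipe only if a prerequisite is newer than its target.
data/processed/clean.csv: data/raw/raw.csv src/data/make_dataset.py
	python src/data/make_dataset.py data/raw/raw.csv data/processed/clean.csv

models/model.pkl: data/processed/clean.csv src/models/train_model.py
	python src/models/train_model.py data/processed/clean.csv models/model.pkl
```

Editing make_dataset.py and running make models/model.pkl would rebuild both steps; touching only train_model.py would rerun just the training step.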
@lorey Any update on this? Frankly, I couldn't understand how it is supposed to work if we download data from the internet with the make_dataset script, or how the data gets passed into the interim folder.
This is now included in the docs:
https://cookiecutter-data-science.drivendata.org/using-the-template/