Comments

lorey commented on July 20, 2024 (+14)

Like the others here, I have problems figuring out how to get started and how to integrate this into my workflow.

Regarding your question, @isms: I think a small sample project with all the necessary steps (from preprocessing to model) already implemented would be very beneficial for beginners. No need to do anything fancy, just derive some features and train a decision tree, for example using the Titanic Kaggle challenge that most people should be familiar with: https://www.kaggle.com/c/titanic/data. Or even easier: integrate an example within the base project. If you're new, it helps you get started; if you're experienced, you'll have no problem deleting it.

Once I understand it well enough and am able to use the project, I'm going to give it a shot. A simple example repository should only take me a few hours.

Things that I could not figure out right away:

Proposed steps for a tutorial:

  • overview that explains components (basically an improved file tree)
  • download data
  • build features
  • train a model and predict data (a rough sketch of this step follows the list)
  • use visualize to generate some figures
  • adapt Makefile to automatically build necessary files and tie it all together
  • generate docs
    (I'm still learning and editing on the fly)
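
To make the "train a model and predict data" step concrete, the Titanic version could boil down to something like the sketch below. This is only my rough idea of the tutorial content, assuming the Kaggle CSVs have been placed in data/raw/ and that pandas and scikit-learn are available; the file paths and feature choices are illustrative, not part of the template.

```python
# Rough sketch of the "build features" and "train a model and predict data"
# steps for the Titanic example, assuming the Kaggle CSVs sit in data/raw/.
# File paths, feature choices, and the use of scikit-learn are illustrative,
# not part of the template.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

train = pd.read_csv("data/raw/train.csv")

# Derive a couple of simple features: sex as an integer, missing ages filled in.
train["Sex"] = (train["Sex"] == "female").astype(int)
train["Age"] = train["Age"].fillna(train["Age"].median())

features = ["Pclass", "Sex", "Age", "Fare"]
X, y = train[features], train["Survived"]

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```

The other steps (download data, visualize, Makefile wiring) would each get a similarly small script.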

trail-coffee commented on July 20, 2024 (+8)

Preface: Data scientist, not a software engineer.

I wrote up the first steps of using cookiecutter-data-science here. If there's some way to make an open document (like a gist?), I wouldn't mind contributing the perspective of someone who has no idea what they're doing.

Some future steps I'd like to add are running git init, setting up some logins in .env, pip freezing requirements into requirements.txt, and using an S3 bucket. Commands for Mac (all the other data science students used Macs) would be nice, and maybe some instructions for venv/conda people.
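
For the .env and S3 parts, what I have in mind is roughly the following sketch. It assumes python-dotenv and boto3 are installed; the variable names, bucket, and key are made up for illustration.

```python
# Rough sketch of the ".env logins" and "S3 bucket" ideas, assuming
# python-dotenv and boto3 are installed. The variable names, bucket, and key
# are made up for illustration.
import os

import boto3
from dotenv import load_dotenv

load_dotenv()  # read key=value pairs from a .env file into the environment

s3 = boto3.client(
    "s3",
    aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID"),
    aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY"),
)
s3.download_file("my-project-bucket", "raw/train.csv", "data/raw/train.csv")
```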

mnarayan commented on July 20, 2024 (+4)

I have similar questions about make_dataset.py. This template is not simple enough for novice or even intermediate data scientists to figure out. Better documentation that exercises all the features of this template would help a lot.

lorey commented on July 20, 2024 (+2)

Hey @isspek, great timing. I actually released a project containing a minimum working example last weekend (it's extended/adapted for LaTeX generation, but that shouldn't matter).

It should be quite easy to grasp by following the example implemented there. You can find it here: https://github.com/lorey/data-intensive-latex-documents

BTW: I have the sad feeling that this project has been neglected by the authors. I've been using this for the last two years on several occasions and there has not been any significant update since. I have found no better alternative though.

hackalog commented on July 20, 2024 (+1)

You can have a look at https://github.com/hackalog/bus_number/ which was from a tutorial we just gave at PyData NYC. There's a sizable framework in src/, but you should be able to see the basic linkage between the Makefile and the various scripts.

isms commented on July 20, 2024 (+1)

> BTW: I have the sad feeling that this project has been neglected by the authors. I've been using this for the last two years on several occasions and there has not been any significant update since. I have found no better alternative though.

For context, there is a massive tension between most contributors' wish list ("Feature _______ should be added because in my work I do _____") and keeping the project general.

We tend to keep issues open to promote discussion, but there is a strong rationale for not adding complications, and we encourage people to fork the project for particular use cases.

isms commented on July 20, 2024

@pgr-me @mnarayan Thanks for raising this — if you're finding it confusing, there are probably others who are too.

In terms of how to improve the documentation/comments and potentially add to the content about how to use this repo, it'd be helpful if you could share some specifics here about what you found confusing or difficult.

pgr-me commented on July 20, 2024

Sorry about the delay in responding - I've been on holiday the past two weeks.

I recommend generalizing this example so that it leverages all the functionality of the cookiecutter-data-science framework. I was able to use this example to meet my needs, but it may be useful for others if you provide step-by-step instructions showing how users can take the default cookiecutter-data-science framework and turn it into said example. This could mean showing users how to:

  • Use global variables in the Makefile
  • Customize commands in the Makefile
  • Make use of project rules in the Makefile
  • Create and use .env files (a rough sketch of what this could look like is below)
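
For the .env bullet, for example, the docs could show something as small as this sketch. It assumes python-dotenv is installed, and BUCKET is a made-up variable name; the point is just that the same script picks up a setting whether it was exported by a make rule or loaded from a .env file.

```python
# Tiny sketch for the ".env files" bullet: a script that picks up a setting
# from the environment, whether it was exported by a make rule or loaded from
# a .env file at the project root. BUCKET is a made-up variable name, and
# python-dotenv is assumed to be installed.
import os

from dotenv import find_dotenv, load_dotenv

load_dotenv(find_dotenv())  # walks up from the current directory to find .env
bucket = os.getenv("BUCKET", "local-fallback")
print(f"syncing data from s3://{bucket}/ ...")
```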

GuiMarthe commented on July 20, 2024

With the main function of make_dataset.py itself, I found that understanding the Click library was enough. This webcast by the developer of Click is quite good: https://youtu.be/kNke39OZ2k0
However, the workflow with make is beyond me (not a unix person yet, but getting there).

I remember you mentioned in the tutorial/presentation/documentation that the idea of using make was inspired by the need to build data pipelines. I am an avid user of Airflow, so pipelines are natural to me. Maybe a practical example would be enough, like taking a standard modelling tutorial and piping the analysis through it. It could even be a completed or iconic DrivenData competition/tutorial.
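
For anyone else puzzled by it, the main function boils down to a small Click command along these lines (my own stripped-down sketch; treat the argument names and types as illustrative rather than authoritative):

```python
# Stripped-down sketch of a Click entry point in the style of make_dataset.py;
# the argument names mirror the template as I remember it, but the details
# are illustrative rather than authoritative.
import click


@click.command()
@click.argument("input_filepath", type=click.Path(exists=True))
@click.argument("output_filepath", type=click.Path())
def main(input_filepath, output_filepath):
    """Turn raw data from INPUT_FILEPATH into a cleaned dataset at OUTPUT_FILEPATH."""
    click.echo(f"processing {input_filepath} -> {output_filepath}")


if __name__ == "__main__":
    main()
```

A make rule then only has to call something like `python src/data/make_dataset.py data/raw data/processed`: make decides when the script runs, and Click parses the paths.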

isspek commented on July 20, 2024

@lorey Any update on this? Frankly, I couldn't understand how it is supposed to work if we download data from the internet with the make_dataset script, or how the data would be passed into the interim folder.
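
What I would expect is something like the sketch below, where one step downloads into data/raw/ and the next writes its output into data/interim/, but I can't tell from the docs whether that's the intended pattern (the URL and file names here are placeholders):

```python
# Rough sketch (not the template's actual code): one step downloads a file
# into data/raw/, the next reads it and writes a cleaned version to
# data/interim/. The URL and file names are placeholders.
from pathlib import Path
from urllib.request import urlretrieve

import pandas as pd

RAW = Path("data/raw/dataset.csv")
INTERIM = Path("data/interim/dataset_clean.csv")


def download(url: str) -> None:
    RAW.parent.mkdir(parents=True, exist_ok=True)
    urlretrieve(url, RAW)  # fetch the raw file from the internet


def make_interim() -> None:
    INTERIM.parent.mkdir(parents=True, exist_ok=True)
    df = pd.read_csv(RAW)
    df.dropna().to_csv(INTERIM, index=False)  # placeholder "cleaning" step


if __name__ == "__main__":
    download("https://example.com/dataset.csv")  # placeholder URL
    make_interim()
```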

pjbull commented on July 20, 2024

This is now included in the docs:
https://cookiecutter-data-science.drivendata.org/using-the-template/
