
Demos

This repo holds all the Jupyter notebook demos for the examples tab of Ivy's website. Relevant links:

All of the examples should be as comprehensible as possible, using easy-to-follow and attractive visuals (graphs, nicely formatted results, etc.), and follow the general tone of Ivy (feel free to include emojis and don't keep things super serious!).

Given that an internal release of the graph tracer is around the corner, anybody should be able to start working on these examples shortly. So don't worry about not having access to the graph tracer / transpiler code for now; you can start working on the notebook style/story!

If anyone has any questions, feel free to ping me (Guillermo) or use the Community/UX team Discord channel!

Creating a Notebook for Demo

To ensure that similar formats are used across the demo notebooks, a template has been created to help you get started! It is located at assets/01_template.ipynb. Please make a copy of it to start creating a demo!

  1. Firstly, please update the file name to match the topic of your demo. Then, place the notebook in its relevant folder.

  2. Next, please edit the title and description accordingly to ensure that they are rendered correctly in the webpage.

  3. All content should start after the existing template cells, where:

  • The h2 (##) tags are used for section titles.
  • The h3 (###) tags are used for subsection titles.
  • All steps and explanations should use the default text/paragraph (p) style, without any tags.

  4. Include the new notebook path in the corresponding toctree, which is located in index.rst. This ensures that the notebook is rendered in the contents on the left of the webpage.

  5. You may need to add a grid-item-card in the index.rst file to link the notebook to the webpage. Please refer to the existing examples for the format. You may also look into the grid-item-card documentation, or the card documentation on the sphinx-design website.
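The two index.rst steps above can be sketched roughly as follows. The folder, file name, and card title here are placeholders, and the exact directive options may differ; please copy the structure of the existing entries in index.rst rather than this sketch:

```rst
.. toctree::
   :hidden:
   :caption: Examples and Demos

   examples_and_demos/my_new_demo

.. grid-item-card:: My New Demo
   :link: examples_and_demos/my_new_demo
   :link-type: doc

   A one-line description rendered on the card.
```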


Issues

Sarcasm Detection Demo

Objective: Create a machine learning model with ivy that can detect sarcasm in text data. Given the nuanced nature of sarcasm, which often relies on the context and tone, this project presents an intriguing challenge in the field of natural language processing (NLP). The goal is to contribute to more sophisticated text analysis tools that can navigate the complexities of human communication.

Task Details:

  • Dataset: The project will use the sarcasm dataset available on Kaggle, which you can find here: Sarcasm Dataset. This dataset provides a collection of sentences labeled for the presence or absence of sarcasm, offering a foundation for training and testing the sarcasm detection model.

  • Expected Output: Participants are to submit a Jupyter notebook that outlines the sarcasm detection model's development process, including data preprocessing, feature extraction from text, model training, and evaluation. The submission must also include the trained model files.

  • Submission Directory: Place your completed Jupyter notebook and associated model files in the Contributor_demos/Sarcasm Detection subdirectory within the unifyai/demos repository.

How to Contribute:

  1. Fork the unifyai/demos repository to your GitHub account.
  2. Clone the forked repository to your local system.
  3. Create a new branch specifically for your work on the Sarcasm Detection demo.
  4. Develop your model, carefully documenting your approach and findings in the Jupyter notebook.
  5. Store your notebook and model files in the Contributor_demos/Sarcasm Detection directory.
  6. Once your work is ready, push your branch to your forked repository.
  7. Open a Pull Request (PR) to the unifyai/demos repository with a clear title, such as "Sarcasm Detection Demo Submission".

Contribution Guidelines:

  • Ensure your code is well-documented to ease understanding and replication.
  • In your PR, provide a summary of your methodology, key insights, and any significant challenges you encountered, offering insights into your development process.
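As a minimal, non-authoritative sketch of the preprocessing step mentioned above (function name and cleaning rules are illustrative, not a required approach; a real submission would build an ivy model on top of features like these):

```python
import re

def preprocess(text):
    """Lowercase a headline, strip punctuation, and split into tokens."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)  # keep only letters, digits, whitespace
    return text.split()

tokens = preprocess("Yeah, right -- THAT will definitely work!")
# tokens: ['yeah', 'right', 'that', 'will', 'definitely', 'work']
```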

Food Ingredient Recognition Demo

Objective: Build a machine learning model using ivy that can recognize ingredients in food images. This tool aims to assist individuals with dietary restrictions or allergies in quickly identifying ingredients in packaged foods, enhancing their dining safety and convenience.

Task Details:

  • Dataset: The project will utilize the Food-101 dataset, which is available at Food-101 Dataset. This dataset comprises images of 101 food categories, each accompanied by a list of ingredients, providing a rich foundation for developing an ingredient recognition model.

  • Expected Output: Contributors are required to submit a Jupyter notebook detailing the development of the ingredient recognition model. This includes steps such as data preprocessing, model training, and evaluation, with a focus on image processing techniques. Additionally, the submission should include the trained model files.

  • Submission Directory: Place your completed Jupyter notebook and model files in the Contributor_demos/Food Ingredient Recognition subdirectory within the unifyai/demos repository.

How to Contribute:

  1. Fork the unifyai/demos repository to your GitHub account.
  2. Clone the forked repository to your local machine.
  3. Create a new branch specifically for your work on the Food Ingredient Recognition demo.
  4. Develop your model, ensuring to document the methodology and findings comprehensively in the Jupyter notebook.
  5. Save your notebook and model files in the Contributor_demos/Food Ingredient Recognition directory.
  6. Push your branch to your forked repository once your work is complete.
  7. Submit a Pull Request (PR) to the unifyai/demos repository, ensuring your PR title clearly indicates the project, such as "Food Ingredient Recognition Demo Submission".

Contribution Guidelines:

  • Your code should be well-documented to facilitate understanding and replication by others.
  • Summarize your approach, key insights, and any challenges you encountered in your PR description, providing a clear overview of your project journey.
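A tiny sketch of the image preprocessing step described above, assuming NumPy arrays for the Food-101 images (the function name is hypothetical; normalization details may vary with your chosen model):

```python
import numpy as np

def normalize_image(img):
    """Scale uint8 pixel values from [0, 255] to [0, 1] floats for model input."""
    return img.astype(np.float32) / 255.0

img = np.array([[0, 128, 255]], dtype=np.uint8)  # a 1x3 toy "image"
normalized = normalize_image(img)
```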

Predicting Parking Space Availability Demo

Objective: Develop a machine learning model using ivy to predict the availability of parking spaces in urban areas. This model should factor in various elements such as time of day, day of the week, and weather conditions. The project aims to mitigate congestion and enhance traffic flow, offering a practical solution to a common urban challenge.

Task Details:

  • Dataset: Use the dataset provided by the Berkeley DeepDrive (BDD) initiative, available at BDD Data, which includes diverse data that could be relevant for predicting parking space availability. While the dataset encompasses various types of data, focus on extracting and utilizing information pertinent to parking space prediction.

  • Expected Output: Contributors are expected to submit a Jupyter notebook that encapsulates the entire model development lifecycle, including data preprocessing, feature selection, model training, and performance evaluation. Additionally, the trained model files should be included in the submission.

  • Submission Directory: Please place your completed Jupyter notebook and associated model files in the Contributor_demos/Predicting Parking Space Availability subdirectory within the unifyai/demos repository.

How to Contribute:

  1. Start by forking the unifyai/demos repository to your GitHub account.
  2. Clone the forked repository to your local environment.
  3. Create a distinct branch for your contributions related to this specific use case.
  4. Develop your predictive model, ensuring comprehensive documentation within the Jupyter notebook.
  5. Store your notebook and model in the specified Contributor_demos/Predicting Parking Space Availability directory.
  6. After finalizing your work, push the changes to your forked repository.
  7. Initiate a Pull Request (PR) to the unifyai/demos repository, with a clear title like "Predicting Parking Space Availability Demo Submission".

Contribution Guidelines:

  • Your code should be well-documented to ensure clarity and facilitate replication by others.
  • In your PR, include a concise summary of your approach, key findings, and any significant hurdles you overcame during the project.
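The time-of-day and day-of-week factors mentioned in the objective can be turned into model features with a small helper like this (names and feature choices are illustrative only):

```python
from datetime import datetime

def time_features(ts):
    """Extract simple categorical features a demand model might use."""
    return {
        "hour": ts.hour,
        "day_of_week": ts.weekday(),     # 0 = Monday ... 6 = Sunday
        "is_weekend": ts.weekday() >= 5,
    }

feats = time_features(datetime(2024, 3, 16, 14, 30))  # a Saturday afternoon
```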

Energy Consumption Forecasting Demo

Objective: Utilize ivy to develop a machine learning model that forecasts energy consumption in a building or region. By analyzing historical energy usage data, weather patterns, and other pertinent factors, this project aims to facilitate the optimization of energy usage and contribute to cost reduction efforts. This initiative is particularly relevant for those interested in sustainability, urban planning, and smart infrastructure development.

Task Details:

  • Dataset: The project will employ the SmartMeter Energy Use Data in London Households dataset, available at London SmartMeter Energy Use Dataset. This dataset provides detailed energy usage readings in a granular format, coupled with weather conditions and other potentially influential factors, offering a comprehensive base for predictive modeling.

  • Expected Output: Participants are required to submit a Jupyter notebook that thoroughly outlines the data preprocessing steps, exploratory data analysis, feature engineering, model development, training, and evaluation phases. Additionally, the submission should include the final trained model files.

  • Submission Directory: Please ensure that your completed Jupyter notebook and any associated model files are placed within the Contributor_demos/Energy Consumption Forecasting subdirectory of the unifyai/demos repository.

How to Contribute:

  1. Begin by forking the unifyai/demos repository to your own GitHub account.
  2. Clone the forked repository to your local system.
  3. Create a new branch dedicated to your work on the Energy Consumption Forecasting demo.
  4. Develop your forecasting model, ensuring to document each step and decision in the Jupyter notebook comprehensively.
  5. Save your notebook and any related model files in the specified Contributor_demos/Energy Consumption Forecasting directory.
  6. After completing your model, push the changes to your forked repository.
  7. Open a Pull Request (PR) to the unifyai/demos repository with a clear and descriptive title, like "Energy Consumption Forecasting Demo Submission".

Contribution Guidelines:

  • Your code should be well-documented to ensure it is accessible and understandable to others.
  • Include a summary of your approach, key insights, and any challenges you encountered in the PR description to provide context to reviewers and future contributors.
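Before building a full model, it often helps to establish a naive baseline to beat. A moving-average forecast over the half-hourly readings is one common choice (this sketch is an assumption about workflow, not a required deliverable):

```python
def moving_average_forecast(series, window=3):
    """Naive baseline: predict the next value as the mean of the last `window` readings."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    return sum(series[-window:]) / window

readings = [2.1, 2.4, 2.0, 2.6, 2.5]          # toy kWh readings
forecast = moving_average_forecast(readings)  # mean of [2.0, 2.6, 2.5]
```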

Fake News Detection Demo

Objective: Develop a machine learning model capable of distinguishing between real and fake news articles using ivy. This project addresses the critical issue of misinformation in the digital age, employing natural language processing (NLP) techniques to analyze and classify news content. It's an opportunity to contribute to the fight against fake news, leveraging technology to promote the dissemination of accurate information.

Task Details:

  • Dataset: Utilize the dataset available at Fake News Dataset for this task. This dataset contains a collection of news articles along with labels indicating their authenticity, providing a solid foundation for training and evaluating your model.

  • Expected Output: Your contribution should include a Jupyter notebook detailing your model's development process, encompassing data preprocessing, feature engineering, model training, and evaluation stages. Additionally, include the trained model files in the corresponding directory.

  • Submission Directory: Please submit your Jupyter notebook and model files in the Contributor_demos/Fake News Detection subdirectory within the unifyai/demos repository.

How to Contribute:

  1. Fork the unifyai/demos repository to your GitHub account.
  2. Clone the forked repository to your local system.
  3. Create a new branch dedicated to your work on this specific use case.
  4. Develop your model, ensuring to document the process comprehensively in the Jupyter notebook.
  5. Place your notebook and model files in the Contributor_demos/Fake News Detection directory.
  6. Upon completion, push your contributions to your forked repository.
  7. Open a Pull Request (PR) to the unifyai/demos repository with a clear and descriptive title, such as "Fake News Detection Demo Submission".

Contribution Guidelines:

  • Make sure your code is thoroughly documented to facilitate understanding and replication by others.
  • Summarize your methodology, significant discoveries, and any challenges you encountered in your PR description, providing valuable insights into your project.
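One simple way to start the feature engineering stage described above is a bag-of-words count vector over a fixed vocabulary (the vocabulary and function here are purely illustrative; real submissions may prefer TF-IDF or embeddings):

```python
from collections import Counter

def bag_of_words(tokens, vocabulary):
    """Map a token list onto a fixed vocabulary as a vector of counts."""
    counts = Counter(tokens)
    return [counts[word] for word in vocabulary]

vocab = ["breaking", "shocking", "reuters", "report"]
vec = bag_of_words(["shocking", "report", "shocking"], vocab)
# vec: [0, 2, 0, 1]
```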

Chest X-Ray Images (Pneumonia) Detection Demo (Intermediate)

Objective: Construct a machine learning model using ivy that can accurately diagnose pneumonia from chest X-ray images. This project seeks to harness the capabilities of deep learning in medical imaging, offering a tool that could potentially assist radiologists and healthcare providers in the early detection and treatment of pneumonia.

Task Details:

  • Dataset: This project will utilize the Chest X-Ray Images (Pneumonia) dataset available on Kaggle, which can be found here: Chest X-Ray Images (Pneumonia) Dataset. The dataset contains a large number of X-ray images categorized into pneumonia and normal conditions, providing a valuable resource for training and testing the diagnostic model.

  • Expected Output: Contributors are expected to deliver a Jupyter notebook that outlines the model development process in detail, from image preprocessing and augmentation to model training, validation, and testing. The submission should also include the trained model files.

  • Submission Directory: Please place your completed Jupyter notebook and any related model files in the Contributor_demos/Chest X-Ray Images (Pneumonia) subdirectory within the unifyai/demos repository.

How to Contribute:

  1. Fork the unifyai/demos repository to your GitHub account.
  2. Clone your forked repository to your local machine.
  3. Create a new branch specifically for your work on the Chest X-Ray Images (Pneumonia) Detection demo.
  4. Develop your model, ensuring comprehensive documentation of the process in the Jupyter notebook.
  5. Save your notebook and model files in the Contributor_demos/Chest X-Ray Images (Pneumonia) directory.
  6. Once your work is complete, push your branch to your forked repository.
  7. Submit a Pull Request (PR) to the unifyai/demos repository, clearly indicating the project in the title, such as "Chest X-Ray Images (Pneumonia) Detection Demo Submission".

Contribution Guidelines:

  • Ensure your code is well-documented, facilitating understanding and ease of replication.
  • Summarize your approach, key insights, and any significant challenges encountered in the PR description, providing a clear overview of your project journey.

Review and Feedback:

Submissions will be reviewed on an ongoing basis. Feedback or requests for modifications will be communicated through the PR discussion. The project maintainers will merge your contribution once it aligns with the project's standards and objectives, making a valuable contribution to the application of machine learning in healthcare.

This project offers a significant opportunity to impact public health positively by advancing the capabilities of AI in medical diagnostics. We look forward to your innovative solutions and contributions towards improving pneumonia detection through deep learning.
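The augmentation step mentioned in the expected output can be as simple as mirroring each X-ray; here is a minimal NumPy sketch (the function is hypothetical, and real pipelines typically add rotations, crops, and intensity jitter):

```python
import numpy as np

def augment_flip(img):
    """Return the image and its horizontal mirror, a common augmentation pair."""
    return [img, img[:, ::-1]]

img = np.array([[1, 2],
                [3, 4]])
original, flipped = augment_flip(img)
# flipped: [[2, 1], [4, 3]]
```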

Plant Disease Recognition Demo

Objective: Develop a machine learning model capable of identifying plant diseases from images of leaves using ivy. This project aims to offer a valuable tool for farmers, enabling early disease detection and the implementation of preventive measures.

Task Details:

  • Dataset: For this task, please utilize the PlantVillage Dataset, which is accessible at this link: PlantVillage Dataset. This dataset contains a vast collection of leaf images representing different plant diseases, which will serve as the foundation for training your model.

  • Expected Output: Your submission should include a Jupyter notebook that demonstrates the model development process, including data preprocessing, model training, and evaluation. Additionally, include the trained model files in the same directory.

  • Submission Directory: Please place your completed Jupyter notebook and model files in the Contributor_demos/Plant Disease Recognition subdirectory within the unifyai/demos repository.

How to Contribute:

  1. Fork the unifyai/demos repository to your GitHub account.
  2. Clone your forked repository to your local machine.
  3. Create a new branch for your work on this specific use case.
  4. Develop your model and prepare the Jupyter notebook as per the task details.
  5. Place your work in the Contributor_demos/Plant Disease Recognition subdirectory.
  6. Once you've completed your work, push your changes to your forked repository.
  7. Submit a Pull Request (PR) from your repository to the unifyai/demos repository. Ensure your PR title clearly indicates the use case (e.g., "Plant Disease Recognition Demo Submission").

Contribution Guidelines:

  • Ensure your code is well-documented, making it easy for others to understand your methodology.
  • Provide a brief explanation within your PR description, summarizing the approach you took and any significant findings or challenges you encountered.
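For the evaluation stage described above you will need a held-out validation set; a deterministic shuffle-and-split helper is one simple way to create it (names and the 80/20 ratio are assumptions, not requirements):

```python
import random

def train_val_split(items, val_fraction=0.2, seed=42):
    """Shuffle deterministically, then split into train and validation lists."""
    items = list(items)
    rng = random.Random(seed)
    rng.shuffle(items)
    n_val = int(len(items) * val_fraction)
    return items[n_val:], items[:n_val]

train, val = train_val_split(range(100))  # e.g. 100 leaf-image paths
```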

Predicting Taxi Demand Demo

Objective: Leverage ivy to build a predictive model that forecasts taxi demand in a city, integrating historical taxi usage data, weather conditions, and city events. This initiative aims to assist taxi companies in optimizing fleet management and enhancing customer service by anticipating demand fluctuations.

Task Details:

  • Dataset: The project will use the City of Chicago's Taxi Trips dataset, accessible at City of Chicago Taxi Trips 2013-2023. This dataset provides a comprehensive record of taxi trips, including timestamps and trip details, which, combined with weather data and event schedules, can form a solid basis for demand prediction.

  • Expected Output: Participants are expected to deliver a Jupyter notebook that outlines the entire model development process. This process includes data cleaning and preprocessing, exploratory data analysis, feature engineering, model training, and evaluation. The trained model files should also be included in the submission.

  • Submission Directory: Ensure that your completed Jupyter notebook and associated model files are placed within the Contributor_demos/Predicting Taxi Demand subdirectory of the unifyai/demos repository.

How to Contribute:

  1. Fork the unifyai/demos repository to your GitHub account.
  2. Clone the forked repository to your local machine.
  3. Create a new branch dedicated to your work on the Predicting Taxi Demand demo.
  4. Develop your predictive model, documenting the process thoroughly in the Jupyter notebook.
  5. Store your notebook and model files in the specified Contributor_demos/Predicting Taxi Demand directory.
  6. After finalizing your model, push the changes to your forked repository.
  7. Open a Pull Request (PR) to the unifyai/demos repository with a clear and descriptive title, such as "Predicting Taxi Demand Demo Submission".

Contribution Guidelines:

  • Ensure your code is well-documented for clarity and ease of understanding by others.
  • Provide a concise summary of your approach, significant findings, and any obstacles encountered in the PR description to offer insights into your project journey.
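The prediction target implied above, trips per hour, can be derived from raw trip timestamps with a small aggregation like this (a sketch only; the real dataset's columns and granularity may differ):

```python
from collections import Counter
from datetime import datetime

def trips_per_hour(timestamps):
    """Count pickups per (date, hour) bucket, the quantity a demand model predicts."""
    return Counter((ts.date(), ts.hour) for ts in timestamps)

pickups = [
    datetime(2023, 5, 1, 8, 15),
    datetime(2023, 5, 1, 8, 40),
    datetime(2023, 5, 1, 9, 5),
]
demand = trips_per_hour(pickups)
```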

Music Genre Classification Demo

Objective: Develop a machine learning model capable of classifying music into different genres based on audio features such as tempo, pitch, and timbre using ivy. This project is an exciting opportunity for music enthusiasts to combine their passion for music with machine learning to create a model that can help in understanding and categorizing music more effectively.

Task Details:

  • Dataset: For this task, you'll be using the GTZAN Dataset, available at this link: GTZAN Dataset for Music Genre Classification. This dataset includes audio files across various music genres, providing a diverse range of audio features for training your model.

  • Expected Output: Your submission should include a Jupyter notebook that outlines the model development process, including data preprocessing, feature extraction from audio files, model training, and evaluation. Please also include the trained model files alongside the notebook.

  • Submission Directory: Place your completed Jupyter notebook and model files in the Contributor_demos/Music Genre Classification subdirectory within the unifyai/demos repository.

How to Contribute:

  1. Fork the unifyai/demos repository to your GitHub account.
  2. Clone your forked repository to your local machine.
  3. Create a new branch specifically for your work on this use case.
  4. Proceed to develop your model and document the process in the Jupyter notebook as outlined in the task details.
  5. Save your notebook and model files in the Contributor_demos/Music Genre Classification directory.
  6. After completing your work, push the changes to your forked repository.
  7. Submit a Pull Request (PR) to the unifyai/demos repository with a clear title indicating the use case, such as "Music Genre Classification Demo Submission".

Contribution Guidelines:

  • Ensure your code is clearly documented for ease of understanding and replication.
  • In your PR description, provide a concise summary of your approach, key insights gained, and any obstacles you overcame during the project.
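One of the audio features mentioned in the objective, timbre, is often approximated by simple statistics such as the zero-crossing rate; a minimal NumPy sketch (illustrative only, and libraries like librosa offer richer features):

```python
import numpy as np

def zero_crossing_rate(signal):
    """Fraction of consecutive samples whose sign differs; higher for noisier signals."""
    signs = np.sign(signal)
    crossings = np.count_nonzero(np.diff(signs))
    return crossings / (len(signal) - 1)

wave = np.array([0.5, -0.2, 0.3, 0.4, -0.1])  # toy audio samples
zcr = zero_crossing_rate(wave)
# zcr: 0.75 (3 sign changes across 4 sample transitions)
```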

Fix ordering of notebooks

Currently the list of notebooks uses the paths (which is the intended order), but the navigation at the bottom of the page uses the names (which is obviously a different order).

Credit Card Fraud Detection Demo (Intermediate)

Objective: Develop a machine learning model with ivy to detect fraudulent transactions in credit card data. This project aims to address the critical issue of financial fraud, leveraging advanced analytics to identify suspicious activities.

Task Details:

  • Dataset: The project will utilize the Credit Card Fraud Detection dataset available on Kaggle, accessible here: Credit Card Fraud Detection Dataset. This dataset contains transactions made by credit cards in September 2013 by European cardholders. Transactions are labeled as fraudulent or genuine, providing a comprehensive basis for developing a detection model.

  • Expected Output: Contributors are expected to submit a Jupyter notebook that comprehensively documents the fraud detection model's development process. This includes data exploration, preprocessing, feature engineering, model training, and evaluation stages. The submission should also include the trained model files.

  • Submission Directory: Your completed Jupyter notebook and model files should be placed in the Contributor_demos/Credit Card Fraud Detection subdirectory within the unifyai/demos repository.

How to Contribute:

  1. Fork the unifyai/demos repository to your GitHub account.
  2. Clone the forked repository to your local machine.
  3. Create a new branch specifically for your work on the Credit Card Fraud Detection demo.
  4. Develop your model, ensuring to document each step in the Jupyter notebook, from data analysis to the application of machine learning algorithms for fraud detection.
  5. Save your notebook and model files in the Contributor_demos/Credit Card Fraud Detection directory.
  6. Push your branch to your forked repository once your work is complete.
  7. Submit a Pull Request (PR) to the unifyai/demos repository, making sure your PR title clearly indicates the project, such as "Credit Card Fraud Detection Demo Submission".

Contribution Guidelines:

  • Make sure your code is well-documented to facilitate understanding and replication by others.
  • Provide a summary of your methodology, significant findings, and any challenges you faced in the PR description, offering insights into your development process.

Review and Feedback:

Submissions will be reviewed on a rolling basis. Feedback or requests for modifications will be communicated through the PR discussion. Your contribution will be merged once it meets the project's standards and objectives, contributing significantly to the fight against financial fraud.

This project offers a meaningful opportunity to apply machine learning to a real-world problem with significant societal impact. We eagerly anticipate your innovative solutions and contributions to enhancing the security of financial transactions.
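Fraud datasets like this one are heavily imbalanced (genuine transactions vastly outnumber fraudulent ones), so the preprocessing stage usually rebalances the classes. A minimal random-undersampling sketch, with hypothetical stand-in data:

```python
import random

def undersample(majority, minority, seed=0):
    """Randomly drop majority-class rows so both classes are the same size."""
    rng = random.Random(seed)
    kept = rng.sample(majority, len(minority))
    return kept + list(minority)

genuine = list(range(1000))   # stand-ins for genuine transaction rows
fraud = ["f1", "f2", "f3"]    # stand-ins for fraudulent rows
balanced = undersample(genuine, fraud)
```

Oversampling the minority class (e.g. SMOTE) is a common alternative when dropping data is too costly.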

ToDo list of examples

This issue lists all the examples that we should work on, if you have suggestions about new examples you can post them here.

If you want to work on any of these examples, please create an issue with the same name as the list-item bold name and edit this comment to replace the name with the issue link so that everybody can know who is working on what 😄 .

When creating a new notebook, please use the name of the list item in snake case (e.g. First Example with Ivy -> first_example_with_ivy.ipynb).

A few points:

  • There will be tags classifying notebooks into those related to "Ivy as a framework" and those related to "Ivy as a transpiler"
  • Notebooks with 🚨 should be prioritized!
  • Notebooks in learn the basics should be extremely short and to the point, tutorials should be mid-length, with examples and demos probably being more lengthy and involved

  • 🚨 Quickstart (framework)(transpiler)

Learn the basics

  • 🚨 Write framework-agnostic code (framework)
  • 🚨 Unify code (transpiler)
  • 🚨 Compile code (framework)(transpiler)
  • 🚨 Transpile code (transpiler)
  • 🚨 Lazy vs Eager (framework)(transpiler)
  • 🚨 How to use decorators (framework)(transpiler)
  • 🚨 Transpile any library (transpiler)
  • 🚨 Transpile any model (transpiler)
  • 🚨 Write a module using Ivy (framework)

Tutorials

  • 🚨 Developing a convolutional network using Ivy (framework)
  • Developing a transformer using Ivy (framework)
  • 🚨 Transpiling a torch model to build on top of it (transpiler)
  • #15
  • 🚨 Transpiling a haiku model to build on top of it (transpiler)

Examples and demos

  • 🚨 #6
  • 🚨 (?) Developing DeepMind's PerceiverIO using Ivy (framework) (?)
  • Deploying a PyTorch model using TFLite (transpiler)
  • Running a timm model in JAX using TPUs (transpiler)
  • Benchmarking performance - Benchmark of the performance of a complex model after transpiling it to different backends (transpiler)
  • Using fvcore to measure FLOPs in a keras model - this example should explore these kinds of tools and showcase how transpiling models can help compare them in a fairer and easier way (e.g. compare various models in different frameworks, calculating the FLOPs with the same tool) (transpiler)
  • Transpiling a model from Hugging Face - add something else here (transpiler)

To be categorized (probably in the basics):

  • Debugging graphs (framework)(transpiler)
  • Exploring the graph (show_graph) (framework)(transpiler)

Bird Species Identification Demo

Objective: Develop a machine learning model using ivy to accurately identify various bird species from audio recordings of their songs. This project aims to aid bird watchers and conservationists by providing a tool that enhances the understanding and identification of bird species through their unique vocalizations.

Task Details:

  • Dataset: Utilize the BirdCLEF 2021 dataset, which is available at BirdCLEF 2021 Dataset. This dataset comprises a comprehensive collection of bird song recordings from multiple species, serving as a rich resource for training your model to recognize and differentiate bird calls effectively.

  • Expected Output: Contributors should submit a Jupyter notebook that clearly documents the journey of model development, from data preprocessing and audio feature extraction to model training and evaluation. The submission must also include the trained model files.

  • Submission Directory: Your complete Jupyter notebook and model files should be placed in the Contributor_demos/Bird Species Identification subdirectory within the unifyai/demos repository.

How to Contribute:

  1. Fork the unifyai/demos repository to your GitHub account.
  2. Clone your forked repository to your local machine.
  3. Create a new branch specifically for your work on the Bird Species Identification demo.
  4. Develop your model, ensuring to document each step in the Jupyter notebook, from analyzing audio data to applying machine learning techniques for species identification.
  5. Save your finalized notebook and model files in the Contributor_demos/Bird Species Identification directory.
  6. Push your branch to your forked repository upon completion.
  7. Submit a Pull Request (PR) to the unifyai/demos repository with a title that clearly reflects the project, such as "Bird Species Identification Demo Submission".

Contribution Guidelines:

  • Make sure your code is thoroughly documented to ensure ease of understanding and replication.
  • In your PR description, summarize your methodology, the insights you've gained, and any challenges you've faced during the development process.
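The audio feature extraction step mentioned above typically starts by splitting each recording into short overlapping frames; here is a minimal sketch with toy integer "samples" standing in for audio (names and frame sizes are illustrative only):

```python
def frame_signal(samples, frame_size, hop):
    """Split a sample list into overlapping frames for per-frame feature extraction."""
    frames = []
    for start in range(0, len(samples) - frame_size + 1, hop):
        frames.append(samples[start:start + frame_size])
    return frames

frames = frame_signal(list(range(10)), frame_size=4, hop=2)
# frames: [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```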

Artwork Style Recognition Demo

Objective: Develop a machine learning model using ivy that classifies artwork into different artistic styles, such as impressionism, surrealism, and cubism, by analyzing images of paintings. This project is an engaging opportunity for art enthusiasts and historians to merge their passion for art with the power of machine learning, enhancing our understanding and appreciation of various art movements.

Task Details:

  • Dataset: The project will utilize the dataset available through the "Painter by Numbers" competition on Kaggle, which can be found here: Painter by Numbers Dataset. This dataset includes images of paintings from various artists and styles, providing a rich basis for developing and training your model.

  • Expected Output: Contributors are expected to submit a Jupyter notebook that documents the process of model development, including data preprocessing, feature extraction from images, model training, and evaluation. The trained model files should also be included with your submission.

  • Submission Directory: Your completed Jupyter notebook and model files should be placed in the Contributor_demos/Artwork Style Recognition subdirectory within the unifyai/demos repository.

How to Contribute:

  1. Fork the unifyai/demos repository to your GitHub account.
  2. Clone the forked repository to your local machine.
  3. Create a new branch specifically for your work on the Artwork Style Recognition demo.
  4. Proceed to develop your model, making sure to thoroughly document your methodology in the Jupyter notebook.
  5. Place your completed notebook and model files in the Contributor_demos/Artwork Style Recognition directory.
  6. Push your branch to your forked repository once you've completed your work.
  7. Submit a Pull Request (PR) to the unifyai/demos repository, ensuring your PR title clearly reflects the project, such as "Artwork Style Recognition Demo Submission".

Contribution Guidelines:

  • Ensure your code is well-documented to facilitate ease of understanding and replication by others.
  • Provide a brief explanation in your PR description, summarizing your approach, any significant findings, and challenges faced during the project.
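A very simple style descriptor for the feature-extraction step described above is a per-channel color histogram (a sketch under assumed NumPy image arrays; deep CNN features would likely work far better in practice):

```python
import numpy as np

def color_histogram(img, bins=4):
    """Per-channel intensity histogram, normalized to sum to 1 per channel."""
    hists = []
    for channel in range(img.shape[-1]):
        h, _ = np.histogram(img[..., channel], bins=bins, range=(0, 256))
        hists.append(h / h.sum())
    return np.concatenate(hists)

img = np.zeros((8, 8, 3), dtype=np.uint8)  # a tiny all-black "painting"
feat = color_histogram(img)
```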
