
evolutionary_computing's Introduction

💫 About Me:


Hi, I’m Haleshot, a final-year B.Tech Artificial Intelligence student. I enjoy projects involving ML, AI, DL, CV, NLP, image processing, and related areas.

Currently exploring Python, FastAPI, AI-focused projects, and platforms such as Hugging Face and Kaggle.

🌐 Socials:

LinkedIn Twitch Twitter YouTube

💻 Tech Stack:

Programming Languages:

C++ Python

Frameworks and Libraries:

Qt NumPy Pandas scikit-learn SciPy Matplotlib OpenCV

Frontend:

HTML5 CSS3

Backend/Databases:

MySQL

Editors/IDEs:

Google Colab PyCharm Visual Studio Visual Studio Code Jupyter Notebook

Version Control:

Git GitHub

Operating Systems:

Windows Ubuntu

Other:

Notion

📊 GitHub Stats:

GitHub stats, streak stats, and top-languages cards.

Projects


evolutionary_computing's People

Contributors

avneesh777, haleshot


evolutionary_computing's Issues

Generate Sphinx Documentation

Problem Description

The project contains function block explanations in the Sphinx comment format, but the Sphinx documentation has not been generated yet. Without proper documentation, it can be challenging for users to understand the functionality of the codebase and how to use the provided functions effectively.

Expected Behavior

Generating Sphinx documentation will provide clear and comprehensive documentation for the project, enabling users to understand the purpose of each function, its parameters, and return values. This documentation will serve as a valuable resource for developers and users alike, facilitating easier integration and usage of the codebase.

Proposed Solution

To generate Sphinx documentation for the project:

  1. Install Sphinx and necessary dependencies if not already installed.
  2. Configure Sphinx settings and directories to point to the relevant codebase.
  3. Run the Sphinx documentation generation command to parse the codebase and generate HTML or other documentation formats.
  4. Review and refine the generated documentation as needed to ensure clarity and completeness.
  5. Publish the documentation on a suitable platform or include it within the project repository for easy access.
  6. As the project will primarily be hosted via gh-pages, ensure that the generated documentation exists as a separate branch (e.g., gh-pages branch) for easy deployment and accessibility.
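
A minimal sketch of what steps 1-3 and 6 could look like, assuming a conventional docs/ layout, the autodoc/napoleon extensions, and the ghp-import tool for publishing to a gh-pages branch (all paths and options below are illustrative):

```python
# docs/conf.py -- minimal Sphinx configuration sketch (paths/names are placeholders)
import os
import sys

sys.path.insert(0, os.path.abspath(".."))  # make the project package importable for autodoc

project = "evolutionary_computing"
extensions = [
    "sphinx.ext.autodoc",    # pull documentation out of the existing docstrings
    "sphinx.ext.napoleon",   # parse Google/NumPy-style docstring sections
    "sphinx.ext.viewcode",   # link the docs back to highlighted source
]
html_theme = "alabaster"

# Typical commands, run from the repository root:
#   pip install sphinx ghp-import
#   sphinx-quickstart docs                        # steps 1-2: scaffold docs/ and conf.py
#   sphinx-apidoc -o docs/source .                # step 2: generate .rst stubs from the code
#   sphinx-build -b html docs docs/_build/html    # step 3: build the HTML documentation
#   ghp-import -n -p -f docs/_build/html          # step 6: publish to a gh-pages branch
```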

Impact

  • Severity: Medium
  • Priority: High

Generating Sphinx documentation is crucial for enhancing the usability and maintainability of the project. Clear documentation will streamline the onboarding process for new contributors and users, improving the overall accessibility and adoption of the project.

Enhancement for Graph Generation and Output Presentation

Problem Description

Currently, the final output displays graphs generated via st.pyplot within a loop for each generation. However, the output lacks clarity as there is no clear separation between the graphs and their corresponding accuracy and F1 score for each iteration. Additionally, the user needs to scroll through the code explanation to reach the output section, which can be inconvenient.

Expected Behavior

To improve user experience and output clarity, the following enhancements are proposed:

  1. Each graph generated via iteration should be accompanied by its corresponding accuracy and F1 score, with a clear separator to distinguish between iterations.
  2. Consider generating the final output in a separate webpage altogether, providing a more organized and user-friendly interface.
  3. Instead of displaying the standard Streamlit "running" indicator, consider implementing a splash screen or loading animation to indicate progress during graph generation.

Proposed Solution

To address these issues, the final code can be modified to include:

  • A clear separator (e.g., st.markdown('---')) between each graph and its corresponding accuracy and F1 score.
  • Utilizing Streamlit's st.spinner or custom loading animation components to indicate progress during graph generation.
  • Investigating options to generate the final output in a separate webpage or section for improved organization and navigation.
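
A hedged sketch of the proposed presentation, with a placeholder run_generation helper standing in for the project's real per-generation training step:

```python
import numpy as np
import streamlit as st
import matplotlib.pyplot as plt

def run_generation(gen):
    """Placeholder for the project's real per-generation training step."""
    fig, ax = plt.subplots()
    ax.plot(np.random.rand(10))                      # dummy curve standing in for the real plot
    return fig, np.random.rand(), np.random.rand()   # figure, accuracy, F1 score

n_generations = 5  # placeholder; the real bound comes from the algorithm configuration

with st.spinner("Running optimization..."):          # progress indicator instead of the bare "running" state
    for gen in range(n_generations):
        fig, accuracy, f1 = run_generation(gen)
        st.subheader(f"Generation {gen + 1}")
        st.pyplot(fig)
        col1, col2 = st.columns(2)
        col1.metric("Accuracy", f"{accuracy:.3f}")
        col2.metric("F1 score", f"{f1:.3f}")
        st.markdown("---")                           # clear separator between iterations
```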

Impact

  • Severity: Medium
  • Priority: High

Implementing these enhancements will significantly improve the clarity and usability of the final output, making it easier for users to interpret and analyze the results of the algorithm. These changes will enhance the overall user experience and satisfaction with the application.

Add Column Headings to Dataset Files for Improved Visualization

Description:

Currently, the datasets used in the project lack column headings in their respective .data and .txt files. Adding column headings as the first line of each file will enhance the visualization process, especially when utilizing the pygwalker library for visualization tasks.

Action Items:

  • Add column headings as the first line of each .data and .txt file in the dataset folders.
  • Ensure that the column headings accurately represent the features or attributes of the dataset.
  • Verify that the addition of column headings does not disrupt the existing data structure.
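
As a small illustration (using pandas and the Iris file; the path, separator, and column names must be adapted for each dataset), a heading row could be prepended like this:

```python
import pandas as pd

# Illustrative only: the Iris file with its well-known attribute names.
# Adjust the path, separator, and column names for each .data/.txt file.
columns = ["sepal_length", "sepal_width", "petal_length", "petal_width", "class"]

df = pd.read_csv("datasets/iris.data", header=None, names=columns)
df.to_csv("datasets/iris.data", index=False)  # rewrites the file with the headings as the first line
```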

Datasets Affected:

  • Wisconsin Breast Cancer Diagnostic Data Set (WBCD)
  • Fertility Data Set (Fertility)
  • Haberman’s Survival Data Set (Haberman)
  • Parkinsons Data Set (Parkinsons)
  • Iris Data Set (IRIS)
  • Wine Data Set (Wine)
  • Contraceptive Method Choice Data Set (CMC)
  • Seeds Data Set (Seeds)
  • Glass Identification Data Set (Glass)
  • Zoo Data Set (Zoo)

Source:

All dataset files were obtained from the UCI Machine Learning Repository.

Expected Outcome:

The inclusion of column headings in the dataset files will facilitate better data interpretation and visualization, enhancing the project's overall effectiveness.

Enhancement Request: Improved KNN Integration in Neural Network Classifier

Overview

This issue proposes an enhancement to the existing neural network implementation by integrating k-Nearest Neighbors (KNN) to enhance the classifier's performance. The key components involve optimizing the forward pass, refining the training process of the KNN classifier, introducing data visualization capabilities, and enhancing the evaluation of the neural network.

Components

  1. Forward Pass Optimization:

    • Enhance the efficiency of the forward pass in the neural network to improve computational performance.
  2. Training of KNN Classifier:

    • Optimize the training process of the KNN classifier for improved accuracy and responsiveness.
  3. Data Visualization:

    • Introduce or refine data visualization functionalities to provide insights into the dataset and model performance.
  4. Evaluation of Neural Network:

    • Enhance the evaluation process by incorporating KNN classification results for accurate assessment.

Expected Benefits

  • Improved overall performance and accuracy of the neural network classifier.
  • Enhanced interpretability through data visualization.
  • Better understanding of the model's behavior with the integrated KNN component.

Usage Scenario

This enhancement is particularly beneficial for scenarios where combining neural network capabilities with KNN classification can provide more robust and accurate predictions.

Implementation Steps

  1. Optimize Forward Pass:

    • Review and optimize the existing forward pass implementation.
  2. Train KNN Classifier:

    • Evaluate and enhance the training process of the KNN classifier.
  3. Data Visualization Integration:

    • Implement or refine functions for data visualization to aid in model analysis.
  4. Evaluate Neural Network:

    • Combine neural network output with KNN classification for improved evaluation metrics.
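
A rough sketch of how steps 2 and 4 might fit together, using scikit-learn's KNeighborsClassifier on features produced by a placeholder forward pass; the shapes, weights, and data below are illustrative rather than the project's actual implementation:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, f1_score

def forward(X, W1, b1):
    """Single hidden-layer forward pass (placeholder for the project's own network)."""
    return np.tanh(X @ W1 + b1)  # hidden activations reused as features for KNN

# Hypothetical data and weights; in the project these come from the optimized network.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(100, 4)), rng.integers(0, 3, 100)
X_test, y_test = rng.normal(size=(30, 4)), rng.integers(0, 3, 30)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)

# Step 2: train the KNN classifier on the network's hidden features.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(forward(X_train, W1, b1), y_train)

# Step 4: evaluate the combined pipeline with the usual metrics.
y_pred = knn.predict(forward(X_test, W1, b1))
print("accuracy:", accuracy_score(y_test, y_pred))
print("F1 (macro):", f1_score(y_test, y_pred, average="macro"))
```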

How to Contribute

If you are interested in contributing to this enhancement, please follow these steps:

  1. Fork the repository to your GitHub account.
  2. Create a branch for your work: git checkout -b knn-enhancement.
  3. Implement changes following the outlined steps.
  4. Create a pull request detailing the changes and improvements made.

Your contributions will be highly appreciated in enhancing the capabilities of our neural network classifier.

Additional Information

Feel free to discuss and provide suggestions in the comments section. Let's collaborate to create a more powerful and accurate neural network classifier.

Thank you for your contribution and support!

Enhancing PSO: Updates, Modifications, and Completion

Title:

Implement Particle Swarm Optimization (PSO) for Swarm Class Initialization, Optimization, and Updates

Description:

The task involves implementing Particle Swarm Optimization (PSO) functionality for the Swarm class. This includes initialization of individuals, optimization of the swarm, and updating velocities and weights of the individuals within the swarm.

Proposed Changes:

  1. Initialization of Swarm Individuals:

    • Implement a function to initialize all individuals of the swarm with the provided data.
    • Extract input features and target labels from the data.
    • Initialize individuals with random weights and biases.
    • Initialize velocities of individuals.
    • Initialize local best individuals as a deep copy of the individuals.
    • Find the global best individual.
  2. Optimization of the Swarm:

    • Implement a function to execute the optimization algorithm until the maximum number of iterations is reached.
    • Update the swarm by iteratively updating velocities and weights of individuals.
    • Calculate the fitness of individuals.
    • Update local best and global best individuals.
  3. Updating Velocities and Weights:

    • Implement a function to update the velocities and weights of the swarm.
    • Calculate fitness and update local best and global best fitness.
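
A compact, self-contained sketch of the three proposed pieces (initialization, the optimization loop, and the velocity/weight update) over flat parameter vectors; the project's actual Swarm class operates on full network individuals with its own fitness function:

```python
import copy
import numpy as np

class Swarm:
    """Minimal PSO sketch; lower fitness is treated as better."""

    def __init__(self, fitness_fn, dim, n_particles=20, w=0.7, c1=1.5, c2=1.5):
        rng = np.random.default_rng(0)
        self.fitness_fn = fitness_fn
        self.w, self.c1, self.c2 = w, c1, c2
        self.pos = rng.uniform(-1, 1, (n_particles, dim))      # random weights/biases
        self.vel = np.zeros((n_particles, dim))                 # initial velocities
        self.local_best = copy.deepcopy(self.pos)               # local bests start as a deep copy
        self.local_best_fit = np.array([fitness_fn(p) for p in self.pos])
        g = int(np.argmin(self.local_best_fit))                 # global best individual
        self.global_best = self.local_best[g].copy()
        self.global_best_fit = self.local_best_fit[g]

    def update(self):
        """Update velocities and weights, then refresh local/global bests."""
        rng = np.random.default_rng()
        r1, r2 = rng.random(self.pos.shape), rng.random(self.pos.shape)
        self.vel = (self.w * self.vel
                    + self.c1 * r1 * (self.local_best - self.pos)
                    + self.c2 * r2 * (self.global_best - self.pos))
        self.pos += self.vel
        fit = np.array([self.fitness_fn(p) for p in self.pos])
        improved = fit < self.local_best_fit
        self.local_best[improved] = self.pos[improved]
        self.local_best_fit[improved] = fit[improved]
        g = int(np.argmin(self.local_best_fit))
        if self.local_best_fit[g] < self.global_best_fit:
            self.global_best = self.local_best[g].copy()
            self.global_best_fit = self.local_best_fit[g]

    def optimize(self, max_iter=100):
        """Run the update loop until the maximum number of iterations is reached."""
        for _ in range(max_iter):
            self.update()
        return self.global_best, self.global_best_fit

# Example: minimise the sphere function as a stand-in for the network fitness.
best, best_fit = Swarm(lambda x: float(np.sum(x ** 2)), dim=5).optimize(max_iter=50)
```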

Expected Outcomes:

  • Improved functionality of the Swarm class by integrating PSO capabilities.
  • Efficient optimization of weights and biases for various applications, such as neural network training and optimization problems.

Additional Information:

This issue is essential for enhancing the functionality of the Swarm class by incorporating PSO capabilities. Implementing PSO will enable the class to efficiently optimize weights and biases for various applications, such as neural network training and optimization problems.

Implement Visual Code Representation for Main Files

Description:
All the essential details and explanations for the main files have been successfully outlined using code blocks in the markdown format. To proceed with the next phase of the project, the actual implementation of these code blocks needs to be completed within the respective files. This involves pasting the code blocks into the codebase as actual implementations, ensuring that the visual representation matches the code's functionality.

Proposed Changes:

  1. Implement Code Blocks: Paste the code blocks provided in the markdown format into the respective files (KNN, PSO, and Final Output webpages) in the codebase.
  2. Ensure Consistency: Verify that the pasted code blocks accurately represent the functionality and structure of the corresponding sections in the files.
  3. Review for Completeness: Ensure that all necessary code blocks from the markdown explanations are implemented in the codebase to provide a comprehensive visual representation.
  4. Test Visual Representation: Verify that the visual representation of the code in the webpages accurately reflects the actual implementation and behavior of the code.
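
One possible way to render such blocks inside a Streamlit page is st.code, sketched below with a hypothetical kNN snippet; the real pages would embed the project's actual implementations:

```python
import streamlit as st

# Hypothetical page for the KNN explanation; the PSO and final-output pages follow the same pattern.
st.header("k-Nearest Neighbors")
st.markdown("The block below mirrors the actual implementation in the codebase:")

knn_snippet = '''\
def predict(self, X):
    distances = self._pairwise_distances(X)
    neighbors = distances.argsort(axis=1)[:, : self.k]
    return self._majority_vote(neighbors)
'''
st.code(knn_snippet, language="python")  # rendered with syntax highlighting
```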

Expected Outcome:

  • Enhanced clarity and understanding of the codebase through visual representation in the webpages.
  • Improved usability and accessibility of the code for both developers and users.
  • Preparation for the next phase of displaying the final output with enriched visual elements.

Additional Context:
Completing the implementation of visual code representation is an essential step towards achieving the project's goals of promoting readability, usability, and collaboration within the development community. Your contributions in this regard will greatly contribute to the project's success.

Enhance Streamlit app with updated final_code.py integration

Description:
The final_code.py file has been updated and refined to include additional functionality and improvements. To reflect these changes in the Streamlit app, this enhancement proposes integrating the updated final_code.py file into the app's interface. Users will benefit from the enhanced features and capabilities provided by the latest version of the final_code.py script.

Proposed Changes:

  1. Update the Streamlit app to include the latest version of the final_code.py file.
  2. Ensure seamless integration of the updated script into the app's interface, allowing users to access and utilize its new features.
  3. Provide clear instructions or guidance within the app interface on how to utilize the updated functionality offered by the final_code.py script.

Expected Behavior:
Upon accessing the Streamlit app, users should be able to navigate to the section dedicated to the final_code.py script. They should then be able to interact with the updated script's features and functionalities directly within the app interface. The integration should be seamless and intuitive, enhancing the overall user experience.

Additional Context:
The updated final_code.py file introduces significant improvements and additional capabilities to the project. Integrating these updates into the Streamlit app ensures that users have access to the latest tools and functionalities available. This enhancement aligns with the project's goal of providing a comprehensive and user-friendly platform for data analysis and visualization.

Enhancement Request: Streamlit Support for Dataset Selection

Description

Currently, the project lacks an interactive way to explore different datasets. This enhancement aims to integrate Streamlit support to provide users with a seamless experience in selecting and visualizing datasets.

Proposed Changes

  1. Streamlit Integration:

    • Implement Streamlit components for easy dataset selection.
    • Add a dropdown menu allowing users to choose from a list of available datasets.
  2. Dynamic Visualization:

    • Upon selecting a dataset, dynamically update visualizations to help users understand the characteristics of the chosen dataset.
    • Leverage Streamlit's interactive plotting capabilities to display relevant information.
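
A minimal sketch of the proposed selector, assuming a hypothetical mapping from display names to dataset files:

```python
import pandas as pd
import streamlit as st

# Hypothetical mapping from display names to dataset files; extend with the remaining datasets.
DATASETS = {
    "Iris": "datasets/iris.data",
    "Wine": "datasets/wine.data",
}

choice = st.selectbox("Choose a dataset", list(DATASETS))   # dropdown for dataset selection
df = pd.read_csv(DATASETS[choice])

st.write(f"{choice}: {df.shape[0]} rows, {df.shape[1]} columns")
st.dataframe(df.head())                                     # quick preview of the selected dataset
st.bar_chart(df.iloc[:, -1].value_counts())                 # dynamic view of the class distribution
```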

Expected Behavior

  • Users should see an enhanced interface with a dropdown menu for dataset selection.
  • Upon choosing a dataset, the application should automatically update visualizations to reflect the selected dataset.

Additional Context

  • Streamlit is a powerful tool for creating interactive and user-friendly interfaces.
  • This enhancement aligns with the goal of making the project more accessible and interactive.

Compare PSO-Based Neural Network Optimization with Traditional Methods

Problem Description

The project utilizes Particle Swarm Optimization (PSO) to optimize neural network classifiers. While PSO offers a novel approach to optimizing neural network parameters, it's essential to assess its effectiveness compared to traditional methods commonly used for improving neural network classifiers.

Expected Behavior

Comparing PSO-based optimization with traditional methods will provide valuable insights into the efficacy and efficiency of PSO in optimizing neural network classifiers. The comparison can be based on various factors, including but not limited to:

  • Accuracy: Compare the accuracy achieved by PSO-optimized neural networks with those optimized using traditional methods.
  • F1 Score: Evaluate the F1 score obtained by PSO-optimized classifiers against traditional methods.
  • Convergence Rate: Assess the convergence rate of PSO-based optimization compared to traditional optimization techniques.
  • Scalability: Evaluate the scalability of PSO for optimizing neural networks concerning the size and complexity of datasets.
  • Robustness: Determine the robustness of PSO-optimized classifiers in handling noisy or unbalanced datasets compared to traditional methods.

Proposed Solution

To compare PSO-based optimization with traditional methods:

  1. Identify commonly used traditional methods for optimizing neural network classifiers, such as gradient descent, stochastic gradient descent, or genetic algorithms.
  2. Implement the selected traditional optimization methods and ensure they are compatible with the neural network architecture used in the project.
  3. Define evaluation criteria, including accuracy, F1 score, convergence rate, scalability, and robustness, for comparing the performance of PSO and traditional methods.
  4. Conduct experiments using benchmark datasets or custom datasets to evaluate the performance of PSO-optimized classifiers and classifiers optimized using traditional methods.
  5. Generate visualizations, such as plots, graphs, or tables, to illustrate the comparison results effectively.
  6. Analyze the findings to draw conclusions regarding the relative strengths and weaknesses of PSO-based optimization compared to traditional methods.
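
An illustrative evaluation harness for steps 3 and 4, using scikit-learn's MLPClassifier as the gradient-descent baseline and a commented-out placeholder where the PSO-optimized pipeline would plug in:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

def report(name, y_true, y_pred):
    """Print the shared evaluation criteria for one optimizer."""
    print(f"{name:>20}  accuracy={accuracy_score(y_true, y_pred):.3f}  "
          f"F1={f1_score(y_true, y_pred, average='macro'):.3f}")

# Traditional baseline (steps 1-2): a gradient-descent-trained network.
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=42).fit(X_tr, y_tr)
report("gradient-descent NN", y_te, mlp.predict(X_te))

# Step 4: the PSO-optimized classifier; `pso_predict` is a placeholder for the project's pipeline.
# report("PSO-optimized NN", y_te, pso_predict(X_te))
```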

Impact

  • Severity: Medium
  • Priority: High

Comparing PSO-based optimization with traditional methods will contribute to a better understanding of the strengths and limitations of different optimization techniques for neural network classifiers. The insights gained from the comparison will inform future optimization strategies and guide decision-making in selecting the most suitable approach for specific use cases.

Markdown File Content Displayed Alongside Final Plots

Problem Description

When running the final file, the markdown content from the imported files is being displayed along with the final plots at the end of the execution. This results in unnecessary clutter and duplication of information in the output.

Expected Behavior

The final output should only display the final plots along with relevant information such as accuracy and F1 score. The markdown content from the imported files should be eliminated from the output.

Proposed Solution

To address this issue, we need to modify the final file or the way markdown content is imported and displayed. This could involve:

  1. Adjusting the import statements to only import functions/classes without displaying markdown content.
  2. Refactoring the final file to prevent the display of markdown content during execution.
  3. Utilizing Streamlit's capabilities to control the display of markdown content more effectively.
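
One possible pattern for option 2, sketched with a hypothetical knn_page.py module: keep the markdown rendering behind a function and a __main__ guard so that importing the module no longer displays it:

```python
import streamlit as st

def render_explanation():
    """Markdown explanation for this module, shown only when explicitly requested."""
    st.markdown("## k-Nearest Neighbors\nExplanation of the implementation ...")

def knn_classify(X_train, y_train, X_test, k=5):
    ...  # the actual functionality that the final file imports

if __name__ == "__main__":
    # Runs only when this file is executed directly (e.g. `streamlit run knn_page.py`),
    # so importing the module from the final file no longer displays the markdown content.
    render_explanation()
```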

Impact

  • Severity: Low
  • Priority: Medium

This issue impacts the readability and clarity of the final output but does not affect the functionality of the code. Resolving this issue will enhance the user experience and make the output more concise and focused.

Integrated Functionality in KNN: Unifying Multiple Approaches for Enhanced Classification

Description

This repository addresses the integration of multiple functionalities within the K-Nearest Neighbors (KNN) algorithm to enhance its classification capabilities. KNN is a widely used non-parametric method for classification and regression tasks, but its performance can be further improved by incorporating various techniques and modifications.

In this repository, we explore different strategies for integrating multiple functionalities into the KNN algorithm, such as:

  • Handling high-dimensional data efficiently
  • Adapting dynamically to changing data patterns
  • Dealing with outliers and noisy data effectively
  • Optimizing for large-scale datasets to improve computational efficiency
  • Ensuring interpretability and transparency of the model
  • Integrating with other machine learning techniques to leverage their strengths

Enhance PSO File Documentation and Understanding

Description:
The PSO (Particle Swarm Optimization) file in the project's codebase contains essential functionality for optimizing neural network weights using the PSO algorithm. However, the documentation and understanding of the code can be improved to enhance clarity and facilitate better utilization.

Proposed Changes:

  1. Add Detailed Function Documentation: The Particle class and its associated methods need comprehensive documentation to explain their purpose, inputs, and outputs. This includes functions such as __init__, frac_class_wt, forward, calc_fitness, and kmeans_eval. Providing detailed explanations and examples will help users understand how to interact with these functions effectively.

  2. Clarify Variable Initialization: Comment lines explaining the initialization of weights and biases in the Particle class will enhance understanding, particularly regarding the range of initial weights and the rationale behind them.

  3. Explain Fitness Calculation: The calc_fitness method requires detailed documentation to clarify how fitness values are calculated based on similarity measures and class weights. This includes explanations of concepts such as similarity matrix computation, nearest neighbors retrieval, and fitness evaluation using equation 6 from the referenced paper.

  4. Document K-means Evaluation: The kmeans_eval method should be documented to explain its role in evaluating fitness using K-means clustering. Users would benefit from understanding how this evaluation method differs from the standard fitness calculation.

  5. Include Vel Class Documentation: Similarly, the Vel class needs proper documentation to explain its purpose and functionality within the PSO algorithm. This includes descriptions of its attributes and any methods it contains.
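
As an illustration of items 1 and 3, a NumPy-style docstring for calc_fitness might look like the sketch below; the signature is assumed rather than taken from the actual code:

```python
class Particle:
    def calc_fitness(self, X, y):
        """Calculate the fitness of this particle on the given data.

        Parameters
        ----------
        X : numpy.ndarray of shape (n_samples, n_features)
            Input features used to build the similarity matrix.
        y : numpy.ndarray of shape (n_samples,)
            Class labels used, together with the class weights, to score the
            retrieved nearest neighbours.

        Returns
        -------
        float
            Fitness value computed from the nearest neighbours in the
            similarity matrix, following equation 6 of the referenced paper.
        """
        ...
```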

Expected Outcome:

  • Improved understanding of the PSO algorithm's implementation within the codebase.
  • Enhanced clarity on how to interact with the Particle and Vel classes and their methods.
  • Facilitated utilization of the PSO functionality for optimizing neural network weights.

Additional Context:
Enhancing the documentation of the PSO file aligns with the project's goal of promoting code readability and usability. Clear and comprehensive documentation fosters better collaboration and encourages contributions from the community.

Note: This issue aims to enhance the documentation and understanding of the PSO file. It does not involve modifying the actual functionality of the code but focuses on improving user comprehension and interaction with existing code.

Add Sidebar Option for Final Output Display in Streamlit

Description

Currently, the project lacks a user-friendly option to view the final output of the classification algorithm after performing exploratory data analysis (EDA) and visualization. This issue proposes adding a sidebar option in Streamlit to display the output of the final_code.py script, allowing users to observe the accuracy and F-score of the classification model.

Proposed Changes

Sidebar Option for Final Output:

  • Implement a sidebar component in Streamlit to provide users with the option to view the final output of the classification algorithm.
  • Integrate the final_code.py script into the Streamlit application and display its output in a designated section of the sidebar.
  • Enhance user experience by ensuring clear visibility and accessibility of the final output alongside the EDA and visualization sections.
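
A hedged sketch of the proposed sidebar navigation, with a dummy run_final_pipeline placeholder standing in for the actual final_code.py invocation:

```python
import streamlit as st

def run_final_pipeline():
    """Placeholder for invoking the final_code.py pipeline; returns (accuracy, f_score)."""
    return 0.0, 0.0  # dummy values standing in for the real results

page = st.sidebar.radio("Sections", ["EDA", "Visualization", "Final Output"])

if page == "Final Output":
    st.header("Final Output")
    with st.spinner("Running the classification pipeline..."):
        accuracy, f_score = run_final_pipeline()
    col1, col2 = st.columns(2)
    col1.metric("Accuracy", f"{accuracy:.3f}")
    col2.metric("F-score", f"{f_score:.3f}")
```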

Expected Behavior

  • Users will have the ability to navigate to the sidebar option labeled "Final Output" after completing the EDA and visualization steps.
  • Upon selecting the "Final Output" option, the Streamlit application will display the accuracy and F-score of the classification model obtained from running the final_code.py script.
  • The sidebar option will provide users with valuable insights into the performance of the classification algorithm and enhance their overall understanding of the project results.

Additional Context

Adding a sidebar option for the final output aligns with the project's goal of providing users with a comprehensive and interactive experience. By integrating the final_code.py script into the Streamlit application, we aim to empower users to assess the effectiveness of the classification model and make informed decisions based on the results.

Add Usage Instructions for k-Nearest Neighbor (kNN) Classification

Description

This issue requests the addition of a usage section for the k-Nearest Neighbor (kNN) implementation in the kNN file. The usage section should include clear instructions on how to utilize the kNN implementation for classification tasks.

Checklist

  • Define step-by-step instructions for using the kNN implementation.
  • Include a sample code snippet demonstrating the usage of the kNN classifier.
  • Ensure proper formatting and documentation of the code snippet.
  • Test the provided instructions to ensure accuracy.
  • Verify that the changes do not introduce any errors or conflicts.
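
A hedged example of the kind of snippet the new "Usage" section could contain, shown with scikit-learn's KNeighborsClassifier standing in for the project's own kNN class (whose name and constructor may differ):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier  # stand-in for the project's kNN class
from sklearn.metrics import accuracy_score

# 1. Load a dataset and split it into training and test sets.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# 2. Create the classifier with the desired number of neighbours.
knn = KNeighborsClassifier(n_neighbors=5)

# 3. Fit on the training data and predict on the test data.
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)

# 4. Evaluate the predictions.
print("accuracy:", accuracy_score(y_test, y_pred))
```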

Expected Outcomes

  • kNN file includes a new "Usage" section for kNN implementation.
  • The usage section provides clear step-by-step instructions.
  • A sample code snippet demonstrates the usage of the kNN classifier.
  • Code snippet is properly formatted and documented.
  • Provided instructions are accurate and tested.
  • No errors or conflicts are introduced by the changes.

Enhance kNN Implementation for Improved Performance and Flexibility

Description:

The current implementation of k-Nearest Neighbors (kNN) lacks certain features and optimizations that could enhance its performance and flexibility. This issue aims to modify the existing kNN implementation to address these shortcomings and improve its overall effectiveness.

Proposed Changes:

  1. Optimization: Review and optimize the kNN algorithm for better computational efficiency, especially for large datasets.

  2. Parameter Tuning: Implement methods to automatically tune the hyperparameters of kNN, such as the number of neighbors (k), to improve classification accuracy.

  3. Distance Metrics: Provide support for different distance metrics (e.g., Euclidean, Manhattan) to allow users to choose the most suitable metric for their dataset.

  4. Algorithm Variants: Consider incorporating variant algorithms of kNN, such as weighted kNN or distance-weighted kNN, to offer more flexibility and potentially improve classification results.

  5. Code Refactoring: Ensure code readability, maintainability, and adherence to best practices through refactoring and documentation updates.
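
A sketch of how points 2-4 could be prototyped with scikit-learn before porting the ideas into the project's own implementation:

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Point 2: automatic tuning of k; points 3-4: alternative distance metrics and weighted kNN.
param_grid = {
    "kneighborsclassifier__n_neighbors": [3, 5, 7, 9, 11],
    "kneighborsclassifier__weights": ["uniform", "distance"],   # plain vs distance-weighted kNN
    "kneighborsclassifier__metric": ["euclidean", "manhattan"],
}
search = GridSearchCV(
    make_pipeline(StandardScaler(), KNeighborsClassifier()),
    param_grid, cv=5, n_jobs=-1,
)
search.fit(X_tr, y_tr)

print("best params:", search.best_params_)
print("test accuracy:", search.score(X_te, y_te))
```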

Expected Outcomes:

  • A more efficient and flexible kNN implementation capable of handling diverse datasets and scenarios.
  • Improved accuracy and performance compared to the current implementation.
  • Enhanced usability with optimized hyperparameters and support for different distance metrics.

This issue serves as a starting point for discussing and implementing these modifications to the kNN algorithm. Contributions and suggestions from the community are welcome to make the implementation robust and versatile.
