Topic: information-gain
Something interesting about information-gain
information-gain,Predicting which drug might be appropriate for a future patient with a particular type of illness, using decision trees.
User: aabritidutta
information-gain,Polycystic Ovary Syndrome (PCOS) is a widespread pathology that affects many aspects of women's health, with long-term consequences beyond the reproductive age. The wide variety of clinical presentations, as well as the lack of internationally accepted diagnostic procedures, has made it difficult to determine the exact etiology of the disease, and the exact histology of PCOS is not yet clear. It is therefore a multifaceted condition in which genetic and environmental factors both play a role. The aim of this project is to analyse the simple factors (height, weight, lifestyle changes, etc.) and the complex factors (imbalances of hormones and biochemicals such as insulin, vitamin D, etc.) that contribute to the development of the disease. The data used for this project, titled "Polycystic ovary syndrome (PCOS)", was published on Kaggle by Prasoon Kottarathil in 2020; it contains records of 543 PCOS patients tested on 40 parameters. Machine learning techniques such as logistic regression, decision trees, SVMs, and random forests were applied, and a detailed analysis of all the attributes using graphs and programs, together with predictions from the machine learning models, helped identify the most important indicators of the disease.
User: aastha1840
information-gain,AIKA is a new type of artificial neural network designed to more closely mimic the behavior of a biological brain and to bridge the gap to classical AI. A key design decision in the AIKA network is to conceptually separate the activations from their neurons, meaning that there are two separate graphs: one graph consisting of neurons and synapses, representing the knowledge the network has already acquired, and another graph consisting of activations and links, describing the information the network was able to infer about a concrete input data set. There is a one-to-many relation between the neurons and the activations. For example, there might be a neuron representing a word or a specific meaning of a word, but there might be several activations of this neuron, each representing an occurrence of this word within the input data set. A consequence of this decision is that we have to give up on the idea of a fixed layered topology for the network, since the sequence in which the activations are fired depends on the input data set. Within the activation network, each activation is grounded within the input data set, even if there are several activations in between. This means links between activations serve two purposes: on the one hand, they are used to sum up the synapse weights, and on the other hand, they propagate the identity to higher-level activations.
User: aika-algorithm
Home Page: https://aika.network
information-gain,Implemented the decision tree algorithm from scratch, calculating Gini impurity and information gain to perform splits.
User: akshay-madar
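For entries like the one above that split on Gini impurity, the core calculation is short. A minimal sketch in plain Python (function names and data are illustrative, not taken from any of the listed repositories):

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum(p_c^2) over the class probabilities."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

def gini_gain(parent, left, right):
    """Impurity reduction achieved by splitting `parent` into `left` and `right`."""
    n = len(parent)
    weighted = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
    return gini(parent) - weighted

labels = ['A', 'A', 'B', 'B']
print(gini(labels))                                   # 0.5 for a 50/50 class mix
print(gini_gain(labels, ['A', 'A'], ['B', 'B']))      # 0.5: a perfect split removes all impurity
```

A tree builder would evaluate `gini_gain` for every candidate split and keep the one with the largest reduction.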
information-gain,In this project, we used three different metrics (information gain, mutual information, and chi-squared) to find important words, then used those words for the classification task and compared the results at the end.
User: alimorty
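The chi-squared word score mentioned above can be computed directly from a 2x2 contingency table of word presence against class. A minimal sketch for binary features (names and data are illustrative, not taken from the listed repository):

```python
from collections import Counter

def chi2_score(feature, labels):
    """Pearson chi-squared statistic between a binary feature and class labels."""
    n = len(labels)
    observed = Counter(zip(feature, labels))   # joint counts O[f, c]
    f_totals = Counter(feature)                # row totals
    c_totals = Counter(labels)                 # column totals
    score = 0.0
    for f in f_totals:
        for c in c_totals:
            expected = f_totals[f] * c_totals[c] / n
            score += (observed[(f, c)] - expected) ** 2 / expected
    return score

# Word present exactly in the 'pos' documents: perfect association, chi2 == n.
print(chi2_score([1, 1, 0, 0], ['pos', 'pos', 'neg', 'neg']))  # 4.0
# Word independent of the class: chi2 == 0.
print(chi2_score([1, 0, 1, 0], ['pos', 'pos', 'neg', 'neg']))  # 0.0
```

Ranking words by this score (or by information gain) and keeping the top-k is the usual feature-selection step before classification.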
information-gain,Implementation of a decision tree from scratch, using entropy as the criterion for information gain calculations.
User: aptr288
information-gain,Implemented a decision tree from scratch using binary univariate splits, entropy, and information gain. Used the Gini index and pruning to improve performance.
User: ayanpahari
information-gain,
User: bovojon
information-gain,Experimental classification algorithms on German credit data, implemented using the scikit-learn library.
User: chanioxaris
information-gain,A project for my course "Introduction to Pattern Recognition": a decision tree algorithm implemented in Python.
User: cirensangzhu
information-gain,Implementing your own decision tree algorithms and applying them to real-world data!
Organization: cmsc422
information-gain,Implementation of classic machine learning concepts and algorithms from scratch, with the math behind their implementation. Written in Python using Jupyter Notebook.
User: daodavid
information-gain,Implementing a decision tree using the ID3 algorithm based on information gain, with post-pruning to improve accuracy.
User: deepesh-rathore
information-gain,A few R programs to simulate many fundamental data mining algorithms
User: derekmma
information-gain,Adaptive Reinforcement Learning of curious AI basketball agents
User: dimgold
information-gain,An innovative Python implementation of decision trees for machine learning, showcasing algorithmic learning from scratch with practical examples and a focus on AI principles.
User: dor-sketch
Home Page: https://pages.cs.wisc.edu/~dyer/cs540/hw-toc.html
information-gain,This is a decision tree implementation in Python that uses information gain to split on attributes. It does not use any ML library.
User: electricalgorithm
information-gain,Learning informed sampling distributions and information gains for efficient exploration planning.
Organization: ethz-asl
information-gain,Focused on the math, and applied the methods in code.
User: gawun92
information-gain,My first attempt at Elixir, a basic implementation of decision trees.
User: gorosgobe
information-gain,Information gain can be used to measure how informative an attribute is about a target outcome.
User: graciaapfelthaler
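The idea above, scoring an attribute by how much it tells us about the outcome, is exactly entropy-based information gain. A minimal sketch with illustrative names and toy data (not taken from any of the listed repositories):

```python
from collections import Counter, defaultdict
from math import log2

def entropy(labels):
    """Shannon entropy H = -sum(p * log2(p)) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(attribute, labels):
    """H(labels) minus the expected entropy after partitioning by `attribute`."""
    groups = defaultdict(list)
    for a, y in zip(attribute, labels):
        groups[a].append(y)
    n = len(labels)
    remainder = sum((len(g) / n) * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

# Toy data: 'outlook' perfectly predicts 'play', so the gain equals H(play) = 1 bit.
outlook = ['sunny', 'sunny', 'rain', 'rain']
play    = ['no', 'no', 'yes', 'yes']
print(information_gain(outlook, play))  # 1.0
```

An attribute that carries no information about the outcome scores 0; ID3-style trees pick the attribute with the highest gain at each node.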
information-gain,This project focuses on implementing and analyzing the learning process in decision trees using Connect 4.
User: hosnawhb
information-gain,Sequential Minimal Optimization (SMO) algorithm for training Support Vector Machines (SVMs).
User: jamolinet
information-gain,Calculates information entropy and information gain.
User: jeff-tian
Home Page: https://id3.js.org/
information-gain,Visualisation of the ID3 classification algorithm.
User: jeff-tian
Home Page: https://id3.js.org/
information-gain,decision trees made easy
User: joctatorres
information-gain,A simple implementation of the ID3 algorithm in Rust.
User: kesnar
information-gain,
User: klstff
information-gain,IUB CSE 425 (Artificial Intelligence)
User: mirsahib
information-gain,In this repo, I implement a decision tree classifier from scratch using Python and pandas, and use it to predict gender from names. The features are the first letter, first two letters, first three letters, last letter, last two letters, last three letters, etc.
User: mshadloo
information-gain,Feature selection techniques on author text data
User: navanith007
information-gain,Content: Root node, Decision node & Leaf nodes, Attribute Selection Measure (ASM), Feature Importance (Information Gain), Gini index
User: ninad077
information-gain,A repository containing the source code, datasets, and ranked features for the Nested Bigrams method proposed in a paper published in ICDMW. This method is designed for authorship attribution in source code to address cybersecurity issues.
User: pegayus
information-gain,Built a decision tree from scratch, splitting on the basis of information gain.
User: priyanshiguptaaa
information-gain,Design and Implementation of Random Forest algorithm from scratch to execute Pacman strategies and actions in a deterministic, fully observable Pacman Environment.
User: providence-nate
information-gain,A greedy algorithm for building a classification tree.
User: rachhshruti
information-gain,Implementation of Decision Tree using two heuristics for Assignment 01 of the course CS6375: Machine Learning.
User: rahul1947
information-gain,Implementation of Decision tree learning algorithm with chi-square pruning
User: sachinbiradar9
information-gain,Applying a decision tree classifier to the Iris dataset.
User: salonibhatiadutta
information-gain,Polycystic Ovary Syndrome (PCOS) is a widespread pathology that affects many aspects of women's health, with long-term consequences beyond the reproductive age. The wide variety of clinical presentations, as well as the lack of internationally accepted diagnostic procedures, has made it difficult to determine the exact etiology of the disease, and the exact histology of PCOS is not yet clear. It is therefore a multifaceted condition in which genetic and environmental factors both play a role. The aim of this project is to analyse the simple factors (height, weight, lifestyle changes, etc.) and the complex factors (imbalances of hormones and biochemicals such as insulin, vitamin D, etc.) that contribute to the development of the disease. The data used for this project, titled "Polycystic ovary syndrome (PCOS)", was published on Kaggle by Prasoon Kottarathil in 2020; it contains records of 543 PCOS patients tested on 40 parameters. Machine learning techniques such as logistic regression, decision trees, SVMs, and random forests were applied, and a detailed analysis of all the attributes using graphs and programs, together with predictions from the machine learning models, helped identify the most important indicators of the disease.
User: shivangi1raghav
information-gain,This notebook calculates the information gain of attributes in a car dataset.
User: sid-stha7
information-gain,Statistical learning.
User: skeptiqos
information-gain,Feature selection for credit scoring using a genetic algorithm wrapper (information gain).
User: snishikant
information-gain,
User: talmurshidi
Home Page: https://talmurshidi.github.io/calc-entropy-gain/
information-gain,A set of Jupyter Notebooks on feature selection methods in Python for machine learning. It covers techniques like constant-feature removal, correlation analysis, information gain, chi-square testing, univariate selection, and feature importance, with datasets included for practical application.
User: tanvirnwu
information-gain,Implementation of a decision tree using information gain.
User: xhsun
information-gain,Applying different machine learning algorithms to the PCGA Prostate Cancer Gene Dataset for feature selection, dimensionality reduction, classification, and regression.
User: znreza