SMI, SCMI, and SCG need to call self.model.eval() in their select() methods. Without it, layers such as dropout and batch norm remain in training mode, so the embedding computation is not deterministic: the same point can receive different embeddings across calls, which may cause performance degradation.
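A minimal sketch of the effect (the network and layer choices here are illustrative, not DISTIL's actual model classes):

```python
import torch
import torch.nn as nn

# Toy embedding network with dropout, standing in for the model whose
# penultimate-layer embeddings a selection strategy would compute.
net = nn.Sequential(nn.Linear(4, 64), nn.Dropout(p=0.5))
x = torch.ones(1, 4)

net.train()                       # dropout active: embeddings are stochastic
a, b = net(x), net(x)
assert not torch.equal(a, b)      # same input, different embeddings

net.eval()                        # dropout disabled: embeddings are deterministic
c, d = net(x), net(x)
assert torch.equal(c, d)          # same input, same embedding
```

Calling eval() before the embedding pass (and restoring train() afterwards if training resumes) avoids the mismatch.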
Add links to the SIMILAR and effective eval papers
Remove the results from the main README for now and instead link to the benchmark folders for each case. Each folder will contain the detailed results for the corresponding case.
Is it possible to perform active learning in combination with semi-supervised
learning (SSL) methods such as Virtual Adversarial Training (VAT), Entropy
Minimization (EntMin), etc.? I believe this would be the major benefit of using
deep learning for active learning. Otherwise, one could use a simpler model
that is cheaper to train and tune after each iteration. Do you also think an
Extreme Learning Machine could be useful as a one-shot learning method to
speed up the active-learning iterations with your library?
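For concreteness, here is a sketch of the entropy-minimization term such a combined loop would add for unlabeled points (the function name is made up for illustration; this is not part of DISTIL's API):

```python
import numpy as np

def entropy_min_loss(logits):
    """Mean Shannon entropy of the softmax predictions over a batch.

    EntMin adds this term on unlabeled data to push the model toward
    confident (low-entropy) predictions between labeling rounds.
    """
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())
```

Uniform predictions give the maximum entropy log(C); confident predictions drive the loss toward zero.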
Implement a new condition for the data_train class that tests whether the test accuracy has improved by a configurable amount within a configurable number of epochs. If this condition fails, stop training.
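One way the condition could look, as a standalone sketch (the class and parameter names are hypothetical, not part of the current data_train API):

```python
class ImprovementStop:
    """Signal a stop when test accuracy has not improved by at least
    `min_delta` within the last `patience` epochs."""

    def __init__(self, min_delta=0.01, patience=5):
        self.min_delta = min_delta
        self.patience = patience
        self.best = float("-inf")
        self.stale = 0  # epochs since the last sufficient improvement

    def should_stop(self, test_acc):
        if test_acc >= self.best + self.min_delta:
            self.best = test_acc
            self.stale = 0
        else:
            self.stale += 1
        return self.stale >= self.patience
```

data_train would call should_stop() once per epoch after evaluating on the test set and break out of the training loop when it returns True.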
I'm interested in utilizing https://github.com/decile-team/distil for active learning. My aim is to create a system where we select a subset of tasks from a larger pool and forward those representative samples to humans for manual review. Afterwards, it's crucial to distribute the decisions made on these representative tasks back to the subsets they were drawn from (or the subsets they represent). I'm exploring how to establish this reverse mapping from representative samples to subsets in order to fan out our decisions effectively.
Questions:
Does Distil offer an out-of-the-box feature for this?
If not, what would be the most effective approach to achieve it?
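If no built-in exists, one plausible approach is to assign every pool point to its nearest selected representative in embedding space and propagate that representative's decision. A minimal sketch (the helper name is hypothetical and not DISTIL API; it assumes you already have embeddings for the pool):

```python
import numpy as np

def fan_out_decisions(pool_emb, rep_idx, decisions):
    """Propagate human decisions from representatives to the whole pool.

    pool_emb:  (n, d) embeddings of all pool points
    rep_idx:   indices of the selected representative samples
    decisions: one human decision per representative, same order as rep_idx
    Returns one decision per pool point, taken from its nearest
    representative (Euclidean distance in embedding space).
    """
    reps = pool_emb[rep_idx]                                   # (k, d)
    d2 = ((pool_emb[:, None, :] - reps[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)                                # (n,)
    return np.asarray(decisions)[nearest]
```

This inverts the selection step into a clustering: each representative implicitly defines the subset of pool points closest to it.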
Though the documentation says that distil can be used with custom datasets, there are no tutorials to support this claim, and I have not been able to get any Kaggle datasets working either. Please include instructions for using a dataset other than the pre-defined ones.
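For anyone else stuck on this: DISTIL's strategies work with torch-style datasets, so wrapping your own data usually comes down to a small Dataset class that returns (x, y) pairs. A sketch under that assumption (the class name and field layout are illustrative, not a DISTIL API):

```python
import torch
from torch.utils.data import Dataset

class CustomArrayDataset(Dataset):
    """Wrap in-memory features/labels (e.g. loaded from a Kaggle CSV)
    as a torch Dataset that DISTIL strategies can consume."""

    def __init__(self, features, labels):
        self.x = torch.as_tensor(features, dtype=torch.float32)
        self.y = torch.as_tensor(labels, dtype=torch.long)

    def __len__(self):
        return len(self.y)

    def __getitem__(self, i):
        return self.x[i], self.y[i]
```

You would then split this into labeled and unlabeled portions and pass them to a strategy as with the pre-defined datasets.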
Hello authors,
It's great that you published the source code, but with the default hyper-parameters in config_cifar10_resnet_badge.json I cannot reach a similar accuracy for the BADGE strategy. Could you please share the hyper-parameters used to produce the graph in the README?
Hi, thank you very much for the toolkit. I want to plot the experimental comparison results, but I don't know how to reproduce the plots from the paper. Could you provide the code for this plot? Thanks.
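While waiting for the authors' script, a basic accuracy-vs-budget comparison plot can be produced like this (the strategy names and accuracy numbers below are placeholder values for illustration only, not results from the paper):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt
import numpy as np

# Placeholder results: test accuracy after each active-learning round.
budgets = np.arange(1000, 6000, 1000)
results = {
    "badge": [0.55, 0.63, 0.68, 0.71, 0.73],
    "random": [0.52, 0.58, 0.62, 0.65, 0.67],
}

fig, ax = plt.subplots()
for name, accs in results.items():
    ax.plot(budgets, accs, marker="o", label=name)
ax.set_xlabel("Number of labeled points")
ax.set_ylabel("Test accuracy")
ax.legend()
fig.savefig("al_comparison.png")
```

Substituting the per-round accuracies logged by your own runs reproduces the usual strategy-comparison curves.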