

Interpretable Machine Learning for COVID-19: An Empirical Study on Severity Prediction Task

Understanding how black-box models make predictions, and what they see in the pandemic.

https://arxiv.org/abs/2010.02006

Slides Available Here

Introduction

The pandemic is a race against time. We seek to answer the question: how can medical practitioners employ machine learning to win this race?

Instead of targeting a high-accuracy black-box model that is difficult to trust and deploy, we use model interpretation, incorporating medical practitioners' prior knowledge, to promptly reveal the most important indicators for early diagnosis, and thus win the race against the pandemic.

Understanding High-Accuracy Black-Box Models

In this research, we try to understand why those black-box models can make correct predictions. Is it possible to let black-box models speak, telling us how they make predictions? Will medical practitioners benefit from these models?

【Correct Predictions】

The neural network makes a correct prediction because it sees that the patient is old, has a high CRP, which indicates severe viral infection, and a high NTproBNP.

The gradient boosted trees model makes a similar correct prediction because it sees that the patient has a high CRP and NTproBNP, even though the patient shows few symptoms ( = 0).
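One way to let a tree ensemble "speak" is through its feature importances, which summarize how much each indicator drives its splits. The sketch below is illustrative only: the feature names (Age, CRP, NTproBNP, Fever) echo the indicators discussed here, but the synthetic data and the severity rule are assumptions, not the study's actual dataset or method.

```python
# Illustrative sketch: which features does a gradient-boosted model rely on?
# Synthetic data only -- feature names and the severity rule are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 200
age = rng.integers(20, 90, n)          # years
crp = rng.uniform(0, 150, n)           # C-reactive protein, mg/L
ntprobnp = rng.uniform(0, 3000, n)     # NT-proBNP, pg/mL
fever = rng.uniform(36.0, 40.0, n)     # body temperature, deg C

# Assumed synthetic label: severity driven by CRP and NT-proBNP only
y = ((crp > 75) & (ntprobnp > 1500)).astype(int)
X = np.column_stack([age, crp, ntprobnp, fever])
names = ["Age", "CRP", "NTproBNP", "Fever"]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global importances: each feature's share of the model's split gains
for name, imp in sorted(zip(names, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:10s} {imp:.3f}")
```

On data generated this way, CRP and NTproBNP dominate the importance ranking, mirroring how the model in the paper leans on those indicators for its correct prediction.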

【Wrong Predictions】

The decision tree unfortunately makes a wrong prediction: even though the patient has a fever (38.4), it judges that the CRP and NTproBNP are not high enough for the case to be severe.

Credits

The raw dataset comes from hospitals in China and covers 92 patients who contracted COVID-19. Our Research Ethics Committee waived written informed consent for this retrospective study, which evaluated de-identified data and involved no potential risk to patients. All patient data were anonymized before analysis.

@article{wu2021interpretable,
  title={Interpretable machine learning for covid-19: an empirical study on severity prediction task},
  author={Wu, Han and Ruan, Wenjie and Wang, Jiangtao and Zheng, Dingchang and Liu, Bei and Geng, Yayuan and Chai, Xiangfei and Chen, Jian and Li, Kunwei and Li, Shaolin and others},
  journal={IEEE Transactions on Artificial Intelligence},
  year={2021},
  publisher={IEEE}
}

