
awful-ai's Introduction

Hi, I'm David

I'm a machine learning researcher and nature conservationist. I grew up in Germany's beautiful Black Forest as the son of Hoa boat people refugees. I build systems that tackle climate change and empower local and indigenous communities through algorithmic data marketplaces.

  • 🛰 Founder of GainForest, a decentralized green fund using machine learning to fight deforestation.
  • 🔭 Researcher at ETH Zurich working on ecological machine learning and ethical data markets.
  • 🌱 My goal is to save our planet with crazy technology 🌍

David Dao | Twitter

awful-ai's People

Contributors

auxdesigner, daviddao, ellyjonez, fabsta, fwsgonzo, jaworek, jvmncs, natewilliams2, pawni, rubiel1, shashi456, showengineer, stekhn, taizweb, tdiethe, twsl, vocalfan, wasserfuhr


awful-ai's Issues

Pitfalls of AI

Can we add a section on biases within AI models? I think it should be added because these models are trained on data curated by humans, so on some level they simply reflect the biases we inherently hold.

On the other hand, one area of AI research that belongs here is adversarial examples, which are highly relevant to autonomous-vehicle safety: adversarial examples cause a neural network to misclassify, and in some cases a perturbed input image prevents an object from being detected at all.
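To make the adversarial-example point concrete, here is a minimal FGSM-style sketch against a tiny hand-built linear classifier. All weights, inputs, and the perturbation budget are invented for illustration; real attacks target trained deep networks in the same way.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear "classifier" over a 4-pixel input; all numbers are invented.
w = np.array([1.5, -2.0, 0.5, 1.0])
b = 0.1

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(w @ x + b)

x = np.array([0.2, 0.1, 0.9, 0.4])  # clean input, classified as class 1

# FGSM: perturb each feature by eps in the direction that increases the
# loss for the true label (y=1). For logistic loss the input gradient is
# proportional to -w, so the attack step is x - eps * sign(w).
eps = 0.3
x_adv = x - eps * np.sign(w)

# A small, bounded perturbation flips the classifier's decision.
print(predict(x) > 0.5, predict(x_adv) > 0.5)  # prints: True False
```

The same one-step gradient-sign logic scales to image classifiers, where an eps small enough to be invisible to humans can still flip the predicted label.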

You have been added to awesome-humane-tech

This is just an FYI issue to let you know that you were added to the curated awesome-humane-tech list in the 'Awareness' category and, if you like, are now entitled to wear our badge:

Awesome Humane Tech

By adding this to the README:

[![Awesome Humane Tech](https://raw.githubusercontent.com/humanetech-community/awesome-humane-tech/main/humane-tech-badge.svg?sanitize=true)](https://github.com/humanetech-community/awesome-humane-tech)

https://github.com/humanetech-community/awesome-humane-tech

Uber God View

Just wondering what this has to do with AI. It seems like mostly a data misuse/privacy problem.

Conflating poor performance with structural bias

In the list I see examples like Tay mixed together with systems that are problematic at a design and structural level (PredPol, for example). Tay was just a bad marketing idea, probably from some manager very confused about how the internet works and what happens when you let it influence your product/model without proper counter-measures. A lot of the articles on that event were fear-mongering against AI as a whole, and I think this list would serve its purpose better by staying clear of non-news like Tay and focusing on genuinely problematic usages of machine learning systems.

Automated Machine Gun

Would the Samsung SGR-A1 fit the list? It was developed to help South Korea guard the DMZ. The gun can be configured to track humans and automatically fire upon them without human assistance. I would imagine that one could alter the system so that it combines some of the other dangerous models; for instance, one could probably combine the AI-based gaydar with the sentry gun and automatically commit genocide.

Formal criteria for being awful

Can we define a formal set of community-driven criteria for what is considered awful enough to make it into the list? As discussed in #8, some use cases in this list can be re-interpreted as stemming from missing domain knowledge or as unintentional.

Right now this is my rough guiding thought process in curating the list:

Discrimination / Social Credit Systems

  • AI predictions are imperfect and only as good as the human-curated dataset and defined cost functions they were trained on (Algorithmic bias and Fairness)
  • People who work with AI models (such as police officers in predictive policing) might not understand this and might mistake AI models for oracles, possibly influencing their world view (Societal impact)
  • AI applications at certain scales that don't consider this can make it into the list
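To illustrate the first point above, here is a toy sketch (all counts are invented) of how a model trained on a biased, human-curated dataset simply reproduces the bias of the data-collection process:

```python
# Neighborhood B is patrolled more heavily, so it accumulates more
# *recorded* incidents even if the true underlying rates are equal.
records = (
    [("A", 0)] * 90 + [("A", 1)] * 10 +  # A: lightly patrolled
    [("B", 0)] * 60 + [("B", 1)] * 40    # B: heavily patrolled
)

def train(data):
    """'Model' = recorded incident rate per neighborhood."""
    rates = {}
    for hood in sorted({h for h, _ in data}):
        labels = [y for h, y in data if h == hood]
        rates[hood] = sum(labels) / len(labels)
    return rates

model = train(records)
print(model)  # {'A': 0.1, 'B': 0.4}
```

The model "predicts" 4x the risk for B, but it has only encoded patrol intensity, not behavior; deploying it would send even more patrols to B and reinforce the loop.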

Surveillance / Privacy

  • AI models are computer systems and thus suffer from all security, manipulation and robustness problems (Privacy & Security)
  • AI applications that don't consider these implications can make it into the list

Influencing, disinformation, and fakes

  • AI applications can be used to intentionally influence and harm people at unprecedented scales. They can make it into the list

We should use this issue to discuss and define better, more structured criteria (if possible).

[SUGGESTION] Get discriminated while flying

The following article describes another AI system that might discriminate against people by trying to predict who on a plane is a terrorist. Feel free to add it to the list if you think it qualifies for this repo.
Cheers

Nothing about robotics and self-driving cars?

I think you should add a section about dangerous robotics, such as self-driving vehicles.

The death of Elaine Herzberg, caused by an Uber car that was configured not to brake in such situations to avoid car-sickness for Uber's customers, comes to mind.

"According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior"

The crash of Flight 447, where the pilots faced the paradox of automation, killing all 228 passengers and crew aboard, should also be mentioned to clarify the UI/UX problems that need to be solved before widespread deployment of AI systems.

[Research] Fake News

Fake News Discussion

Fake news is horrible, and defending against it is difficult. This thread hopes to raise awareness of what fake news currently is and what it can become with the dawn of AI (aka fake news 2.0).

I'm interested: what can we, as engineers, coders, and citizens, do against it?
Looking forward to your opinions and discussions!

Resources

Some resources I found helpful:

Models

Fact-checking

  • Input: sentence. Output: “Claim” or “Not Claim”
    Full Fact is an organization building automatic fact-checking tools for the benefit of the public. Part of their pipeline is a classifier that reads news articles and detects claims (classifying text as either “claim” or “not claim”), which can later be fact-checked (by humans now, with ML later, hopefully).
    Video: Sentence embeddings for automated factchecking - Lev Konstantinovskiy
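Full Fact's actual pipeline uses sentence embeddings; as a toy stand-in, here is a minimal bag-of-words Naive Bayes sketch (training sentences and labels are invented) showing the "sentence in, Claim/Not-Claim out" interface described above:

```python
from collections import Counter
import math

# Invented training data: factual-sounding statements vs. conversational filler.
train_data = [
    ("crime rose by ten percent last year", "Claim"),
    ("unemployment has doubled since 2010", "Claim"),
    ("the minister said spending increased by a billion", "Claim"),
    ("good morning everyone", "Not Claim"),
    ("thank you for that question", "Not Claim"),
    ("let us move to the next topic", "Not Claim"),
]

def train(data):
    """Count word frequencies per label."""
    counts = {"Claim": Counter(), "Not Claim": Counter()}
    for text, label in data:
        counts[label].update(text.split())
    return counts

def classify(counts, sentence):
    """Pick the label with the highest add-one-smoothed log-likelihood."""
    best, best_score = None, -math.inf
    for label, c in counts.items():
        total, vocab = sum(c.values()), len(c)
        score = sum(
            math.log((c[w] + 1) / (total + vocab + 1))
            for w in sentence.split()
        )
        if score > best_score:
            best, best_score = label, score
    return best

model = train(train_data)
print(classify(model, "exports fell by five percent"))
```

A real system would of course need far more data, proper tokenization, and embedding-based features, but the interface is the same: one sentence in, one "Claim"/"Not Claim" label out.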

Is the application of A.I. to empower the fossil fuel industry awful in light of climate change?

Artificial Intelligence is one of the fossil fuel industry's key technologies.

I first learned about this from an article by the AI Now Institute.

Vox recently published an informative video about how Google, Amazon, and Microsoft have contracts with big players in the fossil fuel industry - companies that go so far as to invest massive amounts of money in organizations that actively campaign against climate legislation and promote climate-change denial.

At NeurIPS (our biggest scientific A.I. venue), several representatives of fossil fuel companies attended the conference and tried to recruit A.I. people. (WiML (Women in Machine Learning) was even sponsored by Shell!)

Thank you sponsors 📣#WiML is thankful to our Gold sponsors @amazon
@Microsoft @splunk @Shell @IntelAI @unity3d
@NoodleAI @Twitter and @nvidia for all their support in making this event big.#WiML2019 #NeurIPS2019 🎉🎉 pic.twitter.com/KffOGEOqKd

— WiML (@wimlworkshop) December 2, 2019

Just 20 fossil fuel companies are responsible for one-third of all emissions. There is no scientific scenario in which we prevent the climate crisis without rapid decarbonization.

Question: Can we consider A.I. applications for the fossil fuel industry as awful? Would it make sense to create a new category?
