
mindsdb_evaluator's Introduction


MindsDB is the platform for building AI from enterprise data. You can create, serve, and fine-tune models in real-time from your database, vector store, and application data.

📖 About us

MindsDB is the platform for building AI from enterprise data.

With MindsDB, you can deploy, serve, and fine-tune models in real-time, utilizing data from databases, vector stores, or applications, to build AI-powered apps - using universal tools developers already know.

MindsDB integrates with numerous data sources, including databases, vector stores, and applications, and popular AI/ML frameworks, including AutoML and LLMs. MindsDB connects data sources with AI/ML frameworks and automates routine workflows between them. By doing so, we bring data and AI together, enabling the intuitive implementation of customized AI systems.

Learn more about features and use cases of MindsDB here.

🚀 Get Started

To get started, install MindsDB locally via Docker or Docker Desktop, following the instructions in the linked docs.

MindsDB enhances SQL syntax to enable seamless development and deployment of AI-powered applications. Furthermore, users can interact with MindsDB not only via SQL API but also via REST APIs, Python SDK, JavaScript SDK, and MongoDB-QL.
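
For example, here is a minimal sketch of programmatic access through the Python SDK, assuming the mindsdb_sdk package and a local MindsDB instance on the default HTTP port:

import mindsdb_sdk

# Connect to a locally running MindsDB instance (default HTTP API port is 47334).
server = mindsdb_sdk.connect('http://127.0.0.1:47334')

# Any MindsDB SQL statement can be sent through the SDK; fetch() returns a pandas DataFrame.
models = server.query('SELECT * FROM mindsdb.models;').fetch()
print(models)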

🎯 Solutions and ⚙️ SQL Query Examples:

🤖 Fine-Tuning: FINETUNE mindsdb.hf_model FROM postgresql.table;
📚 Knowledge Base: CREATE KNOWLEDGE_BASE my_knowledge FROM (SELECT contents FROM drive.files);
🔍 Semantic Search: SELECT * FROM rag_model WHERE question='What product is best for treating a cold?';
⏱️ Real-Time Forecasting: SELECT * FROM binance.trade_data WHERE symbol = 'BTCUSDT';
🕵️ Agents: CREATE AGENT my_agent USING model='chatbot_agent', skills = ['knowledge_base'];
💬 Chatbots: CREATE CHATBOT slack_bot USING database='slack', agent='customer_support';
⏲️ Time-Driven Automation: CREATE JOB twitter_bot ( <sql_query1>, <sql_query2> ) START '2023-04-01 00:00:00';
🔔 Event-Driven Automation: CREATE TRIGGER data_updated ON mysql.customers_data (sql_code)

💡 Examples

MindsDB enables you to deploy AI/ML models, send predictions to your application, and automate AI workflows.

Discover more tutorials and use cases here.

AI Workflow Automation

This category of use cases involves tasks that get data from a data source, pass it through an AI/ML model, and write the output to a data destination.

Common use cases are anomaly detection, data indexing/labeling/cleaning, and data transformation.

This example showcases the data enrichment flow, where input data comes from a PostgreSQL database and is passed through an OpenAI model to generate new content, which is saved to a data destination.

We take customer reviews from a PostgreSQL database. Then, we deploy an OpenAI model that analyzes all customer reviews and assigns sentiment values. Finally, to automate the workflow for incoming customer reviews, we create a job that generates and saves AI output into a data destination.

-- Step 1. Connect a data source to MindsDB
CREATE DATABASE data_source
WITH ENGINE = "postgres",
PARAMETERS = {
    "user": "demo_user",
    "password": "demo_password",
    "host": "samples.mindsdb.com",
    "port": "5432",
    "database": "demo",
    "schema": "demo_data"
};

SELECT *
FROM data_source.amazon_reviews_job;

-- Step 2. Deploy an AI model
CREATE ML_ENGINE openai_engine
FROM openai
USING
    openai_api_key = 'your-openai-api-key';

CREATE MODEL sentiment_classifier
PREDICT sentiment
USING
    engine = 'openai_engine',
    model_name = 'gpt-4',
    prompt_template = 'describe the sentiment of the reviews
                       strictly as "positive", "neutral", or "negative".
                       "I love the product":positive
                       "It is a scam":negative
                       "{{review}}.":';

DESCRIBE sentiment_classifier;

-- Step 3. Join input data with AI model to get AI output
SELECT input.review, output.sentiment
FROM data_source.amazon_reviews_job AS input
JOIN sentiment_classifier AS output;

-- Step 4. Automate this workflow to accommodate real-time and dynamic data
CREATE DATABASE data_destination
WITH ENGINE = "engine-name",      -- choose the data source you want to connect to save AI output
PARAMETERS = {                    -- list of available data sources: https://docs.mindsdb.com/integrations/data-overview
    "key": "value",
	...
};

CREATE JOB ai_automation_flow (
	INSERT INTO data_destination.ai_output (
		SELECT input.created_at,
			   input.product_name,
			   input.review,
			   output.sentiment
		FROM data_source.amazon_reviews_job AS input
		JOIN sentiment_classifier AS output
		WHERE input.created_at > LAST
	);
);

AI System Deployment

This category of use cases involves creating AI systems composed of multiple connected parts, including various AI/ML models and data sources, and exposing such AI systems via APIs.

Common use cases are agents and assistants, recommender systems, forecasting systems, and semantic search.

This example showcases AI agents, a feature developed by MindsDB. AI agents can be assigned certain skills, including text-to-SQL skills and knowledge bases. Skills provide an AI agent with input data that can be in the form of a database, a file, or a website.

We create a text-to-SQL skill based on the car sales dataset and deploy a conversational model, which are both components of an agent. Then, we create an agent and assign this skill and this model to it. This agent can be queried to ask questions about data stored in assigned skills.

-- Step 1. Connect a data source to MindsDB
CREATE DATABASE data_source
WITH ENGINE = "postgres",
PARAMETERS = {
    "user": "demo_user",
    "password": "demo_password",
    "host": "samples.mindsdb.com",
    "port": "5432",
    "database": "demo",
    "schema": "demo_data"
};

SELECT *
FROM data_source.car_sales;

-- Step 2. Create a skill
CREATE SKILL my_skill
USING
    type = 'text2sql',
    database = 'data_source',
    tables = ['car_sales'],
    description = 'car sales data of different car types';

SHOW SKILLS;

-- Step 3. Deploy a conversational model
CREATE ML_ENGINE langchain_engine
FROM langchain
USING
    openai_api_key = 'your-openai-api-key';
      
CREATE MODEL my_conv_model
PREDICT answer
USING
    engine = 'langchain_engine',
    model_name = 'gpt-4',
    mode = 'conversational',
    user_column = 'question',
    assistant_column = 'answer',
    max_tokens = 100,
    temperature = 0,
    verbose = True,
    prompt_template = 'Answer the user input in a helpful way';

DESCRIBE my_conv_model;

-- Step 4. Create an agent
CREATE AGENT my_agent
USING
    model = 'my_conv_model',
    skills = ['my_skill'];

SHOW AGENTS;

-- Step 5. Query an agent
SELECT *
FROM my_agent
WHERE question = 'what is the average price of cars from 2018?';

SELECT *
FROM my_agent
WHERE question = 'what is the max mileage of cars from 2017?';

SELECT *
FROM my_agent
WHERE question = 'what percentage of sold cars (from 2016) are automatic/semi-automatic/manual cars?';

SELECT *
FROM my_agent
WHERE question = 'is petrol or diesel more common for cars from 2019?';

SELECT *
FROM my_agent
WHERE question = 'what is the most commonly sold model?';

Agents are accessible via API endpoints.

๐Ÿค Contribute

If you'd like to contribute to MindsDB, install MindsDB for development following these instructions.

You'll find the contribution guide here.

We are always open to suggestions, so feel free to open new issues with your ideas, and we can guide you!

This project is released with a Contributor Code of Conduct. By participating in this project, you agree to follow its terms.

Also, check out the rewards and community programs here.

๐Ÿค Support

If you find a bug, please submit an issue on GitHub here.

You can get community support through our Slack community (see Subscribe to updates below).

If you need commercial support, please contact the MindsDB team.

💚 Current contributors

Made with contributors-img.

🔔 Subscribe to updates

Join our Slack community and subscribe to the monthly Developer Newsletter to get product updates, information about MindsDB events and contests, and useful content, like tutorials.

โš–๏ธ License

For detailed licensing information, please refer to the LICENSE file.

mindsdb_evaluator's People

Contributors

lucas-koontz, minurapunchihewa, paxcema, stpmax, tomhuds, vijay-jaisankar, zoranpandovski


mindsdb_evaluator's Issues

Feature Proposal: Test cases for `classification` and `regression`

I wish to raise a PR that includes tests for the following methods:

  • evaluate_multilabel_accuracy
  • evaluate_regression_accuracy

As outlined in tests/test_forecast_accs.py, these will include at least

  • Random inputs
  • Test cases pertaining to small and large error values
  • Identical inputs and outputs (with zero error and hence maximum accuracy)

If this seems worthwhile, please assign the issue to me.
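
As a sketch of the identical-inputs case mentioned above, a minimal pytest-style test could look like the following; the import path and (true_values, predictions) call shape are assumptions, so check the actual signatures in mindsdb_evaluator.accuracy first:

import pandas as pd
from mindsdb_evaluator.accuracy import evaluate_regression_accuracy  # import path assumed

def test_identical_inputs_give_max_accuracy():
    # Identical inputs and outputs: zero error should map to maximum accuracy,
    # per the expectation stated in the proposal above.
    values = pd.Series([1.0, 2.0, 3.0, 4.0])
    score = evaluate_regression_accuracy(values, values)
    assert score == 1.0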

Add evaluator class

This stateful mechanism should help enable the following:

  • Track evaluations through time in a simple way
  • Store multiple-metric evals in a single object (users may want to eval a model with multiple metrics and consolidate all of these in a single artifact, which would be easier to produce from a single object)
  • Focus on reproducibility: there should be a simple mechanism that enables automatic logging of everything that is done to/with a marked instance
  • Scheduled support (e.g. cron jobs)? Not 100% sold on this one; it may be better to implement it within mdb proper
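
A hypothetical skeleton of what such a class could look like (names and fields are illustrative, not a committed design):

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Dict, List

@dataclass
class Evaluator:
    # name -> metric function taking (y_true, y_pred) and returning a float
    metrics: Dict[str, Callable]
    # every evaluation is appended here, so multi-metric results and their
    # history live in a single, reproducible artifact
    history: List[dict] = field(default_factory=list)

    def evaluate(self, y_true, y_pred) -> dict:
        scores = {name: fn(y_true, y_pred) for name, fn in self.metrics.items()}
        self.history.append({
            'timestamp': datetime.now(timezone.utc).isoformat(),
            'scores': scores,
        })
        return scores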

Incorrect label cast to integer

As reported by the community, using the Titanic dataset and any one of F1 score, recall, or precision. From MindsDB:

CREATE PREDICTOR mindsdb.titanic1
FROM files                      
(SELECT * FROM titanic)              
PREDICT Survived
USING
    accuracy_functions="['f1_score']";

Triggers the following error:

ValueError: pos_label=1 is not a valid label. It should be one of ['0', '1'], raised at: /usr/local/lib/python3.8/dist-packages/mindsdb/integrations/libs/ml_exec_base.py#135

Normal accuracy, balanced accuracy, and ROC AUC score work fine.

Note: there is an additional issue with metrics that require parameter definition, like F1 score. Support for this is at the mindsdb_sql/mindsdb level (see respective issue for this).
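
A small reproduction of the underlying sklearn behavior and one possible fix, keeping pos_label's type consistent with the label values (illustrative only, not the actual mindsdb code path):

from sklearn.metrics import f1_score

# The Titanic `Survived` column arrives as strings, so sklearn sees the labels ['0', '1'].
y_true = ['0', '1', '1', '0']
y_pred = ['0', '1', '0', '0']

# f1_score(y_true, y_pred, pos_label=1) raises "pos_label=1 is not a valid label" here,
# because the integer 1 is not among the string labels. Passing pos_label with the same
# type as the labels (or casting both sides to int before scoring) avoids the error.
print(f1_score(y_true, y_pred, pos_label='1'))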

[Bug] Mac M1 does not have `np.float128`

If you try to use mindsdb_evaluator on a Mac M1, the following error appears:

Traceback (most recent call last):
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/alejandrovillegas/Documents/mindsdb/mindsdb/mindsdb/__main__.py", line 21, in <module>
    from mindsdb.api.http.start import start as start_http
  File "/Users/alejandrovillegas/Documents/mindsdb/mindsdb/mindsdb/api/http/start.py", line 9, in <module>
    from mindsdb.api.http.initialize import initialize_app
  File "/Users/alejandrovillegas/Documents/mindsdb/mindsdb/mindsdb/api/http/initialize.py", line 22, in <module>
    from mindsdb.api.http.namespaces.analysis import ns_conf as analysis_ns
  File "/Users/alejandrovillegas/Documents/mindsdb/mindsdb/mindsdb/api/http/namespaces/analysis.py", line 14, in <module>
    from mindsdb.api.mysql.mysql_proxy.classes.fake_mysql_proxy import FakeMysqlProxy
  File "/Users/alejandrovillegas/Documents/mindsdb/mindsdb/mindsdb/api/mysql/mysql_proxy/classes/fake_mysql_proxy/__init__.py", line 1, in <module>
    from .fake_mysql_proxy import FakeMysqlProxy
  File "/Users/alejandrovillegas/Documents/mindsdb/mindsdb/mindsdb/api/mysql/mysql_proxy/classes/fake_mysql_proxy/fake_mysql_proxy.py", line 3, in <module>
    from mindsdb.api.mysql.mysql_proxy.mysql_proxy import MysqlProxy
  File "/Users/alejandrovillegas/Documents/mindsdb/mindsdb/mindsdb/api/mysql/mysql_proxy/mysql_proxy.py", line 57, in <module>
    from mindsdb.api.mysql.mysql_proxy.executor import Executor
  File "/Users/alejandrovillegas/Documents/mindsdb/mindsdb/mindsdb/api/mysql/mysql_proxy/executor/__init__.py", line 1, in <module>
    from .mysql_executor import Executor
  File "/Users/alejandrovillegas/Documents/mindsdb/mindsdb/mindsdb/api/mysql/mysql_proxy/executor/mysql_executor.py", line 6, in <module>
    from mindsdb.api.executor.command_executor import ExecuteCommands
  File "/Users/alejandrovillegas/Documents/mindsdb/mindsdb/mindsdb/api/executor/command_executor.py", line 8, in <module>
    from mindsdb_evaluator.accuracy.general import evaluate_accuracy
  File "/Users/alejandrovillegas/Library/Python/3.9/lib/python/site-packages/mindsdb_evaluator/__init__.py", line 1, in <module>
    from mindsdb_evaluator.accuracy import *  # noqa
  File "/Users/alejandrovillegas/Library/Python/3.9/lib/python/site-packages/mindsdb_evaluator/accuracy/__init__.py", line 6, in <module>
    from mindsdb_evaluator.accuracy.general import evaluate_accuracy, evaluate_accuracies
  File "/Users/alejandrovillegas/Library/Python/3.9/lib/python/site-packages/mindsdb_evaluator/accuracy/general.py", line 16, in <module>
    SCORE_TYPES = (float, np.float16, np.float32, np.float64, np.float128,
  File "/Users/alejandrovillegas/Library/Python/3.9/lib/python/site-packages/numpy/__init__.py", line 311, in __getattr__
    raise AttributeError("module {!r} has no attribute "
AttributeError: module 'numpy' has no attribute 'float128'
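
One possible fix, as a sketch: np.float128 only exists on platforms with extended-precision support, so the tuple could be built with np.longdouble instead (this is an assumption about the fix, not the merged change):

import numpy as np

# Platform-safe sketch of SCORE_TYPES: np.longdouble always exists and aliases
# float128 on platforms that have it (and float64 on arm64 Macs), so the import
# no longer raises AttributeError on Apple Silicon.
SCORE_TYPES = (float, np.float16, np.float32, np.float64, np.longdouble)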

Implement a bunch of accuracy functions

Get AccStats to generate every viable accuracy metric it can think of for the problem types.

For starters, let's do everything under sklearn's metrics, plus whatever @paxcema thinks is relevant for time series; @hakunanatasha, chime in if you have any ideas (or would like to warn about any sklearn metrics being unreliable).

As with r2, I think it's better if we start with implementations that are wrappers over the sklearn function, and then add custom behavior if needed for various edge cases.

Also, this shouldn't be equivalent to using all of those accuracy functions for ensembling: the user should be able to specify a subset of functions that we ensemble on, but every possible function should be computed by AccStats and added to the model data.

For now we can leave the defaults for ensembling to be the same.
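
As a sketch of the wrapper-first approach described above (the function name and signature below are illustrative, not the library's actual API):

import pandas as pd
from sklearn.metrics import r2_score

def evaluate_r2_accuracy(true_values: pd.Series, predictions: pd.Series, **kwargs) -> float:
    # Thin wrapper over sklearn; custom behavior for edge cases can be layered on later.
    return r2_score(true_values, predictions)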

Feature Request: Expected calibration error

I would like to raise a PR that implements the ECE metric in this file.

More specifically, the proposed function will take a list of softmax outputs, the y_true labels, and the number of bins, and calculate ECE as described in the linked ECE calculation source.
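
For reference, a minimal sketch of the standard binned ECE computation (the function name and defaults are illustrative, not the proposed implementation):

import numpy as np

def expected_calibration_error(softmax_probs: np.ndarray, y_true: np.ndarray, n_bins: int = 10) -> float:
    # Bin samples by predicted confidence, then average the gap between each bin's
    # accuracy and its mean confidence, weighted by the fraction of samples in the bin.
    confidences = softmax_probs.max(axis=1)
    predictions = softmax_probs.argmax(axis=1)
    accuracies = (predictions == y_true).astype(float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(accuracies[mask].mean() - confidences[mask].mean())
    return ece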

If this sounds like a good idea, please assign the issue to me.

Thanks,
Vijay
