avaiga / taipy

Turns Data and AI algorithms into production-ready web applications in no time.

Home Page: https://www.taipy.io

License: Apache License 2.0

Python 79.91% JavaScript 0.48% TypeScript 18.40% Shell 0.03% Jinja 0.04% HTML 0.05% CSS 1.03% Jupyter Notebook 0.06% Dockerfile 0.01%
automation data-engineering data-ops data-visualization datascience developer-tools mlops orchestration pipeline pipelines

taipy's Introduction

Data and AI algorithms into production-ready web apps

Taipy is an open-source Python library for easy, end-to-end application development,
featuring what-if analyses, smart pipeline execution, built-in scheduling, and deployment tools.


Explore the docs »

Discord support · Demos & Examples

 

⭐️ What's Taipy?

Taipy is designed for data scientists and machine learning engineers to build full-stack apps.  

⭐️ Enables building production-ready web applications.
⭐️ No need to learn new languages or full-stack frameworks.
⭐️ Concentrate on Data and AI algorithms without development and deployment complexities.

 

User Interface Generation · Scenario and Data Management

 

✨ Features

  • Python-Based UI Framework: Taipy is designed for Python users, particularly those working in AI and data science. It lets them create full-stack applications without needing to learn additional skills like HTML, CSS, or JavaScript.

  • Pre-Built Components for Data Pipelines: Taipy includes pre-built components that allow users to interact with data pipelines, including visualization and management tools.

  • Scenario and Data Management Features: Taipy offers features for managing different business scenarios and data, which can be useful for applications like demand forecasting or production planning.

  • Version Management and Pipeline Orchestration: It includes tools for managing application versions, pipeline versions, and data versions, which are beneficial for multi-user environments.

 

⚙️ Quickstart

To install the latest stable release of Taipy, run:

pip install taipy

To install Taipy in a Conda environment or from source, please refer to the Installation Guide.
To get started with Taipy, please refer to the Getting Started Guide.

 

🔌 Scenario and Data Management

Let's create a scenario in Taipy that allows you to filter movie data based on your chosen genre.
This scenario is designed as a straightforward pipeline.
Every time you change your genre selection, the scenario runs to process your request.
It then displays the top seven most popular movies in that genre.


⚠️ Keep in mind, in this example, we're using a very basic pipeline that consists of just one task. However,
Taipy is capable of handling much more complex pipelines 🚀


Below is our filter function. This is a typical Python function and it's the only task used in this scenario.

import pandas as pd

# Keep only the rows whose genres contain the selected genre,
# then return the seven most popular titles
def filter_genre(initial_dataset: pd.DataFrame, selected_genre: str) -> pd.DataFrame:
    filtered_dataset = initial_dataset[initial_dataset['genres'].str.contains(selected_genre)]
    filtered_data = filtered_dataset.nlargest(7, 'Popularity %')
    return filtered_data
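As a quick sanity check, the filter task can be exercised on a small hand-made DataFrame (the sample rows below are illustrative, not taken from the real movie dataset):

```python
import pandas as pd

def filter_genre(initial_dataset: pd.DataFrame, selected_genre):
    filtered_dataset = initial_dataset[initial_dataset['genres'].str.contains(selected_genre)]
    filtered_data = filtered_dataset.nlargest(7, 'Popularity %')
    return filtered_data

# Illustrative sample with the same columns as the movie dataset
movies = pd.DataFrame({
    "Title": ["Movie A", "Movie B", "Movie C", "Movie D"],
    "genres": ["Action|Comedy", "Drama", "Action", "Comedy|Drama"],
    "Popularity %": [85, 60, 92, 40],
})

top_action = filter_genre(movies, "Action")
# Only the two Action titles remain, most popular first
```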

This is the execution graph of the scenario we are implementing:

Taipy Studio

You can use the Taipy Studio extension in Visual Studio Code to configure your scenario with no code.
Your configuration is automatically saved as a TOML file.
Check out the Taipy Studio Documentation.

For more advanced use cases, or if you prefer coding your configurations instead of using Taipy Studio,
check out the movie genre scenario creation with this Demo.


 

User Interface Generation and Scenario & Data Management

This simple application demonstrates how to create a basic film recommendation system with Taipy.
The application filters a dataset of films based on the user's selected genre and displays the top seven films in that genre by popularity. Here is the full code for both the frontend and backend of the application.

import taipy as tp
import pandas as pd
from taipy import Config, Scope, Gui

# Taipy Scenario & Data Management

# Filtering function - task
def filter_genre(initial_dataset: pd.DataFrame, selected_genre):
    filtered_dataset = initial_dataset[initial_dataset["genres"].str.contains(selected_genre)]
    filtered_data = filtered_dataset.nlargest(7, "Popularity %")
    return filtered_data

# Load the configuration made with Taipy Studio
Config.load("config.toml")
scenario_cfg = Config.scenarios["scenario"]

# Start Taipy Core service
tp.Core().run()

# Create a scenario
scenario = tp.create_scenario(scenario_cfg)


# Taipy User Interface
# Let's add a GUI to our Scenario Management for a full application

# Callback definition - submits scenario with genre selection
def on_genre_selected(state):
    scenario.selected_genre_node.write(state.selected_genre)
    tp.submit(scenario)
    state.df = scenario.filtered_data.read()

# Get list of genres
genres = [
    "Action", "Adventure", "Animation", "Children", "Comedy", "Fantasy", "IMAX",
    "Romance", "Sci-Fi", "Western", "Crime", "Mystery", "Drama", "Horror",
    "Thriller", "Film-Noir", "War", "Musical", "Documentary"
    ]

# Initialization of variables
df = pd.DataFrame(columns=["Title", "Popularity %"])
selected_genre = "Action"

# Set the initial value to Action
def on_init(state):
    on_genre_selected(state)

# User interface definition
my_page = """
# Film recommendation

## Choose your favorite genre
<|{selected_genre}|selector|lov={genres}|on_change=on_genre_selected|dropdown|>

## Here are the top seven picks by popularity
<|{df}|chart|x=Title|y=Popularity %|type=bar|title=Film Popularity|>
"""

Gui(page=my_page).run()

And the final result:

 

☁️ Taipy Cloud

With Taipy Cloud, you can deploy your Taipy applications in a few clicks and for free! To learn more about Taipy Cloud, please refer to the Taipy Cloud Documentation.


⚒️ Contributing

Want to help build Taipy? Check out our Contributing Guide.

🪄 Code of conduct

Want to be part of the Taipy community? Check out our Code of Conduct.

🪪 License

Copyright 2021-2024 Avaiga Private Limited

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

taipy's People

Contributors

aaryaz, aastridvh, alexandresajus, arcanaxion, bobbyshermi, dependabot[bot], dinhlongviolin1, dr-irv, eltociear, fabienlelaquais, florian-vuillemot, florianjacta, forchapeatl, fredll, fredll-avaiga, gmarabout, haotran-avaiga, joaoandre, joaoandre-avaiga, jrobinav, marisogo, nevo-david, ooooo-create, pavle-avaiga, satoshi-sh, thienan1010, toan-quach, trgiangdo, tsuu2092, yarikoptic


taipy's Issues

Add minimum version in setup.py for all repositories

We should specify the minimum version for each requirement in setup.py:

requirements = [
    "flask",
    "flask-cors",
    "flask-socketio",
    "markdown",
    "numpy",
    "pandas",
    "python-dotenv",
    "pytz",
    "simple-websocket",
    "tzlocal",
    "backports.zoneinfo;python_version<'3.9'",
    "flask-talisman",
]
  • taipy-core
  • taipy-rest
  • taipy-gui
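For illustration, the pinned list might look like the following sketch (the version numbers are placeholders, not vetted minimums):

```python
# Illustrative only: actual minimum versions must be determined per package
requirements = [
    "flask>=2.0",
    "flask-cors>=3.0",
    "flask-socketio>=5.0",
    "markdown>=3.3",
    "numpy>=1.21",
    "pandas>=1.3",
    "python-dotenv>=0.19",
    "pytz>=2021.3",
    "simple-websocket>=0.5",
    "tzlocal>=4.0",
    "backports.zoneinfo>=0.2;python_version<'3.9'",
    "flask-talisman>=1.0",
]
```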

Cache JSX rendered output

Description
Due to the introduction of _DataScopes, the render output in Page (PageRenderer) cannot be cached. Each time a new user comes in, the render method is responsible for initializing all of the data in the user scopes. This is highly unoptimized, but since the current approach has no visible impact on performance, we may want to deal with this later down the line.

One possible solution to this problem is to cache the variable list and the expression list when the page is initially rendered, then re-evaluate and apply them to a new scope when a new user comes in.

Acceptance Criteria

  • Ensure new code is unit tested, and check code coverage is at least 90%
  • Propagate any change on the demos and run all of them to ensure there is no breaking change
  • Ensure any change is well documented

Find a way to run tests after taipy installation from PyPI

After pushing the taipy package to PyPI, we should test it to ensure the package works well and all components are correctly installed.

After moving the structure to having src, make sure tests succeed both locally and on GitHub with no import issues

Be able to run tests:

  • Taipy-rest
  • Taipy-airflow
  • Taipy-enterprise
  • Taipy

Present the problem at a Friday meeting.

Test taipy with Pandas 1.3

Watson Studio cannot work with a Pandas version higher than 1.3.

Todo:

  • Test if Pandas 1.3 works with Taipy-(core|Gui)
  • Update both setup.py


Deploy Taipy on Azure

@florian-vuillemot commented on Mon Jan 31 2022

Provide documentation about Azure deployment. This doc is intended for a customer with experience in Azure.

Target platforms

  • Container
  • Webapp
  • IAAS

Acceptance criteria

  • Accessible to an Azure beginner
  • Azure templating
  • Security should be integrated

Re-execution of 'main.py'

The code in "main.py" of a Taipy project is re-run by default on the Gui side with "use_reloader=True" (whenever a change is made to the code). It is also re-run when a worker is created on the Core side. These behaviors could be changed or documented.

Because of this, it is also important to explain how to properly initialize the variables for the Gui so the desired result is obtained when 'main.py' is re-executed.

Version of Taipy:
taipy-core 1.1.1
taipy-gui 1.1.1
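A generic Python mitigation (not a Taipy-specific fix) is to keep side effects behind a __main__ guard, so re-imports triggered by the reloader or by a Core worker do not re-run them, and to keep Gui variable setup in pure functions:

```python
import pandas as pd

def build_initial_state():
    # Pure initialization: safe to call again from on_init
    # when the reloader re-executes the module
    return pd.DataFrame(columns=["Title", "Popularity %"])

df = build_initial_state()

if __name__ == "__main__":
    # Side effects (starting Core, submitting scenarios, running the Gui)
    # belong here, so a re-import by a worker does not re-trigger them
    pass
```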

Expose a metric control

What would that feature address

A synthetic representation of complex datasets is an excellent feature to offer end users: one simple
graphical object can represent a value or a trend that can be understood at a glance.
We want to come up with a minimal set of easy-to-configure controls that can be used in such a Business User context.

Description of the ideal solution

A minimal set of highly customizable controls can be used just like any other control to represent such synthetic views of some data.

See if Plotly Express gauges could be a temporary workaround.

Acceptance Criteria

  • Ensure new code is unit tested, and check code coverage is at least 90%
  • Propagate any change on the demos and run all of them to ensure there is no breaking change
  • Ensure any change is well documented

archive taipy-legacy

Migrate remaining taipy-legacy issues to the appropriate repository and archive taipy-legacy

Make repositories public

Make repositories public:

  • taipy
  • taipy-core
  • taipy-rest
  • taipy-gui
  • taipy-doc
  • taipy-getting-started

Release Taipy on Conda

Step 0:
We need to add to the release process the deployment of taipy on Conda.

  • Taipy
  • Taipy-core
  • Taipy-gui
  • Taipy-rest

First step: Deploy manually based on the documentation.

Second step: Start working on a proposal for automation.

All front-end values should be encapsulated

Description
Refactoring of variable binding to ensure we manage what gets sent to the front-end.

Acceptance Criteria

  • Ensure new code is unit tested, and check code coverage is at least 90%
  • Propagate any change on the demos and run all of them to ensure there is no breaking change
  • Ensure any change is well documented

Set max version in setup.py

Our setup.py contains the minimal version needed for each dependency but not the maximal.
We should set the latest version that we support to limit possible failures due to new interfaces in our dependencies.

Warning: each package has its own release process. Some packages break their interface between "patch" or "minor" releases. We should watch for this kind of behaviour to stay updated without breaking our installation.
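PEP 440 specifiers can carry both bounds in a single requirement string; a sketch (the bounds below are illustrative, not recommendations):

```python
# Illustrative bounds: lower bound = tested minimum, upper bound
# excludes the next major release that may break the interface
requirements = [
    "pandas>=1.3,<2.0",
    "numpy>=1.21,<2.0",
    "flask>=2.0,<3.0",
]
```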

Extend Cycle concept

Description

We should be able to make cycle start dates different from the default ones.
Examples:
I want my Daily cycles to start at 8 AM and not at midnight.
I want to have a business day frequency.
I want my monthly cycles to start on the 5th day of the month.
I want a business month frequency.
etc.

This link could help to understand the need.
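Pandas offset aliases already cover some of these cases and may be what the link above points to; for instance, a business-day cycle starting at 8 AM could be sketched as:

```python
import pandas as pd

# Five consecutive business-day cycle starts beginning Monday 2024-01-01,
# each starting at 8 AM instead of midnight (freq="B" skips weekends)
starts = pd.date_range("2024-01-01 08:00", periods=5, freq="B")
```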

Release taipy-getting-started and taipy-doc

Outcome:

  • Release Process for repositories.
  • Process for releasing taipy-doc
  • Release for taipy-getting-started 1.0.1. (with scenario_selector fix)
  • Release for taipy-doc 1.0.0.

Tables with Filters

It would be nice to be able to filter data in displayed tables using an Excel-like method:
We could see the columns where filters apply.
One difficulty is that filters can be complex combinations (and/or) of formulas (for numbers or strings). We could certainly have a single formula in the first version!
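On the back-end side, such and/or filter formulas map naturally onto pandas boolean masks; a small sketch with illustrative column names:

```python
import pandas as pd

df = pd.DataFrame({
    "city": ["Paris", "Lyon", "Paris", "Nice"],
    "sales": [120, 80, 45, 200],
})

# (city == "Paris" AND sales > 50) OR sales > 150
mask = ((df["city"] == "Paris") & (df["sales"] > 50)) | (df["sales"] > 150)
filtered = df[mask]
```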

Initialize taipy repository to generate project

Configure the Taipy project and expose the taipy-core, taipy-rest, and taipy-gui APIs through the Taipy project.

The goal is to be able to do a
pip install taipy
and just:

import taipy as tp

tp.gui
tp.configure_data_node
...
  • build project
  • expose gui apis
  • expose core apis
  • expose rest apis (tp.rest) => Partial
  • update milestone project to replace import taipy.core as tp

Design and bootstrap authentication

In the Enterprise version, Taipy must propose a way to facilitate user authentication. This authentication must be shared by all components gui, core, rest, airflow, ...

  • Propose a detailed design and architecture to implement authentication solution.
  • Share the proposal in the team
  • List all impacts on the existing implementation
  • Create implementation tickets in github

Taipy Install in Anaconda Navigator on macOS

We use the Jupyter notebooks from within Anaconda Navigator. We did a !pip install taipy inside the notebooks, but the
from taipy import Gui led to errors.

It seems that this problem only occurs on macOS.

Sort mechanism for get_all_scenarios and get_masters functions

Description

When calling tp.get_scenarios() or tp.get_primary_scenarios(), the user should be able to provide an optional field for sorting the scenarios.
The sort should be increasing or decreasing.
The sort could be on the name (default), id, creation_date, or tag.
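Internally, such a sort could boil down to Python's sorted() over the entity list; a sketch with a stand-in Scenario class (the real attribute and parameter names may differ):

```python
from dataclasses import dataclass
from datetime import datetime
from operator import attrgetter

@dataclass
class Scenario:  # stand-in for Taipy's Scenario entity
    name: str
    creation_date: datetime

scenarios = [
    Scenario("b", datetime(2022, 1, 10)),
    Scenario("a", datetime(2022, 3, 1)),
    Scenario("c", datetime(2022, 2, 5)),
]

def get_scenarios_sorted(entities, sort_by="name", descending=False):
    # Default sort on name; optionally decreasing, as the issue describes
    return sorted(entities, key=attrgetter(sort_by), reverse=descending)

by_name = get_scenarios_sorted(scenarios)
newest_first = get_scenarios_sorted(scenarios, "creation_date", descending=True)
```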

Pausing job

Description

  • Expose methods in Job_manager to pause and resume a job from its id or the entity.
_JobManager.pause(job.id)
_JobManager.resume(job.id)
_JobManager.pause(job)
_JobManager.resume(job)
  • Expose method in Taipy to pause and resume a job from its id or the entity.
tp.pause(job.id)
tp.resume(job.id)
tp.pause(job)
tp.resume(job)
  • Expose method in Job to pause and resume itself.
job.pause()
job.resume()

Acceptance Criteria

  • Ensure new code is unit tested, and check code coverage is at least 90%
  • Propagate any change in the demos and run all of them to ensure there is no breaking change
  • Propagate any change in taipy-rest
  • Ensure any change is well documented

Logs do not appear

@jrobinAV commented on Wed Jun 29 2022

from taipy import Config, Gui

import taipy as tp 

import time

def test(ini):
    print("Doing something")
    time.sleep(2)
    return ini

ini_cfg = Config.configure_data_node(id="ini_cfg", default_data=1)
final_cfg = Config.configure_data_node(id="final_cfg")

task_cfg = Config.configure_task(id="task_cfg", input=ini_cfg, output=final_cfg, function=test)

pipeline_cfg = Config.configure_pipeline(id="pipeline_cfg", task_configs=[task_cfg])
scenario_cfg = Config.configure_scenario(id="scenario_cfg", pipeline_configs=[pipeline_cfg])

md = "<|Run|button|on_action=create_and_submit|>"

def create_and_submit(state):
    tp.create_scenario(scenario_cfg).submit()
    print("Finished")

if __name__ == '__main__':
    tp.create_scenario(scenario_cfg).submit()
    Gui(md).run()

Move airflow to taipy-airflow

The Airflow config is still in the Taipy repository.
We should move it and allow the Taipy config to call it.

Todo:

  • Move Airflow config code in Taipy-airflow
  • Move Airflow config tests in Taipy-airflow
  • Update Airflow Config to be based on Properties and not class field
  • Update Taipy to be based on properties and not Airflow config class field

Acceptance criteria

  • No Airflow code should stay in Taipy
  • Config interface should stay the same for end-user
  • Tests should be moved in Taipy-Airflow

Contributing process

  • Check if we need to specify the license everywhere or just in the main file.
  • Contributing section of the documentation

Tables with multiple index

To be able to provide an easy display for the different shapes and forms of Pandas dataframes, there is a need to be able to display tables/dataframes with multiple indexes,
either for the columns
or for the rows.

There have been several requests on StackOverflow, but it seems that there is a void there.

Add a 'Shared' Scope to connect multiple pipelines

What would that feature address
It would address the fact that we are not able to connect two pipelines without using the Scenario Scope. It would make the configuration much easier and more flexible for specific graphs.

Description of the ideal solution
The ideal solution would be for Taipy to detect the scope of each Data node and make them accessible for the pipelines associated.

Here is an example of tasks (predict, evaluate) that are parallel to each other. To connect two pipelines (one pipeline for each task), we need 'pred' Data Nodes ('pred_pipeline_1','pred_pipeline_2', 'pred_pipeline_3') to have different names and a Scenario Scope.
(diagram: scenario_pipeline)

The code for the graph above would look something like this:

import taipy as tp
from taipy.core import Config, Scope

import pandas as pd

def evaluate(base_data):
    return base_data

def predict(base_data):
    return base_data

pipelines = ['pipeline_1', 'pipeline_2', 'pipeline_3']    
pipelines_cfg = []

for pipeline in pipelines:
    # put the data needed for each pipeline as their default data 
    parameters_cfg = Config.configure_data_node(id='parameters',
                                                   scope=Scope.PIPELINE)
    pred_cfg = Config.configure_data_node(id="pred"+pipeline,
                                                   scope=Scope.SCENARIO)
    task_pred_cfg = Config.configure_task(id=pipeline+"_pred",
                                     function=predict,
                                     input=parameters_cfg,
                                     output=pred_cfg)
    pipeline_pred_cfg = Config.configure_pipeline(id=pipeline+'_pred', task_configs=[task_pred_cfg])

    pipelines_cfg.append(pipeline_pred_cfg)

    evaluation_cfg = Config.configure_data_node(id='evaluation',
                                                scope=Scope.PIPELINE)
    task_evaluate_cfg = Config.configure_task(id=pipeline+"_evaluate",
                                     function=evaluate,
                                     input=pred_cfg,
                                     output=evaluation_cfg)
    pipeline_evaluate_cfg = Config.configure_pipeline(id=pipeline+'_evaluate', task_configs=[task_evaluate_cfg])

    
    pipelines_cfg.append(pipeline_evaluate_cfg)

                                     
scenario_cfg = Config.configure_scenario(id="scenario_1", pipeline_configs=pipelines_cfg)

The goal would be to simplify this by having a Shared Scope. The Data Node names would not need to differ, data would be easier to manage, and the Scope would sit at the appropriate level.

(diagram: shared_pipeline)

The code for the graph above would look something like this:

import taipy as tp
from taipy.core import Config, Scope

import pandas as pd

def evaluate(base_data):
    return base_data

def predict(base_data):
    return base_data

pipelines = ['pipeline_1', 'pipeline_2', 'pipeline_3']    
pipelines_cfg = []

# put the data needed for each pipeline as their default data 
parameters_cfg = Config.configure_data_node(id='parameters',
                                               scope=Scope.PIPELINE)
pred_cfg = Config.configure_data_node(id="pred",
                                               scope=Scope.SHARED)
    
task_pred_cfg = Config.configure_task(id='pred_task',
                                      function=predict,
                                      input=parameters_cfg,
                                      output=pred_cfg)
    
pipeline_pred_cfg = Config.configure_pipeline(id='pipeline_pred', task_configs=[task_pred_cfg])

evaluation_cfg = Config.configure_data_node(id='evaluation',
                                            scope=Scope.PIPELINE)
    
task_evaluate_cfg = Config.configure_task(id="evaluate_task",
                                 function=evaluate,
                                 input=pred_cfg,
                                 output=evaluation_cfg)
    
pipeline_evaluate_cfg = Config.configure_pipeline(id='pipeline_evaluate', task_configs=[task_evaluate_cfg])

scenario_cfg = Config.configure_scenario(id="scenario_1", pipeline_configs=[pipeline_pred_cfg]*len(pipelines)+
                                                                            [pipeline_evaluate_cfg]*len(pipelines))    

There is no for loop, and the name of the Data Nodes is the same. However, with this code, it is not possible to understand what the user wants to execute with pipeline_configs, nor which pipelines have access to which Data Nodes.

Caveats
It would impact the configuration, the access to the entities, and so on.

Taipy Authentication

Python -- taipy-auth

  • Bootstrap the repos
  • Support authentication validators (LDAP, OAuth2)
  • Tests

Front-end

  • To investigate -- tokens, for example

Add row for tables

Now, we can update the values of each cell of a table (those that are editable).
Could we also add a row?
