
aiconfig's People

Contributors

activescott, andrew-lastmile, anikait10, ankush-lastmile, jonathanagustin, jonathanlastmileai, justinkchen, kyldvs, lastmileai-dev, rholinshead, rossdanlm, saqadri, smyja, sp6370, suyoglastmileai, tanya-rai, tanya-rai-lm, vanshb03, zakariassoul


aiconfig's Issues

Add run-time AiConfig validator for config.load()

One thing I've noticed while working on aiconfig tests is that it's very easy to create invalid JSON manually, and the aiconfig runtime just silently ignores it -- for example, I had forgotten to put the model name/settings under the top-level metadata key and had to add some console.logs to see why my metadata was null.
Programmatic editing and editing via SDK is good, but we may even run into issues there down the line if we update the expected schema contents.

We should have some run-time validation that the loaded aiconfig is valid. To start, we can probably just do it in config.load and leverage a schema validation library (e.g., ajv) with our schema definition, raising an error if the config is not valid.
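A minimal sketch of what that could look like on the Python side, assuming we ship the schema file with the package and use the jsonschema library (ajv would be the TypeScript equivalent); paths and error types here are illustrative:

import json
from jsonschema import validate, ValidationError

def load_and_validate(path: str, schema_path: str) -> dict:
    """Load an aiconfig JSON file and fail loudly if it doesn't match the schema."""
    with open(path) as f:
        config = json.load(f)
    with open(schema_path) as f:
        schema = json.load(f)
    try:
        validate(instance=config, schema=schema)
    except ValidationError as e:
        raise ValueError(f"Invalid aiconfig at {path}: {e.message}") from e
    return config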

Simplify addition of system prompt to config via SDK

Currently, adding a system prompt via the SDK requires specifying the model again, even if a default model already exists. This felt unintuitive: all I wanted was to add the system prompt, not re-specify the model, but I couldn't do one without the other.

Additionally, the process was more taxing than necessary because the system prompt is deeply nested in the JSON. I wish it sat at the same hierarchical level in the prompt structure as 'name', 'input', etc.
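For illustration, a hypothetical convenience method (not in the current SDK) could reduce this to one call:

# Hypothetical helper -- today this requires re-specifying the model and nesting
# the value under metadata.model.settings.system_prompt.
config.set_system_prompt("get_activities", "You are a helpful travel assistant.")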

Improve support for chatbot-like AIConfigs (dynamic-length chain of prompts following a template)

Motivating example:

Suppose we want to write an AIConfig for a chatbot. Each interaction has a specific parametrized prompt (along with model settings), and we have a hard-coded initial state (e.g. the first message sent to the LLM to set the context).

This is currently not (well) supported; for example, the AIConfig for cli-mate has an empty prompt list, and prompts are added dynamically. Instead, we should be able to represent an execution node that is as parametrized as possible, references itself, and can be unrolled dynamically during app execution.
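For illustration, today's workaround looks roughly like the loop below, with the application unrolling the chat itself; a self-referencing, parametrized execution node would let the config express this directly (SDK calls here are approximate, and make_chat_prompt is a hypothetical template factory):

# Sketch of today's workaround: the app unrolls the chat loop itself,
# appending one prompt per turn from the same parametrized template.
turn = 0
while (user_message := input("> ")):
    prompt_name = f"chat_turn_{turn}"
    config.add_prompt(prompt_name, make_chat_prompt(prompt_name))  # make_chat_prompt: hypothetical template factory
    await config.run(prompt_name, params={"user_message": user_message})
    print(config.get_output_text(prompt_name))
    turn += 1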

cc @tanya-rai , @saqadri

[bug] Race condition when awaiting multiple `run()` calls on the same AIConfigRuntime

asyncio.gather-ing a list of run calls does not return the expected results because of a race condition: every get_output_text() result has the same value, even when multiple different inputs are run concurrently.

We think the race condition is

  1. run and store output (a)
  2. run and store output (b)
  3. read output (a) but the most recent one was from b, so you get output b
  4. read output (b) again

When it should be

  1. run and store output (a)
  2. read output (a)
  3. run and store output (b)
  4. read output (b)

Repro:

  1. Instantiate an AIConfigRuntime
  2. Create a list of different params inputs
  3. Run each input using run_and_get_output_text()
  4. asyncio.gather the results

If you do this enough times, you should be able to see the race condition. Even with different inputs, all the outputs will have the same value (probably the most recent one that was scheduled).
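A minimal repro sketch (prompt name and params are placeholders; run/get_output_text follow the Python SDK usage shown elsewhere in these issues):

import asyncio

async def run_one(config, params):
    await config.run("my_prompt", params)        # store the output on the prompt
    return config.get_output_text("my_prompt")   # read it back

async def main(config):
    params_list = [{"topic": "cats"}, {"topic": "dogs"}, {"topic": "birds"}]
    results = await asyncio.gather(*(run_one(config, p) for p in params_list))
    # Expected: three different outputs. Observed: every element of `results`
    # is the same string (the most recently stored output).
    print(results)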

Field warning: conflict with protected namespace "model_"

/opt/homebrew/Caskroom/miniconda/base/envs/att7/lib/python3.11/site-packages/pydantic/_internal/fields.py:128: UserWarning: Field "model_parsers" has conflict with protected namespace "model".

Repro: run the first cell of customer_report/generate.ipynb.
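If we keep the model_parsers field name, the warning can be silenced on the pydantic v2 side; a minimal sketch, assuming the offending class is a pydantic BaseModel we control:

from typing import Optional
from pydantic import BaseModel, ConfigDict

class ConfigMetadata(BaseModel):  # illustrative class name
    # Opt out of pydantic's protected "model_" namespace so that fields like
    # model_parsers don't trigger the UserWarning.
    model_config = ConfigDict(protected_namespaces=())
    model_parsers: Optional[dict] = None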

Improvements to ModelParser Creation

Some improvements to the contract between parsers and config, which should make ModelParser creation easier:

  1. Add default callback event handling in the base ModelParser infrastructure

I think the base classes should record some of the events by default, so that event handling isn't left entirely to the ModelParser creator's discretion.

Mainly, each on_x_start / on_x_end event for the core methods (serialize, deserialize, run) should just be logged by default, with ModelParser creators implementing only the code between the events (see the sketch after this list).

  2. Clarify & ensure consistency in handling of prompt outputs in run. Having ModelParsers mutate the input Prompt seems too heavy-handed; ideally ModelParsers should just return the outputs and the AIConfigRuntime should manage adding them to the config.
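For point 1, a rough sketch of the base-class behavior (event names follow the existing on_x_start / on_x_end convention; imports and details are approximate):

from aiconfig import CallbackEvent  # exact import path may differ

class ModelParser:  # simplified sketch of the base class
    async def run(self, prompt, aiconfig, options=None, parameters=None, **kwargs):
        # The base class records the start/end events so individual parsers
        # don't have to remember to do it.
        await aiconfig.callback_manager.run_callbacks(
            CallbackEvent("on_run_start", __name__, {"prompt": prompt, "parameters": parameters})
        )
        outputs = await self.run_inference(prompt, aiconfig, options, parameters, **kwargs)
        await aiconfig.callback_manager.run_callbacks(
            CallbackEvent("on_run_complete", __name__, {"result": outputs})
        )
        return outputs

    async def run_inference(self, prompt, aiconfig, options, parameters, **kwargs):
        raise NotImplementedError  # parser authors implement only this part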

C# library for aiconfig

Request: please consider building a C# library for AIConfig. I am sure it would help the large audience that primarily works with Microsoft technologies.

Typescript export/import

Not all types are exported; e.g., JSONObject currently requires importing from an internal path: import { JSONObject } from "aiconfig/dist/common";

Extensibility docs don't mention ModelParserRegistry

Extensibility docs explain how to create a model parser but not how to use it and register it.

ModelParserRegistry.register_model_parser should be documented there, as well as the overall extensions structure.

Sherry also gave feedback on this!

[py] PaLM Model Parser doesn't check for API Key

The PaLM model parser doesn't explicitly check for an API key.

It currently needs to be configured manually by doing something like the following:

import google.generativeai as palm
import os
palm.configure(api_key=os.getenv("PALM_KEY"))   

In this example the key must be set as an environment variable named "PALM_KEY".
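A minimal guard the parser could run before inference; the PALM_KEY environment variable name follows the example above and is an assumption:

import os
import google.generativeai as palm

def configure_palm() -> None:
    api_key = os.getenv("PALM_KEY")
    if not api_key:
        # Fail fast with an actionable message instead of a confusing downstream error.
        raise ValueError(
            "PaLM API key not found. Set the PALM_KEY environment variable "
            "or call palm.configure(api_key=...) before running this model parser."
        )
    palm.configure(api_key=api_key)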

Support AIConfig format lint/validation in VS Code editor

Depends on #291

As a major devX nicety, we should consider leveraging the schema we defined for #291 in VS Code, so that the editor lints the config JSON file structure against the schema. See https://code.visualstudio.com/docs/languages/json#_json-schemas-and-settings for details. VS Code supports either a $schema key in the JSON itself (we would need to add that to the spec, but it might be nice anyway, both for automatic validation in VS Code and as a form of schema versioning) or a workspace setting keyed on file path/name/etc. (in which case a common naming convention like .aiconfig.json would make sense).
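For reference, a sketch of the settings-based approach in .vscode/settings.json (the schema URL and file-match pattern are placeholders):

// .vscode/settings.json (illustrative)
{
  "json.schemas": [
    {
      "fileMatch": ["*.aiconfig.json"],
      "url": "https://example.com/aiconfig.schema.json"  // placeholder schema URL
    }
  ]
}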

Update Parsers to Define Default Prompt / Prompt Data

When programmatically constructing a prompt for the config, the user must build the data to serialize for prompt creation. More importantly, our editor implementations currently need to be coupled to the model parsers in order to know how to construct prompts for different models.

If we instead expose an abstract method on the ModelParser class to define a default prompt (or even default data that can be serialized into the prompt), then we could leverage it for any model parser to construct or modify default prompts without needing to know all the required data values from the start.
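One hypothetical shape for this (method name and return type are illustrative, not part of the current API):

from abc import abstractmethod

class ModelParser:  # sketch; only the proposed new method is shown
    @abstractmethod
    def get_default_prompt(self, prompt_name: str) -> "Prompt":
        """Return a minimal valid Prompt (or serializable prompt data) for this model,
        so editors can create or reset cells without knowing model-specific fields."""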

Missing attribute `model_dump`

Hitting an issue in the cookbooks regarding a missing model_dump attribute:

/usr/local/lib/python3.10/dist-packages/aiconfig/default_parsers/openai.py in run_inference(self, prompt, aiconfig, options, parameters)
    269 for chunk in response:
    270     # OpenAI>1.0.0 uses pydantic models. Chunk is of type ChatCompletionChunk; type is not directly importable from openai library, will require some diffing
--> 271     chunk = chunk.model_dump(exclude_none=True)
    272     # streaming only returns one chunk, one choice at a time (before 1.0.0). The order in which the choices are returned is not guaranteed.
    273     messages = multi_choice_message_reducer(messages, chunk)

AttributeError: 'dict' object has no attribute 'model_dump'
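One defensive sketch that tolerates both pre-1.0 dict chunks and >=1.0 pydantic chunks (treat this as a workaround idea, not the confirmed fix):

# Works whether `chunk` is a pydantic model (openai >= 1.0) or a plain dict (older clients).
chunk_dict = chunk.model_dump(exclude_none=True) if hasattr(chunk, "model_dump") else chunk
messages = multi_choice_message_reducer(messages, chunk_dict)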

Break up run_inference function

Feedback from user:

  • They had to specify their own Azure endpoint for OpenAI and do some custom auth, which only required changing 2 lines of code, but those 2 lines sit in the middle of a large function.
  • It would be good to break up the run_inference function into specific pieces that can be overridden.
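A hypothetical decomposition (method names are illustrative, not the current API; assumes the openai>=1.0 async client):

import openai

class OpenAIInference:  # sketch of smaller, overridable pieces
    def build_client(self) -> openai.AsyncOpenAI:
        # Override point: return an Azure client or add custom auth here.
        return openai.AsyncOpenAI()

    def build_completion_params(self, prompt, aiconfig, parameters) -> dict:
        # Override point: adjust the request payload without copying the whole method.
        raise NotImplementedError

    async def run_inference(self, prompt, aiconfig, options, parameters):
        client = self.build_client()
        completion_params = self.build_completion_params(prompt, aiconfig, parameters)
        response = await client.chat.completions.create(**completion_params)
        return self.process_response(response, prompt, aiconfig)

    def process_response(self, response, prompt, aiconfig):
        # Override point: output construction / streaming handling.
        raise NotImplementedError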

LLM Consistency Cookbook is buggy

Four bugs, present in both Colab and local (VS Code Jupyter notebook) runs:

  1. The notebook references "multi-llm-consistency.json"; there is a similar file with underscores, but nothing matching this name.
  2. multi_llm_consistency.json aliases its model names, which doesn't work: "ChatGPT" is never added to the ModelParserRegistry, so you get the error below.
  3. PaLM doesn't work -- I assume this is a config issue on my end, but I didn't see a solution at a glance. What's the expected behavior when one model type is unconfigured?
  4. run_with_dependencies is false by default, so the run doesn't work even when the above issues are fixed.

Here's a fixed NB: https://colab.research.google.com/drive/199HTWPv5BdU5hhX5IC474pV8RNXtKFB2#scrollTo=ZhyBEWGcM6zB, though palm still doesn't work.

Error from #2:

/usr/local/lib/python3.10/dist-packages/aiconfig/registry.py in get_model_parser(model_id)
     39             ModelParser: The retrieved model parser
     40         """
---> 41         return ModelParserRegistry._parsers[model_id]
     42 
     43     @staticmethod

KeyError: 'ChatGPT'

Output display with streaming

The output displays the prompt response plus the ExecuteResult object, which feels repetitive (and gets messier when the output is longer); see the screenshot.

For chained prompts, streaming shows the previous prompt responses, which makes sense, but some demarcation tying each output to its prompt name would be helpful. Otherwise it gets hard to read.

[Screenshot from 2023-11-19 omitted]

Combine config.run + config.get_output_text for efficient chaining

I almost always need the text output after running inference to use for chaining (supported in AI Workbooks). Would love to have these two functions combined instead of always having to do this:

verification_completion = await config.run("verification", params)
verification_response = config.get_output_text("verification")
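Something like a combined call would remove the boilerplate (method name is hypothetical):

# Hypothetical combined call:
verification_response = await config.run_and_get_output_text("verification", params)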

API Capabilities

I think it's a good idea to build and expose an API for aiconfig. Currently, a big limitation is that aiconfig can only be consumed as a library dependency. Having an API would let users consume it and develop their own libraries on top of it if needed.

Errors in multi_choice_message_reducer

Using the "Getting Started" cookbook notebook after pulling the most recent changes, I see the error pasted below. Given that it worked previously, it must come from a commit in the past two days. If I had to speculate, it's an OpenAI API backwards-incompatibility issue. Quick fixes like the one below failed:

for choice in chunk["choices"]: -> for choice in chunk.choices:

Reproduces with openai versions 1.1.2, 1.2.4, and 1.3.5


Error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/home/jacobjensen/Projects/llms/aiconfig/cookbooks/Getting-Started/getting_started.ipynb Cell 7 line 1
----> 1 await config.run("get_activities")

File ~/Projects/llms/aiconfig/python/src/aiconfig/Config.py:245, in AIConfigRuntime.run(self, prompt_name, params, options, **kwargs)
    242 model_name = self.get_model_name(prompt_data)
    243 model_provider = AIConfigRuntime.get_model_parser(model_name)
--> 245 response = await model_provider.run(prompt_data, self, options, params, callback_manager = self.callback_manager, **kwargs)
    247 event = CallbackEvent("on_run_complete", __name__, {"result": response})
    248 await self.callback_manager.run_callbacks(event)

File ~/Projects/llms/aiconfig/python/src/aiconfig/default_parsers/parameterized_model_parser.py:61, in ParameterizedModelParser.run(self, prompt, aiconfig, options, parameters, **kwargs)
     59     return await self.run_with_dependencies(prompt, aiconfig, options, parameters)
     60 else:
---> 61     return await self.run_inference(prompt, aiconfig, options, parameters)

File ~/Projects/llms/aiconfig/python/src/aiconfig/default_parsers/openai.py:277, in OpenAIInference.run_inference(self, prompt, aiconfig, options, parameters)
    274 response = response.model_dump(exclude_none=True) \
    275     if isinstance(response, BaseModel) else response
    276 # streaming only returns one chunk, one choice at a time (before 1.0.0). The order in which the choices are returned is not guaranteed.
--> 277 messages = multi_choice_message_reducer(messages, chunk)
    279 for i, choice in enumerate(chunk["choices"]):
    280     index = choice.get("index")

File ~/Projects/llms/aiconfig/python/src/aiconfig/default_parsers/openai.py:392, in multi_choice_message_reducer(messages, chunk)
    387     messages = {}
    389 # elif len(messages) != len(chunk["choices"]):
    390 #     raise ValueError("Invalid number of previous choices -- it should match the incoming number of choices")
--> 392 for choice in chunk["choices"]:
    393     index = choice["index"]
    394     previous_message = messages.get(index, {})

OpenAI Model Parser support for GPT-4 Turbo with vision (image-to-text)

OpenAI just announced API access to GPT-4 Turbo and GPT-4 Turbo with Vision. The AIConfig default model parsers should support these new models.

  • gpt-4-1106-preview (GPT-4 Turbo): the latest GPT-4 model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens. This preview model is not yet suited for production traffic. Context window: 128,000 tokens. Training data: up to Apr 2023.
  • gpt-4-vision-preview (GPT-4 Turbo with vision): can understand images, in addition to all other GPT-4 Turbo capabilities. Returns a maximum of 4,096 output tokens. This is a preview model version and not yet suited for production traffic. Context window: 128,000 tokens. Training data: up to Apr 2023.
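For reference, the vision preview model takes image parts inside chat messages, so the parser will need to handle prompt content that is a list of parts rather than a plain string. A minimal request sketch against the OpenAI Python client (openai>=1.0; URL and prompt text are placeholders):

import openai

client = openai.OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)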

Updating model for config (+ documentation)

I want to be able to easily update the model of a config that has many prompts, e.g. from GPT-4 to LLaMA. It would be nice to have a set_model function that does this for all prompts in my config.

I wish there were a clear example of how to update models in the set_metadata docs. I can imagine this being a common change to a config, so having specific docs on it would be nice.

I feel set_metadata is a little broad; important metadata like the model deserves its own function, though maybe there are design choices I'm missing context on.
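A hypothetical helper illustrating the request (not in the current SDK; attribute names mirror the JSON structure shown in these issues):

def set_model_for_all_prompts(config, model_name: str) -> None:
    """Point every prompt in the config at `model_name` (sketch only)."""
    for prompt in config.prompts:
        if prompt.metadata and prompt.metadata.model:
            prompt.metadata.model.name = model_name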

[UX] A few documentation items

  • clarify why the output is a list
  • what exactly does resolve do?
  • why is get_output_text necessary? Why isn't the text directly available in the run output?

Improve representation of the Execution history of a Config.

See #372 for a draft PR partially addressing this.

Previously, running the Chain of Verification demo only saved the most recent output of the verification prompt. The logs can be used to a degree for auditing, and maybe they're actually the right place to fully store execution history.

If we can easily access the execution history, we can more easily reconstruct the past state of the AIConfig and understand its actions.

The notebook analogy holds here: the current AIConfig, with only the most recent output, is like a saved notebook that may have been executed out of order. A full execution history that can be directly replayed allows much more robust debugging and can help build very nice mocks.


As an aside, in the Chain of Verification case it could be nice to have a pattern like map(VerificationPrompt, baseline_response.split('\n')). It could look like this, if we stick to a very literal handlebars syntax:

{
  "name": "final_response_gen",
  "input": "Cross-check the provided list of verification data with the original baseline response that is supposed to accurately answer the baseline prompt. \n\nBaseline prompt: {{baseline_prompt}} \nBaseline response: {{baseline_response_gen.output}}\nVerification data: {{#each verification_candidates}} verification(this).output {{/each}}",
  "metadata": {
    "model": {
      "name": "gpt-4",
      "settings": {
        "system_prompt": "For each entity from the baseline response, verify that the entity met the criteria asked for in the baseline prompt based on the verification data. \n\nOutput Format: \n\n### Revised Response \nThis is the revised response after running chain-of-verification. \n(Please output the revised response after the cross-check.)\n\n### Failed Entities \nThese are the entities that failed the cross-check and are no longer included in revised response. \n(List the entities that failed the cross-check with a concise reason why)"
      }
    },
    "parameters": {
      "verification_candidates": [<a list param>]
    },
    "remember_chat_context": false
  }
}

Batch inference

Similar to AI Workflows, we should enable local batch inference.

Let's discuss in this issue what the API should look like and what data format the results should be returned in.

Request from @2timesjay.
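As a starting point for discussion, one hypothetical shape (method name, prompt name, and return format are all placeholders):

# Run the same prompt over a batch of parameter sets, locally and concurrently.
params_batch = [{"city": "Paris"}, {"city": "Tokyo"}, {"city": "Lima"}]
results = await config.run_batch("itinerary", params_batch)  # hypothetical API
for params, output_text in zip(params_batch, results):
    print(params["city"], "->", output_text)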

Remove unused models from registry

Great point brought up by Ankush in #410 (comment)

We can have a case where a user deletes a prompt, or re-uploads the entire AIConfig, and a model is no longer being used. In that case we can unregister its model parser to save memory.
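A sketch of the pruning step (the unregister method name is hypothetical; _parsers and get_model_name appear in the tracebacks above):

def prune_unused_model_parsers(config) -> None:
    used_models = {config.get_model_name(prompt) for prompt in config.prompts}
    for model_id in list(ModelParserRegistry._parsers):        # registered model ids
        if model_id not in used_models:
            ModelParserRegistry.remove_model_parser(model_id)  # hypothetical API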

Make bootstrapping easier for adding custom model parser

From Sherry:

Today it's not very obvious or intuitive to n00bs that you need to register a model parser using:

from aiconfig.registry import ModelParserRegistry

ModelParserRegistry.register_model_parser(MyParser(myParserConstructorArgs))

Some suggestions:

  1. Have a generator script that creates skeleton template code and adds the model parser registration automatically in the Config.py file (e.g., ./createCustomModelParser MyClassName).
  2. In the UI editor (once that's built), add a button where you can upload a file containing your class; we read that class and automatically add it to the Config.py registry. This way it can be saved as a global setting, and every AIConfigRuntime instance will continue to have it as long as it's declared. We could also have a ~/.aiconfigEditorRc settings file (or similar) where you can define this, like .vimrc, so you don't have to keep re-uploading it every new local editor session.

There are also two flows:

  1. Editing + modifying + saving AIConfig settings (e.g., add_prompt(), save()) --> this currently fails because it relies on the model_parser.serialize() step, which causes friction when the model parser isn't registered yet.
  2. Running inference --> it's OK to flag this as an error if the model parser isn't registered.

Also, we should add a sub-page on how to add your own custom model parser here: https://aiconfig.lastmileai.dev/docs/overview/create-an-aiconfig/

Remove PromptWithOutputs Type

The definition is

export type PromptWithOutputs = Prompt & { outputs?: Output[] };

But the definition for Prompt already contains

  /**
   * Execution, display, or stream outputs.
   */
  outputs?: Output[];

So PromptWithOutputs is redundant; it can be removed and its usages replaced by Prompt.

[local editor] support specifying index in addPrompt

Currently, new prompts are added to the end of the prompts list in the config. We should update the addPrompt method to support an optional index at which the new prompt should be inserted. This will be useful for a few things:

  1. Local editor / UI representations should support adding prompts / cells in different locations in the editor, instead of just at the bottom
  2. For things like dependencies and conversation history, we currently rely on the ordering of the prompts that precede the current prompt in the list. If we want to add a new dependency or conversation message above the current prompt, we would currently need to remove and re-add all prompts up to that point. Instead, we should be able to just insert a new prompt at the desired index.
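A sketch of the Python-side analogue (signature and field names are illustrative):

def add_prompt(self, prompt_name: str, prompt_data: "Prompt", index: int = None) -> None:
    """Insert the prompt at `index`; append when index is None (current behavior)."""
    prompt_data.name = prompt_name
    if index is None:
        self.prompts.append(prompt_data)
    else:
        self.prompts.insert(index, prompt_data)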
