
OpenPrompt Issues

Test performance mismatch between training and testing phases

I used the default settings in the experiments directory for classification and found that the test performance reported during training is inconsistent with the performance reported by a separate test run.

Here are the commands for reproducing the issue.

  • training:
    python cli.py --config_yaml classification_softprompt.yaml
  • test:
    python cli.py --config_yaml classification_softprompt.yaml --test --resume

I appended the following lines to the yaml file to load the trained model:

logging:
  path: logs/agnews_bert-base-cased_soft_manual_template_manual_verbalizer_211023110855

The training log shows the performance on the test set is:

trainer.evaluate Test Performance: micro-f1: 0.7927631578947368

However, the testing log says:

trainer.evaluate Test Performance: micro-f1: 0.8102631578947368

bug: running the demo raises an index out of bounds error

This demo is powered by OpenPrompt ...
Enter the text: >? '''
Albert Einstein was one of the greatest intellects of his time.
'''
Enter the Prompt Template: >? '''
<text_a> It is
<text_a> Albert Einstein is a
Albert Einstein was born in
'''
Select a Prompt Verbalizer:
1. Sentiment Verbalizer
2. Entity Verbalizer
3. Knowledge Probing
Enter a number between 1 to 3 : >? 1
Incorporating Template and Verbalizer into a PromptModel ...
Predicting ...
tokenizing: 1it [00:00, 198.96it/s]
Traceback (most recent call last):
File "c:\programdata\anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3437, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
runfile('C:/Users/franztao/Desktop/work/TableExtractionAndAnalysis/OpenPrompt/demo/demo.py', wdir='C:/Users/franztao/Desktop/work/TableExtractionAndAnalysis/OpenPrompt/demo')
File "C:\Program Files\JetBrains\PyCharm 2021.2.1\plugins\python\helpers\pydev_pydev_bundle\pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2021.2.1\plugins\python\helpers\pydev_pydev_imps_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/franztao/Desktop/work/TableExtractionAndAnalysis/OpenPrompt/demo/demo.py", line 192, in
logits = prompt_model(batch)
File "c:\programdata\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\openprompt-1.0-py3.8.egg\openprompt\pipeline_base.py", line 249, in forward
label_words_logits = self.verbalizer.process_logits(logits=logits, batch=batch)
File "C:\ProgramData\Anaconda3\lib\site-packages\openprompt-1.0-py3.8.egg\openprompt\prompts\manual_verbalizer.py", line 144, in process_logits
label_words_logits = self.project(logits, **kwargs) #Output: (batch_size, num_classes) or (batch_size, num_classes, num_label_words_per_label)
File "C:\ProgramData\Anaconda3\lib\site-packages\openprompt-1.0-py3.8.egg\openprompt\prompts\manual_verbalizer.py", line 119, in project
label_words_logits = logits[:, self.label_words_ids]
IndexError: index 157 is out of bounds for dimension 0 with size 3

How to save a model?

Hi, thanks for your excellent work.

I want to use PromptForClassification to handle a classification task, but I do not know how to save the model. I see state_dict in PromptForClassification; can I save the model in a way similar to torch.save?
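
For reference, a minimal sketch of what I have in mind, assuming prompt_model is a PromptForClassification built as in the tutorials and that it behaves like a regular torch.nn.Module:

import torch

# Assumption: prompt_model is a PromptForClassification constructed as in the
# tutorials. Since it is a torch.nn.Module, its state_dict can presumably be
# saved and restored with the usual PyTorch calls.
torch.save(prompt_model.state_dict(), "prompt_model.ckpt")

# Later, after rebuilding the same plm/template/verbalizer:
prompt_model.load_state_dict(torch.load("prompt_model.ckpt", map_location="cpu"))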

Could you add more descriptions about the experiments?

Thanks for sharing the experimental configurations, it would be better if you could provide more detailed descriptions. For example, the results corresponding to different configurations, the devices used in the experiment, and the training time.

line 158 of the demo

An error was found again.

In line 158 of the demo, the label_words dictionary should not be followed by a trailing comma; with the comma, label_words is not a dictionary but a one-element tuple.
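
A minimal illustration of the pitfall, with hypothetical label words rather than the demo's exact ones:

label_words = {
    "positive": ["good"],
    "negative": ["bad"],
},  # the trailing comma turns the assignment into a one-element tuple
print(type(label_words))  # <class 'tuple'>

label_words = {
    "positive": ["good"],
    "negative": ["bad"],
}   # without the comma it is a dict, as intended
print(type(label_words))  # <class 'dict'>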

Demo doesn't work with GPT2.

In demo/demo.py, if I change "roberta" in line 56 and "roberta-large" in line 57 to "gpt2", the following error occurs:

Traceback (most recent call last):
File "./demo/demo.py", line 179, in
data_loader = PromptDataLoader(
File "./openprompt/pipeline_base.py", line 87, in init
self.tokenize()
File "./openprompt/pipeline_base.py", line 130, in tokenize
inputfeatures = InputFeatures(**self.tokenizer_wrapper.tokenize_one_example(wrapped_example, self.teacher_forcing), **wrapped_example[1]).to_tensor()
File "./openprompt/data_utils/data_utils.py", line 177, in to_tensor
setattr(self, key, torch.tensor(value))
RuntimeError: Could not infer dtype of NoneType

Found a type hint bug

The type hint of the text argument of ManualTemplate.__init__ is Optional[List[str]]; shouldn't it now be changed to str?

Generation task with t5

It would be good if you provided examples for T5.

I tried my own task with T5 and got:

Traceback (most recent call last):
  File "cli.py", line 181, in <module>
    main()
  File "cli.py", line 170, in main
    runner.run()
  File "/home/pouramini/OpenPrompt/openprompt/trainer.py", line 76, in run
    total_loss = self.train_epoch(epoch)
  File "/home/pouramini/OpenPrompt/openprompt/trainer.py", line 507, in train_epoch
    loss = self.prompt_model(batch).mean()  #TODO:unbanlanced batch chunks
  File "/home/pouramini/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/pouramini/OpenPrompt/openprompt/pipeline_base.py", line 372, in forward
    return self._forward(*args, **kwargs)
  File "/home/pouramini/OpenPrompt/openprompt/pipeline_base.py", line 385, in _forward
    outputs = self.prompt_model(batch)
  File "/home/pouramini/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/pouramini/OpenPrompt/openprompt/pipeline_base.py", line 176, in forward
    outputs =  self.model(**input_batch)
  File "/home/pouramini/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/pouramini/transformers/src/transformers/models/t5/modeling_t5.py", line 1571, in forward
    encoder_outputs = self.encoder(
  File "/home/pouramini/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/pouramini/transformers/src/transformers/models/t5/modeling_t5.py", line 1003, in forward
    layer_outputs = layer_module(
  File "/home/pouramini/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/pouramini/transformers/src/transformers/models/t5/modeling_t5.py", line 639, in forward
    self_attention_outputs = self.layer[0](
  File "/home/pouramini/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/pouramini/transformers/src/transformers/models/t5/modeling_t5.py", line 546, in forward
    attention_output = self.SelfAttention(
  File "/home/pouramini/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/pouramini/transformers/src/transformers/models/t5/modeling_t5.py", line 503, in forward
    position_bias = position_bias + mask  # (batch_size, n_heads, seq_length, key_length)
RuntimeError: The size of tensor a (160) must match the size of tensor b (165) at non-singleton dimension 3

The task runs with gpt2, however.

Here is the configuration:

dataset:
  name: atomic
  path: /home/pouramini/atomic/

task: generation

train:
  num_epochs: 3
  batch_size: 16
  teacher_forcing: True
  gradient_accumulation_steps: 1
  max_grad_norm: 1.0

dataloader:
  max_seq_length: 160

environment:
  num_gpus: 1
  cuda_visible_devices:
generation: # Adding any arguments for generation here.
  parent_config: task
  max_length: 320

plm:
  model_name: t5
  model_path: /home/pouramini/pret/t5-base
  optimize:
    freeze_para: True

## LEARNING SETTING  ####################################################
learning_setting: full # selecting from "full", "zero_shot", "few_shot"

# few_shot:
#   parent_config: learning_setting
#   few_shot_sampling: sampling_from_train
reproduce:  # seed for reproduction 
  seed: 100

template: prefix_tuning_template
verbalizer:

prefix_tuning_template:
  parent_config: template
  text:
  mask_token: <mask>
  num_token: 5
  placeholder_mapping:
    <text_a>: text_a
    <text_b>: text_b
  prefix_dropout: 0.0
  mid_dim: 512
  optimize:
    name: AdamW
    lr: 0.00005
    betas:
      - 0.9
      - 0.999
    eps: 1.0E-8
    scheduler:
      num_warmup_steps: 0


Ways to deal with multi-token label words

Thank you for sharing this awesome project! I was wondering how you deal with the situation where manually designed label words are tokenized into different lengths, which makes it harder to compare the probabilities between labels. I would appreciate it if you could share your solution to this problem, thank you!
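
For context, a common workaround (not necessarily what OpenPrompt does internally) is to length-normalize: average the log-probabilities of a label word's sub-tokens so that words with different tokenized lengths become comparable. A minimal sketch:

import torch

def score_label_word(position_logits, subtoken_ids):
    # position_logits: (num_positions, vocab_size) raw logits predicted at the
    # positions reserved for the label word; subtoken_ids: the sub-token ids of
    # the (possibly multi-token) label word, one per position.
    log_probs = torch.log_softmax(position_logits, dim=-1)
    per_token = log_probs[torch.arange(len(subtoken_ids)), subtoken_ids]
    return per_token.mean()  # length-normalized score, comparable across labels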

No logic rules were found in ptr_prompt.py.

I have read PTR: Prompt Tuning with Rules for Text Classification and wanted to find the corresponding implementation in ptr_prompt.py. However, I find that PTRVerbalizer.process_logits() simply sums the logits of each label word. Should I add the logical rules myself to make my prompt program more aware of entity-type information?
Concretely, for the relation "per:date_of_death": ["person", "was", "died", "on", "date"], I would like more attention to be paid to "person" and "date".

from typing import OrderedDict ImportError: cannot import name 'OrderedDict'

I found a bug while using the project OpenPrompt.
When the Python version is lower than 3.7, using from typing import OrderedDict raises an error; it should be changed to from collections import OrderedDict.
When the Python version is 3.7 or higher, from typing import OrderedDict works normally.
I think different import statements should be used depending on the Python version.
Hope you can adopt this!
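
A version-tolerant form of the fix suggested above could be:

try:
    from typing import OrderedDict  # available on newer Python versions
except ImportError:
    from collections import OrderedDict  # fallback for older interpreters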

Error when running the example

I copied the example code, and after running it the error message ends with:

File "D:\Anaconda3\envs\pytorch\lib\site-packages\openprompt\openprompt\prompt_base.py", line 170, in parse_text
d["text"] = text[i:j].rstrip(' ')
AttributeError: 'list' object has no attribute 'rstrip'

My Python version is 3.7, not 3.9, but as far as I know a list indeed has no rstrip method, so I'm not sure where the problem is.

Failed to run the demo in `tutorial`

command:
python tutorial/1.1_mixed_template.py

output:

  File "tutorial/1.1_mixed_template.py", line 94, in <module>
    logits = prompt_model(inputs)
  File "/home/h/anaconda3/envs/openprompt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/h/work/OpenPrompt/openprompt/pipeline_base.py", line 241, in forward
    outputs = self.verbalizer.gather_outputs(outputs)
TypeError: gather_outputs() takes 1 positional argument but 2 were given

Model saved with multiple GPUs on a single machine raises an error when resumed with multiple GPUs on a single machine

state_dict = torch.load(self.checkpoint_path(ckpt), pickle_module = dill, map_location = "cpu")

File "experiments/cli.py", line 202, in
main()
File "experiments/cli.py", line 72, in main
test_dataset = test_dataset,
File "experiments/cli.py", line 195, in trainer
res = runner.run(ckpt = 'best')
File "./openprompt/trainer.py", line 366, in run
self.fit(ckpt)
File "./openprompt/trainer.py", line 346, in fit
continue_training = self.training_epoch(self.cur_epoch)
File "./openprompt/trainer.py", line 299, in training_epoch
loss = self.training_step(batch, batch_idx)
File "./openprompt/trainer.py", line 436, in training_step
logits = self.model(batch)
RuntimeError: CUDA out of memory. Tried to allocate 744.00 MiB (GPU 0; 15.78 GiB total capacity; 13.90 GiB already allocated; 273.44 MiB free; 14.18 GiB reserved in total by PyTorch)

Warning when initializing BertForMaskedLM in the tutorial: is this expected or a bug?

I downloaded bert-base-cased from https://huggingface.co/bert-base-cased/tree/main.
When I run 1.py in the tutorial folder, I get the following warning/error:

Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight']

  • This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

thanks!

Building a Chinese dataset with InputExample(): Chinese characters are read in as Unicode escapes

Code

dataset = {}
dataset["train"] = []
for index,data in train_dataset.iterrows():
    input_example = InputExample(text_a = data["text"], label=data["class"], guid=data["id"])
    dataset["train"].append(input_example)
print(dataset["train"][10])

output

{
  "guid": 13,
  "label": 2,
  "meta": {},
  "text_a": "\u522b\u6025\u3002\u8bf4\u4e0d\u51c6\u7684\u3002\u7b2c\u4e00\u6b21\u8fc7\u7684\u65f6\u5019\u4e5f\u5ba1\u6838\u4e86\u5341\u51e0\u5929\u3002\u4e0d\u8fc7\u6700\u540e\u5168\u989d\u5ea6\u901a\u8fc7\u3002\u5229\u606f\u9ad8\u5c31\u6ca1\u7528",
  "text_b": "",
  "tgt_text": null
}
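
A note in case it helps: this may only be a display issue, since json.dumps escapes non-ASCII characters by default while the underlying string stays intact. A quick check, using the dataset built above:

import json

example = dataset["train"][10]
print(example.text_a)  # prints the original Chinese text
print(json.dumps({"text_a": example.text_a}, ensure_ascii=False, indent=2))  # readable JSON output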

Will prefix tuning support T5 later?

I saw that T5's source code doesn't support adding past_key_values to the encoder in its code base. In

class PrefixTuningTemplate(Template):
r"""This template different from most template in that this emplate doesn't need to
wrap the input sentences with the template. The new tokens are always prepended to
the language model. A mapping is used to map the new_tokens embeddings in to the
past_key_value, and then input into the language model. The mask token of this
template is automatically the last token. Currently, our implementation of
prefix_tuning doens't support adding past_key_values to the encoder side of an
encoder_decoder architecture such as T5 without modifying the T5 source code.
(T5's source code doens't support adding past_key_values to the encoder in their code base. )
Does this mean that prefix tuning is not compatible with T5? Will T5 be supported later? Thanks.

Confused about the type hint of ManualVerbalizer.__init__

The type hint of ManualVerbalizer.__init__ includes:

label_words (:obj:`Union[Sequence[str], Mapping[str, str]]`, optional): The label words that are projected by the labels.

However, I found two cases where the type of label_words is neither Sequence[str] nor Mapping[str, str].

In Introduction with an Example,

from openprompt.prompts import ManualVerbalizer
promptVerbalizer = ManualVerbalizer(
    classes = classes,
    label_words = {
        "negative": ["bad"],
        "positive": ["good", "wonderful", "great"],
    },
    tokenizer = tokenizer,
)

In 0_basic.py,

myverbalizer = ManualVerbalizer(tokenizer, num_classes=2, 
                        label_words=[["yes"], ["no"], ["maybe"]])

bug:TypeError: _forward_unimplemented() got an unexpected keyword argument 'output_hidden_states'

File "/workspace/knowledgegraphcommon/business/text_classification/prompt/text_classification_prompt.py", line 185, in train
logits = self.prompt_model(inputs)
File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/workspace/OpenPrompt/openprompt/pipeline_base.py", line 263, in forward
outputs = self.prompt_model(batch)
File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/workspace/OpenPrompt/openprompt/pipeline_base.py", line 185, in forward
outputs = self.plm(**input_batch, output_hidden_states=True)
File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
TypeError: _forward_unimplemented() got an unexpected keyword argument 'output_hidden_states'

how to modify the batch size

First of all, thank you for your clean and beautiful code.
I have a small question about the code.
In the prefix-tuning template (tutorial 2.2), it seems that the batch size is fixed to 1, right?
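
In case it helps, the batch size is normally set on the PromptDataLoader; a sketch along these lines (object names follow the tutorial and are assumed to be constructed as there) should allow a larger batch:

from openprompt import PromptDataLoader

train_dataloader = PromptDataLoader(
    dataset=dataset["train"],
    template=mytemplate,
    tokenizer=tokenizer,
    tokenizer_wrapper_class=WrapperClass,
    batch_size=8,  # instead of 1
    shuffle=True,
)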

Recommended mix template for text classification task

Line 29 of 1.1_mixed_template.py gives an example usage of MixedTemplate, apparently for a question-answering task. Is there any recommended mixed template for a text classification task?
For example, is the following sentiment classification code right?

prompt_verbalizer = ManualVerbalizer(
    classes=classes,
    label_words={
        "negative": ["不"],
        "positive": ["很"],
    },
    tokenizer=tokenizer,
)

prompt_template = MixedTemplate(
    model=plm,
    tokenizer=tokenizer,
    text='{"soft"} {"soft"} {"soft"} {"placeholder":"text_a"} {"mask"}满意。'
)

Question about updating the repository

How should I keep this repository up to date? Do I need to re-clone the entire library and install it again every time, as follows:

git clone https://github.com/thunlp/OpenPrompt.git
cd OpenPrompt
pip install -r requirements.txt
python setup.py install
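
For what it's worth, one common workflow (not specific to OpenPrompt) is to keep the existing clone and pull updates instead of re-cloning, for example:

cd OpenPrompt
git pull
pip install -e .  # editable install, so pulled changes take effect without reinstalling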

bug: AttributeError: 'T5ForConditionalGeneration' object has no attribute 'roberta'

Token indices sequence length is longer than the specified maximum sequence length for this model (519 > 512). Running this sequence through the model will result in indexing errors
tokenizing: 250it [00:00, 374.45it/s]
Parameter containing:
tensor([[[4273]],

    [[ 150]],

    [[2087]]])

tensor([[-1.4704, -0.4191, -2.1847],
[-1.4162, -0.7810, -1.2059]])
Traceback (most recent call last):
File "/workspace/knowledgegraphcommon/business/text_classification/prompt/tutorial/0_basic.py", line 136, in
logits = prompt_model(inputs)
File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/workspace/OpenPrompt/openprompt/pipeline_base.py", line 266, in forward
outputs = self.prompt_model(batch)
File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/workspace/OpenPrompt/openprompt/pipeline_base.py", line 186, in forward
input_batch['inputs_embeds'] = self.plm.roberta.embeddings.word_embeddings(input_batch['input_ids'])
File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in getattr
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'T5ForConditionalGeneration' object has no attribute 'roberta'
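
As a possible workaround (a hypothetical patch, not the project's own fix), the embedding lookup could go through the model-agnostic HuggingFace API instead of the hard-coded roberta attribute:

def embed_input_ids(plm, input_ids):
    # get_input_embeddings() is part of the generic PreTrainedModel interface,
    # so it works for T5ForConditionalGeneration as well as BERT/RoBERTa models,
    # unlike the hard-coded plm.roberta.embeddings.word_embeddings access above.
    return plm.get_input_embeddings()(input_ids)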

run boolq_bert_large_cased_softprompt.yaml error

Traceback (most recent call last):
File "experiments/cli.py", line 199, in
main()
File "experiments/cli.py", line 69, in main
test_dataset = test_dataset,
File "experiments/cli.py", line 194, in trainer
res = runner.run()
File "./openprompt/trainer.py", line 358, in run
self.fit(ckpt)
File "./openprompt/trainer.py", line 338, in fit
continue_training = self.training_epoch(self.cur_epoch)
File "./openprompt/trainer.py", line 293, in training_epoch
loss = self.training_step(batch, batch_idx)
File "./openprompt/trainer.py", line 428, in training_step
logits = self.model(batch)
File "/mnt/yanghao/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "./openprompt/pipeline_base.py", line 245, in forward
outputs = self.verbalizer.gather_outputs(outputs)
TypeError: gather_outputs() takes 1 positional argument but 2 were given

ImportError: cannot import name 'InputExample'

After installing OpenPrompt according to the installation tutorial, running demo.py raises the error ImportError: cannot import name 'InputExample'.

I hope you can reply as soon as possible. Thank you!

Missing config_default.yaml

FileNotFoundError: [Errno 2] No such file or directory: '~/anaconda3/envs/pytorch1.8/lib/python3.8/site-packages/openprompt-1.0-py3.8.egg/openprompt/config_default.yaml'

Call for Correct and Systematic Documentation

from openprompt import PromptForClassification
promptModel = PromptForClassification(
    template = promptTemplate,
    model = bertModel,
    verbalizer = promptVerbalizer,
)

Hi, I tried to follow the readme and got the following error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-11-6ad90754910f> in <module>
      1 from openprompt import PromptForClassification
----> 2 promptModel = PromptForClassification(
      3     template = promptTemplate,
      4     model = bertModel,
      5     verbalizer = promptVerbalizer,

TypeError: __init__() got an unexpected keyword argument 'model'

After searching through your code, I changed the argument model to plm.

However, to avoid similar mistakes in the future, I would like to know whether you have correct and systematic documentation that could help me.
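
For reference, the call that works after that rename looks roughly like this (a sketch of the fix described above, using the same objects as in the readme):

from openprompt import PromptForClassification

promptModel = PromptForClassification(
    template = promptTemplate,
    plm = bertModel,  # `plm` instead of `model`
    verbalizer = promptVerbalizer,
)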

A small bug in openprompt.prompt_base.py

While following the readme up to Step 3: Define a Template, I found a bug: the example defines text = ["<text_a>", "It", "was", "<mask>"], but at line 168 of openprompt.prompt_base.py, text[i:j] is at that point a list, which has no rstrip(' ') method.
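
In case it helps others hitting this, my guess is that recent versions expect text to be a single string in the template mark-up rather than a list of tokens, e.g.:

from openprompt.prompts import ManualTemplate

promptTemplate = ManualTemplate(
    text = '{"placeholder":"text_a"} It was {"mask"}',  # a single string, not a list
    tokenizer = bertTokenizer,
)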

two <mask> raise IndexError

When I put two <mask> tokens in the ManualTemplate:

promptTemplate = ManualTemplate(
    text = ["<text_a>", "was", "<mask>", 'and', '<mask>'],
    tokenizer = bertTokenizer,
)

it will raise:
"openprompt/prompts/manual_verbalizer.py", line 119, in project
label_words_logits = logits[:, self.label_words_ids]
IndexError: index 2204 is out of bounds for dimension 0 with size 2

The whole code was copied from the link.

AttributeError occurs when initializing a PtuningTemplate

the code is:
prompt_template = PtuningTemplate(text=['<text_a>', '<new>', '<new>', '<mask>', '.'], model=bertModel, tokenizer=bertTokenizer)

The reported error is:
"OpenPrompt/openprompt/prompts/ptuning_prompts.py", line 63, in on_text_set self.num_new_token = sum([token == self.new_token for token in self.text])
AttributeError: 'PtuningTemplate' object has no attribute 'new_token'

Needing Tutorial for Generation Task

Hi, thanks for your excellent work.

I'm trying to apply OpenPrompt to a generation task, but I have no idea how.

If possible, could you provide a tutorial for the generation task, like the classification tutorial in the readme?

Thanks anyway!

Please consider adding a TODO or not-implemented list on the home page

Some features such as conditional generation are missing or not easy to use, so I have to manually edit the source code, and the docs are under construction. Maybe adding a beta tag or listing the not-yet-implemented features would be better for users.
Many thanks for sharing the code for prompt learning.

Possible omission of {"soft"} in mixed template

Really appreciate the work!

It seems the mixed_template.py is ignoring the {"soft"} string.

I find that the following two mixed templates produce the same 'input_ids':

Template1 text
'{"placeholder":"text_a"} {"soft":"The"} {"soft"} {"soft":"the"} {"placeholder":"text_b"} {"soft"} {"mask"}.'

Template2 text
'{"placeholder":"text_a"} {"soft":"The"} {"soft":"the"} {"placeholder":"text_b"}{"mask"}.'

The input ids for both of the above templates are:

{"input_ids": [[27, 183, 13802, 0, 0, 27, 174, 12, 3, 1544, 32099, 3, 5, 1, 0, 0 ...}

Note:

text_a = "I am hungry"
text_b = "I need to eat"

PS: It might also be omitting its id from 'soft_token_ids'.

Thanks!

How to use BART with OpenPrompt?

I'd like to use the prefix-tuning template with a BART model (as the original paper does), but found that this template only supports T5 and GPT-2. How can I use BART in this framework? I also found that the original paper modifies BART's attention module; if I want to use BART here, do I need to modify it as well?

TypeError in demo

I wanted to test the demo.
I entered "test" as the text and "I like it because" as the template, using my local copy of roberta-base.

I got the following error:

Traceback (most recent call last):
  File "demo.py", line 197, in <module>
    print(f"{color('Predicition')}: ", bertTokenizer.convert_tokens_to_string(bertTokenizer.convert_ids_to_tokens(pred)[0]))
  File "/home/pouramini/miniconda3/envs/op/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 718, in convert_ids_to_tokens
    index = int(index)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
