Seungone Kim's Projects
The official tool for creating proceedings for conferences of the Association for Computational Linguistics (ACL).
This repository is where I store my code for competitive programming and implementations of data structures. Problems and content are sourced from the ICPC, Baekjoon Online Judge, Codeforces, and lectures from Yonsei University (CSI2103-01, CSI3108-01). All work dates from June 2019 onward and has been committed to Git since January 22, 2021.
An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
TensorFlow code and pre-trained models for BERT
Devlin, Jacob, et al. "BERT: Pre-training of deep bidirectional transformers for language understanding." arXiv preprint arXiv:1810.04805 (2018). (Implemented from scratch in PyTorch.)
Implementation of the "Bi-Directional Attention Flow for Machine Comprehension" paper (ICLR 2017)
Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models
A project for synthesizing a celebrity's hair onto a target face. We train a Pix2Pix model together with a pretrained MobileNet.
Code base for Contrastive Explanation Verifier - based on Dense Passage Retriever codebase
[EACL 2023] CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification
This repository contains assignments from the Coursera Deep Learning Specialization (Professor Andrew Ng)
This repository contains assignments from the Coursera GAN Specialization (Professor Sharon Zhou)
This repository contains the five assignments from the CS224n 2019 winter class.
A repository where I store materials I summarized while studying the DL, ML, NLP, CV, RL, GAN, and RS fields.
Contains high-quality implementations of deep reinforcement learning algorithms written in PyTorch
Code for our CCL 2021 paper: Incorporating Commonsense Knowledge into Abstractive Dialogue Summarization via Heterogeneous Graph Networks
Fine-tune mistral-7b-instruct for sentence embeddings
Exploring the Benefits of Training Expert Language Models over Instruction Tuning
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Official codebase for "FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets"
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
A template file for a good-looking README.md. Add this template to your repository to introduce it to others effectively.
The goal of this project is to enable users to create cool web demos using the newly released OpenAI GPT-3 API with just a few lines of Python.
Official code for the paper "Not All Tasks Are Born Equal: Understanding Zero-Shot Generalization"
A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs.
Pretrained ELECTRA Model for Korean