Topic: peft
A list of GitHub repositories related to parameter-efficient fine-tuning (PEFT).
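Many of the repositories below build on LoRA (low-rank adaptation), the core PEFT technique: instead of updating a full d x k weight matrix W, train two small factors B (d x r) and A (r x k) and merge W' = W + (alpha / r) * B @ A at inference time. A minimal, framework-free sketch of that idea (toy code, not taken from any listed repository):

```python
# Toy LoRA sketch in pure Python: frozen base weight plus a trainable
# low-rank update. Matrix shapes: W is d x k, B is d x r, A is r x k.

def matmul(X, Y):
    """Naive matrix multiply for lists of lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha, r):
    """Merge the low-rank update into the frozen base weight."""
    delta = matmul(B, A)          # d x k update with rank <= r
    scale = alpha / r             # LoRA scaling factor
    return [[w + scale * d_ for w, d_ in zip(wrow, drow)]
            for wrow, drow in zip(W, delta)]

d, k, r = 4, 4, 1
W = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(d)]  # frozen
A = [[0.1] * k]                   # r x k, trainable
B = [[1.0] for _ in range(d)]     # d x r, trainable
W_merged = lora_effective_weight(W, A, B, alpha=2.0, r=r)

full_update_params = d * k        # what a full fine-tune would train
lora_params = r * (d + k)         # what LoRA trains instead
```

The parameter saving is the whole point: here LoRA trains r * (d + k) = 8 values instead of d * k = 16, and the gap widens rapidly as d and k grow.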
peft,CompanionLLM - A framework to finetune LLMs to be your own sentient conversational companion
User: adithya-s-k
peft,LLM Finetuning with peft
User: ashishpatel26
peft,Clipora is a powerful toolkit for fine-tuning OpenCLIP models using Low Rank Adapters (LoRA).
User: awilliamson10
peft,PyTorch Reimplementation of LoRA
User: baijiong-lin
peft,This is the official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" in ICML 2024.
Organization: borealisai
Home Page: https://arxiv.org/abs/2402.03293
peft,Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?"
Organization: brown-palm
Home Page: https://brown-palm.github.io/AntGPT/
peft,Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning/ Fine-Tuning
User: calpt
Home Page: https://calpt.github.io/awesome-adapter-resources/
peft,This hands-on lab walks you through a step-by-step approach to efficiently serving and fine-tuning large-scale Korean models on AWS infrastructure.
User: daekeun-ml
peft,Genie in the Box: Distill Whisper STT => Mistral-7B => Phind/Phind-CodeLlama-34B-v2 => GPT 3.5 => Coqui's TTS/OpenAI TTS
User: deepily
Home Page: https://www.deepily.ai
peft,IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation with Decoupled PEFT
Organization: gair-lab
peft,GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
User: garyfanhku
peft,This is AlpaGasus2-QLoRA based on LLaMA2 with AlpaGasus mechanism using QLoRA!
User: gauss5930
peft,Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
User: guitaricet
Home Page: https://arxiv.org/abs/2307.05695
peft,Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT
User: hiyouga
peft,Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
User: hiyouga
Home Page: https://arxiv.org/abs/2403.13372
peft,Finetuning of Falcon-7B LLM using QLoRA on Mental Health Conversational Dataset
User: iamarunbrahma
peft,An efficient, flexible and full-featured toolkit for fine-tuning LLM (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
Organization: internlm
Home Page: https://xtuner.readthedocs.io/zh-cn/latest/
peft,A full pipeline to finetune Alpaca LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the Alpaca architecture. Basically ChatGPT but with Alpaca
User: jackaduma
peft,A full pipeline to finetune ChatGLM LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the ChatGLM architecture. Basically ChatGPT but with ChatGLM
User: jackaduma
peft,A full pipeline to finetune Vicuna LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the Vicuna architecture. Basically ChatGPT but with Vicuna
User: jackaduma
peft,Preprint: Asymmetry in Low-Rank Adapters of Foundation Models
User: jiacheng-zhu-aiml
peft,The open-source implementation of ChatGPT, Alpaca, Vicuna and the RLHF pipeline. Implement a ChatGPT from scratch.
User: jianzhnie
Home Page: https://jianzhnie.github.io/llmtech
peft,LLM Tuning with PEFT (SFT+RM+PPO+DPO with LoRA)
User: joyce94
peft,Finetune mistral-7b-instruct for sentence embeddings
User: kamalkraj
peft,This repository records reading notes on top-conference papers for LLM algorithm engineers (multimodal, PEFT, few-shot QA, RAG, LMM interpretability, Agents, CoT)
User: km1994
Home Page: https://github.com/km1994/llms_paper
peft,[SIGIR'24] The official implementation code of MOELoRA.
User: liuqidong07
Home Page: https://arxiv.org/abs/2310.18339
peft,Simple UI for LLM Model Finetuning
User: lxe
peft,MindSpore online courses: Step into LLM
Organization: mindspore-courses
peft,Use PEFT or Full-parameter to finetune 300+ LLMs or 60+ MLLMs. (Qwen2, GLM4v, Internlm2.5, Yi, Llama3.1, Llava-Video, Internvl2, MiniCPM-V-2.6, Deepseek, Baichuan2, Gemma2, Phi3-Vision, ...)
Organization: modelscope
Home Page: https://swift.readthedocs.io/zh-cn/latest/LLM/index.html
peft,Fine-tune Mistral 7B to generate fashion style suggestions
Organization: neuralwork
peft,Fine-Tuning Falcon-7B, LLAMA 2 with QLoRA to create an advanced AI model with a profound understanding of the Indian legal context.
User: nisaaragharia
Home Page: https://huggingface.co/nisaar
peft,This repository contains the code to train flan t5 with alpaca instructions and low rank adaptation.
User: reason-wang
peft,Various training, inference and validation code and results related to Open LLM's that were pretrained (full or partially) on the Dutch language.
User: robinsmits
peft,[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
User: roim1998
peft,Curated list of open source and openly accessible large language models
User: sanjibnarzary
Home Page: https://github.com/sanjibnarzary/awesome-llm
peft,HEFT, randomHEFT and IPEFT algorithms for static list DAG Scheduling
User: sharma-n
peft,🚀 Easy, open-source LLM finetuning with one-line commands, seamless cloud integration, and popular optimization frameworks. ✨
User: simplifine-llm
Home Page: https://www.simplifine.com
peft,Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6
Organization: stochasticai
Home Page: https://xturing.stochastic.ai
peft,Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis"
Organization: ucdvision
Home Page: https://ucdvision.github.io/NOLA/
peft,(TCSVT 2024) Unsupervised Domain Adaption Harnessing Vision-Language Pre-training
User: wenlve-zhou
peft,Speech, Language, Audio, Music Processing with Large Language Model
Organization: x-lance
peft,UI tool for fine-tuning and testing your own LoRA models base on LLaMA, GPT-J and more. One-click run on Google Colab. + A Gradio ChatGPT-like Chat UI to demonstrate your language models.
User: zetavg
peft,[ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"
User: zhengxiangshi
Home Page: http://arxiv.org/abs/2309.05173
peft,[ICCV 2023 oral] This is the official repository for our paper: ''Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning''.
Organization: ziplab
peft,Use QLoRA to tune LLM in PyTorch-Lightning w/ Huggingface + MLflow
User: zjohn77
peft,A hands-on Hugging Face Transformers tutorial; course videos are updated in sync on Bilibili and YouTube
User: zyds
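Several entries above (AlpaGasus2-QLoRA, the Falcon-7B mental-health finetune, the PyTorch-Lightning QLoRA tuner) use QLoRA, which keeps the frozen base weights in 4-bit precision while training LoRA adapters in higher precision. A toy absmax quantize/dequantize round trip, purely illustrative (real QLoRA uses the NF4 scheme from bitsandbytes, not this plain absmax mapping):

```python
# Hypothetical absmax 4-bit quantization sketch: scale a block of weights
# by its absolute maximum, round to signed 4-bit integers in [-7, 7],
# then dequantize back to floats.

def quantize_absmax4(xs):
    """Map floats to integers in [-7, 7] plus a per-block scale."""
    absmax = max(abs(x) for x in xs) or 1.0
    return [round(x / absmax * 7) for x in xs], absmax

def dequantize_absmax4(qs, absmax):
    """Approximate reconstruction of the original floats."""
    return [q / 7 * absmax for q in qs]

weights = [0.42, -0.91, 0.07, 0.30]
qs, scale = quantize_absmax4(weights)
restored = dequantize_absmax4(qs, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

The reconstruction error is bounded by half a quantization step, i.e. absmax / 14 per weight; QLoRA tolerates this because the base weights stay frozen and the LoRA adapters absorb the residual during training.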