matlok's Collections
Fine-Tuning
Metadata Might Make Language Models Better
Paper • arXiv:2211.10086 • 4 upvotes
Empirical Analysis of the Strengths and Weaknesses of PEFT Techniques for LLMs
Paper • arXiv:2304.14999 • 2 upvotes
PEFT for Speech: Unveiling Optimal Placement, Merging Strategies, and Ensemble Techniques
Paper • arXiv:2401.02122 • 2 upvotes
Zephyr: Direct Distillation of LM Alignment
Paper • arXiv:2310.16944 • 122 upvotes
NEFTune: Noisy Embeddings Improve Instruction Finetuning
Paper • arXiv:2310.05914 • 14 upvotes
Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models
Paper • arXiv:2401.01335 • 64 upvotes
LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning
Paper • arXiv:2308.11148 • 2 upvotes
An Unsupervised Method for Estimating Class Separability of Datasets with Application to LLMs Fine-Tuning
Paper • arXiv:2305.15016 • 5 upvotes
DoRA: Weight-Decomposed Low-Rank Adaptation
Paper • arXiv:2402.09353 • 26 upvotes
M2-CLIP: A Multimodal, Multi-task Adapting Framework for Video Action Recognition
Paper • arXiv:2401.11649 • 3 upvotes
When Scaling Meets LLM Finetuning: The Effect of Data, Model and Finetuning Method
Paper • arXiv:2402.17193 • 23 upvotes
DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Model
Paper • arXiv:2402.17412 • 21 upvotes
Teaching Large Language Models to Reason with Reinforcement Learning
Paper • arXiv:2403.04642 • 46 upvotes
Yi: Open Foundation Models by 01.AI
Paper • arXiv:2403.04652 • 62 upvotes
Scaling Laws of RoPE-based Extrapolation
Paper • arXiv:2310.05209 • 6 upvotes
Table-GPT: Table-tuned GPT for Diverse Table Tasks
Paper • arXiv:2310.09263 • 39 upvotes
LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models
Paper • arXiv:2403.13372 • 62 upvotes
The Unreasonable Ineffectiveness of the Deeper Layers
Paper • arXiv:2403.17887 • 78 upvotes