5 Reasons Why Adapters are the Future of Fine-tuning LLMs
Predibase

Published on Feb 22, 2024

Fine-tuning pretrained open-source LLMs is rapidly gaining popularity as an alternative to training LLMs from scratch - which is cost-prohibitive and requires massive amounts of data - or renting closed-source LLMs through commercial APIs. However, fine-tuning can itself be expensive in computational resources and time.

The good news: new research in efficient fine-tuning - namely adapter-based training - is making it possible for developers to fine-tune smaller, highly performant LLMs at a fraction of the cost.

So what is adapter-based training all about, when does it make sense, and how can developers successfully implement these new efficient fine-tuning techniques?
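In brief, adapter-based methods like LoRA freeze the pretrained weights and train only a small low-rank update alongside them. A minimal NumPy sketch of the idea (illustrative dimensions and variable names, not Predibase's or LoRAX's actual implementation): instead of updating a full `d_out x d_in` weight matrix `W`, you train two small factors `B` and `A` so the effective weight becomes `W + B @ A`.

```python
import numpy as np

# Hypothetical layer sizes; r is the adapter rank (r << d_in, d_out).
d_in, d_out, r = 768, 768, 8
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # zero-init so the adapter starts as a no-op

def forward(x):
    # Base layer output plus the low-rank adapter update: (W + B A) x
    return W @ x + B @ (A @ x)

full_params = W.size            # what a full fine-tune would update
adapter_params = A.size + B.size  # what adapter training updates
print(f"full fine-tune params: {full_params:,}")
print(f"adapter params:        {adapter_params:,} "
      f"({100 * adapter_params / full_params:.2f}% of full)")
```

Because only `A` and `B` are trained, the number of updated parameters drops by orders of magnitude, which is where the cost savings in compute, memory, and serving (many adapters sharing one base model) come from.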

Watch this on-demand session and demo with Daliana Liu, Host of ML Real Talk, and Geoffrey Angus, Engineering Leader at Predibase and co-maintainer of the popular open-source LLM projects Ludwig and LoRAX, for a deep dive into all things efficient fine-tuning and adapter-based training.

Topics covered in the discussion:
• Adapters and efficient fine-tuning explained
• The impact of adapters and how they are shaping the future of fine-tuning
• Top use cases and considerations when starting with fine-tuning
• Best practices for training with adapters and optimizing fine-tuning jobs
• Looking beyond adapters to the next set of innovations in productionizing LLMs

Ready to get started?
• Efficiently fine-tune and serve any open-source LLM on cost-effective serverless infra with our free trial: https://predibase.com/free-trial
• Download the webinar slides: https://pbase.ai/AdapterWebinarSlides

