Datta Nimmaturi.
I talk the model language so you can talk your language :)
Deep Learning Aficionado. Linux & Tech Enthusiast. Loves Math. Cricket & Chess Strategist
Rethink LoRA initialisations for faster convergence

What is LoRA

LoRA has been a tremendous tool in the world of fine-tuning, especially parameter-efficient fine-tuning. It is an easy way to fine-tune your models with very low memory requirements. LoRA was first introduced in this paper by Hu et …
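To make the idea concrete, here is a minimal sketch of a LoRA-style linear layer in NumPy, assuming the standard formulation from the Hu et al. paper: the frozen weight W is augmented with a low-rank update B·A scaled by alpha/r, where A is randomly initialised and B starts at zero so the adapted layer initially matches the base layer. The function names and shapes here are illustrative, not from any particular library.

```python
import numpy as np

def init_lora(d_in, d_out, r, alpha, seed=0):
    """Standard LoRA init: A is small random, B is zero,
    so the update B @ A contributes nothing at step 0."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((r, d_in)) * 0.01  # down-projection (trainable)
    B = np.zeros((d_out, r))                   # up-projection, zero-initialised (trainable)
    return A, B, alpha / r                     # scaling applied to the low-rank path

def lora_forward(x, W, A, B, scaling):
    """y = x W^T + scaling * (x A^T) B^T; only A and B are updated
    during fine-tuning, W stays frozen."""
    return x @ W.T + scaling * (x @ A.T) @ B.T

# At initialisation the adapted layer reproduces the frozen base layer exactly,
# because B = 0 zeroes out the low-rank branch.
rng = np.random.default_rng(1)
x = rng.standard_normal((2, 8))   # batch of 2, d_in = 8
W = rng.standard_normal((4, 8))   # frozen base weight, d_out = 4
A, B, scaling = init_lora(d_in=8, d_out=4, r=2, alpha=16)
base_out = x @ W.T
lora_out = lora_forward(x, W, A, B, scaling)
```

The memory savings come from training only A and B: for a d_out × d_in weight, the adapter holds r·(d_in + d_out) parameters instead of d_in·d_out, which is tiny when r is small.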