Senior Data Scientist @ NVIDIA
P-Tuning: A Parameter Efficient Tuning to Boost LLM Performance
As more LLMs become available, industries need techniques for solving real-world natural language tasks. Model prompting methods have been shown to elicit good zero- and few-shot performance from LLMs and to yield quality results on a variety of downstream natural language processing (NLP) tasks. However, prompting alone has its limits. In this talk, we demonstrate how to adapt p-tuning, a prompt-learning method, to low-resource language settings. We use an improved version of p-tuning implemented in NVIDIA NeMo that enables continuous multitask learning of virtual prompts. In particular, we focus on adapting our English p-tuning workflow to Swedish.