Prompt Engineering for LLMs
Chapter 1
Papers introduced
- Neural Machine Translation by Jointly Learning to Align and Translate
- Attention Is All You Need
- Improving Language Understanding by Generative Pre-Training
- Language Models Are Unsupervised Multitask Learners
- Language Models Are Few-Shot Learners
Chapter 2
References
Chapter 3: The Transition to Chat Format
References
- Training Language Models to Follow Instructions with Human Feedback
- A General Language Assistant as a Laboratory for Alignment