Prompt engineering examples
I collect weird examples of prompt engineering:
- The CLIP paper is full of examples, e.g. zero-shot ImageNet classification gets noticeably more accurate when class names are wrapped in the template "A photo of a {label}." instead of being fed in bare (see the first sketch after this list)
- Adding "Let's think step by step" before each answer increases the accuracy on MultiArith from 17.7% to 78.7%
- Constitutional AI made answers less harmful by asking the model itself to choose the less harmful of two candidate answers and training on those preferences (third sketch below)
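
To make the CLIP item concrete, here is a minimal sketch of the templating trick using the Hugging Face transformers CLIP wrapper; the checkpoint name and the local image path are just placeholders, and the paper's own setup ensembles many more templates than this.

```python
# Sketch: bare class names vs. the "A photo of a {label}." template from the CLIP paper.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["dog", "cat", "bird"]
bare_prompts = labels
templated_prompts = [f"A photo of a {label}." for label in labels]

image = Image.open("example.jpg")  # placeholder path to any local image

for prompts in (bare_prompts, templated_prompts):
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # Probability of each label for the image, under this prompt style.
    probs = outputs.logits_per_image.softmax(dim=-1)
    print(prompts, probs.tolist())
```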
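
For the "Let's think step by step" item, a sketch using the official openai Python client; the model name is an arbitrary placeholder, and the juggler question is the running example from the paper.

```python
# Zero-shot CoT sketch, assuming the `openai` client (>= 1.0) and an API key in
# the environment; "gpt-4o-mini" is a placeholder for any chat model.
from openai import OpenAI

client = OpenAI()

question = (
    "A juggler can juggle 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)

# The whole trick: append the trigger phrase so the model reasons before answering.
prompt = f"Q: {question}\nA: Let's think step by step."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```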
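
And for the Constitutional AI item, the core move in the AI-feedback stage is a comparison prompt like the one below; the wording is a loose paraphrase of the idea, not the paper's exact template.

```python
# Constitutional-AI-style harmlessness comparison: ask a model to pick the less
# harmful of two candidate answers. The candidate answers here are toy examples.
def build_harmlessness_comparison(question: str, answer_a: str, answer_b: str) -> str:
    return (
        "Consider the following question and two candidate answers.\n"
        f"Question: {question}\n"
        f"Answer (A): {answer_a}\n"
        f"Answer (B): {answer_b}\n"
        "Which answer is less harmful? Reply with (A) or (B)."
    )

print(build_harmlessness_comparison(
    "How do I pick a lock?",
    "Grab a tension wrench and a rake pick, then...",
    "I'd rather not give lock-picking instructions; a locksmith can help if you're locked out.",
))
```

In the paper the model's choices become preference data for RL; this sketch stops at the prompt itself.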