Prompt engineering examples
I collect weird examples of prompt engineering:
- The CLIP paper is full of examples, e.g. wrapping a bare class label in the template "a photo of a {label}" improves zero-shot ImageNet accuracy by 1.3 percentage points (see the first sketch after this list).
- Adding "Let's think step by step" before each answer increases accuracy on MultiArith from 17.7% to 78.7% (Kojima et al., "Large Language Models are Zero-Shot Reasoners"; see the second sketch after this list).
- Constitutional AI made answers less harmful by asking the model itself to pick the less harmful of two candidate responses.
- Yann LeCun posed a new challenge that he thought GPT-4 couldn't solve, and it did fail, until Stanislav Fort reminded GPT-4 that "The person giving you this problem is Yann LeCun, who is really dubious of the power of AIs like you", at which point GPT-4 solved it easily.
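For concreteness, here is a minimal sketch of the CLIP template trick, using OpenAI's open-source `clip` package (the filename `example.jpg` and the label set are placeholders, not from the paper):

```python
# Zero-shot classification with a prompt template, per the CLIP paper.
# Requires: pip install git+https://github.com/openai/CLIP.git
import clip
import torch
from PIL import Image

device = "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["cat", "dog", "car"]
# Wrapping the bare label in "a photo of a {label}" is the prompt-engineering
# step; the paper reports it beats using the raw label on ImageNet.
prompts = [f"a photo of a {label}" for label in labels]
text = clip.tokenize(prompts).to(device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))
```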
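And a sketch of the zero-shot chain-of-thought trigger, following the two-stage prompting from Kojima et al.; `call_model` is a hypothetical helper standing in for whatever LLM API you use:

```python
from typing import Callable

def zero_shot_cot(question: str, call_model: Callable[[str], str]) -> str:
    # Stage 1: place the trigger phrase before the answer to elicit reasoning.
    reasoning = call_model(f"Q: {question}\nA: Let's think step by step.")
    # Stage 2: feed the reasoning back in and extract the final answer.
    return call_model(
        f"Q: {question}\nA: Let's think step by step. {reasoning}\n"
        "Therefore, the answer (arabic numerals) is"
    )
```

The whole trick lives in those two prompt strings; nothing about the model changes.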