Generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning. Such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain-of-thought prompting, where a few chain-of-thought demonstrations are provided as exemplars in the prompt.
The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain-of-thought exemplars achieves state-of-the-art accuracy on the GSM8K benchmark of math word problems, surpassing even a finetuned GPT-3 with a verifier.
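To make the method concrete, here is a minimal sketch in Python of how a chain-of-thought prompt is assembled. The worked exemplar below (the tennis-balls problem) and the follow-up question (the cafeteria problem) appear in the paper; the cot_prompt helper and the surrounding scaffolding are illustrative assumptions, not the authors' code, and the resulting string would be sent to whichever large language model you are prompting.

```python
# Minimal sketch of chain-of-thought prompting: each exemplar pairs a
# question with the intermediate reasoning steps that lead to the answer,
# not just the final answer.

COT_EXEMPLARS = [
    (
        "Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
        "Each can has 3 tennis balls. How many tennis balls does he have now?",
        "Roger started with 5 balls. 2 cans of 3 tennis balls each is "
        "6 tennis balls. 5 + 6 = 11. The answer is 11.",
    ),
]

def cot_prompt(question: str) -> str:
    """Prepend the worked exemplars to a new question, leaving the answer
    open so the model continues with its own chain of reasoning."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in COT_EXEMPLARS)
    return f"{shots}\n\nQ: {question}\nA:"

if __name__ == "__main__":
    # The returned string is the full few-shot prompt sent to the model.
    print(cot_prompt(
        "The cafeteria had 23 apples. If they used 20 to make lunch "
        "and bought 6 more, how many apples do they have?"
    ))
```

Because the exemplar answer spells out its reasoning, a sufficiently large model tends to imitate that pattern and emit its own intermediate steps before the final answer, which is where the accuracy gains come from.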
References:
- Wei et al., "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models": https://arxiv.org/abs/2201.11903
- "Automating Data Analytics with ChatGPT" (Data Science at Microsoft, Medium): https://medium.com/data-science-at-microsoft/automating-data-analytics-with-chatgpt-827a51eaa2c