One of the key ingredients that made ChatGPT a runaway success was an army of human trainers who gave the artificial intelligence model behind the bot guidance on what constitutes good and bad outputs. OpenAI now says that adding even more AI into the mix—to assist those human trainers—could help make AI helpers smarter and more reliable.
In developing ChatGPT, OpenAI pioneered the use of reinforcement learning with human feedback, or RLHF. This technique uses input from human testers to fine-tune an AI model so that its output is judged to be more coherent, less objectionable, and more accurate. The ratings the trainers give are used to steer the model toward the kinds of responses people prefer.
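For readers curious about the mechanics, a common way those ratings get used is to train a "reward model" on pairs of responses, one preferred by a trainer and one rejected. The sketch below shows that preference-learning step in PyTorch; the class, function names, and toy embeddings are illustrative stand-ins, not OpenAI's actual code:

```python
import torch
import torch.nn as nn

# Toy reward model: maps a response embedding to a single scalar score.
# In real RLHF pipelines this is a large transformer, not a linear head.
class RewardModel(nn.Module):
    def __init__(self, embed_dim: int = 16):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry-style objective: push the trainer-preferred response's
    # score above the rejected one's.
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

# One toy training step on random stand-in embeddings. A real pipeline
# would use embeddings of actual response pairs ranked by human trainers.
model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

chosen = torch.randn(8, 16)    # responses the trainers preferred
rejected = torch.randn(8, 16)  # responses the trainers rejected

loss = preference_loss(model(chosen), model(rejected))
loss.backward()
optimizer.step()
```

Once trained, a reward model like this scores candidate outputs during reinforcement learning, standing in for the human raters at scale.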