It provides a glimpse into how an AI model (OpenAI's GPT-4) was enhanced through an application layer to create a more "magical" and more "Socratic" version of ChatGPT. The key engineering and design principle was to "allow the AI to think before it speaks," so that it could serve as a tutor to students.
"But perhaps the most intellectually interesting one is we realized, and this was an idea from an OpenAI researcher, that we could dramatically improve its ability in math and its ability in tutoring if we allow the AI to think before it speaks. So if you're tutoring someone and you immediately just start talking before you assess their math, you might not get it right. But if you construct thoughts for yourself, and what you see on the right there is an actual AI thought, something that it generates for itself but it does not share with the student, then its accuracy went up dramatically, and its ability to be a world-class tutor went up dramatically. And you can see it's talking to itself here. It says, 'The student got a different answer than I did, but do not tell them they made a mistake. Instead, ask them to explain how they got to that step.'"
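The pattern described in the quote can be sketched in code. What follows is a minimal, hypothetical illustration, not OpenAI's actual implementation: the model is prompted twice, first to produce a private "thought" that the student never sees, then to produce the visible reply conditioned on that thought. The function `call_model` is a placeholder standing in for a real LLM API call.

```python
# A minimal sketch of the "think before it speaks" pattern described above.
# This is an illustrative assumption about the structure, not the actual
# application code behind the tutor.

def call_model(prompt: str) -> str:
    # Placeholder for a real LLM call. Here it returns canned text so the
    # sketch is self-contained and runnable.
    if "private tutoring thought" in prompt:
        return ("The student got a different answer than I did, "
                "but do not tell them they made a mistake. "
                "Instead, ask them to explain how they got to that step.")
    return "Can you walk me through how you got that step?"

def tutor_reply(student_message: str) -> dict:
    # Pass 1: generate a hidden thought -- the model "thinks" first,
    # checking the student's math for itself.
    thought = call_model(
        "Write a private tutoring thought about this student message, "
        "working the math yourself first:\n" + student_message
    )
    # Pass 2: generate the visible, Socratic reply, conditioned on the
    # hidden thought. Only `reply` is ever shown to the student.
    reply = call_model(
        "Hidden thought: " + thought +
        "\nNow write a Socratic reply to the student:\n" + student_message
    )
    return {"thought": thought, "reply": reply}

result = tutor_reply("2x + 3 = 7, so x = 5")
print(result["reply"])
```

The design point is the separation of the two passes: the first pass gives the model room to verify the arithmetic before committing to a response, which is precisely the "think before it speaks" idea in the quote.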
Application layers that supply context and domain-specific knowledge will no doubt improve large language models (LLMs) and help them become more widespread.
As educators, we must remain watchful and critical of these technologies while simultaneously evolving our understanding of learning and teaching experiences in the Intelligence Age (Natriello & Chae, 2022).
Natriello, G., & Chae, H. S. (2022). The Paradox of Learning in the Intelligence Age: Creating a New Learning Ecosystem to Meet the Challenge. In Bridging Human Intelligence and Artificial Intelligence (pp. 287-300). Cham: Springer International Publishing.