Code.org’s AI chat lab is an innovative, AI-powered chatbot that enhances the learning experience in the new Exploring Generative AI course. Students learn how to use and build with this chatbot in the course’s second unit, Customizing Language Models.
AI chat lab was designed with a focus on privacy, content appropriateness, and responsible use, ensuring that students interact with AI in a controlled, educational environment. This approach not only helps protect students from inappropriate content but also encourages responsible communication practices: interactions are monitored to maintain a safe and productive learning space.
Below is a list of Frequently Asked Questions we’ve received:
Who can use AI chat lab?
Teacher accounts can view the tool but cannot interact with the chatbot unless the teacher is verified or has logged in via Google, Microsoft, Facebook, or an LMS. Student accounts can view the tool but cannot interact with the chatbot unless the student is in a teacher-led section that meets the above requirements. Learn how to become a verified teacher here and how to add students to a section here.
If you or your student(s) do not meet the above criteria, you will receive an error message when trying to chat with the chatbot.
How does AI chat lab handle data privacy and security?
Code.org does not send any student input or data to third parties; Code.org retains ownership of all of the data. To further protect privacy, all chat data is automatically deleted after 90 days, so student interactions are both secure and temporary.
What safeguards are built into AI chat lab?
Code.org has made significant efforts to ensure that students do not encounter anything in the tool that violates our policies or is inappropriate for the classroom. Customizations and chat messages from students are checked against our content moderation policy, and any violating messages are flagged and removed as soon as they are detected. Similarly, messages from the chatbot are checked against the same policy and flagged before they are shown to students.
However, because the tool uses generative AI, there is no way to guarantee that the output will never be disruptive. We are continually improving the tool, and we encourage you to help us make it even safer by sending feedback to artificialintelligence@code.org.
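The two-sided flow described above — checking student messages before they reach the chatbot, and checking chatbot replies before they reach students — can be sketched as follows. This is an illustrative toy only: Code.org's real moderation uses a model-based policy, while the keyword blocklist and function names here are placeholders.

```python
# Toy sketch of the moderation flow described in this FAQ.
# The blocklist and helper names are illustrative placeholders,
# not Code.org's actual moderation implementation.

BLOCKLIST = {"badword"}  # hypothetical placeholder terms


def is_flagged(message: str) -> bool:
    """Return True if the message violates the (toy) moderation policy."""
    return any(term in message.lower() for term in BLOCKLIST)


def handle_student_message(message: str):
    # Student messages are checked first; flagged ones are removed
    # before they ever reach the chatbot.
    if is_flagged(message):
        return None  # flagged and removed
    return message


def handle_bot_reply(reply: str) -> str:
    # Chatbot replies are checked before being shown to students.
    if is_flagged(reply):
        return "[This response was removed by the content filter.]"
    return reply
```

In practice, a model-based moderation check replaces `is_flagged`, but the ordering is the point: both directions of the conversation pass through the filter.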
How can I ensure that AI chat lab is safe for students?
As a teacher, you can view your students’ chat history with the bot in each level. You can see both clean and flagged student messages, giving you access to all of the chatbot activity in your classroom. You can learn more about how to view student chats here.
What should I do if AI chat lab gives incorrect or inappropriate responses?
While Code.org does its best to limit inappropriate responses, AI chatbots can occasionally generate unexpected outputs. This is explicitly addressed in the first lesson of the curriculum where students use AI chat lab. If this happens, consider turning it into a learning opportunity by discussing AI hallucinations or the sources of the data (you can watch our video on this here). We are always striving to improve, so please share any concerns or suggestions with us at artificialintelligence@code.org.
How do I provide feedback or report bugs found using AI chat lab?
You can provide feedback or report bugs to our support email or specifically to artificialintelligence@code.org. We value your input as we continuously work to enhance the tool.
What languages is AI chat lab available in?
The tool and corresponding curriculum are currently only available in English. The chatbot may or may not respond appropriately in other languages, so we recommend using it in English for the best experience.
What tech does the AI chat lab use?
Code.org uses OpenAI's GPT-4o mini to moderate content. The main chatbot experience is powered by a set of open-source large language models, with Mistral-7B-Instruct-v0.1 as the primary model.
Several fine-tuned models are also available in AI chat lab:
- BioMistral-7B: A large language model fine-tuned for specialized domains such as healthcare and medicine.
- Karen_TheEditor_V2_CREATIVE_Mistral_7B: A fine-tuned language model created to fix grammatical and spelling errors in US English without altering the style of the text.
- Mistral-Pirate-7b-v0.3: A fine-tuned language model made for generating intricate and authentic pirate-themed content.
- Arithmo-Mistral-7B: A fine-tuned model that is trained to answer and reason through mathematical problems.
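To make the lineup concrete, here is a minimal sketch of how a chat interface might map a student's model choice to one of the models listed above. The registry keys, the fallback-to-default behavior, and the function name are assumptions for illustration, not Code.org's actual implementation; only the model names come from this FAQ.

```python
# Illustrative model registry for the models named in this FAQ.
# The lookup keys and default-fallback logic are assumptions,
# not Code.org's actual routing code.

MODEL_REGISTRY = {
    "default": "Mistral-7B-Instruct-v0.1",
    "medical": "BioMistral-7B",
    "editor": "Karen_TheEditor_V2_CREATIVE_Mistral_7B",
    "pirate": "Mistral-Pirate-7b-v0.3",
    "math": "Arithmo-Mistral-7B",
}


def resolve_model(choice: str) -> str:
    """Return the model name for a student's choice, falling back to the default."""
    return MODEL_REGISTRY.get(choice, MODEL_REGISTRY["default"])
```

Any unrecognized choice falls back to the primary instruct model, which mirrors how a classroom tool would keep the default experience working even if a specialized model is unavailable.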