🤖 Custom Chatbots for Schools
Discover how schools are implementing AI through custom chatbots for students and parents.
The Los Angeles Unified School District recently introduced its own AI platform called “Ed”, a sun-shaped chatbot that serves as a personal assistant for students and their families. The chatbot acts as a resource hub for parents and students, enabling easy access to attendance records, grades, and various other resources. According to LAUSD Superintendent Alberto Carvalho, “It can also wake a student up, tell them what’s for lunch in the school cafeteria that day, or where their school buses are in real-time.” This is the start of a broader trend of school districts and educational institutions exploring ways to leverage artificial intelligence to enhance the learning experience and provide better support services to students. While still in its early stages, the Ed platform demonstrates the potential for AI to transform how schools operate and interact with families. As the technology continues to evolve, we can expect to see more innovative AI applications tailored specifically for education in the coming years.
Here is an overview of today’s newsletter:
AI literacy resources for K-12 educators
Discussion on the ethics of AI with MIT Policy Consultant
Analysis on ChatGPT feedback for student writing
Custom versions of ChatGPT being built within schools
🚀 Practical AI Usage and Policies
As schools begin implementing AI tools within their educational contexts, it will be important for students, parents, and educators to foster an understanding of AI concepts, applications, and implications. Below are some resources tailored to promote AI literacy across different grade levels. Feel free to share these resources with your fellow K-12 educators!
Grades K-5
Darren Coxen offers a fun and innovative way to engage students with AI models, like Claude 3 and Gemini, through virtual AI-guided walking tours. The AI can guide students through a scene from a play or book, a piece of artwork, a snapshot in history, or the human body; the possible adventures are endless. Prompts and lesson plan ideas are provided to help teachers lead this activity in their classrooms.
Teacher Cláudia Meirinhos provides a series of free lesson plans, materials, and resources for exploring artificial intelligence with elementary students. The 19 lesson plans cover topics ranging from AI applications to the ethical use of AI.
Grades 6-8
Dancing with AI was created by the Personal Robots Group at the MIT Media Lab. It is a week-long workshop curriculum designed to engage middle school students with physical-movement-based multimedia experiences and interactive projects centered on AI and natural interaction.
Experience AI is a free education program designed by the Raspberry Pi Foundation and Google DeepMind to support teachers in navigating the rapid advances in AI. They “offer lesson resources that have everything teachers need to deliver AI and machine learning sessions in the secondary school classroom, including lesson plans, slide decks, worksheets, and videos. The lessons focus on AI applications that are relatable for young people and designed so that teachers in a wide range of subjects can use them” (Source: Experience AI).
If you are looking to teach AI Ethics in the classroom, check out An Ethics of Artificial Intelligence Curriculum for Middle School Students created by Blakeley H. Payne from MIT Media Lab. This 100+ page document offers an extensive series of lesson plans, worksheets, slides, and other resources to introduce AI Ethics to students.
Grades 9 - 12
Stanford Graduate School of Education designed CRAFT (Classroom-Ready Resources About AI For Teaching) to offer free AI Literacy resources to equip high school teachers in facilitating lessons on AI. “CRAFT intentionally pursues a multidisciplinary approach so educators with a variety of discipline backgrounds can teach about AI.” (Source: CRAFT)
Discover AI in Daily Life by Google gives students first-hand experience with AI. Students will have the opportunity to learn artificial intelligence concepts using Quick, Draw!, AutoDraw, Google Translate, and Google Slides.
Common Sense has a wide selection of short lessons (< 20 mins each) to introduce students to AI and its social and ethical impacts. Students can work at their own pace to learn what AI is, its benefits and risks, and how to approach responsible use of AI.
Post-secondary
Google Cloud has recently released a comprehensive series of free online courses on Generative AI and machine learning through its Google Cloud Skills Boost platform. This extensive collection of video tutorials can be helpful for educators, students, and professionals in enhancing AI literacy and gaining the knowledge and skills necessary to leverage the power of AI effectively. Check out some of the learning paths here:
Introduction to Generative AI (45 mins): Understand what Generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers Google tools that can help you develop your own Gen AI apps.
Introduction to Large Language Models (30 mins): Explore what large language models (LLMs) are, the use cases where they can be applied, and how you can use prompt tuning to enhance LLM performance.
Introduction to Responsible AI (30 mins): Learn what responsible AI is, why it's important, and how Google implements responsible AI in their products. It also introduces Google's 7 AI principles.
Prompt Design in Vertex AI (5 hrs 15 mins): Discover how to craft effective prompts, guide generative AI output, and apply Gemini models to real-world marketing scenarios. This course teaches prompt engineering, image analysis, and multimodal generative techniques within Vertex AI (see the code sketch after this list).
Introduction to Image Generation (8 hrs): Explore the theory behind diffusion models, a family of machine learning models that have recently shown promise in image generation, and learn how to train and deploy them on Vertex AI.
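For readers who want to tinker beyond the courses themselves, here is a minimal sketch of the basic prompt-design pattern (role, task, constraints) using the Vertex AI Python SDK. The project ID, region, and model name are placeholders of our own, not values taken from the courses.

```python
# pip install google-cloud-aiplatform
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholder project and region: replace with your own Google Cloud settings.
vertexai.init(project="your-project-id", location="us-central1")

# A simple prompt-design pattern: give the model a role, a task, and constraints.
model = GenerativeModel("gemini-1.0-pro")  # placeholder model name
prompt = (
    "You are a communications assistant for a school district newsletter. "
    "Write a two-sentence announcement inviting parents to an AI literacy night. "
    "Keep the tone warm and avoid technical jargon."
)
response = model.generate_content(prompt)
print(response.text)
```

Tightening the role and constraints in the prompt, rather than the code, is usually where most of the iteration happens, which is exactly the skill the course focuses on.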
📣 Student Voices and Use Cases
This week, we had a chance to speak with Anjali Nambiar, a policy consultant at MIT and a current master’s student at the University of California, Berkeley, where she is pursuing her Master of Public Affairs (MPA). Below, we present select highlights from our conversation, which have been lightly edited for clarity and precision:
Q: Given the rapid evolution of AI in education, what do you see as the most pressing ethical considerations or policy gaps that need to be addressed to ensure the responsible use of AI tools in the classroom?
From the research that I've been doing, two things stand out. One is training on the use of AI in the classroom for the education community at large, be it school admin, teachers, parents, and even students. It is important they understand how to look for gaps in a tool with respect to the ethical use of their data and its privacy policies. How long will the data be retained? What kind of redressal mechanisms are in place? And who can they reach out to? Literacy on this topic, and training or the availability of knowledge around it, is a big chunk.
And the second piece, I would say, is with respect to data privacy itself. It is critical to update the current regulations, because FERPA was created in the 1970s, well before a lot of these technological advancements, so that they reflect the current landscape of technology in the classroom. I feel like revising a lot of these policies is necessary for technology use in the classroom.
Q: Privacy is a major concern with AI that uses student data. What policies or practices would you advocate for to protect student privacy while allowing beneficial AI applications?
There are a lot of regulations that I've been looking at. There’s FERPA and COPPA, the Children's Online Privacy Protection Act. There is also the Student Privacy Pledge, which is something the government has put in place where organizations can pledge to protect the privacy of students. In terms of other data privacy aspects, the AI Bill of Rights seems to be a strong framework. The GDPR has helped with data privacy across the EU countries, and right now, I think similar rules are also being adopted in certain states in the US. The Digital Services Act is another one, which looks at data privacy across all the organizations that provide services through a digital medium. I would also say that the FTC needs to come down on a lot more organizations in a stricter way, in a way that has some repercussions and puts accountability on the organizations that are creating these tools rather than putting all the accountability on the parents.
Q: Where do you see the future of education in the next 5 years?
I see AI tools being integrated a lot more in the classroom. But I'm also worried about schools or regions that don't have access to basic technological infrastructure, how they might get left behind, and how the gap might be exacerbated because of this. The exacerbation of equity gaps is something that I feel might be a concern. But on the other hand, I can see more easily accessible AI tools like ChatGPT being integrated into learning in many different ways, especially from a student and teacher point of view, where students are able to use it as an assistant in their learning journey, and teachers are using it to help them design their classes. I see it being used extensively in a school environment, though I am also concerned about those areas where technology and access to the internet are still a question.
Q: What are some projects you are currently working on, and how can our readers get involved?
Right now, I am working with the MIT RAISE Lab to look at how school districts and schools at different levels of the hierarchy can procure AI tools while considering aspects of data privacy. We are trying to create a realistic, easy-to-implement tool that reduces immediate risk, which schools can incorporate by taking into account different stakeholder voices. We're bringing together the voices of school administrators, teachers, and parents to leverage their expertise in order to create this strategy for the procurement of AI tools for schools.
At the moment, we are trying to understand what level of understanding schools have of these regulations, to what extent their procurement and data privacy policies incorporate AI, and whether they have redressal mechanisms in the event of a data breach. We want to better understand whether school admin need more information about regulations or more support on how data breaches can be managed. What level of understanding do parents have of these regulations? And do they need more support or training to understand the regulations and what rights they have? We are currently collecting data from school admin, teachers, and parents in the K-12 education system and would greatly appreciate your participation in the anonymous survey below!
📝 Latest Research in AI + Education
University of California Irvine, Arizona State University
Comparing the quality of human and ChatGPT feedback of students’ writing ↗️
This research paper investigates the effectiveness of ChatGPT, a generative AI, in providing formative feedback on student essays, comparing it against feedback from human evaluators. The study assessed feedback across several criteria: being criteria-based, clarity of directions for improvement, accuracy, prioritization of essential writing features, and supportive tone. The findings indicate that human evaluators generally provided higher quality feedback than ChatGPT across all categories except being criteria-based, where the AI slightly outperformed humans. Despite these differences, ChatGPT's feedback was considered of relatively high quality, suggesting its potential utility in educational contexts, especially for providing immediate feedback on early drafts and for students seeking quick responses to their writing. The study highlights the importance of balancing AI and human feedback in the writing process, acknowledging the strengths and limitations of each to enhance student writing development. It suggests that while AI like ChatGPT can offer valuable support in managing the workload of providing student feedback, it cannot replace the nuanced and personalized feedback that human educators provide. The study also points to the need for further research on how students interact with and benefit from AI-generated feedback, as well as the potential impacts of AI use on teachers' knowledge of student writing and instructional planning.
Steiss, J., Tate, T., Graham, S., Cruz, J., Hebert, M., Wang, J., Moon, Y., Tseng, W., Warschauer, M., & Olson, C. B. (2024). Comparing the quality of human and ChatGPT feedback of students’ writing. Learning and Instruction, 91, 101894. https://doi.org/10.1016/j.learninstruc.2024.101894
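To give a concrete sense of the kind of immediate, criteria-based feedback the study examined, here is a minimal sketch using the OpenAI Python client. The rubric wording and model name below are illustrative assumptions on our part, not the prompt or rubric the researchers used.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative rubric loosely based on the feedback criteria named in the study;
# not the actual instructions used by the researchers.
rubric = (
    "Give formative feedback on the essay below. Address each criterion: "
    "(1) alignment with the assignment criteria, (2) clear directions for improvement, "
    "(3) accuracy, (4) prioritization of essential writing features, (5) supportive tone."
)

essay = "Student essay text goes here..."

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a supportive writing tutor."},
        {"role": "user", "content": f"{rubric}\n\nEssay:\n{essay}"},
    ],
)
print(response.choices[0].message.content)
```

In practice, the study's takeaway still applies: feedback like this is best treated as a quick first pass on early drafts, with teachers providing the nuanced, personalized guidance.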
Virginia Tech
Using Generative Text Models to Create Qualitative Codebooks for Student Evaluations of Teaching ↗️
The research paper presents a novel method for analyzing Student Evaluations of Teaching (SETs) using Natural Language Processing (NLP) and Large Language Models (LLMs), aimed at extracting, clustering, and summarizing themes from a vast quantity of student feedback for teachers. It can be difficult to gather actionable insights from a large quantity of feedback, but this technique, applied to a dataset of 5,000 SETs from a large university, streamlines the process of creating a qualitative codebook that identifies prevalent themes within the feedback. By automating the traditionally time-consuming and labor-intensive process of thematic analysis, this method leverages the latest advancements in NLP and LLMs to enhance the efficiency and accuracy of qualitative data analysis. The study concludes that this innovative approach not only facilitates a more scalable and expedient analysis of SETs and other student writings but also holds the potential to revolutionize the analysis of various types of qualitative data across teaching and research settings.
Katz, A., Gerhardt, M., & Soledad, M. (2024). Using Generative Text Models to Create Qualitative Codebooks for Student Evaluations of Teaching. arXiv preprint arXiv:2403.11984.
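As a rough illustration of the extract-cluster-summarize pipeline the paper describes (not the authors' actual code), one could embed the comments, cluster them, and ask an LLM to propose a theme label for each cluster. The model names and cluster count below are assumptions for the sake of the example.

```python
# pip install openai scikit-learn numpy
import numpy as np
from openai import OpenAI
from sklearn.cluster import KMeans

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

comments = [
    "The instructor explained concepts clearly and gave great examples.",
    "Exams felt much harder than the homework prepared us for.",
    "Office hours were really helpful when I was stuck.",
    # ... more student evaluation comments ...
]

# 1. Extract: embed each comment (embedding model name is a placeholder).
emb = client.embeddings.create(model="text-embedding-3-small", input=comments)
vectors = np.array([item.embedding for item in emb.data])

# 2. Cluster: group similar comments (cluster count chosen arbitrarily here).
n_clusters = 2
labels = KMeans(n_clusters=n_clusters, n_init="auto", random_state=0).fit_predict(vectors)

# 3. Summarize: ask an LLM to propose a codebook theme for each cluster.
for cluster_id in range(n_clusters):
    members = [c for c, lab in zip(comments, labels) if lab == cluster_id]
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Suggest a short qualitative codebook theme (a few words) "
                       "for these student comments:\n- " + "\n- ".join(members),
        }],
    )
    print(f"Cluster {cluster_id}: {response.choices[0].message.content}")
```

The appeal of this approach is scale: the same loop that labels two clusters of three comments can label dozens of clusters drawn from thousands of evaluations.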
📰 In the News
Inside Higher Ed
Universities Build Their Own ChatGPT-like Tools ↗️
Key takeaways:
Universities are developing their own ChatGPT-like tools to address concerns over equity, privacy, and intellectual property, with the University of Michigan, Harvard University, Washington University, UC Irvine, and UC San Diego leading these efforts. These tools are aimed at providing equitable access to AI technology for students and faculty.
The University of Michigan launched U-M GPT, which sees daily use by 14,000 to 16,000 people, as part of an initiative to offer free, equitable AI tools in education.
UC Irvine introduced ZotGPT, initially available to staff and faculty, to help with various tasks while addressing privacy and intellectual property concerns.
These universities are leveraging Microsoft’s Azure platform for their AI tools, highlighting a preference for institutionally integrated solutions over proprietary platforms like OpenAI's ChatGPT, which have been criticized for lack of transparency in training processes and potential privacy issues.
The initiative reflects a broader concern within the academic community about ensuring that AI technology is accessible and beneficial to all, without exacerbating the digital divide.
A 19-university task force, "Making AI Generative for Higher Education," has been formed to explore the use and development of generative AI in higher education, indicating a growing interest in institutional AI tools.
NPR
AI images and conspiracy theories are driving a push for media literacy education ↗️
Key takeaways:
High school students are increasingly being educated in media literacy to identify deepfakes and online conspiracy theories, as demonstrated by events like MisInfo Day at the University of Washington, which teaches students to discern exaggeration, spin, and outright lies online.
MisInfo Day, founded in 2019 from a popular university course called "Calling Bulls***: Data Reasoning in a Digital World," has grown into a significant media literacy event, attracting hundreds of students and expanding nationwide with a focus on navigating misinformation through activities like escape rooms.
The event illustrates a growing demand for media literacy education amid rising concerns over misinformation, with 18 states having passed bills advocating for such education and a notable partisan gap in attitudes toward the importance and perception of media literacy.
Despite some political disagreements over media literacy initiatives, there is bipartisan support for the concept, though with caution against potential biases. Participants, including students, recognize the importance of these skills not just for themselves but also suggest that adults could significantly benefit from media literacy education.
“ChatGPT.” ChatGPT, OpenAI (GPT-4), openai.com/chatgpt. Accessed 25 Mar. 2024.
And that’s a wrap for this week’s newsletter! We want to extend our heartfelt gratitude to our readers from more than 90 countries worldwide. Thank you for your continued support and engagement!
If you enjoyed our newsletter and found it helpful, please consider sharing this free resource with your colleagues, educators, administrators, and more.