Large Language Models in Education Today
Explore how Large Language Models (LLMs) are shaping the future of learning and teaching. Stay up-to-date on the most popular LLMs and their applications in the classroom.
Large Language Models (LLMs), such as ChatGPT, are trained on massive text datasets, enabling them to discern linguistic patterns and generate text that closely resembles human language in response to inputs or prompts. In this newsletter, we will dive into some of the most popular LLMs used by students and teachers today and explore the applications, limitations, and possibilities of these tools. As the use of AI technologies rises exponentially, what aspect of AI in education are you most concerned about?
Here is an overview of today's newsletter:
Comparison of some of the most popular LLM tools used by students and educators
How ChatGPT is changing the student experience
How educators around the world are addressing these technologies in the classroom
Enjoying this newsletter? We are launching a referral program in the hope that more educators can access these free resources:
1 referral: Early access to our AI x Education Resource Hub for Educators
5 referrals: We will help you craft a ChatGPT prompt for your specific teaching needs
10 referrals: 20-minute 1-on-1 consulting session with our AI x Education team to discuss how AI can be useful for your classroom or school
Practical AI Usage and Policies
Here is a look at some of the most popular AI tools that educators and students can leverage alongside ChatGPT to enhance engagement, and what makes each one unique:
ChatGPT-4 (Plus)
ChatGPT-4 is the paid, more powerful, and newest platform for ChatGPT, built on GPT-4 rather than the GPT-3.5 model that powers the free tier. OpenAI has designed GPT-4 to be the most accurate and capable version of ChatGPT, featuring a range of exclusive utilities.
Unique Capabilities:
Plugins: ChatGPT-4 supports plugins such as Wikipedia and Wolfram, giving it access to a far greater variety of topics and stronger mathematics capabilities.
Code Interpreter: This feature allows the user to upload code and data, run analyses to test hypotheses, create graphs and charts, and output files.
Accepts image prompts (rolling out slowly to the public): This is a major breakthrough for GPT-4, since it can analyze documents containing images, diagrams, and screenshots.
Browse with Bing: This feature allows ChatGPT to browse the internet and retrieve the latest information online.
DALL·E 3: This feature allows ChatGPT to access OpenAI's latest image generation model, DALL·E 3. The user can generate pictures from simple text prompts.
Perplexity
Perplexity is an AI-powered search engine that is designed to offer accurate and informative answers to complex or challenging questions. This is a powerful tool with these unique capabilities:
Real-Time Data: Perplexity has the ability to utilize the most up-to-date information from the internet to generate relevant outputs.
Source References: Sections of the response include in-text references to the links and resources used to generate the response.
Search Focus: Perplexity has a unique setting that allows users to narrow down the sources used to generate the response, such as published academic sources.
File Uploads: Users have the ability to upload files in plain text, code, or PDF format as content to generate the response.
Claude 2
Claude was released in March 2023 by Anthropic, a company founded by former OpenAI employees. Anthropic is dedicated to creating a helpful, honest, and harmless AI system through its emphasis on Constitutional AI. Claude's unique capabilities include:
Longer Prompts: Claude can process up to 100,000 tokens per prompt, approximately equivalent to 75,000 words, about 12 times the standard allowance provided by GPT-4 (see the token-counting sketch after this list).
File Uploads: Users can attach up to 5 files at a time, each up to 10MB, for Claude to read, analyze, and summarize.
Data Analysis: Claude can analyze hundreds of pages at a time, compared to roughly 50 pages for ChatGPT.
Safety Guardrails: In addition to having humans review outputs and select the most helpful and least harmful responses, Claude is trained with Constitutional AI, in which a second AI model critiques and revises responses against a set of principles to improve the safety of its output.
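To put the token figures above in context, here is a minimal, illustrative sketch of how token counts can be estimated. It assumes Python with OpenAI's open-source tiktoken package installed (pip install tiktoken); the sample text is made up, the 8,192-token window used for GPT-4 is an assumption consistent with the "12 times" comparison above, and since Claude uses its own tokenizer, the Claude figure is only a rough approximation.

# Illustrative sketch: estimate how many tokens a text uses and how much of a
# model's context window that fills. Requires: pip install tiktoken
import tiktoken

def estimate_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
    # cl100k_base is the tokenizer encoding used by GPT-3.5 and GPT-4;
    # Claude's tokenizer differs, so treat its result as an approximation.
    encoding = tiktoken.get_encoding(encoding_name)
    return len(encoding.encode(text))

sample = "Large Language Models are trained on massive text datasets. " * 500
n_tokens = estimate_tokens(sample)
for model, window in [("GPT-4, assumed 8,192-token window", 8_192), ("Claude, 100,000-token window", 100_000)]:
    print(f"{model}: {n_tokens} tokens fills {n_tokens / window:.0%} of the window")

Roughly speaking, 100 tokens correspond to about 75 English words, which is how 100,000 tokens works out to approximately 75,000 words.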
Latest Research in AI + Education
How ChatGPT is Transforming the Postdoc Experience (Read the paper)
According to a global postdoc survey, one in three postdocs is using AI to assist with writing and to keep up with the literature in their chosen field. This is a strong indicator of the broad reach of AI in education: the impact is not limited to high school or even college students, but extends as far as postdoctoral researchers who are already extremely well-versed in their fields. Further, two-thirds of those who used these tools indicated that it changed their day-to-day work by taking the drudgery out of academic tasks. Certain tasks, like polishing syntactic details in code or formatting citations for an English paper, do not necessarily add much to a student's knowledge but nonetheless demand a significant chunk of time and effort. It is in areas like these that generative AI could be impactful while still retaining the intellectual core of the assignment.
Nordling, Linda. "How ChatGPT Is Transforming the Postdoc Experience." Nature, vol. 622, no. 7983, 2023, pp. 655–657, https://doi.org/10.1038/d41586-023-03235-8.
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (Read the paper)
The seemingly never-ending growth of generative AI capabilities has been spurred by an ever-growing push for larger models trained on more data than the human mind could possibly comprehend. The largest trained language model before GPT-3 had on the order of 10 billion parameters; GPT-3 has 175 billion parameters, and GPT-4 reportedly has around 1.7 trillion. Larger and larger language models seem inevitable, but are they necessary? There are not only financial and opportunity costs to the push for larger language models but also the potential for substantial harm: a model trained on exceedingly and unnecessarily large amounts of data risks overfitting and reproducing the biases embedded in that data. This could lead to problems like stereotyping and denigration that end up harming already marginalized groups. Researchers must weigh the benefits and risks in the push for larger models.
Bender, Emily M., et al. "On the Dangers of Stochastic Parrots." Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 1 Mar. 2021, pp. 610–623, https://doi.org/10.1145/3442188.3445922.
Student Voices and Use Cases
We interviewed students to learn more about their experience using AI in schools. This week, we had a chance to speak with Yash Yardi, a high school junior at Illinois Math and Science Academy (IMSA). Here are some of our highlights:
Q: How do you currently utilize AI tools in your classrooms?
Currently, in most courses at IMSA, the use of AI tools such as ChatGPT and BingGPT is not permitted, as teachers, especially in the English department, believe this enables plagiarism and dishonest work. The history department has approached AI with a different mindset: while it agrees with the English department that students should not use these tools, it has openly embraced AI for plagiarism detection through certain tools and Turnitin, a popular platform for submitting assignments as text or files that is designed specifically to combat plagiarism. The Computer Science department, and specifically the Machine Learning course, is an exception to this trend, since that class is designed entirely around building a key element of large AI projects. I will be taking this course next semester, and it will be my first opportunity to use AI directly in a course at my school. Despite most courses not allowing AI tools, initiatives such as IMSA.ai are growing, with the purpose of developing tools that support students and help teachers while also being permitted by instructors.
Q: What are the most urgent problems caused by AI you think educators should address first?
According to several teachers, schools, and businesses, LLMs have caused some of the largest plagiarism problems in educational history due to the exceptional accuracy and speed of models such as Google Bard, ChatGPT, BingGPT, and Anthropic's Claude 2. These have all been used to complete assignments, take quizzes, and write essays, making it harder for students to learn the content they need. The Department of Education has stated that AI cannot replace teachers, guardians, or education leaders despite its incredible capabilities. With this in mind, and remembering that calculators were also banned when they were first introduced, it may be unnecessary to remove AI from the education system. Rather, educators can demonstrate how AI can help without enabling cheating: as a supplemental tool that simplifies and explains topics in ways different from the teacher and provides review materials rather than direct answers. All of this can be done when educators supply students with "healthy prompts" for these LLMs, since the type of prompt ultimately determines the response. This is the least educators can do to limit the power of these LLMs, since only the companies behind them can create limited versions for educational or other purposes.
Q: What do you think educators and administrators could do to better incorporate AI into classrooms?
I think educators and administrators alike can incorporate seminars on the healthy use of AI, as currently most students do not have the guidance to use tools such as ChatGPT for learning purposes, even when they think they are doing so. Much as younger generations are taught about substance abuse, AI used poorly, or solely for plagiarism because of its accuracy and efficiency, can be just as detrimental to a student's education. Creating a program or seminar over the course of each school year that teaches students to use AI with the right prompt styles, so that it saves time and helps them rather than handing them the answers, could be extremely beneficial. On top of this, educators can point out patterns in the AI's reasoning and walk through how it simplifies topics for students.
Opportunity to Learn about AI Usage from Your Students!
There is less than a week left in the AI Classroom Challenge! If you have already shared this, we would appreciate it if you could remind your students about this challenge. We have designed submissions such that they can be completed in just 1.5 hours (and probably less if you use AI), and students are also free to submit multiple entries if they wish.
It's also not too late to share this with your students! Once you fill out this form, we'll send you resources for easy sharing and the executive summary of the results.
In the News
ABC News
Students and Professors Grappling with AI in Academia
Key takeaways:
College students at UC Davis used ChatGPT to assist with outlines for a specialized writing assignment
There is always a risk associated with over-relying on AI; the bulk of the final product needs to be original work
A professor at Furman University had to give a student a failing grade after the student confessed to using ChatGPT for the final
Another college student's assignment was flagged for AI use and she was penalized despite claiming that she did not use AI
Evidently, there has already been a whole spectrum of incidents related to the use of AI tools in education. This is definitely not limited to K-12: colleges face an equally strenuous task of outlining unambiguous rules for the use of AI in coursework. There have been instances of students being heavily penalized after being flagged for AI use, some who confessed to it as well as others who insist that they did not. Plagiarism checkers like Turnitin often incorrectly flag students' assignments as AI-generated. It is imperative to develop a structured approach to these issues to prevent students from being wrongly penalized.
CBS News
Teachers Partner with AI Detection Platform GPTZero
Key takeaways:
The second-largest teachers' union in the country has partnered with a company that detects the use of AI in student submissions
GPTZero aims to help educators work with, not against, generative AI
GPTZero was initially launched in January 2023 and has since released new tools, for example, a tool that allows students to certify their content as human-written
As the earlier article showed, AI detectors do not always work well. AI in the classroom is here to stay for the foreseeable future, and adapting educational policies is vital for using AI as a supporting tool in coursework.
We hope you enjoyed this week's edition of our AI x Education Newsletter! If you have any resources you would like to share with our community of 4.6k+ educators in our future newsletters, please submit them below!
If you enjoyed our newsletter and found it helpful, please consider sharing this free resource with your colleagues, educators, administrators, and more.