Many students today use AI tools like ChatGPT on their own, but several platforms are now introducing collaborative features designed for group projects. These shared spaces enable students to co-create knowledge hubs while promoting transparency and communication with both teachers and classmates. Tools like Boodlebox and Perplexity are leading this shift, and in today's edition, we'll explore how AI collaboration could reshape the educational landscape.
Here is an overview of today’s newsletter:
Exploration of Perplexity’s new feature “Spaces” for education
Insights from a Berkeley student’s perspective on AI in college
Comparison survey of AI literacy across Asian and African countries
The issue of AI detectors wrongly accusing students of cheating
Join us on 10/25 for our next webinar in the AI x Education Webinar Series, where we will feature Sarah Newman, Director of Art & Education at metaLAB at Harvard. As generative AI becomes increasingly accessible, it undeniably influences how students approach assignments and learning. Should schools and universities ban AI from the classroom, embrace it as a powerful learning tool, or find a balance between the two? With students already leveraging AI creatively, this webinar will explore how institutions can balance innovation and academic integrity.
Sarah Newman will share best practices for creating AI course policies, drawing from her experiences workshopping ideas with students at Harvard and engaging with educators across the US and internationally. Attendees will gain concrete strategies and policy templates and a deeper understanding of how to craft AI guidelines that foster ethical use while enhancing learning, curiosity, and criticality.
Whether you're an educator, administrator, or institutional leader, this session will equip you with the tools to thoughtfully and responsibly engage with AI. Register now for free through this link!
🚀 Practical AI Usage and Policies
Perplexity recently released “Spaces,” AI-powered collaboration hubs that let students work with classmates and search through their own class/project files in addition to the internet. After setting up a Space, students can invite collaborators such as classmates or teachers, connect internal files, and customize the AI assistant by choosing their preferred AI model and setting specific instructions for how it should respond.
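Spaces itself is a web product, but for readers who want to script a similar workflow, Perplexity also exposes an OpenAI-compatible chat API. Below is a minimal sketch; the `sonar` model name and the idea that a Space's custom instructions behave like a system prompt are assumptions for illustration, not Spaces' actual internals.

```python
# Hypothetical sketch: mimicking a Space's custom instructions via
# Perplexity's OpenAI-compatible chat endpoint. Model name and the
# instructions-as-system-prompt mapping are illustrative assumptions.
import json
import os
import urllib.request


def build_space_request(instructions: str, question: str, model: str = "sonar") -> dict:
    """Build a chat payload whose system message plays the role of a
    Space's custom instructions."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": instructions},  # Space-style instructions
            {"role": "user", "content": question},
        ],
    }


def ask(payload: dict, api_key: str) -> str:
    """POST the payload to Perplexity's chat endpoint and return the reply text."""
    req = urllib.request.Request(
        "https://api.perplexity.ai/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


payload = build_space_request(
    instructions=(
        "You are a study assistant for our statistics course. "
        "Guide the student; don't just give away answers."
    ),
    question="Summarize the main points from Week 5's lecture.",
)

# Only call the API when a key is configured in the environment.
api_key = os.environ.get("PPLX_API_KEY")
if api_key:
    print(ask(payload, api_key))
```

The system message is where a Space's "specific instructions for how it should respond" would live in this sketch; swapping the model string changes which underlying AI model answers, much like the model picker in the Spaces UI.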
Ways to Use Perplexity Spaces
Create a Knowledge Hub for Your Class
Students can import course documents, notes, and syllabi into one place for the AI to draw from.
Ask Questions from Your Class Material
You can ask specific questions about your materials and get answers in seconds.
See it in action:
↪ Could you summarize the main points from Week 5’s lecture?
↪ What was the solution to Exercise 2 from Homework 1?
↪ Compare the solutions to Questions 2 and 5 from Homework 1
↪ Based on the syllabus, what are the key topics I should focus on for the upcoming midterm?
↪ Please organize all my notes from The Good Life course
Get Help from the Web
If you need to find resources outside of your notes, Perplexity can also pull information from the web. Whether it’s a study guide, coding tutorials, or even a set of flashcards for a chapter in your textbook, you can easily ask Perplexity to do the searching for you.
See it in action:
↪ Can you find me a set of online flashcards for each chapter in our syllabus?
↪ Can you find online practice problems for Week 6 of the syllabus that align with the chapters we've covered?
↪ Can you suggest articles related to Week 6 of the syllabus to help with my project?
↪ Can you locate math problems for Week 4 of the syllabus that match the difficulty of our homework?
Create a Study Guide
Perplexity Spaces can create a custom study guide to prepare students for an upcoming exam using all the uploaded class materials. It can also generate a study schedule, breaking down what you should focus on each day.
See it in action:
↪ Can you create a custom study guide for our midterm next week based on the syllabus (from week 1 to week 6)?
↪ Look through all my uploaded materials and generate a study schedule leading up to our final exam.
Collaborate on Group Projects
Perplexity Spaces isn’t just for individual use—it’s perfect for collaborating on group projects. You can share your space with team members, allowing everyone to access the same materials, ask questions, and generate project plans.
See it in action:
↪ Can you create a project plan for Exopt2, including task division and deadlines?
↪ Can you summarize the key steps we need to take to finish our Exopt1 group project on time?
↪ Create a shared calendar for our group that includes deadlines and progress check-ins for Exopt1
↪ Can you draft an outline for our group’s Exopt2 presentation with suggestions on who should cover each section?
📣 Student Voices and Use Cases
This week, we had the chance to interview Amy Jain, a fourth-year student at UC Berkeley pursuing majors in Cognitive Science & Computer Science with a certificate in Entrepreneurship & Technology from the College of Engineering. Below, we present selected highlights from our conversation, lightly edited for clarity:
Q: In what ways have your professors integrated AI tools into the classroom, and could you offer a specific example of how you interacted with these tools?
In one of my upper-division Statistics courses on Statistical Methods for Data Science, we use a tool that’s become a bit of a game-changer for me, one I use almost daily: a chatbot called Pingpong. Just like its name suggests, it mimics the back-and-forth of a conversation, guiding us like a tutor without simply giving away answers. It was developed by Sharad Goel from Harvard and was initially called StatGPT. What I love about Pingpong is that it’s designed to help you discover the solution on your own, rather than spoon-feeding answers, which really keeps the learning process intact.
Q: As a computer science major, how have you seen your classes or your field change given the advancements of these generative AI technologies?
There are two parts to this…
I’ve noticed both exciting changes and slower transitions in how generative AI is impacting the field. One great example is CS61A, the introductory computer science class at Berkeley, which I took in my first semester. Although I'm no longer in the class, they’ve since integrated platforms like Azure OpenAI Service to enhance the learning experience. Former students even created a 61A-Bot that provides contextual feedback to help current students tackle complex coding challenges. From what I hear, it’s been incredibly effective in reducing homework completion times and offering immediate assistance during study sessions—essentially acting as a virtual tutor.
In my own classes, especially upper-division ones, the change is happening, but it’s slower. Curriculum updates take time, and while AI advancements are moving at lightning speed, it’s difficult for academic systems to keep pace—even at one of the top computer science programs in the world. It’s surreal to be living through this period of rapid innovation, where AI is evolving all around us, but the structured courses take longer to adapt. And honestly, that makes sense.
However, outside the classroom, I’m seeing rapid shifts in extracurriculars related to AI. Entire clubs like AI Entrepreneurs at Berkeley have sprung up in the last couple of years, and there’s been a surge in hackathons focused on AI technologies. Some students are even dropping out to pursue AI startups, calling it the 'second gold rush' in California—this time with computers instead of shovels. Applications to labs like the Berkeley Artificial Intelligence Research (BAIR) Lab have also skyrocketed. So while the changes in coursework are slower, there’s no denying that AI is reshaping the environment in other ways, and at different speeds.
Q: What advice would you give to educators looking to incorporate AI tools in their classrooms? Based on your own experience, what worked well and what do you think can be improved?
I approach this question the same way I would evaluate the startup pitch decks that come through the deal flow at Pear VC. Don’t retrofit a solution to the problem. It’s always important to focus on the purpose, not the tool. As a founder of a tech company myself, one of the common questions you hear at conferences is: Is your solution a painkiller or a multivitamin? Is it solving a real problem, or is it just making things easier? It’s critical to first clarify the learning objectives and pain points before deciding to use an AI tool simply because it’s trendy. Otherwise, we risk creating a distraction rather than a benefit, and the integration becomes superficial, without any real impact.
AI in education doesn’t just have to benefit students through things like hyper-personalization; it can also help teachers. Gradescope (a company founded by Berkeley professors, actually, GO BEARS!) focuses on providing instant feedback on assignments and streamlining the grading process for instructors. It’s also a perfect example of how AI can automate those boring, repetitive tasks, freeing up teachers to focus more on teaching and meaningful student engagement. Both students and professors benefit from this shift in the learning environment.
What worked well in my experience is starting with clear goals before integrating AI tools. In one of my classes, we used AI-driven feedback systems that allowed for immediate correction of errors. This kind of real-time feedback transformed how I approached problem-solving—it wasn’t just about getting answers, but about understanding the process.
What could be improved is the mindset around AI adoption. There needs to be a shift away from using AI for the sake of it and more focus on thoughtful, purpose-driven implementation. AI tools should solve real classroom challenges—whether that’s reducing administrative burden for teachers or helping students grasp difficult concepts—not just be a flashy addition. Educators also need support in understanding how to use these tools effectively so that the integration feels seamless and truly enhances the learning experience.
Q: In your opinion, what are the most exciting possibilities for AI in transforming the future of education?
If there’s one thing I’m most excited about in AI for education, it’s making hyper-personalized learning accessible to everyone. And everyone means EVERYONE. Irrespective of socioeconomic status, location, or school funding… these will no longer be the barriers they once were, and I think that’s a real game changer. We’re already seeing this with tools like Khanmigo here in the U.S. and Squirrel AI in China. I know Khanmigo offers a virtual tutor experience that adapts to the student’s learning pace & gives guidance just like an in-person tutor would… the only difference is that a resource like Khanmigo isn’t reserved for just the wealthy or those living in areas with access to elite education. It almost democratizes education.
The best use case would be a young, bright student from a rural town who didn’t have access to the best teachers or tutoring but was able to use AI-powered tools to learn at their own pace and get a level of support they otherwise never would’ve gotten. Use cases like these help me realize that AI is helping to level the playing field. And those are exactly the types of applications we should be considering more of! The social impact side of things, the difference it will actually make in benefiting people’s lives, and not just the B2B SaaS applications that Silicon Valley raves about regarding AI…
It’s also about allowing students who’ve never had access to the resources we take for granted, like here at UC Berkeley, to learn from some of the best virtual “tutors” available. We already know we live in an industrial-revolution-era education system, where classrooms follow a one-size-fits-all model. An AI that can adapt in real time to whether a student is more visual, prefers step-by-step reasoning, or needs more interactive engagement adds up significantly in the long term for overall comprehension and content retention too.
📝 Latest Research in AI + Education
University of Cologne, Rotterdam School of Management, Erasmus University
AI Meets the Classroom: When Does ChatGPT Harm Learning? ↗️
The researchers examined how LLMs like ChatGPT impact learning outcomes in coding classes through field observations and experimental studies. They found two contrasting effects: LLMs can enhance learning when used as "personal tutors" for explanations and concept clarification, but impair learning when students rely on them excessively to solve practice exercises, especially through copy-pasting solutions. The study also found that beginners benefit more from LLM access but are also more likely to misuse it, and that students tend to overestimate their learning progress when using LLMs.
Opinion: The findings from this research provide compelling evidence for why educational institutions must implement strict oversight of LLM usage, particularly regarding copy-paste practices. When students can easily copy-paste exercise solutions from LLMs, they bypass the crucial "learning-by-doing" process that builds deep understanding and problem-solving skills. This is especially concerning for beginners who, while standing to gain the most from proper LLM usage, are paradoxically the most susceptible to falling into harmful copy-paste habits. The fact that students consistently overestimate their learning progress when using LLMs adds another layer of risk - they may believe they're mastering the material when they're actually developing surface-level understanding at best. Without proper guardrails and monitoring, we risk creating a generation of students who mistake solution retrieval for genuine learning, ultimately undermining their educational development and future capabilities.
Lehmann, M., Cornelius, P. B., & Sting, F. J. (2024). AI meets the classroom: When does ChatGPT harm learning? arXiv. https://arxiv.org/abs/2409.09047
Frontiers in Communication Journal
Artificial intelligence literacy among university students—a comparative transnational survey ↗️
The study surveyed 1,800 university students across four Asian and African countries (Malaysia, Egypt, Saudi Arabia, and India) to assess AI literacy levels. Key findings included significant disparities based on nationality (with Malaysian students scoring highest), academic specialization (science/engineering students performed better), and academic degrees. Interestingly, gender showed no significant impact, and students with lower grades sometimes demonstrated higher AI literacy. The average AI literacy score was 2.98 out of 5, indicating moderate capability.
Opinion: The findings from this transnational study underscore the critical importance of achieving equitable AI literacy across educational institutions and geographical boundaries. The significant disparities observed between countries and academic specializations point to a concerning trend that could exacerbate existing socioeconomic inequalities as students enter the workforce. For instance, the study revealed that science and engineering students demonstrated notably higher AI literacy compared to those in humanities and medical sciences, suggesting that certain academic paths may inadvertently create technological advantages or disadvantages. As AI increasingly becomes integrated into workplace processes, these educational disparities could translate into professional barriers, with some graduates better equipped to leverage AI tools than others. This is particularly concerning given that AI literacy is now considered a fundamental human right and requirement for societal advancement, as noted in the study's introduction. The observed variations in AI perception and willingness to use AI across different demographics further suggest that without intentional intervention to standardize AI education, we risk creating a workforce divided between those who can effectively harness AI technologies and those who cannot, potentially perpetuating and amplifying existing social and economic inequities.
Hasan, M. H. M., Bawazir, A., Alsabri, M. A., Alharbi, A., & Okela, A. H. (2024). Artificial intelligence literacy among university students—a comparative transnational survey. Frontiers in Communication, 9. https://doi.org/10.3389/fcomm.2024.1478476
📰 In the News
Bloomberg
AI Detectors Falsely Accuse Students of Cheating—With Big Consequences ↗️
Key takeaways:
About two-thirds of teachers now regularly use AI detection tools to spot AI-generated content in student work, but even with small error rates, these tools can falsely flag legitimate student writing, resulting in serious academic consequences.
Students who are neurodivergent, non-native English speakers, or those who write in a more formulaic style are particularly vulnerable to false accusations - a Stanford study found that while these tools were highly accurate for US-born students, they flagged over half of non-native English speakers' essays as AI-generated.
In response to false accusations, students are taking extensive defensive measures, including screen recording themselves writing, using pre-screening tools, and avoiding legitimate writing assistance programs like Grammarly - significantly impacting their learning experience.
The AI detection industry is growing rapidly (with $28M in funding since 2019), but some educators and institutions are moving away from these tools due to accuracy concerns, with some arguing that schools should focus on adapting to AI's presence in education rather than trying to eliminate it.
Harvard Medicine
How Generative AI Is Transforming Medical Education ↗️
Key takeaways:
Harvard Medical School is proactively integrating generative AI into its curriculum, including a mandatory one-month introductory course for incoming Health Sciences and Technology students and a new PhD track in AI in Medicine (AIM), which received over 400 applications for just seven spots.
While AI tools can match or exceed medical professionals' performance on some tests and help reduce administrative burden (like clinical note-taking), educators emphasize the importance of maintaining critical thinking skills and the ability to verify AI's recommendations, as AI still makes more reasoning errors than experienced physicians.
The school is developing specialized "tutorbots" trained on HMS curricula and testing AI tools that can help match students with relevant patient cases during clinical rotations, aiming to enhance learning opportunities while maintaining the institution's educational standards.
HMS faculty acknowledge both the risks (like potential bias in AI training data) and benefits of AI in medicine, suggesting that future successful medical professionals will be those who can effectively harness AI while maintaining the human elements of patient care and clinical reasoning.
And that’s a wrap for this week’s newsletter! If you enjoyed our newsletter and found it helpful, please consider sharing this free resource with your colleagues, educators, administrators, and more.