🌱 The Environmental Impact of AI
Dive into the critical environmental consequences of AI, both its positive and negative impacts on the world
As Earth Day approaches, it is worth examining the environmental impact of AI technology with your students. The surge in AI's popularity has driven a significant increase in computing power and electricity consumption, contributing to higher carbon dioxide emissions. For instance, training OpenAI's GPT-3 language model alone released an estimated 500 tons of carbon dioxide into the atmosphere. At the same time, AI also offers innovative ways to mitigate climate change and promote sustainability. Companies like DI Pathways leverage AI-driven energy-efficient solutions to help institutions reduce their carbon footprints. As AI usage continues to grow, students, educators, and the general public need to recognize AI’s role in addressing environmental challenges while working to minimize its negative ecological impact. How familiar are you with the environmental implications of AI?
Here is an overview of today’s newsletter:
Educational resources on the environmental impact of AI
Transparency policy for completing assignments and projects
A study conducted on students’ understanding of machine learning
Unreliability of current AI-detection software systems
🚀 Practical AI Usage and Policies
Below is a curated list of articles and educational resources to help educators and students better understand the implications of AI on the environment:
Articles:
4 ways AI can help with climate change, from detecting methane to preventing fires (NPR)
AI and Sustainability: Will AI Help or Perpetuate the Climate Crisis? (Stanford University)
Training a single AI model can emit as much carbon as five cars in their lifetimes (MIT Technology Review)
Tackling climate change with machine learning (MIT Sloan)
9 ways AI is helping tackle climate change (World Economic Forum)
Is generative AI bad for the environment? A computer scientist explains the carbon footprint of ChatGPT and its cousins (The Conversation)
Educational Resources:
AI’s Impact on the Environment (AI for Education) offers a classroom guide and sources to facilitate discussion on the environmental impact of Generative AI
Environmental Impact of AI (MIT RAISE) is a quick 20-30 minute workshop to introduce students to the topic of computational and environmental costs associated with training AI models. Classroom slides and scripts are provided
Can AI help Climate Change? (IBM Technology) is a video that discusses ways we can combine chemistry with AI to develop solutions to prevent climate change
How Rose Farms in Kenya are Using AI to Battle Climate Change (BBC News) is a recent news clip that can serve as a helpful case study to introduce students to ways AI technology can be used to promote sustainability
📣 Student Voices and Use Cases
This week, we had a chance to speak with Mashiko Lortkipanidze, a junior at Minerva University majoring in Computer Science with a concentration in Mathematics and minoring in Economics. Below, we present select highlights from our conversation, lightly edited for clarity and precision:
Q: What advice would you give to educators looking to incorporate AI policy in their classrooms?
I recently had the opportunity to advise my university in creating an AI policy and am in the process of writing an AI guidebook. In terms of creating a policy, the main principle for me is always transparency and figuring out the best way to communicate where exactly the AI was used. The best way that I have found so far is to break down the steps one takes to complete a certain assignment. For example, if you're writing a research paper, you can break it down into categories like the literature review phase, coming up with a hypothesis, and writing the main body. The focus is to break down the tasks you completed into trackable steps or components of the assignment. Then, for each of the steps, create a table and say, "Here I used an AI tool to brainstorm the title. Here I used an AI tool to refine my hypothesis question. Here I used an AI tool to improve my writing in the body paragraph." Then you can write an outline. This helps with project management, because you can clearly see the broken-down steps of what you have to do, and it also makes your work more transparent, letting you see which tools you used to increase productivity at every single step.
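To make this concrete, here is one hypothetical sketch of what such a disclosure table might look like for a research paper. The steps and tool uses shown are illustrative, not taken from the interview:

```markdown
| Assignment step         | AI tool used | How it was used                            |
|-------------------------|--------------|--------------------------------------------|
| Brainstorming a title   | ChatGPT      | Generated candidate titles to choose from  |
| Literature review       | None         | N/A                                        |
| Refining the hypothesis | ChatGPT      | Rephrased the hypothesis question          |
| Writing body paragraphs | Grammarly    | Grammar and clarity suggestions            |
```

A table like this can be attached to the submitted assignment, giving the instructor a step-by-step record of where and how AI was involved.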
Q: As a student, could you share some examples of how you've utilized AI technology in your studies?
I have two favorite use cases that I rely on all the time. First of all, I hate when mathematics textbooks say, "Oh, this is easy," and then they drop an equation that came out of nowhere and you're left wondering, "How did you get there? How is that easy?" If you don't understand how they arrived there, you're kind of stuck. You cannot keep going, which is daunting. So I really like using AI to interact with my mathematics textbooks. I usually paste what I have to read into ChatGPT and ask questions about it. Usually, I try to ask how they got to the equation step by step. If I don't understand a step, I can go even further and ask more. This way, you can really use the conversation to dive deeper and deeper into where it all comes from, because I feel like that's knowledge. You need to lay the bricks to understand what's on top, right? It's kind of like a pyramid. And if those bricks are missing, it doesn't sit right in your brain. I think using ChatGPT really helps me see the whole picture. As for the second use case, I use AI to help plan out my schedule because I have a lot of deadlines. At Minerva University, we have an assignment due every week, and if you don't plan it out, you will be lost. I basically try to break the work down into steps and then ask ChatGPT or other AI tools to distribute it over my week.
Q: Have you faced any challenges or concerns while using AI? If so, what were they?
I was recently reading about jailbreaking, which is a way of getting ChatGPT to provide harmful information to users. Even though there are safeguards in place, there are still many loopholes, and this paper discussed how the system is safeguarded and tested. For instance, once you attach an image, you can break the safeguards, and the harmful information becomes easily accessible through generated text. The example given in the paper was how to make some kind of gun more stable. You would need to try really hard to get that information through traditional search engines like Google, but through ChatGPT, it could be a simple answer.
Q: How do you envision AI impacting education in the next five years?
I'm really curious about how this will develop. I am especially interested to see whether any sort of gap emerges between universities that strictly banned AI tools and universities that didn't. It would be interesting to see if the productivity and skill levels differ among students who used AI tools throughout their education. In the next five years, I think teachers will become better because they will have more tools to help them, perhaps with delegating tasks and automatic grading. Teachers will also be able to focus on improving their lessons, have more time to connect with students, and think about the best ways to teach what they teach, with AI tools helping them do this. I think students can also become more engaged when the material is made more personally interactive for them. I hope that it will be a transformative change rather than a detrimental one.
📝 Latest Research in AI + Education
University of Pennsylvania
This study explores teenagers' understanding of machine learning-powered applications (MLPAs) from a knowledge-in-pieces (KiP) perspective, diverging from traditional approaches that seek coherent theories or focus on misconceptions. By engaging youths aged 14-16 in cooperative inquiry methods, the research finds that participants possess fragments of understanding about MLPAs, notably recognizing that these applications learn from training data and identify patterns to produce varied outputs. This insight challenges prior notions of youths' simplistic views on ML, revealing instead a nuanced comprehension that MLPAs operate on both user input and training data and are designed by humans. These findings advocate for a more nuanced approach to AI literacy in education, emphasizing the potential of youths' everyday knowledge as a foundation for formal learning. By comparing this informal understanding with the ML Pipeline used in instruction, the study suggests that everyday knowledge could scaffold learning toward achieving instructional goals. It proposes that educational tools and activities should leverage these pre-existing knowledge pieces, focusing on pattern recognition and the design and influence of training data in MLPAs, to foster a deeper understanding of ML concepts. This approach not only recognizes the value of informal knowledge but also suggests pathways for integrating AI literacy into K-12 education in a way that is relevant and accessible to students.
Morales-Navarro, L., & Kafai, Y. B. (2024). Investigating Youths' Everyday Understanding of Machine Learning Applications: a Knowledge-in-Pieces Perspective. arXiv preprint arXiv:2404.00728.
British University Vietnam, James Cook University Singapore
GenAI Detection Tools, Adversarial Techniques and Implications for Inclusivity in Higher Education ↗️
This study examines the reliability of six major Generative AI (GenAI) text detection tools when faced with AI-generated content that has been intentionally modified to evade detection. The tools' accuracy, already low at 39.5%, dropped further to 17.4% on manipulated content, suggesting that some evasion techniques are particularly effective. This significant decrease in accuracy, coupled with the risk of false accusations, suggests that GenAI detectors are not yet reliable for confirming academic integrity violations. The research points towards a nuanced approach to GenAI in higher education (HE), advocating for the use of these tools in a supportive, non-punitive manner while highlighting the need for alternative assessment strategies that accommodate the limitations of these technologies. It raises critical concerns about inclusivity, especially regarding non-native English speakers and those with limited access to technology, who might be unfairly disadvantaged by the current use and potential biases of these detectors. The study concludes that while GenAI detectors aim to promote fairness and integrity, their current limitations and potential for misuse necessitate a careful, critical approach to their implementation in educational settings, emphasizing the need for policies that ensure equitable and responsible use of AI technologies.
Perkins, M., Roe, J., Vu, B. H., Postma, D., Hickerson, D., McGaughran, J., & Khuat, H. Q. (2024). GenAI Detection Tools, Adversarial Techniques and Implications for Inclusivity in Higher Education. arXiv preprint arXiv:2403.19148.
📰 In the News
CNN
Teachers are using AI to grade essays. But some experts are raising ethical concerns ↗️
Key takeaways:
Diane Gayeski, a professor at Ithaca College, uses AI, specifically ChatGPT, to grade essays by providing initial feedback and improvement suggestions to her students, who are also encouraged to use the tool for revising their drafts.
The use of AI in education is expanding, with a report indicating that 50% of college students and 22% of faculty members utilized AI tools in the Fall of 2023 for various purposes, including grading, feedback, lesson planning, and assignment creation. However, there are concerns about accuracy, plagiarism, and integrity.
Ethical considerations arise from using AI for grading and feedback, especially regarding the personal connection between teachers and students and potential infringement on students' intellectual property. Experts caution against uploading sensitive works like dissertations to AI tools without student consent due to the risk of these works being used to train AI algorithms.
Discussions on formulating policies around the use of AI in educational settings are ongoing, with emphasis on the need for transparency, consent, and the cautious development of guidelines to prevent oversimplification and ensure that policies reflect the nuanced realities of teaching and learning with AI.
EdSurge
Can Using a Grammar Checker Set Off AI-Detection Software? ↗️
Key takeaways:
Marley Stevens, a junior at the University of North Georgia, went viral on TikTok for highlighting her ordeal of being accused of cheating due to her use of Grammarly, a grammar-checking software. Her university accused her of using AI to write a paper, impacting her scholarship eligibility and placing her on academic probation.
The controversy stems from AI-detection tools like Turnitin flagging her paper as written by AI, despite Stevens insisting the work was her own, aided only by Grammarly's grammar and spell-check features. This incident has sparked a broader conversation on the reliability of AI-detection systems and their implications for students.
Stevens' situation has led to widespread attention, with her sharing evidence online to prove her case and raising funds through GoFundMe for potential legal action against her university. Her story has resonated with other students facing similar accusations, highlighting systemic issues with current AI-detection methodologies.
The incident raises critical questions about the balance between utilizing AI for academic assistance and the threshold at which such use is considered cheating. It also underscores the need for clearer policies and an understanding of AI tools' capabilities and limitations, both from educational institutions and AI-detection service providers.
And that’s a wrap for this week’s newsletter! Please share in the comments below if you have any helpful resources or thoughts on the environmental impact of AI.
If you enjoyed our newsletter and found it helpful, please consider sharing this free resource with your colleagues, educators, administrators, and more.