ChatGPT in Classrooms: How to Make it the Next Big Thing and Not the Next Big Problem

Date: Monday, 3 July 2023

With generative artificial intelligence platforms like ChatGPT looming large over U.S. classrooms, Jeffrey Ellis-Lee, an Advanced Placement (AP) government teacher at the Maxine Greene High School for Imaginative Inquiry in Manhattan, decided to conduct an experiment. He fed essay questions from previous years’ AP government exams into ChatGPT to see how it would do. The answers, much to his chagrin, were perfect. So, he gave his students an assignment: Take the seven-point rubric for grading the AP exam and make ChatGPT’s answers better.

And they did.

Ellis-Lee is one of millions of educators across the country grappling with the same issue: What should they do with generative artificial intelligence, like ChatGPT, in the classroom?

Ban it? That was the reaction to calculators in the 1980s, email in the 1990s and Wikipedia in the 2000s. Now, calculators are permitted on the SAT and ACT exams, and email and Wikipedia are standard teaching tools.

When New York City Public Schools tried to ban ChatGPT and other generative AI tools, students brought in hotspots to bypass the schools' internet and access them anyway. The district has since walked back the ban.

The answer is to embrace it and to demand that AI developers adhere to a set of carefully crafted, ethical and accountability-rich regulations for the use of AI in our schools.

AI can revolutionize the way teachers teach and kids learn, but without the appropriate oversight and meaningful accountability, the pitfalls — such as data privacy violations, pre-programmed prejudice and unequal access — will not only cancel out the benefits, but also will harm students. If we have learned anything from the impact of social media on our kids’ well-being — which is so significant that the U.S. surgeon general issued an advisory warning regarding the mental health risks for adolescents — it’s that we can’t make the same mistake with generative AI.

Policymakers have to get ahead of the risks. They cannot play legislative catch-up with a tool so profoundly influential that the CEOs of the world’s leading artificial intelligence companies are warning that AI could replace humans entirely.

At the moment, the creators of generative AI models cannot fully explain some of the technology's greatest pitfalls, including why it sometimes generates false information, or how it will recognize deepfakes, manipulated videos or images that seem real but aren't. Racial and cultural biases are already appearing in AI algorithms, producing facial recognition programs that only recognize white faces, crime prediction programs that zero in on Black and Latinx faces, and robots that identify women as homemakers.

Policymakers need to develop regulations — with accountability — quickly, and they need to include educators in the process. Without educators’ input, programs could result in diminished teacher-student interaction time, leading to isolation and slowed emotional growth. Kids in low-income districts may not have the hardware or the internet access to take advantage of generative AI, and there could be no way of ensuring the information the programs are using is even accurate.

The American Federation of Teachers (AFT) is committed to representing educators, students and our schools in the development of AI policy, and in providing the professional development and tools educators need to use it. We are not going to let tech companies use our kids as guinea pigs again — we saw how that worked with social media. This time around, we demand real regulations with consequences attached.

First, we need assurances that our students' data, their families' data and teachers' data are secure. If AI is going to be in our classrooms, we need to know that the highest levels of data security and privacy come with it, and that there are consequences for misusing that data.

Next, we need to set up safeguards against bias. All AI systems need to be trained and tested on data that includes everyone and treats everyone equally — and users need to be able to provide feedback on any perceived bias.

AI's educational value must be safeguarded, and the technology must be accessible so that all students can use it, regardless of ability, background or learning style. Along the same lines, AI should promote collaborative learning, and its educational impact must be objectively evaluated by a third party.

Overall, each of the guidelines points to one thing: human oversight and decision-making. AI should augment, not replace, human educators — and it should be regulated by humans to ensure accuracy, equity and accessibility.

AI is here. Our kids are already using it, and its role in our lives is only going to expand. Educators can learn to harness its strengths and teach our kids how to benefit from it, much like Jeffrey Ellis-Lee did. Generative AI is the “next big thing” in our classrooms, but developers need a set of checks and balances so it doesn’t become our next big problem.

Source: The Messenger
