The Software and Information Industry Association (SIIA), a trade association comprising more than 380 global tech companies, has released guidelines for the responsible development of artificial intelligence (AI) tools for education. The document, "Principles for the Future of AI in Education," prioritizes civil rights, inclusion, and educational equity as critical considerations when implementing AI technologies in K-12 schools, colleges, and universities.
One of the primary objectives outlined in the guidelines is the protection of student privacy and data. The SIIA stresses the importance of making AI technologies as transparent as possible so that both students and teachers can fully grasp the tools they are utilizing. Furthermore, the guidelines encourage educational technology (ed-tech) developers to collaborate with schools and institutions to demystify the risks and opportunities associated with new AI technologies in education. The SIIA emphasizes the necessity for the ed-tech industry to work closely with the education community, promoting AI literacy among students and teachers alike.
SIIA President Chris Mohr stressed the critical role AI plays in the learning environment and the significance of developing principles to govern the advancement and deployment of these innovative technologies.
"With AI being used by many teachers and educational institutions, we determined it was critical to work with the education technology industry to develop a set of principles to guide the future development and deployment of these innovative technologies,” he said. “Partnering with teachers, parents, and students will be critical to improving educational outcomes, protecting privacy and civil rights, and understanding these technologies. I commend our member companies who embraced this initiative to collaborate and for their commitment to support our children and teachers.”
The SIIA principles were developed by the AI in Education Steering Committee, which comprises prominent ed-tech developers, including AllHere, ClassDojo, Cengage, D2L, EdWeb.net, GoGuardian, InnovateEDU, Instructure, MIND Education, McGraw Hill, and Pearson. Their extensive expertise in the field contributed to developing these guidelines, reflecting the ongoing conversation surrounding AI and children's privacy in Congress and legislative bodies at multiple levels.
Sara Kloek, vice president of education and children's policy at SIIA, emphasized the association's mission to promote responsible AI use, safeguard student privacy, ensure educational equity, uphold civil rights, and foster the development of crucial future skills.
"AI and kids' privacy have dominated the conversation in Congress and in the states this year," Kloek said in a public statement.
"As the trade organization representing the leading companies in ed tech, it is our mission to advance the responsible use of AI to enhance a learner's educational experience while at the same time protecting their privacy, promoting educational equity, upholding civil rights, and developing essential skills for the future."
The release of these guidelines coincides with the increasing adoption of AI in K-12 schools and universities to enhance instruction. Policymakers at the state and federal levels are actively discussing how to regulate AI tools in the future.
Teddy Hartman, head of privacy and data policy at GoGuardian, underscored the responsibility of ed-tech companies to ensure student safety and well-being. "AI holds immense promise to help educators personalize instruction to the needs of every individual student at a scale that hasn't, until now, been possible," Hartman said. "Today's industry-wide commitments are an important step toward ensuring the responsible use of AI in K-12 classrooms."
SIIA's guidelines for the responsible development of AI tools in education are intended to prioritize educational equity, inclusion, and civil rights. They emphasize the importance of protecting student privacy and data, promoting transparency, demystifying AI technologies, and fostering AI literacy among students and teachers. Ed-tech companies bear a crucial responsibility to prioritize student safety and well-being as they develop AI tools for the educational environment. The guidelines reflect ongoing discussions on AI and children's privacy at various legislative levels and coincide with the increasing adoption of AI in educational institutions.