Sen. Rick Scott (R-Fla.) is pushing his Artificial Intelligence Shield for Kids (ASK) Act, which aims to protect kids from harm. But it needs some tweaks – its definitions of both a “child” and AI are problematic.
Lately, AI has been getting a lot of attention. ChatGPT has drawn people in with its human-like language, although it has flaws, such as hallucinating articles and making false accusations. AI does come with risks – it has been used for criminal purposes, and some have wildly suggested it could lead to human extinction.
But it’s done a lot of good too! It helped develop a drug to fight dangerous bacteria, can help farmers protect their crops from pests, has written church sermons and can place drive-through orders and answer medical questions. It can identify heart failure, improve dentistry and help paralyzed humans regain mobility – even helping The Beatles finish their last song!
Kids need protecting, no doubt. AI can do great things, but not all of it is suitable for kids – some AI in social media can even be addictive! Parents need tools to help guide, monitor and protect their children, so app and website developers should provide them.
Scott’s ASK Act rests on two fundamentals: parental permission and feature disablement. It targets numerous types of computing systems beyond the potentially harmful AI in social media that is its presumed target. It is so broad that it would likely harm computer science and digital citizenship education.
First off, who does the law apply to? Everyone under 18 gets the full treatment – no exceptions for emancipated minors, 17-year-old military enlistees or bright students going to college early. We let kids learn to drive at 15 and get their license at 16 – why not provide graduated access to covered AI systems with increasing responsibility as they mature?
The second consideration is what technologies are covered. The ASK Act uses the 2019 John S. McCain National Defense Authorization Act’s broad definition of AI, which wasn’t designed to protect youth. This definition includes systems that “perform tasks with minimal human oversight,” can “learn from experience and improve performance,” are designed to “act like humans,” can “approximate cognitive tasks” and solve tasks that require “human-like perception, cognition, planning, learning, communication or physical action.” It also includes systems that “act rationally” and achieve goals using things like “perception, planning, reasoning, learning, communicating, decision making and acting.”
This law wouldn’t just stop organizations from using AI (or even software that isn’t AI) for good purposes like preventing cyberbullying and improving healthcare. It would also limit AI’s use in education, video games, art and computer science instruction, and it would create statutory liability for merely exposing minors to AI. If a kid wrote an AI program for a class assignment, for example, and shared it with another kid without that child’s parent’s permission – bam – they’d be breaking the law!
This bill has a great goal, but it needs more nuance before it becomes law. It should exempt routine educational uses such as AI-driven adaptive learning, digital citizenship and computer science programming. AI algorithms designed to protect kids, along with code-hosting and developer-instruction sites like GitHub, should also be excluded so that all youth have access to these resources. And it should account for kids’ different maturity levels, as the Children’s Online Privacy Protection Rule and the Children and Teens’ Online Privacy Protection Act do.
Enacting this bill as written would have huge consequences – it could make education harder and prevent some kids from learning about and benefiting from AI technology. There could also be First Amendment issues with its content regulations. A blanket parental consent requirement makes little sense when education is key to keeping up with foreign competitors and preparing for a changing job market.