Members of the Joint Standing Committee on the Judiciary discussed the potential uses of, and concerns about, artificial intelligence (AI) technology with representatives from tech companies including Microsoft, DataRobot and ROC.AI during an interim meeting Monday morning.
“It’s inevitable at some point in time, you will probably need to start thinking about some rulemaking in this space,” Scott Swann, CEO of ROC.AI, said. “And so as I talk to you, really the message I want to throw to you is just to give you a little bit better understanding that not all AI is bad, but there are absolutely things you should probably be concerned about.”
He told the committee that AI programs like ChatGPT take in vast amounts of information for pattern recognition, which could be used to analyze documents, bolster school security and recognize license plates in traffic camera footage.
But the technology also comes with privacy concerns and questions about what’s actually inside the programs’ code.
Swann spoke about the origins of the “AI supply chain” and the need to be wary of “black boxes” from countries in the technological arms race, such as China and Russia.
He previously worked for the FBI, helping create its Next Generation Identification biometric program for criminal identification.
“The problem is that if you train these kinds of algorithms, then you have the power to put in these embedded rules, so no one is actually going to be able to scan for that,” Swann said.
But another panelist, Ted Kwartler of the AI company DataRobot, disagreed. He argued that, in the near term, many of these new programs are manageable with the right know-how.
“I don’t think that AI is really a black box,” Kwartler said. “And I know that’s a hot take. But I think that if you are technical, or that it’s explained to you in the way that you can understand it, and it’s contextualized, anyone in this room, by the end of today, I can get them running code to actually build it out.”
Del. Chris Pritt, R-Kanawha, said he is concerned about how to regulate such technology.
“If nobody who’s in charge of enforcing this has the skills, I mean, if it’s so unique, it’s so emerging, that nobody can enforce those guardrails, what’s the solution?” Pritt asked.
Other committee members, like Del. Evan Hansen, D-Monongalia, have experimented with using ChatGPT to write proposals. Hansen asked about the ethics of using AI in the policymaking process moving forward.
“Is it ethical or okay for state employees to use ChatGPT to write a proposal or write a report?” Hansen asked. “Or is it okay for vendors for the state of West Virginia to do that? Are there states that are regulating that? And if so, where’s the line?”
“I think this body would have to think about what makes sense for them,” Kwartler said.
Earlier this year, the West Virginia Legislature passed House Bill 3214. The law creates a pilot program that will use AI to collect data on the condition of state roads.