DeKALB – A paradigm shift spurred by emerging artificial intelligence technologies has prompted changes in all facets of society, two Northern Illinois University professors said last week.
The change has been in the works for at least a generation, said David J. Gunkel, a presidential research professor of media studies, and Andy Jeon, an emerging technologies researcher and assistant professor of marketing.
In the 1950s, Arthur Samuel programmed software to play checkers against a human. In 1962, the program beat a renowned checkers player. And 35 years later, Deep Blue, a supercomputer built by IBM, beat reigning world chess champion Garry Kasparov.
In 2011, Watson, IBM’s question-answering system, beat “Jeopardy” champions Brad Rutter and Ken Jennings in a televised competition.
That cycle of artificial intelligence growth continues today.
Gunkel said that between 1962 and 2015 – when the computer program AlphaGo beat a professional player of the strategy board game Go – people began crediting such wins to the AI programs themselves instead of their creators.
No longer was it Samuel’s software beating a human player at checkers. Instead, it was IBM Watson itself beating “Jeopardy” champions at their own game.
“Initially, people didn’t know where to assign responsibility for the [Go] win. The engineers said, ‘No, it’s not us,’ ” Gunkel said. “People then found themselves unable to attribute the win to some entity, and if you look at what happened on the reporting on the subject, eventually it was decided AlphaGo was the winner – that we would assign the win to the algorithm. AlphaGo is named as one of the championship players on the Go rosters worldwide.”
Gunkel and Jeon gave presentations about AI on Oct. 4 to a crowd of a few dozen in a back room at Fatty’s Pub and Grille in DeKalb.
Jeon said generative AI can produce new and original outputs – content based on patterns learned from vast data sets.
ChatGPT, a well-known generative AI system, generates text from the data it was trained on, including web sources such as Google, transforming the patterns it finds in that data into a unique output. Because the algorithm’s handling of input data is constantly being adjusted, however, Jeon said that even the engineers who developed ChatGPT don’t fully understand everything the program does to generate its output.
“That’s the scary part, and that’s why it’s cutting-edge technology,” Jeon said. “And that’s the thing about why we need to be careful about uses of ChatGPT – because we don’t know where it’s going to be, actually. We need to research a lot before we can responsibly use this technology.”
Jeon and Gunkel both advocated for more research on the topic and said federal regulation could become necessary as the use of AI grows.
Although the new technology has gained notoriety for seemingly completing tasks without human help, Gunkel said, artificial intelligence has a dirty secret.
“This is another dirty secret of OpenAI and these generative AI systems: The language, the way they get the AI to do that, is they have human people check the algorithm and tag various things,” Gunkel said. “The tagging is offshore. It’s done in Kenya, and it’s done by people who are paid pennies to spend time staring at screens tagging [sometimes explicit content]. So there’s a lot of human labor behind the AI.”