For anyone who’s interacted with an AI chatbot or generative AI tool, there’s often a moment of bewilderment at how advanced these applications have become. Even when the results appear strange or unexpected, it all raises the question: How did the AI come up with that?

In many cases, it’s a question that also happens to defy explanation, but that could be changing.

That’s because amidst the current arms race to roll out increasingly powerful and robust AI technologies, there’s a parallel effort underway. Largely independent of the engineers who are developing the applications themselves, this secondary effort involves the work of mathematicians looking to augment our understanding of how these AI entities actually work — or more specifically, how they arrive at the decisions they make.

“The demand for explainability is increasing,” Dr. Barnabas Bede, director for the BS in Computer Science in Machine Learning degree program at DigiPen, says. “Many critical applications, like in the military or the medical field, require explainability. No military person will trust an AI that makes a decision that is not transparent about it, and same for medical applications.”

As one of those mathematicians seeking solutions to the explainability problem, Bede — along with a handful of his colleagues both at DigiPen and beyond — has been leading the charge into one particular avenue of inquiry, using a branch of mathematics he knows better than almost anyone: fuzzy systems.

Fuzzy systems are a type of mathematical framework, first developed in the 1960s, used to model or describe uncertainty, particularly in scenarios where standard probability or binary logic don’t readily apply. Fuzzy systems work best, Bede says, when applied to situations that rely on rules-based reasoning, such as control systems or — as it turns out — artificial intelligence.

“Fuzzy systems are good for showing an if-then connection, a cause-and-effect relation,” Bede says.
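
As a toy illustration (not drawn from Bede’s work), consider a thermostat-style rule base: “if the room is cool, run the fan low; if it is warm, run it medium; if it is hot, run it high.” Membership functions grade how true each “if” condition is, and the output blends the rules in proportion to how strongly they fire. Below is a minimal sketch in Python, assuming triangular membership functions and weighted-average defuzzification; the numbers are invented for the example.

```python
# Toy fuzzy if-then controller: "IF temperature is COOL THEN fan is low;
# IF temperature is WARM THEN fan is medium; IF temperature is HOT THEN fan is high."
# Membership shapes and outputs are made up for this illustration.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def memberships(temp_c):
    """Degree to which each linguistic label applies (the rules' IF parts)."""
    return {
        "cool": triangular(temp_c, 10, 18, 24),
        "warm": triangular(temp_c, 18, 24, 30),
        "hot":  triangular(temp_c, 24, 30, 40),
    }

# Rule consequents: the fan speed (%) each rule recommends.
consequents = {"cool": 20.0, "warm": 55.0, "hot": 90.0}

def fan_speed(temp_c):
    mu = memberships(temp_c)
    total = sum(mu.values())
    if total == 0:
        return 0.0
    # Weighted average of rule outputs: each rule contributes in
    # proportion to how strongly its IF part fires.
    return sum(mu[label] * consequents[label] for label in mu) / total

for t in (16, 22, 27, 33):
    print(f"{t} C -> fan {fan_speed(t):.1f}%")
```

Every output traces back to a few named rules and their firing strengths, which is exactly the kind of inspectable reasoning that black-box models lack.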

In 2019, Bede published his first paper exploring a promising connection between neural networks — one of the foundational models used as the backbone for many of today’s groundbreaking AI applications — and a particular type of fuzzy system known as the Takagi-Sugeno fuzzy system. By incorporating the Takagi-Sugeno methodology into a neural network, Bede demonstrated the potential for an interpretable neural network system, in which the AI decision outputs could actually be explained.
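
In its textbook first-order form (stated here generically; the exact architecture in Bede’s paper may differ in detail), each Takagi-Sugeno rule pairs a fuzzy “if” condition with a simple linear model, and the system output is the membership-weighted blend of those local models:

```latex
% Rule i:  IF x is A_i  THEN  y_i = a_i^{\top} x + b_i
% System output: membership-weighted average of the local linear models
y(x) = \frac{\sum_{i=1}^{n} \mu_{A_i}(x)\,\bigl(a_i^{\top} x + b_i\bigr)}{\sum_{i=1}^{n} \mu_{A_i}(x)}
```

Because each rule is a readable if-then statement with a simple local model, listing which rules fired, and how strongly, for a given input amounts to an explanation of the output.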

“All the AI methods that are now popular, these large language models like ChatGPT and all the generative AI that we see, generating images like with DALL-E — those are neural networks at their core,” Bede says.

Soon after publishing his first findings, Bede partnered with DigiPen assistant professor Peter Toth from the Department of Computer Science, as well as Dr. Vladik Kreinovich from the University of Texas at El Paso, to continue his research.

“We published a paper where we proved that one layer of a neural network is equivalent to one layer of a Takagi-Sugeno fuzzy system,” Bede says, referring to a publication presented at the 2023 annual meeting of the North American Fuzzy Information Processing Society (NAFIPS), a group for which Bede recently served as president.

This past May, the same authors took their findings yet another step further, presenting once again at this year’s NAFIPS conference in South Padre Island, Texas. Their newest paper, titled “Equivalence between TSK Fuzzy Systems with Triangular Membership Functions and Neural Networks with ReLU Activation on the Real Line,” received an Outstanding Paper award from the conference.
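
The intuition behind that result, sketched informally here rather than reproducing the paper’s proof, is that a ReLU network of one real variable computes a continuous piecewise-linear function, and a TSK system whose triangular membership functions sit at the network’s breakpoints and form a partition of unity reproduces exactly the same piecewise-linear shape. The following is a small numerical check of that idea, assuming zeroth-order (constant-consequent) TSK rules; it is an illustration, not the construction used in the paper.

```python
# Illustration: on an interval, a one-hidden-layer ReLU network of one variable
# is piecewise linear, and a zeroth-order TSK system with triangular memberships
# centered at the network's breakpoints reproduces the same function.
import numpy as np

def relu_net(x, knots, weights, bias):
    """f(x) = bias + sum_j weights[j] * max(0, x - knots[j]); piecewise linear."""
    return bias + sum(w * np.maximum(0.0, x - k) for w, k in zip(weights, knots))

def triangular_membership(x, i, centers):
    """Triangular membership for the i-th center (shouldered at the ends),
    so the memberships form a partition of unity on [centers[0], centers[-1]]."""
    mu = np.zeros_like(x, dtype=float)
    if i > 0:                                  # rising edge from the left neighbor
        left = centers[i - 1]
        m = (x >= left) & (x <= centers[i])
        mu[m] = (x[m] - left) / (centers[i] - left)
    if i < len(centers) - 1:                   # falling edge to the right neighbor
        right = centers[i + 1]
        m = (x >= centers[i]) & (x <= right)
        mu[m] = (right - x[m]) / (right - centers[i])
    mu[np.isclose(x, centers[i])] = 1.0
    return mu

def tsk_output(x, centers, consequents):
    """Weighted average of constant rule consequents (zeroth-order TSK)."""
    mu = np.array([triangular_membership(x, i, centers) for i in range(len(centers))])
    return (mu * np.asarray(consequents)[:, None]).sum(axis=0) / mu.sum(axis=0)

knots = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # ReLU breakpoints on [0, 1]
weights = np.array([1.0, -2.0, 1.5, -0.5, 0.3])
bias = 0.2

# One TSK rule per breakpoint: "IF x is near knot_i THEN y = f(knot_i)".
consequents = relu_net(knots, knots, weights, bias)

x = np.linspace(0.0, 1.0, 501)
gap = np.max(np.abs(relu_net(x, knots, weights, bias) - tsk_output(x, knots, consequents)))
print(f"max |ReLU net - TSK system| on [0, 1] = {gap:.1e}")   # effectively zero
```

Under the sketch’s assumptions, the printed gap sits at the level of floating-point noise, an informal illustration of the kind of correspondence the paper establishes rigorously.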

Although he admits the field of research is a long way from being able to plainly decipher the labyrinthine rules and decision patterns for the most complex large language models, Bede says he can envision a scenario in which the decision output for even those AI-driven applications could be retroactively analyzed with a strong level of confidence.

“That’s the main goal, to explain what a neural network’s decisions are, what AI is deciding under the hood,” Bede says.

DigiPen Duo Shoots to the Finals in Explainable AI Competition

Meanwhile, Bede and Toth aren’t the only ones at DigiPen making strides toward explainable AI innovations, as evidenced by two additional sessions from the NAFIPS 2024 conference.

Seniors Leo Huang and Garret Obenchain, both from the BS in Computer Science in Machine Learning program, presented on an AI agent they had developed and trained as part of their year-long Computer Science Project (CSP) class at DigiPen. The two students, who went by the team name 360° No Slope, created their AI system to compete last March in the 2024 Explainable Fuzzy Challenge (XFC), an annual college competition hosted by the University of Cincinnati, in conjunction with Thales USA and Complex Engineering Systems Journal.

The XFC competition invites student teams to program an original AI agent to control the spaceship in a computer simulation titled “Asteroid Smasher,” loosely based on the classic Asteroids arcade game. Teams deploy their autonomous agents in a series of live head-to-head matches, competing for the highest score under a rule set that can change from year to year, with points awarded for successful AI behavior such as destroying asteroids and avoiding collisions. Teams are also judged on the explainability of their AI agents, with an opportunity to present their methodologies to a juried panel.

Huang and Obenchain finished second overall after making it to the final round, facing off against the winning team from the University of Alberta.

“They made a really great agent,” Bede, who advised the 360° No Slope team, says. “They did a lot of research, and they were also in my deep learning class, which is basically a neural networks class. It was actually amazing to see how anything I was teaching in the deep learning class, they tried out immediately in this environment to see whether it worked.”

The 360° No Slope team was one of two DigiPen teams to compete in this year’s XFC event, the other team comprising current juniors Sam Mbugua, Nathan Delcampo, and David Mann. This marked the third year that a DigiPen team had competed, with a prior team winning the first-place XFC prize in 2022.

MSCS Grad Merges Fuzzy AI with Physics Simulation

Also at the NAFIPS conference, MS in Computer Science graduate Mike Riches (2023) presented his paper, “Explainable Fuzzy AI for Physics Simulation,” which he co-authored with Bede and DigiPen Dean of Faculty Erik Mohrmann.

That paper was based on Riches’ thesis project exploring problems relating to rigid-body collisions within physics simulation engines.

“In physics engines, there is a parameter that controls how to solve a physics simulation so that objects do not penetrate each other,” Bede says, noting that sometimes those parameters fail, causing objects to slip into one another in a phenomenon known as clipping. “Previous approaches had a constant for the coefficient,” he says, referring to the mathematical calculations used to prevent clipping. “Mike used a fuzzy system to predict what’s the best coefficient in this simulation, and then he made the system explainable so that we could explain what’s going on with the physics.”
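
A hypothetical sketch of that idea follows; the input variables, ranges, and rule values are invented for illustration and are not taken from Riches’ paper. A small fuzzy rule base reads the state of a contact, for example how deeply two objects overlap and how quickly they are approaching, and outputs a correction coefficient tuned to that situation, while the rules that fired double as an explanation of why that value was chosen.

```python
# Hypothetical sketch: a fuzzy rule base picks a per-contact penetration-correction
# coefficient instead of a hard-coded constant. Inputs, ranges, and rule values are
# illustrative only, not the formulation in the paper.

def tri(x, a, b, c):
    """Triangular membership peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def correction_coefficient(penetration_depth, closing_speed):
    """Rules of the form: IF penetration is deep AND closing speed is fast,
    THEN use a stronger correction coefficient."""
    depth = {  # membership of penetration depth (scene units)
        "shallow": tri(penetration_depth, -0.01, 0.0, 0.02),
        "deep":    tri(penetration_depth, 0.0, 0.05, 0.10),
    }
    speed = {  # membership of closing speed (units per second)
        "slow": tri(closing_speed, -0.1, 0.0, 2.0),
        "fast": tri(closing_speed, 0.0, 5.0, 10.0),
    }
    rules = [  # (firing strength of the IF part, coefficient suggested by the rule)
        (min(depth["shallow"], speed["slow"]), 0.1),
        (min(depth["shallow"], speed["fast"]), 0.3),
        (min(depth["deep"],    speed["slow"]), 0.4),
        (min(depth["deep"],    speed["fast"]), 0.8),
    ]
    total = sum(strength for strength, _ in rules)
    if total == 0.0:
        return 0.2  # fall back to a default constant when no rule fires
    # Weighted average of rule outputs; the strongest rules explain the choice.
    return sum(strength * coeff for strength, coeff in rules) / total

print(correction_coefficient(penetration_depth=0.04, closing_speed=6.0))
```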

Overall, Bede says he was pleased to see such a strong showing of DigiPen-based research at this year’s event and to connect with all of the other conference attendees who are doing important work in the field of explainable AI.

“It was good to see everyone at the NAFIPS conference,” he says. “It was quite fun.”