By Dave DeFusco
At school, work or even in sports, people are used to the idea that higher scores mean better performance. Top of the class? Straight A’s. Best player? Most points. But what if rankings based only on final scores miss the full story? What if the student with the highest GPA isn’t necessarily the best, depending on how consistent they were or how much luck played a role?
That’s the question a team of researchers led by Dr. David Li, director of the Katz School’s M.S. in Data Analytics and Visualization, set out to answer. Alongside lead authors Xinyan Cui, a student in the M.S. in Artificial Intelligence, and Angela Li of the Applied Mathematics & Physics Department at Stony Brook University, the team presented their work this March at the 59th Annual Conference on Information Science and Systems at Johns Hopkins University.
Their research, “When a Straight-A Student Isn’t the Best: Fuzzy Ranking and Optimization from a Probabilistic Perspective,” introduces a more flexible, realistic way to rank people, teams or products, especially when there’s a lot of uncertainty involved.
“We’re often asked to rank things as if we have perfect information,” said Cui. “But in real life, we don’t.”
Take GPA, for example. A student with a 4.0 average might look like the best. But what if another student earned mostly A’s in much harder courses or under a tougher grading system? Traditional ranking systems like GPA treat every score as directly comparable and perfectly reliable, which is rarely the case.
The same goes for sports teams, hiring decisions or even online product reviews. “The reality is, data can be noisy, incomplete or vary depending on context,” said Angela Li, co-lead author. “We wanted a ranking method that reflects that complexity.”
The team’s solution: fuzzy rankings grounded in probability theory. Instead of saying definitively, “this person is number one,” the fuzzy method says, “this person is most likely to be number one, but here’s the range of possibilities.”
Here’s how it works:
- Each person, product or team is treated like a bell curve, also known as a Gaussian distribution, with an average performance score and a measure of how consistent that performance is.
- The system then runs comparisons between these distributions to estimate how likely it is that one entity ranks higher than another.
- The result is a probabilistic ranking—a more flexible view that accounts for both performance and variability.
“It’s like saying, ‘this student has a 75% chance of being in the top three,’ not just assigning them a fixed rank,” said Cui.
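The article doesn’t reproduce the paper’s formulas, but the pairwise comparison described in the list above has a standard closed form: for two independent Gaussians, the probability that one exceeds the other is Φ((μ₁ − μ₂)/√(σ₁² + σ₂²)), where Φ is the standard normal CDF. Below is a minimal Python sketch of the idea; the student names and numbers are made up for illustration, not taken from the paper, and the Monte Carlo step is one common way (not necessarily the authors’) to turn those comparisons into rank probabilities like “chance of finishing in the top three.”

```python
import numpy as np
from math import erf, sqrt

# Hypothetical students: (average score, standard deviation).
# The spread measures how consistent the performance is.
students = {
    "Ana":   (92.0, 2.0),   # strong and very steady
    "Ben":   (95.0, 8.0),   # higher peak, wider swings
    "Chloe": (90.0, 1.0),   # a bit lower but extremely consistent
    "Dev":   (88.0, 5.0),
}

def prob_beats(a, b):
    """P(A > B) for independent Gaussians A ~ N(mu_a, sd_a^2), B ~ N(mu_b, sd_b^2)."""
    (mu_a, sd_a), (mu_b, sd_b) = a, b
    z = (mu_a - mu_b) / sqrt(sd_a**2 + sd_b**2)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Pairwise fuzzy comparison: Ben's higher average only buys him ~64% odds vs. Ana.
print(f"P(Ben > Ana) = {prob_beats(students['Ben'], students['Ana']):.2f}")

# Monte Carlo: sample one score per student per trial, rank them, tally the ranks.
rng = np.random.default_rng(0)
names = list(students)
mus = np.array([students[n][0] for n in names])
sds = np.array([students[n][1] for n in names])

n_trials = 100_000
draws = rng.normal(mus, sds, size=(n_trials, len(names)))
ranks = np.argsort(np.argsort(-draws, axis=1), axis=1)  # 0 = best in that trial

for j, name in enumerate(names):
    p_first = np.mean(ranks[:, j] == 0)
    p_top3 = np.mean(ranks[:, j] < 3)
    print(f"{name}: P(rank 1) = {p_first:.2f}, P(top 3) = {p_top3:.2f}")
```

Even in this toy setup, the tallies make the uncertainty explicit: Ben, despite the highest average, is far from guaranteed the top spot, which is exactly the nuance a single fixed rank hides.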
Behind the scenes, the system uses advanced optimization techniques. Think of it like adjusting dials on a machine to get the clearest picture possible. The researchers combine:
- an initial guess, a starting point based on the data;
- a gradient descent algorithm, a common method in AI for gradually finding the best solution; and
- constraints to ensure the results stay realistic and within expected ranges.
This process helps them fine-tune the model to find the most accurate probabilistic rankings possible.
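The article doesn’t spell out the exact objective being optimized, so the sketch below applies the same three-part recipe (initial guess, gradient steps, constraints) to a simple stand-in task: fitting one student’s average and spread to a noisy score history by maximum likelihood, with a projection step enforcing the constraints. All numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical score history for one student (illustrative, not from the paper).
scores = rng.normal(88.0, 5.0, size=40)

# 1) Initial guess: a deliberately rough starting point.
mu, sigma = 70.0, 10.0

# 3) Constraints that keep the estimates realistic.
MU_LO, MU_HI = 0.0, 100.0  # scores live on a 0-to-100 scale
SIGMA_MIN = 1e-3           # the spread must stay strictly positive

lr = 0.5  # step size
for _ in range(5000):
    resid = scores - mu
    # 2) Gradient descent on the Gaussian negative log-likelihood,
    #    averaged over observations so the step size is easy to pick.
    grad_mu = -resid.mean() / sigma**2
    grad_sigma = 1.0 / sigma - (resid**2).mean() / sigma**3
    mu -= lr * grad_mu
    sigma -= lr * grad_sigma
    # Project back onto the constraint set after each step
    # (this makes it projected gradient descent).
    mu = float(np.clip(mu, MU_LO, MU_HI))
    sigma = max(sigma, SIGMA_MIN)

# The estimates should land near the values the data were drawn from (88 and 5).
print(f"estimated average = {mu:.1f}, estimated spread = {sigma:.1f}")
```

The projection step is the “dial guard”: after every update the average is clipped back into the plausible score range and the spread is kept positive, so the optimizer can search freely without wandering into unrealistic territory.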
To test their system, the team applied it to a case study that mimicked real-life complexity. In their example, not everyone had complete data. Some had better scores but wider performance swings; others were more consistent but didn’t reach the same peaks.
“Our fuzzy ranking method didn’t just spit out one answer; it gave us a whole picture,” said Angela Li. “We could see not only who performed well, but how reliable those performances were.”
That kind of insight, she said, can make a big difference when the stakes are high, like choosing scholarship recipients, hiring candidates or awarding grants.
Dr. Li said this work has important implications far beyond academia.
“Rankings are everywhere, from college admissions to employee evaluations to product recommendations on Amazon,” he said. “But most of them don’t account for uncertainty. They pretend we know everything.”
Fuzzy rankings, by contrast, embrace what we don’t know. “Our method gives people a more honest and transparent way to make decisions,” he said. “It’s not about throwing away scores; it’s about adding context.”
That context can help reduce controversy, especially in situations where traditional rankings feel arbitrary or unfair. By visualizing probability ranges, stakeholders can better understand how final decisions were made. In a world that values rankings, this research reminds us that “best” isn’t always black-and-white.
“We’re not saying fuzzy rankings should replace all systems,” said Cui. “But in situations with lots of uncertainty or incomplete data, they can provide a richer and fairer way to evaluate. It’s a step toward making our systems smarter—and more human.”