Researchers develop tool to bring transparency into AI
25 October 2023
Hi-network.com
Researchers at Columbia and Lehigh universities have developed a tool, called DeepXplore, that could help bring transparency into artificial intelligence (AI) systems. Currently, AI systems often make decisions without humans being able to understand how those decisions are made and why one solution is chosen over another. DeepXplore works to expose flaws in a neural network by tricking it into making mistakes; it does so by feeding confusing inputs into the network to expose cases of flawed reasoning by clusters of neurons. The researchers describe this process as 'reverse engineering the learning process to understand its logic'. While DeepXplore cannot certify that an AI system is bug-free, the researchers plan to keep improving it, 'to open the black box and make machine learning systems more reliable and transparent'.
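The core idea can be illustrated with a toy differential test. The sketch below is not the authors' code; it uses two tiny, randomly initialised networks as hypothetical stand-ins to show how disagreeing predictions on perturbed inputs can flag suspect behaviour, while also tracking which hidden neurons get exercised (a rough analogue of neuron coverage).

```python
# Minimal sketch in the spirit of DeepXplore's differential testing:
# feed perturbed inputs to two independent models, flag cases where
# they disagree, and record which hidden neurons have fired.
# All weights here are random stand-ins, not the authors' networks.
import numpy as np

rng = np.random.default_rng(0)

def make_model(in_dim=8, hidden=16, out_dim=3):
    """Return weights for a tiny two-layer network with random parameters."""
    return {
        "W1": rng.normal(size=(in_dim, hidden)),
        "W2": rng.normal(size=(hidden, out_dim)),
    }

def forward(model, x):
    """Forward pass; returns the predicted class and hidden activations."""
    h = np.maximum(0, x @ model["W1"])          # ReLU hidden layer
    logits = h @ model["W2"]
    return int(np.argmax(logits)), h

model_a, model_b = make_model(), make_model()
covered = set()                                 # neurons seen active so far
disagreements = []

for _ in range(200):
    x = rng.normal(size=8)
    x_perturbed = x + rng.normal(scale=0.3, size=8)   # "confusing" input
    pred_a, h_a = forward(model_a, x_perturbed)
    pred_b, h_b = forward(model_b, x_perturbed)
    covered.update(np.flatnonzero(h_a > 0))           # crude neuron coverage
    if pred_a != pred_b:                              # models disagree:
        disagreements.append(x_perturbed)             # a candidate flaw

print(f"disagreements found: {len(disagreements)}")
print(f"hidden neurons of model A exercised: {len(covered)}/16")
```

Inputs on which otherwise similar models disagree are the interesting cases: they point to regions of the input space where at least one network is reasoning poorly, which is the kind of flaw the tool is designed to surface.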