Agentic AI is a sophisticated AI system that combines the data extraction and content creation abilities of GenAI with a reasoning engine that simulates logical thinking and decision-making. This opens exciting new possibilities for performing tasks and making decisions. However, as with GenAI, these systems’ guardrails and functionality vary. When evaluating agentic AI systems for your institution, consider the following questions to help ensure responsible, transparent use.
As agentic AI becomes more common, some are concerned it could eventually replace traditional academic roles or stifle creativity, originality and critical thinking. But that doesn’t have to be the case: a responsibly developed tool can be structured intentionally to support, rather than replace, human critical thinking.
It can also act as a catalyst for deeper reflection and reasoning. According to Hosseini and Seilani (2025), one way AI developers can ensure this is to design agentic AI systems that work with humans in collaborative partnership; for example, by introducing “dynamic goal-sharing, negotiating in real time, shared decision-making, and adaptive task allocation.”1
With many AI systems, it’s not always clear how they reached their responses or which sources they used to arrive at their results. To guard against this, Viswanathan (2025) points to the importance of making agentic AI as transparent as possible: “Systems must be designed with inherent explainability and features that allow stakeholders to understand the reasoning behind autonomous decisions. This includes implementing mechanisms for tracking decision pathways and maintaining comprehensive audit trails of system actions.”2
AI should deliver responses that are accurate, bias-free, accountable and fair. But with non-academic grade tools, whose responses aren’t grounded in verified scholarly content, the number of hallucinations (false or misleading AI outputs presented as fact) can be high. In an article in Library Journal, Nicole Hennig, eLearning Developer at the University of Arizona Libraries, said concerns over fabricated sources have led her institution to warn students to avoid non-academic grade AI tools when looking up articles.
She explains: “The articles sound very plausible because [the AI tool] knows who writes on certain topics, but it’ll make up things most of the time because it doesn’t have a way to look them up.”
While eliminating bias and hallucinations in AI tools remains a challenge, they can be minimized. And here again, human input remains vital, according to Gridach et al., including “robust oversight mechanisms, human-in-the-loop architectures, and frameworks to evaluate and mitigate these risks during training and deployment.”4
Another major area of concern is how users’ data and the queries they enter will be stored and handled. Finding a reputable provider that uses secure, established systems and is transparent about its privacy policies can help to address these fears.
In a recent interview with Library Connect, Don Simmons, Assistant Professor at Simmons University’s School of Library and Information Science, suggested five basic steps that libraries can take to improve the use of AI at their institutions5:
You can also develop exercises to nurture students’ evaluation skills. These can take the form of fact-checking challenges; for example, librarians can present students with fake news articles, biased summaries or plagiarized texts and guide them to critique and detect flaws using their AI literacy skills. These exercises can also be used to reinforce the importance of checking the credibility of sources when using non-academic grade tools.
You can also hold training sessions for colleagues in the form of workshops, tutorials or online resources. And as Simmons says, they don’t have to be complicated. For example, Hennig has plans for an online “AI Tool Exploration Hour,” during which faculty will be able to spend time “individually or collectively playing with and exploring one or more [AI] tools,” with breakout groups and in-person meetings optional.6
And if other librarians or faculty are using agentic AI, encouraging them to share their learnings and experiences can help you better understand this new technology and use it responsibly.
Learn more about agentic AI by reading our guide, Agentic AI in academia: How to adopt for research, learning, and innovation.