Tips for choosing and using agentic AI
By Brittany Ballard
25 Nov 2025

Agentic AI is a sophisticated AI system that combines the data extraction and content creation abilities of GenAI with a reasoning engine, which simulates logical thinking and decision-making. This offers exciting new possibilities for performing tasks and making decisions. However, as with GenAI, these systems' guardrails and functionality vary. When looking at agentic AI systems for your institution, consider the following questions to better ensure responsible, transparent use.

Does it promote “human in the loop”?

As agentic AI becomes more common, some are concerned it could eventually replace traditional academic roles or stifle creativity, originality and critical thinking. But that doesn't have to be true: a responsibly developed tool can be intentionally structured to support, rather than replace, human critical thinking.

It can also act as a catalyst for deeper reflection and reasoning. According to Hosseini and Seilani (2025), one of the ways that AI developers can ensure this happens is to design agentic AI systems that work with humans in collaborative partnership; for example, introduce “dynamic goal-sharing, negotiating in real time, shared decision-making, and adaptive task allocation.”1
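To make that collaborative model a little more concrete, here is a minimal sketch, in Python, of what an approval gate between an agent and a human reviewer could look like. The names (ProposedAction, review_and_execute) and the workflow are illustrative assumptions, not features of any particular agentic AI product.

```python
# A minimal, illustrative sketch of a "human in the loop" gate: the agent may
# propose actions, but a person reviews each proposal before it runs.
# ProposedAction and review_and_execute are hypothetical names, not taken
# from any specific agentic AI system.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ProposedAction:
    description: str            # plain-language summary shown to the reviewer
    execute: Callable[[], str]  # the operation the agent wants to perform


def review_and_execute(action: ProposedAction) -> str:
    """Ask a human to approve the agent's proposed action before running it."""
    answer = input(f"Agent proposes: {action.description}. Approve? [y/N] ")
    if answer.strip().lower() != "y":
        return "Action declined by reviewer; agent must revise its plan."
    return action.execute()


if __name__ == "__main__":
    draft = ProposedAction(
        description="send a summary of three retrieved articles to the patron",
        execute=lambda: "Summary sent.",
    )
    print(review_and_execute(draft))
```

In a setup like this, the agent can still draft, retrieve and summarize, but the final decision to act stays with a person.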

Does it explain how it works?

With many AI systems, it's not always clear how they have reached their responses, nor do they reveal the sources they have used to arrive at their results. To guard against this, Viswanathan (2025) points to the importance of making agentic AI as transparent as possible. “Systems must be designed with inherent explainability and features that allow stakeholders to understand the reasoning behind autonomous decisions. This includes implementing mechanisms for tracking decision pathways and maintaining comprehensive audit trails of system actions.”2
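As an illustration of what “tracking decision pathways” can mean in practice, the sketch below logs each step an agent takes, along with its rationale and sources, so the trail can be reviewed later. The AuditTrail class and its fields are hypothetical, intended only to show the principle Viswanathan describes rather than any specific system's implementation.

```python
# A minimal, illustrative sketch of an audit trail for agent decisions: each
# reasoning step is appended to a structured log so stakeholders can later
# trace how an autonomous decision was reached. AuditTrail and its field
# names are assumptions for this example, not drawn from a real product.
import json
import time
from dataclasses import dataclass, field


@dataclass
class AuditTrail:
    steps: list = field(default_factory=list)

    def record(self, step: str, rationale: str, sources: list[str]) -> None:
        """Append one decision step with its rationale and cited sources."""
        self.steps.append({
            "timestamp": time.time(),
            "step": step,
            "rationale": rationale,
            "sources": sources,
        })

    def export(self) -> str:
        """Serialize the full decision pathway for review or archiving."""
        return json.dumps(self.steps, indent=2)


if __name__ == "__main__":
    trail = AuditTrail()
    trail.record(
        step="selected three candidate articles",
        rationale="highest relevance scores for the patron's query",
        sources=["doi:10.1234/example"],  # placeholder source identifier
    )
    print(trail.export())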

What steps does it take to combat bias and hallucinations?

AI should deliver responses that are accurate, bias-free, accountable and fair. But with non-academic grade tools, in which responses aren’t grounded in verified scholarly content, the number of hallucinations (meaning false or misleading AI outputs presented as fact) can be high. In an article in Library Journal, Nicole Hennig, eLearning Developer at the University of Arizona Libraries, said concerns over fabricated sources have led her institution to warn students to avoid non-academic grade AI tools when looking up articles.3

She explains: “The articles sound very plausible because [the AI tool] knows who writes on certain topics, but it’ll make up things most of the time because it doesn’t have a way to look them up.”

While eliminating bias and hallucinations in AI tools remains a challenge, they can be minimized. And here again, human input remains vital, according to Gridach et al., including “robust oversight mechanisms, human-in-the-loop architectures, and frameworks to evaluate and mitigate these risks during training and deployment.”4

How does it handle and use your users’ data?

Another major area of concern is how users’ data and the queries they enter will be stored and handled. Finding a reputable provider that uses secure, established systems and is transparent about its privacy policies can help to address these fears.

Using agentic AI to increase AI literacy at your institution

In a recent interview with Library Connect, Don Simmons, Assistant Professor at Simmons University’s School of Library and Information Science, suggested five basic steps that libraries can take to improve the use of AI at their institutions5:

 

  1. Familiarize yourself with the technology.
  2. Build your AI literacy skills.
  3. Make the rules clear. This includes liaising with colleagues and the institution administration to ensure AI policies for users are up to date and clearly promoted. If none are currently in place, work together to establish them.
  4. Don’t reinvent the wheel. According to Simmons: “There are so many different examples of courses and trainings out there already.”
  5. Launch your own AI trainings. “These can be as simple as one-shot workshops on how AI can help students build their resumes, or more complex programs. Don’t worry if you are still fairly new to AI. In our profession, I firmly believe no-one is truly an AI expert. We are all learning all the time.”

You can also develop exercises to nurture students’ evaluation skills. These can take the form of fact-checking challenges; for example, librarians can present students with fake news articles, biased summaries or plagiarized texts and guide them to critique and detect flaws using their AI literacy skills. These exercises can also be used to reinforce the importance of checking the credibility of sources when using non-academic grade tools.

You can also hold training sessions for colleagues in the form of workshops, tutorials or online resources. And as Simmons says, they don’t have to be complicated. For example, Hennig has plans for an online “AI Tool Exploration Hour,” during which faculty will be able to spend time “individually or collectively playing with and exploring one or more [AI] tools,” with breakout groups and in-person meetings optional.6

And if other librarians or faculty are using agentic AI, encouraging them to share their learnings and experiences can help you better understand this new technology and use it responsibly.

Learn more about agentic AI by reading our guide, Agentic AI in academia: How to adopt for research, learning, and innovation.

References
  1. Hosseini, S. & Seilani, H. (2025). The role of agentic AI in shaping a smart future: A systematic review. Array. Volume 26. https://doi.org/10.1016/j.array.2025.100399
  2. Viswanathan, P. S. (2025). Agentic AI: A comprehensive framework for autonomous decision-making systems in artificial intelligence. International Journal of Computer Engineering and Technology. Vol. 16 No. 01. https://ijcet.in/index.php/ijcet/article/view/223
  3. Thornton, H. (April 2024). AI in Academia. Library Journal. https://www.libraryjournal.com/story/academiclibraries/ai-in-academia
  4. Gridach, M. et al. (March 2025). Agentic AI for Scientific Discovery: A Survey of Progress, Challenges, and Future Directions. arXiv. https://doi.org/10.48550/arXiv.2503.08979
  5. Willems, L. (July 2025). The role of AI in universities is growing — what does that mean for librarians? Library Connect.
  6. Dolan, E. (April 2024). ChatGPT hallucinates fake but plausible scientific citations at a staggering rate, study finds. PsyPost.