Measuring agentic AI adoption and control frameworks in finance

Atta Ul Mustafa (1), Ahmet Faruk Aysan (2)
(1) Hamad Bin Khalifa University, Qatar
(2) Hamad Bin Khalifa University, Qatar

Abstract

Agentic artificial intelligence (AI) systems can execute actions rather than merely generate content, raising distinct governance and operational risk questions for financial institutions. This study measures how agentic AI is entering U.S. finance firms’ annual filings by treating disclosures as text-as-data. We assemble a balanced panel of 2,500 firm–year observations (500 firms per year) from 2021–2025 and implement an auditable dictionary-and-context approach that flags agentic references and then quantifies the surrounding “controls density” (governance and safety language) within the same local disclosure window. Agentic disclosures are absent in 2021–2023, appear in 2024 (0.4% of firm-years), and increase in 2025 (1.6% of firm-years), indicating a late but accelerating diffusion phase. Within the set of agentic-mention filings, autonomy evidence remains rare but concentrates in disclosure regions with higher controls density, consistent with governance maturity serving as a prerequisite for action-taking deployments. The analysis provides a transparent measurement framework and baseline statistics for tracking the emerging shift from AI discussion to action-oriented, agentic deployments in finance.
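The dictionary-and-context approach described above can be illustrated with a minimal sketch: flag agentic terms, then count how many control/governance terms fall inside a local character window around each flagged mention. The term lists, the 200-character window, and the function names below are illustrative assumptions for exposition, not the paper's actual dictionaries or parameters:

```python
import re

# Hypothetical term lists for illustration only; the study's actual
# dictionaries are not reproduced here.
AGENTIC_TERMS = ["agentic ai", "autonomous agent", "ai agent"]
CONTROL_TERMS = ["human oversight", "guardrail", "model risk", "audit", "kill switch"]


def find_term_spans(text, terms):
    """Return (start, end) character spans of case-insensitive term matches."""
    spans = []
    for term in terms:
        for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
            spans.append(m.span())
    return spans


def controls_density(text, window=200):
    """Flag agentic mentions and count control terms falling within
    +/- `window` characters of any agentic mention (the 'local window')."""
    agentic = find_term_spans(text, AGENTIC_TERMS)
    controls = find_term_spans(text, CONTROL_TERMS)
    if not agentic:
        return {"agentic_flag": False, "controls_near": 0,
                "controls_total": len(controls)}
    near = sum(
        1
        for cs, ce in controls
        if any(cs >= s - window and ce <= e + window for s, e in agentic)
    )
    return {"agentic_flag": True, "controls_near": near,
            "controls_total": len(controls)}
```

Because the approach is dictionary-based, every flag can be traced back to an exact character span, which is what makes the measurement auditable; a production version would also handle word boundaries (e.g., so "audit" does not match "auditor") and tokenized windows rather than raw character offsets.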


References

Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P. N., Inkpen, K., Teevan, J., Kikin-Gil, R., & Horvitz, E. (2019). Guidelines for human–AI interaction. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. DOI: https://doi.org/10.1145/3290605.3300233

Bartlett, R., Morse, A., Stanton, R., & Wallace, N. (2022). Consumer-lending discrimination in the FinTech era. Journal of Financial Economics, 143(1), 30–56. DOI: https://doi.org/10.1016/j.jfineco.2021.05.047

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. DOI: https://doi.org/10.1145/3442188.3445922

Firth, D. (1993). Bias reduction of maximum likelihood estimates. Biometrika, 80(1), 27–38. DOI: https://doi.org/10.1093/biomet/80.1.27

Fuster, A., Goldsmith-Pinkham, P., Ramadorai, T., & Walther, A. (2022). Predictably unequal? The effects of machine learning on credit markets. The Journal of Finance, 77(1), 5–47. DOI: https://doi.org/10.1111/jofi.13090

Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86–92. DOI: https://doi.org/10.1145/3458723

Gentzkow, M., Kelly, B., & Taddy, M. (2019). Text as data. Journal of Economic Literature, 57(3), 535–574. DOI: https://doi.org/10.1257/jel.20181020

Hassan, T. A., Hollander, S., van Lent, L., & Tahoun, A. (2019). Firm-level political risk: Measurement and effects. The Quarterly Journal of Economics, 134(4), 2135–2202. DOI: https://doi.org/10.1093/qje/qjz021

Heinze, G., & Schemper, M. (2002). A solution to the problem of separation in logistic regression. Statistics in Medicine, 21(16), 2409–2419. DOI: https://doi.org/10.1002/sim.1047

King, G., & Zeng, L. (2001). Logistic regression in rare events data. Political Analysis, 9(2), 137–163. DOI: https://doi.org/10.1093/oxfordjournals.pan.a004868

Loughran, T., & McDonald, B. (2011). When is a liability not a liability? Textual analysis, dictionaries, and 10-Ks. The Journal of Finance, 66(1), 35–65. DOI: https://doi.org/10.1111/j.1540-6261.2010.01625.x

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), Article 115. DOI: https://doi.org/10.1145/3457607

Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT). DOI: https://doi.org/10.1145/3287560.3287596

Qu, C., Dai, S., Wei, X., Cai, H., Wang, S., Yin, D., Xu, J., & Wen, J.-R. (2025). Learning tools with large language models: A survey. Frontiers of Computer Science, 19(8), 198343. DOI: https://doi.org/10.1007/s11704-024-40678-2

Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., Jennings, N. R., Kamar, E., Kloumann, I. M., Larochelle, H., Lazer, D., McElreath, R., Mislove, A., Parkes, D. C., Pentland, A., Roberts, M. E., Shariff, A., Tenenbaum, J. B., & Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477–486. DOI: https://doi.org/10.1038/s41586-019-1138-y

Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT). DOI: https://doi.org/10.1145/3351095.3372873

Wang, L., Ma, C., Feng, X., Zhang, Z., Yang, H., Zhang, J., Chen, Z., Tang, J., Chen, X., Lin, Y., Zhao, W. X., Wei, Z., & Wen, J.-R. (2024). A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6), 186345. DOI: https://doi.org/10.1007/s11704-024-40231-1

Wooldridge, M., & Jennings, N. R. (1995). Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2), 115–152. DOI: https://doi.org/10.1017/S0269888900008122

Authors

Atta Ul Mustafa
atul88769@hbku.edu.qa (Primary Contact)
Ahmet Faruk Aysan
Author Biographies

Atta Ul Mustafa, Hamad Bin Khalifa University

Atta Ul Mustafa is a PhD student in Islamic Finance and Economy at the College of Islamic Studies, Hamad Bin Khalifa University, Qatar, and a Researcher-Lecturer at the International Center for Research in Islamic Economics (ICRIE), Minhaj University Lahore, Pakistan. His research lies at the intersection of Islamic finance, fintech, and applied econometrics, including work on the global adoption of generative AI, the safe-haven properties of sukuk, and the resilience of Islamic financial markets during crises. He also applies machine-learning methods to climate and energy finance, as well as the resource curse in the MENA region, examining renewable energy investment, institutional quality, and sustainable development outcomes. Email: atul88769@hbku.edu.qa.

Ahmet Faruk Aysan, Hamad Bin Khalifa University

Ahmet Faruk Aysan is a Professor and Associate Dean for Research at Hamad Bin Khalifa University and a full member of the Turkish Academy of Sciences (TÜBA). He previously served as a Board Member and as a member of the Monetary Policy Committee at the Central Bank of the Republic of Türkiye. He has also consulted for leading institutions, including the World Bank, Oxford Analytica, and the Central Bank of Türkiye. His research has been recognized with multiple awards, such as the Islamic Economics Research Award (IKAM) and the Arab Fintech Forum’s Fintech Researcher of the Year Award. He also received HBKU’s Research Excellence Award, along with Boğaziçi University Foundation Publication and Academic Promotion Awards, and the Ibn Khaldun Prize. Dr. Aysan is a Research Associate at University College London’s Centre for Blockchain Technologies (UCL CBT), a Research Fellow at the Economic Research Forum, and a Non-resident Fellow at the Middle East Council on Global Affairs. He also chairs the MENA Chapter of the Academy of Sustainable Finance, Accounting, Accountability & Governance (ASFAAG). Email: aaysan@hbku.edu.qa.

Ul Mustafa, A., & Aysan, A. F. (2026). Measuring agentic AI adoption and control frameworks in finance. Modern Finance, 4(1). https://doi.org/10.61351/mf.v4i1.557
