AI for Auditors: Challenges and Opportunities for Career Advancement

Victor Fang
Author: ISACA Now
Date Published: 10 June 2024
Read Time: 4 minutes

Editor’s note: ISACA is introducing new training courses on artificial intelligence, including an introductory course on AI for auditors. Victor Fang, Ph.D., an AI expert who contributed to the course, recently visited with the ISACA Now blog to share his perspective on how IT auditors can position themselves for success in the evolving AI landscape. See the interview with Fang below, and find out more about ISACA’s additional new AI courses here.

ISACA Now: What interests you most about AI in an audit context?

The surge of generative AI in 2023 has revolutionized various sectors, heralding a future deeply impacted by AI. However, the development of responsible AI and comprehensive governance frameworks lags, posing ethical and operational challenges. In the audit domain, there's a critical opportunity to guide AI's enterprise adoption, ensuring its use not only enhances audit efficiency but also upholds the principles of integrity and accountability.

ISACA Now: What are some of the main challenges for auditors when it comes to keeping up with how AI is being adopted and implemented on the enterprise landscape?

The main challenges for AI auditors include:

  • Lack of implementation guidance for AI auditing frameworks: There are phenomenal frameworks such as the NIST AI RMF and ISACA’s COBIT, but they do not provide tactical guidance on how IT auditors should actually conduct an AI audit.
  • Algorithms: The foundation of AI/ML lies in complex algorithms, which require specialized knowledge to grasp their mathematical principles and operational behavior.
  • AI auditing is tied to data: AI and ML products depend heavily on data – data for training, validation and inference – unlike traditional software development.
  • Complex lifecycle: AI/ML has its own unique lifecycle, involving a variety of roles and lines of accountability.
  • The rapid evolution of AI: The pace of progress, particularly with the rise of generative AI, introduces new challenges, such as reliance on third-party foundation models (covered in the upcoming course) developed by a few tech giants (OpenAI, Google, Meta, etc.), complicating accessibility and transparency.

ISACA Now: What might the consequences look like for organizations that do not involve auditors in their AI usage?

According to the OECD AI Incidents Monitor, we have already seen 8,000 AI incidents globally, covering sectors such as financial services, legal and healthcare, with an astonishing 1,200% increase year over year.

Organizations that are not committed to proper AI auditing expose themselves to risks including, but not limited to:

  • Brand damage
  • Financial loss
  • Business interruption
  • Compliance violation 

These risks are exactly why global regulators are stepping in to provide guardrails for responsible AI adoption.


ISACA Now: What is an aspect of ISACA’s new course on this topic that you think learners will find especially valuable?

This 2024 version is a timely extension of our widely circulated 2022 machine learning audit publications, with the necessary additions of generative AI and an overview of new AI regulations.

I think learners will find these elements valuable: 

  • Understanding different categories of AI algorithms that auditors will encounter, including generative AI. 
  • A holistic view of the AI development lifecycle and the key roles that auditors should identify. 
  • The FANG principles for auditing AI: Fidelity, Accountability, Non-discrimination, Governance. These principles simplify complex auditing processes for auditors.
  • An overview of global AI regulations and frameworks (NIST, COBIT, the EU AI Act, etc.) and how they relate to key data regulations.
  • How to audit third-party AI dependencies and the latest generative AI technologies: GPT, LLMs and RAG.

ISACA Now: How will AI audit knowledge help individuals stand out in their current roles or with a future employer?

The enterprise adoption of AI is inevitable, and we have seen a significant rise over the past couple of years. I envision that there will soon be a board-level executive role of “Chief AI Security Officer” in large organizations, focused on AI safety and security, similar to the role of the CISO for cybersecurity.

Knowledge in AI auditing can significantly enhance an IT auditor’s professional career by:

  • Providing niche expertise: Demonstrates specialization in the rapidly evolving, high-demand field of AI adoption
  • Enhancing risk management: Equips individuals with skills to identify and mitigate AI-related risks, crucial for maintaining operational integrity
  • Offering strategic insight: Facilitates informed decision-making on AI's business impact, promoting responsible and effective technology use
  • Ensuring regulatory compliance: AI regulations are being actively implemented in almost all jurisdictions globally, making this knowledge essential for navigating the increasing legal and ethical standards surrounding AI
  • Driving innovation: Encourages the optimal and ethical use of AI, fostering innovation and efficiency within organizations
  • Leadership opportunity: Positions individuals to lead the adoption of AI best practices, influencing organizational culture toward ethical AI use
  • Competitive edge: Distinguishes individuals in the job market as forward-thinking professionals adept at guiding companies through the complexities of AI integration

I am glad to partner with the ISACA team on this new series of AI training courses, and we look forward to a safe future of AI adoption.

Additional resources