Only 26% of Organizations Have Implemented Generative AI Standards: Navigating Enterprise AI Governance

Written by WebProNews
    In an era where artificial intelligence (AI) is rapidly transforming industries, the rise of generative AI has been nothing short of meteoric. From creating content to driving business decisions, generative AI tools are now integral to operations across sectors. However, a recent survey by Jefferson Wells reveals a concerning reality: only 26% of organizations have fully integrated generative AI standards into their governance frameworks. This statistic highlights a significant gap between the adoption of AI technologies and the establishment of robust governance mechanisms to manage them.

    The Disparity Between Adoption and Governance

    The Jefferson Wells 2024 Internal Audit Priorities Survey provides a snapshot of the current landscape, where the adoption of generative AI is accelerating faster than the implementation of corresponding standards. This imbalance poses risks that could undermine the potential benefits of AI technologies. As Tim Lietz, National Practice Leader for Risk & Compliance at Jefferson Wells, points out, “Technology and cybersecurity are more critical than ever as organizations navigate a rapidly evolving risk landscape. Our survey highlights the urgent need for Internal Audit functions to adapt to advancements in AI and cybersecurity.”

    The survey indicates that while 37% of organizations plan to increase staff to meet the heightened demand for technology skills, hiring alone may not be sufficient. The challenge lies not only in acquiring the necessary expertise but also in embedding it within a structured governance framework that can effectively mitigate the risks associated with AI.

    The Importance of Comprehensive AI Standards

    Generative AI has been a game-changer, offering unprecedented opportunities for innovation and efficiency. Yet, as organizations rush to deploy these technologies, many overlook the need for comprehensive AI standards. According to the survey, only a quarter of organizations have successfully integrated these standards into their governance frameworks, leaving the majority exposed to potential risks.

    “Many organizations are still figuring out how to best measure and communicate the value of their Generative AI initiatives,” notes a senior director of AI strategy at a leading pharmaceutical company. This sentiment underscores the need for clear guidelines and standards that not only govern the use of AI but also ensure that its value is accurately assessed and communicated.

    Cybersecurity: The Ever-Present Concern

    While generative AI is a rising concern, cybersecurity remains the top risk identified by internal audit leaders. The intersection of these two areas—AI and cybersecurity—creates a complex challenge for organizations. As AI tools become more embedded in business processes, they also become potential targets for cyber threats. The survey emphasizes that effective management of these risks requires a deep understanding of both AI and cybersecurity.

    Deloitte’s State of Generative AI in the Enterprise report supports this concern, finding that only 23% of organizations feel highly prepared to manage the risks associated with generative AI. This lack of preparedness is a red flag, especially as AI becomes more integrated into critical business functions.

    The Path Forward: Building AI Governance

    To bridge the gap between AI adoption and governance, organizations must prioritize the development and implementation of comprehensive AI standards. This involves not just creating policies but also ensuring that they are integrated into the broader governance framework of the organization.

    Lietz emphasizes the importance of leveraging external expertise to address skill gaps in AI and cybersecurity. “Internal Audit departments must expand their capabilities and leverage external expertise for skill gaps,” he says. This approach can help organizations build a robust governance structure that can effectively manage the risks associated with AI while also maximizing its benefits.

    Moreover, organizations should focus on data management as a critical component of AI governance. The Deloitte report points out that 55% of organizations have avoided certain generative AI use cases due to data-related issues. Enhancing data security, improving data quality, and updating data governance frameworks are essential steps in ensuring that AI initiatives are both safe and effective.

    The Future of AI Governance

    As the adoption of generative AI continues to rise, the need for comprehensive governance will only become more pressing. Organizations that fail to implement robust AI standards risk not only operational disruptions but also significant reputational damage. On the other hand, those that invest in building a strong governance framework will be better positioned to leverage AI for sustained competitive advantage.

    In conclusion, the findings of the Jefferson Wells survey serve as a wake-up call for organizations to prioritize the integration of AI standards into their governance frameworks. By doing so, they can ensure that the adoption of generative AI is not only beneficial but also secure and sustainable. As the landscape of AI continues to evolve, the organizations that will thrive are those that recognize the importance of governance as a cornerstone of their AI strategy.
