We are delighted to confirm that Information Security, Compliance, and Privacy professional Onur Korucu will speak at PrivSec & GRC Connect London this month.

Taking place on March 12 and 13 at the Park Plaza Riverbank, London, PrivSec & GRC Connect London provides a platform for organisations to address the cumulative nature of risk.

PrivSec & GRC Connect London’s comprehensive agenda is led by subject matter experts, business chiefs and industry leaders, giving attendees a deep dive into the challenges and solutions shaping the rapidly evolving GRC landscape.

Session panellist Onur Korucu is Managing Partner at GovernID, having begun her career in multinational professional services firms such as KPMG, PwC, and Grant Thornton.

In her PrivSec & GRC Connect London session, Onur will discuss bias, discrimination and other threats associated with Artificial Intelligence (AI). Below, she answers questions on her professional journey and introduces the panel debate’s key issues.

Key Risks in Artificial Intelligence: Bias, Discrimination and other Harms

  • Wednesday, March 13, 2024 (Day 2), 11:55am–12:35pm GMT
  • Theatre: Governance, Risk, and Compliance (GRC), sponsored by AuditBoard

Click here to register for free for PrivSec & GRC Connect London

Could you outline your career pathway so far?

I began my professional journey in multinational professional services firms such as KPMG, PwC, and Grant Thornton, where I gained extensive experience as a consultant and senior management member across various global regions.

Previously, I held the role of Senior Manager for GRC, Cyber Security, and Data Protection at Avanade UK & Ireland, a Microsoft joint venture. Presently, I continue as a Microsoft Subject Matter Expert (SME) and hold shares in three companies specializing in data protection, cyber security, and AI automation. I also serve as Managing Partner at GovernID, focusing on data protection and privacy-enhancing technology solutions, and as an advisory board member for Technology and Law at TerzionDX, a Microsoft partner specializing in AI, robotics, automation solutions, and security-focused infrastructure services.

I am an information security, compliance, and privacy professional with a focus on emerging technologies. In addition to my technical engineering and M.Sc. degrees, I hold an LL.M. in Information and Technology Law and completed an executive master’s program in Business Analytics at the University of Cambridge.

I have also served as a lecturer on cyber security master’s programs at universities in Istanbul and London, and as a guest lecturer at the University of Westminster, the University of Portsmouth, and Istanbul Sabancı University. I authored a book on risk-based global approaches to enhancing data protection and have contributed articles on technology, cyber security, data protection, and privacy trends to publications such as Harvard Business Review and Tomorrow Magazine.

I am a Women in Tech world ambassador, board member, and International Association of Privacy Professionals (IAPP) Ireland Chapter Chair. Recognized for my contributions, I have been nominated for several awards, including GRC Role Model of the Year, Technology Consulting Leader, Cyber Women of the Year, The Risk Leader, and Technology Businesswoman.

Could you describe how AI perpetuates bias and discrimination, and what problems this presents to organisations that want to leverage the technology?

The issue of bias and discrimination perpetuated by AI poses complex challenges for organizations seeking to leverage this technology.

From the standpoint of cybersecurity and privacy, AI’s inclination towards bias can heighten existing vulnerabilities within data protection frameworks, potentially jeopardizing the integrity of sensitive information.

A significant concern revolves around the training data utilized in the development of AI algorithms. If these datasets contain inherent biases or mirror historical prejudices, the resulting AI models may inadvertently perpetuate and magnify these biases when making decisions or forecasts. This introduces considerable privacy risks, particularly in sectors such as finance, healthcare, and law enforcement, where biased AI algorithms could yield discriminatory outcomes for individuals.
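To make this concrete, below is a minimal sketch of how such disparate outcomes can be quantified, using the demographic parity difference, one widely used fairness metric. The predictions, group labels, and numbers are hypothetical illustrations, not a reference implementation.

```python
# Minimal sketch: quantifying disparate outcomes from a binary classifier
# with the demographic parity difference. All data here is hypothetical.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-outcome rates between two groups (0 and 1)."""
    rate_a = y_pred[group == 0].mean()  # e.g. approval rate for group 0
    rate_b = y_pred[group == 1].mean()  # e.g. approval rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical model decisions (1 = favourable outcome) and group membership.
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(f"Demographic parity difference: {demographic_parity_difference(y_pred, group):.2f}")
# Prints 0.40: group 0 receives favourable outcomes at 60% vs 20% for group 1.
```

A large gap on its own does not prove discrimination, but a persistent gap like this one is exactly the kind of signal that should trigger a deeper audit of the training data.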

Furthermore, the opacity surrounding AI algorithms compounds the difficulty of identifying and addressing bias. Traditional cybersecurity measures may struggle to detect and rectify bias embedded within intricate AI systems, leaving organizations susceptible to ethical and legal ramifications. Moreover, biased AI models may lead to challenges in regulatory compliance, eroding trust with regulators and stakeholders and exposing organizations to potential financial penalties or reputational harm.

In essence, combatting bias and discrimination in AI necessitates a comprehensive approach that integrates cybersecurity and privacy considerations throughout the entire lifecycle of AI systems. Organizations must prioritize transparency, accountability, and ethical governance frameworks to mitigate bias and uphold privacy principles in the era of AI.

Are there clear steps that organisations can take to help mitigate these risks and deal with AI’s current functional limitations?

From a cybersecurity and privacy perspective, organizations can implement several strategies to mitigate the risks associated with AI and address its current functional limitations effectively.

  • Data Governance and Quality Assurance: Organizations should prioritize robust data governance practices to ensure the quality, fairness, and diversity of training data used for AI models. This involves thorough data validation, cleansing, and anonymization processes to minimize biases and inaccuracies in the dataset.

  • Ethical AI Frameworks: Developing and adhering to ethical AI frameworks can guide organizations in designing, deploying, and monitoring AI systems responsibly. These frameworks should incorporate principles of fairness, transparency, accountability, and inclusivity to mitigate biases and discriminatory outcomes.

  • Algorithmic Transparency and Explainability: Implementing mechanisms for algorithmic transparency and explainability can enhance trust in AI systems and enable stakeholders to understand the decision-making processes behind AI-generated outcomes. Techniques such as model interpretability and explainable AI (XAI) provide insight into how AI algorithms arrive at their conclusions; the first sketch after this list illustrates one such technique.

  • Continuous Monitoring and Evaluation: Establishing robust monitoring and evaluation mechanisms is crucial for detecting and addressing biases, errors, and limitations in AI systems over time. Regular audits, performance assessments, and feedback loops help identify and rectify issues promptly, ensuring the ongoing effectiveness and fairness of AI models; the second sketch after this list shows what such a recurring check might look like.

  • Collaboration and Knowledge Sharing: Encouraging collaboration and knowledge sharing within the organization and across industry sectors can facilitate the exchange of best practices, insights, and lessons learned in AI governance and risk management. Engaging with regulatory bodies, industry associations, and academia can also provide valuable guidance and expertise in navigating the complexities of AI. 

  • Investment in Talent and Expertise: Investing in talent development and cultivating expertise in AI governance, cybersecurity, and privacy is essential for building organizational capabilities to address AI-related risks effectively. Training programs, certifications, and partnerships with academic institutions can help equip employees with the necessary skills and knowledge to navigate the evolving landscape of AI.
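To illustrate the transparency point above, here is a minimal sketch of permutation importance, one common model-agnostic explainability technique: shuffle each feature in turn and measure how much the model’s accuracy drops. The synthetic dataset, model choice, and feature names are all assumptions made for the example.

```python
# Minimal sketch: model-agnostic explainability via permutation importance.
# Dataset, model, and feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # four synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome driven by features 0 and 1

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffling a feature the model relies on degrades accuracy sharply;
# shuffling an irrelevant one barely moves it.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["feat_0", "feat_1", "feat_2", "feat_3"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

If a protected attribute (or a close proxy for one) surfaces with high importance, that is a strong hint the model is leaning on it and warrants investigation.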
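And for continuous monitoring, the sketch below shows what a recurring fairness check over batches of model decisions might look like. The alert threshold, simulated drift, and batch sizes are hypothetical; a real pipeline would consume production decision logs and route alerts to the responsible team.

```python
# Minimal sketch: a recurring fairness audit over batches of decisions.
# Threshold, data, and drift pattern are hypothetical.
import numpy as np

ALERT_THRESHOLD = 0.10  # hypothetical tolerance for the parity gap

def audit_batch(y_pred: np.ndarray, group: np.ndarray) -> tuple[float, str]:
    """Return the parity gap for one batch and an ok/ALERT status."""
    gap = abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())
    return gap, ("ALERT" if gap > ALERT_THRESHOLD else "ok")

rng = np.random.default_rng(1)
for week in range(3):
    group = rng.integers(0, 2, size=200)
    # Simulated drift: group 1's favourable-outcome rate degrades over time.
    p = np.where(group == 0, 0.5, 0.5 - 0.1 * week)
    y_pred = (rng.random(200) < p).astype(int)
    gap, status = audit_batch(y_pred, group)
    print(f"week {week}: parity gap = {gap:.2f} [{status}]")
```

Wired into a scheduler alongside standard performance checks, a test like this catches fairness regressions between formal audits rather than after the harm is done.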

By adopting these proactive measures and integrating cybersecurity and privacy considerations into their AI strategies, organizations can mitigate risks, enhance trust, and maximize the benefits of AI technology while safeguarding the rights and interests of individuals and communities.

Don’t miss Onur Korucu exploring these issues in depth at PrivSec & GRC Connect London in the session:

Key Risks in Artificial Intelligence: Bias, Discrimination and other Harms.

Recent months have seen a boom in the effectiveness and accessibility of AI technologies. Automation is helping companies work more efficiently across a growing range of fields.

But there is a significant risk in failing to recognise and address the potential downsides of AI, including bias, discrimination and functional limitations. This session will explore how organisations can manage and mitigate AI risks.

Also on the panel:

  • Federico Iaschi, Head of Cyber Security Resilience and Observability, Virgin Media O2 (Panel Host)
  • Elizabeth Smith, Senior Data Protection and Customer Solutions Expert, DPOrganizer

Details

Key Risks in Artificial Intelligence: Bias, Discrimination and other Harms

Theatre: Governance, Risk, and Compliance (GRC), sponsored by AuditBoard

Time: 11:55am – 12:35pm GMT

Date: Wednesday 13 March 2024

The session sits within a packed agenda of insight and guidance at PrivSec & GRC Connect London, taking place on March 12 and 13, 2024.

Discover more at PrivSec & GRC Connect London

GRC, Data Protection, Security and Privacy professionals face ongoing challenges to mitigate risk, comply with regulations, and achieve their business objectives. They must:

  • Continually adopt new technologies to improve efficiency and effectiveness.
  • Build a culture of compliance and risk awareness throughout the organisation.
  • Communicate effectively with stakeholders and keep them informed of GRC activities.

PrivSec & GRC Connect London takes you to the heart of the key issues, bringing together the most influential GRC, Data Protection, Privacy and Security professionals, to present, debate, learn and exchange ideas.

Click here to register for free for PrivSec & GRC Connect London