Data is essential for AI models, which in turn enable corporate teams to understand patterns, map outcomes and generate insight on an unimaginable scale.

But as cybercriminals take advantage of emerging tech, AI and generative AI are also proving critical for organisational defence, offering opportunities to boost security, detect threats, and enhance response efforts.

Using this new wizardry safely is a tricky task: data privacy, intellectual property, ethics, and contracts are just some of the major issues business communities have to address.

Speaking exclusively to GRC World Forums, Emma Green, Managing Partner of Cyber Data Law Solicitors, elaborates:

“Lack of explainability and interpretability in complex generative models hinders trust in critical sectors like healthcare or finance. Furthermore, security vulnerabilities expose generative AI to adversarial attacks, necessitating ongoing research for robust security measures,” she says.

In the article below, we go further into some of these issues and reveal how organisations can harness AI safely and responsibly.


A runaway train? Generative AI and the concerns for PrivSec professionals

  • Day 1 (March 12th)
  • Privacy & Security (P&S Theatre)
  • 15:00 - 15:40

Click here to register for free to PrivSec & GRC Connect London

Abiding by regulatory frameworks such as the EU’s GDPR or the CCPA in the US isn’t merely a compliance obligation; it is a cornerstone of responsible data governance.

Companies can bolster their adherence by adopting various strategies. For example, conducting regular data audits offers insight into data acquisition and storage, identifying areas for enhancement. 

Speaking exclusively to GRC World Forums, Senior Information Security Project Manager Alexandra Khammud comments on how the second version of the UK’s Data Protection and Digital Information Bill may throw up new challenges:

“The new version of the UK Data Protection and Digital Information Bill likely incorporates feedback from stakeholders and experts, addressing concerns raised during consultations and parliamentary debates,” Alexandra says.

“Organisations will need to undertake several steps to accommodate the new Bill. Firstly, they may need to review and update their data handling policies and procedures to ensure alignment with the new regulatory requirements,” Alexandra continues.

UK Data Protection Bill No.2 – What has changed?

  • Day 1 (March 12th)
  • Privacy & Security (P&S Theatre)
  • 10:00 - 10:40am

Click here to register for free to PrivSec & GRC Connect London

By integrating privacy-by-design principles into AI projects, whether developed internally or outsourced, organisations can prioritise data privacy from the project’s inception, mitigating the risk of data breaches. Transparent and accessible privacy policies can be used to champion an organisation’s approach to data use, storage, and protection, fostering trust and compliance. 
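As a purely illustrative sketch of what privacy-by-design can mean at the pipeline level (the field names and setup here are hypothetical, not drawn from any speaker’s toolkit), a team might pseudonymise direct identifiers before data ever reaches an AI model, so the raw values never leave the source system:

```python
import hashlib
import secrets

# Hypothetical example: pseudonymise a direct identifier (an email address)
# before the record enters a model-training pipeline.
SALT = secrets.token_bytes(16)  # in practice, stored separately from the data

def pseudonymise(identifier: str, salt: bytes = SALT) -> str:
    """Replace a direct identifier with a salted SHA-256 digest."""
    return hashlib.sha256(salt + identifier.encode("utf-8")).hexdigest()

record = {"email": "jane@example.com", "purchases": 12}
safe_record = {**record, "email": pseudonymise(record["email"])}
# Downstream systems see a stable token, never the raw address.
```

The point of the sketch is the placement of the control, not the specific hash: the transformation happens at ingestion, by default, rather than being bolted on after a breach review.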

Appointing a Data Protection Officer will ensure your organisation has a centralised resource for overseeing compliance, which can in turn be strengthened by ongoing staff development and awareness initiatives.

Organisations can enhance their readiness for changing legal landscapes by prioritising training and legal consultations. Taking a proactive approach should ensure that adherence to laws matures beyond obligation into a culture that nourishes ethical practices and promotes security at every juncture.

Speaking exclusively to GRC World Forums, Lesley Holmes, Data Protection Officer at MHR, highlights the vital role played by “tabletop” exercises in helping to offset risk.

“One of the key benefits of tabletop exercises is that they allow teams to see weaknesses in their response plans and procedures,” Lesley says.

“For example, they may discover that certain team members are unclear about their roles and responsibilities, or that there are gaps in their technical capabilities. By identifying these weaknesses during a tabletop exercise, organisations can take steps to address them before a real incident occurs,” Lesley continues.

Ransomware Tabletop Exercises: Steps to Improve your Company’s Responsiveness

  • Day 2 (March 13th)
  • Privacy & Security (P&S Theatre)
  • 10:00 - 10:40am

Click here to register for free to PrivSec & GRC Connect London

Prioritising ethics & minimising bias

When gathering data, securing informed consent through transparent means is imperative, as it ensures individuals understand the purpose and scope of data usage, supporting compliance.

This process provides individuals with control over their data, building confidence that companies can use to forge a competitive edge.

Further, the use of AI systems requires vigilance to mitigate biases, as biased data can lead to unfair or discriminatory outcomes, which in turn erodes trust.

Speaking exclusively to GRC World Forums, Senior Legal Counsel Michael Whitbread advises on the course organisations should take to ensure ethics always comes first:

“There’s no one-size-fits-all approach for ensuring ethical corporate behaviour. But companies could do worse than taking an audit of the AI technologies they are using / developing, and ensuring proportionate governance measures are in place,” Michael says.

“These could be as simple as setting up a regular rhythm of consultation forums between key players in the organisation, including Product, Sales, Information Security, Legal and HR, right through to establishing a data governance committee with formalised terms of reference and accountabilities, documented data governance frameworks, public statements of AI ethics, and active campaigns promoting the safety measures of the organisation,” Michael continues.

Key Risks in Artificial Intelligence: Bias, Discrimination and other Harms

  • Day 2 (March 13th)
  • GRC (Governance, Risk, and Compliance) Theatre, sponsored by AuditBoard
  • 11:55 - 12:35pm

Click here to register for free to PrivSec & GRC Connect London

External audits and ethics boards with diverse expertise can further oversee AI development, culminating in a comprehensive framework that aligns with ethical and legal AI standards.

In-house counsel

In-house legal teams play a pivotal role in data-centric AI ventures, primarily focusing on establishing and maintaining policies that govern data collection, storage, and usage.

These policies must be continually updated to align with evolving laws, ethical standards, and technological advancements. Working alongside various departments such as technology, human resources, and marketing teams, in-house counsel ensures that these policies are consistently implemented across the business.

Legal teams should also have a hand in educating and training employees on data governance and legal compliance best practices. Through regular training sessions, educational materials development, and communication of significant legal updates, in-house counsel ensures that staff members are well-informed and capable of dealing with data responsibly.

Speaking exclusively to GRC World Forums, Jennifer Geary, Author of “How to be a Chief Risk Officer”, emphasises how to frame risk so that GRC strategies can be leveraged to best effect:

“Too many organisations view risk as a final hurdle to jump through, an obstacle to getting things done. They view risk as a brake on the organisation,” Jennifer says.

“Risk shouldn’t be about that - it should be grounded in strategy, in the mission of the organisation. Every action, new venture, product or service should be designed from the outset with an understanding of the true risks and how to manage them. That way, everyone in the organisation acts as a risk manager, operating with freedom within an agreed framework,” Jennifer continues.

Data Governance for DPOs: merging Compliance and Privacy

  • Day 2 (March 13th)
  • 11:40 - 12:20pm

Click here to register for free to PrivSec & GRC Connect London

Furthermore, in-house counsel should outline, review, and negotiate contracts related to AI projects and data, including data licensing, non-disclosure, and service agreements. These processes demand a solid comprehension of organisational objectives, legal risks, and industry contractual norms.

To conclude, in-house counsel oversees compliance monitoring functions to ensure that internal practices meet external legal requirements. This encompasses routine compliance checks and auditing of AI initiatives to ensure they proceed lawfully.

Discover more at PrivSec & GRC Connect London

GRC, Data Protection, Security and Privacy professionals face ongoing challenges to mitigate risk, comply with regulations, and achieve their business objectives. They must…

  • Continually adopt new technologies to improve efficiency and effectiveness.
  • Build a culture of compliance and risk awareness throughout the organisation.
  • Communicate effectively with stakeholders and keep them informed of GRC activities.

PrivSec & GRC Connect London takes you to the heart of the key issues, bringing together the most influential GRC, Data Protection, Privacy and Security professionals, to present, debate, learn and exchange ideas.


Click here to register for free to PrivSec & GRC Connect London