According to reports, the UK’s Information Commissioner’s Office (ICO) received over 3,000 reports of cyber incidents last year.

The stats paint a broader picture of pressures within the tech industry: the need for robust data security protocols, particularly in AI development. In response, the ICO has cautioned tech firms to incorporate data protection into every phase of AI creation as part of a more comprehensive approach to safeguarding personal information.

The regulator insists that AI systems processing personal data must adhere to existing transparency and data protection guidelines, including information used during AI training, testing and deployment. The UK’s Information Commissioner, John Edwards, is to address technology chiefs this week on the need for more stringent data privacy measures in the AI era.

“As leaders in your field, I want to make it clear that you must be thinking about data protection at every stage of your development, and you must make sure that your developers are considering this too,” Mr Edwards will say in a talk on privacy, emerging tech and AI.

As reported by Developer Tech, Zoho UK MD Sachin Agrawal has underlined how AI is steadily transforming business processes, and why data protection must therefore be integrated at a baseline level.

According to research conducted by Zoho, 36% of UK companies surveyed regard data privacy as a fundamental component of success. Meanwhile, fewer than half of the firms polled (42%) said they are in line with current regulatory standards and data laws.

The gap underscores an urgent need for better education in the corporate world so that organisations can strengthen client data security at every level of data management, whether AI-based or not.

Mr Agrawal has also condemned the misuse of customer data, a practice that is commonplace throughout the sector. Criticising such behaviour as unethical, Mr Agrawal asserts that principles should take priority, guiding companies to be more respectful of the data ownership rights of clients and customers.

“We believe that customers own their data, not us, and using it solely to improve our products is the right approach,” he stated. It’s a philosophy that not only delivers legal compliance but also strengthens relationships, forging crucial trust.

Know the risks

With the growing adoption of AI technology, the call for ethical data practices is anticipated to increase significantly. Businesses that fail to prioritise their customers’ best interests in their data policies may lose customers to more ethically-minded competitors.

Speaking to GRC World Forums, Adomas Siudika, Senior Privacy Counsel at OneTrust, advises:

“Deployers of high-risk AI systems will have to be prepared to update their publicly facing disclosures confirming not only the use of AI systems but also outlining whether there is any personal/sensitive information processed in any stage of the AI system lifecycle, how it is processed and explaining how individuals can exercise their hybrid AI/Privacy-linked rights e.g. a right to opt-out from being subject to AI system.

“If we want to support the ethical and legal use of AI technology, all parties involved in the AI lifecycle, including developers, deployers, traders, and users of AI, must work together and build trust in AI systems.”

The issues are central to discussions taking place this week at PrivSec Global, where experts will discuss the latest developments in regulation in the AI age.

Not to be missed…

Preparing for a new regulation: Applying GDPR lessons to prepare businesses for the EU AI Act

  • Date: Wednesday 22 May, 2024
  • Time: 11:00 - 11:30 GMT

As we approach the sixth anniversary of GDPR on May 25th, professionals in risk, privacy, and technology eagerly await the implementation of the EU AI Act, set to become the first global framework regulating AI usage.

While GDPR aimed to harmonise EU laws, the AI Act addresses the burgeoning AI landscape, seeking to regulate its development and utilisation in Europe. Businesses now turn to risk and privacy professionals to navigate this new regulatory landscape, leveraging their expertise from prior regulatory implementations. 

With GDPR, businesses had two years to integrate this impactful regulation into their privacy management programs on a global scale. Now, the countdown begins for the AI Act, requiring businesses to apply its provisions to their AI programs within the next two years, including AI impact assessments, risk measurement, and understanding third-party AI usage.

In this session, the panel will discuss:

  • The relationship between GDPR and the AI Act: Key legal differentiators and similarities
  • Best practices for preparing for the imminent AI Act
  • Strategies to promote responsible AI use and development within your organisation


Decoding AI Data Protection: Debunking Myths and Unveiling Facts

  • Date: Thursday 23 May, 2024
  • Time: 14:15 - 15:00 GMT

Artificial Intelligence (AI) has revolutionised data processing and analysis, but it also raises significant concerns regarding data protection and privacy. This panel discussion aims to unravel the complexities of AI data protection by debunking myths, unveiling facts and delving into the nuts and bolts of safeguarding data in AI-driven ecosystems.

The sessions sit within a packed two-day agenda of insight and guidance at PrivSec Global, livestreaming through Wednesday 22 and Thursday 23 May, 2024.

Click here to book your place at PrivSec Global today

Discover more at PrivSec Global

As regulation gets stricter – and data and tech become more crucial – it’s increasingly clear that the skills required in each of these areas are not only connected, but inseparable.

Exclusively at PrivSec Global on 22 & 23 May 2024, industry leaders, academics and subject-matter experts unite to explore these skills and the central role they play in privacy, security and GRC.

Click here to register