We are very happy to announce that Information Security leader Igor Gutierrez will speak at PrivSec Global this month.

Streaming live May 22 and 23, PrivSec Global unites experts from both Privacy and Security, providing a forum where professionals across both fields can listen, learn and debate the central role that Privacy, Security and GRC play in business today.

Igor Gutierrez is an Information Security Officer and DPO at B. GROB do Brasil S.A. Leveraging over 23 years’ experience in Cybersecurity, Data Privacy and GRC, Igor combines deep technical knowledge with a wealth of practical industry skill. He is the winner of two national awards as a Security Leader and is well known on national and international speaking circuits.

Igor appears exclusively at PrivSec Global to discuss the strategies that organisations need to adopt in order to mitigate risk in today’s threat landscape.

Below, Igor answers questions on his professional journey and the themes of his PrivSec Global session.

Could you outline your career pathway so far?

I’m an Information Security Officer and Data Protection Officer with over 24 years dedicated to Cyber Security and GRC, and the last six to Data Privacy. I am also the winner of two national awards as a Security Leader, and a national and international keynote speaker.

I started out in a Global IT company, then moved on to the financial sector to help define cyber security strategies. For the last 12 years I have been working in the automation and electromobility industry.

Working in different business areas has given me a very broad view of how to apply cybersecurity strategically in complex scenarios, and has helped me develop the ability to build cybersecurity departments from scratch.

How is AI technology introducing complexity and new risk to the threat landscape that businesses must navigate through 2024?

We cannot deny that AI, GenAI (Generative AI) and LLMs (Large Language Models) have revolutionized the way many companies have driven their business. Companies from all sectors have gained a competitive advantage by integrating AI into their production processes and decision-making.

However, as with any disruptive technology, cybercriminals are always one step ahead, making their attacks more effective with less and less effort.

Many GenAI technologies have helped cybercriminals create advanced phishing and spear-phishing campaigns, and even sophisticated attacks whose code is developed entirely by AI to bypass the robust security layers created by the security industry.

The targeting and scope of cyber campaigns will likely become cheaper, more accurate and broader thanks to the automation of target discovery, meaning threat actors will be able to generate increasingly personalized attacks.

Generative AI offers the possibility of automating and improving many elements of an attack’s targeting chain: collecting data on target individuals or companies; profiling, selecting and prioritizing targets through bulk data collection and advanced LLM-driven analytics; and, most impactfully, real-time LLM interaction with campaign targets, replacing the need for human threat actors.

Another imminent concern is the effectiveness that AI can lend to state-supported hostile cyber activity, in terms of espionage and sabotage as well as data breach or destruction, extortion and reputational damage. Additionally, attack-as-a-service offerings have grown considerably, notably with DarkGPT.

All attacks involve some type of risk-reward analysis. Rational attackers will use means by which they can extract some form of utility from their activities, be it money, information, reputation, political objectives or something else.

Fundamentally, they seek to execute their campaigns without attracting the attention of adversaries such as intelligence agencies, restricting the scope of the attack to only the most necessary targets.

Methods of hiding the actor’s fingerprints, obfuscating the components of an attack, or exfiltrating funds through complicated chains of intermediaries are common. LLMs could potentially improve these obfuscation activities, for example through automated removal of identifying characteristics from malware and cyber campaigns, or through intentional fingerprinting.

The scenario we’re currently facing challenges companies to balance security against advancement.

The widespread adoption of GenAI applications in the workplace is quickly raising concerns about exposure to a new generation of cybersecurity, data protection, digital fraud and disinformation threats.

One of the main concerns relates to data submitted to AI models like ChatGPT and Copilot, which can be exposed, expanding the attack surface of organizations that share internal information with cloud-based GenAI systems.

These concerns, as well as challenges in meeting compliance and government requirements, are triggering a backlash against GenAI applications, especially by industries and sectors that process personal data.

Despite the benefits that AI offers, many companies are reviewing their use of GenAI, not only because of the risks associated with data privacy and security, but also due to the lack of corporate policies and adequate training. The business pressure to make processes and product development more effective has become a great challenge for cyber teams.

Could you pinpoint some of the emerging trends and tactics that organisations are putting in place to mitigate these risks?

First of all, companies need to define very clear policies regarding the use of AI and align them with business objectives and expectations. Many companies have used AI resources without even analyzing whether this will actually be beneficial to their business model, which ends up exposing the company to unnecessary risks.

Cybercriminals will always focus their attacks on the human factor and on failures in the tools that support the business. Teams within companies must work together so that the technology is implemented within limits that can be considered safe.

It is extremely important that companies in the same sector also share attack information (TTPs and IoCs) and the AI models they use for defense, so that together they can respond with more agility and evolve their defences, becoming a more expensive target to attack. Likewise, there needs to be collaboration between governments, regulators and technology companies to better manage AI risks through smart policies that can support this important technology.

Using LLMs tuned for code analysis to identify exploitable programming errors has the potential to reduce the “cost per vulnerability” by orders of magnitude relative to human investigation, by performing at-scale scans of open-source repositories. AI-enhanced dynamic analysis tools for discovering vulnerabilities can be a great ally for cyber teams.

I believe that companies that do not have well-defined, documented and audited processes may even have problems acquiring cyber insurance.

I strongly recommend that companies invest in Digital Risk Protection tools so that they are aware of how exposed they are, not only on the internet but, crucially, on the dark web and its countless forums. This will bring greater awareness of their current risks and provide an overview of actions to take even before they define their strategies for using AI.

AI is here to stay; the success we have with its adoption will be proportional to the robustness of our strategies.

Don’t miss Igor Gutierrez debating these issues in depth in the PrivSec Global panel: Adaptive Strategies: Responding to the Shifting Landscape of Threats

Organisations face an ever-evolving array of threats, ranging from cyberattacks and data breaches to emerging vulnerabilities and geopolitical risks.

This panel discussion will illuminate the dynamic nature of the threat landscape and explore proactive strategies for effectively responding to evolving threats. Join us as we explore best practices, innovative approaches and emerging trends in responding to the dynamic threat landscape.


  • Clare Messenger, Global Commercial Head of Mobile Intelligence Solutions, JT Group Limited
Session: Adaptive Strategies: Responding to the Shifting Landscape of Threats

Time: 11:30 – 12:15 GMT

Date: Wednesday 22 May 2024

The session sits within a packed two-day agenda of insight and guidance at PrivSec Global, livestreaming through Wednesday 22 and Thursday 23 May, 2024.

Discover more at PrivSec Global

As regulation gets stricter – and data and tech become more crucial – it’s increasingly clear that the skills required in each of these areas are not only connected, but inseparable.

Exclusively at PrivSec Global on 22 & 23 May 2024, industry leaders, academics and subject-matter experts unite to explore these skills and the central role they play in privacy, security and GRC.

Click here to register