Artificial Intelligence (AI) is reshaping the cybersecurity arena, transforming how we guard digital resources and protect vital infrastructure, and unlocking efficiency on a previously unimaginable scale.

However, AI use also creates organisational vulnerability, particularly when the technology is exploited by malicious actors intent on disruption.

This April at PrivSec & GRC Connect Chicago, experts get to the heart of these critical issues. The in-person event will enable business leaders, heads of industry and data practitioners to better understand how we can forge data frameworks that encourage innovation while promoting good governance, regulatory compliance and security.

In the run-up to PrivSec & GRC Connect Chicago, we look at three key ways in which AI is changing security protocols and influencing the perspectives of the decision-makers.

Incident handling and resolution

SOAR (Security Orchestration, Automation, and Response) systems powered by AI can streamline the detection, assessment, and management of cyber incidents. Integrated with existing security platforms, these tools can execute detailed incident response workflows autonomously, from initial alert assessment through containment and resolution.
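As a rough illustration of the kind of playbook such a tool automates, the sketch below walks an alert from triage through containment. It is illustrative only: the `Alert` structure, severity scale, statuses and thresholds are invented for the example and do not reflect any particular SOAR product.

```python
from dataclasses import dataclass

# Illustrative severity threshold above which an alert is auto-contained.
CONTAIN_THRESHOLD = 7

@dataclass
class Alert:
    source_ip: str
    severity: int          # 0 (benign) to 10 (critical)
    status: str = "new"

def triage(alert: Alert) -> Alert:
    """Initial assessment: dismiss low-severity noise, escalate the rest."""
    alert.status = "dismissed" if alert.severity < 3 else "triaged"
    return alert

def contain(alert: Alert, blocklist: set) -> Alert:
    """Containment: isolate high-severity sources automatically."""
    if alert.status == "triaged" and alert.severity >= CONTAIN_THRESHOLD:
        blocklist.add(alert.source_ip)
        alert.status = "contained"
    return alert

def run_playbook(alerts: list, blocklist: set) -> list:
    """Execute the workflow end to end: triage, then containment."""
    for alert in alerts:
        contain(triage(alert), blocklist)
    return [alert.status for alert in alerts]

blocklist = set()
alerts = [Alert("10.0.0.5", 9), Alert("10.0.0.6", 4), Alert("10.0.0.7", 1)]
print(run_playbook(alerts, blocklist))  # one status per alert
print(blocklist)                        # sources isolated without human input
```

Real playbooks add enrichment, ticketing and analyst escalation steps, but the shape is the same: codified decisions executed faster than a human queue.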

Opportunity

Automation significantly accelerates response times, freeing security teams to concentrate on strategic work instead of repetitive tasks. It also reduces costs.

Threat

Automated processes carry risks, however, especially where chatbots handle customer service and security roles. In the UK, dissatisfaction with malfunctioning chatbots and subpar technology has driven consumer satisfaction with customer service to its lowest point in eight years.

Trust is at stake here: a recent ONS survey found that over one third (36%) of adults surveyed did not think AI could have a positive impact on their lives. Limited knowledge about AI was cited as a key reason, alongside concerns that AI is risky and untrustworthy.

CEO of the Institute of Customer Service, Jo Causon, told the Telegraph recently:

“The issue is that where the tech goes wrong, it really frustrates us. Banking apps can be brilliant when they run smoothly. The issue can be if you get into that cycle of doom with a chatbot when you’re not getting your issue resolved and you’re repeating yourself several times.”


Don’t miss at PrivSec & GRC Connect Chicago

“Best Practices for Zero-Day Vulnerability Attack Responses & Emergency Assessments”

  • Day 2 (Wed April 17)
  • 12:00-12:30pm CST

Click for more

Identity and access management (IAM)

AI is revolutionising traditional IAM methods by integrating behavioural biometrics. The technology assesses patterns in user behaviour, including keystroke dynamics, mouse movements and navigation habits, continuously verifying that the person using the system is who they claim to be.
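A toy sketch of the idea, assuming an invented enrolment baseline and tolerance. Real systems model far richer signals than average keystroke timing, but the principle of comparing a live session against a learned profile is the same.

```python
from statistics import mean

# Hypothetical enrolled baseline: the user's typical inter-keystroke
# intervals in milliseconds, captured during enrolment.
BASELINE_MS = [110, 95, 130, 105, 120]

# Illustrative tolerance: how far a session's average rhythm may drift
# from the baseline before the session is challenged.
TOLERANCE_MS = 25

def matches_profile(session_intervals, baseline=BASELINE_MS,
                    tolerance=TOLERANCE_MS) -> bool:
    """Verify identity by comparing the session's typing rhythm
    against the enrolled profile."""
    drift = abs(mean(session_intervals) - mean(baseline))
    return drift <= tolerance

# A session typed at roughly the enrolled rhythm passes silently...
print(matches_profile([108, 100, 125, 110]))
# ...while a markedly different rhythm would trigger a step-up challenge.
print(matches_profile([60, 55, 70, 58]))
```

Because the check runs continuously in the background, a stolen password alone is no longer enough to hold a session open.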

Opportunity

This method can significantly strengthen security, providing an unobtrusive protective layer that learns each user's unique behaviours and greatly reduces the risk of unauthorised access.

Threat

Rolling out behavioural biometrics demands a careful balance with privacy concerns. The technology must also distinguish between illegitimate access attempts and genuine, albeit unusual, user behaviour.

Don’t miss at PrivSec & GRC Connect Chicago

“From Challenge to Champion: Mastering Risk Management with GRC Technology”

  • Day 2 (Wed April 17)
  • 4:00-4:30pm CST

Click for more

Risk identification

Firms are increasingly employing AI to sift through vast data pools to swiftly pinpoint patterns and irregularities. As a result, businesses can identify emerging risks and vulnerabilities with greater ease.

Cutting-edge cyber-risk intelligence software utilises AI to scrutinise diverse data sets, from historical security breaches and dark web discussions to malware trends, predicting potential security gaps.
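At its simplest, the pattern-spotting such software performs amounts to flagging statistical outliers in a data feed. The sketch below is illustrative only; the event counts and threshold are invented, and production tools use far more sophisticated models over far larger data sets.

```python
from statistics import mean, stdev

# Hypothetical daily counts of failed-login events from a log feed.
history = [20, 22, 19, 21, 23, 20, 22]

def is_anomalous(observation: float, baseline: list,
                 threshold: float = 3.0) -> bool:
    """Flag an observation whose z-score against the historical
    baseline exceeds the threshold."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(observation - mu) / sigma > threshold

print(is_anomalous(95, history))   # a sudden spike stands out
print(is_anomalous(21, history))   # a typical day does not
```

Scaling this idea across breach histories, dark web chatter and malware telemetry is what lets AI surface emerging risks before they become incidents.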

Opportunity

This proactive approach allows firms to strengthen their security posture and focus their defences on the most probable attack routes, reducing the likelihood of a major breach.

Threat

Managing these intricate data pools calls for ongoing AI model training to ensure precision. Moreover, false alarms can divert attention and resources from genuine problem areas.

Don’t miss at PrivSec & GRC Connect Chicago

“The role of AI in GRC & Privacy”

  • Day 1 (Tue April 16)
  • 9:30-10:15am CST

Click for more

Speaking exclusively to GRC World Forums, AI leader Michael Bourrelli said:

“CEOs must grapple with a multitude of ethical and technical questions as they prepare their organisations for the age of AI.

“Ethically, they must consider issues such as data privacy, algorithmic bias, and accountability. Ensuring fairness and transparency in AI-driven decisions is paramount to maintaining trust among stakeholders.

“On the technical front, concerns revolve around security vulnerabilities, interpretability of AI systems, and robustness against adversarial attacks. Addressing these challenges requires a holistic approach that combines ethical principles with technical expertise.”

Discover more at PrivSec & GRC Connect Chicago

GRC, Data Protection, Security and Privacy professionals face ongoing challenges to mitigate risk, comply with regulations, and achieve their business objectives. They must:

  • Continually adopt new technologies to improve efficiency and effectiveness.
  • Build a culture of compliance and risk awareness throughout the organisation.
  • Communicate effectively with stakeholders and keep them informed of GRC activities.

PrivSec & GRC Connect Chicago takes you to the edge of the debate, uniting the most influential GRC, Data Protection, Privacy and Security professionals, to present, debate, learn and exchange ideas.

This dynamic and content-rich experience takes place over April 16-17 at the Crowne Plaza Chicago West Loop.

Click here to register for free for PrivSec & GRC Connect Chicago