
Mandiant warns about growing use of AI by malicious hackers

August 17, 2023 Hi-network.com

Researchers from Google's subsidiary Mandiant have identified a concerning trend: cybercriminals, hacktivist groups, and other digital adversaries are harnessing AI to craft sophisticated fake images and videos. This escalating adoption of AI exploits the general population's inability to distinguish real digital content from fabricated material. Although the use of AI in intrusion operations is currently limited to social engineering, researchers have detected a shift toward experimenting with publicly available AI tools, particularly for generating compelling images and high-quality content.

Mandiant's experts highlight the substantial potential of generative AI technologies to empower malicious actors, helping them scale their activities beyond their inherent capacity and fabricate realistic yet deceptive content. The researchers compare this augmentation to the advantages offered by legitimate penetration testing frameworks such as Metasploit or Cobalt Strike, which are also frequently misused by hackers.

The researchers also raise the concerning prospect of AI assisting in malware creation, even if human intervention remains necessary to correct errors. Despite that limitation, such assistance could still benefit proficient malware developers as well as those with limited technical expertise.

Advances in AI tools allow quicker and easier production of credible content, potentially heightening the effectiveness of information and influence operations. AI-generated headshots produced with generative adversarial networks (GANs) have been used since 2019 by multiple parties in various information campaigns to bolster fabricated identities.

Text-to-image models such as OpenAI's DALL-E or Midjourney introduce another layer of concern. Though less observed to date, these models possess the potential to pose a more significant deceptive threat than GANs due to their broader applications and potentially harder-to-detect nature, both by humans and AI-driven detection systems.

Moreover, AI is enhancing the art of social engineering, enabling the creation of more convincing phishing materials targeted at specific individuals or organizations. Large language models that power technologies like OpenAI's ChatGPT and Google's Bard are instrumental in crafting tailored deceitful content to deceive victims into divulging sensitive information or credentials.

Mandiant's analysis coincides with the White House's announcement that it is expediting an executive order on AI use within federal agencies, alongside ongoing congressional efforts toward regulatory measures. FBI Director Christopher Wray's earlier warning about the rising threat of AI-enabled attacks, primarily emanating from China, underscores the urgency of addressing this issue, including potential threats to American AI companies.

Notably, a Chinese-linked information operation named Dragonbridge was tracked sharing AI-generated images, including a March 2023 depiction of President Trump in an orange jumpsuit in jail, underscoring the alarming misuse of AI.

Hot tags: Cybercrime, Cybersecurity
