Audio-Jacking: Deep Faking Phone Calls

The possibilities of generative AI are massive, but so are attempts to leverage large language models (LLMs) in hacking, especially social engineering. Chenta Lee, Chief Architect of Threat Intelligence at IBM Security, has provided a proof of concept for one such attack, called “Audio Jacking.” Jeff Crume walks through how an audio-jacking attack might work and what you can do to protect yourself against it.
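To give a rough sense of the flow Jeff describes, the sketch below models an audio-jacking relay in Python: intercepted call audio is transcribed, scanned for a trigger phrase, and the audio that follows is swapped for a voice-cloned substitute. This is a conceptual sketch only, not IBM's proof of concept; the transcribe_chunk and clone_voice helpers are hypothetical stand-ins for real speech-to-text and voice-cloning models.

```python
import re
from typing import Iterable, Iterator

# Phrase that signals sensitive content (e.g., an account number) is about to be spoken.
TRIGGER = re.compile(r"my account number is", re.IGNORECASE)


def transcribe_chunk(audio_chunk: bytes) -> str:
    """Hypothetical speech-to-text stand-in; a real attack would use an STT model."""
    raise NotImplementedError


def clone_voice(text: str, speaker_sample: bytes) -> bytes:
    """Hypothetical voice-cloning stand-in that speaks `text` in the victim's voice."""
    raise NotImplementedError


def audio_jack(call_audio: Iterable[bytes],
               speaker_sample: bytes,
               attacker_detail: str) -> Iterator[bytes]:
    """Relay a live call, substituting deepfaked audio when the trigger phrase appears."""
    for chunk in call_audio:
        text = transcribe_chunk(chunk)   # transcribe each intercepted frame
        if TRIGGER.search(text):
            # Replace what follows the trigger with cloned audio carrying
            # the attacker's detail (e.g., a different account number).
            yield clone_voice(f"My account number is {attacker_detail}",
                              speaker_sample)
        else:
            yield chunk                  # everything else passes through unchanged
```

Because the substitution hinges on recognizing a trigger phrase in the transcript, verifying sensitive details out of band or paraphrasing them back rather than repeating them verbatim are among the defenses the video discusses.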