Low media literacy endangers Australia’s cybersecurity


A recent study by Western Sydney University has revealed worryingly low levels of media literacy among Australians. This deficiency poses a significant IT security risk, particularly given the sophisticated capabilities of deepfake and other AI technologies. The report underscores the urgency for a national response to mitigate these risks.

The growing threat of disinformation and deepfakes presents new challenges that require high levels of media literacy across the nation. AI can produce highly convincing disinformation, increasing the risk of human error. Individuals lacking media literacy are more likely to fall prey to such schemes, potentially compromising sensitive information or systems.

Tanya Notley, an associate professor at Western Sydney University involved in the Adult Media Literacy report, explained that AI complicates media literacy. “It’s getting harder and harder to identify where AI has been used,” she said. Individuals must understand how to verify information and differentiate between credible sources and those likely to post deepfakes.

Unfortunately, about 1 in 3 Australians report having low confidence in their media literacy skills. The connection between media literacy and cybersecurity is direct: a separate study found that 74% of Chief Information Security Officers (CISOs) consider human error to be the most significant vulnerability in organizations.

Low media literacy exacerbates this issue, making individuals more susceptible to cybersecurity threats like phishing scams and social engineering. An infamous example cited in the report occurred in May, when cybercriminals used a deepfake impersonating staff at the engineering firm Arup to convince an employee to transfer $25 million to overseas bank accounts. Improving media literacy is not just an educational issue but a national security imperative.

Notley emphasized the need for a multi-pronged approach to enhance media literacy, including:

1. Media Literacy Education: Implementing robust programs in educational institutions and community organizations to equip individuals with the skills to critically evaluate digital content, covering both traditional media and AI-generated content.

2. Regulation and Policy: Developing and enforcing regulations to hold digital platforms accountable for the content they host, mandating transparency about AI-generated content and preventing the spread of disinformation.

3. Public Awareness Campaigns: Launching national campaigns to raise awareness about the risks of low media literacy and the importance of being critical consumers of information.

4. Industry Collaboration: Partnering with organizations like the Australian Media Literacy Alliance to develop tools and resources that help users identify and resist disinformation.

5. Ongoing Training and Education: Making media literacy a mandatory part of employee training, with regular updates.

The IT industry has a unique responsibility to integrate media literacy as a core component of cybersecurity.

By developing tools to detect and flag AI-generated content, technology companies can help users navigate the digital landscape more safely. While technology creates these risks, technology-powered solutions can also help mitigate them. Building a blame-free culture is equally crucial.
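One emerging direction for such flagging tools is provenance metadata, such as C2PA Content Credentials embedded in media files. The sketch below is purely illustrative, not a real verifier: it only scans for the byte signatures associated with JUMBF/C2PA containers, whereas genuine verification would require parsing and cryptographically validating the manifest. The function name and marker list are assumptions for the example.

```python
def has_content_credentials(data: bytes) -> bool:
    """Naive heuristic: does this file appear to embed C2PA provenance data?

    A real tool would parse the JUMBF box structure and validate the
    C2PA manifest's signature; here we only look for telltale byte
    signatures, which can produce false positives and negatives.
    """
    markers = (b"jumb", b"c2pa")  # JUMBF box type / C2PA manifest label
    return any(m in data for m in markers)


# Hypothetical usage: surface a warning for files with no provenance at all.
sample_with_manifest = b"\xff\xd8...jumdc2pa...manifest bytes..."
sample_plain = b"\xff\xd8\xff\xe0JFIF plain image bytes"
print(has_content_credentials(sample_with_manifest))  # True
print(has_content_credentials(sample_plain))          # False
```

Absence of a marker proves nothing on its own, so a deployed tool would treat this as one weak signal among many rather than a verdict.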

People often hesitate to report mistakes for fear of punishment; encouraging an open, confident exchange of information strengthens defenses against misinformation. Ultimately, improving media literacy across the nation is essential to bolstering Australia’s cybersecurity.