Low media literacy heightens cybersecurity risks

A recent study by Western Sydney University has revealed worryingly low levels of media literacy among Australians, and that poses a significant IT security risk. This is especially concerning as newer AI technologies make deepfakes ever more convincing. Human error remains the leading cause of cybersecurity breaches, and low media literacy exacerbates the problem.

While AI offers benefits, it also presents challenges such as disinformation and deepfakes that require high levels of media literacy to mitigate. Associate Professor Tanya Notley from Western Sydney University, involved in the Adult Media Literacy report, explained that AI introduces complexities to media literacy. “It’s really just getting harder and harder to identify where AI has been used,” she said.

Notley emphasized the need for individuals to verify the information they see and to distinguish credible sources from potential deepfakes. Currently, about 34% of Australians report low confidence in their media literacy. Education plays a crucial role: only 25% of Australians with low levels of formal education feel confident in their ability to verify information they find online.

The connection between media literacy and cybersecurity is critical. A survey found that 74% of Chief Information Security Officers (CISOs) consider human error the most significant vulnerability in organizations. When individuals cannot assess the credibility of information, they become more susceptible to phishing scams, social engineering, and other manipulations that lead to security breaches.

In May, it was confirmed that cybercriminals had used deepfaked video of senior executives at the engineering firm Arup to convince an employee to transfer $25 million to a series of Hong Kong bank accounts, highlighting the real-world consequences of low media literacy. Improving media literacy is not just a matter of education; it is a national security imperative. Notley stressed that a multi-pronged approach is necessary.

This approach should include:

1. Media Literacy Education: Educational institutions and community organizations should implement robust media literacy programs to help individuals critically evaluate digital content, including AI-generated content.

2. Regulation and Policy: Governments should develop and enforce regulations that hold digital platforms accountable for the content they host, mandate transparency about AI-generated content, and ensure proactive measures to prevent the spread of disinformation.

3. Public Awareness Campaigns: National campaigns are needed to raise awareness of the risks associated with low media literacy and the importance of being a critical consumer of information, targeting all demographics.

4. Industry Collaboration: The IT industry should collaborate with organizations like the Australian Media Literacy Alliance to develop tools and resources that help users identify and resist disinformation.

5. Training and Education: Media literacy should become a mandatory part of employee training, regularly updated as the digital landscape changes.

The IT industry has a unique responsibility to integrate media literacy as a core component of cybersecurity.

Tech companies should develop tools to detect and flag AI-generated content, helping users navigate the digital world more safely. Despite concerns about human error, CISOs are optimistic that AI-powered solutions can mitigate these risks, underscoring the potential of technology to address the problems it creates.
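To make the idea concrete, here is a minimal, illustrative sketch of the kind of rule-based flagging such tools might start from. Everything in it (the Message type, the keyword list, the trusted-domain set) is a hypothetical simplification for illustration, not any vendor's actual detection API, and real detectors rely on far more sophisticated signals.

```python
# Illustrative sketch only: a toy heuristic filter that routes suspicious
# messages to human review. Thresholds, keywords, and the Message type are
# hypothetical; this is not a production detector.
from dataclasses import dataclass


@dataclass
class Message:
    sender_domain: str
    subject: str
    body: str


URGENCY_TERMS = {"urgent", "immediately", "wire transfer", "confidential"}
TRUSTED_DOMAINS = {"example.com"}  # assumption: the organization's own domains


def flag_for_review(msg: Message) -> list[str]:
    """Return the reasons, if any, a message deserves human review."""
    reasons = []
    text = f"{msg.subject} {msg.body}".lower()
    if msg.sender_domain not in TRUSTED_DOMAINS:
        reasons.append(f"external sender: {msg.sender_domain}")
    hits = [term for term in URGENCY_TERMS if term in text]
    if hits:
        reasons.append(f"pressure language: {', '.join(hits)}")
    if "payment" in text or "bank account" in text:
        reasons.append("requests movement of funds")
    return reasons


if __name__ == "__main__":
    suspect = Message(
        "hk-payments.example.net",
        "Urgent: confidential transfer",
        "Please wire transfer funds to the new bank account immediately.",
    )
    for reason in flag_for_review(suspect):
        print("FLAG:", reason)
```

A rules-only filter like this will miss well-crafted fakes; its value is in routing borderline content to a human reviewer rather than making a final call.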

Encouraging a blame-free culture is just as crucial: employees must feel confident to speak up and report concerns without fear of punishment. One of our greatest defenses against misinformation is the free and confident exchange of information, so CISOs and IT teams should actively encourage employees to flag suspicious content and report potential deepfakes immediately.
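As one small, concrete step, reporting can be reduced to a single function call. The sketch below is a hypothetical example: the security mailbox address and SMTP host are placeholders, not a real service.

```python
# Illustrative sketch only: a one-step helper so employees can forward
# suspicious content to the security team without friction. The mailbox
# address and SMTP host are hypothetical placeholders.
import smtplib
from email.message import EmailMessage

SECURITY_MAILBOX = "security-reports@example.com"  # assumption
SMTP_HOST = "mail.example.com"                     # assumption


def report_suspicious(reporter: str, description: str, raw_content: str) -> None:
    """Send a no-blame report of suspicious content to the security team."""
    msg = EmailMessage()
    msg["From"] = reporter
    msg["To"] = SECURITY_MAILBOX
    msg["Subject"] = "Suspicious content report"
    msg.set_content(
        f"Reported by: {reporter}\n"
        f"Description: {description}\n\n"
        f"--- original content ---\n{raw_content}"
    )
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)
```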

By raising awareness and improving media literacy, Australia can better safeguard its cybersecurity landscape against the evolving threats posed by AI technologies and disinformation.