Ignatian Newsletter: 2024 - Edition #17

ICT News

Written by
Victor Dalla-Vecchia
ICT Manager

Cybersafety Part 10: Protecting Personal Data in the Digital Age

We live in a ‘Brave new [digital] world.’ How can you be sure that news items, social media feeds and advertisements you receive on your phone and browser are not part of a highly orchestrated attempt by state or private actors to influence your political or social views, not just your purchase preferences?

Governments across the world are concerned about the unprecedented volume of user information that companies like Facebook and Google collect, and about the risk that this data could be monetised and used by third parties for targeted advertising or scams, or by nefarious actors to influence the political decision-making of unsuspecting voters.

The Australian and New Zealand governments would dearly like law enforcement agencies to be able to police the algorithms companies like Facebook and Google use to determine what shows up in search results and newsfeeds, and to see the encrypted text messages exchanged between criminal organisations under investigation.

In this brave new digital age, our personal data is a commodity we are increasingly forced to hand over to corporate interests in exchange for ever more essential services. Our hapless selves sit at one end, the data behemoths at the other, with our elected leaders floundering in the middle, trying to stem the tide of privacy degradation.

Adding to this concern, your mobile phone may well be listening to your every word and sending back data that is mined for targeted advertising. Or consider the Orwellian prospect that your digital footprint could, as is being experimented with in China, be combined with facial recognition technology to build a social credit registry: a system that rewards those deemed to be good citizens with social benefits and strips those benefits from anyone judged not to be acting in society’s best interests.

Now add AI to the mix: we have entered the era of ‘deep fake’ impersonation for political, commercial or social gain, and we face a heightened risk of data breaches as cyber criminals and state actors triangulate our personally identifiable information (PII) from disparate data sets, including the prompts we type into generative AI apps like ChatGPT.

What can we do to protect the privacy of our personal data, some of which we hand over ourselves and some of which is gathered without our knowledge or consent? How can we protect ourselves against our personal data being harvested for social manipulation or commercial exploitation? Regulation and legislation are part of the solution, but equally we need service providers to act as good corporate citizens and enforce socially responsible rules.

Ultimately, it is up to us to develop our cyber awareness and think carefully before handing over our personal data when subscribing to online services, or when using generative AI apps to help us with our everyday domestic or professional tasks.