Bills
18 September 2025 • New South Wales Parliament
The Hon. EMILY SUVAAL (15:48:30): I move:
That this bill be now read a second time.
The Government is pleased to introduce the Crimes Amendment (Intimate Image and Audio Material) Bill 2025. Deepfake material in the context of intimate images commonly refers to realistic, sexually explicit material which is wholly created or generated by digital means—including through generative AI technology—without the consent of the person whom the image is intended to represent. Deepfakes are ordinarily understood to include images, video and audio. The defining character of deepfake material is that it is intended to be an extremely realistic depiction that can be treated as a genuine depiction of the subject. Because deepfakes are designed to deceive, their creation and distribution can be as harmful as the non-consensual recording and sharing of real intimate images.
I seek leave to incorporate the remainder of my speech in Hansard.
Leave granted.
In 2024, the eSafety Commissioner said that there had been a significant rise in explicit deepfakes, with potentially a 550 per cent increase year on year since 2019. Of those deepfakes, 98 per cent were pornographic, and 99 per cent of those images were of women and girls.
We know how harmful these images are. Research shows that the non-consensual development and sharing of deepfakes causes embarrassment, ridicule and distress. In some cases, the harm can extend to psychological, physiological, professional and socio‑economic impacts. Threats to make such material public can also cause fear as a form of harassment, intimidation or coercion and control.
Currently, part 3, division 15C of the Crimes Act 1900 contains offences for the non-consensual recording or distribution of intimate images, or threatening to record or distribute intimate images. These provisions were introduced in 2017—a time in which the digital landscape was vastly different.
ChatGPT and other forms of generative AI were not as widespread as they are now, and so these offences focussed on real intimate images, or images which were digitally altered to appear to be intimate images.
That is not the digital landscape we are faced with today, and the offences in this division require updating to ensure they remain fit for purpose. This bill is focussed on modernising the criminal law to ensure it keeps pace with evolving technology. It will strengthen and expand existing offences relating to the non-consensual recording or distribution of intimate images to ensure we have a robust response to deepfake sexual material, including material generated by AI.
The bill will criminalise three key behaviours in relation to sexually explicit deepfakes:
a. First, it will criminalise the creation of intimate image or audio material that is digitally generated, and criminalise the alteration of material to be intimate image or audio material, where such creation or alteration is done without the consent of the subject of the image. Currently it is only unlawful to distribute altered images.
b. Second, it will criminalise the distribution of wholly digitally generated intimate image or audio material without the consent of the subject.
c. Third, it will criminalise threatening to engage in the behaviours I have just mentioned.
The bill will also expand all offences to cover sexually explicit audio material in the same way as intimate images. This includes not just deepfake audio, but also real sexually explicit audio, or audio that has been altered to be sexually explicit.
Amendments have also been made so that key supporting provisions, most notably the power of the court to order rectification action—commonly known as "take down powers"—also apply to these expanded offences.
The reforms in this bill relate to intimate images and audio of adults.
a. This is because in the case of sexual deepfakes involving children, the child abuse material offences under part 3, division 15A of the Crimes Act 1900 apply.
b. These offences cover all depictions and descriptions of children in sexual contexts, regardless of how they are made or produced. They currently apply to depictions of those under 16, and I note that the Crimes Amendment (Sexual Offences and Female Genital Mutilation) Bill 2025 that I introduced earlier this year will raise this age to 18, if passed.
I now turn to the detail of the bill.
Currently, the relevant offences in part 3, division 15C of the Crimes Act rely on the definition of "intimate image" set out under section 91N, which provides that an intimate image is an image of a person's private parts or an image of a person engaged in a private act, in circumstances where a reasonable person would reasonably expect to be afforded privacy; or an image that has been altered to show such things.
In short, this definition only covers a real image or a real image which has been altered, for example through software like Photoshop. Current use of technology, and particularly generative AI, to create deepfake material could be considered distinct from taking a base image and altering it. This is because generative AI draws on a large dataset to create new material. Even though the end product draws from existing data and images, it could be argued that such images are not "altered", but rather "generated" or "produced". This would place AI‑generated images outside of the scope of the existing intimate image offences.
This is one of the fundamental limitations of the existing offences that the bill will address. It does this by amending these foundational definitions under schedule 1, item [2].
First, the definition of intimate image will now refer to "intimate image material". This will include an image of the private parts of a "simulated person" or of a simulated person engaged in a private act, in circumstances in which a reasonable person would reasonably expect to be afforded privacy.
Second, the bill defines "simulated person" as a person depicted in digitally generated material that either:
a. expressly purports to be a genuine depiction of an identifiable real person, or
b. so closely resembles a real person that a reasonable person who knew the real person would consider it likely to be a genuine depiction of that person.
The first limb of this definition covers images that are expressly billed as a depiction of the subject—this is often the case with deepfake sexual material. The second limb is intended to provide comprehensive coverage where the subject is not expressly identified, but the fact that the image is designed to be a depiction of them is undeniable—this will close a potential loophole whereby a person claims the resemblance is coincidental.
The purpose of the definition of "simulated person" is to ensure that the kinds of material we are capturing are those which are, by design, able to be misinterpreted as a real image. When such deepfakes are shared, the harms for the subject of the images can be as serious as the sharing of a real intimate image. It can also be very hard for victims to prove that the images are not real.
I want to highlight the importance of the term "genuine", which aligns with this policy purpose. It is not defined, so it takes on its ordinary meaning. The Macquarie Dictionary defines "genuine" as being truly such, real or authentic.
This means depictions of simulated persons do not include things such as AI‑generated art, where, either through the image itself or the context in which it is depicted, it is clearly not intended to be taken as a real image of the person.
The definition of simulated person also acts as a safeguard against inappropriate capture of material that does not have the same character because it is clearly a fictitious depiction. Where such fictitious images are, for example, still offensive, avenues remain available to report them and have them removed under the Online Safety Act 2021 through eSafety Australia.
Taken together, these definitions will expand the scope of the existing offences under section 91Q, for distribution of intimate images without consent, and section 91R, subsection (2), for threats to distribute intimate images without consent, to also apply to deepfake material that has been digitally generated, including that which is generated through AI.
Changing the definition of "intimate image" will only expand the scope of offences relating to distribution. This bill also seeks to ensure that our laws cover the production of digitally generated images.
Currently, it is an offence under section 91P to record an intimate image without consent. Section 91N defines "record" as to "record, take or capture an image by any means."
AI‑generated images are not recorded as such, but rather created or altered. The provisions relating to "recording" may be limited to actions involving a "real" image, rather than an AI‑generated deepfake or a non-sexual image that is then altered to be sexually explicit. This limitation could also apply to the offence for threatening to record an intimate image under section 91R, subsection (1).
This brings me to the second major change made by this bill. Schedule 1, item [15] creates a new offence under proposed section 91PA, which specifically criminalises both the alteration of material to make it intimate image or audio material without consent and the creation of intimate image or audio material involving a simulated person, without the need to distribute the material. This new offence fills the existing gap which arises from the confined definition of the term "record".
As with the existing offence for recording an intimate image without consent under section 91P, this new offence will carry a maximum penalty of three years imprisonment, a fine of 100 penalty units, which is $11,000, or both. The consent of the Director of Public Prosecutions will be required for the prosecution of a person under the age of 16.
Threatening to engage in this behaviour will also be a criminal offence under section 91R, through the amendments contained in schedule 1, item [21].
Schedule 2 makes consequential amendments to other legislation so that this offence is treated the same as the existing section 91P offence, including defining it as a personal violence offence and as a prescribed sexual offence.
These offences will be subject to safeguards, including the exceptions in section 91T.
Taking an image of a real person and altering it to be sexually explicit, or creating a new sexually explicit image of a real person in a manner that is meant to be akin to a genuine image, without their consent, is deeply concerning and clearly unacceptable behaviour.
The creation of deepfakes without consent represents a harmful imposition on a person's privacy and autonomy, even if the material is not ultimately shared.
Criminalisation of this behaviour clearly denounces it and we hope that it will deter people from engaging in this harmful conduct. Criminalising production of deepfakes will also help to support a broader cultural shift in how this behaviour is viewed.
These measures are consistent with comparable offences in Victoria and South Australia, which cover the production of sexually explicit material, including that made by AI.
Deepfakes do not solely take the form of intimate images. Deepfakes can also include audio-only material. This brings me to the third major change in the bill, which is the expansion of offences in part 3, division 15C to cover sexually explicit audio material.
Audio deepfakes of a sexual nature can be harmful, in the same way that deepfake images can be harmful—they can be humiliating or can be used as a form of intimidation. This is particularly true if the generated audio includes a statement which purports to identify the real person.
Currently, New South Wales offences only apply to intimate images. Schedule 1, item [2] will insert a definition of intimate audio material into section 91N, which will apply to any audio that is sexual in nature or relates to engagement in a private act, in circumstances where a reasonable person would reasonably expect to be afforded privacy. This definition will include real audio, altered audio and audio of a simulated person—in effect, the same scope as intimate image material as amended by the bill.
The bill makes consequential amendments throughout schedule 1, with the effect that all offences in the division will cover intimate audio material in the same way that they apply to intimate images.
These amendments signal to the community that, regardless of the medium, the creation and distribution of sexual deepfakes without consent is invasive and harmful in similar ways to intimate images which are made or shared without consent.
I have spoken of the main substantive changes in the bill, but I also want to note that the supporting provisions for the intimate image offences have also been expanded accordingly.
Most critically, these are the powers for rectification under section 91S. This enables a court, following a guilty finding, to order the offender to take reasonable actions to remove, retract, recover, delete or destroy relevant material connected to that conviction, in a period specified by the court. Failing to comply with such an order is a criminal offence which carries a maximum penalty of two years imprisonment, a fine of 50 penalty units, which is $5,500, or both.
Schedule 1, items [24] to [26] make amendments that ensure these powers apply to all new offences and material types that will be covered by the bill's substantive reforms.
Additionally, section 91T sets out a number of exceptions to the offences for recording or distribution of intimate images, which include:
a. that the conduct was done for genuine medical or scientific purposes, or
b. that the conduct was done by a law enforcement officer for genuine law enforcement purposes, or
c. that the conduct was required by a court or reasonably necessary for the purpose of legal proceedings, or
d. that a reasonable person would consider the conduct acceptable, with reference to, among other things, the age, intellectual capacity, vulnerability or other relevant circumstances of the depicted person, and the degree to which the accused person's actions affect the privacy of the person depicted in the image.
Schedule 1, items [27] to [30] make consequential amendments to ensure these exceptions, which are important and necessary safeguards, also apply to the new material and offences introduced by this bill.
I note that these exceptions do not currently apply to the offence under section 91R of threatening to engage in the relevant conduct. This is because such threats cannot have a legitimate purpose; they serve only to intimidate, coerce or control another. The bill does not change this position.
Finally, noting the rapid pace of technological development, schedule 1, item [31] will require a statutory review to be undertaken 12 months after commencement, with a report to be tabled within a further six months. This is a shorter period than normal for a statutory review, but it reflects the need to closely monitor these offences, with an opportunity to refine them further once we are able to consider operational data.
Section 2 of the bill provides that it will commence on a date to be fixed by proclamation. This is to enable necessary systems updates to be made, and to allow for education and training of justice agencies.
This bill is an important step forward in modernising the criminal law to keep pace with changing technology. The rapid rise of generative AI has provided many benefits and efficiencies, but like all new tools it has also opened the door to abuse and misuse.
The ease with which sexually explicit deepfake material can now be created, and then shared to embarrass or humiliate, or used as a threat to coerce or control, another person is unacceptable. Such conduct should be criminal. That is what this bill does, and why these reforms are so important.
I am proud to introduce these reforms to send the message that sexually explicit and non-consensual deepfakes are absolutely unacceptable. They are a form of abuse, objectification and dehumanisation of, in particular, women and girls in our society and there is no place for this type of behaviour in our State.
I commend the bill to the House.
Second Reading Debate
The Hon. SUSAN CARTER (15:49:51): The Opposition is happy to support the Crimes Amendment (Intimate Image and Audio Material) Bill 2025. We welcome its introduction to the House and recognise that it is a good start to deal with this important issue. But it is only a start and more work needs to be done. We also recognise that the bill builds on a foundation laid by the Coalition and follows the introduction of our previous legislation. We do not understand why the Government finds it so hard to work collegially on important and, frankly, bipartisan issues. But, as we did with the bill to better protect our war memorials, we will strive for collegiality with this bill. Its subject matter is simply too important for pettiness so we are happy to support it.
The bill aims to tackle the rapidly growing and deeply concerning issue of sexually explicit deepfakes, which cause severe harm to many individuals, particularly women, adolescents and even children. Advances in artificial intelligence have made it alarmingly easy to create realistic fake images, video, audio and text that depict people without their consent in sexually explicit situations. It is far more than just a digital nuisance; it is a profound violation of privacy, dignity and safety. It is electronic exploitation and an unwelcome commodification and reduction of the human person.
The Government's bill broadly mirrors the Opposition's legislation introduced in the other place and largely repeats its definitions, its penalties and the way in which it seeks to amend the law. However, what it does not do to the same extent is include the use of deepfake text. That remains to be addressed. Sexually explicit deepfakes are used as tools of harassment, intimidation and exploitation. The victims—overwhelmingly women—suffer not only emotional trauma and reputational damage but also ongoing psychological and social harm. The spread of such material in schools, workplaces and communities contributes to a culture that normalises gendered abuse and leaves victims feeling powerless.
The bill amends the Crimes Act to expand existing offences related to the non-consensual recording and distribution of intimate images to include digitally created or altered material, particularly deepfake images and audio. I acknowledge that the legislation on which we are now building—the intimate image legislation—was first introduced in Parliament under the leadership of the now Opposition leader and then Attorney General, the Hon. Mark Speakman. That legislation addressed what was a contemporary issue then, broadly termed "revenge porn"—the misuse of existing and real images that may have been taken with or without consent at the time.
In 2025 the contemporary challenges we face are not so much the real images but the altered or created images, audio, video and text. Schedule 1 [2] to the bill extends the existing definition to cover images generated by AI or other digital means. The definition ensures protection against realistic fake images that are designed to resemble a real person—a "simulated person". Additionally, schedule 1 [15] introduces a new offence under section 91PA of the Crimes Act, criminalising the creation or alteration of an intimate image or audio material involving a simulated person without consent. That addresses a significant gap in the existing law, which only criminalises distribution or recording of real images.
Schedule 1 [2] extends offences to cover sexually explicit audio material through the insertion of the terminology "intimate audio material" into section 91N of the Crimes Act, reflecting the same protections that apply to images. Schedule 1 [24] to [26] further strengthen enforcement powers under section 91S by expanding rectification orders that allow courts to require offenders to remove, delete or retract harmful material. This is one of the more important provisions in the bill because, while it is one thing for it to be an offence to produce such material, it is another thing and important for victims that there are take-down powers. Take-down and rectification powers are essential tools for victim protection in this expanding digital age. Moreover, schedule 1 [27] to [30] amend section 91T to clarify exceptions to the offences, ensuring legitimate uses such as medical, legal or law enforcement purposes are protected. Schedule 2 to the bill contains consequential amendments to other laws.
This is a good bill. But, as I flagged earlier, it notably fails to include provisions addressing sexually explicit deepfake text such as fabricated messages or communications generated by AI. It is a significant gap, given that AI‑generated text can cause harm and can be used to coerce and intimidate. Without explicitly addressing that medium, the law does not fully capture the scope of digital abuse emerging with generative AI. That is why the Opposition moved amendments in the other place to broaden the scope of the bill. Those amendments included a prohibition on deepfake text material to ensure that victims have comprehensive protection against all forms of digitally fabricated sexual abuse.
We are all aware that the technology of generative text fakes is very common. It uses AI applications to allow computers to generate text content on almost any subject, and it is incredibly close to actual human writing. Excluding text ignores not only the existing uses of this technology but also its associated risks. The CSIRO broadly defines deepfakes as "synthetic media generated using artificial intelligence, including images, videos, audio and even text". That is exactly why we believe the bill should be extended to text and why we moved amendments in the other place to give effect to the inclusion of text as a form of deepfake material prohibited alongside audio and images.
We remain disappointed that the Government did not support those important amendments. As good as the bill is, that remains a continuing gap in protection for women—a gap that will only broaden with time and technological change. Automated generative technology, created by machine learning models, is used to create text for fake news and to copy the writing styles and speech of authors, songwriters and celebrities to either mimic them, misrepresent them or even monetise the imitation of their works. We need to look at the way existing AI technology utilises text in combination with the range of changes in behaviour around sexual interactions being seen across society, particularly in the digital space. Sexting chat rooms as a space for individuals to engage in sexually explicit text, image or video exchanges are on the rise. OpenAI and other leading AI tools have moved to explore how to responsibly—as they say; I think that is a question for us to explore—allow users to make AI‑generated porn and other explicit content, including text.
There is also an increase in the use of AI chatbot companions, the impact of an increase in catfishing on online dating and other apps, and the prolific use of deepfakes in bullying and intimidation, including in circumstances of intimate partner violence and revenge porn. Technological capability utilising machine learning models to create text when applied to changes in sexual interaction and discourse and people's attitudes to the way technology in the digital realm is used to engage in sexualised conversation is not a challenge of the future; it is a challenge of today. It is clear that the application of deepfake text technology to sexually explicit behaviours cannot be ignored. The law must prohibit the creation or use of sexually explicit deepfake text to ensure that we also send the message that it is criminal behaviour and it will be backed up by the same penalties as those for image or audio. It is deeply disappointing that the Government has not taken the opportunity to do exactly that.
Members know that we exist in a world where people can apply technology to our names, faces and voices in a way that can create significant risk to all of us. I hesitate to suggest that members of Parliament are real celebrities, but we do have public profiles. Our reputations and our credibility within our communities are important when we are showcasing our work on behalf of the community. If sexually explicit images or videos of us were created using deepfake technology, we would expect to be protected, as would other members of the community. If an audio recording were created using a deepfake version of a member's voice, we would expect that to be treated as a breach of the law. If an image of a member is used with sexually explicit text underneath, in the member's usual tone and language, by inference suggesting to the community that those are their words because they were attached to their image, they would also expect that to be treated as a breach of the law. The Opposition asks for that to be considered by the Government, because that is an important lacuna in the bill as presented.
However, the Opposition recognises all that is good within the bill. The Opposition supports the bill but remains committed to strengthening the law to fully reflect the evolving technological landscape. We owe it to the victims, particularly women and girls, to provide clear and strong protections and hold perpetrators to account for the digital violations they commit. I commend the bill to the House.
Ms ABIGAIL BOYD (16:00:48): The Greens support the Crimes Amendment (Intimate Image and Audio Material) Bill 2025, which brings New South Wales into line with other jurisdictions in Australia by making it a crime to create and distribute sexually explicit material made or altered using artificial intelligence without consent, also commonly known as sexual deepfake materials. The bill amends the Crimes Act 1900 to expand existing offences related to the production and distribution of intimate images without consent to include the creation and distribution of sexually explicit image and audio materials that have been digitally generated by artificial intelligence [AI].
Technology-facilitated abuse is an insidious and violent form of abuse and is commonly a serious marker of increasing and escalating domestic, family and sexual violence. That form of abuse is often used as a tool to control, coerce, punish, humiliate or otherwise inflict harm on a victim. In recent years the non-consensual creation, alteration and sharing of sexually explicit material using artificial intelligence has been spotlighted as the latest form of technology-facilitated violence being utilised by perpetrators of abuse. To be clear, deepfakes and "nudify" apps are a form of gender-based violence designed for the purpose of sexually harassing and abusing women and girls and inflicting coercive control to extort victim-survivors. A 2023 study found that women make up 99 per cent of the individuals targeted in sexual deepfakes, with people with disability, First Nations peoples, the LGBTIQA+ community and younger people also heavily targeted. Even more concerningly, children are increasingly being targeted as victims of that abuse.
Earlier this month Australia's eSafety Commissioner launched enforcement action against a technology company based in the United Kingdom for enabling the creation of such material. That company operates two of the world's most visited AI-generated nudify websites. According to the eSafety Commissioner, those two services have been visited by 100,000 users every month in Australia alone. With new forms of technological advances presenting new opportunities for perpetrators of abuse every day, it is crucial that we act to not only criminalise that behaviour but also guard against highly dangerous new forms of technology-facilitated abuse before they take hold.
Importantly, the existing offences in the Crimes Act and the expanded offences in this bill only target individuals after offences have occurred or been threatened. While that is important, it fails to adequately prevent abuse and violence before it occurs. That requires targeted frontline prevention work within communities as well as legislative action to prevent the creation of that material by banning the technology being used to do so. Tech companies can and must do much more to mitigate the harm caused through the existence of those tools by safeguarding against deepfake generators, acting as soon as content is flagged to remove it, and using indicators to ensure the origins of materials can be identified. Companies can also act by working with other platforms to proactively share information and flag suspicious activity.
But, of course, we cannot possibly rely on businesses, most of which are multinational corporations that have little regard for ethics, to do the right thing. That is why The Greens have long called for government action to hold tech companies to account for enabling the creation and distribution of illegal deepfakes and nudify apps and impose a positive duty of care to prevent the creation of harmful material. I understand that, in the past month, the Federal Government has announced plans to put the onus on tech platforms for failing to prevent users from creating illegal deepfakes. It is important that that is done urgently and comprehensively and is not kicked further down the road. I urge the New South Wales Government to commit to doing more than what is in the bill before us, including proactively advocating for the Federal Government to take sweeping action to hold tech companies to account and prevent that form of technology-facilitated abuse from occurring in the first place.
I now turn to the contents of the bill. Schedule 1 [2] to the bill inserts new definitions into section 91N of the Crimes Act for "digitally generated", "simulated person", "intimate audio material" and "intimate image material". The definition of intimate audio material applies to any audio that is sexual in nature or relates to engagement in a private act in circumstances where a reasonable person would reasonably expect to be afforded privacy, including real audio, altered audio and audio of a simulated person.
The definition of intimate image material replaces the existing definition of intimate image in the Crimes Act to include images that have been altered to appear to show a person's private parts or a person engaged in a private act and images of a simulated person's private parts or of a simulated person engaged in a private act. Currently it is only unlawful to distribute images of that nature that have been altered. Under the existing provisions that were introduced in 2017, it could be argued that images altered or created using generative AI are not technically altered but are generated or produced, thus falling outside the scope of existing intimate-image offences. Simulated images would also not be captured under the existing offence. The new and expanded definitions in the bill ensure that all forms of non-consensual sexual deepfakes are considered.
New section 91PA introduces a maximum penalty of 100 penalty units or imprisonment for three years, or both, for intentionally altering an image or audio of another person in a way that makes it intimate, or for creating an intimate image or audio material of a simulated person without the consent of the real person it represents. For those offences, prosecution of a person under the age of 16 years must not be commenced without the approval of the Director of Public Prosecutions. I note that The Greens have some concerns about those new offences resulting in the disproportionate criminalisation of young people. The prosecution of a person under 16 should only occur in exceptional circumstances, given that much of this offending is perpetrated by young people in schools who are not equipped to understand the extent of the harm that their actions cause.
New section 91R (1A) introduces a maximum penalty of 100 penalty units or imprisonment for three years, or both, for threatening to alter an image or audio in a way that makes it intimate, or to create an intimate image or audio material of a simulated person, without the consent of the real person it represents. The new offences proposed in the bill only relate to images and audio of adults and not children because such material involving children is already captured under division 15A of part 3 of the Crimes Act 1900. Those existing offences already cover all depictions and descriptions of children in sexual contexts, regardless of how they are made or produced.
The bill makes consequential changes to the language in the Crimes Act to refer to the new offences where necessary and also makes consequential changes to other legislation, including the Child Protection (Working with Children) Act 2012, Crimes (Domestic and Personal Violence) Act 2007, Criminal Procedure Act 1986 and National Disability Insurance Scheme (Worker Checks) Regulation 2020 to insert the new offences where relevant.
New section 91U inserts a requirement for the Minister to review the amendments made in the bill 12 months after commencement, with a report to be tabled in Parliament within six months of that review. The Greens welcome the introduction of a timely statutory review of the provisions to ensure we are monitoring the implementation and enforcement so that any unintended consequences are addressed and rectified, which is important in the context of the rapidly evolving technological world we live in.
I understand that the domestic and family violence sector is broadly supportive of the reform. However, the sector has raised serious concerns about the impact on the ground without adequate resourcing of the investigative mechanisms to police crimes involving sexual deepfake materials. Resourcing on the ground is not keeping pace with the rapid development of AI technology. Police are already not adequately trained on even the base level offence, with substantial ongoing concerns raised about police failing to properly respond to technology‑facilitated abuse offences. The new offences in the bill must be accompanied by training and resources or they will be ineffective at best and actively harmful to victim-survivors and young offenders at worst. We must also ensure that communities are educated about the new offences and that avenues for reporting are accessible, safe and trauma-informed. The Greens are concerned that failing to do so will result in an increase in misidentification of victim-survivors, not to mention the disproportionate criminalisation of marginalised young people.
I also note that the existing offences are almost never prosecuted, similar to the Federal offence of using a carriage service to transmit material of that nature. We already know that sexual violence is the least likely violent crime to be reported, investigated, prosecuted and convicted. In the rare instances that it is reported and investigated, victim-survivors of sexual violence are continually faced with significant obstacles across nearly every stage of the justice system. There is an enormous amount of work to be done to address the underlying systemic failings that allow those forms of violence and abuse to occur and ensure victim-survivors can safely access the support, justice, healing and recovery that they need.
Preventing gendered violence and ending cycles of abuse requires targeted prevention work within communities through education and engagement, as well as fundamental changes in educational policies, structures and environments. Experts have been calling on governments to take far bolder and more ambitious action to prevent gender-based violence for decades. With new forms of technology being advanced every day, it has never been more important to invest in prevention. Specialist frontline domestic, family and sexual violence services are without a doubt the most equipped to understand and navigate emerging forms of technology‑facilitated abuse, with workers on the front line dealing with those nuances every single day.
As I have said countless times in this place, the expert workers on the front line who are best placed to respond to this are already struggling to keep up with demand because the Government is starving them of the funding needed to do so. While legislative reform is an important step in addressing gender-based violence, there is simply no substitute for funding the front line. I once again call on the Government to urgently provide existing specialist domestic, family and sexual violence services with the core funding they need to deal with the ramifications of the passage of this bill and to continue carrying out some of the vitally important work that will turn the gender-based violence crisis around. Finally, I thank the Attorney General and his office for engaging with The Greens on the bill and for the work that is being done in this space beyond that. The Greens support the bill.
The Hon. NATASHA MACLAREN-JONES (16:10:31): I contribute to debate on the Crimes Amendment (Intimate Image and Audio Material) Bill 2025. In 2016 I chaired the Standing Committee on Law and Justice inquiry into remedies for the serious invasion of privacy in New South Wales. At the time there was community concern about the significant rise in the distribution, or threatened distribution, of intimate images without a person's consent, known as "revenge pornography", and the lack of privacy protections with emerging technologies. The committee made several recommendations, including the introduction of a statutory tort of serious invasions of privacy. At the time, the New South Wales Government did not support the recommendation, mainly because the Commonwealth was not willing to pursue it.
Having said that, the New South Wales Government indicated that it would examine introducing a new criminal offence for the non-consensual sharing of intimate images. One year later the then Attorney General, Mark Speakman, introduced the Crimes Amendment (Intimate Images) Bill 2017, which marked a significant milestone in Australian law. For the first time it was a criminal offence to record, distribute or threaten to share intimate images without the other person's consent. Importantly, the legislation also empowered victims to apply for rectification orders, enabling courts to require that such images be removed or destroyed. Nine years on, in June this year, the Commonwealth enacted a statutory tort for serious invasions of privacy, now contained in schedule 2 to the Privacy Act 1988.
The technologies we examined during the inquiry were those that were prevalent at the time. They included social media platforms like Facebook and Twitter, along with clunkier mobile phone cameras, surveillance cameras, tracking devices and data breaches. Nine years on, technology has advanced further, increasing the complexity and scope of privacy risks. While the core privacy risks identified in the inquiry remain, such as non‑consensual image sharing and technology-enabled stalking, the scale, capability and subtlety of invasions have grown. We now have AI-powered surveillance. Facial recognition, behavioural analytics and real‑time biometric tracking are being used in both public and private spheres. Advanced geolocation technology is more precise because of GPS tracking through smart devices, wearables, cars and home assistants.
Social media has also evolved. Platforms now feature transitory content with vanishing messages, encrypted communications and integrated biometric information. Smart home devices with internet-connected cameras, sensors and voice assistants can potentially record audio, video and personal activity continuously. Drones are used more often, both commercially and privately, raising new legal issues regarding the aerial filming of private property and public spaces. Big data has expanded, with corporations and governments collecting, storing and analysing more data more regularly. Added to that, those data hubs are now powered by machines that have the ability to learn, increasing the risk of data breaches and misuse, including profiling, discrimination and exposure of intimate data. Deepfakes and synthetic media have led to the generation and distribution of realistic fake images, audio and video.
The bill builds on the work of the former Liberal-Nationals Government, but it also largely replicates the Crimes Amendment (Deepfake Sexual Material) Bill 2025, which was introduced by the shadow Minister for Women in August this year. The Opposition believes there is more work to be done. It is disappointing that the Government did not support the amendments moved by the Opposition in the other place. I do not intend to revisit each of those in detail. However, I mention that the bill fails to address sexually explicit deepfake text, such as AI‑generated, fabricated messages or communications. They can produce highly realistic content on nearly any subject, closely mimicking human writing.
That omission is significant because AI-generated text can cause similar harm, coercion and intimidation to other forms of digital abuse. By not explicitly covering that medium, the law fails to reflect the full scope of abuse emerging with generative AI technology. Over the past decade, technology has evolved rapidly, and it is critical that our laws keep pace to protect individuals from increasingly sophisticated forms of abuse. The Opposition will continue to advocate for laws that broaden protections to keep pace with emerging technologies, such as deepfake text, ensuring legal safeguards that adequately address all forms of digital sexual abuse.
The Hon. EMILY SUVAAL (16:15:10): On behalf of the Hon. Daniel Mookhey: In reply: I thank the Hon. Susan Carter, Ms Abigail Boyd and the Hon. Natasha Maclaren‑Jones for their contributions to debate. I acknowledge the goodwill of all members and our shared purpose in advancing this important reform, which is critical to ensuring that the criminal law responds appropriately to evolving technology. It is just as harmful to create and share deepfake sexual images and audio material as it is to record and share real intimate images, where the subject of that material does not consent. It is vile conduct designed to embarrass, humiliate and shame a person. We also know that it can be used as a threat to coerce or control.
I note that there was extensive debate in the other place about the coverage of sexually explicit deepfake material that is solely text, which the Opposition's bill also criminalises. I do not propose to restate all the points made by the Attorney General in the other place. The Government remains of the view that the expansion suggested by the Opposition carries risks of unintended capture, a risk that is particularly acute where such material is not distributed. The weight of evidence shows that the overwhelming majority of sexually explicit deepfake material takes the form of images, video or audio. It is important that we undertake reform with precision and care to ensure that the criminal law intrudes no further than necessary on civil liberties such as freedom of thought and expression.
A cautious approach is even more critical for conduct that may be engaged in by children and young people—jumping to a criminal justice response might not be the best way to address those issues. While the Government does not support the expansion at this time, it is watching the issue closely. As the Attorney General advised in the other place, the Department of Communities and Justice has been asked to look into the issue of sexually explicit deepfake text material to better understand the scope and nature of the issue and how best to address it. Additionally, under schedule 1 [28], the reforms introduced by the bill will be subject to a statutory review 12 months after commencement, with a report to be tabled within a further six months.
I acknowledge the contributions that have highlighted the need for enforceability, and the very astute observation that one of the most challenging aspects of any image-based abuse is the difficulty in taking down material from the internet. The bill adopts the existing take-down powers, which the Government considers to be sufficient at this time considering the complementary coverage provided through national schemes under Commonwealth legislation. Currently, under section 91S, when a person is convicted of one of the intimate image offences, the court can order the offender to take reasonable actions to remove, retract, recover, delete or destroy any relevant intimate image. Failure to comply with such a court order, without a reasonable excuse, is a criminal offence that carries a maximum penalty of two years imprisonment.
The bill expands on that provision to ensure that it applies both to the expanded class of conduct captured—that is, creation of deepfakes and alteration of material to be intimate material—and the expanded classes of material, which include wholly digitally generated material and intimate audio material. In addition to the powers under State legislation, eSafety Australia has a nationally significant role in supporting the taking down of harmful material, including sexually explicit deepfakes. That includes having such material removed from platforms and limiting further harassment.
The Commonwealth Online Safety Act 2021 is the key regulatory framework for keeping Australians safe online. Relevantly, that includes the Image-Based Abuse Scheme, which allows eSafety to seek the removal of intimate images or videos, including altered images, that are shared online without the consent of the person shown. eSafety can work with the platform or service that was used to post or share the relevant material and arrange for it to be removed, or prevent threats. I also note that the Commonwealth Government has recently announced that it will move to ban apps that specifically serve to create deepfake sexual material, such as nudify apps. Such reform efforts will complement the enhanced offences that the bill introduces.
I note contributions from members around not only sexually explicit deepfakes but also the rise of deepfakes relating to misinformation, fraud and other forms of unsavoury and antisocial conduct. In the case of frauds and scams, the main fraud offence, in section 192E of the Crimes Act 1900, already covers conduct of that nature. The offence has broad elements that apply where a person dishonestly and by deception obtains property belonging to another, obtains any financial advantage, or causes any financial disadvantage. The offence does not specify how the fraud is committed. It is already capable of extending to a range of technology-assisted fraudulent conduct, including the use of deepfakes or AI-generated images or audio.
Of course, not all harmful deepfakes relate to financial benefits. Responding to deepfakes generally is a challenging area of reform. It requires careful consideration of how to balance rights of expression and speech with the critical need to ensure civil public discourse and social cohesion. It is also a legislative area with overlapping responsibility. I note that the Commonwealth Government has principal responsibility for the regulation of the online environment. Other areas of law, such as defamation and privacy law, may also be relevant. The Government continues to consider the most suitable ways to respond to those issues. As the Attorney General noted in the other place, that includes working with the other Australian jurisdictions through the Standing Council of Attorneys-General.
Ms Abigail Boyd expressed concerns about police preparedness. The reforms in the bill are important and are a significant expansion of the existing framework for the non-consensual recording and distribution of intimate images. As noted in the second reading speech, it is important that such reforms are able to be operationalised properly, which is why they commence on a date to be fixed by proclamation. That enables the necessary systems upgrades to be made, policies and procedures to be updated, and training to be delivered. The Government appreciates that ongoing monitoring and review is critical to any law reform, and that need is particularly acute when dealing with new and emerging technology, which continues to evolve at a rapid pace. Schedule 1 [28] to the bill ensures that the reforms introduced by the bill will be subject to a statutory review 12 months after commencement, with a report to be tabled within a further six months.
That is a shorter period than normal for a statutory review, which indicates the need to closely monitor the offences, with an opportunity to further refine the offences once we are able to consider operational data. The bill is an important step forward to ensure that the criminal law remains responsive to our changing social and technological landscape. I acknowledge that members of both Houses have noted that changing the law is but one part of the picture. We are committed to exploring further opportunities to tackle this insidious behaviour from multiple angles, including in collaboration and consultation with the Commonwealth Government and other States and Territories where appropriate. Sexually explicit and non-consensual deepfakes are a form of abuse, objectification and dehumanisation of women and girls in particular. There is no place for such conduct in New South Wales. I commend the bill to the House.
The DEPUTY PRESIDENT (The Hon. Rod Roberts): The question is that this bill be now read a second time.
Motion agreed to.
Third Reading
The Hon. EMILY SUVAAL: On behalf of the Hon. Daniel Mookhey: I move:
That this bill be now read a third time.
Motion agreed to.