{"id":902749,"date":"2024-05-03T08:39:54","date_gmt":"2024-05-03T12:39:54","guid":{"rendered":"https:\/\/glaad.org\/?p=902749"},"modified":"2024-05-03T08:39:54","modified_gmt":"2024-05-03T12:39:54","slug":"input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases","status":"publish","type":"post","link":"https:\/\/glaad.org\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\/","title":{"rendered":"Input from GLAAD for Oversight Board on \u201cExplicit AI Images of Female Public Figures\u201d Cases"},"content":{"rendered":"<p>As the world\u2019s largest LGBTQ media advocacy organization and as leading experts in LGBTQ tech accountability, GLAAD\u2019s <a href=\"\/smsi\/lgbtq-social-media-safety-program\/\">Social Media Safety program<\/a> provides ongoing key stakeholder guidance with regard to LGBTQ safety, privacy, and expression to social media platforms, including Meta\u2019s Facebook, Instagram, and Threads. In addition to the following specific guidance to the Oversight Board on the \u201c<a href=\"https:\/\/oversightboard.com\/news\/361856206851167-oversight-board-announces-two-new-cases-on-explicit-ai-images-of-female-public-figures\/\" target=\"_blank\" rel=\"noopener\">Explicit AI Images of Female Public Figures\u201d<\/a> cases (case numbers: 2024-007-IG-UA, 2024-008-FB-UA) we urge the Oversight Board to refer to the <a href=\"https:\/\/glaad.org\/smsi\/lgbtq-social-media-safety-program\/\">2023 edition<\/a> of our annual <a href=\"https:\/\/www.glaad.org\/smsi\">Social Media Safety Index report<\/a> for additional context.<\/p>\n<p>This public comment from GLAAD is specifically addressing the Board\u2019s request for input on: \u201cThe nature and gravity of harms posed by deepfake pornography including how those harms affect women, especially women who are public figures.\u201d<\/p>\n<p>The use of deepfake technology to create malicious imagery intended to bully, harass, and demean 
women (especially women who are public figures) is extremely serious, and violates Meta\u2019s existing <a href=\"https:\/\/transparency.meta.com\/policies\/community-standards\/bullying-harassment\/\" target=\"_blank\" rel=\"noopener\">bullying and harassment policies<\/a>, which include an array of protections (e.g. \u201cEveryone is protected from: \u2026 Severe sexualized commentary. Derogatory sexualized photoshop or drawings.\u201d<sup>[1]<\/sup>). It is vitally important that such imagery be clearly interpreted and categorized as malicious, identified as violative by Meta\u2019s moderation systems, and mitigated accordingly. From GLAAD\u2019s years of experience with the company\u2019s convoluted interpretations of its own policies, it is easy to anticipate that the word \u201cderogatory\u201d in the policy is likely to be used by the company as an opportunity to not enforce the policy. To be clear, such maliciously manufactured sexualized deepfake content is inherently, by definition, <a href=\"https:\/\/www.cyber.forum.yale.edu\/blog\/2021\/7\/20\/deepfake-pornography-beyond-defamation-law\">derogatory<\/a>.<\/p>\n<p>Also relevant here is the concept of \u201cmalign creativity,\u201d as noted in <a href=\"\/releases\/glaad-submits-public-comment-to-oversight-board-on-facebooks-anti-trans-hate-content-case\/\">GLAAD\u2019s public comment<\/a> for the Oversight Board\u2019s September 2023 <a href=\"https:\/\/oversightboard.com\/news\/689065426607908-oversight-board-announces-post-in-polish-targeting-trans-people-case\/\" target=\"_blank\" rel=\"noopener\"><strong>Post in Polish Targeting Trans People case<\/strong><\/a><strong> (2023-023-FB-UA): <\/strong>\u201cHate content, seeded over time to foster dehumanizing narratives in politics and society, often violates Meta\u2019s Community Guidelines but uses disingenuous rhetoric (including satire and humor) to circumvent safeguards.<sup>[2]<\/sup> Meta is failing to adequately confront the issues of 
\u2018malign creativity\u2019 that allow for unmitigated hate speech as bad actors adapt to moderation policies in a dynamic process.<sup>[3]<\/sup>\u201d<\/p>\n<p>Even prior to the proliferation of current deepfake technology, such methods of <a href=\"https:\/\/www.brookings.edu\/articles\/the-threat-posed-by-deepfakes-to-marginalized-communities\/\" target=\"_blank\" rel=\"noopener\">targeting women<\/a> (especially women of color, and especially Black women, and especially public figures) were well established as extremely harmful forms of bullying and harassment. As noted in the 2023 <a href=\"https:\/\/glitchcharity.co.uk\/wp-content\/uploads\/2023\/07\/Glitch-Misogynoir-Report_Final_18Jul_v5_Single-Pages.pdf\" target=\"_blank\" rel=\"noopener\">Glitch Digital Misogynoir Report<\/a>: \u201cDigital misogynoir is the continued, unchecked, and often violent dehumanisation of Black women on social media, as well as through other forms such as algorithmic discrimination. Digital misogynoir is particularly dangerous because of its ability to incite offline violence. For example, after spending time on far-right social platforms, white supremacist Dylann Roof went on to murder nine Black church members, seven of whom were women, while they were at bible study. In the UK, misogynoir has recently been prominent in the sustained and targeted harassment of Meghan Markle in the tabloid press and online.\u201d<sup>[4]<\/sup><\/p>\n<h3><strong>The Problem of Public Figure Loopholes and Self-Reporting Requirements<\/strong><\/h3>\n<p>Meta\u2019s public figure loophole in several of its policies continues to harm not only the public figures who remain unprotected by Meta\u2019s policies; these loopholes (in which Meta allows egregious hate content to remain unmitigated on its platforms) also harm members of the protected characteristic (PC) groups who share the identities of those targeted. 
In this instance, deepfake pornography targets all women, girls, and femme-identified people. A 2022 report commissioned by UltraViolet, GLAAD, Kairos, and Women\u2019s March shows that women, people of color, and LGBTQ people experience higher levels of harassment and threats of violence on social media than other users, and also found that attacks on the basis of identity harm others who share those identities (46% of women feel personally attacked from <em>witnessing<\/em> harassment against women who are public figures). \u201cThis means, substantively speaking, that the problem of online harassment is not only one that affects the victims of harassment themselves, but the witnesses of the harassment.\u201d<sup>[5]<\/sup><\/p>\n<p>Currently Meta\u2019s Tier 3 Bullying and Harassment policy is phrased in such a way that public figures are specifically excluded from these protections: \u201cWhen self-reported, private minors, private adults, and minor involuntary public figures are protected from the following: \u2026 <strong>Unwanted manipulated imagery<\/strong>.\u201d<sup>[6]<\/sup><\/p>\n<p>Meta should update this policy and protect public figures from unwanted manipulated imagery. Further, many of these policies require (prohibitively burdensome) self-reporting in order for the company to evaluate such content for mitigation. 
For example, Meta\u2019s bullying and harassment policy page concludes with a list of items for which \u201cwe require additional information or context to enforce.\u201d This begins with the following relevant policy (which requires self-reporting): \u201c<strong>Post content sexualizing a public figure.<\/strong> We will remove this content when we have confirmation from the target or an authorized representative of the target that the content is unwanted.\u201d<sup>[7]<\/sup><\/p>\n<p>This is a common feature of how Meta\u2019s policies are constructed \u2014 they feature layers of requirements (especially public figure loopholes and self-reporting requirements) that leave many policies effectively unenforced. For example, Meta\u2019s policy that relates to targeted misgendering (which is explained in further detail <a href=\"\/social-media-platform-policies-targeted-misgendering-deadnaming-hate-speech\/\">here<\/a>) is diluted by these same requirements.<\/p>\n<p>With this status-quo policy framing, Meta facilitates enormous quantities of harmful misogynist content that plagues Instagram, Facebook, and Threads, and sets a standard that normalizes contempt and hatred of women.<sup>[8]<\/sup><\/p>\n<p>GLAAD has had the repeated experience of engaging with Meta\u2019s trust and safety teams and hearing back incoherent reasoning in which the company seems strangely determined, in as many instances as possible, to render its own policies inapplicable. 
The company\u2019s goal seems to be, as much as possible, to lean towards loopholes and reasoning that allows harmful content to remain unmitigated, rather than to apply and enforce policies to protect users from harm.<\/p>\n<h3><strong>The Need for Agile Common Sense Policy Development and Enforcement<\/strong><\/h3>\n<p>It is crucially important that platforms such as Meta recognize that bad actors will continue to manufacture content, tropes, and vehicles of hate, harassment, and disinformation that intentionally try to muddy the waters and confuse platforms about their hate-driven nature. Such disingenuous maliciousness must be seen for what it is \u2014 cleverness meant to evade community guidelines and hate speech policy violations. Meta (and other companies) have policy development and enforcement teams for this very reason.<\/p>\n<p>A case in point of how it is possible to recognize such malicious creative content for what it is, generate a policy to address it, and then enforce the policy \u2014 is Meta\u2019s development of its Holocaust denial policy. This policy was <a href=\"https:\/\/www.washingtonpost.com\/technology\/2020\/10\/12\/zuckerberg-holocaust-denial-facebook\/\" target=\"_blank\" rel=\"noopener\">implemented<\/a> in October 2020, after years of guidance and input from advocacy groups (and subsequent to CEO Mark Zuckerberg\u2019s July 2018 <a href=\"https:\/\/www.vox.com\/explainers\/2018\/7\/20\/17590694\/mark-zuckerberg-facebook-holocaust-denial-recode\" target=\"_blank\" rel=\"noopener\">statement in an interview with journalist Kara Swisher<\/a> about Holocaust denial content that \u201cat the end of the day, I don\u2019t believe that our platform should take that down\u201d). While it took years for Facebook to finally adopt the policy to recognize such content as hate speech, it was always \u2014 from day one \u2014 entirely clear that Holocaust denial is a form of antisemitism. It is just a creative form of it. 
Any reasonable person can look at such material from a good-faith perspective and see the malicious intent and harmful effects.<\/p>\n<p>Similarly, intentional targeted misgendering and deadnaming of public figures is anti-trans hate speech, and possesses all of the hallmark qualities of hate speech, and yet some platforms, including Meta, continue to resist expressly characterizing it as such. (To be clear, this is about targeted intentional instances of promoting anti-trans animus, not about accidentally getting someone\u2019s pronouns wrong.)<\/p>\n<p>As in previous Oversight Board cases (the <a href=\"https:\/\/oversightboard.com\/news\/689065426607908-oversight-board-announces-post-in-polish-targeting-trans-people-case\/\" target=\"_blank\" rel=\"noopener\"><strong>Post in Polish Targeting Trans People case<\/strong><\/a>, the <a href=\"https:\/\/www.oversightboard.com\/news\/698422811785085-oversight-board-announces-two-cases-altered-video-of-president-biden-and-weapons-post-linked-to-sudan-s-conflict\/\" target=\"_blank\" rel=\"noopener\"><strong>Altered Video of President Biden case<\/strong><\/a>, and others), malicious deepfake pornography content is highly consequential and dangerous, and is causing real-world harms to the specific people who are targeted and to other women, girls, and femme-identified people, as well as contributing to a general pollution of our information ecosystem with toxic content.<\/p>\n<p>It is also important to note that a considerable amount of such content (bullying and harassing people using false, sexualized depictions) is often manufactured and amplified by high-follower hate accounts. These accounts are motivated not only by animus towards historically marginalized groups (targeting people on the basis of their protected characteristics is prohibited by Meta\u2019s community guidelines), but also by financial incentives. 
Maximizing engagement and generating revenue (via increasingly toxic, false, and hateful content) is a significant motivation for the perpetuation of such harmful, dangerous, dehumanizing material.<\/p>\n<p>In conclusion, we reiterate that this specific kind of weaponized speech \u2014 false and malicious sexualized depictions of women \u2014 is a dangerous and prevalent form of hate, harassment, and bullying, and has been for many years now. Meta is well aware of all of this (hence the existence of its own policies). Meta should urgently and meaningfully interpret and enforce these policies to effectively mitigate such material, while also prioritizing the equally important need to not suppress or censor legitimate content and accounts.<\/p>\n<h3><strong>About the GLAAD Social Media Safety Program<\/strong><\/h3>\n<p>As the leading national LGBTQ media advocacy organization, GLAAD is working every day to hold tech companies and social media platforms accountable, and to secure safe online spaces for LGBTQ people. The GLAAD <a href=\"\/smsi\/lgbtq-social-media-safety-program\/\">Social Media Safety (SMS) program<\/a> researches, monitors, and reports on a variety of issues facing LGBTQ social media users \u2014 with a focus on safety, privacy, and expression. The SMS program has consulted directly with platforms and tech companies on some of the most significant LGBTQ policy and product developments over the years. 
In addition to ongoing advocacy work with platforms (including TikTok, X\/Twitter, YouTube, and Meta&#8217;s Facebook, Instagram, Threads, and others), and issuing the highly-respected annual <a href=\"\/smsi\">Social Media Safety Index (SMSI) report<\/a>, the SMS program produces <a href=\"\/smsi\/lgbtq-digital-safety-guide\/\">resources<\/a>, <a href=\"\/smsi\/anti-lgbtq-online-hate-speech-disinformation-guide\/\">guides<\/a>, <a href=\"\/smsi\/report-meta-fails-to-moderate-extreme-anti-trans-hate-across-facebook-instagram-and-threads\/\">publications<\/a>, and <a href=\"\/lgbtq-celebrities-allies-letter-facebook-instagram-youtube-tiktok-twitter-anti-trans-hate-disinformation\/\">campaigns<\/a>, and actively works to educate the general public and raise awareness in the media about <a href=\"https:\/\/www.washingtonpost.com\/technology\/2024\/03\/27\/meta-glaad-report-released\/\" target=\"_blank\" rel=\"noopener\">LGBTQ social media safety issues<\/a>, especially anti-LGBTQ hate and disinformation.<\/p>\n<hr \/>\n<p><sup>[1]<\/sup> <a href=\"https:\/\/transparency.meta.com\/policies\/community-standards\/bullying-harassment\/\" target=\"_blank\" rel=\"noopener\">Bullying and Harassment | Transparency Center<\/a>, Meta<\/p>\n<p><sup>[2]<\/sup> United Nations, <em>Report: Online hate increasing against minorities<\/em> (March 2021); Harel, Tal Orian, et al. <a href=\"https:\/\/doi.org\/10.1177\/2056305120913983\" target=\"_blank\" rel=\"noopener\">\u201cThe Normalization of Hatred: Identity, Affective Polarization, and Dehumanization on Facebook in the Context of Intractable Political Conflict.<\/a>\u201d <em>Social Media + Society<\/em>, vol. 6, no. 2, Apr. 2020, p. 
205630512091398.<\/p>\n<p><sup>[3]<\/sup> Malign Creativity: <a href=\"https:\/\/www.wilsoncenter.org\/publication\/malign-creativity-how-gender-sex-and-lies-are-weaponized-against-women-online\" target=\"_blank\" rel=\"noopener\">How Gender, Sex, and Lies are Weaponized Against Women Online<\/a>, Wilson Center.<\/p>\n<p><sup>[4]<\/sup> <a href=\"https:\/\/glitchcharity.co.uk\/wp-content\/uploads\/2023\/07\/Glitch-Misogynoir-Report_Final_18Jul_v5_Single-Pages.pdf\" target=\"_blank\" rel=\"noopener\">The Digital Misogynoir Report<\/a>, Glitch.<\/p>\n<p><sup>[5]<\/sup> <a href=\"https:\/\/weareultraviolet.org\/from-url-to-irl-the-impact-of-social-media-on-people-of-color-women-and-lgbtq-communities-by-ultraviolet-glaad-kairos-womens-march\/\" target=\"_blank\" rel=\"noopener\">From URL to IRL: The Impact of Social Media on People of Color, Women, and LGBTQ+ Communities<\/a>, UltraViolet, GLAAD, Kairos, Women\u2019s March.<\/p>\n<p><sup>[6]<\/sup> <a href=\"https:\/\/transparency.meta.com\/policies\/community-standards\/bullying-harassment\/\" target=\"_blank\" rel=\"noopener\">Bullying and Harassment | Transparency Center<\/a>, Meta<\/p>\n<p><sup>[7]<\/sup> <a href=\"https:\/\/transparency.meta.com\/policies\/community-standards\/bullying-harassment\/\" target=\"_blank\" rel=\"noopener\">Bullying and Harassment | Transparency Center<\/a>, Meta<\/p>\n<p><sup>[8]<\/sup> <a href=\"https:\/\/www.rte.ie\/news\/ireland\/2024\/0417\/1444148-social-media-study\/\" target=\"_blank\" rel=\"noopener\">Social media \u2018bombarding\u2019 boys with misogynist content<\/a>, RTE<\/p>\n","protected":false},"excerpt":{"rendered":"<p>As the world\u2019s largest LGBTQ media advocacy organization and as leading experts in LGBTQ tech accountability, GLAAD\u2019s Social Media Safety program provides ongoing key stakeholder guidance with regard to LGBTQ safety, privacy, and expression to social media platforms, including Meta\u2019s Facebook, Instagram, and Threads. 
In addition to the following specific guidance to the Oversight Board<\/p>\n","protected":false},"author":507,"featured_media":902751,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"apple_news_api_created_at":"","apple_news_api_id":"","apple_news_api_modified_at":"","apple_news_api_revision":"","apple_news_api_share_url":"","apple_news_cover_media_provider":"image","apple_news_coverimage":0,"apple_news_coverimage_caption":"","apple_news_cover_video_id":0,"apple_news_cover_video_url":"","apple_news_cover_embedwebvideo_url":"","apple_news_is_hidden":"","apple_news_is_paid":"","apple_news_is_preview":"","apple_news_is_sponsored":"","apple_news_maturity_rating":"","apple_news_metadata":"\"\"","apple_news_pullquote":"","apple_news_pullquote_position":"middle","apple_news_slug":"","apple_news_sections":[],"apple_news_suppress_video_url":false,"apple_news_use_image_component":false,"footnotes":""},"categories":[138422,138732,138405,139172],"tags":[137638,139784],"ppma_author":[139047],"class_list":{"0":"post-902749","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-digital","8":"category-featured","9":"category-news","10":"category-tech","11":"tag-meta","12":"tag-social-media-safety"},"aioseo_notices":[],"apple_news_notices":[],"aioseo_head":"\n\t\t<!-- All in One SEO Pro 4.8.1.1 - aioseo.com -->\n\t<meta name=\"description\" content=\"As the world\u2019s largest LGBTQ media advocacy organization and as leading experts in LGBTQ tech accountability, GLAAD\u2019s Social Media Safety program provides ongoing key stakeholder guidance with regard to LGBTQ safety, privacy, and expression to social media platforms, including Meta\u2019s Facebook, Instagram, and Threads. 
In addition to the following specific guidance to the Oversight Board\" \/>\n\t<meta name=\"robots\" content=\"max-image-preview:large\" \/>\n\t<meta name=\"author\" content=\"GLAAD\"\/>\n\t<meta name=\"google-site-verification\" content=\"393886352\" \/>\n\t<link rel=\"canonical\" href=\"https:\/\/glaad.org\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\/\" \/>\n\t<meta name=\"generator\" content=\"All in One SEO Pro (AIOSEO) 4.8.1.1\" \/>\n\t\t<meta property=\"og:locale\" content=\"en_US\" \/>\n\t\t<meta property=\"og:site_name\" content=\"GLAAD | GLAAD rewrites the script for LGBTQ acceptance.\" \/>\n\t\t<meta property=\"og:type\" content=\"article\" \/>\n\t\t<meta property=\"og:title\" content=\"Input from GLAAD for Oversight Board on \u201cExplicit AI Images of Female Public Figures\u201d Cases | GLAAD\" \/>\n\t\t<meta property=\"og:description\" content=\"As the world\u2019s largest LGBTQ media advocacy organization and as leading experts in LGBTQ tech accountability, GLAAD\u2019s Social Media Safety program provides ongoing key stakeholder guidance with regard to LGBTQ safety, privacy, and expression to social media platforms, including Meta\u2019s Facebook, Instagram, and Threads. 
In addition to the following specific guidance to the Oversight Board\" \/>\n\t\t<meta property=\"og:url\" content=\"https:\/\/glaad.org\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\/\" \/>\n\t\t<meta property=\"og:image\" content=\"https:\/\/media.glaad.org\/wp-content\/uploads\/2024\/05\/02151852\/GettyImages-2150495028.jpg\" \/>\n\t\t<meta property=\"og:image:secure_url\" content=\"https:\/\/media.glaad.org\/wp-content\/uploads\/2024\/05\/02151852\/GettyImages-2150495028.jpg\" \/>\n\t\t<meta property=\"og:image:width\" content=\"1920\" \/>\n\t\t<meta property=\"og:image:height\" content=\"1323\" \/>\n\t\t<meta property=\"article:published_time\" content=\"2024-05-03T12:39:54+00:00\" \/>\n\t\t<meta property=\"article:modified_time\" content=\"2024-05-03T12:39:54+00:00\" \/>\n\t\t<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/GLAAD\/\" \/>\n\t\t<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n\t\t<meta name=\"twitter:site\" content=\"@glaad\" \/>\n\t\t<meta name=\"twitter:title\" content=\"Input from GLAAD for Oversight Board on \u201cExplicit AI Images of Female Public Figures\u201d Cases | GLAAD\" \/>\n\t\t<meta name=\"twitter:description\" content=\"As the world\u2019s largest LGBTQ media advocacy organization and as leading experts in LGBTQ tech accountability, GLAAD\u2019s Social Media Safety program provides ongoing key stakeholder guidance with regard to LGBTQ safety, privacy, and expression to social media platforms, including Meta\u2019s Facebook, Instagram, and Threads. 
In addition to the following specific guidance to the Oversight Board\" \/>\n\t\t<meta name=\"twitter:creator\" content=\"@glaad\" \/>\n\t\t<meta name=\"twitter:image\" content=\"https:\/\/media.glaad.org\/wp-content\/uploads\/2024\/05\/02151852\/GettyImages-2150495028.jpg\" \/>\n\t\t<script type=\"application\/ld+json\" class=\"aioseo-schema\">\n\t\t\t{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"BlogPosting\",\"@id\":\"https:\\\/\\\/glaad.org\\\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\\\/#blogposting\",\"name\":\"Input from GLAAD for Oversight Board on \\u201cExplicit AI Images of Female Public Figures\\u201d Cases | GLAAD\",\"headline\":\"Input from GLAAD for Oversight Board on \\u201cExplicit AI Images of Female Public Figures\\u201d Cases\",\"author\":{\"@id\":\"https:\\\/\\\/glaad.org\\\/author\\\/glaad\\\/#author\"},\"publisher\":{\"@id\":\"https:\\\/\\\/glaad.org\\\/#organization\"},\"image\":{\"@type\":\"ImageObject\",\"url\":\"https:\\\/\\\/media.glaad.org\\\/wp-content\\\/uploads\\\/2024\\\/05\\\/02151852\\\/GettyImages-2150495028.jpg\",\"width\":1920,\"height\":1323,\"caption\":\"Photo by Artur Widak\\\/NurPhoto via Getty Images\"},\"datePublished\":\"2024-05-03T08:39:54-04:00\",\"dateModified\":\"2024-05-03T08:39:54-04:00\",\"inLanguage\":\"en-US\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/glaad.org\\\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\\\/#webpage\"},\"isPartOf\":{\"@id\":\"https:\\\/\\\/glaad.org\\\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\\\/#webpage\"},\"articleSection\":\"Digital, Featured Story, News, Tech, Meta, Social Media Safety, 
GLAAD\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/glaad.org\\\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\\\/#breadcrumblist\",\"itemListElement\":[{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/glaad.org\\\/#listItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/glaad.org\\\/\",\"nextItem\":{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/glaad.org\\\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\\\/#listItem\",\"name\":\"Input from GLAAD for Oversight Board on \\u201cExplicit AI Images of Female Public Figures\\u201d Cases\"}},{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/glaad.org\\\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\\\/#listItem\",\"position\":2,\"name\":\"Input from GLAAD for Oversight Board on \\u201cExplicit AI Images of Female Public Figures\\u201d Cases\",\"previousItem\":{\"@type\":\"ListItem\",\"@id\":\"https:\\\/\\\/glaad.org\\\/#listItem\",\"name\":\"Home\"}}]},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/glaad.org\\\/#organization\",\"name\":\"GLAAD\",\"description\":\"GLAAD rewrites the script for LGBTQ acceptance.\",\"url\":\"https:\\\/\\\/glaad.org\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"url\":\"https:\\\/\\\/media.glaad.org\\\/wp-content\\\/uploads\\\/2022\\\/11\\\/20110804\\\/Glaad_Cyan.png\",\"@id\":\"https:\\\/\\\/glaad.org\\\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\\\/#organizationLogo\",\"width\":1200,\"height\":639,\"caption\":\"GLAAD 
logo\"},\"image\":{\"@id\":\"https:\\\/\\\/glaad.org\\\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\\\/#organizationLogo\"},\"sameAs\":[\"https:\\\/\\\/www.facebook.com\\\/GLAAD\\\/\",\"https:\\\/\\\/twitter.com\\\/glaad\",\"https:\\\/\\\/www.instagram.com\\\/glaad\\\/\",\"https:\\\/\\\/www.youtube.com\\\/glaad\",\"https:\\\/\\\/www.linkedin.com\\\/company\\\/glaad\\\/\",\"https:\\\/\\\/glaad.tumblr.com\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/glaad.org\\\/author\\\/glaad\\\/#author\",\"url\":\"https:\\\/\\\/glaad.org\\\/author\\\/glaad\\\/\",\"name\":\"GLAAD\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\\\/\\\/glaad.org\\\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\\\/#authorImage\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/c29b0ac1836c1412cd6fe864aa7a613a6e86a0c53612b8dfe02dcdc4fc2bea68?s=96&d=mm&r=g\",\"width\":96,\"height\":96,\"caption\":\"GLAAD\"},\"sameAs\":[\"https:\\\/\\\/glaad.org\\\/author\\\/glaad\"]},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/glaad.org\\\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\\\/#webpage\",\"url\":\"https:\\\/\\\/glaad.org\\\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\\\/\",\"name\":\"Input from GLAAD for Oversight Board on \\u201cExplicit AI Images of Female Public Figures\\u201d Cases | GLAAD\",\"description\":\"As the world\\u2019s largest LGBTQ media advocacy organization and as leading experts in LGBTQ tech accountability, GLAAD\\u2019s Social Media Safety program provides ongoing key stakeholder guidance with regard to LGBTQ safety, privacy, and expression to social media platforms, including Meta\\u2019s Facebook, Instagram, and Threads. 
In addition to the following specific guidance to the Oversight Board\",\"inLanguage\":\"en-US\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/glaad.org\\\/#website\"},\"breadcrumb\":{\"@id\":\"https:\\\/\\\/glaad.org\\\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\\\/#breadcrumblist\"},\"author\":{\"@id\":\"https:\\\/\\\/glaad.org\\\/author\\\/glaad\\\/#author\"},\"creator\":{\"@id\":\"https:\\\/\\\/glaad.org\\\/author\\\/glaad\\\/#author\"},\"image\":{\"@type\":\"ImageObject\",\"url\":\"https:\\\/\\\/media.glaad.org\\\/wp-content\\\/uploads\\\/2024\\\/05\\\/02151852\\\/GettyImages-2150495028.jpg\",\"@id\":\"https:\\\/\\\/glaad.org\\\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\\\/#mainImage\",\"width\":1920,\"height\":1323,\"caption\":\"Photo by Artur Widak\\\/NurPhoto via Getty Images\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/glaad.org\\\/input-from-glaad-for-oversight-board-on-explicit-ai-images-of-female-public-figures-cases\\\/#mainImage\"},\"datePublished\":\"2024-05-03T08:39:54-04:00\",\"dateModified\":\"2024-05-03T08:39:54-04:00\"},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/glaad.org\\\/#website\",\"url\":\"https:\\\/\\\/glaad.org\\\/\",\"name\":\"GLAAD\",\"description\":\"GLAAD rewrites the script for LGBTQ acceptance.\",\"inLanguage\":\"en-US\",\"publisher\":{\"@id\":\"https:\\\/\\\/glaad.org\\\/#organization\"}}]}\n\t\t<\/script>\n\t\t<!-- All in One SEO Pro -->\r\n\t\t<title>Input from GLAAD for Oversight Board on \u201cExplicit AI Images of Female Public Figures\u201d Cases | GLAAD<\/title>\n\n","aioseo_head_json":{"title":"Input from GLAAD for Oversight Board on \u201cExplicit AI Images of Female Public Figures\u201d Cases | GLAAD","description":"As the world\u2019s largest LGBTQ media advocacy organization and as leading experts in LGBTQ tech accountability, GLAAD\u2019s Social Media Safety program provides ongoing key stakeholder guidance with regard to 