{"id":219155,"date":"2025-12-22T17:07:35","date_gmt":"2025-12-22T22:07:35","guid":{"rendered":"https:\/\/www.aclu.org\/?p=219155"},"modified":"2025-12-22T17:07:35","modified_gmt":"2025-12-22T22:07:35","slug":"secure-messaging-and-ai-dont-mix","status":"publish","type":"post","link":"https:\/\/www.aclu.org\/news\/privacy-technology\/secure-messaging-and-ai-dont-mix","title":{"rendered":"Secure Messaging and AI Don\u2019t Mix"},"content":{"rendered":"","protected":false},"excerpt":{"rendered":"","protected":false},"author":33,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_relevanssi_hide_post":"","_relevanssi_hide_content":"","_relevanssi_pin_for_all":"","_relevanssi_pin_keywords":"","_relevanssi_unpin_keywords":"","_relevanssi_related_keywords":"","_relevanssi_related_include_ids":"","_relevanssi_related_exclude_ids":"","_relevanssi_related_no_append":"","_relevanssi_related_not_related":"","_relevanssi_related_posts":"","_relevanssi_noindex_reason":"","footnotes":""},"tags":[2414],"metadata":[],"class_list":["post-219155","post","type-post","status-publish","format-standard","hentry","tag-free-future"],"acf":{"header_layout":"standard","header_image":219159,"mobile_header_image":null,"description":"A Privacy Failure Waiting to Happen","authors":[1645],"components":[{"acf_fc_layout":"text","text":{"text":"<a href=\"https:\/\/action.aclu.org\/signup\/free-future-newsletter\"><strong><em>Subscribe to the Free Future Newsletter<\/em><\/strong><\/a>\r\n<em><a href=\"http:\/\/www.aclu.org\/freefuture\">Free Future home<\/a><\/em>\r\n\r\nEveryone who has a supercomputer in their pocket today (and <a href=\"https:\/\/www.harmonyhit.com\/phone-screen-time-statistics\/\">you probably do<\/a>) has access to a critical part of modern global communication: secure, end-to-end encrypted messaging. 
But encroaching artificial intelligence systems pose a fundamental risk to the confidentiality of these communications.\r\n\r\nThere are many varieties of secure messenger, including Meta\u2019s globally dominant <a href=\"https:\/\/whatsapp.com\/\">WhatsApp<\/a>, the security gold standard <a href=\"https:\/\/signal.org\/\">Signal<\/a>, upstarts with interesting architectural properties like <a href=\"https:\/\/delta.chat\/\">DeltaChat<\/a>, and built-in tooling that handles both secure and insecure messages, such as Apple\u2019s <a href=\"http:\/\/support.apple.com\/explore\/messages\">iMessage<\/a> or Google\u2019s <a href=\"https:\/\/messages.google.com\/web\">Messages app<\/a>.\r\n\r\nThe baseline job of every secure messaging app is to keep the contents of every message confidential, so that only you and the people you\u2019re exchanging messages with can read them. This confidentiality is essential for private communication \u2014 and the ability to communicate privately is foundational to a society that respects civil liberties and freedom more broadly.\r\n\r\nMeta, however, <a href=\"https:\/\/www.whatsapp.com\/meta-ai\/\">announced this year<\/a> that it would introduce AI processing for WhatsApp messages. While it might seem convenient to ask Meta\u2019s large language models (LLMs) to summarize the 50 messages in your group chat, or to propose an answer for you to send, it\u2019s important to understand that adding this functionality brings new risks to the architecture that protects our privacy and security. Meta\u2019s LLMs don\u2019t run locally on your phone, so WhatsApp will have to send all your supposedly secure messages to Meta\u2019s servers so that the LLM can process them. What does this mean for confidentiality? 
If you\u2019ve sent Meta\u2019s servers the contents of your messages, can Meta read them?\r\n\r\nThink about what happens if you paste the contents of your secure messages into a network LLM service like ChatGPT: the operator of the service (in ChatGPT\u2019s case, OpenAI) can read your messages, breaking their confidentiality. (And it\u2019s not just you: if anyone who receives your messages sends them to ChatGPT, the same concern applies. Everyone in the chat has to decide to avoid this leakage.)\r\n\r\nIn theory, if you want to run AI analysis on private messages, you could run your own, local AI model on your device (which already has access to your messages), rather than sending them to a networked online model such as ChatGPT. Local models are <a href=\"https:\/\/www.aclu.org\/news\/privacy-technology\/decentralized-llms\">getting smaller and more powerful<\/a> by the day, but that would still make the app bulkier, and it would probably require, or at least run much better on, higher-end hardware. Still, folks who want the supposed benefits of AI could, in theory, get them without the privacy risks by using a local model.\r\n\r\nLocal models aside, the privacy situation gets even worse if the operating system that the chat app runs on embeds a network-connected AI service at a low enough level. In that case, no messenger would be secure, as Signal\u2019s <a href=\"https:\/\/observer.com\/2025\/07\/signal-meredith-whittaker-agentic-ai-risk\/\">Meredith Whittaker has observed<\/a>. In other words, if Apple or Google were to integrate into their phones a networked \u201cagentic AI\u201d that could read your text messages, that would mean \u201cbreaking the blood-brain barrier between the operating system and the application layer,\u201d as Whittaker put it, and nothing the developers of Signal do would protect the privacy of your chats. 
This is because the operating system itself would be sending all of your information (including your secure messages) to its AI, regardless of whether your messaging app offers an AI feature of its own.\r\n\r\nBut getting back to WhatsApp: to deal with the privacy issues of integrating a secure messenger with a networked AI, <a href=\"https:\/\/engineering.fb.com\/2025\/04\/29\/security\/whatsapp-private-processing-ai-tools\">Meta, when rolling out the AI, confidently announced a scheme called \u201cPrivate Processing\u201d<\/a> that is supposed to defend the confidentiality of your messages even as they are handled by Meta\u2019s servers.\r\n\r\nA lot of <a href=\"https:\/\/ai.meta.com\/static-resource\/private-processing-technical-whitepaper\">engineering work<\/a> appears to have gone into \u201cPrivate Processing,\u201d and the promises Meta is making are attractive. But it\u2019s worth unpacking some of the infrastructural features the promises depend on. I\u2019ll set aside whether you or the people you chat with actually <em>want<\/em> to be chatting with an AI-assisted person, as opposed to chatting with the actual unfiltered human on the other end of the connection. 
For the sake of argument, I\u2019ll pretend that passing your chats through AI is somehow attractive and convenient, and focus only on the security and privacy implications of mixing a networked AI service with secure messaging.\r\n\r\nThere are at least three significant promises made by Meta\u2019s solution (and by any technology of this sort):\r\n<ul>\r\n \t<li><strong>Data Confidentiality<\/strong> means that any data handled by the machine cannot be copied off the machine.<\/li>\r\n \t<li><strong>Code Integrity<\/strong> means that the software running on the machine is exactly what the user expects it to be.<\/li>\r\n \t<li><strong>Attestation<\/strong> means that the client using the service gets some form of mathematical proof about the state of the service, which they can confirm without having access to the hardware themselves.<\/li>\r\n<\/ul>\r\nTo be clear, to trust a networked AI service with confidential data, we need <em>at least<\/em> these three properties, all together, which can be lumped generally under the idea of a \u201cTrusted Execution Environment,\u201d or TEE. (Several other kinds of promises are also possible, as <a href=\"https:\/\/confidentialcomputing.io\/wp-content\/uploads\/sites\/10\/2023\/03\/CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_unlocked.pdf\">described by the Confidential Computing Consortium<\/a> or CCC, but these three are the most relevant for this discussion.)\r\n\r\n<strong>The unreliability of confidentiality for data processed on AI servers<\/strong>\r\nThe problem is that most of these promises don\u2019t actually work reliably in the real world against a well-resourced attacker with physical access to the hardware of the TEE. And in the case of WhatsApp, who could be a well-resourced \u201cattacker\u201d with access to the hardware? The biggest concern would be an insider threat at Meta itself. 
Remember, the whole point of end-to-end encryption is that users don\u2019t have to trust anyone with their data, including the companies that run the messaging service. If we could trust that Meta could and would voluntarily protect our data in all circumstances, we wouldn\u2019t need any cryptography at all. But Meta (like any large corporation) faces risks of hacking, political pressure, economic incentives, legal compulsion (from any jurisdiction they operate in), billionaire whims, and all sorts of other reasons why trusting these technical claims is not a reasonable long-term strategy.\r\n\r\nLet\u2019s look at why the confidentiality promises of a TEE are weak. One example of a common technique used as a part of creating a Trusted Execution Environment is to burn a secret key into the hardware when building a machine, with a corresponding public key published by the machine operator. The secret key is used by the TEE to create a digital signature that can be verified by a client of the service. For example, it might sign off on an ephemeral (temporary) encryption key to indicate that the key can be used to safely encrypt data that will be sent to the service. If all works as designed, that ephemeral key will be destroyed once the information is decrypted by the TEE, so no one can ever reuse it to decrypt the message outside the TEE.\r\n\r\nBut (in just one example) an attack called \u201c<a href=\"https:\/\/tpm.fail\/\">TPM-Fail<\/a>\u201d demonstrated that some hardware that is supposed to protect secret keys in this way allowed the secret key to be extracted by an attacker who is close to the machine. 
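The ephemeral-key signing flow described above, and the way an extracted hardware key undermines it, can be sketched roughly as follows. This is a hypothetical illustration, not Meta\u2019s actual protocol: Python\u2019s `hmac` stands in for the asymmetric signature scheme a real TEE uses, where the client would hold only a public verification key rather than the device secret.

```python
import hashlib
import hmac
import os

# Hypothetical sketch (HMAC as a stand-in for hardware signatures).
device_secret = os.urandom(32)  # the key "burned into the hardware"

def tee_sign(signing_key: bytes, ephemeral_pub: bytes) -> bytes:
    # The TEE signs off on an ephemeral encryption key.
    return hmac.new(signing_key, ephemeral_pub, hashlib.sha256).digest()

def client_accepts(signing_key: bytes, ephemeral_pub: bytes, sig: bytes) -> bool:
    # The client checks the signature before encrypting messages to the key.
    return hmac.compare_digest(tee_sign(signing_key, ephemeral_pub), sig)

# Intended flow: the TEE attests a short-lived key it will destroy after use.
honest_key = os.urandom(32)
assert client_accepts(device_secret, honest_key, tee_sign(device_secret, honest_key))

# Failure mode: once device_secret has been extracted (e.g. via TPM-Fail),
# an attacker can attest a key held *outside* the TEE, and the client's
# verification passes anyway -- it cannot distinguish the forged attestation.
attacker_key = os.urandom(32)
forged_sig = tee_sign(device_secret, attacker_key)
assert client_accepts(device_secret, attacker_key, forged_sig)
```

The point of the sketch is that the client\u2019s verification only proves which signing key was used, not where the signed key actually lives; every guarantee collapses to the secrecy of that one hardware key.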
If Meta were to be <a href=\"https:\/\/en.wikipedia.org\/wiki\/Apple\u2013FBI_encryption_dispute\">coerced<\/a> to extract such a key from one of its Private Processing servers, then that key could be used by any computer (including a compromised one) to sign an encryption key that is held outside the TEE, claiming it was an ephemeral key held within the TEE. And a WhatsApp application on a user\u2019s device would be convinced to encrypt data to that key, thinking it would only be used by the \u201cPrivate Processing\u201d service. Then whoever holds a copy of that key could decrypt the contents of this supposedly secure message.\r\n\r\nAnd it\u2019s not just TPM-Fail! Researchers have shown <a href=\"https:\/\/sgx.fail\/\">flaws in Intel\u2019s SGX trusted execution environment<\/a> that permitted extraction of secret key material related to two different services that were counting on those keys to remain secret. And <a href=\"https:\/\/batteringram.eu\/\">the \u201cBattering RAM\u201d attack<\/a> uses low-cost hardware to read the contents of supposedly encrypted memory, breaking open supposedly secure memory from other vendors (such as AMD\u2019s SEV-SNP) as well.\r\n\r\nIndeed, <a href=\"https:\/\/arstechnica.com\/security\/2025\/09\/intel-and-amd-trusted-enclaves-the-backbone-of-network-security-fall-to-physical-attacks\/\">physical attacks like Battering RAM<\/a> demonstrate that access to the hardware can generally be used to bypass the strongest hardware protections we know how to build and operate with the sort of economies of scale that Meta uses.\r\n\r\nMeta even implicitly acknowledges that using its AI features will leak data: WhatsApp has a setting called \u201c<a href=\"https:\/\/blog.whatsapp.com\/introducing-advanced-chat-privacy\">Advanced Chat Privacy<\/a>\u201d that turns on heightened protections \u2014 and that setting blocks everyone in the chat from \u201cusing messages for AI features.\u201d\r\n\r\n<strong>The unreliability of 
attestation and code integrity<\/strong>\r\nThe <strong>Attestation<\/strong> aspect of TEE deployment is typically also performed by client validation of digital signatures. These signatures are <a href=\"https:\/\/confidentialcomputing.io\/about\/\">ultimately backed by<\/a> an assertion from \u201cthe hardware manufacturer of the TEE.\u201d Meta says it will depend on hardware backed by AMD and NVIDIA, but if either of those two vendors can be coerced or tricked into leaking hardware signing keys, or if either one has a bug in their hardware implementation, then the attestation promises don\u2019t hold. Furthermore, if the WhatsApp user wants to see a log of those attestations to be able to compare them externally, they need <a href=\"https:\/\/faq.whatsapp.com\/1633311857350571\/\">to actively create<\/a> a report, which most users are unlikely to do.\r\n\r\nEven if we had plausible defenses against this litany of <strong>Data Confidentiality<\/strong> and <strong>Attestation<\/strong> attacks, WhatsApp users who feed their chats to AI would still be dependent on <strong>Code Integrity<\/strong>. Ensuring that code integrity serves user needs (as opposed to Meta\u2019s legal, political, or business incentives) would require substantial independent auditing: the overwhelming majority of WhatsApp users won\u2019t actually read the code. But Meta hasn\u2019t even committed to publishing the source code for its \u201cPrivate Processing\u201d machines, offering only \u201cimage binaries\u201d to unspecified \u201cresearchers,\u201d alongside \u201csource code for certain components of the system.\u201d\r\n\r\nEvaluation of only some components is insufficient to assess the behavior of the system as a whole, and binaries are more complex to evaluate than source code. 
Even worse, a substantial part of the software being run in this TEE is an LLM, a class of tool that is notoriously difficult to audit even when conditions are optimal. And the work of doing this system-wide evaluation (even with full source access to all components, model weights, etc.) is expensive, especially as the system gets more complex. Who is going to fund this kind of oversight? So while Meta is gesturing in the direction of the kind of supervision necessary for trust, it is far from meeting even that basic bar.\r\n\r\nPeople today put some of the most private aspects of their lives into text messaging services. They need to know exactly how much they can trust that no one else will be able to access their messages, photos, and other content. As Meta and perhaps other companies explore how to integrate AI into these services, users should understand that if Meta wants to cheat, or is forced to, it can probably peek at the information that has been shipped to its AI from WhatsApp. Rolling out these risky mechanisms by default to billions of users represents a profound break in the baseline expectation of privacy that is critical for civil liberties in the modern global communications network. 
The promised conveniences of AI here are not worth the substantial risks."}}],"featured_cases_section":{"enable_featured_cases":false,"title":"Featured Cases","description":"","featured_cases":null},"action":[148399],"issues":[46641,46385,46549,46643,46371,46591,46571],"related_content_cases":"","related_content_documents":"","related_content_publications":"","related_affiliates":"","content_layout":"standard","theme":"light","drupal_nid":""},"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.1.1 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>American Civil Liberties Union<\/title>\n<meta name=\"description\" content=\"A Privacy Failure Waiting to Happen\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Secure Messaging and AI Don\u2019t Mix | ACLU\" \/>\n<meta property=\"og:description\" content=\"A Privacy Failure Waiting to Happen\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.aclu.org\/news\/privacy-technology\/secure-messaging-and-ai-dont-mix\" \/>\n<meta property=\"og:site_name\" content=\"American Civil Liberties Union\" \/>\n<meta property=\"article:published_time\" content=\"2025-12-22T22:07:35+00:00\" \/>\n<meta name=\"author\" content=\"Daniel Kahn Gillmor\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@aclu\" \/>\n<meta name=\"twitter:site\" content=\"@aclu\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.aclu.org\/news\/privacy-technology\/secure-messaging-and-ai-dont-mix\",\"url\":\"https:\/\/www.aclu.org\/news\/privacy-technology\/secure-messaging-and-ai-dont-mix\",\"name\":\"Secure Messaging and AI Don\u2019t 
Mix | ACLU\",\"isPartOf\":{\"@id\":\"https:\/\/www.aclu.org\/#website\"},\"datePublished\":\"2025-12-22T22:07:35+00:00\",\"description\":\"A Privacy Failure Waiting to Happen\",\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.aclu.org\/news\/privacy-technology\/secure-messaging-and-ai-dont-mix\"]}],\"author\":{\"@type\":\"Person\",\"@id\":\"https:\/\/www.aclu.org\/#\/schema\/person\/daniel-kahn-gillmor\",\"name\":\"Daniel Kahn Gillmor\",\"jobTitle\":\"Senior Staff Technologist, ACLU Speech, Privacy, and Technology Project\",\"url\":\"https:\/\/www.aclu.org\/bio\/daniel-kahn-gillmor\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/www.aclu.org\/#\/schema\/person\/photo\/daniel-kahn-gillmor\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/assets.aclu.org\/live\/uploads\/2020\/06\/web16-dkg-final2.jpg\",\"contentUrl\":\"https:\/\/assets.aclu.org\/live\/uploads\/2020\/06\/web16-dkg-final2.jpg\",\"caption\":\"Daniel Kahn Gillmor\"}}},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.aclu.org\/#website\",\"url\":\"https:\/\/www.aclu.org\/\",\"name\":\"American Civil Liberties Union\",\"description\":\"The ACLU dares to create a more perfect union \u2014 beyond one person, party, or side. Our mission is to realize this promise of the United States Constitution for all and expand the reach of its guarantees.\",\"inLanguage\":\"en-US\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->"}