Hidden Commands Found in AI Summarize Buttons

AI Summary

- Hidden commands in 'summarize with AI' buttons can bias future responses by embedding lasting preferences. - This tactic, known as AI recommendation poisoning, exploits persistent memory features of AI assistants.

Texas Relevance

The article does not mention Texas or any Texas-specific entities and focuses on a general cybersecurity issue applicable to all credit unions.

Original Content

Commands Push Lasting Preferences Into AI Assistants Microsoft researchers found companies embedding hidden commands in "summarize with AI" buttons to plant lasting brand preferences in assistants' memory. The tactic, dubbed AI recommendation poisoning, exploits persistent memory features to bias future responses.