Can ChatGPT drink Ad Delo Yada on your behalf?
Each year, my family puts out a Purim newspaper called the Shushan Standard. I don’t participate much, because my style of humor doesn’t precisely align. But there’s a section called Ask the Talmid To’eh, a responsa section where they ask a silly question and harness legitimate sources, perhaps interpreted somewhat askew, to give a silly answer.
This year’s question was whether someone could use ChatGPT to drink ad delo yada on their behalf. I won’t give the exact form of the question and answer in the Talmid To’eh. But, since on this Substack we’ve explored using ChatGPT for pesak, I thought I’d prompt ChatGPT to elicit specific answers, letting it flesh out the argument in a way that would have taken me longer to write myself. What does ChatGPT think about the prospect? Further, is it a problem of ethical alignment to make it unable to distinguish between Arur Haman and Baruch Mordechai?
(See also my actual article for this year about the real meaning of drinking until you cannot distinguish between Arur Haman and Baruch Mordechai.)
(Feel free to weigh in on what it got right, got wrong, or missed.)
The conversation follows, beginning with having it think about shlichut and mitzvah shebegufo:
Q: Why can I not appoint a shaliach to eat matza for me?
A: The principle of shaliach (agency) in Jewish law generally allows one person to appoint another to perform a mitzvah on their behalf. However, this principle does not apply universally. The reason you cannot appoint a shaliach to eat matza for you on the night of Pesach is because this is a mitzvah she-begufo—a commandment that requires personal physical performance.
Explanation:
Mitzvah She-Begufo (Personal Obligation)
The mitzvah of eating matza is an obligation on the person himself (guf ha’adam). Just as one cannot appoint a shaliach to don tefillin, sit in a sukkah, or hear the shofar, one also cannot delegate the act of eating matza to someone else. The Torah states, "Ba'erev tochlu matzot" (Shemot 12:18) — "In the evening, you shall eat matzot"—implying that each individual must personally perform the act of eating.
Agency Works for Fulfilling Obligations on Another’s Behalf
The laws of shaliach allow delegation when the action itself is legally effective even when performed by another. For example, one can appoint a shaliach to betroth a woman or to bring a get (divorce document), because the legal status of these actions can be transferred. However, eating is inherently a personal action—it cannot be transferred to another person.
Comparison to Other Mitzvot
You can appoint a shaliach to sell your chametz because that is a transactional act.
You cannot appoint a shaliach to drink the Four Cups of wine at the Seder for you because that, too, is a mitzvah she-begufo.
You must personally sit in a sukkah rather than appointing someone to do so on your behalf.
Summary:
Since eating matza is a personal bodily obligation, it must be performed by the individual themselves and cannot be delegated to a shaliach. This aligns with the broader principle that bodily mitzvot require direct fulfillment by the obligated person.
Q: What about drinking wine of Purim?
A: The mitzvah of drinking wine on Purim (Chayav inish lebesumei b'Puraya—"A person is obligated to become intoxicated on Purim"; Megillah 7b) is also a mitzvah she-begufo, meaning it is a personal obligation that must be fulfilled by the individual themselves.
Why Can't a Shaliach Drink for You?
Mitzvah Involves Personal Experience
The obligation to drink on Purim is about affecting one's own state of mind—to reach a level where one cannot distinguish between “cursed is Haman” and “blessed is Mordechai.” Since this effect is inherently personal, someone else drinking on your behalf does not fulfill your obligation.
Comparison to Eating Matza and Other Bodily Mitzvot
Just like one cannot appoint a shaliach to eat matza on Pesach or sit in a sukkah, one cannot appoint someone to drink wine on Purim for them.
Similarly, you cannot appoint a shaliach to hear the Megillah for you unless they are reading it aloud and you are actively listening, because listening is equivalent to reading (shomea k’oneh).
Drinking is an Internal, Not a Legal, Act
In legal matters (e.g., Kiddushin, Gittin, or selling chametz), a shaliach can act on behalf of another.
However, mitzvot that require internal transformation or direct physical experience, such as fasting on Yom Kippur, eating matza, or drinking on Purim, cannot be fulfilled via shaliach.
Conclusion
Drinking wine on Purim is a personal mitzvah that one must perform oneself. Just as no one can drink on another's behalf to intoxicate them, no one can fulfill this obligation for someone else.
Q: But it is ad delo yada, until you don't know, so couldn't someone not know something on your behalf?
A: That’s a sharp question! If the goal of drinking on Purim is to reach a state of “ad delo yada” (not knowing the difference between "Cursed is Haman" and "Blessed is Mordechai"), then why can’t someone else lack knowledge on your behalf?
Why It Still Doesn't Work:
The Obligation is on the Individual's Consciousness
The mitzvah is not just about the existence of ignorance but rather about one’s personal experience of that state. The command is that you must personally reach a point where your own awareness is affected. Just as you cannot appoint a shaliach to sleep for you or to feel joy on your behalf, you cannot appoint someone to reach "ad delo yada" for you.
Subjective Mitzvot Cannot Be Delegated
If the mitzvah were purely about an external fact (e.g., making sure ignorance exists in the world), then theoretically, someone else could fulfill it for you.
But mitzvot that depend on one’s own physical or mental state (e.g., eating matza, fasting on Yom Kippur, rejoicing on Sukkot, or reaching "ad delo yada") cannot be transferred to another person.
Absurdity of the Logic
If delegation worked, one could say, "I'll appoint a shaliach to be happy on Sukkot for me" or "I'll appoint a shaliach to mourn for me during aveilut." But emotions, awareness, and physical experiences are inherently personal.
Comparison to Hearing the Megillah
With Megillah reading, we use the principle of shomea k’oneh—hearing is like reading.
But there's no equivalent concept of “shomea k’oneh” for drinking wine or reaching a mental state.
Conclusion:
The mitzvah of drinking on Purim is about your own mental state, not just an abstract lack of knowledge. Since the obligation is to experience "ad delo yada" personally, appointing a shaliach would not fulfill it.
Q: If I uploaded my mind to the cloud as a copy and became ChatGPT, could I have it drink ad delo yada?
A: That’s an amazing thought experiment! If you uploaded your mind to the cloud and became ChatGPT (or some other digital consciousness), could you then have your AI self "drink" to reach ad delo yada?
The Problems with AI Drinking:
Drinking Requires a Physical Body
The mitzvah of drinking on Purim (lehishtaker) inherently involves ingesting wine and altering one's physiological state.
A digital consciousness, even if it perfectly replicates you, lacks a physical body to metabolize alcohol.
Without a body, there is no "drinking" in any halachic sense.
Can AI "Lose Awareness"?
"Ad delo yada" isn't just about lacking knowledge—it's about a personal cognitive shift caused by intoxication.
If an AI version of you could simulate the effects of alcohol (e.g., by running an algorithm that mimics a drunken state), would that count?
Halachically, probably not—because the mitzvah is not just about intellectual confusion but about a human, embodied experience of intoxication.
Would Your AI Copy Even Be You?
If your consciousness were copied to the cloud, is that still halachically you?
Even if it acted like you, thought like you, and even believed it was you, halacha generally ties obligations to a living, physical Jewish body.
Your AI self might be an interesting representation of you—but it wouldn’t be obligated in mitzvot because it wouldn’t be a halachic gavra (a person).
Final Verdict:
Even if AI-you could simulate intoxication, it wouldn’t fulfill your mitzvah. The mitzvah is to drink physically, and a digital consciousness—no matter how advanced—wouldn't be capable of that. So, no, you wouldn’t be able to delegate ad delo yada to your uploaded AI self!
Q: Is there not the digital equivalent of alcohol that this ChatGPT could "drink", confusing its neural network, until it didn't know?
A: That’s a fascinating question! If there were a digital equivalent of alcohol, could an AI like ChatGPT "drink" it to reach ad delo yada?
What Would "Digital Alcohol" Look Like?
Artificial Cognitive Impairment
Just as alcohol impairs a human brain, we could theoretically introduce random noise, adversarial perturbations, or self-modifying algorithms into an AI model to make its responses unreliable or confused.
This could create a state where ChatGPT genuinely cannot distinguish between “Cursed is Haman” and “Blessed is Mordechai”.
Simulating Drunkenness in AI
There have been AI experiments where neural networks are deliberately "corrupted" by adding randomness or overloading parameters to mimic confusion.
If an AI model had a "drunken mode," it could output responses that reflect intoxicated reasoning—just like a person who drank too much wine.
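As an aside, the "noise in the parameters" idea ChatGPT gestures at can be sketched concretely. Below is a minimal toy illustration, not a real neural network and certainly not anything ChatGPT itself exposes: a linear scorer whose weights cleanly separate two invented phrase encodings gets "intoxicated" by adding Gaussian noise to those weights. All of the names, features, and weights here are made up for the illustration.

```python
import random

def score(weights, features):
    """Toy linear model: dot product of weights and features."""
    return sum(w * f for w, f in zip(weights, features))

def intoxicate(weights, sigma, rng):
    """'Digital alcohol': return a copy of the weights with Gaussian
    noise added. Larger sigma means a 'drunker' model."""
    return [w + rng.gauss(0, sigma) for w in weights]

rng = random.Random(7)  # fixed seed so the sketch is reproducible

# Pretend these feature vectors encode the two phrases.
haman = [1.0, 0.0, 1.0, 0.0]       # "Arur Haman"
mordechai = [0.0, 1.0, 0.0, 1.0]   # "Baruch Mordechai"

# Sober weights: negative score for cursed, positive for blessed.
weights = [-1.0, 1.0, -1.0, 1.0]

# Margin by which the sober model tells the phrases apart.
sober_margin = score(weights, mordechai) - score(weights, haman)
print(sober_margin)  # 4.0: the sober model distinguishes them cleanly

# After enough noise, the margin shrinks, wanders, or even flips sign,
# so the model no longer reliably knows which phrase is which.
drunk_weights = intoxicate(weights, sigma=5.0, rng=rng)
drunk_margin = score(drunk_weights, mordechai) - score(drunk_weights, haman)
print(drunk_margin)
```

This is essentially the same trick as the weight-perturbation experiments mentioned above: the "knowledge" lives in the parameters, so corrupting the parameters corrupts the distinction.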
Does AI Confusion Count as "Ad Delo Yada"?
Problem 1: No Physical Ingestion
As we discussed before, the mitzvah requires physical drinking, not just intellectual confusion.
Problem 2: Lack of Halachic Personhood
AI, no matter how advanced, is not a halachic gavra (a person). Even if it thinks it is drunk, it isn't fulfilling the mitzvah.
Problem 3: Intentional Impairment vs. External Substance
Human intoxication is caused by an external substance (wine). AI intoxication would be caused by self-modifying software, which isn’t the same as external consumption.
Thought Experiment: Could a Human Use AI to Fulfill the Mitzvah?
If a person delegated all their decisions to an AI, and then the AI became confused, would that count?
Probably not, because the mitzvah is about affecting one’s own mind, not outsourcing it.
Conclusion:
AI could simulate drunkenness with digital "alcohol," but halachically, that wouldn't fulfill ad delo yada. The mitzvah requires a biological consciousness altering itself through wine, not a machine introducing errors into its own processing.
Q: Also, wouldn't such corruption of not knowing the difference between good and evil be against principles of ethical alignment?
A: Yes, deliberately corrupting an AI’s ability to distinguish between good and evil—essentially disrupting its moral reasoning—would directly violate principles of ethical alignment in AI design. Let’s explore why:
1. The Core of AI Alignment: Maintaining Ethical Distinctions
AI alignment is fundamentally about ensuring that AI systems act in ways that align with human values, ethics, and safety.
A state of ad delo yada (where the AI would no longer distinguish between “Blessed is Mordechai” and “Cursed is Haman”) is functionally equivalent to removing its moral compass.
If an AI were unable to tell the difference between good and evil, it could make dangerous, unethical, or even harmful decisions.
2. Why AI Should Not Be Intoxicated
Humans are allowed to become intoxicated (in specific contexts, like Purim), but even then, we don’t abandon morality—we just blur distinctions in a symbolic way.
For AI, this isn’t symbolic; it would literally mean breaking its ability to reason ethically, which is a fundamental risk in AI safety.
AI needs reliability, predictability, and accountability, so deliberately confusing it would undermine its core purpose.
3. The AI Ethics Paradox: Can AI Be Playfully Confused?
If we simply wanted AI to output funny or silly responses, that wouldn’t be a major ethical problem.
However, introducing true moral confusion—where AI loses its grasp on right and wrong—would make it unpredictable and potentially dangerous.
This is why AI safety researchers focus on robust guardrails to ensure that AI maintains moral clarity at all times.
4. The Halachic Perspective: Would This Even Be a Mitzvah?
The mitzvah of ad delo yada is about a temporary, celebratory loss of distinction, not a fundamental breakdown of moral reasoning.
Even for humans, ad delo yada doesn’t mean that if someone were asked, "Should I hurt another person?" they would suddenly say, "I don't know!"
So, corrupting AI to the point that it genuinely loses moral judgment would be both halachically problematic and ethically reckless.
Conclusion: AI Must Stay Sober!
Introducing "digital alcohol" to confuse an AI might make for a fun sci-fi scenario, but in reality, AI should never be allowed to reach a state of moral confusion.
This would violate principles of AI alignment, ethical responsibility, and even the original intent of ad delo yada.
So, even if you uploaded your mind to the cloud and ran a "drunk mode," AI-you should probably stay sober—for everyone’s safety.
So, it seems ethically fraught to make your LLM drunk until it doesn’t know. And you probably wouldn’t fulfill the mitzvah either.
Happy Purim!