3 Comments

First-draft questions for ChatGPT are usually shallow and error-prone.

Try training an LLM on Psak Halakhah and then ask it questions.


For now, though, I'll add the following.

Indeed, "zero-shotting" a generally trained LLM on a halachic question is going to be shallow and error prone. Fine-tuning an existing one is probably the way to go, to allow users to still converse and for it to be generally "knowledgeable" about the world.

However, even with such fine-tuning, I would expect it to remain somewhat shallow (though less so) and error-prone (though less so). I regularly use an LLM that was trained on massive amounts of code, and while the results are often **impressive**, it still frequently generates code that is incorrect, and when presented with atypical, deeper-level tasks it often cannot devise a correct result.

But this is all to be discussed in the future, bli neder.


Yes, I hope to eventually get to that and discuss it.
