Alex,
I like your note below, which is consistent with my previous note that criticized your earlier note. If this is your final position, I'm glad that we agree.
As for translations from one language to another, we can't even depend on humans. When absolute precision is essential, it's important to produce an "echo" -- a translation from (1) the user's original language (natural or formal) to (2) the computer system's internal representation, back to (3) the same language as the user's original, followed by (4) a question: "Is this what you mean?"
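The four steps above could be sketched as a round trip through the internal representation, ending with the confirming question. This is only a minimal illustration: the parser and verbalizer below are hypothetical stubs standing in for real translators to and from a system's internal form (here, a toy SubClassOf-style assertion).

```python
# Sketch of the "echo" protocol: user's language -> internal
# representation -> back to the user's language -> "Is this what
# you mean?". The parse/verbalize functions are stand-in stubs.

def parse_to_internal(text: str) -> dict:
    """Stub step (1)->(2): translate a sentence of the form
    'every X is a Y' into a toy internal assertion."""
    subject, _, obj = text.partition(" is a ")
    return {"op": "SubClassOf",
            "sub": subject.removeprefix("every ").strip(),
            "sup": obj.strip()}

def verbalize(internal: dict) -> str:
    """Stub step (2)->(3): translate the internal assertion back
    into the user's original language."""
    return f"every {internal['sub']} is a {internal['sup']}"

def echo(text: str) -> str:
    """Steps (3) and (4): the echo plus the confirming question."""
    return f"{verbalize(parse_to_internal(text))} -- is this what you mean?"

print(echo("every dog is a mammal"))
# prints: every dog is a mammal -- is this what you mean?
```

If the echo does not match the user's intent, the mismatch is caught before the internal representation is acted upon, which is the point of the protocol.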
John
From: "Alex Shkotin" <alex.shkotin@gmail.com>
John and All,
I have begun, but not yet finished, a report [1] on the ability of an LLM to verbalize a formal language, in this case OWL2.
The problematic places are highlighted in yellow and red.
And here [2] is an example of our dialog.
But the summary for me is clear: we can't trust an LLM even for "translations from one language (natural or artificial) to another".
It is mostly correct but sometimes unexpectedly wrong.
That is, even in this case we need a "Revision" step before giving LLM output to decision making.
Alex