The problem with the logic of large language models is that we extend a kind of syntactic or lexical credit: we expect that a sentence will make sense and won't be an inherent paradox. If that's not the case, then the whole enterprise of LLMs breaks down. If every sentence is "this sentence is false," then let's just pack up LLMs and do something else, because they're not going to work. In fact, that's the pivotal flaw with AI in general, and even with computer programs; that's why they throw errors. They're meant to work within a certain syntax and a certain logical or lexical, linguistic consistency, and when they don't, everything breaks down. And quite frankly, that's the problem with humans too. When we look at our irony and our memes, it all breaks down: when nobody means what they say or says what they mean, we're right back at a Tower of Babel. This is a technological and linguistic Tower of Babel, absolutely.