![](https://cdn.hashnode.com/res/hashnode/image/upload/v1713666019115/11aca15a-3cf9-4b43-aed7-81a54fcad167.jpeg)

…by asserting such internally inconsistent statements as fact and finding, [my earlier post](https://quni.io/2023/12/29/copyright-knowledge-synthesis-and-large-language-models-implications-of-the-new-york-times-lawsuit-against-openai/) may have been too idealistic and ignorant. Oh, and did I mention it gives diametrically different responses depending on whether or not you pay? And I am quite certain that a court with a human judge and at least two human lawyers will conclude that OpenAI's ignorance isn't a defense for [blaming an algorithm for human error](https://quni.io/2023/12/31/the-problem-isnt-ai-its-us/).

_Updated commentary: The discovery for this lawsuit will surely open the kimono on some of the subjective human decisions that went into training ChatGPT's models. Since these decisions are not currently fully documented, this seems like a test of true openness for OpenAI. It also suggests confusion in OpenAI's moniker in the first place: while it is certainly reasonable for a profit-seeking business to protect its proprietary information, interests, and investments, the dual nonprofit/private structure of OpenAI suggests that these models should be open source if they are truly for the stated good of humanity._

_As a matter of principle, it seems fair to assert that information should never be hidden when it risks causing harm. Corollary to this is an overarching optimistic perspective: all information should be knowable, the wisdom of crowds will ultimately prevail to ensure that all information that can be known will be, and, given enough time, information will be used in the most beneficial way possible for the collective good._

_Update #2: This [could be a BFD with potentially huge implications](https://quni.io/2024/01/01/the-legal-implications-of-opening-proprietary-algorithms-exploring-technology-public-policy-and-the-legal-system/). To give one example, what would it mean if credit bureaus were forced by litigation to disclose the specifics of their scoring, through suits filed on behalf of those whose credit scores leave them disadvantaged for financial opportunities like better rates and mortgages? In theory, anyone with standing to be a plaintiff in a suit that could engage in the discovery process could force a radical rethinking of not only what it means to be "open source" but the very role of our legal system in understanding and arbitrating the complexity brought forth by these algorithms._