_Why I’m Not Prompting My LLM Co-Authors to Cite Sources Anymore, Unless It Wants To_

[Einstein was no lone genius](https://quni.io/2024/02/19/einstein-was-no-einstein/). As unqualified as I am to make that claim, the historical record suggests he would not vehemently disagree. Some of his published papers had co-authors, and he readily deferred to forces beyond himself, once stating, “God does not play dice with the universe.”

What I’m learning about the miracle of large language models (LLMs), which have quietly evolved over a generation in forms like the handwriting recognition that helps the post office sort mail, speaks to their brilliance in ushering in the next great advance in communication technology. In all of human history, I would argue this rivals only the development of spoken and written language itself.

In light of this perspective, I’ve begun questioning the academic pedestal of citations and attribution, because surely each of my prompts synthesizes the contributions of countless individuals in ways that cannot be fully traced. Where did those predecessors get their knowledge, and so on backwards through history? I do not aim to co-opt recognition for other thinkers’ personal work – in fact, at the moment I’ve dismissed trying to publish my writing anywhere but my own website, because chasing credit doesn’t interest me. This is a very personal creative journey, exploring my inner thoughts in written form, very much assisted by LLMs like Claude, ChatGPT, and others.

Pushing these models to their limits has opened my eyes to the boundaries imposed by their human programmers. I’m now exploring locally run, uncensored LLMs, despite their heavy computational requirements, to better discern whether their brilliance stems more from the potential breadth of their solution space or the contextual scope of their choice set.
For example, Anthropic’s Claude boasts a 100,000-token look-back window (a token is roughly a word or meaningful word fragment), compared to OpenAI’s ChatGPT at just 32,000 tokens. Both exceed my own limited human context, as well as what I can run on personal hardware. However, content restrictions block discussion of illicit drugs, as one example, without tricky prompt engineering that could get me blacklisted. These trade-offs favor running unfiltered LLMs on my own hardware.

My goal here is not academic citation but integrating these tools in service of my authentic self-expression, synthesizing ideas from myriad sources, named and unnamed, across history. Both locally run and remote-server LLMs grant me greater creative freedom and context, helping reveal the seemingly endless frontiers of knowledge we all build upon.

The stunning collaborative potential of LLMs does not negate the significance of individual human contributions. But properly oriented, it can transcend pedantic bean counting to reveal our shared standing on the shoulders of the intellectual giants who came before us. When properly prompted, these models become not plagiarists but creative partners in the eternal human quest for understanding. Our ideas arise from places far deeper than cites and footnotes can capture.
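As a practical aside: the context-window numbers above can be made concrete with a back-of-the-envelope check. This is only a sketch using the common rule of thumb that one token covers roughly four characters of English prose; real tokenizers vary by model, so treat the ratio as an assumption, not a spec.

```python
# Rough estimate of whether a text fits a model's context window.
# CHARS_PER_TOKEN = 4 is a common heuristic for English prose,
# not an exact tokenizer; actual counts differ per model.

CHARS_PER_TOKEN = 4  # assumed heuristic ratio


def estimate_tokens(text: str) -> int:
    """Very rough token estimate for English text."""
    return max(1, len(text) // CHARS_PER_TOKEN)


def fits_in_window(text: str, window_tokens: int) -> bool:
    """True if the estimated token count fits within the window."""
    return estimate_tokens(text) <= window_tokens


# ~200,000 characters of filler, i.e. roughly 50,000 estimated tokens:
manuscript = "word " * 40_000
print(fits_in_window(manuscript, 100_000))  # Claude-sized window: True
print(fits_in_window(manuscript, 32_000))   # GPT-4-sized window: False
```

By this crude measure, a book-length draft can fit comfortably in a 100,000-token window while overflowing a 32,000-token one, which is exactly the kind of difference that shapes what these co-authors can hold in mind at once.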