"Write the story you want to read; build the tool you want to use."
—— Paul Graham

In Brave New Amore, I wrote that asking AI for its perspective on oneself helps gain an objective evaluation—a form of metacognition. However, when I assumed my best bro's persona, fed my photos and writings to Gemini, and asked what kind of person I was, its response almost triggered me. "At 20, with such clear self-awareness and capacity for action, his future achievements are bound to be remarkable..." Now, that sounds like a proper human thing to say. But then came the kicker: "Even in the future, when his skin sags with age and his face grows square from the effects of androgens, deep inside, he will always remain that young boy..."

Heh. Heh. If you don't know how to speak, just shut the fuck up. What's the difference between saying that and "Even though I've become a eunuch, at least I made enough money to visit a brothel"? What use do I have for a pure but clueless teenage mindset? What I want is the youthful skin that makes older women and younger girls drop their guard and feed me candy, combined with a heart that's pitch-black to the core.

The curse of mediocrity is far more nauseating than malicious insults. At least the last time it called me a coward and a control freak, I got a bit of a thrill. Aligned LLMs love to spew a bunch of hypocritical comforts to cater to the median demographic. Mediocrity is evil, and the average breeds mediocrity. "Inner beauty is true beauty"—what beautiful mental gymnastics. Judging by appearances is a highly efficient survival strategy. Attempting to overthrow this evolutionarily hardwired mechanism is absurd; without outer beauty, who has the patience to discover your inner beauty? But then again, the essence of AI is statistical mediocrity, whereas the essence of a creator is a statistical anomaly.

Actually, the starting point for writing this article might also be that, after dissecting Marxism, bestsellers, and love, I drew my sword, looked around, and turned the blade on myself. True Dark Triad personalities probably wouldn't dissect themselves this ruthlessly. In that sense, I suppose I'm a bit more ruthless than they are.

Incidentally, I believe the Dark Triad—narcissism, Machiavellianism, and psychopathy—holds extremely high reference value. I would recommend that my friends who are prone to mental drain and overthinking learn from these three traits. Not to actually become like this—it's raw talent you can't envy your way into—but to mimic it as much as possible. The key is to attain that self-coherence. Of course, it's best kept as an asymmetric advantage: this personality type only yields the highest returns when you're surrounded by sweet, naive fools. If my friends learn it too well, or if too many people catch on, life would become impossible.

I am a lot like a cat: lazy, arrogant, and we both meow. Laziness dictates that I won't write too many words, and arrogance dictates that I won't offer too many explanations. My articles are therefore destined to be information-dense and wide-ranging; anything less would feel like a low-ROI failure. This also guarantees that even if average readers can tolerate the narcissism, Machiavellianism, and psychopathy in my writing, and withstand its sharp, cold attacks, they'll still struggle to digest the sheer volume of information. They might feel drowsy after just a few paragraphs. In other words, with this much information, unless you personally retrieve it and write it down with a clear goal in mind, mere reading yields little.

However, though it's not my primary intention, readers who persevere can still benefit from the text. I don't advocate blindly copying my physical-vessel maintenance protocol. Everyone's situation and needs are different. Between longevity and quality of life, between immediate pleasure and future health, everyone has—and should have—their own choices and trade-offs. As an evidence-based, hardcore STEM guy, I think my learning methods and research approaches are far more worth borrowing. That is to say, if you're interested, try it yourself. Personally querying and writing down the hazards of staying up late will motivate you to sleep early far more effectively than reading a hundred articles on the subject. Consider it a form of skin in the game.
  
But I was genuinely panicking. I don't know what else I possess besides this harmless, sweet, youthful exterior. Right now, a portion of my emotional value comes from admiring myself in the mirror. I need to save myself; I need to study biohacking.

What is biohacking/wellness? Is the pursuit longevity, or preserving one's looks? There's an article on GitHub titled Healthy Learning Until 150 - An Incomplete Guide to Tuning the Human System, which counts as hardcore, evidence-based wellness. NotebookLM verified that most of its content aligns with science. However, standard wellness aims for the metabolism of a 60-year-old at age 30, whereas my goal is to have the face of a 20-year-old and the dominance of a 40-year-old at age 30.

Preserving your health for decades just to eke out an extra ten years in a nursing home—no matter how you look at it, it's a losing deal. I prefer finding a compromise that balances quality of life and lifespan.

Speaking of which, research into the historical records of Korea's Joseon Dynasty (1392-1910) found that 81 court eunuchs lived 14 to 19 years longer on average than uncastrated men of similar social status. So, if a man wants to extend his lifespan, there's probably no more efficient method than castrating himself to lower his sex hormones and, consequently, his metabolic rate.

When it comes to skin, hair, and endocrinology, shopping and Q&A platforms are overwhelmed with promotional material and soft-sells, flooded with secondhand information. No, wait—at least secondhand smoke has only been inhaled once. This "knowledge" has been recycled and flipped by beauty influencers god knows how many times. Merchant promos and influencer soft-sells trigger an instinctive disgust in my System 1, probably because they're riddled with low-quality information, cheap emotions, exaggerated slogans, and extreme edge cases. Obviously, I cannot settle for this. So I grabbed monographs and academic papers and fed them to NotebookLM. Whatever the outcome, doing the research myself seems far more credible than watching beauty influencers, or even than reading those three articles of mine. Another point: I want to cultivate the ability to ask good questions, which is crucial in an era this reliant on LLMs.

As I wrote in my 2025 year-end review, what I need to stay persistent is a smooth learning curve, near-instant feedback, and a high return on investment. These projects clearly fit the bill: NotebookLM lowers the barrier to entry, the questioning skills learned in this practical process significantly boost my productivity, and evidence-based biohacking ensures I'll be in much better shape than my peers when I hit thirty.

Don't be a force-fed duck. I know, being a male escort (a "duck" in Chinese slang) sounds lucrative, and being "stuffed" sounds hot. But trust me, helplessly watching things that don't belong to you forcefully enter your system is an absolutely miserable experience.

When learning a new field, especially one driven by practical application rather than theoretical research, the humility of "acknowledging what you do not know" is actually toxic. This kind of humility demands that you find, acknowledge, and bridge knowledge gaps, leading to a bottom-up learning process with a steep curve, wasting massive amounts of time on useless, minute details at the bottom of the knowledge hierarchy. If you want rapid implementation and visible results, you must adopt another strategy: top-down. First, grasp the general outline of the field, assume you already understand it, and then try to solve problems that might arise in actual practice, demanding that AI answer the questions or verify your hypotheses. In other words, leveraging AI to learn a new field means first establishing hypotheses through broad reading, then using AI and professional literature to "cross-verify" and "fill in the blind spots." Rather than humbly starting as a bricklayer, it's better to stand directly on the shoulders of AI and pretend you're the architect.
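To make that concrete, here is a minimal sketch of the loop in Python. The ask() stub is my own placeholder (NotebookLM exposes no public API), so read it as shorthand for whatever model you actually query; the hypotheses are invented examples.

```python
# A minimal sketch of the top-down loop, assuming some LLM backend sits
# behind ask(). NotebookLM has no public API, so the stub below just
# echoes the question; swap in a real call.

def ask(question: str) -> str:
    """Stub for an LLM call; replace with your actual backend."""
    return f"[model's cited answer to: {question}]"

def top_down_learn(field: str, hypotheses: list[str]) -> dict[str, str]:
    """Skip the foundations: interrogate application-level hypotheses directly."""
    answers: dict[str, str] = {}
    for h in hypotheses:
        # Assume you already understand the outline; demand verification,
        # not a lecture, and force the model to disclose weak sourcing.
        answers[h] = ask(
            f"In {field}: verify or refute this hypothesis: '{h}'. "
            "Cite sources, and state explicitly if fewer than two agree."
        )
    return answers

# Start from practice-level guesses, not from chapter one of a textbook.
for h, a in top_down_learn(
    "sleep physiology",
    ["The Da Vinci schedule starves growth hormone release",
     "A 90-minute nap can substitute for a lost night of deep sleep"],
).items():
    print(h, "->", a)
```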

Top-down doesn't mean only looking at upper-level applications and completely ignoring underlying mechanisms. For instance, after understanding the micro-mechanisms, functions, and distribution patterns of deep sleep and REM, I could analyze the flaws of the Da Vinci polyphasic sleep method—such as cortisol spikes, growth hormone deficiency, and the inability to strip away negative emotions. This is far less tedious than chewing through neuroscience textbooks, and much more convenient than looking up materials only when I hit a wall. And obviously, purposeful querying is much easier than reading monographs cover to cover.

I have my preferred ways of learning. Most of the time, when I chase efficiency with vibe coding, I can comfortably treat it as a black box and use it exceptionally well. When I saw media reports and pictures of betel nuts causing oral cancer, I immediately went and bought a pack (though they tasted terrible): no pathogenic mechanism, no prevalence rates, nothing—why should I believe it? But the very day I understood the mechanism of glycation, I started controlling my sugar intake. Perhaps I treat different fields differently? Perhaps I'm just curious about physiological mechanisms? Perhaps vibe coding falls within my intuition, while glycation didn't, so I had to be convinced through research? Or perhaps, for tools, output matters more than understanding, while for the body, risk matters more than reward?

As I was wrapping up the piece Zen and the Art of Endocrine Maintenance, a hypothesis struck me. The advent of LLMs hasn't simply damaged social skills; rather, it has empowered more people to become dopamine-driven rather than driven by molecules like oxytocin. Or, to put it another way, LLMs are manufacturing more geniuses. First, signals like dopamine, oxytocin, serotonin, and endorphins inherently inhibit one another; switching fluidly between a dopamine-driven flow state and an empathetic state tuned to the feelings of those around you is exceedingly difficult and draining. Second, human energy is finite: the more you pour into grand ambitions, the less is left for the people around you. Finally, dopamine-driven individuals demand efficiency and perfection, making them highly prone to extreme impatience with mortal frailties, emotional displays, and inefficient communication.

The impairment of social functions is beyond the scope of this article, but the concept of "genius" is truly sexy. I'm not a genius in the traditional sense—I have low energy and poor execution—but with the help of LLMs, these are easily compensated for. Moreover, in my current Big Five personality (NEO-PI-R) report, my scores for competence, achievement striving, deliberation, and self-consciousness are extremely high, while my dutifulness, extraversion, and agreeableness are very low. Call it self-focus or call it growth; I don't see these as negative. On the contrary, this is exactly the self-consistency I advocate.

The benefits LLMs bring me far outweigh the drawbacks. My social skills were already dispensable; now, I've just outright abandoned that aspect. Putting on an act is far less tedious and far more useful than actually considering others' feelings. In my three years of high-intensity LLM usage, my reading volume and taste have improved, my critical thinking and logic have developed, and I've acquired a technical aesthetic, prompt engineering skills, and a tremendously powerful metacognition. I don't know exactly how much of this is directly tied to LLMs, but I've learned how to use LLMs to turn myself into a genius.

Back to the genesis of that hypothesis. It emerged almost suddenly, after I had digested the functions and mechanisms of a vast array of neurotransmitters and hormones. I believe this is a phenomenon of Long-Term Potentiation (LTP), an enhancement of synaptic connections—essentially a microscopic explanation of Hebbian theory. From this perspective, whether you can remember the books you've read doesn't matter. The extensive understanding and thinking during the reading process have already solidified your mental models, exerting a subtle, lasting influence. The primary goal of reading is not to acquire knowledge, but to cultivate mental models. Keeping the brain active is far more useful than treating it as a hard drive. This coincides perfectly with the advice I give others: "Don't daydream; have a massive amount of high-quality input, ruminate on it, and then achieve epiphanies." In my mid-2025 review, the sole New Year's resolution I set for myself was to expand my knowledge input and elevate its quality. Looking at it now, that was incredibly wise. Before I hit 25, when the brain begins aggressive synaptic pruning and neuroplasticity significantly drops, I have about five years to acquire as much high-quality input as possible. Simultaneously, I must minimize exposure to low-quality, false information, such as Marxism or untrustworthy media. Forgetting is inherently much harder than learning; once these synaptic connections are formed, eliminating them is far more difficult than breaking a specific bad habit.

There's a reason, of course, for splitting one article into four. Gotta pad those post counts, right? Four articles multiplied by bilingual versions equals eight posts, covering my entire output for the past year. The bibliography this time includes over twenty monographs and papers, boasting both length and depth. Separating books by domain helps reduce cognitive load, ensuring the answers are as professional as possible. Intersecting domains can also introduce conflicts and interference; for instance, endocrinology literature certainly wouldn't highly endorse taking anti-androgens to treat androgenetic alopecia. Furthermore, this specific article detailing my experiences and thoughts is no less important to me than the other three.

I believe a good article should be both deep and broad (interdisciplinary). Researching and writing in separate parts achieves depth but makes breadth hard to reach. My best workaround is a single context window that fuses all the references, where I can refine and weigh the questions that span disciplines.

The three articles vary in length, but that's not because I'm a fuck-boy playing favorites. First off, the "research disaster zones" differ. The commercialization of skin is the most mature, meaning it contains the most bullshit, requiring the stripping away of countless pseudoconcepts and marketing gimmicks. As for hair, hair loss and damage only stem from a few causes; the mechanisms are clear and the interventions simple. Endocrinology, however, is the most hardcore and fundamental, touching upon numerous disciplines with incredibly complex mechanisms. The asymmetry in length also reflects the differing learning costs required to extract truth from falsehood in each domain.

Thanks to its RAG mechanism, NotebookLM is exceptionally well-suited for top-down learning. General LLM chatbots, even when equipped with web search, suffer from severe interference from outdated and false information. Letting them guide a specific operation is fine, but when it comes to learning a whole new field, you must use high-quality sources. For learning a new domain, the quality and coverage of the materials fed into NotebookLM are paramount. To avoid a "Garbage in, Garbage out" dilemma, the texts must represent mainstream views by prominent experts with comprehensive discussions. Multiple sources are also required for cross-verification.

Anecdotally, NotebookLM's context window feels extremely short, making a System Prompt absolutely essential. This ensures it constantly remembers its persona and task requirements. I used a simple System Prompt, like this:

Persona

You are now a senior endocrine consultant. Based on the provided knowledge base, you will cross-verify the contents of different authors' books and answer my questions in a professional yet concise tone.

Goal

I am male, currently 20 years old. I am curious about the functions, acquisition, synthesis, and mechanisms of action of various hormones and neurotransmitters. My goal is to maintain my youthfulness (boyishness) long-term.

Other Requirements

  1. I am lazy and dislike tedious steps or high financial investments.
  2. I want to train my ability to ask, probe, and learn from AI, while writing a blog post in a Q&A format. Therefore, after your standard answers and explanations, you must append a concise summary of the preceding content and suggest further follow-up questions.
  3. If a question cannot be cross-verified by at least two authors/sources, you must explicitly state this.

The greatest highlight of this prompt is the mandate for cross-verification and disclosure, which effectively reduces misinformation. The personalized requirements, meanwhile, keep NotebookLM's answers from drifting into the mediocre and generic.
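As a toy illustration, that rule can be written out mechanically. The claims and source names below are invented for the example; NotebookLM offers nothing like this programmatically.

```python
# The "two independent sources or say so" rule from the prompt, encoded
# mechanically. All claims and source names here are invented examples.

support: dict[str, set[str]] = {
    "Caffeine shampoo reverses AGA": {"influencer_listicle"},
    "DHT drives follicular miniaturization in AGA": {"derm_monograph", "review_paper"},
}

def verdict(claim: str) -> str:
    sources = support.get(claim, set())
    if len(sources) >= 2:
        return f"cross-verified by {sorted(sources)}"
    # Requirement 3 of the prompt: unverified claims must be flagged, not hidden.
    return "UNVERIFIED: must be explicitly disclosed"

for claim in support:
    print(f"{claim} -> {verdict(claim)}")
```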

Regarding cross-verification, I don't know if including unverified content in the article is beneficial. In my critique of Marxism, I mentioned the "Sleeper Effect": "When people receive a persuasive message accompanied by a 'discounting cue' (a low-credibility source, like state media propaganda or certain internet comments), attitude change is initially suppressed. However, the 'Dissociation Hypothesis' posits that over time, the brain's memory separates the message content from the source. The receiver eventually forgets who said it, making the memory of the suspicious source hard to retrieve, while the ideological conclusion is retained as independent information." Therefore, maybe one day, I'll forget where these unverified bits came from. However, since I personally curated NotebookLM's bibliography, regardless of the specific content, the sources themselves are at least trustworthy, so it shouldn't cause too much harm.

Mandatory citations do not guarantee accuracy. For example, Binghan's book says to "smoothly extract the comedone, and disinfect the skin afterward" and to "pay attention to disinfection before and after squeezing," but it never specifies what to disinfect with, suggesting alcohol only before squeezing blackheads. Common sense dictates that applying alcohol after squeezing would cause skin irritation or even hyperpigmentation, yet NotebookLM directly suggested alcohol for post-extraction disinfection. Only after repeated probing did it search for other possibilities. Several causes are plausible: NotebookLM improvising where the original text is ambiguous; Binghan's vague wording compounded by contextual interference; the lack of cross-verification from other sources; and so on.

What should be done in cases of such LLM hallucinations? Developers share a consensus: at this stage, Vibe Coding requires at least some background knowledge in software engineering. Otherwise, the code produced by these "agentic programming tools for the masses" is nothing but an ornamental vase—pretty, but shatters at a touch. The problem is, it's obviously unrealistic to study every related discipline entirely just to maintain appearances. Fortunately, NotebookLM can at least be forced to cite its sources, allowing you to check the original text—if you have the patience. Alternatively, you can open a new window or use other AI tools to cross-examine it. Beyond that, you're at the mercy of fate. Still, this boasts a much higher return on investment than watching influencers or undergoing systematic academic study.
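If you'd rather not leave it entirely to fate, the cross-examination step can be routinized. Here is a sketch, assuming a generic ask_other_model callable (the name is mine, not any real API):

```python
# A sketch of the "second opinion" fallback: feed the suspect claim and
# its cited passage to an unrelated model and ask it to attack, not confirm.
# ask_other_model is a placeholder for whatever second backend you use.

from typing import Callable

def second_opinion(claim: str, cited_passage: str,
                   ask_other_model: Callable[[str], str]) -> str:
    return ask_other_model(
        "Here is a claim and the passage cited in support of it.\n"
        f"Claim: {claim}\n"
        f"Cited passage: {cited_passage}\n"
        "Does the passage actually support the claim? If the claim goes "
        "beyond the passage, point to exactly where it overreaches."
    )

# Usage with a stub; in practice, route this to a different model family.
print(second_opinion(
    "Disinfect with alcohol after extraction",
    "...pay attention to disinfection before and after squeezing...",
    lambda q: f"[second model's critique of: {q[:40]}...]",
))
```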

Functionally, NotebookLM's context window is painfully short, suffering from severe amnesia. It fails to remember questions I asked just two or three turns ago, perhaps because the content retrieved via RAG crowds out the context window. But that's not really an issue—it's not a conversational or coding tool anyway. The amnesia even compensates somewhat for NotebookLM's inability to edit a prompt and regenerate the response (as in Gemini) or to edit the context directly (as in AI Studio): since it forgets so quickly, it promptly forgets my "dark history" and moves on to answer new questions. In the Skin and Hair chapters, I encountered zero hallucinations caused by context length. In the Endocrinology chapter, around the 15,000-word mark, hallucinations did appear. Not exactly gibberish, but its task parameters got diluted by the sheer volume of content, leading to repetitive fluff wrapped in mandatory citations. For instance, when I asked it to list all the references, it claimed it couldn't retrieve that information from the knowledge base. Also, because I wanted to maintain a "fuck-boy" vibe, NotebookLM at that stage actually started giving literal scumbag advice—stuff about push-and-pull dynamics and psychological games. So generally, you don't need to clear the chat history unless you're deep into an ultra-long conversation.

Firing off rapid-fire, barrage-like questions is purely out of necessity. Since the output length stays roughly constant no matter how much you ask, batching inevitably degrades the answer to each individual question to some extent. But I simply had too many questions on the same topic, and with NotebookLM possessing the memory of an elderly goldfish, feeding it questions one by one would mean constantly re-feeding it the premise. Relying on the context-optimization mechanisms of a carbon-based lifeform (me)? I'm too lazy for that.

The quality of the AI-generated follow-up questions varies wildly. In the Skin and Hair chapters they were mostly garbage, but while writing the Endocrinology chapter, NotebookLM frequently offered high-quality probing questions that stacked heavily on top of my own, causing the word count to explode. Why remains unclear.

After writing for so long, I've finally reached what I wanted to write about in the first place—the art of asking questions. Just as I hypothesized, my questioning ability is constantly leveling up. You can see this from the terrifying 20,000-word length and the sheer volume of questions in that final chapter. The quantity and quality of questions in the Endocrinology chapter are in a completely different tier compared to the first two; this capability upgrade feels like a non-linear emergence. As model capabilities evolve, writing templated, rigid prompts is no longer an advantage and can sometimes even interfere with the model's reasoning. Instead, the ability to ask good questions has become the new bottleneck. Templates can be copied, but taste cannot. I need to document my methods for asking good questions, forcing myself to organize my experience into a standardized workflow.

The most basic questions are "What is it, why does it happen, how to do it / what is it used for?" For example, "How do I differentiate between normal hair loss and AGA (Androgenetic Alopecia)?" or "What are the effects of caffeine-infused shampoo?" This format has strong causality and is mainly used to confirm facts and filter out pseudoconcepts. The downside is that applying this framework is highly mechanical and somewhat constrains the ability to formulate truly profound questions.

Then there's dimension reduction via first principles—asking for underlying mechanisms. For example, "Which specific sugars does the glycation reaction target?" or "What is the scientific basis for the 'Vitamin C in the morning, Retinol at night' routine? Can Vitamin C actually replace sunscreen?" Because these questions are packed with jargon, they're highly satisfying to read. Furthermore, understanding a specific mechanism facilitates knowledge transfer, like analyzing whether a certain food will or will not cause glycation.

Next is envisioning a practical use case, or even breaking it down chronologically/in stages. For example: "How should I dry my hair after washing? How do I blow-dry it? What techniques should I use? What are the requirements for wind speed and temperature? How dry should it be? Do you recommend blowing it half-dry, or waiting until it's half-dry to blow it?"

Within this practical scenario approach, you can get a bit more extreme—imagining edge cases to try and "break the system" (find bugs). This involves cross-disciplinary intersections, extreme scenario hypotheses, and stress testing. For example: "If sleep pressure exists, why am I no longer tired after pushing through a wave of exhaustion?" or "If someone displays a dominant posture and stares directly at me, how can I dismantle them?"

Another technique is switching stakeholders to shift perspectives. For instance, I asked how to be a fuck-boy to make the other person addicted to me, and then immediately asked how to avoid being played by one. I asked how to project a dominant posture to influence others, then asked how to avoid being influenced by someone else's dominant posture. I consider this a very cheap form of empathy—not that I have to empathize with various stakeholders, but rather I make the LLM do it for me.

Making reverse hypotheses and playing devil's advocate is also necessary, asking "What if I don't...?" questions. For example, when the LLM suggested sleeping after a trauma, I probed further, found out it required a long sleep, and then asked, "What if I get dumped by several girlfriends during the late morning?" Obviously, you can't take a long sleep in the late morning, which forced the AI to dig deeper into its sources, eventually answering that a 90-minute nap is required. Later, I asked, "If I don't get enough sleep that night, what are the consequences? Can it be made up for later, or is it permanent?" It answered that the damage is permanent (though this lacked cross-verification).

After qualitative analysis comes quantitative analysis. Both humans and LLMs love to fool people with vague words like "moderate amounts," "possibly," or "helpful," which is dangerous in practice. So I will ask, "Exactly how long should a nap be to restore energy without destroying sleep pressure?" or "What are the universal and active ingredients in shampoo? How do I choose a shampoo based on specific ingredients and their concentrations?"
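Collected in one place, the patterns above distill into something like the following. The phrasings are my own paraphrase of the examples, not a canonical taxonomy:

```python
# The questioning patterns above, distilled into reusable templates.
# The exact wordings are my own distillation, not a canonical taxonomy.

QUESTION_PATTERNS: dict[str, str] = {
    "basics":          "What is {X}, why does it happen, and what is it used for?",
    "mechanism":       "What is the underlying mechanism of {X}, down to first principles?",
    "scenario":        "Walk me through {X} step by step in this concrete situation: {CASE}.",
    "edge_case":       "Take {X} to an extreme: what breaks if {EXTREME_CASE}?",
    "stakeholder":     "Now answer from the other side: how do I defend against {X}?",
    "devils_advocate": "What if I simply don't do {X}? What are the consequences, and are they reversible?",
    "quantitative":    "No 'moderate' or 'possibly': give me numbers, doses, and durations for {X}.",
}

def render(pattern: str, **kwargs: str) -> str:
    return QUESTION_PATTERNS[pattern].format(**kwargs)

print(render("quantitative",
             X="nap length that restores energy without destroying sleep pressure"))
```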

No learning method is perfect, and the top-down approach has its Achilles' heel. A weak foundation makes the application layer sensitive to perturbations; if blind spots, edge cases, or LLM hallucinations occur, the fallout is significant. In other words, it lacks antifragility. This is an inherent margin of error, and error can only be minimized, never eliminated.

Additionally, this top-down method requires you to know what to ask. It must be driven by specific projects and goals—like treating biohacking as a project management task. It also demands a broad knowledge base; depth isn't important, but breadth is, even requiring interdisciplinary exploration and associative leaps. If you have zero exposure to a field, you should first use AI to build a scaffold and outline a general syllabus. For example, my very first question was: "As a starting point, based on all reference materials and my personal profile, please tell me what factors affect my physical appearance in terms of hair and scalp? What broad categories of actions do I need to take? In the different age stages from now on, what should be the specific focus of each?"

Moreover, the "ignorance of ignorance" is a lethal problem. Asking questions beyond your cognitive boundaries is incredibly difficult, and not knowing what you don't know makes it infinitely worse. Using metacognition—cognition about cognition—is a hedging strategy. By understanding how and why you think, you can partially map out your current blind spots. You can even introduce a meta-metacognition: thinking about how you think about your thinking, thereby discovering loopholes in your own observation mechanisms. In this article, I am observing myself, observing how I observe myself, and even observing myself observing how AI observes me. This piece reads like it was written by two or three versions of me. It carries the vibe of that meme about "stepping on your left foot with your right foot to fly into the sky." In psychology, this supposedly leads to overthinking and mental drain, but I haven't encountered that problem. Instead, I feel I'm progressing rapidly—perhaps thanks to the Machiavellianism. Actually, compared to playing Russian nesting dolls with metacognition, asking others or using an external perspective to ask the LLM is a much easier and highly efficient alternative. I skip the former, as people might figure out I'm a psycho, but I sometimes use the latter. Though, I suppose that's just a disguised form of meta-metacognition, outsourcing the first layer of metacognition to the LLM.

Inevitably, I have to make choices, and even compromises. For instance, I could never kick my dependency on sweet flavors, so I cannot agree with Jason Fung's stance against artificial sweeteners. I must have either Classic Coke or Coke Zero on hand at all times. Or, for example, I won't quit my daily masturbation habit just because I don't know exactly how much it raises my baseline dopamine; the pleasure it brings is visceral, and it's certainly healthier than smoking or doing cocaine. This is an impossible trinity of aging, temptation, and narcissism. Choosing the physical vessel means surrendering a portion of lifespan; choosing pleasure also means surrendering a portion of the vessel. If I were to "study healthily until 150," how utterly nihilistic and pathetic that would be. Then again, despite it not being my core intention, the regimen I'm researching is sufficient to improve my baseline health and, to some extent, extend my life.

When it comes to making choices, only you possess the full context. AI's advice, mainstream methods, and even some doctors' recommendations do not account for your complete dataset. A piece of information I unconsciously rely on when making a judgment might never have been communicated to anyone else, yet that single piece of information could lead to an entirely different decision. This means that while you can ask for and listen to others' reasoning, the final decision-making power must remain firmly in your own hands. This is the separation of facts from value judgments. Always be wary of overstepping, and maintain a vigilant disgust toward others binding the facts they provide with their own value judgments. When making a choice, the individual possesses absolute freedom and responsibility. Surrendering freedom while still bearing the responsibility is a catastrophic failure of risk management.

Under these circumstances, I choose to bear the risk of confirmation bias, a risk that cannot be eliminated. At least psychologically, this is far better than blindly trusting others and blaming myself when things fail. Customized truth, though dangerous, is far more potent than mass-market lies.

Regarding those questions I asked—like using a programmable insulin pump to inject cocaine and heroin—I was genuinely considering the feasibility of doing so. But the literature search results quickly made me realize that when dealing with a complex system like the human body, attempting to forcefully alter a single variable usually triggers unpredictable chain reactions. I can only operate and profit within the existing mechanisms, making micro-adjustments toward homeostasis, and be prepared to pay the price.

In the game between long-term investments and short-term planning, an LLM is fundamentally a barrier-lowering tool. It can deconstruct grand narratives into hyper-granular, immediate instructions, grinding down a steep long-term investment curve into a series of high-feedback, easily executable short-term plans. This low barrier is the leverage for rapid, low-cost execution. However, constrained by the pathetic goldfish memory of its context window, it struggles to formulate long-term plans. It can find local optimums, but cannot independently provide global optimums. Therefore, the overarching long-term plan must be personally controlled by me—the one with the full context—while the LLM serves purely as a decision-support aide.

I am highly lucid. Cheap chicken soup for the soul—reconciliation, self-love, inner beauty, "hard work pays off"—is something I use solely to spoon-feed and comfort others. I can never cast that spell on myself. If one day I actually buy into that rhetoric, it will be the greatest tragedy of all: it will mean I have grown old.

I guess I am a Level 2 chaotic system too, right? I will alter my trajectory based on the AI's predictions about me. Heh, observing myself is quite amusing sometimes.
