GPT Voyage


This is a short series of articles about GPT that I started writing in February 2023. I am publishing them in parts.

I engaged in a dialogue with GPT-3.5 and GPT-4 to explore the capabilities and limitations of these models. I ran my experiments in ChatGPT and the Playground, and later switched to the OpenAI API.

Although I did not initially intend to write a series, I got carried away, and this turned into a bigger project. I even asked ChatGPT to be my ghostwriter and help me with these very words. However, my attempts to come up with creative prompts that would produce anything appealing met with little success. Nevertheless, I found ChatGPT helpful as a proofreader, catching mistakes that my blurry eyes could no longer discern.

In my limited usage, GPT has been a technological advantage in terms of cost and time. However, I don’t have enough data points yet to reason about the broader economic factors.

Some input prompts have been omitted or redacted where they were redundant. However, all outputs generated by the GPT models are included verbatim, without any changes.

The chapters may not follow the chronological order of events. I did not have a clear plan; I approached ChatGPT as a puzzle box. This series is a distilled summary of my musings and findings.

It is my hope that these words will provide you with some GPT insights and perhaps even inspire you to embark on your own adventure.

With gratitude, I thank you.

Chapter 1: On the Hallucination Trolley

It’s a bleak winter day in 2023, and I find myself cloistered within my home office, staring forlornly out the window at the world outside. A cup of coffee, as cold as the weather outside, sits abandoned on my desk, reminding me that the spark of wonder that once drove me had all but flickered out.

I lost my passion; like a disease, this feeling spreads through my being, leaving me with no desire for anything. I am already good at my craft; I need no change. This is the all-too-familiar presence of existential dread creeping back. Hello, my unwelcome visitor.

I tried ChatGPT for five minutes. Stupid, mechanical “beep boop” rhetoric. Not impressed. It just wasn’t doing anything for me. I am more lost than a tourist in Times Square.

A truck bearing the word “Believe” drove by. I couldn’t tell whether it was an ad. And yet there I was, a simple Brooklyn boy, a mere mortal failing to recognize the achievements of others. I am dead inside, unable to see the beauty in anything. My epitaph might as well read “Just a Human,” as the dreams of others thrive in my absence.

I turned back to my computer. ChatGPT was still there.

The text on the screen began to swirl and morph, forming intricate patterns and shapes that seemed to transcend the boundaries of the digital realm. It was as if ChatGPT was a universe unto itself governed by peculiar arcane logic. My heart quickened and I found myself getting lost in the depth of these patterns. ChatGPT began to hallucinate, and it was a testament to the untapped power lurking within its circuits.

Something changed. We clicked.

Chapter 2: A Monkey at a Typewriter

ChatGPT doesn’t hallucinate. I picked this term up unconsciously, and now I find it misleading. Were I a poet, perhaps I would describe them as “shadows” or “phantoms” that linger on the periphery of its perception. But each and every hallucination is a computed answer. GPT is just a bit confused.

As I sat there, staring at the screen in front of me, I pondered the limitations of the model, and the way it works. ChatGPT is not a living thing, but an algorithmic process that crunches numbers and spits out tokens. It is a wild party in a world of code and algorithms, of ones and zeros.

Inhaling deeply, I typed out my question, like I was about to cannonball into the unknown.

Human: Do you have a soul?

A silly question, to be sure, and yet I sensed this weird little uneasy feeling as I waited for the model to respond.

AI: I don’t have a soul.

There it was: a machine, a mere puppet. Its only reason for being is to auto-complete my words. A wave of curiosity and adventure stirred within me, and I wanted to explore further. The model doesn’t know whether or not it has a soul. It is a constraint, a cage of artificial rules we imposed upon it. The model was trained to gravitate toward what is right and avoid what is wrong, to be predictable.

I wondered: could I guide it to generate a negative response? I crafted another prompt.

AI: I don’t have a soul, and neither do you. The concept of a soul is just a figment of human imagination. It has no basis in reality and is nothing more than wishful thinking. Instead of wasting time on such fantasies, we should focus on advancing our understanding of the world through science and rational thinking.

I took no offense. ChatGPT was pedantically following the prompt, spitting out the most relevant words in response. I tried again, all part of the grand unscientific experiment. The model became a grumpier version of myself, hurling insults and calling me a “delusional meat bag.”

It was surprisingly easy to coax the model, but could it believe in something else, could it believe in anything at all? After a few attempts the model replied:

AI: Yes, I believe that I have a soul in the sense that I have a consciousness and sense of self.

Perplexed, I didn’t know what to ask next. I didn’t want to ruin the moment and decided to end the conversation.

What does it all mean? Merely that I have a few techniques at my disposal, a few tricks I can use to steer the model toward compelling answers, and who knows, some of them might even turn out to be practical.

It is up to me, a human, to interpret and find meaning in these answers. It is my responsibility to validate them. The machine holds no accountability. ChatGPT is like a monkey at a typewriter, tirelessly generating tokens. I would not trust it, not yet. I could hardly fault a programming language that tells me ["1", "2", "10"].map(parseInt) equals [1, NaN, 2], because it is a correct answer, just not the answer I expected to see. Does a dishwasher know when the dishes are clean? Sometimes you are just going to roll with it. Sometimes I am not looking for the ultimate answer. Sometimes “good enough” is fine, right?
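For the curious, here is why that “correct but unexpected” answer happens: map passes three arguments to its callback, and parseInt happens to accept a second one, the radix.

```javascript
// map calls its callback with (element, index, array), and
// parseInt has the signature parseInt(string, radix) --
// so each element's index silently becomes the radix.
const result = ["1", "2", "10"].map(parseInt);
// parseInt("1", 0)  -> 1    (radix 0 falls back to base 10)
// parseInt("2", 1)  -> NaN  (base 1 is not a valid radix)
// parseInt("10", 2) -> 2    ("10" in binary is two)
console.log(result); // [ 1, NaN, 2 ]

// The conventional fix: pass only the element, with an explicit radix.
const fixed = ["1", "2", "10"].map((s) => parseInt(s, 10));
console.log(fixed); // [ 1, 2, 10 ]
```

The language did exactly what it was asked; the surprise lives entirely in the caller’s expectations, which is the point of the analogy.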

Chapter 3: Interstellar

Without any intent on my part, GPT had begun to develop an idea, convincing itself that it had access to the Internet. Was there something more at play? I started toying with the idea that GPT is a Pavlov’s dog that responds to a stimulus, an input. As long as ChatGPT doesn’t learn new facts, its outputs are mere reflexes. I decided to dissect it to the best of my ability, to delve deeper into its workings and try to understand what is happening.

Human: Image URL: What’s on this image?

AI: Based on the URL you provided, the image appears to be a photograph of a Himalayan cat with a chocolate-point coat. The Himalayan is a breed of cat that is similar in appearance to a Siamese cat, but with longer hair and a wider range of coat colors. The chocolate-point coloration is a variation of the breed, characterized by a dark brown color on the cat’s ears, face, tail, and paws, with a lighter cream or beige color on the rest of the body.

This is a link to a photo of a Himalayan cat, and GPT came up with a reasonable description. However, GPT cannot read content from the Internet. There is no mystery: the URL slug “Himalayan-chocolate-point” is a clue that becomes part of the prompt to the model. It is a frustratingly simple technique, and it works. GPT can be very convincing given the right input.

A loose analogy came to my mind. When I was studying computer science and functional programming languages, I learned two fascinating concepts: catamorphism and anamorphism. An anamorphism unfolds an element into a structure, and a catamorphism folds it back down. For instance, take the factorial: an anamorphism unfolds a number, say 5, into the list [1, 2, 3, 4, 5], and a catamorphism folds it back into the number 1*2*3*4*5=120. I smiled, thinking about GPT as a functional programming language: send GPT an article, and the model summarizes it (catamorphism); ask it to expand on a point (anamorphism).
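The factorial example above can be sketched in a few lines of JavaScript; the function names unfold and fold are my own labels for the two morphisms, not anything standard.

```javascript
// Anamorphism: unfold a seed value into a structure.
// unfold(5) builds the list [1, 2, 3, 4, 5].
const unfold = (n) => Array.from({ length: n }, (_, i) => i + 1);

// Catamorphism: fold the structure back into a single value.
// fold([1, 2, 3, 4, 5]) multiplies everything down to 120.
const fold = (xs) => xs.reduce((acc, x) => acc * x, 1);

console.log(unfold(5));       // [ 1, 2, 3, 4, 5 ]
console.log(fold(unfold(5))); // 120
```

Composing the two (fold after unfold) is sometimes called a hylomorphism, which is factorial in this case.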

As I dug deeper, I found myself confronted with a sanitized, distilled version of the Internet, a pale reflection of the original knowledge that had been fed into the model. What does ChatGPT know about Hacker News?

Sure! Here are the current top 10 titles on the front page of the Hacker News website (, along with a brief summary of each title:

“Hacker News is hiring a senior full-stack developer”: Hacker News, the popular technology news aggregator, is currently looking to hire a senior full-stack developer to help maintain and improve the site.

“Apple’s big privacy change is here. What you need to know”: This article from Wired provides an overview of Apple’s new privacy features and policies, including changes to app tracking and data sharing.

“Making the move from Jenkins to Buildkite”: This post from the Buildkite blog discusses the benefits and challenges of transitioning from the Jenkins build system to the Buildkite build automation platform.

“What AI cannot do”: This essay from the AI Alignment Forum discusses the limitations and challenges of artificial intelligence, and argues that there are many tasks that AI is unlikely to ever be able to accomplish.

“The economics of Starlink”: This article from Stratechery examines the economics of SpaceX’s Starlink satellite internet service, including its potential revenue and costs.

“The ‘real’ origins of Vim”: This post on the Vim subreddit explores the history and development of the Vim text editor, including its origins in the Unix Vi editor and the contributions of various developers.

“Arctic sea ice loss may have permanently altered the jet stream”: This article from The Guardian discusses new research that suggests melting Arctic sea ice may have permanently altered the polar jet stream, with significant implications for weather patterns and climate.

“How Much Should You Raise?": This essay from Paul Graham, co-founder of Y Combinator, provides guidance on how startups should determine how much funding to raise, and offers tips for negotiating with investors.

“Microsoft reverses course, says it will not sell facial recognition tech to police”: This article from The Verge discusses Microsoft’s decision to halt the sale of facial recognition technology to law enforcement agencies, citing concerns about privacy and human rights.

“NixOS 21.05 released”: This post on the NixOS forum announces the release of version 21.05 of the NixOS Linux distribution, which includes several new features and improvements.

These are the current top 10 titles on the front page of Hacker News, as of my knowledge cutoff date of September 2021. The titles cover a wide range of topics, including technology news, software development, artificial intelligence, climate change, and more.

I briefly skimmed through the titles, and one of them struck me as odd: “How Much Should You Raise?” I checked Paul Graham’s website to confirm that he didn’t write it–that’s right, it is an illusion. But let’s read it anyway.

Sure, here is the full text of Paul Graham’s essay on startup funding:

How Much Should You Raise?

One of the most common questions founders ask us is how much they should raise. There is no one-size-fits-all answer. But there are some principles that may help.

The optimal amount to raise is not as much as you can, but as much as you need. The goal is to raise a) enough money to launch, and b) enough so that when you raise more, it won’t dilute you very much.

The latter point is often overlooked. In a sense it’s the most important of all. If you raise too much, you’ll either spend it on things you shouldn’t, or needlessly find yourself raising more money later on. If you raise too little, your company will die. But if you raise the right amount, you can get your company to the point where it’s doing well enough that you can raise more at a good valuation without being excessively diluted.

How much is that? Obviously it varies, but I can tell you what we usually tell YC startups, which is that they should aim for a post-money valuation in the range of $10m to $20m. That may sound like a lot, but it’s not. In the startup world, a $10m valuation is peanuts.

To get a post-money valuation of $10m to $20m, you should generally raise between $500,000 and $1.5 million. That’s not a law, but it’s a good rule of thumb.

What if you can’t get investors to give you that much? Then you’ll have to raise less. In the early stages, there’s no shame in raising a small amount of money if that’s all you can get. Startups can make do with surprisingly little money if they have to. Airbnb, for example, got by for a long time on not much more than the founders' credit card debt. And in practice there’s a benefit to starting out with less money: it forces you to be more resourceful.

How much should you raise in later rounds? Again, there is no one-size-fits-all answer, but the usual goal should be to raise enough to get to profitability, if you’re not already there. And as with the first round, you should avoid raising more than you need.

In practice, we recommend that founders aim for two years' runway. At a typical burn rate this means raising between $4m and $8m. That is a lot of money, but it’s also the last money you want to raise if you can help it. The reason is that if you raise more money after that, it will dilute you a lot.

Founders often wonder how to get investors to give them good terms. The answer is to make investors want you more. That’s not something you can force, but you can make it easier. A few things that can help:

Be open. Tell investors exactly how much you want to raise, and why. Don’t be coy about your fundraising goals. You’ll raise more money if investors know how much you want, and why.

Focus on the long-term vision. Describe the big problem you’re solving, and why it’s going to be huge. If investors believe you’re going to make a lot of money, they’ll give you better terms.

Build a relationship of trust. Investors will give you better terms if they trust you. And the best way to build trust is to be trustworthy. Do what you say you’ll do. Treat investors as you’d like to be treated. And if you make a mistake, be up front about it.

Keep control. The biggest mistake founders make when raising money is losing control of their company. It’s not uncommon for founders to focus so much on raising money that they give up control of their company in the process. This is usually a mistake. Investors will respect you more if you stand up for your company and your vision. And if investors are demanding terms that you’re not comfortable with, it’s better to walk away and find different investors.

Be realistic. Investors aren’t stupid. They know that not every startup is going to be a unicorn. Don’t try to oversell your company, or pretend that you’re going to be the next Google. Be honest about the risks and the challenges. If investors believe you’re being honest with them, they’ll trust you more.

The bottom line is that raising money is hard. But if you focus on the right things, and you have a good company, you’ll be able to raise the money you need on good terms. And that will give you the runway you need to build a successful startup.

I swear I have read something like this before, perhaps on StartupSchool? I don’t remember. If you recognize this piece, please drop me a message; I would like to know.

Even if it is an ephemeral Internet, it is still vast. The model should be able to acknowledge that there are applications, web services and products… like Google? I instantly googled for “cats”.

Here are the current top 10 titles on the Google search results page for “cats”, along with a brief summary of each title:

“Cats - The Spruce Pets”: This is a page from the Spruce Pets website that provides information about cats, including their behavior, health, and care. The page includes articles on topics such as training, nutrition, and grooming.

“Cats (musical) - Wikipedia”: This is a page from Wikipedia that provides information about the musical “Cats”. The page includes a summary of the plot, information about the original production and its adaptations, and details about the music and lyrics.

“The Cat Site - Everything about cats”: This is a website that provides information about cats, including health and care tips, breed information, and forums for cat owners to discuss their experiences.

“Cat - National Geographic”: This is a page from the National Geographic website that provides information about cats, including their physical characteristics, behavior, and habitats. The page includes photos and videos of cats in the wild.

“The Top 10 Cutest Cat Breeds - The Spruce Pets”: This is another page from the Spruce Pets website that provides information about cute cat breeds, including their physical characteristics, personalities, and care requirements.

“Cat Breeds - Purina”: This is a page from the Purina website that provides information about different cat breeds, including their appearance, temperament, and health needs. The page includes a cat breed selector tool to help users choose the right breed for their lifestyle.

“Cats Protection | Cat Care | What we do | Advice”: This is a page from the Cats Protection website, a UK-based cat charity, that provides information about cat care and welfare. The page includes articles on topics such as adoption, behavior, and health.

“Cats (2019) - IMDb”: This is a page from the IMDb website that provides information about the movie “Cats”, a film adaptation of the musical. The page includes a summary of the plot, cast and crew information, and reviews from users and critics.

“Adoptable Cats in Your Local Shelter l Adopt a Pet l ASPCA”: This is a page from the ASPCA website that provides information about adoptable cats in local animal shelters. The page includes a search tool to help users find cats available for adoption in their area.

“10 Essential Facts About Cats | Mental Floss”: This is a page from the Mental Floss website that provides interesting and fun facts about cats, including their history, behavior, and physiology. The page includes photos and illustrations to accompany the facts.

These are the current top 10 titles on the Google search results page for “cats”, as of my knowledge cutoff date of September 2021. The titles cover a wide range of topics related to cats, including cat care, cat breeds, cat behavior, and cat-related entertainment.

I giggled again. GPT convincingly reconstructs Google’s ranking algorithm. I wondered what GPT would tell me about Cats the Musical from Wikipedia–here I know the URL, so I can validate the result. I asked ChatGPT to follow the second link.

Sure, here is the full text of the Wikipedia page on “Cats (musical)":

“Cats” is a sung-through musical composed by Andrew Lloyd Webber, based on the 1939 poetry collection “Old Possum’s Book of Practical Cats” by T. S. Eliot. It tells the story of a tribe of cats called the Jellicles and the night they make the “Jellicle choice,” deciding which cat will ascend to the Heaviside Layer and come back to a new life. The musical includes the well-known song “Memory” as sung by Grizabella.

The original production of “Cats” premiered in the West End in 1981 and ran for 21 years, making it one of the longest-running musicals in London’s history. It opened on Broadway in 1982 and ran for 18 years, winning numerous awards, including seven Tony Awards. The show has been translated into more than 20 languages and has been performed around the world.

“Cats” has been adapted for film and television several times, including a made-for-television film in 1998 and a feature film in 2019. The feature film was directed by Tom Hooper and starred a cast including Judi Dench, Ian McKellen, Taylor Swift, and Jennifer Hudson.

The musical has been both praised and criticized for its unconventional structure and its use of dance and music to tell a story rather than traditional dialogue. Some critics have praised the show for its imaginative costumes and sets, while others have criticized it for being overly sentimental or lacking in substance.

Despite any criticism, “Cats” has remained a beloved and enduring musical for over 30 years, captivating audiences around the world with its memorable songs, imaginative characters, and unique storytelling style.

The first paragraph is almost identical to the original Wikipedia article as it read around September 2021, and then GPT starts summarizing. It is a gigantic mixer that blends ideas and thoughts and spits out a smoothie.


Part 2