When ChatGPT launched, I thought it was overhyped. Another chatbot, I figured. Then a friend showed me something it did that genuinely shocked me, and I decided to actually give it a proper test.
So for seven days, I used ChatGPT for as much of my work as possible. Writing, research, problem-solving, brainstorming — the lot. Here's my genuinely honest take, including the parts that didn't work.
First, what actually is ChatGPT?
ChatGPT is a conversational AI built by OpenAI. You type something — a question, a request, a problem — and it generates a response. It's trained on an enormous amount of text, which means it has a surprisingly broad base of knowledge and can write in a huge range of styles and formats.
The free version uses GPT-3.5, which is capable but has limits. The paid version (ChatGPT Plus at $20/month) uses GPT-4, which is noticeably smarter — especially for complex reasoning and nuanced writing tasks.
Where it genuinely impressed me
Writing first drafts. This is where ChatGPT earns its reputation. Give it a clear brief — topic, tone, target audience, rough length — and it produces a solid starting point in seconds. Not perfect, but genuinely useful. I probably saved 40% of my drafting time this way.
Explaining complex topics is another strong suit. I threw some tricky concepts at it and asked for plain-English explanations, and it impressed me every time. It's like having a very well-read friend who never gets impatient with questions.
Code debugging surprised me. I'm not a developer, but I do sometimes need to tweak scripts. Pasting broken code and asking ChatGPT to fix it and explain what was wrong worked better than I expected.
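To give a concrete sense of what I mean, here's a hypothetical example of the kind of small bug it caught for me (the function and data are made up for illustration, not a script from my actual week):

```python
# Hypothetical example of the kind of broken snippet I'd paste in.
# The broken version used a mutable default argument:
#
#     def collect_tags(post, tags=[]):
#         tags.append(post["tag"])
#         return tags
#
# The default list is created once and shared between every call,
# so tags silently accumulate across unrelated posts.

# The kind of fix ChatGPT suggested: default to None and create
# a fresh list inside the function on each call.
def collect_tags(post, tags=None):
    if tags is None:
        tags = []
    tags.append(post["tag"])
    return tags

first = collect_tags({"tag": "ai"})
second = collect_tags({"tag": "review"})
print(first)   # ['ai']
print(second)  # ['review'] (no leakage from the first call)
```

What made it useful wasn't just the fix, but the explanation of *why* the original version misbehaved.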
Where it fell flat
Anything that requires real-time information. ChatGPT's knowledge has a cutoff date, which means it doesn't know about recent events. I asked it about something that happened last month and it confidently gave me outdated information — without flagging that it might be wrong. That's a real problem.
Factual accuracy. This one matters. ChatGPT can produce fluent, confident-sounding statements that are simply not true (the well-documented "hallucination" problem). It doesn't always know what it doesn't know. Always verify anything important independently.
Nuanced personal advice. It can provide frameworks and general guidance, but it lacks the contextual understanding that comes from actually knowing you, your situation, and your history.
"ChatGPT is an incredibly powerful first-draft machine — but it needs a human in the loop to catch errors and bring genuine judgment."
The learning curve is real
Here's something I didn't expect: how much the quality of your results depends on how you ask. Vague prompts get vague answers. The more specific and detailed you are, the better the output. Telling it to adopt a specific role, write for a specific audience, and follow specific constraints makes a huge difference.
This is called prompt engineering, and it's a genuine skill worth developing. I'll cover it in a separate post — it made ChatGPT probably three times more useful for me once I understood the basics.
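Here's the difference in practice. These prompts are invented for illustration, but they're representative of what worked for me:

```
Vague:    "Write something about email marketing."

Specific: "You are a marketing copywriter. Write a 150-word intro
          for a blog post about email marketing, aimed at small
          business owners who find it intimidating. Friendly,
          practical tone. No jargon."
```

The first gets you a generic essay; the second gets you something close to usable.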
Is it worth it?
For most people who do knowledge work of any kind — writing, research, analysis, communication — yes. The free tier alone is useful. The paid version is worth considering if you're using it heavily for work.
But go in with realistic expectations. It's a remarkable tool with real limitations. Think of it as a very capable assistant who works fast but needs supervision, not a replacement for your own judgment.
Bottom line: ChatGPT is genuinely useful, not just hype. But the people getting the most out of it are the ones treating it as a powerful starting point — not a finished product.