How to get excited about (and play with) ChatGPT – spring 2024

What do I need to know about ChatGPT/LLMs as a user?

You can google ‘what is chatGPT’ – you’ll get vague generic waffle that you probably already know from reading mainstream news articles.

You can google ‘what is LLM’ – you’ll get extremely detailed guides for engineers on how to build them.

One is too shallow, the other too deep; here are my Goldilocks answers:

Landscape:

  • ChatGPT is powered by an LLM; almost everything that's happened in AI over the last few years is a side-effect of this new tech (LLMs)
  • The theory of LLMs is simple and well understood, but they have to be scaled up enormously to become ‘intelligent’ – which makes them expensive to build (on the order of a billion dollars each)
  • You can build a very slow and stupid LLM yourself for free; it’s an intellectual exercise but it produces something not very useful 🙂
  • There are roughly four massive Western companies actively building LLMs – OpenAI (heavily backed by Microsoft), Google, Facebook/Meta, Anthropic – plus government-backed efforts elsewhere (e.g. China).
  • LLMs are largely interchangeable

“Chat”GPT ?

ChatGPT is a front-end on top of an LLM; it only works with OpenAI’s own LLMs (called “GPT-something”: GPT-3, GPT-4, GPT-4-turbo, etc.)

  • Because ChatGPT was first, and became so popular, most LLM vendors now also offer “a chatbot on top of our LLM”
  • …and they try to catch up in user numbers by out-competing on features: rival (non-ChatGPT) chatbots tend to be faster, higher quality, or have extra features – but they have less adoption

“Custom GPT” ?

  • OpenAI has a feature most others don’t: “custom GPTs”, where – with no programming knowledge – you can ‘program’ the LLM to do some tasks, using natural human language
  • You can share Custom GPTs with other people — see below
  • A “Custom GPT” (or “GPT” for short – but that’s a terrible name, because it conflicts with the technical term “GPT”, which describes the internals of how an LLM is built!) in this context is only “a conversation with a chatbot” – so it’s great for creating a new document, or analysing an incoming document (up to the token limit).
  • …but it’s terrible for doing large-scale analysis, or for outputting multiple documents, etc. For those you want to look into RAG (Retrieval-Augmented Generation: a database-style access layer that lets the LLM look things up in your own data), or into writing your own source code to “wrap” the LLM and use it as a small tool inside your larger app – see the sketch after this list
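
If the source-code route sounds relevant, here’s roughly what “wrapping” an LLM looks like – a minimal sketch in Python, assuming the official openai package (version 1 or later) is installed and an API key is stored in the OPENAI_API_KEY environment variable; the model name, persona and prompts are made up purely for illustration:

```python
# Minimal sketch of "wrapping" an LLM in your own code.
# Assumes: `pip install openai` (v1+) and an OPENAI_API_KEY environment variable.
# The model name, persona, and prompts below are purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment


def ask_expert(persona: str, background: str, question: str) -> str:
    """One call to the LLM, following the advice further down this page:
    persona first, background next, and the actual question last."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # use whichever GPT model your account has access to
        messages=[
            {"role": "system", "content": f"Imagine you are {persona}."},
            {"role": "user", "content": f"{background}\n\n{question}"},
        ],
    )
    return response.choices[0].message.content


# Example: using the LLM as one small tool inside a bigger app
summary = ask_expert(
    persona="a professional analyst with 10 years' experience",
    background="Here is our quarterly report: ...",  # your app would paste the real text here
    question="Summarise the three biggest risks in one short paragraph.",
)
print(summary)
```

The point is that the LLM becomes just another function your app can call – which is what makes large-scale or multi-document workflows possible, where a chat window isn’t.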

Learn by playing: the easiest route to learn more!

Generative AI has been such huge news partly because it is so incredibly easy to start using: it requires zero training – and zero tech knowledge. The worst thing you can do is wait – wait to be ‘ready’, wait to be ‘taught’, wait for ‘everyone else to get good and improve their lives because I was too self-conscious to dive in and experiment for myself’.

Custom GPTs: pre-made AIs that solve specific problems

I’ve written a whole page on Custom GPTs, with examples and links to other resources: Custom GPTs

Making ChatGPT work for you

Getting started takes … minutes:

  1. Visit https://openai.com/chatgpt/
  2. Create an account – it’s FREE
  3. Start typing … it’s a simple web page with a chatbox.

Advanced techniques you can try straight away, which will give you much better results:

  • DON’T use it like Google: we’ve all learnt to write short, terse, “googleable” questions – these give TERRIBLE results when fed to an LLM
  • DO re-think your problem as a “creative challenge”: LLMs are fundamentally ‘creative exploration of ideas’ engines – they excel at being given an open-ended remit, and rise to the challenge
  • DON’T over-specify your problem: unlike with all previous computer systems, you DO NOT need to be precise (precision actually gives you worse results)
  • DO provide the background first, then ask the question as the very last sentence – LLMs produce better results this way round than if you type the exact same text but put the question first and then drown it in pages of context, because they’ve ‘forgotten’ what you asked for by the end (see the worked example after this list)
  • DO tell the LLM who you want it to (pretend to) be – due to their internal workings, LLMs assume by default that you want them to be stupid (the average of all Facebook posts): if you want them to be ‘intelligent’ you have to ask for it. Official advice from OpenAI: “Start your request with ‘imagine you have an IQ over 150’ and you’ll get smarter responses”
  • DO take this further: “Imagine you’re a Maths professor” (before asking it to solve a maths problem), “Imagine you’re a professional software engineer with 10 years’ experience working at Google” (before asking it to write some code for you), etc.
  • DO write multiple paragraphs as a single opening request – LLMs start ‘empty’ in every conversation, so you need to ‘warm them up’ to the topic by giving them a small info-dump (one page of A4 text is about right, plus any extra documents relevant to your request)
  • DON’T write 1-sentence requests – you’ll get low-quality / generic answers
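
Putting those rules together, a good opening request might look something like this (the scenario is invented, purely to show the shape – persona first, then background, then the question last):

“Imagine you’re a marketing director with 15 years’ experience in consumer brands. I run a small independent bookshop. We’ve just started a monthly book club; our customers are mostly retirees, and our only online presence is a Facebook page that gets very little engagement. Below are the three posts we published last month: [paste posts here]. What should we try differently next month to get more people coming to the book club?”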

Learning more / keeping up to date

Add your email to my mailing list to get updates each time I do new AI experiments, so you can read about them and be inspired – and try them out yourself.

This website is my live list of “things I’m doing with AI – and which ones succeeded/failed” – https://aipioneerspath.com

Send me your own experiments, things you’ve achieved with ChatGPT and your own successful prompts! I like to feature and share the best ones on LinkedIn – message me directly on https://www.linkedin.com/in/adam-martin-b3ba4414a/ or email: adam.m.s.martin+aipp@gmail.com

…and a deep-dive into the background of GPT, LLMs, etc.

If you’ve got an hour free to spend reading about how we got here, what each letter of ‘GPT’ means (Generative Pre-trained Transformer), and some of the underlying reasons why this stuff works – and what it is good/bad at – here’s an excellent (but very long!) write-up that walks you through everything, with code examples and open-source links you can use to reproduce it yourself, from basic traditional ML right through to modern LLMs:

https://blog.openthreatresearch.com/demystifying-generative-ai-a-security-researchers-notes/
