
Behind the Prompts: What I Learned from Engineering LLM Prompts

Over the last 2 days, I’ve dived deep into Prompt Engineering — not the surface-level prompt crafting, but the core architecture of how LLMs interpret, generate, and optimise responses based on prompt structure and configuration.

What started as a casual exploration through Google’s Prompt Engineering Essentials turned into an unexpectedly technical journey that bridged NLP design, token sampling, inference configuration, and agent-based reasoning.

Let’s break it down.

LLMs Are Not Magical — They’re Token Predictors

Large Language Models, whether GPT, Gemini, Claude, or Llama, are not creative writers — they are statistical machines that predict the next token given the previous context. That’s all.

When we input a prompt, the LLM tokenises it, computes a probability distribution over its entire vocabulary, and samples the next token from that distribution.

Each output token becomes part of the next input — forming an autoregressive chain. This chain is where prompt engineering plays a major role.
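Conceptually, the loop looks something like this (a minimal sketch; `model`, `tokenizer`, and `sample` are placeholders for whatever stack you use, not a specific library):

```python
def generate(model, tokenizer, prompt, max_new_tokens=50):
    tokens = tokenizer.encode(prompt)       # prompt -> token IDs
    for _ in range(max_new_tokens):
        logits = model(tokens)              # score every candidate next token
        next_token = sample(logits)         # pick one (sampling config below)
        tokens.append(next_token)           # the output feeds back into the input
        if next_token == tokenizer.eos_id:  # stop token ends the chain
            break
    return tokenizer.decode(tokens)
```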

Prompt Engineering = Controlling the Prediction Pipeline

When we design prompts, we’re essentially biasing the LLM towards the desired output path. But that’s only half the job — the other half is configuring the LLM’s sampling settings, which control how randomness and diversity affect token generation.

From Google’s whitepaper, I realised there are three core parameters we must understand:

Temperature → Controls entropy in token sampling; lower values make generation more deterministic, higher values more diverse.

Top-K → Limits the candidate pool to the top-K probable tokens.

Top-P (Nucleus Sampling) → Chooses the smallest possible set of tokens whose cumulative probability ≥ P.

These three form the sampling triad.
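Here's a minimal sketch of how the triad interacts during sampling, using NumPy over a toy logit vector (the default values here are illustrative, not a recommendation):

```python
import numpy as np

def sample_token(logits, temperature=0.7, top_k=40, top_p=0.95, rng=None):
    rng = rng or np.random.default_rng()
    # Temperature: rescale logits; lower values sharpen the distribution.
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Top-K: keep only the K most probable tokens, renormalised.
    order = np.argsort(probs)[::-1][:top_k]
    kept = probs[order] / probs[order].sum()
    # Top-P: smallest prefix of the sorted pool with cumulative probability >= P.
    cutoff = int(np.searchsorted(np.cumsum(kept), top_p)) + 1
    pool = order[:cutoff]
    pool_probs = kept[:cutoff] / kept[:cutoff].sum()
    return int(rng.choice(pool, p=pool_probs))

# Toy "vocabulary" of five tokens with raw model scores.
print(sample_token([2.0, 1.5, 0.3, -1.0, -2.0]))
```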

Google's whitepaper also recommends baseline configurations as starting points: a low temperature with a moderately high top-P for coherent output, temperature 0 for tasks with a single correct answer, and a higher temperature when you want creative variety.

Google’s 5-Step Prompting Framework

The Essentials course distils prompt design into five repeatable steps: define the Task (including persona and output format), supply Context, add References (examples of what good output looks like), Evaluate the result, and Iterate.

The Prompt is Code — Especially in Agent Design

I’ve been building a tool named AdaptiveFuzz, a cybersecurity automation agent that performs basic enumeration techniques using an LLM backbone. Unlike traditional programming, where functions are deterministic and require strict syntax, LLMs operate via intention expressed in text.

With the right prompt and system design, the agent can plan and execute enumeration tasks on its own. For instance:

You are a red team analyst. Use the terminal to list all users in this Linux system. Then analyse /etc/passwd for any suspicious accounts.

This acts like a function call. If the agent is connected to terminal tools, the LLM breaks down tasks step-by-step, issues commands, processes results, and iterates. The better the prompt, the more accurate the enumeration.
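In code, the analogy is direct: the prompt template is the function signature, and the LLM call is the body. A sketch, assuming a hypothetical `llm.complete()` client and a `terminal.run()` tool wrapper (not AdaptiveFuzz's actual internals):

```python
ENUM_PROMPT = """You are a red team analyst.
Use the terminal to list all users in this Linux system.
Then analyse /etc/passwd for any suspicious accounts."""

def enumerate_users(llm, terminal):
    # The prompt is the "function body"; the model decides what to run.
    command = llm.complete(ENUM_PROMPT + "\nReply with a single shell command.")
    output = terminal.run(command)
    # Feed the observation back for analysis, then return the findings.
    return llm.complete(f"{ENUM_PROMPT}\n\nCommand output:\n{output}\n\nFindings:")
```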

Multi-Level Prompting: Roles, Context, System Control

The whitepaper outlines system prompting, role prompting, and contextual prompting as the primary techniques:

System prompt → You are a JSON API. Only respond in JSON.

Role prompt → You are an expert penetration tester working in a red team.

Contextual prompt → Given the following user access logs…

These enable modular instruction pipelines, just like functions in traditional code — each part has intent, scope, and isolation.
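Composed as a chat message list, the three levels stay modular and isolated. A sketch using the common role-based message schema (the exact client call varies by provider; `access_logs` is placeholder data):

```python
access_logs = "2025-01-12 03:14 root login from 203.0.113.7\n..."  # placeholder

messages = [
    # System prompt: hard constraints on behaviour and output format.
    {"role": "system", "content": "You are a JSON API. Only respond in JSON."},
    # Role prompt: the persona the model should adopt while reasoning.
    {"role": "user", "content": "You are an expert penetration tester working in a red team."},
    # Contextual prompt: task-specific data scoped to this single call.
    {"role": "user", "content": "Given the following user access logs, flag anything suspicious:\n" + access_logs},
]
```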

Chain of Thought, ReAct, Self-Consistency — Not Just Fancy Names

Chain of Thought (CoT)

Prompts like:

Let’s solve this step by step…

force the model to externalise its reasoning, leading to better accuracy on multi-step problems like math or data processing.
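A quick contrast in code (the question and arithmetic are just an illustration):

```python
question = "A server rotates its logs every 6 hours. How many rotations happen in 3 days?"

direct_prompt = question
cot_prompt = question + "\nLet's solve this step by step."
# The CoT version pushes the model to write out the intermediate steps
# (3 days = 72 hours; 72 / 6 = 12) before committing to an answer,
# which is where the accuracy gain on multi-step problems comes from.
```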

ReAct (Reason and Act)

Combines reasoning with real-world tool usage:

  1. Think
  2. Act (search, shell, API call)
  3. Observe
  4. Repeat

This loop is what powers real LLM agents.
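A minimal sketch of that loop, assuming a hypothetical `llm.complete()` client and a `tools` dict mapping tool names to callables (real agents add robust parsing, safety checks, and stop conditions):

```python
def react_loop(llm, tools, task, max_steps=10):
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        thought = llm.complete(transcript + "Thought:")        # 1. Think
        transcript += "Thought:" + thought + "\n"
        if "Final Answer:" in thought:
            return thought.split("Final Answer:")[1].strip()
        if "Action:" in thought:                               # 2. Act
            name, _, arg = thought.split("Action:")[1].strip().partition(" ")
            observation = tools[name](arg)                     # e.g. search, shell, API
            transcript += f"Observation: {observation}\n"      # 3. Observe
    return transcript                                          # 4. Repeat until done
```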

Self-Consistency

Run multiple CoT prompts → sample various reasoning paths → pick the most common answer.

It’s like ensemble learning, but in prompt space.
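In code, self-consistency is just sampling plus a vote. A sketch with placeholder `llm.complete()` and `extract_answer()` helpers:

```python
from collections import Counter

def self_consistent_answer(llm, prompt, n=5):
    answers = []
    for _ in range(n):
        # Non-zero temperature so each run explores a different reasoning path.
        reasoning = llm.complete(prompt + "\nLet's solve this step by step.",
                                 temperature=0.7)
        answers.append(extract_answer(reasoning))  # e.g. parse the final line
    return Counter(answers).most_common(1)[0][0]   # majority vote
```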

Realisation: Prompt Engineering Is Software Engineering

The biggest shift in my thinking?

Prompt engineering isn’t just an art — it’s programming with natural language. And as LLMs become more capable, prompt structures will become as critical as code architecture.


Prompt engineering isn’t just about better answers — it’s about unlocking LLM automation, agent autonomy, and scalable intelligence.

If you’re building tools, testing ideas, or debugging workflows, get under the hood. Study the temperature. Tune the top-k. Structure your prompts.

Because behind every good LLM output… is a great engineer, writing better prompts.

#AI #KT