
15 March 2026

How I Optimised My Website for Answer Engines

A technical breakdown of how stefanbardega.com was structured for AI discovery. From JSON-LD schemas and entity cross-referencing to robots.txt directives for AI crawlers: the complete implementation.


Search used to be about ranking pages. Now it is about training systems.

Large language models, answer engines, and AI assistants do not simply index web pages the way traditional search engines do. Instead they ingest structured information, build entity graphs, and retrieve content when a user asks a question.

This fundamentally changes how websites should be structured.

Over the past few weeks I rebuilt my personal site, stefanbardega.com, specifically to explore how websites can be optimised for Answer Engine Optimisation (AEO) as well as traditional SEO.

The objective was not simply to rank pages in Google, but to ensure that systems like ChatGPT, Perplexity, Claude, and Google's AI products can clearly understand who I am, what I do, and what I have published.

Below is the full technical breakdown of how the site was structured.

1. A Universal Meta Tag System

The first layer of optimisation was a centralised SEO component that applies consistent metadata across every page. Every page automatically renders the same core head tags.

Title tags

Titles are unique per page but follow a consistent format:

<title>Stefan Bardega | Global Head of Performance Marketing</title>
<title>Stefan Bardega | Biography & Career</title>
<title>Research & Writing | Stefan Bardega</title>
<title>Press and Publications | Stefan Bardega</title>
<title>Speaking Engagements | Stefan Bardega</title>
<title>Contact Stefan Bardega | Speaking & Press</title>

This structure does two things. First, it creates clear keyword relevance. Second, it reinforces the entity name on every page. AI systems often use titles as one of the first signals when classifying a document.

Meta descriptions

Each page includes a unique description. Example:

<meta name="description" content="Stefan Bardega is the Global Head of Performance Marketing at IDX. Over 20 years of experience in digital marketing, AI-driven brand discovery, and performance media strategy." />

Descriptions help both traditional search snippets and AI summarisation systems.

Canonical URLs

Every page declares a canonical URL:

<link rel="canonical" href="https://stefanbardega.com/biography" />

This prevents crawlers from treating variants of the same URL, with or without trailing slashes or query parameters, as separate pages. Canonicalisation is critical for entity consistency.

Robots directives

All pages explicitly instruct crawlers to index and follow:

<meta name="robots" content="index, follow" />
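Put together, the head the shared component renders for a page looks roughly like this (values illustrative, drawn from the tags above):

```html
<head>
  <title>Stefan Bardega | Biography &amp; Career</title>
  <meta name="description" content="..." />
  <link rel="canonical" href="https://stefanbardega.com/biography" />
  <meta name="robots" content="index, follow" />
</head>
```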

2. Open Graph Tags for AI Sharing

Many AI systems retrieve content by resolving URLs in the same way social platforms do. Open Graph metadata controls how those URLs resolve.

<meta property="og:title" content="Stefan Bardega | Global Head of Performance Marketing" />
<meta property="og:type" content="website" />
<meta property="og:url" content="https://stefanbardega.com" />
<meta property="og:image" content="https://stefanbardega.com/images/stefan-bardega.jpg" />
<meta property="og:site_name" content="Stefan Bardega" />
<meta property="og:locale" content="en_GB" />

Article pages use og:type = article. Profile pages use og:type = profile. This helps crawlers classify page types before reading the full document.

3. Twitter / X Cards

Every page includes Twitter card metadata:

<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="..." />
<meta name="twitter:description" content="..." />
<meta name="twitter:image" content="https://stefanbardega.com/images/stefan-bardega.jpg" />

This ensures consistent previews across LinkedIn, X, messaging apps, and AI assistants resolving shared URLs.

4. Structured Data: The Core AEO Layer

The most important part of the site is structured data. Schema.org JSON-LD tells AI systems exactly what the content represents. Five separate schema types are implemented across the site, all connected through a single shared entity identifier.

4a. Person Schema: Defining the Entity

The homepage and biography page define the primary entity at https://stefanbardega.com/#person:

{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://stefanbardega.com/#person",
  "name": "Stefan Bardega",
  "jobTitle": "Global Head of Performance Marketing",
  "worksFor": {
    "@type": "Organization",
    "@id": "https://investisdigital.com/#organization",
    "name": "IDX"
  },
  "url": "https://stefanbardega.com",
  "image": "https://stefanbardega.com/images/stefan-bardega.jpg",
  "knowsAbout": [
    "Performance Marketing",
    "AI in Marketing",
    "Answer Engine Optimisation",
    "Brand Discovery",
    "Paid Media Strategy",
    "Programmatic Advertising",
    "Search Marketing",
    "CRM and Analytics",
    "Digital Measurement",
    "Media Mix Modelling",
    "First-Party Data Strategy",
    "Marketing Technology"
  ]
}

This schema tells AI systems: "This website represents this person."

4b. Career History Using hasOccupation

Career history is encoded directly in the schema. Example role entry:

{
  "@type": "Role",
  "roleName": "President, iProspect EMEA",
  "startDate": "2019-01",
  "endDate": "2020-05",
  "worksFor": { "@type": "Organization", "name": "iProspect" }
}

Six roles are included: IDX, Traktion, iProspect (two roles), ZenithOptimedia, and MediaCom. This allows AI assistants to answer questions like "What companies has Stefan Bardega worked for?" without needing to infer it from page text.
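In context, the role entries sit in a hasOccupation array on the Person schema, along these lines (abridged to two of the six roles; dates on the IDX role omitted here):

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://stefanbardega.com/#person",
  "name": "Stefan Bardega",
  "hasOccupation": [
    {
      "@type": "Role",
      "roleName": "Global Head of Performance Marketing",
      "worksFor": { "@type": "Organization", "name": "IDX" }
    },
    {
      "@type": "Role",
      "roleName": "President, iProspect EMEA",
      "startDate": "2019-01",
      "endDate": "2020-05",
      "worksFor": { "@type": "Organization", "name": "iProspect" }
    }
  ]
}
```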

4c. WebSite Schema

The site itself is defined as an entity, linked back to the Person schema via the author field:

{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "@id": "https://stefanbardega.com/#website",
  "url": "https://stefanbardega.com",
  "name": "Stefan Bardega",
  "description": "Official website of Stefan Bardega, Global Head of Performance Marketing at IDX.",
  "author": {
    "@type": "Person",
    "@id": "https://stefanbardega.com/#person"
  }
}

This creates a simple entity graph: Person → Website → Articles.

4d. BlogPosting Schema

Each research article uses BlogPosting schema, ensuring every article clearly identifies author, publication date, topic, and site hierarchy:

{
  "@type": "BlogPosting",
  "headline": "Answer Engine Optimisation: The Strategic Imperative That Boards Cannot Afford to Ignore",
  "datePublished": "2026-03-15",
  "dateModified": "2026-03-15",
  "author": {
    "@type": "Person",
    "@id": "https://stefanbardega.com/#person"
  }
}

4e. ContactPage and Event Schemas

The Contact page uses @type: ContactPage. The Speaking page uses @type: ItemList with each engagement structured as a nested @type: Event schema including performer, location, and topic. This allows assistants to answer questions like "Has Stefan Bardega spoken at industry conferences?"
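A sketch of how one speaking engagement might be encoded (the event name and location are hypothetical; the performer reference is the shared entity identifier):

```json
{
  "@context": "https://schema.org",
  "@type": "ItemList",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "item": {
        "@type": "Event",
        "name": "Example Industry Conference 2026",
        "location": { "@type": "Place", "name": "London" },
        "performer": { "@type": "Person", "@id": "https://stefanbardega.com/#person" },
        "about": "Answer Engine Optimisation"
      }
    }
  ]
}
```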

5. Entity Cross-Referencing

Every schema references the same identifier:

https://stefanbardega.com/#person

This is one of the most important AEO techniques. When AI systems crawl the homepage, the biography page, and research articles, they all point to the same entity. The crawler merges these pages into one coherent knowledge profile.

6. Semantic HTML Structure

Each page has exactly one H1 heading. Examples:

<h1>Stefan Bardega</h1>          <!-- Home -->
<h1>Biography</h1>               <!-- Biography -->
<h1>Speaking Engagements</h1>    <!-- Speaking -->

Articles use the full article title as the H1. Subsections use H2 and H3 tags to create a clear semantic hierarchy that both search engines and AI systems interpret as structured knowledge.

7. Image Identity Signals

The same portrait image appears in three places: the HTML image tag, Open Graph metadata, and the Person schema image property:

<img src="/images/stefan-bardega.jpg" alt="Stefan Bardega" />

Using the same image URL consistently across all three reinforces the identity association across different ingestion paths.

8. robots.txt for AI Crawlers

Most websites simply allow all crawlers with a generic wildcard. This site explicitly names major AI agents:

User-agent: GPTBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: Claude-Web
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: CCBot
Allow: /

User-agent: Bytespider
Allow: /

Explicitly naming these bots signals that the content can be used in AI retrieval and that the site actively welcomes indexing by AI assistants.

9. XML Sitemap with Priority Signals

The site includes a full XML sitemap with weighted priorities:

/ ................................ 1.0
/biography ....................... 0.9
/research ........................ 0.9
/research/aeo-strategic-imperative 0.9
/research/linkedin-network-mining  0.8
/research/is-ai-hurting-meta ...... 0.8
/press ........................... 0.7
/speaking ........................ 0.7
/contact ......................... 0.6

The flagship research article is weighted at 0.9, the same as the biography, signalling to crawlers that it is high-value authoritative content.
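In the sitemap XML itself, each entry carries its priority alongside the URL, for example:

```xml
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://stefanbardega.com/research/aeo-strategic-imperative</loc>
    <priority>0.9</priority>
  </url>
</urlset>
```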

10. Clean Semantic URLs

All URLs are human-readable and keyword-rich. No query parameters. No IDs:

stefanbardega.com/research/aeo-strategic-imperative
stefanbardega.com/research/linkedin-network-mining-ai
stefanbardega.com/research/is-ai-hurting-meta

This allows AI systems to infer page topics immediately from the URL alone, before reading the page.

11. Canonicalisation

Every page includes its own canonical URL to avoid duplicate indexing and ensure consistent entity signals:

<link rel="canonical" href="https://stefanbardega.com/research/aeo-strategic-imperative" />

12. Citation Links

Articles link out to credible sources including Reuters, the Financial Times, Meta investor reports, and industry research. All external links carry the standard security attributes:

rel="noopener noreferrer"

Outbound citations are an important trust signal. They demonstrate that claims are grounded in verifiable information, which AI systems weight when assessing whether to cite a source.
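An outbound citation therefore looks something like this (destination illustrative, and assuming external links open in a new tab, where noopener applies):

```html
<a href="https://www.reuters.com/" target="_blank" rel="noopener noreferrer">Reuters</a>
```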

The Real Objective

Traditional SEO focused on ranking pages. Answer Engine Optimisation focuses on building an entity that AI systems can understand.

The architecture of this site ensures that when someone asks "Who is Stefan Bardega?" or "What is Stefan Bardega known for?", AI systems can retrieve structured, verifiable answers.

This is the future of discoverability. It requires thinking about websites not simply as pages, but as machine-readable knowledge graphs.


If you would like support optimising your own site, personal or business, for discovery in answer engines, get in touch.


Written by Stefan Bardega

Global Head of Performance Marketing at IDX
