Research

5,913 websites. Grade F. The Central Europe AI Readiness Study 2026.

We analyzed the structured data of Europe's largest websites — so you don't have to. This page gives you an early look at what we found: who's doing it right, who's leaving visibility on the table, and what the data says about the state of schema markup across the continent.
// Study parameters

The dataset.

Scope

5,913 domains · 9,275 URLs analyzed · Snapshot: 6 March 2026

Method

3-C Framework — Content (40%) · Code (35%) · Credibility (25%)

Tool

Puppeteer Headless Browser — Schema extraction, content analysis, technical audit, Core Web Vitals

Validation

SUTSCHE/Boye Study — Independent validation on 1,000+ pages from DAX, SDAX, FTSE 250, SMIM/SPI, AMX

// The paradox

The infrastructure is there. The signal isn't.

Fast servers. Zero content signals. No credibility markers. Central European websites score excellent on Core Web Vitals — and near-zero on every dimension AI systems use to evaluate, understand, and cite content.
// Core Web Vitals

Technically excellent.

648ms
LCP
Largest Contentful Paint — good
500ms
FCP
First Contentful Paint — good
0
CLS
Cumulative Layout Shift — perfect
97ms
TTFB
Time to First Byte — excellent
// Key findings

The numbers don't lie.

27
Median 3-C Score
Out of 100. Grade F. Infrastructure without signal.
0%
Credibility signals
Author attribution. Publication dates. Cited sources. On zero URLs.
3.5%
Schema-rich domains
Only 3.5% have schema rich enough to be useful to AI systems.
49%
AI bots blocked
URLs that partially or fully block AI crawlers.
// 3-C Framework

Three dimensions. One score.

Score = (Content × 0.40) + (Code × 0.35) + (Credibility × 0.25)
25

Content — 25/100

What is being said? Statistics, quotes, source references, comparison tables — absent across the board.

45

Code — 45/100

How is it structured? Schema.org, Technical SEO. Best dimension — still below passing.

5

Credibility — 5/100

Who says it? Authors, dates, external references. Near-zero on every metric.
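The weighting above can be written out as a small function (a sketch; the function name is ours, the weights are from the study). Plugging in the median dimension scores reproduces the headline result:

```python
def three_c_score(content: float, code: float, credibility: float) -> float:
    """Composite 3-C score on a 0-100 scale, using the study's weighting."""
    return content * 0.40 + code * 0.35 + credibility * 0.25

# Median dimension scores from the study: Content 25, Code 45, Credibility 5.
print(round(three_c_score(content=25, code=45, credibility=5), 1))  # 27.0
```

The 27/100 median is thus not an extra measurement: it follows directly from the three dimension medians and the fixed weights.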

// Deep dive: Schema saturation

66.9% have no schema. Among those that do, most of it is empty.

// Schema quality

Six tiers of schema quality.

None

3,954 domains (66.9%) — No schema markup at all

Broken

1,143 domains (19.3%) — Types declared, properties empty (0% completeness)

Skeleton

71 domains (1.2%) — Less than 40% of properties filled

Thin

102 domains (1.7%) — 40–60% property completeness

Medium

434 domains (7.3%) — 60–80% property completeness

Rich

209 domains (3.5%) — 80%+ completeness, 3+ types — actually useful for AI

Why 19.3% are "broken": 1,143 domains declare Schema.org types in their HTML but expose no parseable properties. Main causes: 405 domains with CMS-generated Article schema whose properties are empty, and 169 domains with Organization schema carrying no usable properties. These sites think they have schema. They don't.
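The six tiers reduce to a classification over two per-domain quantities: property completeness and the number of schema types. A sketch of the tier boundaries as described above (the function name is ours, and the study does not specify every edge case, e.g. 80%+ completeness with fewer than 3 types):

```python
def schema_tier(has_schema: bool, completeness: float, type_count: int) -> str:
    """Map a domain's schema stats to one of the study's six tiers.

    completeness: share of declared properties that are actually filled (0-1).
    """
    if not has_schema:
        return "None"
    if completeness == 0:
        return "Broken"        # types declared, every property empty
    if completeness >= 0.80 and type_count >= 3:
        return "Rich"          # the only tier the study calls useful for AI
    if completeness >= 0.60:
        return "Medium"        # 80%+ with <3 types also lands here (assumption)
    if completeness >= 0.40:
        return "Thin"
    return "Skeleton"

print(schema_tier(True, 0.0, 2))   # Broken
print(schema_tier(True, 0.9, 4))   # Rich
```

The "Broken" branch is the one most sites fall into unknowingly: a CMS that emits an Article type with no filled properties passes a casual source-view check but scores zero completeness.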

// Critical gaps

The five hardest-hitting property gaps.

99%

mainEntity on WebPage — missing. The semantic link between page and content. Without it, AI can't connect your page to what it describes.

78%

description on Organization — missing. Who is this company? AI systems can't answer.

63%

publisher on WebSite — missing. Who owns this site? Unknown to machines.

51%

sameAs on Organization — missing. No external identity anchoring (Wikipedia, LinkedIn, social profiles).

97%

jobTitle on Person — missing. Author authority completely invisible.

// Type combinations

How types are combined.

Ideal

89 domains (1.5%) — Organization + WebSite + WebPage + Breadcrumb

Good

266 domains (4.5%) — At least Organization + WebSite

Partial

487 domains (8.2%) — Only Article (without organizational context)

Minimal

166 domains (2.8%) — Only Organization alone
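What the "Ideal" combination looks like in practice: a single JSON-LD graph linking Organization, WebSite, WebPage, and a breadcrumb list, with the frequently missing properties (description, sameAs, publisher, mainEntity) filled in. A minimal sketch built as a Python dict; all names and URLs are placeholders, not taken from any analyzed domain:

```python
import json

graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Organization",
            "@id": "https://example.com/#org",
            "name": "Example GmbH",
            "description": "What the company does, in one machine-readable sentence.",
            "sameAs": [  # external identity anchors, missing on 51% of domains
                "https://www.linkedin.com/company/example",
            ],
        },
        {
            "@type": "WebSite",
            "@id": "https://example.com/#site",
            "publisher": {"@id": "https://example.com/#org"},  # missing on 63%
        },
        {
            "@type": "WebPage",
            "@id": "https://example.com/page/",
            "isPartOf": {"@id": "https://example.com/#site"},
            "mainEntity": {"@id": "https://example.com/#org"},  # missing on 99%
        },
        {
            "@type": "BreadcrumbList",
            "itemListElement": [
                {"@type": "ListItem", "position": 1, "name": "Home",
                 "item": "https://example.com/"},
            ],
        },
    ],
}

print(json.dumps(graph, indent=2))  # emit as a JSON-LD <script> payload
```

The `@id` cross-references are what make the graph a graph: they let a machine resolve "this page is part of this site, which is published by this organization" instead of seeing four unrelated fragments.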

// Deep dive: Credibility

Trust signals. All zero.

Author

0% — Author attribution on analyzed URLs

Date

0% — Publication date present

Current

0% — Current year marker

Sources

0% — Cited sources or references

Org Schema

10% — Organization schema present (but mostly empty)

// Deep dive: Content

Content signals. Nearly absent.

Statistics

0% — Pages with statistical data

Quotes

0% — Pages with cited quotes

Sources

0% — Pages with source references

Tables

0% — Pages with comparison tables

Pro/Con

0% — Pages with structured arguments

Word count

13% — Pages exceeding 1,000 words. Length without substance.

Technical SEO signals

Title (30–69 chars): 47%

Meta Description (120–160): 28%

Viewport Meta: 94%

Canonical Tag: 79%

Single H1: 53%

Additional signals

Open Graph: 17%

Hreflang: 19%

Heading Hierarchy: 30%

Images with Alt Text: 17%

Core Web Vitals: Excellent across the board

// AI bot access

Who's allowed in. Who's not.

GPTBot, ClaudeBot, Google-Extended, Bingbot, and PerplexityBot, analyzed across 5,913 domains. The patterns rarely look like deliberate strategy. In most cases, nobody checked.
51%

Fully allowed

Open access for all analyzed AI bots.

43%

Partially blocked

Some AI bots blocked, others allowed. Inconsistent configuration.

6%

Fully blocked

All analyzed AI bots blocked via robots.txt.
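The three buckets can be reproduced per domain with nothing but the standard library's robots.txt parser. A sketch (function name is ours; the bot list is the one analyzed in the study):

```python
from urllib.robotparser import RobotFileParser

AI_BOTS = ["GPTBot", "ClaudeBot", "Google-Extended", "Bingbot", "PerplexityBot"]

def classify_ai_access(robots_txt: str, url: str = "https://example.com/") -> str:
    """Bucket a robots.txt into the study's three access categories."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    allowed = [bot for bot in AI_BOTS if rp.can_fetch(bot, url)]
    if len(allowed) == len(AI_BOTS):
        return "fully allowed"
    if allowed:
        return "partially blocked"
    return "fully blocked"

# A robots.txt that singles out one bot lands in the inconsistent middle bucket:
print(classify_ai_access("User-agent: GPTBot\nDisallow: /"))  # partially blocked
```

Note that an empty robots.txt counts as "fully allowed": absence of a rule is permission, which is exactly why so much of the 51% open access reflects a default rather than a decision.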

// Independent validation

An independent study reached the same conclusions.

SUTSCHE GmbH — an independent DXP consultancy with no vendor contracts — analyzed 1,000+ pages from DAX, SDAX, FTSE 250, SMIM/SPI, and AMX companies using the same 3-C Framework. Their findings mirror ours.

Countries: UK (283), Germany (276), Switzerland (159), Netherlands (153), Denmark (125), Belgium (101), Austria (98), Luxembourg (58).
// SUTSCHE/Boye findings

Key results.

No Schema

47% — No schema markup at all

FAQPage

0.9% — FAQPage adoption

Article

3% — Article schema present

Author

1% — Author schema present

AI blocked

27% — AI bots actively blocked

AI open

38% — Fully open for AI bots

No date

80% — No publication date

No H1

18% — Missing H1 heading

// Methodology

How we measured.

enhancely Study: 5,913 domains, 9,275 URLs, snapshot 6 March 2026. Puppeteer Headless Browser. Schema extraction + property analysis (5,159 raw files). Content analysis: GEO signals (7 categories). Technical SEO: 10 signal types. Core Web Vitals: synthetic measurement, 2,263 URLs. Scoring: 3-C Framework.

Scientific foundation: Assessment framework based on synthesis of 16 peer-reviewed papers and industry reports (Aggarwal et al. KDD '24 · Dang et al. Semantic Web 2025 · Chen et al. 2025 and others).

Limitation: Synthetic measurements. Breadth study (1 URL/domain). No deep crawl. Core Web Vitals: Headless Chrome, no real user data.
// FAQ

Questions about the study.

What is the 3-C Framework and how is the score calculated?

The 3-C Framework evaluates AI readiness across three dimensions: Content (40% weight) — measures GEO signals like statistics, cited sources, comparison tables, and structured arguments. Code (35% weight) — measures Schema.org coverage, technical SEO, and structured data quality. Credibility (25% weight) — measures author attribution, publication dates, external references, and organizational identity signals. The composite score is calculated as (Content × 0.40) + (Code × 0.35) + (Credibility × 0.25), resulting in a 0–100 scale. The median score across 5,913 Central European domains was 27/100 — Grade F.

Why do so many websites have missing or broken schema markup?

Most CMS platforms do not generate Schema.org markup by default. WordPress, TYPO3, and other enterprise systems produce HTML that is visually correct but structurally empty for machines. SEO plugins like Yoast cover only 20–30 generic types and require manual configuration per page. The result: two-thirds of European websites have zero structured data, and another 19.3% have broken schema — types declared but properties left empty by CMS auto-generation. Only 3.5% of analyzed domains have schema rich enough (80%+ property completeness, 3+ types) to be useful for AI systems.

Who independently validated the findings?

SUTSCHE GmbH — an independent DXP consultancy with no vendor relationship to enhancely — analyzed 1,000+ pages from major stock index companies (DAX, SDAX, FTSE 250, SMIM/SPI, AMX) across 8 countries using the same 3-C Framework. Their findings mirror the enhancely study: 47% have no schema at all, 0.9% FAQPage adoption, 1% Author schema, 80% missing publication dates, and 27% actively blocking AI bots. Two independent datasets, one conclusion.

What does this mean for my website?

If your website is hosted in Central Europe, there is a 96.5% probability that your Schema.org markup is insufficient for AI systems — either missing entirely (66.9%), broken (19.3%), or too thin to be useful (10.2%). The good news: because so few competitors have adequate structured data, implementing comprehensive Schema.org markup now creates a significant competitive advantage. enhancely can analyze your domain in under 60 seconds and show exactly where you stand on the 3-C scale.

What counts as a credibility signal?

Credibility signals in the 3-C Framework refer to machine-readable markers — not human-visible content. A website may display an author name on a blog post, but without Person schema with jobTitle, sameAs (linking to LinkedIn/Wikipedia), and affiliation properties, AI systems cannot verify or weight that authority. The study found 0% author attribution, 0% publication dates, and 0% cited sources in structured data across all analyzed URLs. The information exists on pages — it is just invisible to machines because it lacks Schema.org markup.
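A visible byline becomes machine-readable only when it is also expressed as Person schema. A minimal sketch as a Python dict, with placeholder values (the name, title, and URLs are illustrative, not from any analyzed site):

```python
import json

# "By Jane Doe, Head of Research" on the page is invisible to machines
# unless the same facts exist as structured data:
author = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Head of Research",  # missing on 97% of Person schemas
    "sameAs": ["https://www.linkedin.com/in/janedoe"],  # identity anchor
    "affiliation": {"@type": "Organization", "name": "Example GmbH"},
}

print(json.dumps(author, indent=2))
```

This is the gap the 0% credibility figures describe: the byline, date, and sources usually exist as rendered text, but nothing in the markup lets an AI system verify or weight them.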

Your content is already perfect for humans.