Core Web Vitals in 2026: A Practical Guide to LCP, CLS, and INP

I have been optimizing websites for over a decade, and I can tell you that no single ranking factor has changed how I work as much as Core Web Vitals. When Google first rolled these metrics out, most SEOs treated them as a minor technical checkbox. That was a mistake. In 2026, with the shift from FID to INP fully settled and Google continuing to refine how user experience affects rankings, understanding these three metrics at a deep level is not optional. It is the baseline.

In this guide, I am going to break down exactly what LCP, CLS, and INP measure, how to diagnose problems with each one, and the specific fixes that have worked across dozens of client projects here in Barcelona and beyond. No fluff, no theory without application. Just what works.

What Each Core Web Vital Actually Measures

Before you can fix anything, you need to understand what Google is actually tracking. Each metric targets a different dimension of user experience, and confusing them leads to wasted effort.

LCP: Largest Contentful Paint

LCP measures how long it takes for the largest visible element in the viewport to fully render. This is usually a hero image, a large text block, or a video poster frame. Google considers anything under 2.5 seconds as good, between 2.5 and 4 seconds as needing improvement, and anything over 4 seconds as poor.

What most people miss is that LCP is not a single event. It is made up of four sub-parts, and understanding them is the key to effective optimization:

  1. Time to First Byte (TTFB) – The time from the user’s request to the first byte of the HTML response arriving. This is server-side performance.
  2. Resource Load Delay – The gap between TTFB and when the browser starts loading the LCP resource (image, font, etc.). This happens when the resource is not discoverable early in the HTML.
  3. Resource Load Duration – How long the actual file takes to download. File size and CDN performance matter here.
  4. Element Render Delay – The time between the resource finishing download and the element actually painting on screen. Render-blocking CSS and JavaScript cause this.

When I diagnose LCP issues, I always figure out which sub-part is the bottleneck first. Optimizing image size when TTFB is 3 seconds is a waste of time.
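In field data, the attribution build of the web-vitals library reports these sub-parts per page view. As a minimal sketch of how they partition an LCP value, here is a hypothetical helper; the input timestamps and field names are assumptions modeled on that build, not its actual API:

```javascript
// Hypothetical helper: split a raw LCP time into its four sub-parts.
// Inputs are milliseconds since navigation start.
function lcpSubParts({ttfb, resourceStart, resourceEnd, paintTime}) {
  return {
    timeToFirstByte: ttfb,                             // server response
    resourceLoadDelay: resourceStart - ttfb,           // late discovery
    resourceLoadDuration: resourceEnd - resourceStart, // file download
    elementRenderDelay: paintTime - resourceEnd,       // render blocking
  };
}
```

The four values always sum to the LCP time itself, which is why finding the dominant sub-part tells you where the time is actually going.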

CLS: Cumulative Layout Shift

CLS measures visual stability. Every time an element on the page moves unexpectedly after it has rendered, that counts as a layout shift. Google scores this using session windows: it groups shifts that occur within 1 second of each other (with a maximum window of 5 seconds), and your CLS score is the largest single session window total. Good is under 0.1, poor is above 0.25.

The session window approach matters because it means a page that loads and has one burst of shifts early on will score differently than a page that has small shifts happening throughout the entire user session. Google is looking at the worst burst, not the cumulative total of every shift during the entire page lifecycle. This was a change from the original CLS calculation and it made the metric much fairer for long-lived pages like single-page applications.
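The windowing rule can be sketched as a small function. This is an illustration of the logic as described above, not Chrome's actual implementation:

```javascript
// Score CLS using session windows: shifts less than 1s apart join a
// window, a window caps at 5s, and the page's CLS is the largest
// window total. Input: [{time, value}] sorted by time (ms).
function sessionWindowCLS(shifts) {
  let maxWindow = 0;
  let windowStart = -Infinity;
  let prevTime = -Infinity;
  let windowSum = 0;
  for (const {time, value} of shifts) {
    if (time - prevTime >= 1000 || time - windowStart >= 5000) {
      windowStart = time; // gap too large or window too old: start fresh
      windowSum = 0;
    }
    windowSum += value;
    prevTime = time;
    maxWindow = Math.max(maxWindow, windowSum);
  }
  return maxWindow;
}
```

Two shifts of 0.1 in the first second score 0.2, while the same two shifts minutes apart score only 0.1, which is exactly the fairness improvement for long-lived pages.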

INP: Interaction to Next Paint

INP replaced FID in March 2024, and this was a significant upgrade. FID only measured the delay before the browser could start processing the first interaction. INP measures the full latency of interactions throughout the entire page lifecycle, from input to the next visual update. It reports the worst interaction (with some outlier smoothing on pages with many interactions).

The practical difference is enormous. A page could have great FID because the main thread was free when the user first clicked, but terrible INP because subsequent interactions were blocked by heavy JavaScript. I had a client whose e-commerce filters had a 600ms INP because every filter click triggered a full re-render. FID would never have caught that.

Good INP is under 200ms. Poor is above 500ms.
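The outlier smoothing can be approximated as "skip one worst interaction per 50 interactions." The exact sampling Chrome uses may differ, so treat this as a sketch of the idea rather than the real algorithm:

```javascript
// Rough INP estimate: report the worst interaction duration, but skip
// one of the highest values for every 50 interactions on the page.
// Input: interaction durations in ms.
function estimateINP(durations) {
  if (durations.length === 0) return 0;
  const sorted = [...durations].sort((a, b) => b - a);
  const skip = Math.min(Math.floor(durations.length / 50), sorted.length - 1);
  return sorted[skip];
}
```

On a page with only a handful of interactions, INP is simply the worst one, which is why a single slow filter click can tank the metric.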

How to Diagnose Core Web Vitals Issues

You need both lab data and field data. Lab data tells you what is happening technically. Field data tells you what real users experience. They often disagree, and when they do, field data is what Google uses for rankings.

Chrome DevTools Performance Panel

Open DevTools, go to the Performance tab, enable Web Vitals, and record a page load. You will see LCP, CLS, and INP events marked on the timeline. For LCP, click the marker to see exactly which element was the LCP element and when each sub-part occurred. For CLS, you can see each individual layout shift and what elements moved. For INP, interact with the page during recording and you will see each interaction’s processing time.

I always throttle to mid-tier mobile (4x CPU slowdown, Fast 3G) because that is closer to what most real users experience. Testing on your developer machine with a fiber connection gives you misleading results.

PageSpeed Insights and CrUX Data

PageSpeed Insights shows both Lighthouse lab data and Chrome User Experience Report (CrUX) field data when available. The field data section at the top is what matters for rankings. If your site does not have enough traffic for CrUX data, you rely on lab data and the Web Vitals JavaScript library for your own real-user monitoring.

I check CrUX data at both origin level (your whole domain) and URL level. Sometimes the origin passes but specific high-traffic pages fail, or vice versa. Google evaluates at the URL group level when possible.

The Web Vitals Extension and Library

Install the Web Vitals Chrome extension for quick checks while browsing. For production monitoring, add the web-vitals JavaScript library and send data to your analytics. This gives you field data even when CrUX does not cover your pages.

import {onLCP, onCLS, onINP} from 'web-vitals';

// Beacon each metric to your own endpoint; '/analytics' is a placeholder.
function sendToAnalytics(name, metric) {
  const body = JSON.stringify({name, value: metric.value, id: metric.id});
  navigator.sendBeacon('/analytics', body);
}

onLCP(metric => sendToAnalytics('LCP', metric));
onCLS(metric => sendToAnalytics('CLS', metric));
onINP(metric => sendToAnalytics('INP', metric));

Specific Fixes That Work

Core Web Vitals: Fixing LCP

For TTFB issues: Move to a faster host or add a CDN. I have seen TTFB drop from 1.8s to 0.3s just by switching from shared hosting to a proper VPS with server-side caching. If you are on WordPress, install a page caching plugin and make sure your database queries are not bloated by poorly coded plugins.
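If your HTML can be cached at the edge, a response header along these lines lets the CDN absorb most requests; the directive values here are illustrative starting points, not recommendations for every site:

```
Cache-Control: public, s-maxage=300, stale-while-revalidate=60
```

`s-maxage` controls how long shared caches like CDNs keep the response, and `stale-while-revalidate` lets the CDN serve a stale copy instantly while refetching in the background, which keeps TTFB low even when the cache expires.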

For Resource Load Delay: Make sure the LCP image is discoverable in the initial HTML, not loaded via JavaScript or CSS background-image. Use a preload hint for critical images:

<link rel="preload" as="image" href="/hero-image.webp" fetchpriority="high">

For Resource Load Duration: Compress images properly. Use WebP or AVIF format. Resize to actual display dimensions instead of serving a 4000px image in a 800px container. I use srcset with appropriate sizes to serve the right image for each viewport.
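A responsive hero image following that advice might look like this; the file names, widths, and breakpoints are placeholders:

```html
<img
  src="/hero-800.webp"
  srcset="/hero-400.webp 400w, /hero-800.webp 800w, /hero-1600.webp 1600w"
  sizes="(max-width: 600px) 100vw, 800px"
  width="800" height="450"
  fetchpriority="high"
  alt="Hero image description">
```

The browser picks the smallest candidate that satisfies the `sizes` hint for the current viewport, so a phone never downloads the 1600px file.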

For Element Render Delay: Eliminate render-blocking resources. Inline critical CSS, defer non-critical CSS, and add async or defer to JavaScript files that are not needed for above-the-fold rendering.

<!-- Inline critical CSS -->
<style>
  .hero { display: block; width: 100%; aspect-ratio: 16/9; }
  .hero img { width: 100%; height: auto; }
</style>

<!-- Defer non-critical CSS -->
<link rel="preload" href="/styles.css" as="style" onload="this.onload=null;this.rel='stylesheet'">

Core Web Vitals: Fixing CLS

The number one CLS fix is setting explicit dimensions on images and embeds. Always include width and height attributes, or use CSS aspect-ratio. This reserves space before the resource loads.

<img src="photo.webp" width="800" height="450" alt="Description"
     style="aspect-ratio: 16/9; width: 100%; height: auto;">

Other common CLS culprits I fix regularly:

  • Web fonts causing FOUT/FOIT: Use font-display: swap with size-adjust to minimize layout shifts when custom fonts load. Better yet, preload your key fonts.
  • Dynamically injected content: Ad slots, cookie banners, and notification bars that push content down. Reserve space for ads with min-height on containers. Place cookie banners as overlays, not inline elements.
  • Late-loading above-the-fold content: Client-rendered content that pops in after JavaScript executes. Server-render critical content whenever possible.
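For the web-font bullet above, here is a sketch of preloading plus a metric-matched fallback. The font name, file path, and the size-adjust percentage are all placeholders you would tune for your own font:

```html
<link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>
<style>
  @font-face {
    font-family: "Brand";
    src: url("/fonts/brand.woff2") format("woff2");
    font-display: swap; /* show fallback text immediately */
  }
  @font-face {
    font-family: "Brand Fallback";
    src: local("Arial");
    size-adjust: 105%; /* match fallback metrics to shrink the swap shift */
  }
  body { font-family: "Brand", "Brand Fallback", sans-serif; }
</style>
```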

Fixing INP

INP problems almost always come from heavy JavaScript. The fixes:

  • Break up long tasks: Any JavaScript task over 50ms blocks the main thread. Use requestIdleCallback or scheduler.yield() to break large tasks into smaller chunks.
  • Reduce third-party scripts: Analytics, chat widgets, ad scripts, and social embeds all compete for main thread time. Audit them ruthlessly. I removed a chat widget from a client site and INP dropped from 450ms to 180ms.
  • Debounce event handlers: Scroll, resize, and input handlers that fire on every event create interaction delays. Debounce or throttle them.
  • Use content-visibility: auto for off-screen content to reduce rendering work during interactions.

// Break up a long task, yielding back to the main thread every 50 items
async function processLargeList(items) {
  // Fall back to setTimeout in browsers without scheduler.yield()
  const yieldToMain = globalThis.scheduler?.yield
    ? () => scheduler.yield()
    : () => new Promise(resolve => setTimeout(resolve, 0));
  for (let i = 0; i < items.length; i++) {
    processItem(items[i]);
    if (i > 0 && i % 50 === 0) {
      await yieldToMain();
    }
  }
}
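And for the content-visibility bullet, a minimal CSS sketch (the class name is illustrative):

```css
/* Skip layout and paint work for sections far below the fold.
   contain-intrinsic-size reserves an estimated height so scrollbars
   and scroll position stay stable before the section renders. */
.below-fold-section {
  content-visibility: auto;
  contain-intrinsic-size: auto 600px;
}
```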

Real Client Results

Here are actual before-and-after numbers from projects I completed in the past 18 months. I am sharing these because vague claims about “improved performance” are useless. You need to see what realistic improvements look like.

| Client / Industry | Metric | Before | After | Key Fix |
|---|---|---|---|---|
| E-commerce (Fashion) | LCP | 5.2s | 1.8s | Image optimization + preload + CDN |
| E-commerce (Fashion) | INP | 620ms | 170ms | Refactored filter JS, removed 3 scripts |
| Travel Blog | CLS | 0.38 | 0.04 | Image dimensions + ad slot reservations |
| Travel Blog | LCP | 4.1s | 2.0s | Server caching + critical CSS inline |
| SaaS Landing Pages | LCP | 3.8s | 1.4s | Moved from client-render to SSR for hero |
| Local Services Site | CLS | 0.31 | 0.02 | Font preload + cookie banner overlay |
| News Publisher | INP | 510ms | 190ms | Lazy-loaded ad scripts + task chunking |
| Restaurant Chain | LCP | 6.1s | 2.2s | Host migration + image format conversion |

The ranking impact varied. The e-commerce fashion client saw a 15% increase in organic traffic within three months of passing all Core Web Vitals, but that was also combined with content improvements. The travel blog saw a modest 6% lift in rankings for competitive queries. Core Web Vitals are a tiebreaker, not a magic bullet, but when you are competing in tight SERPs, that tiebreaker matters.

Common Core Web Vitals Mistakes to Avoid

Mistake 1: Optimizing only for lab data. Your Lighthouse score can be 100 while your field data fails. Lab tests run on a single configuration. Real users have slow phones, congested networks, and browser extensions that inject JavaScript. Always prioritize field data from CrUX or your own RUM solution.

Mistake 2: Lazy-loading the LCP image. This is incredibly common. People add loading="lazy" to every image on the page, including the hero image that is the LCP element. Lazy-loading the LCP image delays it because the browser waits until it enters the viewport to start fetching. The LCP image should have loading="eager" (the default) and fetchpriority="high".

Mistake 3: Ignoring INP because the site “feels fast.” Your site feels fast on your MacBook Pro. On a mid-range Android phone with 20 browser tabs open, those 400ms interaction delays are very real. Test on real devices or at minimum use CPU throttling in DevTools.

Mistake 4: Treating Core Web Vitals as a one-time project. New plugins, ad scripts, design changes, and content updates can regress your scores at any time. Set up monitoring and alerts. I check CrUX data monthly for every client and catch regressions before they impact rankings.

Mistake 5: Chasing a perfect score instead of passing thresholds. The ranking benefit comes from passing the “good” thresholds. Going from 2.0s LCP to 1.2s LCP does not give you more ranking benefit than being at 2.4s. Focus on getting all three metrics into the green zone, then move on to higher-impact SEO work.

A Practical Optimization Workflow

Here is the exact process I follow for every client:

  1. Check CrUX field data in PageSpeed Insights for the top 10 landing pages by traffic. Identify which metrics fail and on which pages.
  2. Run Lighthouse on failing pages to get specific diagnostics. Note the LCP element, layout shift sources, and long tasks.
  3. Use DevTools Performance panel with throttling to reproduce issues and identify root causes using the sub-part breakdown for LCP.
  4. Prioritize fixes by impact and effort. Image optimization and caching give the biggest LCP wins for the least work. JavaScript refactoring for INP takes longer but is often necessary.
  5. Implement, test in staging, deploy, then wait 28 days for CrUX data to update. CrUX uses a rolling 28-day window, so you will not see field data improvements immediately.
  6. Set up ongoing monitoring using the web-vitals library sending data to your analytics platform.

Core Web Vitals: The Bottom Line

Core Web Vitals in 2026 are not about chasing perfect scores. They are about delivering a genuinely good user experience and making sure Google can measure it. Understand what each metric captures, diagnose using the right tools at the right layer, fix the actual bottleneck instead of guessing, and monitor continuously. That is the approach that has consistently delivered results for my clients, and it will work for you too.

If you only do three things after reading this article: preload your LCP image with fetchpriority="high", add explicit dimensions to every image and embed, and audit your third-party JavaScript for main thread impact. Those three actions alone will move the needle for most sites.

Further Reading

For more information, see these authoritative resources: Google’s Web Vitals documentation, PageSpeed Insights.

Javier Morales

SEO Consultant & Writer

SEO consultant based in Barcelona with over 10 years of experience helping businesses grow their organic traffic through actionable strategies.
