Halid
Knowledge

- Blog Posts

Halid Osmaev has been with the internetwarriors in web analytics since August 2021. With extensive knowledge in IT and mathematics, he can assist you with everything from simple to the most complex web analytics topics. Whether it's setting up a basic tracking infrastructure or conducting an in-depth analysis of company processes, he's ready to help.



Server-Side Tracking - An Overview

Jul 16, 2025

Halid Osmaev

Halid

Osmaev

Category:

Web Analytics


Server Side Tracking is the new standard. A significant advantage is the control it provides over the data flow, especially user data. In this article, we discuss Server Side Tracking using Google Tag Manager as an example, review its benefits, and look at which user data is sent.

But first, the important question: What is Server Side Tracking? In short: Server Side Tracking is a data collection method where tracking information is processed not in the browser but directly on the website operator's server, and only afterward forwarded to analysis or marketing tools.

The traditional method is Client Side Tracking (CST), where a code snippet is embedded in the page, for example via Google Tag Manager. This sends event data directly to third-party services like Google Analytics 4 or Meta Ads. However, control over the user data sent (IP address, demographic data, etc.) is limited to the adjustments each tool offers. Additionally, a third-party cookie is usually set, resulting in a loss of data volume and quality.

Figure 1: Comparison of client-side and server-side tagging

With Server Side Tracking (SST), all data is first sent to a private server where, for example, the server-side Google Tag Manager is running. This ensures that no undesired data transfer to third parties occurs on the website itself. The transfer happens only in the server-side Google Tag Manager, where it can be adapted to a privacy-compliant standard thanks to clear insight into the data and further configuration options such as transformers.

Server Side Tracking vs. Client Side Tracking

Traditional Client Side Tracking (CST) is still widespread but increasingly reaching its limits. In CST, tracking scripts are executed directly in the user's browser, sending data like page views, clicks, or conversions to third-party tools.
However, this approach is very susceptible to modern tracking protection measures such as ad blockers, VPNs, Intelligent Tracking Prevention (ITP) in iOS/Safari, and various data protection regulations.

Server Side Tracking (SST) takes a different approach: tracking data is no longer sent directly from the browser to external tools but first to your own server. This acts as a proxy, a central data hub through which all tracking requests run. The server request is treated like an API request and is thus less vulnerable to blocking. Additionally, all data processing takes place within your own infrastructure, significantly reducing risk when dealing with data protection authorities.

Another difference lies in the use of cookies: while Client Side Tracking relies on third-party cookies, which browsers increasingly block, Server Side Tracking prefers first-party cookies, which are considered more trustworthy and stable.

Why is Server Side Tracking now standard?

While Client Side Tracking is losing effectiveness due to growing restrictions, Server Side Tracking offers a future-proof, high-performance, and privacy-friendly alternative, with significantly higher data quality and control for companies.

Overview of the benefits of Server Side Tracking:

- More data control: Unlike with the specifications of external tracking tags, companies retain full control over the collected data with SST.
- Higher data quality: SST can often bypass ad blockers and tracking protection measures, typically yielding at least 12% more data.
- Performance advantages: Instead of the browser addressing many individual tracking tools directly, only one server is contacted, conserving resources and improving page load time.
- Data protection compliance: By processing data exclusively within their own server structure, companies can better meet legal requirements.
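The proxy principle described above, your server deciding what leaves your infrastructure, can be made concrete with a small sketch. This is an illustration only, not Google Tag Manager code: the field names and the allowlist policy are our own assumptions for demonstration.

```typescript
// Illustrative sketch of a server-side tracking proxy: only allowlisted
// fields are forwarded to a third-party vendor; everything else stays
// on our own server. Field names are hypothetical.

type TrackingEvent = Record<string, string>;

// Only fields on this allowlist ever leave our infrastructure.
const FORWARDED_FIELDS = ["event_name", "page_url", "timestamp"];

function buildVendorPayload(incoming: TrackingEvent): TrackingEvent {
  const outgoing: TrackingEvent = {};
  for (const field of FORWARDED_FIELDS) {
    if (field in incoming) outgoing[field] = incoming[field];
  }
  return outgoing; // ip_address, user_agent, etc. never reach the vendor
}

const event: TrackingEvent = {
  event_name: "page_view",
  page_url: "https://example.com/pricing",
  timestamp: "2025-07-16T10:00:00Z",
  ip_address: "203.0.113.7", // stays on our server
};

console.log(buildVendorPayload(event));
```

The point of the design is that the filtering decision is made server-side, in code you control, rather than depending on whatever options each vendor's tag happens to expose.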
Server Side Tracking and Data Protection Regulations

Server Side Tracking offers not only technical advantages but also a significantly better basis with regard to data protection law. The main legal regulations in the European area are the GDPR, the TTDSG, and the EU-US Data Privacy Framework. An overview:

General Data Protection Regulation (GDPR)

The General Data Protection Regulation (GDPR) mandates that personal data, i.e. data that can be traced back to a real person, such as a name, email address, or IP address, may only be collected and processed with the explicit consent of the user (e.g., through a cookie banner). It has applied in all EU member states since May 25, 2018, and forms the central legal framework for handling personal data in Europe.

The GDPR requires companies to state transparently which data is collected, for what purpose, and how long it will be stored. Users must also be able to object to processing or revoke consent at any time. For tracking, this means: no data may be collected or shared with third parties without clear and voluntary consent, even if the technology allows it. Violations of the GDPR can result in hefty fines. Server Side Tracking offers the advantage that data collection, storage, and sharing can be centrally controlled and better documented, facilitating GDPR-compliant implementation.

Telecommunications-Telemedia Data Protection Act (TTDSG)

The TTDSG (Telecommunications-Telemedia Data Protection Act) supplements the GDPR specifically for online services and stipulates that no user data, especially via cookies or similar technologies, may be stored or read without prior consent. The law came into force on December 1, 2021, merging central data protection requirements from the GDPR, the German Telemedia Act (TMG), and the Telecommunications Act (TKG).
For online tracking, this means: even setting a cookie that is not strictly technically necessary requires active, informed consent from users, for example through a consent banner. Tracking methods that attempt to create user profiles without consent, including technologies like fingerprinting, are prohibited under the TTDSG. This tightens the requirements for data-driven online marketing and underscores the need to make tracking privacy-compliant and transparent, something that is much easier to control with Server Side Tracking.

EU-US Data Privacy Framework

Particularly relevant for international companies is the EU-US Data Privacy Framework, which facilitates transatlantic data transfer and has been in effect since summer 2023. Previously, sending personal data to US services was problematic because US authorities had extensive legal access to it. The new agreement creates more legal certainty when US services like Google or Meta are used, but only if those services are certified under the framework.

These are just a few of the laws affecting tracking. An understanding of which user data is sent is therefore important.

Conclusion: Why does Server Side Tracking offer more data protection compliance?

Server Side Tracking routes the entire data processing through your own server infrastructure first. This means tracking occurs only after explicit consent and under full control of data processing on your own server. Requirements of data protection law, such as targeted anonymization, pseudonymization, or restricting data sharing with third parties, can thus be implemented more easily. Overall, Server Side Tracking enables more privacy-compliant handling of user data and lets companies maintain oversight and control, which is essential under the current regulatory framework.

What user data is sent with Server Side Tracking?

The good news: only the absolute minimum. What does this mean?
Using Google Tag Manager as an example: when an event on the page, like a click, is triggered, an HTTP request is sent to the server-side Google Tag Manager. HTTP header information is naturally sent along. This includes, among others:

- Time
- IP address
- Page URL
- Approximate location (derived from the IP address)
- Operating system
- Browser
- Screen resolution
- Device

Additionally, there are other parameters specific to the configuration. Detailed information can be found in the documentation at https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers.

There are also parameters automatically captured by the Google tag for campaign optimization:

- utm_source
- utm_medium
- utm_campaign
- utm_content
- utm_term
- the click ID

You should also check which data is sent via custom fields in your Google Tag Manager configuration. In the server-side Google Tag Manager, transformers let you configure precisely which data should be forwarded, in what form, and which should be withheld. For a privacy-conscious implementation, the guiding principle should be: "Track only as much data as needed." The challenge is to limit tracking to what is necessary without incurring disadvantages.

We set up correct Server Side Tracking for you

The internetwarriors are a team of experts in various fields of online marketing. One of our main focuses is web analytics and Server-Side Tracking (SST). With extensive expertise and a profound understanding of the latest trends and technologies in digital analytics, we offer tailored solutions to optimize our clients' online presence. We are a valuable partner when you want to set up professional tracking that provides all the data you need for strategic decisions and for monitoring your online marketing activities. Contact us now without obligation!
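As a small technical appendix to the campaign parameters listed above: the sketch below shows how those parameters can be read from a landing-page URL. A real Google tag captures them automatically; the function name and the exact parameter list here are our own, for illustration only.

```typescript
// Illustrative sketch: extracting the campaign parameters named in the
// article from a landing-page URL. Not GTM code; a Google tag does this
// automatically. "gclid" stands in for the click ID mentioned above.
const CAMPAIGN_PARAMS = [
  "utm_source", "utm_medium", "utm_campaign",
  "utm_content", "utm_term", "gclid",
];

function extractCampaignParams(pageUrl: string): Record<string, string> {
  const url = new URL(pageUrl);
  const found: Record<string, string> = {};
  for (const name of CAMPAIGN_PARAMS) {
    const value = url.searchParams.get(name);
    if (value !== null) found[name] = value;
  }
  return found;
}

console.log(extractCampaignParams(
  "https://example.com/?utm_source=newsletter&utm_medium=email&gclid=abc123"
));
```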

Privacy Sandbox Stopped! The Future of Chrome Third-Party Cookies

Jul 29, 2024

Halid Osmaev

Halid

Osmaev

Category:

Web Analytics

Google planned to block third-party cookies in Chrome and replace them with cohort data. However, this ran into difficulties. Even though the plans were abandoned, the fear of a comprehensive tracking blockade remains. In this blog post, you'll learn more about the latest developments in tracking and how our solution can help.

Chrome relies on third-party cookies – What you need to know now

In 2019, Google announced that with the introduction of the Google Privacy Sandbox, it would block all third-party cookies in its Chrome browser, a measure that the Safari and Firefox browsers have already implemented. However, Google's approach encountered significant challenges, particularly regarding the use of cohort data provided by Google based on browsing history.

Instead of traditional third-party cookies, the Privacy Sandbox is intended to provide information about user groups, known as cohorts or interest groups. These groups are based on browsing history and offer advertisers a new way to reach target audiences. The problem: marketers would have to rely on the data provided by Google, which could allow Google to gain a monopoly position. Additionally, the Privacy Sandbox trial in the first quarter of 2024 did not yield the desired outcomes. These insufficient results ultimately led Google to abandon its plans to block third-party cookies, as confirmed in an official blog post by Anthony Chavez.

Yet the concern over a comprehensive blockade is not necessarily over. With Google's new proposal, users can make informed decisions about which data to permit for tracking. This might lead many users to opt for a full or heavily restricted tracking blockade. It therefore remains essential to keep your advertising tracking methods up to date to counteract the effects of potential tracking blockades. A proven technology in this area is server-side tracking.
Our case studies and the experiences of numerous clients show that server-side tracking can increase the captured data volume by at least 12% . Do you have any further questions or comments? Feel free to contact us or use the comment function below.

Why LinkedIn CAPI Tracking Doesn't Work

Apr 12, 2024

Halid Osmaev

Halid

Osmaev

Category:

Web Analytics

LinkedIn CAPI is a new interface for sending conversions via server-side tracking. Additional user data can also be sent to attribute conversions more accurately to the respective LinkedIn campaigns.

What is LinkedIn CAPI all about?

The term CAPI may already be familiar to some from the Meta CAPI tag, a great tool for significantly improving the tracking and thus the optimization of Meta campaigns. Now an official LinkedIn CAPI tag has finally been added as a tag template to Google Tag Manager; it can be found and added through the community templates. Several guides exist on how the tag should be implemented, but during implementation it became clear that it does not work as initially thought.

The Problem

Many of the instructions currently found on the internet do not lead to functional tracking. Despite configuring everything as described in various guides, no conversions are registered in the LinkedIn Campaign Manager. The reason: some parameters are required but not marked as such. To implement LinkedIn CAPI tracking correctly, follow the instructions below.

The (correct) Guide

Prerequisites

For LinkedIn CAPI conversions to be tracked, the following prerequisites must be met:

- There must be functioning server-side tracking in place
- LinkedIn campaigns must already be set up

1. Generate API Access Token

The first step is to generate and then extract the LinkedIn API access token.

- Log in to the LinkedIn Campaign Manager.
- Under Analyze in the left menu, click on Conversion Tracking.
- Click on Data sources.
- In the Create source dropdown, click on Conversion API or CSV.
- Select the Google Tag Manager option and follow the steps.
- Securely save the generated API token.

2. Create a LinkedIn Conversion

Additionally, the LinkedIn conversion to be tracked and its Conversion Rule ID are required.

- In the Create Conversion dropdown, click on Conversions API or CSV conversion.
- Select the Google Tag Manager option and follow the steps.
- After successful creation, reopen the conversion from the conversion overview.
- The Conversion Rule ID can be found in the URL.

3. Configure the Client

Now that the necessary information is available, the client-side Google Tag Manager configuration can begin.

- Create a new first-party cookie variable that reads the cookie "li_fat_id".
- Open the Google tag and add the parameter user_data.linkedinFirstPartyId with the value of this variable.
- Additionally, add a randomly generated number as the parameter EventID. This can be used to deduplicate events when using client-side LinkedIn Insight tags and CAPI tags.
- If not already present, create a GA4 event for the created conversion.

4. Configure the Server

Next, the server-side Google Tag Manager must be configured:

- Add the LinkedIn | CAPI tag template from linkedin-developers from the community gallery.
- Fill in the parameters LinkedIn API Access Token and Conversion Rule ID with the values from steps 1 and 2.
- Enter the sent event ID under EventID.
- For the parameter Conversion Value, enter the value {"currencyCode": "EUR", "amount": "0.0"}. Here, "EUR" and "0.0" can be adjusted if the conversion provides a corresponding value (e.g., a product purchase). If this is not the case, leave the value unchanged.

Finally, the tag configuration should look as follows.

Notes

The reason many guides fail is that parameters like Conversion Value, although marked as optional, are not actually optional. Additionally, the parameter user_data.linkedinFirstPartyId is necessary; otherwise attribution to the LinkedIn campaigns is not possible.

Additional Configuration Options

It is possible to expand LinkedIn CAPI tracking.
The following parameters can optionally be sent from the client-side Google tag so that they are automatically captured and sent by the LinkedIn CAPI tag:

- user_data.sha256_email_address: the hashed email address of the user
- user_data.companyName: the name of the user's company
- user_data.jobTitle: the user's job title
- user_data.address.first_name: the user's first name
- user_data.address.last_name: the user's last name

Attribution to campaigns based on the last four parameters is not always guaranteed; if only the first name is sent, for example, multiple users may match the provided data. Furthermore, please note that any user data should only be sent if the user has given consent!

Our competent web analytics team is happy to assist you in setting up the tracking or to answer any other questions. Please get in touch.
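As a technical appendix to steps 3 and 4 of the guide above, here is a plain sketch of the client-side pieces: reading the "li_fat_id" first-party cookie, generating a random event ID for deduplication, and building the Conversion Value object. This is an illustration only, not GTM template code, and the helper names are our own.

```typescript
// Plain illustration of the client-side pieces from steps 3 and 4.
// Helper names are hypothetical, not part of any GTM or LinkedIn template.

// Read a cookie value (e.g. "li_fat_id") from a document.cookie-style string.
function readCookie(cookieString: string, name: string): string | null {
  for (const part of cookieString.split("; ")) {
    const [key, ...rest] = part.split("=");
    if (key === name) return rest.join("=");
  }
  return null;
}

// A randomly generated number suffices to deduplicate events between the
// client-side LinkedIn Insight tag and the server-side CAPI tag.
function generateEventId(): string {
  return Math.floor(Math.random() * 1e12).toString();
}

// Build the Conversion Value object from step 4; "EUR"/"0.0" is the
// default the article recommends when no monetary value is available.
function buildConversionValue(amount?: number, currencyCode = "EUR") {
  return {
    currencyCode,
    amount: amount !== undefined ? amount.toFixed(2) : "0.0",
  };
}

console.log(readCookie("li_fat_id=abc123; other=x", "li_fat_id"));
console.log(buildConversionValue(19.9));
```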

AI Mode and AI Overview in Google Ads – What should you keep in mind?

Apr 22, 2026

Markus

Brook

Category:

Search Engine Advertising

The key points at a glance

- Google has fundamentally changed: instead of blue links, AI-generated answers dominate the search results page, with direct effects on Google Ads.
- AI Overviews have been active in Germany since spring 2025. Ads can already appear above, below, and in some cases within the AI responses.
- Ads directly in Google AI Mode are currently being tested in the US and will soon also come to Germany.
- Only certain campaign types qualify for these new placements, above all Broad Match, AI Max for Search, Performance Max, and Shopping Ads.
- Anyone who still works exclusively with Exact Match or a rigid campaign structure will lose visibility exactly at the moments that matter.
- AI Max for Search is currently the fastest-growing AI feature in Google Ads and a key lever for the new placements.
- Anyone who optimizes their campaign structure, data quality, and assets now secures a decisive head start.

Search has fundamentally changed

Anyone searching on Google today increasingly gets a direct answer instead of a list of links. The search results page advertisers have grown used to over the years looks fundamentally different in 2026 than it did just two years ago. Two technologies are driving this change:

AI Overviews are AI-generated summaries that have been active in Germany since spring 2025. They appear at the top of the page for more complex or informational search queries and often answer the question so completely that many users do not scroll any further. This changes where and how ads are perceived, and which ones are served at all.

Google AI Mode takes things a step further. Available in Germany since October 2025, it is a standalone, conversational search interface. Users no longer type individual search terms but have real dialogues, similar to an AI assistant. The intent behind them is often far more layered, the context more complex.
For Google Ads advertisers, this means: reaching the right audience no longer depends only on precise keywords, but on understanding intent, context, and conversation flow. The AI decides, and it decides based on data and signals, not manually maintained keyword lists.

Where do ads actually appear, and which campaigns qualify?

This is the most practical question advertisers ask: where exactly do my ads appear, and what do I need to do for that?

In AI Overviews

Ads can appear in three places around an AI Overview: above, below, or directly within the AI answer. Placement above and below is already available in all markets where AI Overviews are active, including Germany. Integration directly into the answer text is currently limited to English-language markets.

Important to understand: there is no separate opt-in for these placements. If you use the right campaign types and have relevant ads, you are automatically considered. Nor can this placement be specifically excluded. Google evaluates both the actual search query and the content of the AI-generated answer to decide whether an ad fits. This is a key difference from classic keyword logic: relevance is now measured in the context of the entire answer, not just the individual search term.

In Google AI Mode

Tests are currently running in the US. Ads appear there directly embedded in the conversational responses, not as separate blocks but as an integrated part of the AI answer. This is an even tighter context than with AI Overviews. The global rollout, including Germany, has been announced, but no specific date has been set yet.

Which campaign types are actually qualified?

This is the point where many advertisers get stuck. Not every campaign is automatically served in AI Overviews or AI Mode.
Google has clearly defined which campaign types qualify:

- Search Ads with Broad Match keywords
- AI Max for Search
- Performance Max (PMax)
- Shopping Ads

Campaigns that work exclusively with Exact Match or Phrase Match are not qualified for these placements. This is a structural turning point: anyone who still relies on hyper-granular keyword structures will, over time, lose impression share exactly at the moments when users are most ready to buy.

AI Max for Search: What is behind it, and why is it so relevant right now?

AI Max in Google Ads is not a new campaign type but a feature package that can be integrated into existing search campaigns. Activated with one click in the campaign settings, it fundamentally changes the campaign logic.

Specifically, AI Max combines two approaches: first, the familiar Broad Match technology, which matches search queries even when the exact wording differs from the entered keywords; second, so-called keywordless serving, similar to Dynamic Search Ads in the past but much smarter. The AI independently recognizes which search queries an ad is thematically relevant for, even without a stored keyword.

To this are added three other core features:

- Automated text adaptation: Google generates new headlines and descriptions based on existing ad titles, descriptions, and landing page content, and selects in real time the combination that best fits the respective search query. Since February 2026, text guidelines have been available worldwide for all advertisers: there you can define which wording the AI may use and which it may not.
- URL expansion: Users are automatically sent to the page on your website that best matches the search query, not necessarily the URL stored in the campaign. Certain pages can be excluded.
- Brand controls: Advertisers can define which brands their ads should appear for and which they should not.
This is especially relevant for accounts that actively manage competitor or brand campaigns.

When does AI Max pay off, and when does it not (yet)?

AI Max shows its strengths above all in accounts that already have enough conversion data and target broad audiences. In e-commerce and with B2C products with high search volume, results are typically strongest. In niche markets, with very explanation-heavy B2B products, or in accounts with only a few daily conversions, the rollout should be more cautious. An A/B test with a 50/50 split between the existing campaign and the AI Max version is the most sensible first step here.

What applies in any case: the foundation has to be right. Clean conversion tracking, a data-driven attribution model, and clear conversion goals in the account are mandatory. Anyone activating AI Max without this foundation leaves the AI in charge without a map or compass.

Performance Max: Google's preferred channel for AI Overviews

Performance Max is not new, but its role has shifted. Google increasingly sees PMax as the main format for serving in AI-driven surfaces, because PMax was built from the ground up for data-driven, cross-channel serving: it provides the AI with text, images, videos, and audience signals, and leaves the optimal combination to it.

For advertisers, this means: anyone who has already set up PMax properly and regularly maintains asset groups is well positioned for AI Overviews and AI Mode. Anyone not yet using it should start now at the latest, with clear goals, enough assets, and regular monitoring of search terms.

A good sign: PMax has become significantly more transparent in recent months. Negative keywords can now be added directly, and channel reporting shows what each channel (Search, YouTube, Display, Gmail, Discover) contributes to performance, without additional scripts or workarounds.
What this means for campaign structure

Many accounts have grown historically: strict match-type separation, single-keyword ad groups, dozens of ad groups for minimal differences. That used to make sense to maintain control. Today, this structure works against the AI. If you split data across too many campaigns, you give the algorithm too little material to learn from. Instead of quickly recognizing patterns and optimizing, it stalls.

The approach that has proven effective in practice looks like this: topic-based campaigns with a manageable number of keywords, a combination of Exact and Broad Match, and Smart Bidding as standard. Not maximally granular, but maximally data-dense. That does not mean giving up control completely: negative keywords, audience signals, text guidelines, and regular review of search queries remain active levers.

The foundation: data quality decides

Here is a mistake that runs through almost all accounts: people discuss campaign types and features before the data foundation is right. But the rule is: garbage in, garbage out. If you feed the AI bad data, you are only automating budget burn.

Server Side Tracking (SST) is the foundation. Classic browser tracking increasingly loses data due to ad blockers, cookie restrictions, and iOS updates. Server Side Tracking bypasses these hurdles and, in practice, delivers at least 12% more usable data points, signals that Smart Bidding and AI Max urgently need for optimization.

In addition, advertisers should actively use the following data sources:

- First-party data / customer lists: Existing and new customers can be weighted differently via Customer Match lists. For new customer acquisition, Smart Bidding can be prompted to weight new customers more heavily, with concrete effects on the bid logic.
- CRM data (offline conversions): Especially in B2B, it makes no sense to treat every lead equally.
Anyone feeding back CRM data (e.g., from HubSpot or Salesforce) via offline conversions gives Google Ads the signal to distinguish between "poor" and "valuable" leads, and that is exactly the prerequisite for sustainably profitable growth.

Conclusion: Act now, before the market does

Google Ads in 2026 is a data-driven system, not a manual tool. The question is no longer whether to use AI Max, AI Overviews, and modern tracking structures, but when. Anyone who actively shapes the transformation now secures visibility at the moments that really matter. As an experienced Google Ads agency, we guide you through exactly this process: from tracking infrastructure to campaign structure to AI Max and Performance Max. Get in touch now!

FAQ

Will my Google Ads be served automatically in AI Overviews?
Not automatically. Ads appear in AI Overviews when the ad matches both the search query and the content of the AI answer. Another requirement is that you use Broad Match, AI Max, or Performance Max.

Does advertising in Google AI Mode cost more than classic Search Ads?
There is no separate pricing model for AI Mode ads. Google's auction system stays the same; placement is determined by relevance, quality score, and bid.

Can I exclude my ads from AI Overviews?
No. Google currently does not offer a way to specifically disable these placements.

Do I get separate reporting for AI Overview ads?
Not yet in full. At present, ads in AI Overviews are counted as "Top Ads" and appear accordingly in standard reports. Dedicated segment reporting has been announced but is not yet available.

When will ads in Google AI Mode also come to Germany?
There is no official date yet. Ads in AI Mode are currently being tested in the US (as of March 2026). The international rollout has been announced.

Does AI Max also make sense for smaller accounts?
That depends on the individual case.
In principle, AI Max needs a solid data foundation: enough conversions, clean tracking, and clear goals. For accounts with only a few daily conversions, we first recommend a controlled A/B test before the entire campaign is switched over.

Do I need to create new campaigns to appear in AI Overviews?
No. Existing campaigns qualify automatically, provided the right campaign types and match types are used.

What is the difference between AI Overviews and AI Mode?
AI Overviews are AI summaries within normal Google search. AI Mode is a separate, conversational search interface for complex, multi-step queries, comparable to an AI chatbot directly in search.

Agentic Commerce & Agentic Shopping 2026: Why AI Shopping Agents are Rewriting Commerce

Mar 30, 2026

Moritz

Klussmann

Category:

Artificial Intelligence


The world of online marketing is spinning faster today than ever before. While we've been fighting for clicks and conversions at internetwarriors since 2001, we're currently experiencing the most radical upheaval in our history. The trigger: Agentic Commerce. We are transitioning from mere information search to task-oriented execution. Today, a user no longer just asks for products; they instruct an AI shopping agent to autonomously handle the entire purchase process. In this article, I'll show you why the failure of OpenAI's "Instant Checkout" is not the end of the hype but the starting point for a new technical infrastructure that you as a retailer need to know about now.

The OpenAI Pivot: From Shopping Cart to Discovery Platform

In March 2026, OpenAI ended its "Instant Checkout," prompting one of the most heated debates in e-commerce. Failure or strategy? We reveal what is really behind the pivot and what it means for retailers.

What was Instant Checkout?

In September 2025, OpenAI launched the Agentic Commerce Protocol (ACP) with Stripe, bringing "Instant Checkout" to ChatGPT. The vision: users find a product in the chat and buy it directly without leaving the platform. Etsy, Walmart, and Shopify were the first partners; Shopify president Harley Finkelstein called it a "new frontier" for online retail.

Why did direct checkout fail?

In early March 2026, OpenAI pulled the plug. What critics dismiss as the failure of Agentic Commerce is, on closer inspection, a strategic pivot from which we can learn a lot. OpenAI underestimated the immense complexity of global commerce. Three critical factors made direct purchase completion in the chatbot impossible.

The three technical killers:

1. Lack of real-time synchronization: The inventory data of millions of retailers could not be reconciled at the required speed; outdated prices and stock levels immediately shattered user trust.
Compliance hurdles: Systems were missing for automated calculation of regional taxes (in the US alone, thousands of local tax jurisdictions) and for compliance with local laws like the Price Indication Regulation (PAngV) in Europe.   3. Fraud prevention: Agent-based transactions require completely new security architectures to prevent automated abuse. Another factor that is rarely mentioned in reporting: the withdrawal comes immediately after Amazon's $50 billion investment in OpenAI. Amazon controls 40 percent of US e-commerce and is building its own AI shopping tool with Rufus . Whether coincidence or strategic calculus – the timing is remarkable. 🟢 Update: March 25, 2026 OpenAI has simultaneously launched a completely new shopping experience with the checkout withdrawal: visual product browsing, side-by-side price comparisons, and image upload for product searches. Seven major US retailers – including Target, Sephora, Nordstrom, and Best Buy – are already live via ACP. Walmart operates a dedicated In-ChatGPT app with loyalty integration and native Walmart payment. This is not a withdrawal – this is a pivot. The new Warrior reality: OpenAI is primarily focusing on Product Discovery through ACP. The checkout returns to the retailer – but the decision of which retailer gets the order is increasingly made by the agent. Agentic Shopping works – just not yet in the West Anyone who believes that the failure of Instant Checkout proves Agentic Shopping is just hype is making a categorical mistake. Alibaba's Qwen-App is already completing food orders, travel bookings, and product purchases entirely in a single conversation – and at scale. The decisive difference: Alibaba owns the AI model, the marketplace, the payment infrastructure, and the logistics all from one source. OpenAI attempted to replicate the same without owning this stack. It was structurally doomed to fail. 
Google UCP: The new operating system of commerce While OpenAI course-corrects, Google is creating facts on the ground with the Universal Commerce Protocol (UCP). Unlike closed systems, UCP is an open standard that allows AI agents to communicate directly with merchants' backends – from discovery through checkout to post-purchase management. For you as a retailer, this means: your Google Merchant Center (GMC) becomes the critical interface for AI in e-commerce. Google has introduced new attributes to make your products machine-readable:

- product_faq – questions and answers directly extractable from the feed for AI agents
- product_use_cases – specific scenarios in which your product offers the best solution
- native_commerce – a switch signaling whether your product is ready for autonomous checkout

The advantage for Germany: Google Merchant Center and Google AI Mode are already active in DACH. Retailers who optimize their feed now secure a genuine head start. SEO alone is no longer enough: Welcome to the era of GEO Our analysis of German e-commerce shops shows a clear picture: A top ranking in traditional search does not guarantee visibility in AI responses. Over 60 percent of URLs linked in AI overviews do not rank in the top 50 of traditional Google search. The rules have changed. This is where Generative Engine Optimization (GEO) comes into play – the discipline of optimizing content not for human clicks but for extraction by AI systems.
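To make the three attributes above concrete, here is a minimal sketch of an enriched feed item. Only the attribute names (product_faq, product_use_cases, native_commerce) come from this article; the surrounding item structure, field names, and product data are illustrative assumptions, not an official feed specification.

```python
import json

# Hypothetical feed item; only the three UCP attribute names are taken
# from the article, the rest of the structure is invented for illustration.
feed_item = {
    "id": "SKU-12345",
    "title": "Full-motion TV wall mount, 65 inch",
    "price": {"value": "79.90", "currency": "EUR"},
    # Q&A pairs an AI agent can extract directly from the feed
    "product_faq": [
        {"question": "Which VESA patterns are supported?",
         "answer": "VESA 200x200 up to 600x400."},
    ],
    # Scenarios in which the product is the best solution
    "product_use_cases": [
        "Corner mounting with up to 90 degrees of swivel",
        "Rooms with changing seating positions",
    ],
    # Signals whether the product is ready for autonomous checkout
    "native_commerce": True,
}

print(json.dumps(feed_item, indent=2, ensure_ascii=False))
```

Whatever the final UCP syntax turns out to be, the takeaway stands: each product should carry extractable Q&A, explicit use cases, and a machine-readable checkout signal.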
Feature | Classic SEO | Generative Engine Optimization (GEO)
Target Group | Human users | AI agents & Large Language Models
Primary KPI | Click-through rate (CTR) & rankings | Mention rate & citation authority
Content Logic | Keywords & readability | Semantic depth & fact density
Technical Basis | Crawlability & loading speed | Structured data & API connectivity
Success Measurement | Google Search Console (rankings) | Brand mentions in LLM responses

Warriors Insight: In Germany, AI overviews already appear in 33 percent of all search queries. If you don't opt for GEO now, you will become invisible to the "agent customer" before they even arrive at a website. Strategic Warriors Knowledge: Brand power and the 95:5 rule In the Agentic Web, it's not just the keyword that counts anymore, but the authority of your brand as an "entity" – how a Large Language Model knows, categorizes, and recommends your brand. The 95:5 rule in B2B Only 5 percent of your target group is currently ready to buy (in-market). The remaining 95 percent need to be reached through thought leadership and trust building in the long term. AI agents prefer brands that are anchored as expert entities in the knowledge graphs of Large Language Models. Those who only optimize for transactional keywords lose the majority of their potential customers before they are ready to buy. Preferred Sources: The Democratization of the Algorithm Google now allows users to actively mark their preferred sources. These "Preferred Sources" receive a permanent visibility boost – regardless of algorithm updates. This fundamentally changes the game: Trust is the new currency. You must persuade users to actively choose your brand as trustworthy – not just rank well. Checklist: Make your shop agent-ready now For German retailers, the groundwork begins today, even though fully autonomous Agentic Shopping in DACH is still 12–24 months away.
- Product data excellence in Merchant Center: Maintain GTINs, precise attributes, and the new UCP fields (product_faq, product_use_cases). A flawed feed is the largest AI visibility obstacle you can control yourself.
- Technical infrastructure for AI agents: Implement an llms.txt file (the robots.txt for AI crawlers) and consistently use JSON-LD – specifically the Product, FAQPage, and Article schemas. These are the signals that AI agents prioritize.
- API-first strategy: Ensure that inventories and prices can be retrieved in milliseconds via interfaces. Outdated data was the main reason for OpenAI's checkout failure – and the same mistake will be costly for retailers once agents actively place orders.
- Semantic enrichment with the Query Fan-Out principle: Answer the questions an AI asks when comparing products on behalf of a customer: For which use cases is the product optimal? What alternatives are there? What are common purchase barriers? This depth distinguishes cited from ignored content.
- Build a GEO strategy and brand authority: Ensure that your shop is perceived as an expert entity in relevant categories – in ChatGPT, Perplexity, and Google AI Mode. More on this in our GEO audit →
- Secure DACH compliance early: PAngV and GDPR apply to AI-mediated purchases as well. Price reductions must disclose the lowest price of the last 30 days as a reference – and this must be machine-readable. Clarify this early with your legal advisor.

Conclusion: Become a leader of the new era Agentic Commerce is no longer a science fiction scenario – it's the technological reality of today, still in development, but unstoppable. What OpenAI buried with Instant Checkout is a specific business model: the chatbot as a transaction facilitator between retailer and customer. What lives on – and is accelerating – is the underlying logic: AI shopping agents take over discovery, filter options, and prepare purchase decisions. This already happens, daily, for millions of users.
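The JSON-LD recommendation in the checklist above can be sketched in a few lines. This is a minimal example using the real schema.org Product and Offer vocabulary; the product data itself is invented, and a real shop would render the resulting JSON inside a script tag of type application/ld+json on the product page.

```python
import json

# Minimal schema.org Product markup (JSON-LD). Product data is invented.
product_ld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Full-motion TV wall mount, 65 inch",
    "gtin13": "4001234567890",  # maintain GTINs, as the checklist says
    "description": "Swiveling wall mount for 40-65 inch TVs, VESA up to 600x400.",
    "offers": {
        "@type": "Offer",
        "price": "79.90",
        "priceCurrency": "EUR",
        "availability": "https://schema.org/InStock",
    },
}

# Embed this in the page <head> or <body> so crawlers and AI agents can read it:
snippet = (
    '<script type="application/ld+json">'
    + json.dumps(product_ld, ensure_ascii=False)
    + "</script>"
)
print(snippet)
```

The same pattern works for FAQPage and Article markup; only the "@type" and its required properties change.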
The question for retailers is no longer whether Agentic Commerce is coming, but whether they are visible when the agent decides. The companies that will be ahead in two years are not the ones with the biggest budget. They are the ones with the best data, the strongest GEO presence, and the clearest understanding of how Artificial Intelligence in e-commerce is used as a lever rather than a threat. Frequently Asked Questions about Agentic Commerce What is the difference between Agentic Commerce and traditional e-commerce? Traditional e-commerce follows the Search & Click principle: The user actively searches, compares manually, and buys themselves. Agentic Commerce follows the Ask & Done principle: An AI shopping agent takes over product search, price comparison, availability check, and – if authorized – the purchase completion fully autonomously. What is Agentic Shopping? Agentic Shopping is the practical manifestation of Agentic Commerce: The user formulates a concrete goal – such as "Order printer cartridge XYZ at the best price by tomorrow" – and an AI shopping agent carries out all steps independently: search, comparison, purchase. Why did OpenAI discontinue Instant Checkout? OpenAI faced three technical hurdles: lack of real-time inventory synchronization across millions of retailers, no infrastructure for tax calculation, and no fraud prevention for agent-based transactions. OpenAI is now pivoting to Product Discovery – the checkout remains with the retailer. What is the difference between SEO and GEO? SEO (Search Engine Optimization) optimizes content for the Google search algorithm and for human users – the goal is the click. GEO (Generative Engine Optimization) optimizes for AI systems and Large Language Models that extract content and output it as a direct answer – without the user clicking through to a website. Both disciplines complement and build on each other. Is my shop legally safe for AI purchases in Germany?
In the DACH region, you must pay particular attention to GDPR and PAngV (Price Indication Regulation). Price reductions must always disclose the lowest price of the last 30 days as a reference – also machine-readable for AI agents. Clarify this early with your legal advisor before you register for Agentic Commerce protocols. When is Agentic Commerce coming to Germany? ACP and the new ChatGPT shopping hub are currently US-first. However, Google Merchant Center and Google AI Mode are already active in DACH – AI overviews already appear in 33 percent of all German search queries. Experts predict that AI agents could reach a market share of 20-30 percent in European e-commerce in two to three years. The preparation starts now. Is your shop ready for AI shopping agents? We analyze your GEO visibility and your product feed, and show you where you are currently invisible to AI agents – and how you can change that. Request GEO analysis now → Sources & further links: CNBC, March 2026: "OpenAI revamps shopping experience in ChatGPT after struggling with Instant Checkout" – cnbc.com Forrester Research: ConsumerVoices Market Research Survey, March 2026 Gartner: Bob Hetu, analyst, to CNBC, March 2026 The Information, March 2026: First report on the Instant Checkout withdrawal OpenAI Blog, March 2026: Official statement on Instant Checkout and new shopping experience Google: Universal Commerce Protocol – Announcement January 2026

Budget Killers in Your Account: Quickly Identify Unprofitable Campaigns and Optimize Google Ads

Mar 23, 2026

Karina

Nikolova

Category:

Search Engine Advertising

Article banner on budget killers in the account

One of the main differences between SEA and SEO is time. While SEO measures need time to show growth and performance improvements, paid campaigns require quick action, as any delay costs money. Even if your campaigns appear to be set up correctly at first glance, you can't rely on hope and a good gut feeling if they aren't delivering profitable results. In the following article, I will demonstrate three signs that help you recognize unprofitable campaigns at first glance and what could be behind them. Additionally, I will show you specifically how you should optimize your Google Ads campaigns in these cases. However, before we get started, there are three points that can provide a quick explanation for poor performance. If your campaigns still perform poorly despite these factors, you should choose a different approach to improve the figures and reduce Google Ads CPCs. Your tracking isn't working It's a commonly underestimated problem: Unexpected changes on your website, such as the creation of new landing pages or migration to other data platforms, can disrupt your tracking. This can result in your campaigns showing 0 conversions. Ideally, the Google Ads managers are informed in advance about such planned changes, but in reality, that's not always the case. An example: Once, a client of mine removed a CTA button that we had measured as a soft conversion goal. My campaigns began to struggle significantly, and I had to quickly find a solution to reduce Google Ads costs. In the end, we couldn't see any conversions because there was literally no conversion action left on the website that could trigger conversions in Google Ads. Tip: Regularly check that your tracking is functioning correctly. Without working tracking, you cannot optimize your Google Ads. It's still possible for conversions to be generated, but they won't appear in Google Ads, only in the backend. Once the tracking problems are resolved, your campaign might perform well again.
Your campaign is still in the learning phase  Paid campaigns need patience, even though we all want to see good results as quickly as possible. That would prove our expertise and help us further optimize and scale the Google Ads campaigns. However, new campaigns cannot always work wonders, as the algorithm needs time to learn and improve performance. The official learning phase usually lasts up to four weeks. Depending on the business model, this process can also be shorter because the quicker the campaign generates conversions, the faster the algorithm learns. However, this development is not always guaranteed. For instance, the average customer journey in the B2B sector generally takes more time. Additionally, it often includes several touchpoints before achieving the desired result.  Tip: Be patient during the learning phase.  Your main goal is not clear  Unrealistic expectations usually lead to disappointments - not only in life but also in Google Ads. If marketing goals are vague, clear results will not follow either. If the goals are clear, but you don’t know which campaign types are suitable for them, the figures will also disappoint.  For example, if you work with display or video ads, you should not automatically expect to receive many high-quality leads. Not because your setup is wrong, but because these campaign types pursue different goals. They are meant to increase the awareness of your product and cover the early phase of the customer journey. Moreover, the ad formats are tailored to this goal - think of skippable ads on YouTube. They are there to promote your brand and convey a message. However, it is not realistic to expect good leads from them, as they are likely to be skipped, with the customer taking no further action. If your shopping campaigns don’t deliver results for weeks, this is at least alarming.  Tip: Define clear objectives for each phase of the funnel and choose the appropriate campaign types. 
Only then can you effectively optimize your Google Ads campaigns. There is a Budget-Killer in the House But let's go back to the three clear signs that a budget-killer is present in your account:

- Campaigns with traffic but no conversions
- Rising CPAs
- Decreasing ROAS

If your goal is conversions and you see none or increasingly fewer, there's a problem – especially if your tracking is functioning and the learning phase is complete. If the campaign still does not deliver the desired conversions, this impacts not only your KPIs but also the performance of your automated bidding strategies. For instance, if you optimize for tCPA or tROAS, declining conversions will lead to a higher CPA, a lower ROAS, and overall restrictions on bidding strategies. Here is a list of factors that could explain the decline in conversions you are observing:

- Landing page – Any change that worsens the user experience can negatively influence the conversion rate as well as the bounce rate.
- Competition – Especially in e-commerce, competitors with lower prices can affect the number of conversions as well as the conversion rate.
- Seasonality – If your business experiences significant declines during certain periods, you should adjust your marketing strategy accordingly.
- Irrelevant traffic – Ensure that your ads don't appear for irrelevant search queries, to avoid paying for poor traffic. This often helps to lower Google Ads CPCs.
- Faulty targeting – A reasonable campaign setup is vital in Google Ads. However, even with an optimal setup, certain target groups or keywords may perform worse than expected. For this reason, you should quickly optimize the targeting of your Google Ads campaigns if the desired results fail to appear.

Google Ads campaigns are not static. What works well today can perform poorly tomorrow.
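The tCPA/tROAS effect described above is simple arithmetic: with spend held constant, halving the conversions doubles the CPA and halves the ROAS. A tiny sketch with invented numbers:

```python
# Minimal illustration of how falling conversions inflate CPA and depress
# ROAS (the metrics discussed above). All numbers are invented examples.
def cpa(cost: float, conversions: int) -> float:
    """Cost per acquisition: total spend divided by conversions."""
    return cost / conversions

def roas(revenue: float, cost: float) -> float:
    """Return on ad spend: revenue divided by total spend."""
    return revenue / cost

# Same 1,000 EUR spend, conversions halve: CPA doubles.
print(cpa(1000, 50))  # 20.0 EUR per conversion
print(cpa(1000, 25))  # 40.0 EUR per conversion

# Same spend, revenue halves: ROAS halves.
print(roas(4000, 1000))  # 4.0
print(roas(2000, 1000))  # 2.0
```

A bidding strategy targeting, say, a 25 EUR tCPA will throttle delivery as soon as the real CPA drifts toward 40 EUR, which is why falling conversions compound the problem.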
As a marketing manager, you should thoroughly understand the business model and goals, select the appropriate campaign types, set KPIs, and set realistic expectations. The rest lies in flexible and smart Google Ads optimization. Additionally, your task extends beyond Google Ads as overall performance is influenced by many other factors described above. For example, dramatic political or economic developments can have the same negative impact as a poorly optimized campaign. Your Google Ads expertise should go hand in hand with thorough market analysis so that you can see the bigger picture and take the right actions.  If you need assistance with this or if you want to scale your existing campaigns, our SEA team is happy to advise you. Contact us now! 

Identify and Properly Analyze AI Traffic in Google Analytics

Mar 9, 2026

Nadine

Wolff

Category:

SEO

Article Banner for 'AI Traffic in GA4'

Since Large Language Models (LLMs for short) have become part of everyday life and users increasingly use AI tools like ChatGPT, Gemini, Claude, or Perplexity, a completely new traffic source has emerged. For website owners and marketing managers, the question is increasingly how many users actually reach their website via links and recommendations from these LLMs, and how large the share of this AI-generated traffic is in overall visitor volume. This traffic – let's call it "AI Traffic" – is not automatically shown in Google Analytics. In this article, I'll show you how to find, measure, and evaluate AI Traffic in GA4. At the same time, you'll learn what conclusions you can draw from it for your planning and why AI visibility will be just as relevant in the future as classic search engine rankings. What exactly is AI Traffic and how is it composed? The term AI Traffic refers to all website visits that originate from AI systems and generative search engines. Here are some examples of where the traffic could come from:

- Traffic from ChatGPT/GPT Search
- Traffic from Perplexity
- Traffic from AI-integrated browsers (e.g., Microsoft Edge with the integrated Copilot)
- Copied links that users click from AI responses

AI Traffic can be generated actively by users when they click links in an AI response. In addition, there is passive traffic when AI systems crawl pages to process content for their models. Recognizing AI Traffic in GA4: The Most Important Methods 1. Recognize referrers (e.g., ChatGPT traffic) When a user clicks a link from an AI response, the browser automatically sends a so-called referrer. This information indicates which page the user is coming from. In GA4, this appears in the Traffic acquisition report as "Referral," for example with the source perplexity or claude. Figure 1: AI traffic via a referrer 2. UTM tracking For some time now, ChatGPT has automatically appended "?utm_source=chatgpt.com" to links it outputs in responses.
This means that this AI Traffic appears in Google Analytics not as a referral, but as its own source with UTM tagging – and is therefore easier and cleaner to identify than plain referral traffic. Perplexity and other AI systems do not necessarily do this; their traffic is often only identifiable via the referrer. Making AI Traffic visible in GA4 with exploratory data analysis An exploration (GA4's exploratory data analysis) offers the most flexible way to evaluate AI Traffic in a targeted manner. Unlike in standard reports, you can freely combine your own dimensions, filters, and segments here. To do this, create a new blank exploration and add a dimension and, if desired, one or more metrics:

- Dimension: Session source / medium
- Metric: Sessions

Figure 2: Exploratory data analysis To see only traffic from AI platforms, now create a filter using a regular expression (regex). This filter ensures that only sessions are shown whose source is one of the AI platforms mentioned. Figure 3: Example of a regex that filters the various AI systems The result shows you – as in the example above – a detailed table by source and medium. One thing stands out: ChatGPT appears in two variants, once as "chatgpt.com / referral" and once with UTM tagging as "chatgpt.com / (not set)." This is because ChatGPT does not consistently append the UTM parameter to every link. It is therefore recommended to evaluate both entries together. What you see in GA4 – and what it means Once you have isolated AI Traffic in GA4, you essentially have three different metrics available: Size & development: How many sessions are generated via AI platforms? How does this develop over time? A growing value shows that your content is increasingly being recommended by LLMs as a source. This in turn is a direct signal of your AI visibility. Links: Which pages are being linked? Which of your subpages appear as landing pages? This metric shows you which content LLMs consider relevant enough to recommend.
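The exact regex from Figure 3 is not reproduced in the text, so here is a plausible pattern to start from. The list of sources is an assumption; extend it to whatever AI platforms matter for your site.

```python
import re

# Plausible source filter for AI platforms; the source list is an
# assumption and should be extended as new AI referrers appear.
AI_SOURCES = re.compile(
    r"chatgpt|openai|perplexity|claude|anthropic|gemini|copilot",
    re.IGNORECASE,
)

# Example "Session source / medium" values as they appear in GA4:
sessions = [
    "chatgpt.com / referral",
    "chatgpt.com / (not set)",
    "perplexity.ai / referral",
    "google / organic",
]

ai_traffic = [s for s in sessions if AI_SOURCES.search(s)]
print(ai_traffic)  # the google / organic row is filtered out
```

In GA4 itself you paste only the raw pattern (chatgpt|openai|perplexity|...) into a "matches regex" filter on the Session source / medium dimension; the Python above just lets you sanity-check the pattern against real values first.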
These are your strongest pieces of content in an AI context. User behavior: Time on site, bounce rate, and engagement rate of AI Traffic compared with other channels provide insight into whether the linked content matches users’ expectations. High bounce rates, on the other hand, can mean that the linked page does not deliver what the AI response promised. What you can infer from AI Traffic in GA4 The landing pages (with the AI Traffic) are your direct feedback on which content LLMs consider worth citing. Look at what these pages have in common: Are they more explanatory how-to articles? Detailed guides? Definitions? These patterns show you which content format LLMs prefer - and you can use that specifically for new content! Identify content gaps Get an overview of which topics your AI Traffic is coming from and compare them with your overall content offering. Are there topic areas where you get traffic but only have a few or thin pieces of content? These are your content gaps - areas where LLMs already see you as a relevant source, but you still aren’t fully realizing the potential. Optimize content specifically for LLMs (GEO) Generative Engine Optimization, or GEO for short, is the counterpart to classic SEO - but for AI systems. Specifically, the goal is to structure content so that LLMs can easily process and cite it. This includes clear, concise answers to specific questions, well-structured sections with clear headings, and trustworthy, source-based language. Pages that already receive AI Traffic are your best starting point - they are clearly already working, and targeted optimization can further increase their visibility in LLM responses. Conclusion: AI Traffic will become a strategic success factor Recognizing AI Traffic in GA4 is possible, but only with the right methods. Anyone who understands AI visibility and tracks it cleanly gains valuable insights into the relevance and future viability of their content. 
For companies, this means a new responsibility in content creation and technical optimization. If you need support with tracking, SEO/GEO, or AI content strategy, feel free to get in touch with us. Our team will help you make AI visibility measurable and align your measures based on data. Contact us now! FAQ What is the difference between AI Traffic and Bot Traffic? Bot traffic comes from classic crawlers, while AI Traffic results from AI systems and real users in AI interfaces. Is AI Traffic automatically marked in GA4? Not completely. Some systems are recognized, but much of it still has to be filtered out via segments or referrers. Which AI platforms should I track in GA4? The most important sources today are ChatGPT, Perplexity, Claude, Gemini, and Microsoft Copilot. ChatGPT is usually the largest source because it automatically sets UTM parameters and is therefore the easiest to identify in GA4. Is it worth analyzing AI Traffic if the volume is still low? Clear answer: Yes! Anyone who starts measuring and understanding AI Traffic now builds an advantage before this channel becomes the standard for the industry. Similar to SEO in the early 2000s, the same applies here: those who get in early benefit in the long run.

Optimizing content specifically for prompts using the Query Fan-Out principle

Feb 13, 2026

Julien

Moritz

Category:

SEO

Article Header Banner for Query Fan-Out

Large Language Models (LLMs) like ChatGPT, Claude, or Gemini are fundamentally changing how content is found, evaluated, and utilized. Visibility is no longer solely achieved through traditional search queries but increasingly through prompts that users input into AI systems.  A frequently mentioned principle for optimizing one's content in this regard is the so-called Query Fan-Out principle. But what does this specifically mean for your content? In this article, you'll learn how ChatGPT & Co. decompose inquiries in the background and how you can structure your content so that it is relevant, comprehensible, and quotable for LLMs.  Key Points at a Glance  LLMs generate multiple search queries simultaneously from a prompt (Query Fan-Out).  These queries often run parallel in both German and English.  Content is evaluated based on topics, entities, terms, and synonyms.  In just a few steps, you can analyze which queries ChatGPT uses yourself. We show you how here.  Concrete requirements for your content structure can be derived from this.  What is Query Fan-Out? Query Fan-Out describes the process where an LLM generates multiple sub-queries from a single prompt. A prompt is thus unfolded into multiple queries. This multitude of queries is called Fan-Out because a query fans out like a fan into many individual queries. In the background, the system sends various search queries simultaneously to the index (e.g., Bing or Google). It is only from the synthesis of selected results that the AI compiles the final answer.  We will examine how you can easily investigate this yourself for a prompt in a step-by-step guide.  Why is Query Fan-Out so important?  Your content aims to be found. However, Large Language Models are increasingly used today. This changes the requirements for your content so that it continues to appear in Google search results but is also used by as many LLMs as possible for answer generation.  
The better your content matches the generated queries, the more likely it is to be used by LLMs as a source. Step-by-Step Guide: What Queries Are Created by ChatGPT? With a sample prompt in ChatGPT, one can clearly see how these queries appear. You can easily recreate this for your own prompts and optimize your content accordingly.

Step 1: Open Developer Tools
- Open ChatGPT in the browser
- Enter a prompt and submit it
- Right-click somewhere in the interface
- Select "Inspect"

Step 2: Filter the Network tab & search for the chat ID
- Switch to the "Network" tab
- Filter by Fetch / XHR
- Copy the chat ID from the last part of the URL
- Paste it into the search field
- Reload the page

Step 3: Select the network request
- Click on the network request with the chat ID in the name
- Switch to the "Response" tab

Step 4: Find the queries
- Search for the term "queries"
- You now see the specific search queries ChatGPT uses for its web search
- Mostly in German and English

Step 5: Evaluate the requests The following prompt was entered: "I want to mount my TV on the wall. What is the recommended seating distance for a 65-inch OLED TV? I'm looking for a high-quality and safe full-motion wall mount. Compare current models and suggest the best ones!" ChatGPT utilizes two sub-queries in the web search to find suitable content:

1. DE: "pivotable wall mount 65 inch TV recommendations wall mount TV 65" pivotable"
1. EN: "best full motion TV wall mount for 65 inch TVs review high quality"
2. DE: "Recommended seating distance 65 inch TV distance OLED TV seating distance"
2. EN: "what is recommended viewing distance for 65 inch TV"

From these queries, ChatGPT searches for appropriate sources and subsequently generates the following answer, with source indication: Now you should carefully look at the queries and also the sources used. What types of content are cited? The example used clearly shows that there is an information cluster and a comparison cluster. Different sources are used for these clusters.
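If you save the response from Step 3 to a file, you can also extract the queries programmatically instead of searching by hand. The internal response format is undocumented and changes over time, so this sketch walks the entire JSON tree looking for any "queries" key rather than assuming a fixed structure; the mini-response at the bottom is invented for illustration.

```python
import json

def find_queries(node, found=None):
    """Recursively collect every list stored under a 'queries' key."""
    if found is None:
        found = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "queries" and isinstance(value, list):
                found.extend(value)
            else:
                find_queries(value, found)
    elif isinstance(node, list):
        for item in node:
            find_queries(item, found)
    return found

# Invented mini-response; a real one would be loaded with json.load(open(path)).
response = {"search": {"queries": ["recommended viewing distance 65 inch TV"]}}
print(find_queries(response))
```

Run over a set of saved responses for your most important prompts, this gives you a small corpus of fan-out queries you can cluster and feed into your content planning.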
To be optimally found for this prompt, you need an informative article on the topic "Recommended viewing distance to the TV". From ChatGPT's queries you can derive that the subtopics TV size in inches and display type (e.g., OLED) should be addressed. Additionally, the synonym TV viewing distance should appear in the content, preferably in an H2. The product selection comes from other articles. Thus, your products should appear in as many comparison articles (on external websites) on the topic "Best TV wall mount" as possible, so they can be presented here. Additionally, ChatGPT accesses manufacturer websites. With your own content on product and category pages, you can influence the answers of LLMs. Clearly consider what makes your product or service unique and how you stand out from competitors. Exactly these advantages can bring users from the AI chat to your own website. Additionally, it can also be beneficial to publish your own comparison articles. Naturally, you should strongly present your own brand within these, but also mention competitors and their advantages. LLMs recognize that the information density on the English-speaking web is generally higher. Translating your own content can therefore be a great advantage and ensure greater visibility with ChatGPT and others. Strategies for Optimizing for the Query Fan-Out Principle What does the Query Fan-Out principle mean for your own content? You need an SEO strategy that works even in the age of generative AI. For this, we have developed five tips that you can implement directly. 1. Comprehensive Topic Clusters Instead of Keyword Focus Query Fan-Out behavior shows the intent to capture topics in their entirety. LLMs divide a prompt into multiple thematic clusters with varying intent, such as information, comparison, or product queries. Informative content should be built comprehensively.
Content should not only answer "What" questions but also "How", "Why", and "What are the alternatives?" Use targeted synonyms and related entities. If you write about “TV wall mounts”, terms like "VESA", "Pivotable", and "OLED television" should be included.  2. Direct Answers  Write precise definitions and direct answers to user questions at the beginning of your paragraphs . An AI looking for a quick answer to a sub-query will more likely cite text that provides a clear answer: “The ideal viewing distance for a 65-inch OLED TV is about 2.50 to 2.80 meters.” Avoid unnecessary filler sentences just to include keywords.  Further detailed and extensive information considering secondary keywords can be placed afterward.  3. Structured Data  LLMs work resource-efficiently and love structure. When an AI conducts a price comparison or technical analysis, it preferably accesses data marked up with Schema.org . Use structured data in JSON-LD format to make products, FAQs, and reviews machine-readable.  4. Internationally Visible Content  Often, Large Language Models automatically generate English-language queries, even when prompts are written in German. Therefore, building internationally visible content is increasingly important, even if your target audience is German-speaking. You should provide your core content in English as well.  5. Building "External" Visibility  Transactional inquiries like “Best price-performance TV wall mount 2026” are answered using comparison content and user reviews . To be visible with your brand in LLMs, you need to build recognition. Content partnerships with magazines or collaborations with influencers who publish independent reports and product comparisons are a strong lever. It’s not just about classic backlinks that provide authority but also about mentions of the brand in a relevant context on as many platforms as possible. 
This can be articles from magazines, competitors, and online retailers, as well as user-generated content on YouTube, Reddit, etc.

Conclusion: SEO & GEO United

Query Fan-Out reveals how LLMs find and evaluate content. By structuring your content to answer multiple questions simultaneously, being thematically complete, and covering relevant entities as well as synonyms, you optimize not only for traditional search engines but deliberately for AI systems. A new form of visibility is currently being created here. Optimization for the Query Fan-Out principle is no longer a "nice-to-have" but the new foundation for digital visibility. By understanding how LLMs deconstruct queries, you can create content that is not only found but also cited as a trustworthy source.

If you need assistance or want to optimize your content specifically for LLMs, our SEO / GEO team will gladly advise you. Contact us now!
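The structured-data tip above (point 3) can be sketched in a few lines of Python. The schema.org types and properties used here (FAQPage, Question, Answer, mainEntity, acceptedAnswer) are the real vocabulary; the helper function name and the sample question are purely illustrative.

```python
import json

def faq_jsonld(pairs):
    """Build a schema.org FAQPage object from (question, answer) pairs.

    The type and property names follow the schema.org vocabulary; the
    content passed in is whatever your page actually answers.
    """
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

# Serialize for embedding in a <script type="application/ld+json"> tag.
markup = json.dumps(
    faq_jsonld([
        ("What is the ideal viewing distance for a 65-inch OLED TV?",
         "About 2.50 to 2.80 meters."),
    ]),
    ensure_ascii=False,
    indent=2,
)
print(markup)
```

The same pattern extends to Product and Review markup; the key point is that the direct answer from tip 2 also appears verbatim in the machine-readable layer.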

ChatGPT for Ad Copy: Turning Strategic Decisions into Measurable Performance

Jan 30, 2026

Yasser

Teilab

Category:

Search Engine Advertising

Banner for a blog post: Laptop with AI graphics

Good ads rarely emerge from a sudden spark of inspiration or pure creative chaos. In the world of performance marketing, they are the result of a rigorous process: clear decisions, sound hypotheses, and the relentless willingness to test them in the market against the reality of data. At this point, ChatGPT for ad copy becomes either a highly effective precision tool or a mere text production machine that just creates digital noise. AI does not determine the success of a campaign; it merely exposes how structured your marketing thinking really is.

In this guide, you'll learn how to transform ChatGPT from a "writing aid" into a strategic performance tool that elevates your Google Ads and Meta Ads to a new level. This strategic approach is exactly what we implement daily at internetwarriors in Google Ads and Meta Ads: data-driven, test-based, and scalable. Book an appointment with us now!

The Paradox of AI Text Production: Why More Content Doesn't Automatically Mean More Success

Ad copy has always been a testing problem. Marketers formulate assumptions, launch them, and let the numbers decide. The real limit was never in tracking or analysis, but in operational capacity. Every new ad, every new "angle" took time in conception, coordination, and creation. ChatGPT has shattered this limit: a new opening or an alternative tonality can be developed in seconds today. But here's the trap: those who misuse ChatGPT only scale mediocrity.

The shift in everyday work:

• Previously: The bottleneck was writing (copywriting).
• Today: The bottleneck is thinking (strategy & psychology).

ChatGPT doesn't think strategically. It doesn't decide which message is relevant in the market. If your ads didn't work before, ChatGPT won't solve this problem; it will only accelerate failure by producing more bad ads in a shorter time.

Preparation: Ad Copy Starts Not in the Prompt but in the Focus

Much of what is perceived as "generic" AI text is due not to the model but to a weak briefing.
Before you type the first prompt into the chat window, one central question must be answered: Why should the audience click right now?

The Psychology of the Click

People don't click on ads because a product is "innovative" or "market-leading." They click because they expect a transformation. ChatGPT is excellent at translating a well-defined idea into variations, but it is unsuitable for finding that idea itself.

What you need to define before using ChatGPT:

• The specific pain point: What exact problem keeps your customer awake at night? (Not "they need software", but "they're afraid of data loss".)
• The functional benefit: What improves immediately? (Time savings, risk reduction, status gain.)
• Objection handling: What thought prevents the customer from clicking? ("Too expensive", "too complicated", "no time to switch".)

Thinking in "Angles": The Framework for High-Converting Ads

Those who use ChatGPT for ad copy should stop asking for "texts" and start thinking in angles. An angle is a conscious decision for a psychological perspective.

Angle Type   | Focus                      | Example (Project Management Tool)
Efficiency   | Time savings & focus       | "Gain back 5 hours per week."
Safety       | Error avoidance & control  | "Never miss a deadline again."
Simplicity   | Low barrier & usability    | "Set up in 2 minutes. No training required."
Social Proof | Trust & benchmarking       | "Why 500+ agencies have switched."

The rule: an angle always corresponds to exactly one hypothesis. Only when the angle is set do we let ChatGPT formulate the variations. Defining, testing, and systematically scaling angles is not a creative but a strategic problem. If you want to know how we translate such hypotheses into high-performing campaigns, find out more about our approach now!

ChatGPT for Google Ads: Mastering Responsive Search Ads (RSA)

In Google Ads, AI plays to its strengths especially well with Responsive Search Ads. This ad format thrives on the combination of different elements. The most common mistake?
Creating 15 headlines that all say almost the same thing.

The Building Block Principle

Effective RSA copy is created when each headline serves a clear function. We use ChatGPT to serve these functions deliberately:

• Function A: Problem description (e.g., "Tedious Excel lists?")
• Function B: Benefit promise (e.g., "Automatic reporting at the push of a button.")
• Function C: Trust signal (e.g., "2024 test winner.")
• Function D: Call-to-action (e.g., "Request demo now.")

Strategic prompt tip for Google Ads: "Create a total of 10 headlines for a Google Search Ad for product [X]. Important: Create 3 headlines that address a problem, 3 headlines that mention a benefit, and 4 headlines with a strong CTA. Each headline must be a maximum of 30 characters long. Avoid repetitions."

Meta Ads: The Battle for the "Scroll Stop"

In the Meta environment (Facebook & Instagram), the attention span is minimal. The first sentence, the hook, decides success or failure.

ChatGPT as Hook Generator

Instead of generating entire ads, it's more effective to use ChatGPT solely for developing openings. A strong hook must pull the user out of their passive scrolling trance.

Three hook formats to test with ChatGPT:

1. The provocative question: "Did your team really know what was top priority this morning?"
2. The statistical statement: "78% of all projects fail due to poor communication – here's how to prevent it."
3. The negative framing: "Stop wasting time in meetings that could have been an email."

Important: Even if ChatGPT provides the text, manual verification against advertising guidelines (especially for sensitive topics like finance or health) is indispensable.

Practical Guide: How to Brief ChatGPT Like a Pro

To get results that don't sound like a "robot," you need a structured briefing framework. At internetwarriors, we often use the following scheme:

Step 1: Role Assignment

Always start by giving the AI an identity.
"You are an experienced performance marketer and conversion copywriter. Your goal is to write texts that not only inform but also trigger an action (click/purchase)." Step 2: Context Input Feed the AI with hard facts: • Target audience: Specific persona (e.g. "CEO of small agencies, 30-50 years old, stressed"). • Offer: What is the irresistible offer? • Objection: What is the customer's biggest concern? • Tone: (e.g. "Direct, professional, without marketing clichés"). Step 3: Iteration Never settle for the first result. Use commands like: • "Make it shorter and more concise." • "Remove all adjectives like 'revolutionary' or 'unique'." • "Reword Angle 2 for an audience that is very price-sensitive." The "Warriors Check": The 5 Most Common Mistakes in AI Ads To prevent your performance campaigns from sinking into mediocrity, avoid these mistakes: Too much trust in the facts: ChatGPT sometimes hallucinates. Always manually verify USPs and data. Missing brand voice: If the AI sounds too much like a "salesperson," you'll lose your target audience's trust. Adjust the tone. Ignoring platform logic: A text that works on LinkedIn will fail miserably on Instagram. Adapt the formats. No A/B testing: Many marketers use AI to find a perfect ad. The goal, however, should be to find five radically different approaches and test them against each other. Marketing buzzword bingo: Words like "holistic," "synergistic," or "innovative" are click killers. Instruct the AI to remove these words. Outlook: The Future of Ad Creation We are moving towards an era where AI will not only adapt text but also images and videos in real time for individual users. Yet even in this world, one constant remains: Strategy beats the tool. Those who learn today to use ChatGPT as a partner for hypothesis building and angle development will have an unbeatable advantage. It's not about writing faster – it's about learning faster what works in the market. 
Conclusion: ChatGPT is Your Lever, Not Your Replacement

If ChatGPT has so far primarily served as a tool to "quickly create a text" in your setup, much of its potential remains untapped. The decisive lever lies in the systematic interlocking of psychological know-how, clean structure, and the speed of AI. This is exactly where we at internetwarriors come in. As specialists in Google Ads and Meta Ads, we help companies:

• strategically build ad copy processes,
• integrate AI meaningfully and data-drivenly into campaigns,
• develop scalable setups based not on chance but on validated hypotheses.

Do you want to use ChatGPT not just as a typewriter but as a real performance tool? We support you in sharpening your messages so that they are not only seen but convert. Contact us for a non-binding analysis of your current campaigns!

This article was created with AI assistance – but curated with the strategic mind of a warrior.
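As a minimal sketch, the three briefing steps described above (role assignment, context input, iteration) can be assembled programmatically before being sent to a chat model. The {"role", "content"} message shape follows the common chat-API convention; the function name and all sample values are invented for illustration, and no API call is made here.

```python
def build_briefing(role, audience, offer, objection, tone, task):
    """Assemble a chat-model briefing following the role/context/task
    scheme: a system message carrying the identity (Step 1) and a user
    message carrying hard facts plus the concrete task (Step 2).
    Iteration commands (Step 3) would be appended as further user turns.
    """
    context = (
        f"Target audience: {audience}\n"
        f"Offer: {offer}\n"
        f"Biggest objection: {objection}\n"
        f"Tone: {tone}"
    )
    return [
        {"role": "system", "content": role},                        # Step 1
        {"role": "user", "content": f"{context}\n\nTask: {task}"},  # Step 2
    ]

messages = build_briefing(
    role=("You are an experienced performance marketer and conversion "
          "copywriter. Your goal is to write texts that trigger an action."),
    audience="CEO of a small agency, 30-50 years old, stressed",
    offer="Project management tool, set up in 2 minutes",
    objection="Too complicated, no time to switch",
    tone="Direct, professional, without marketing cliches",
    task=("Create 10 Google Ads headlines: 3 problem, 3 benefit, 4 CTA, "
          "max 30 characters each, no repetitions."),
)
```

Keeping the briefing as structured data rather than free text makes the A/B logic from the "Warriors Check" easy: swap a single field (e.g., the objection) and you have a new, clearly labeled test variant.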

2026 and the Age of Agentic Search – When Customers Are No Longer Human

Jan 14, 2026

Axel

Zawierucha

Category:

Growth Marketing

Here you will find all parts of our blog series:
Part 1 - Why "Zero-Sum" is a misconception and the search is just beginning | find it here
Part 2 - The "December 2025 Core Update" and how to regain visibility | find it here
Part 3 - Advertising in the Age of Conversation – Why keywords are no longer enough | find it here

Blog Series: The Transformation of Search 2026 (Part 4/4)

Welcome to the future. Or better yet: welcome to the present of 2026. In the previous parts, we analyzed the traffic crash and explored new advertising tools. To conclude this series, we venture a look at what is emerging: the "Agentic Web". The biggest change ahead is not how people search, but who searches. We are experiencing the transition from information gathering to task completion.

"Preferred Sources": Democratization of the Algorithm

Let's start with a technology that is already here and will change SEO forever: "Preferred Sources". In late 2025, Google deployed this feature globally. Users can now actively mark news sources and publishers (with a star) that they prefer.

Why is this revolutionary? Until now, SEO was a technical battle against an anonymous algorithm. Now, brand loyalty becomes a direct ranking factor. If users mark your page as a "Preferred Source", your content receives a permanent boost in their feed – completely independent of what the next core update dictates.

This means:

• Community > keywords: A small, loyal fan base is more valuable than broad, volatile traffic.
• Trust as a metric: You must actively motivate your users to choose your brand as a preferred source. This is the new newsletter signup.

"Live with Search": Seeing the World Through the Camera

SEO has been text-based so far. With "Live with Search", it becomes multimodal. Users can now interact with Google in real time via camera and voice. A user films a shelf at the hardware store and asks, "Which of these anchors will hold in drywall?"
Thanks to the new Gemini native audio model, Google responds smoothly, like a human advisor in your ear. The implication for brands: their products must be visually identifiable. Packaging design becomes SEO. And: your website must answer questions posed while viewing the product, not just while searching for it.

"Agentic Search": From Searching to Doing

The term of the year 2026 is "Agentic Search". An AI agent is more than a chatbot. A chatbot gives information; an agent acts.

• Search 2024: "Show me flights to London."
• Agentic Search 2026: "Book me the cheapest flight to London on Friday, take my preferred aisle seat, and add it to my calendar."

Experts predict that the market for AI agents will explode to over 50 billion dollars by 2030. For us at internetwarriors.de, this means a radical shift towards "Search Everywhere Optimization". If your "visitor" is a bot, it doesn't need a nice design. It needs APIs, clear schema.org structures, and flawless logic. We no longer optimize websites just for human eyes, but for machine actors.

Gemini in Translate: The Global Competition

Finally, the last bastion falls: the language barrier. With the integration of Gemini into Google Translate, translations become context-sensitive and culturally nuanced. A US shop can suddenly serve the German market as if it were locally established, thanks to real-time translation. For German companies, this means: competition becomes global. But so do their opportunities.

Conclusion: The Year of Decision

The transformation of search 2026 is not a threat to those who provide quality.

• Redundant information dies out (December update).
• Transaction and expertise prevail (Liz Reid theory).
• Advertising becomes smart and context-based (AI Max).
• Brand loyalty beats the algorithm (Preferred Sources).

At internetwarriors, we are ready for this era. We help you not only to be found but to be chosen – by people and agents.
Let's discuss your strategy for 2026 together. Schedule an appointment now.
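What "APIs and clear schema.org structures" for machine actors can look like in practice: below is a minimal sketch of schema.org Product markup that an agent could parse without rendering the page. The Product/Offer vocabulary is real schema.org; the product data and URL are invented for illustration.

```python
import json

# Minimal machine-readable product record for an agent. An AI agent
# completing a task ("order drywall anchors") needs price, availability,
# and a target URL as structured fields, not as styled page text.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Drywall Anchor Set (100 pcs)",  # invented example product
    "sku": "DA-100",
    "offers": {
        "@type": "Offer",
        "priceCurrency": "EUR",
        "price": "9.99",
        "availability": "https://schema.org/InStock",
        "url": "https://example.com/products/da-100",
    },
}

print(json.dumps(product, indent=2))
```

A human never sees this block; it exists so that a machine actor can answer "is it in stock, what does it cost, where do I buy it" in one pass.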

Advertising in the Age of Conversation – Why Keywords Are No Longer Enough

Jan 13, 2026

Axel

Zawierucha

Category:

Growth Marketing

Here you will find all parts of our blog series:
Part 1 - Why "Zero-Sum" is a misconception and the search is just beginning | find it here
Part 2 - The "December 2025 Core Update" and how to regain visibility | find it here
Part 4 - 2026 and the Age of Agentic Search - When customers are no longer human | find it here

Blog Series: The Transformation of Search 2026 (Part 3/4)

In the first two parts of this series, we analyzed the economic theory behind Google's transformation (the "expansionary moment") and the brutal reality of the December update for SEOs. But while SEOs are still licking their wounds, SEA managers (Search Engine Advertising) need to reforge their weapons. The year 2026 marks the end of classic keyword dominance. With the introduction of "AI Max for Search" and the opening of "AI Mode" for advertising, Google has fundamentally changed the rules of monetization. Trying to bid on exact keywords ("Exact Match") against an AI today is like fighting drones with bows and arrows. In this article, we deconstruct the new advertising infrastructure and show you how to run ads in a world where users no longer search but engage in conversations.

AI Max: The "Intent Engine" Replaces the Keyword

For a long time, "Performance Max" (PMax) was the panacea for Google's inventory. But there was a gap for pure search campaigns. This is now filled by "AI Max for Search", a tool that Google markets as a "One-Click Power-Up".

The Problem with Keywords

Imagine a user searching: "I need a car for 3 kids and a dog that runs on electricity and costs under $50,000." Previously, you had to bid on combinations like "electric SUV", "affordable family car", or "7-seater". You had to guess what users would type. AI Max turns this principle on its head: it analyzes not the words (strings), but the intent.

How AI Max Works

AI Max uses your website and its assets as a foundation.
When a user makes the complex request above, the AI understands the context ("family + space requirement + budget constraint"). It scans your landing page, finds your model "E-Family Van", dynamically generates a fitting headline (e.g., "The perfect e-van for your family of 5"), and displays the ad – even if you have never booked the keyword "dog". The results speak clearly: beta tests show a 27% increase in conversions at a similar CPA (cost per acquisition) compared to pure keyword campaigns.

Strategic advice: Keywords become mere "signals". Your landing page and your creative assets (images, text) become the real targeting. If your landing page does not answer the question, AI Max cannot generate an ad.

The "AI Mode": Ads in the Conversation

The "AI Mode" is Google's answer to ChatGPT and Perplexity: a purely conversational interface capable of handling complex, multi-step inquiries. The crucial question for advertisers has long been: where is the space for advertising here? The answer is: Sponsored Responses.

Integration Instead of Interruption

Unlike classic search, where ads are often perceived as disruptions, Google integrates ads seamlessly into the dialogue in AI Mode.

• Scenario: A user plans a trip to Tokyo and asks AI Mode about hotels near Shibuya Crossing with a pool.
• Advertising: Instead of a banner, your hotel appears as part of the response, marked as "Sponsored", including an image and a direct booking link.

Since inquiries in AI Mode are two to three times longer than in classic search, the algorithm receives significantly more context signals. This enables targeting with unprecedented precision. A user who asks so specifically is deep in the funnel. The click rate may decrease, but the conversion rate rises.

The New Currency: Assets

To participate in AI Max and AI Mode, you need "raw material". The AI assembles the ad in real time. For you, this means:

Visual excellence: You need high-quality images and videos.
AI Max prioritizes visual elements to create "Rich Cards" in the chat.

Structured data: Your product feed (Merchant Center) must be flawless. The AI needs to know whether the shoe is "waterproof" to display it for the query "running shoes for rain".

Broad Match + Smart Bidding: This is the technical prerequisite. "Exact Match" cuts you off from the new AI interfaces. You need to give the algorithm room (Broad Match) but control it through the target (Smart Bidding on ROAS/CPA).

Conclusion for Part 3

We are moving from a "search engine" to an "answer engine". Advertising must become the answer. Banner ads are dying out; helpful, context-sensitive product suggestions are taking over. Don't throw away your keyword lists, but treat them for what they are: relics from a time when we still communicated with machines in "telegraphic language". Need help transitioning to AI Max? The SEA team at internetwarriors audits your account and prepares it for 2026.
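The feed advice above can be made concrete with a short sketch: one row of a tab-separated product feed of the kind Merchant Center ingests. The attribute names used (id, title, description, link, price, availability) are common feed attributes; the product data is invented. The point is that the attribute the AI needs ("waterproof") is stated explicitly rather than left implicit.

```python
import csv
import io

# Columns for a minimal tab-separated product feed row.
FIELDS = ["id", "title", "description", "link", "price", "availability"]

# Invented example product. "Waterproof" appears in title and description
# so a query like "running shoes for rain" can be matched to it.
row = {
    "id": "RS-042",
    "title": "Trail running shoe, waterproof",
    "description": "Waterproof membrane keeps feet dry when running in rain.",
    "link": "https://example.com/products/rs-042",
    "price": "89.99 EUR",
    "availability": "in_stock",
}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS, delimiter="\t")
writer.writeheader()
writer.writerow(row)
print(buf.getvalue())
```

In a real feed you would validate every row against the Merchant Center product data specification before upload; a missing or vague attribute here is an ad that AI Max simply cannot assemble.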

Das "December 2025 Core Update" und wie man die Sichtbarkeit zurückgewinnt

Jan 12, 2026

Axel

Zawierucha

Category:

Growth Marketing

Blog Article Banner - Depiction of a Pendulum and the Theme

Here you will find all parts of our blog series:
Part 1 - Why "Zero-Sum" is a misconception and the search is just beginning | find it here
Part 3 - Advertising in the age of conversation – Why keywords are no longer enough | find it here
Part 4 - 2026 and the Age of Agentic Search - When customers are no longer human | find it here

Blog Series: The Transformation of Search 2026 (Part 2/4)

While Liz Reid emphasized the economic stability of Google Search in interviews, dramas were unfolding in server rooms and marketing departments worldwide. The "December 2025 Core Update" will go down in history as one of the most volatile and toughest updates. It was not merely a correction; it was a system change. In this second part, we analyze the forensic data of the update, explain why redundancy is the new spam, and show you a way out of dependency with the new "Preferred Sources" feature.

Holiday Havoc: The Timing of Terror

The update began on December 11, 2025, at 9:25 AM PT and extended until January 1, 2026. For e-commerce and ad-funded publishers, this timing, in the middle of the busiest quarter, was "Holiday Havoc". The impacts were brutal and immediately measurable:

• Traffic collapse: Hundreds of webmasters reported declines in daily visitor numbers between 70% and 85%.
• Discover is dead (for many): Google Discover was particularly affected. One publisher documented a 98% drop in impressions within days, before the official announcement. Since Discover now accounts for up to two-thirds of traffic for many news sites, this was tantamount to an existential threat.
• Volatility index: The SISTRIX Update Radar recorded a value of 3.54 on the day of the announcement – a massive spike far beyond normal fluctuations.

The "Second Wave": Why it Hurt Twice

Our analyses at internetwarriors show an unusual pattern. After the initial crash on December 11, there was a deceptive calm, followed by a "second wave" of volatility around December 20.
We interpret this as a two-stage filtering process:

• Phase 1 (content): The algorithm scanned for static quality features and especially for redundancy.
• Phase 2 (user signals): In the second wave, the user data of the new AI Overviews was analyzed. Pages that ranked but didn't generate clicks, or that had high bounce rates compared to the AI response, were downgraded retroactively.

The New Ranking Poison: Redundancy

Why were so many established sites hit? The answer lies in the nature of AI Overviews. Previously, a page was valuable if it summarized information well. Today, the AI does that. The December update punished redundancy. If your page merely repeats facts already present in Google's Knowledge Graph (e.g., "How tall is Liz Reid?"), your page is technically redundant: it offers no added value over the AI. Google has now firmly integrated its "Helpful Content" signals into the core algorithm. "Helpful" today means: does this page offer a perspective, experience, or data that the AI cannot hallucinate or aggregate?

The Glimmer of Hope: "Preferred Sources"

But Google didn't just take; Google also gave. Parallel to the update and volatility, Google rolled out the "Preferred Sources" feature globally. This is perhaps the most important strategic innovation for 2026.

• What is it? Users can mark their preferred news sources in search settings or directly in "Top Stories" (via a star).
• The effect: Content from these sources gets a permanent ranking boost in the user's personal feed and appears in a separate section, "From your sources".

This fundamentally changes the SEO game. Until now, SEO was a battle for the algorithm. From now on, it is also a battle for brand loyalty. A small niche blog can outperform large publishers if it has a loyal community that actively marks it as a "Preferred Source". We see here a democratization of the algorithm: the users decide who ranks, not just the AI.
Your Survival Strategy for Q1 2026

Based on this data, we recommend the following immediate actions to our clients:

1. Redundancy audit: Check your content. If you have an article that ChatGPT could write just as well in 10 seconds, delete or revise it. Add exclusive data, expert opinions, or videos.
2. The "star" campaign: Launch campaigns encouraging users to mark you as a "Preferred Source". Explain to users how it's done. This is the new newsletter signup.
3. Diversification: Do not rely solely on Google Discover. The 98% drop shows how volatile this channel is.

The December update was painful, but it has cleansed the market. Whoever is still standing now has substance. But how do you monetize this substance in a world where keywords are losing importance? In part 3 of our series, we dive deep into the new advertising world of AI Max and AI Mode and show you how ads are placed when no one is searching anymore.

Warum "Zero-Sum" ein Irrtum ist und die Suche gerade erst beginnt

Jan 9, 2026

Axel

Zawierucha

Category:

Growth Marketing

Banner Blog Series: Transformation of Search

Here you can find all parts of our blog series:
Part 2 - The "December 2025 Core Update" and how to regain visibility | can be found here
Part 3 - Advertising in the age of conversation – Why keywords are no longer enough | can be found here
Part 4 - 2026 and the era of agentic search – When customers are no longer human | can be found here

Blog series: The Transformation of Search 2026 (Part 1/4)

Looking back at the year 2025, we see a battlefield. It was the year when theoretical discussions about AI in marketing suddenly became serious. It was the year when publishers panicked, stock prices wavered, and Google's Vice President Liz Reid said a sentence in the Wall Street Journal that would go down in the history of digital marketing: "We are in an expansionary moment." For many of our clients at internetwarriors, however, December 2025 felt less like expansion than contraction. Yet the data paints a more complex picture. In this first part of our four-part series at the start of 2026, we analyze the macroeconomic level of the "new search". We deconstruct Google's strategy and explain why classic SEO thinking focused on "clicks" must give way to new thinking in "transactions".

The Fear of the Zero-Sum Game

By the end of 2025, the SEO industry was dominated by a simple, fear-driven calculation: the "zero-sum game". The logic seemed irrefutable: if an AI (be it ChatGPT, Perplexity, or Google AI Overviews) provides the answer directly, users no longer click through to the website.

• 1 AI answer = 1 lost click for the publisher
• Therefore: the ecosystem shrinks

This fear fueled the volatility we saw at the end of the year. But in December 2025, Liz Reid, VP of Search at Google, countered this thesis in a much-discussed interview with the Wall Street Journal. Her core message: we view the cake as static when it is actually growing.

The Theory of the "Expansionary Moment"

Reid argued that we are experiencing an "expansionary moment".
Through AI's ability to process more complex queries ("Plan a 3-day trip to Paris with kids for under 500 euros"), induced demand is created. In the past, users would have broken this complex question down into ten separate searches – or not asked at all, knowing Google would fail. Today, they ask the question. The paradox Reid describes is crucial for your 2026 marketing strategy: "Making these things easier causes people to ask more questions... to get more help."

Even if the click-through rate (CTR) per individual search decreases because the AI provides the answer, the total search volume increases so significantly that absolute traffic remains stable or even grows. Reid emphasizes: "Those two things end up balancing out." For website operators, this means: traffic will not disappear, but it will shift. The simple questions ("How tall is the Eiffel Tower?") are lost to you. The complex questions ("Which hotel in Paris offers babysitting and is centrally located?") will surge.

The "Shoe Paradox": Information vs. Transaction

One of the most important strategic insights for 2026 is hidden in Reid's "shoe example". When asked about the threat to the business model, she replied dryly: "If the ads are for shoes, you might get an answer on AI Overviews, but you still have to buy the shoes. None of the AIs substitute the need for the actual pair of shoes."

This statement is invaluable. It draws a hard line through the internet:

• Information arbitrage (at risk): Websites that only aggregate information from others (e.g., "The 10 Best Running Shoes") will be replaced by AI. AI is the better aggregator.
• Transaction origin (safe): Websites that have the actual thing (the shoe, the hotel room, the service) are irreplaceable.

For our clients at internetwarriors, this means: if your business model is based on capturing and redirecting traffic without offering added value of your own, 2025 was your last good year.
But if you own the product or the expertise, your golden era is just beginning.

The Stability of Advertising Revenue: A Peek into the Books

Many analysts expected Google's advertising revenue to collapse as users clicked less. But the numbers show stability. Liz Reid confirmed that ad revenue in the environment of AI Overviews has remained "relatively stable". Why? Because the new search queries in AI Mode (more on this in part 3) are often two to three times longer than classic keywords. Longer queries mean more context. More context means more precise targeting.

• Users searching for "running shoes" might just be browsing.
• Users looking for "running shoes for a marathon under 3 hours in the rain" have their credit card ready.

The clicks become fewer, but they become more valuable. We are moving from an economy of attention (traffic) to an economy of intent.

Conclusion and Outlook

The year 2025 taught us that Google is willing to cannibalize its own core business to stay ahead in the AI race. For companies, this means: don't panic over the loss of traffic from simple keywords. Focus on the complex questions and transactions. Yet while the leadership at Google talks of expansion, the reality for many SEOs in December 2025 looked different. In the next part of this series, we analyze the "December 2025 Core Update" – an algorithmic bloodbath that enforced this new reality.

Do you have questions about your traffic development in 2025? The internetwarriors team would be happy to analyze your data and help you capitalize on the new opportunities.
