Marketers Could Use AI to Make Sure You See Their Ads – Here’s How It’s Done

In summary

  • AdGazer is an AI model, trained on eye-tracking data, that predicts human attention to ads.
  • Page context accounts for roughly a third of the attention an ad receives.
  • An academic demonstration could quickly evolve into a real-world ad tech implementation.

Somewhere between the article you’re reading and the ad next to it, a silent war is being waged for your eyes. Most display ads lose that war because people simply resent them, to the point that tech companies like Perplexity and Anthropic are steering away from invasive ad formats in search of better monetization models.

But a new artificial intelligence tool from researchers at the University of Maryland and Tilburg University aims to change that by predicting, with eerie accuracy, whether you’ll actually see an ad before anyone bothers to place it there.

The tool, called AdGazer, analyzes both the ad itself and the web page content surrounding it, then predicts how long a typical viewer will look at the ad and at the brand logo, drawing on extensive historical data from advertising research.

The team trained the system on eye-tracking data from 3,531 digital display ads: real people browsed pages while wearing eye-tracking equipment, and their recorded gaze patterns became AdGazer’s training material.

When tested on ads it had never seen before, its predictions correlated with actual human gaze patterns at 0.83, a strong relationship (a perfect match would score 1.0).
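
For readers unfamiliar with the metric: that 0.83 is a Pearson correlation between predicted and observed gaze durations, not an accuracy percentage. Here is a toy illustration with made-up numbers (the real values come from the held-out test ads):

```python
import numpy as np

observed = np.array([1.2, 0.4, 2.1, 0.9, 1.7])   # actual gaze, seconds
predicted = np.array([1.0, 0.6, 1.9, 1.1, 1.5])  # model's predictions

# Pearson correlation: 1.0 would be a perfect linear match.
r = np.corrcoef(observed, predicted)[0, 1]
print(f"Pearson correlation: {r:.2f}")
```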

Unlike other tools that focus on the ad alone, AdGazer reads the entire page around it: the same luxury watch ad performs differently next to a financial news article than next to a page of sports scores.

According to the study, published in a marketing journal, the surrounding context accounts for at least 33% of the attention an ad receives and roughly 20% of the time viewers spend looking at the brand specifically. That is a big problem for marketers who have long assumed the creative itself did all the heavy lifting.

The system uses a large multimodal language model to extract high-level themes from both the ad and the surrounding page content, then scores how well the two match semantically: the ad itself against the context it sits in. These topic embeddings are fed into an XGBoost model, which combines them with lower-level visual features to produce a final attention score.
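
To make that two-stage design concrete, here is a minimal sketch of this kind of pipeline in Python. It is not the authors’ code: embed_topics() is a hypothetical stand-in for the multimodal LLM’s theme embeddings, the visual features and training labels are dummy values, and only the overall shape (topic embeddings plus a semantic-match score feeding an XGBoost regressor) follows the description above.

```python
import numpy as np
import xgboost as xgb

def embed_topics(content: str) -> np.ndarray:
    """Hypothetical stand-in for the multimodal LLM's theme embedding."""
    rng = np.random.default_rng(abs(hash(content)) % (2**32))
    return rng.normal(size=64)  # fixed-size topic vector

def semantic_match(ad_vec: np.ndarray, page_vec: np.ndarray) -> float:
    """Cosine similarity between the ad and page topic embeddings."""
    return float(ad_vec @ page_vec /
                 (np.linalg.norm(ad_vec) * np.linalg.norm(page_vec)))

def build_features(ad_text: str, page_text: str,
                   visual_features: np.ndarray) -> np.ndarray:
    """One feature row: both embeddings, their match score, and
    lower-level visual features (e.g. ad size, brand-box area)."""
    ad_vec, page_vec = embed_topics(ad_text), embed_topics(page_text)
    match = semantic_match(ad_vec, page_vec)
    return np.concatenate([ad_vec, page_vec, [match], visual_features])

# Dummy training set standing in for the 3,531-ad eye-tracking data.
X_train = np.stack([build_features(f"ad {i}", f"page {i}", np.random.rand(5))
                    for i in range(100)])
y_train = np.random.rand(100) * 3.0  # fake gaze durations in seconds

# The XGBoost regressor maps the combined features to an attention score.
model = xgb.XGBRegressor(n_estimators=200, max_depth=4)
model.fit(X_train, y_train)

x_new = build_features("luxury watch ad copy", "financial news article",
                       np.random.rand(5))
print(f"Predicted gaze time: {model.predict(x_new[None, :])[0]:.2f} s")
```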

The researchers also built an interface, Gazer 1.0, where you can upload your own ad, draw bounding boxes around the brand and key visual elements, and get a predicted looking time in seconds, along with a heat map showing which parts of the image the model expects to attract the most attention. It runs without specialized hardware, although the full LLM-based theme matching still requires a GPU environment that is not yet part of the public demo.
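
As a rough illustration of that output format, the sketch below overlays a fake attention map on a placeholder ad image and marks a user-drawn brand box. Everything here is invented for illustration; it does not use Gazer 1.0’s actual code or API.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

ad_image = np.random.rand(300, 400, 3)      # stand-in for an uploaded ad
attention_map = np.random.rand(300, 400)    # stand-in for model output
brand_box = (250, 20, 120, 60)              # user-drawn (x, y, width, height)

fig, ax = plt.subplots()
ax.imshow(ad_image)
ax.imshow(attention_map, cmap="hot", alpha=0.4)  # heat-map overlay
ax.add_patch(Rectangle(brand_box[:2], brand_box[2], brand_box[3],
                       fill=False, edgecolor="cyan", linewidth=2))
ax.set_title("Predicted attention (illustrative)")
plt.show()
```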

For now it is an academic tool. But the architecture is already there. The gap between a research demonstration and a production ad tech product is measured in months, not years.
