Images not only depict objects but also encapsulate rich interactions between them. However, generating faithful, high-fidelity images of multiple entities interacting with each other remains a long-standing challenge. While pre-trained text-to-image models are trained on large-scale datasets to follow diverse text instructions, they struggle to generate accurate interactions, likely due to the scarcity of training data for uncommon object interactions.
This paper introduces InterActing, an interaction-focused dataset with 1000 fine-grained prompts covering three key scenarios: (1) functional and action-based interactions, (2) compositional spatial relationships, and (3) multi-subject interactions.
To address interaction generation challenges, we propose a decomposition-augmented refinement procedure. Our approach, DetailScribe, built on Stable Diffusion 3.5, leverages LLMs to decompose interactions into finer-grained concepts, uses a VLM to critique generated images, and applies targeted interventions within the diffusion process during refinement. Automatic and human evaluations show significantly improved image quality, demonstrating the potential of enhanced inference strategies.
| Scenario | Subclass | Examples |
|---|---|---|
| Functional and Action-Based Interactions (600) | Tool Manipulation (227) | cutting, painting, sailing, stirring, taking a photo |
| | Physical Contact (373) | sculpting snow, stacking, holding |
| Compositional Spatial Relationships (200) | Abstract Layouts (183) | tic-tac-toe, table, atom, solar system, forest, tree, bookshelf |
| | Geometric Patterns (17) | zig-zag pattern, circle, center |
| Multi-subject Interactions (200) | Interaction (200) | huddling, high-five, collaborating to lift, weaving leaves together, sharing food |
Table 1: The InterActing dataset contains 1000 text-to-image prompts. We categorize them into subclasses and report the number of prompts in each.
Because automatically assessing whether an image aligns with the prompt's description is challenging, we primarily rely on human evaluation, referred to as the human Likert scale. We further explore the use of VLMs and pre-trained metrics for automatic evaluation.
Since these automatic evaluations are inherently noisier, we compare their agreement with human preferences on sampled image pairs generated by all models. Overall, the VLM evaluator achieves the highest agreement at 90.4%, compared with ImageReward (73.6%), CLIPScore (70.4%), and BLIP-VQA (67.6%).
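To make the agreement computation concrete, here is a minimal sketch, assuming the human preferences and metric scores for each sampled image pair are collected as parallel lists (the data layout and function name are illustrative assumptions, not the paper's released code):

```python
from typing import Sequence


def pairwise_agreement(
    human_prefs: Sequence[int],        # 0 if humans prefer image A of a pair, 1 for image B
    metric_scores_a: Sequence[float],  # metric score of image A in each pair
    metric_scores_b: Sequence[float],  # metric score of image B in each pair
) -> float:
    """Fraction of image pairs on which the metric prefers the same image as humans."""
    agree = 0
    for pref, score_a, score_b in zip(human_prefs, metric_scores_a, metric_scores_b):
        metric_pref = 0 if score_a > score_b else 1
        agree += int(metric_pref == pref)
    return agree / len(human_prefs)


# Hypothetical usage: on three pairs, humans prefer A, B, A; the metric agrees on the first two.
print(pairwise_agreement([0, 1, 0], [0.8, 0.3, 0.2], [0.5, 0.6, 0.7]))  # -> 0.67
```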
DetailScribe operates in three stages:
1) given an input natural language prompt, a large language model hierarchically decomposes it into detailed sub-concepts;
2) an initial image is generated from the prompt using a text-to-image model, followed by a vision-language model critique conditioned on both the decomposed sub-concepts and the generated image;
3) based on the critique, the prompt is refined and a re-denoising process corrects the identified errors, yielding a more faithful and realistic image (a minimal sketch of the full pipeline follows this list).
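Below is a minimal sketch of the three-stage procedure, with the model calls injected as callables. The helper names and signatures are illustrative assumptions; the actual DetailScribe implementation intervenes inside the Stable Diffusion 3.5 denoising process rather than exposing this interface.

```python
from typing import Any, Callable


def detailscribe(
    prompt: str,
    llm_decompose: Callable[[str], list[str]],           # LLM: prompt -> sub-concepts
    t2i_generate: Callable[[str], Any],                  # text-to-image model (e.g., SD 3.5)
    vlm_critique: Callable[[Any, str, list[str]], str],  # VLM: (image, prompt, sub-concepts) -> critique
    refine_prompt: Callable[[str, str], str],            # (prompt, critique) -> refined prompt
    re_denoise: Callable[[Any, str], Any],               # targeted re-denoising of the image
) -> Any:
    # Stage 1: hierarchically decompose the prompt into fine-grained sub-concepts.
    sub_concepts = llm_decompose(prompt)

    # Stage 2: generate an initial image, then critique it against the prompt
    # and the decomposed sub-concepts.
    image = t2i_generate(prompt)
    critique = vlm_critique(image, prompt, sub_concepts)

    # Stage 3: refine the prompt with the critique and re-denoise to correct errors.
    refined = refine_prompt(prompt, critique)
    return re_denoise(image, refined)
```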
We evaluate the models on the three scenarios of the InterActing dataset and report the results separately. Because high-quality human evaluation is difficult to scale, we sampled 50 prompts from InterActing for both human and automatic evaluation and compared the agreement between them. Table 2 shows the average human/VLM Likert scores (1-5) and pre-trained metrics on the three scenarios of the sampled InterActing dataset.
We report the human Likert scale (Human Evaluation), the VLM evaluation score (GPT-4o), as well as ImageReward (ImReward), CLIPScore (CLIPS.), and BLIP-VQA (B-VQA) scores.
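For reference, a CLIPScore-style prompt-image similarity can be sketched with the Hugging Face transformers CLIP implementation as below; the checkpoint, file name, and rescaling are assumptions for illustration and need not match the paper's evaluation setup:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")        # assumed checkpoint
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompt = "a chef stirring soup with a wooden spoon"                      # example prompt
image = Image.open("generated.png")                                      # hypothetical file

inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Cosine similarity between the projected text and image embeddings;
# the original CLIPScore formulation rescales this as 2.5 * max(similarity, 0).
similarity = torch.nn.functional.cosine_similarity(
    outputs.image_embeds, outputs.text_embeds
).item()
clip_score = 2.5 * max(similarity, 0.0)
print(f"cosine similarity: {similarity:.3f}, CLIPScore: {clip_score:.3f}")
```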
We further present the automatic evaluation on the entire InterActing dataset in Figure 4.
@misc{gu2025generatingfinedetailsentity,
title={Generating Fine Details of Entity Interactions},
author={Xinyi Gu and Jiayuan Mao},
year={2025},
eprint={2504.08714},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2504.08714}
}