Meta-LoRA:
Meta-Learning LoRA Components
for Domain-Aware ID Personalization

1 Pixery Labs, 2 METU Dept. of Computer Engineering, 3 METU Dept. of Electrical and Electronics Engineering
arXiv preprint, 2025

Abstract

Recent advancements in text-to-image generative models, particularly latent diffusion models (LDMs), have demonstrated remarkable capabilities in synthesizing high-quality images from textual prompts. However, achieving identity personalization, i.e., ensuring that a model consistently generates subject-specific outputs from limited reference images, remains a fundamental challenge. To address this, we introduce Meta-Low-Rank Adaptation (Meta-LoRA), a novel framework that leverages meta-learning to encode domain-specific priors into LoRA-based identity personalization. Our method introduces a structured three-layer LoRA architecture that separates identity-agnostic knowledge from identity-specific adaptation. In the first stage, the LoRA Meta-Down layers are meta-trained across multiple subjects, learning a shared manifold that captures general identity-related features. In the second stage, only the LoRA-Mid and LoRA-Up layers are optimized to specialize on a given subject, significantly reducing adaptation time while improving identity fidelity. To evaluate our approach, we introduce Meta-PHD, a new benchmark dataset for identity personalization, and compare Meta-LoRA against state-of-the-art methods. Our results demonstrate that Meta-LoRA achieves superior identity retention, computational efficiency, and adaptability across diverse identity conditions.
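The two-stage scheme described above can be sketched in a few lines. This is a minimal, hypothetical illustration with numpy: the layer widths, ranks, initializations, and variable names (`meta_down`, `lora_mid`, `lora_up`) are assumptions for illustration, not the paper's actual configuration. The key idea it shows is the factorization of the low-rank update into three components, where the Meta-Down factor is frozen after stage-1 meta-training and only the Mid and Up factors are optimized per subject in stage 2.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 64, 64      # hypothetical layer widths
r_down, r_mid = 16, 4     # hypothetical ranks of the three-layer factorization

# Stage 1 (meta-training): the Meta-Down factor is learned across many
# identities, capturing a shared identity-agnostic manifold; it is then frozen.
meta_down = rng.normal(scale=0.02, size=(r_down, d_in))  # frozen after stage 1

# Stage 2 (personalization): only Mid and Up are optimized for a new subject.
lora_mid = rng.normal(scale=0.02, size=(r_mid, r_down))  # trainable
lora_up = np.zeros((d_out, r_mid))                       # trainable, zero-init

def meta_lora_delta(x):
    """Low-rank update added on top of the frozen base layer's output:
    x -> x @ (Up @ Mid @ MetaDown)^T, applied as three small matmuls."""
    return x @ meta_down.T @ lora_mid.T @ lora_up.T

x = rng.normal(size=(1, d_in))
delta = meta_lora_delta(x)
print(delta.shape)  # (1, 64); all zeros before stage-2 training (zero-init Up)
```

Zero-initializing the Up factor is a common LoRA convention: the adapted model starts exactly at the base model, and personalization moves it away smoothly. Per-subject adaptation is cheap here because only the Mid and Up matrices carry gradients.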

Overview

[Figure] The overall architecture of our Meta-LoRA model.

Metric Results

[Figure] The performance of our Meta-LoRA model on the Meta-PHD dataset.

Visual Results

- Figure 1: Comparison between models for the 'female' class.

- Figure 2: Comparison between PuLID and our model.

- Figure 3: Extended comparisons between the state-of-the-art models and our model.

BibTeX

@misc{topal2025metalorametalearningloracomponents,
  title={Meta-LoRA: Meta-Learning LoRA Components for Domain-Aware ID Personalization},
  author={Barış Batuhan Topal and Umut Özyurt and Zafer Doğan Budak and Ramazan Gokberk Cinbis},
  year={2025},
  eprint={2503.22352},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2503.22352},
}