
MViT2025

presentation
posted on 2025-10-30, 09:28 authored by Linjun He
<p dir="ltr">Vision Transformers (ViTs) have demonstrated remarkable performance in image classification and structural modeling; however, fixed patch partitioning and static positional encoding often disrupt spatial continuity, limiting their ability to represent rotated structures and irregular boundary regions. To address these limitations, we propose the Moore-curve Vision Transformer (MViT), a ViT framework based on a recursive Moore curve. The proposed framework comprises three key components. First, a multi-order fractal mapping optimizes patch reordering and enhances the spatial coherence of the token sequence. Second, a 7×7 dynamic partitioning template, together with a boundary compensation algorithm, jointly optimizes dense structural representation and resolution adaptability. Third, a period-aware positional encoding module integrates fractal periodic parameters with convolutional features to align positional embeddings with the fractal traversal pattern. This design significantly enhances the model’s structural adaptability to complex image layouts. Experimental results show that MViT improves classification accuracy over ViT-B/16 by 0.52% and 0.31% on the CIFAR-100 and ImageNet-21k datasets, respectively, while also achieving noticeable improvements in PSNR and SSIM. Ablation and rotational perturbation experiments further confirm its robustness to rotation and localized focus variations. Moreover, MViT exhibits strong structural compatibility, maintaining stable performance across different Transformer backbones and diverse visual tasks.</p>
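<p dir="ltr">To illustrate the core idea of Moore-curve patch reordering, the sketch below generates the closed Moore curve via its standard L-system (axiom <code>LFL+F+LFL</code>, with Hilbert-style productions for <code>L</code> and <code>R</code>) and uses the resulting traversal to permute a row-major patch sequence. This is an illustrative reconstruction, not the paper's implementation: it covers only power-of-two grids (2^n × 2^n), whereas MViT's 7×7 template relies on the boundary compensation algorithm described in the abstract, which is not reproduced here.</p>

```python
def moore_curve_order(n):
    """Grid cells visited by an order-n Moore curve on a 2^n x 2^n grid.

    Uses the standard L-system:
        axiom: LFL+F+LFL
        L -> -RF+LFL+FR-
        R -> +LF-RFR-FL+
    where F moves forward one cell, + turns left, - turns right.
    """
    rules = {"L": "-RF+LFL+FR-", "R": "+LF-RFR-FL+"}
    s = "LFL+F+LFL"
    for _ in range(n - 1):
        s = "".join(rules.get(c, c) for c in s)
    x, y, dx, dy = 0, 0, 0, 1          # start at origin, heading "up"
    cells = [(x, y)]
    for c in s:
        if c == "F":
            x, y = x + dx, y + dy
            cells.append((x, y))
        elif c == "+":                  # rotate heading 90 deg left
            dx, dy = -dy, dx
        elif c == "-":                  # rotate heading 90 deg right
            dx, dy = dy, -dx
    # shift so all coordinates are non-negative
    mx = min(cx for cx, _ in cells)
    my = min(cy for _, cy in cells)
    return [(cx - mx, cy - my) for cx, cy in cells]

def reorder_patches(patches, n):
    """Permute a row-major list of 2^n * 2^n patches along the Moore curve."""
    side = 2 ** n
    order = moore_curve_order(n)
    return [patches[cy * side + cx] for cx, cy in order]
```

<p dir="ltr">Because the Moore curve is closed and every step moves to a 4-adjacent cell, consecutive tokens in the reordered sequence are always spatially adjacent patches, which is the spatial-coherence property the abstract attributes to the fractal mapping.</p>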
