# HIAT
## 1. Overview

This is the implementation of "Non-Additivity in hypergraph neural networks: a new mathematical theory delineating high-order interactions in hypergraph message propagation", a hypergraph framework for harnessing the superior expressive power of high-order interactions.

Much evidence has demonstrated the superior expressive power of high-order interactions in hypergraphs compared to pairwise interactions in graphs. However, the conditions under which this superior expressive power manifests have yet to be systematically clarified for hypergraph message propagation. Motivated by non-additive interactions in complex physical systems, we attribute the superior expressive power to non-additivity and develop a comprehensive mathematical analysis framework based on multiset functions. Under this framework, existing hypergraph neural networks in cumulative (additive) form remain confined to the graph domain.
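To make the distinction concrete, the sketch below uses our own illustrative notation rather than the paper's formal definitions: a hyperedge aggregator in cumulative form decomposes into independent per-node terms, whereas a non-additive multiset function lets each node's contribution depend on the whole neighbor multiset, for example through multiset-dependent weights.

```latex
% Cumulative (additive) form: decomposes over the members of hyperedge e
f(\{x_v : v \in e\}) = \rho\Big(\sum_{v \in e} \phi(x_v)\Big)

% Non-additive form: the weight of each member depends on the whole multiset
f(\{x_v : v \in e\}) = \sum_{v \in e} \alpha_v\big(\{x_u : u \in e\}\big)\,\phi(x_v)
```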

To directly exploit non-additive high-order interactions, we propose a hypergraph framework, a hypergraph inter-neighbor attention layer (HIAT), which requires minimal additional parameters and keeps the asymptotic computational complexity unchanged. Experimental results demonstrate the advantages of hypergraph frameworks with non-pairwise additivity: an average performance improvement of 5.39 points, and up to 9.2 points, across 11 datasets spanning 6 categories in low-resource learning, together with consistent improvements in regular semi-supervised learning and tabular data representation learning.
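The snippet below is a minimal, self-contained sketch of inter-neighbor attention within a single hyperedge, written in a PyTorch style. The class and parameter names are hypothetical and this is not the released implementation (see ```hiatii.py``` and ```AllSetTrans_HIAT``` in Section 3); it only illustrates why attention over the neighbor multiset makes the aggregation non-additive.

```python
import torch
import torch.nn as nn


class InterNeighborAttention(nn.Module):
    """Illustrative sketch: attention among the member nodes of one hyperedge.

    The attention weights are normalized over the whole neighbor multiset, so
    the aggregated message is not a sum of independent per-node terms, i.e.
    the aggregation is non-additive.
    """

    def __init__(self, dim: int):
        super().__init__()
        # Shared linear projections keep the extra parameter count small.
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.scale = dim ** -0.5

    def forward(self, x_edge: torch.Tensor) -> torch.Tensor:
        # x_edge: (m, d) features of the m nodes inside one hyperedge.
        q, k, v = self.q(x_edge), self.k(x_edge), self.v(x_edge)
        attn = torch.softmax(q @ k.t() * self.scale, dim=-1)  # (m, m) weights
        return attn @ v  # each output depends jointly on all neighbors


if __name__ == "__main__":
    layer = InterNeighborAttention(dim=8)
    nodes_in_edge = torch.randn(5, 8)  # a hyperedge with 5 member nodes
    print(layer(nodes_in_edge).shape)  # torch.Size([5, 8])
```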

This study lays a theoretical underpinning for the superior expressive power of high-order interactions in hypergraph message propagation and offers an efficient, widely applicable hypergraph deep learning framework for non-pairwise additive interactions.

For more insights, empirical and theoretical analysis, and further discussion, please refer to our paper.

Thank you for reading this far.

## 2. Requirements

Models built on different libraries may perform differently. Therefore, we use the models released by the original authors. We retain the requirements section from each project's original README.md so that readers can obtain the corresponding dependencies more conveniently.

## 3. Implementation

### 3.1 HIAT_II

HIAT_II is implemented in ```hiatii.py``` within the ```low-resource learning/models``` directory, and in ```models.py``` located in the ```regular semi-supervised learning/src``` directory.

The low-resource learning implementation is adapted from the original ED-HNN codebase (https://github.com/Graph-COM/ED-HNN), while the regular semi-supervised learning version is based on the AllSet source code (https://github.com/jianhao2016/AllSet). The two implementations differ only in minor dataset-preprocessing details, for example the preprocessing of the Citeseer dataset.

In the ```low-resource learning``` directory, ```train_base.py``` with argument ```--standard_split``` is used for experiments with 20 data splits under a single random model initialization. It is intended for quick and simple reproduction of the experimental results. For full experiments involving 20 data splits across 5 random model initializations, please use ```train_base2.py``` with argument ```--standard_split```.
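For example, assuming the scripts are launched with the standard Python interpreter and that no dataset-specific flags beyond ```--standard_split``` are required (check each script's argument parser for additional options):

```
cd "low-resource learning"

# Quick reproduction: 20 data splits under a single random model initialization
python train_base.py --standard_split

# Full experiments: 20 data splits across 5 random model initializations
python train_base2.py --standard_split
```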

Unlike the standard cases, the experiments on the House dataset with feature noise levels of 0.6 and 1.0 closely follow the original implementation of ED-HNN. To reproduce them, navigate to the ```low-resource learning``` directory and run ```train_base.py``` with argument ```--dname house-committees```, which uses the configuration for the 50/25/25% data split setting. To distinguish the 1/1/98% split setting of the House dataset from the other variants, we place it in a separate directory named ```house-committees_2``` within the ```raw_data``` folder.
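A sketch of the corresponding invocation; how the feature-noise level (0.6 or 1.0) is selected is not described in this README, so any additional flag for it is omitted here:

```
cd "low-resource learning"

# House dataset, 50/25/25% split, following the original ED-HNN configuration
python train_base.py --dname house-committees
```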

### 3.2 AllsetInnerTransformer

AllsetInnerTransformer is implemented as ```AllSetTrans_HIAT``` in ```layers.py``` within the ```tabular data learning``` directory.

The tabular data learning implementation is adapted from the original HyTrel codebase (https://github.com/awslabs/hypergraph-tabular-lm).
