Department of Computer Science, Purdue University
Abstract
Advancements in open-source pre-trained backbones make it relatively easy to fine-tune a model for new tasks. However, this lowered entry barrier poses potential risks, e.g., bad actors developing models for harmful applications. A question arises: Is it possible to develop a pre-trained model that is difficult to fine-tune for certain downstream tasks? To begin studying this, we focus on few-shot classification (FSC). Specifically, we investigate methods to make FSC more challenging for a set of restricted classes while maintaining the performance on other classes. We propose to meta-learn over the pre-trained backbone in a manner that renders it a "poor initialization". Our proposed Learning to Obstruct (LTO) algorithm successfully obstructs four FSC methods across three datasets, including ImageNet and CIFAR100 for image classification, as well as CelebA for attribute classification.
Paper Excerpt
The goal of LTO is to obstruct the learning of specific classes in a restricted class set
\( \mathcal{R} \), when utilizing few-shot classification (FSC) methods. At the same time, we aim to ensure
that the model's performance on the other class set \( \mathcal{R}' \) remains unaffected. We consider the scenario
where the FSC algorithms \( \mathbf{F} = (\hat{F}, F) \) use a pre-trained backbone and are known to the obstructor.
To achieve this, we introduce the Learning To Obstruct (LTO) algorithm \( \mathbf{A} \) that modifies the pre-trained
backbone's parameters \( \theta^p \) to create a poor initialization \( \mathbf{A}(\theta^p) \). When the FSC algorithm
is applied to \( \mathbf{A}(\theta^p) \), the resulting model performs poorly on the restricted classes but not on the other classes.
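The alternating objective described above, i.e., degrading post-fine-tuning performance on \( \mathcal{R} \) while preserving performance on \( \mathcal{R}' \), can be sketched in PyTorch. This is an illustrative first-order approximation (FOMAML-style) under assumed hyperparameters and synthetic data, not the authors' implementation: the inner loop simulates the FSC learner fine-tuning from the current initialization, and the outer loop updates the initialization \( \theta^p \) to maximize the restricted-class loss after that simulated fine-tuning.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny synthetic stand-ins (illustrative only): a linear "backbone" theta^p,
# restricted classes R = {0, 1}, and other classes R' = {2, 3}.
model = nn.Linear(8, 4)
loss_fn = nn.CrossEntropyLoss()
meta_opt = torch.optim.SGD(model.parameters(), lr=0.05)

x_r, y_r = torch.randn(16, 8), torch.randint(0, 2, (16,))  # restricted set R
x_o, y_o = torch.randn(16, 8), torch.randint(2, 4, (16,))  # other set R'

def inner_finetune(init, x, y, steps=3, lr=0.1):
    """Simulate the FSC learner fine-tuning a copy of the initialization."""
    learner = copy.deepcopy(init)
    opt = torch.optim.SGD(learner.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(learner(x), y).backward()
        opt.step()
    return learner

for _ in range(20):
    meta_opt.zero_grad()
    # (1) Obstruct: after simulated fine-tuning on R, we want HIGH loss on R,
    #     so we descend on the negated loss. First-order shortcut: copy the
    #     fine-tuned learner's gradient back onto the initialization.
    learner = inner_finetune(model, x_r, y_r)
    obstruct_loss = -loss_fn(learner(x_r), y_r)
    grads = torch.autograd.grad(obstruct_loss, list(learner.parameters()))
    for p, g in zip(model.parameters(), grads):
        p.grad = g.clone()
    # (2) Preserve: keep the initialization itself accurate on R'
    #     (gradient accumulates into the same .grad buffers).
    loss_fn(model(x_o), y_o).backward()
    meta_opt.step()
```

A faithful implementation would differentiate through the inner fine-tuning steps (second-order gradients) and apply the actual FSC algorithms \( \mathbf{F} \) in the inner loop; the first-order shortcut here is only meant to make the alternating obstruct/preserve structure concrete.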
@inproceedings{zheng2024learning,
  title     = {Learning to Obstruct Few-Shot Image Classification over Restricted Classes},
  author    = {Zheng, Amber Yijia$^\ast$ and Yang, Chiao-An$^\ast$ and Yeh, Raymond A},
  booktitle = {Proc. ECCV},
  year      = {2024},
  note      = {$^\ast$ equal contribution}
}