Black-box forgetting: A new method for tailoring large AI models
Pretrained large-scale AI models sometimes need to 'forget' specific information for privacy and computational efficiency, but until now no methods existed for doing so in black-box vision-language models, whose internal details are inaccessible. Researchers have now addressed this issue with a strategy based on latent context sharing, successfully getting an image classifier to forget multiple classes it was trained on. Their findings could expand the use cases of large-scale AI models while safeguarding end users' privacy.
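To make the general idea concrete, the minimal Python sketch below illustrates one way such black-box forgetting could look in principle: low-dimensional prompt contexts, built from a shared latent plus small per-token parts (the "latent context sharing" idea), are tuned with a derivative-free search so that selected classes are misclassified while the rest are preserved, using only the model's output scores. This is not the researchers' actual method; the toy classifier, the objective, the optimizer, and all names here (query_black_box, build_contexts, forgetting_score, and so on) are hypothetical stand-ins invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- a stand-in "black-box" image classifier ---------------------------------
# The optimizer never reads these weights; it only sees output scores,
# which is what makes the setting "black box".
NUM_CLASSES, FEAT_DIM, CTX_DIM, NUM_TOKENS = 10, 32, 8, 4
_W_hidden = rng.normal(size=(NUM_CLASSES, FEAT_DIM + NUM_TOKENS * CTX_DIM))

def query_black_box(image_feats, prompt_ctx):
    """Return class scores for each image, given flattened prompt contexts."""
    ctx = np.broadcast_to(prompt_ctx.ravel(),
                          (image_feats.shape[0], NUM_TOKENS * CTX_DIM))
    inputs = np.concatenate([image_feats, ctx], axis=1)
    return inputs @ _W_hidden.T                       # (n_images, NUM_CLASSES)

# --- latent context sharing: a reduced search space ---------------------------
# Instead of searching all NUM_TOKENS * CTX_DIM values directly, each token's
# context is built from one shared vector plus a small token-specific part,
# shrinking the number of parameters the derivative-free optimizer must handle.
UNIQUE_DIM = 2

def build_contexts(shared, uniques):
    """shared: (CTX_DIM,), uniques: (NUM_TOKENS, UNIQUE_DIM) -> (NUM_TOKENS, CTX_DIM)."""
    ctx = np.tile(shared, (NUM_TOKENS, 1))
    ctx[:, :UNIQUE_DIM] += uniques                    # token-specific adjustment
    return ctx

def unpack(theta):
    shared = theta[:CTX_DIM]
    uniques = theta[CTX_DIM:].reshape(NUM_TOKENS, UNIQUE_DIM)
    return build_contexts(shared, uniques)

# --- objective: forget chosen classes, keep the rest --------------------------
FORGET = {3, 7}                                       # classes to be "forgotten"

def forgetting_score(theta, feats, labels):
    scores = query_black_box(feats, unpack(theta))
    pred = scores.argmax(axis=1)
    forget_mask = np.isin(labels, list(FORGET))
    err_on_forget = (pred[forget_mask] != labels[forget_mask]).mean()
    acc_on_keep = (pred[~forget_mask] == labels[~forget_mask]).mean()
    return err_on_forget + acc_on_keep                # higher is better

# --- simple derivative-free search (stand-in optimizer) -----------------------
feats = rng.normal(size=(200, FEAT_DIM))
labels = rng.integers(0, NUM_CLASSES, size=200)

dim = CTX_DIM + NUM_TOKENS * UNIQUE_DIM
theta, best, sigma = np.zeros(dim), -np.inf, 0.5
for step in range(300):
    candidate = theta + sigma * rng.normal(size=dim)  # mutate; no gradients used
    score = forgetting_score(candidate, feats, labels)
    if score > best:
        theta, best = candidate, score

print(f"best combined score (forget-error + keep-accuracy): {best:.3f}")
```

The point of the reduced parameterization is practical: derivative-free optimizers scale poorly with dimensionality, so sharing most of the latent context across tokens keeps the search space small enough to tune a model through its prediction interface alone.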