BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//University of Liverpool Computer Science Seminar System//v2//EN
BEGIN:VEVENT
DTSTAMP:20260408T205255Z
UID:Seminar-verification-1320@lxserverA.csc.liv.ac.uk
ORGANIZER;CN=Patrick Totzke:MAILTO:totzke@liverpool.ac.uk
DTSTART:20260122T110000
DTEND:20260122T120000
SUMMARY:Verification Series
DESCRIPTION:Yi Dong: Fine-grained Activation Manipulation by Contrastive Orthogonal Unalignment for Large Language Model\n\nLarge language models have been widely applied\, but can inadvertently encode sensitive or harmful information\, raising significant safety concerns. Machine unlearning has emerged to alleviate this concern\; however\, existing training-time unlearning approaches\, relying on coarse-grained loss combinations\, have limitations in precisely separating knowledge and balancing removal effectiveness with model utility. In contrast\, we propose Fine-grained Activation manipuLation by Contrastive Orthogonal uNalignment (FALCON)\, a novel representation-guided unlearning approach that leverages information-theoretic guidance for efficient parameter selection\, employs contrastive mechanisms to enhance representation separation\, and projects conflict gradients onto orthogonal subspaces to resolve conflicts between forgetting and retention objectives. Extensive experiments demonstrate that FALCON achieves superior unlearning effectiveness while maintaining model utility\, exhibiting robust resistance against knowledge recovery attempts.\n\nhttps://www.csc.liv.ac.uk/research/seminars/abstract.php?id=1320
LOCATION:
END:VEVENT
END:VCALENDAR
