
We show that large language models can be used to perform at-scale deanonymization. With full Internet access, our agent can re-identify Hacker News users and Anthropic Interviewer participants at high precision, given pseudonymous online profiles and conversations alone, matching what would take hours for a dedicated human investigator. We then design attacks for the closed-world setting. Given two databases of pseudonymous individuals, each containing unstructured text written by or about that individual, we implement a scalable attack pipeline that uses LLMs to: (1) extract identity-relevant features, (2) search for candidate matches via semantic embeddings, and (3) reason over top candidates to verify matches and reduce false positives. Compared to prior deanonymization work (e.g., on the Netflix prize) that required structured data or manual feature engineering, our approach works directly on raw user content across arbitrary platforms. We construct three datasets with known ground-truth data to evaluate our attacks. The first links Hacker News to LinkedIn profiles, using cross-platform references that appear in the profiles. Our second dataset matches users across Reddit movie discussion communities; and the third splits a single user’s Reddit history in time to create two pseudonymous profiles to be matched. In each setting, LLM-based methods substantially outperform classical baselines, achieving up to 68% recall at 90% precision compared to near 0% for the best non-LLM method. Our results show that the practical obscurity protecting pseudonymous users online no longer holds and that threat models for online privacy need to be reconsidered.
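The three-stage closed-world pipeline the abstract describes can be sketched as a toy stand-in. Note the heavy assumptions: the paper uses LLMs for feature extraction and for verifying top candidates, whereas here keyword filtering substitutes for extraction, a hashed bag-of-words vector substitutes for a semantic embedding, the verification step is omitted, and all names and profile texts are invented for illustration.

```python
# Toy sketch of the closed-world matching pipeline (not the paper's method):
# (1) extract identity-relevant features, (2) rank candidates by embedding
# similarity, (3) verify top candidates -- step 3 is omitted here.
import math
import re
import zlib
from collections import Counter

DIM = 512  # embedding dimensionality for the hashed bag-of-words stand-in


def extract_features(text: str) -> list[str]:
    # Step 1 stand-in: keep longer tokens as crude "identity-relevant" features.
    return [t for t in re.findall(r"[a-z]+", text.lower()) if len(t) > 3]


def embed(tokens: list[str]) -> list[float]:
    # Step 2 stand-in: deterministic hashed bag-of-words, L2-normalized,
    # in place of a real semantic embedding model.
    v = [0.0] * DIM
    for tok, count in Counter(tokens).items():
        v[zlib.crc32(tok.encode()) % DIM] += count
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]


def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))


def top_candidates(query: str, db: dict[str, str], k: int = 3) -> list[str]:
    # Rank every pseudonymous profile in the other database by similarity.
    qv = embed(extract_features(query))
    scored = sorted(
        ((cosine(qv, embed(extract_features(text))), name) for name, text in db.items()),
        reverse=True,
    )
    return [name for _, name in scored[:k]]


# Hypothetical example data (invented, not from the paper's datasets):
profile_a = "writes about rust compilers and espresso machines in seattle"
database_b = {
    "user1": "posts on espresso gear, rust compiler internals, seattle meetups",
    "user2": "talks about gardening and french cooking",
}
print(top_candidates(profile_a, database_b, k=1))
```

In the real attack, step 3 would hand the top-k candidates to an LLM to reason over the two texts and confirm or reject the match, which is what keeps false positives down at high precision.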

  • Silver Needle@lemmy.ca
    20 hours ago

    That’ll never work. The internet is messy, like a jungle: I might find bird crap somewhere, but it won’t get me the bird. I might find a turned leaf, but what turned the leaf will never be known to me. All this despite my being able to reason about and investigate the phenomena that occur.

    I view all things like particle systems: there are general trends, and sometimes we can observe how single particles travel and derive rules from their behavior. Yet we are never able to see everything at full resolution, let alone know everyone the way the “evil AI” thought experiments portray their all-knowing bots. What people say about Palantir is very similar; it falls into the category of we-don’t-know-the-rest-of-it.

    No use going paranoid over preliminary results from a tool we readily use but whose limitations we don’t fully comprehend (meaning: we don’t actually know how shitty and unreliable these tools are).