> [!info]
> Input: [[Social Media Posts|posts]], [[Social Media Account|accounts]]
> Output: conclusion about content artificialness
>
> Types: [[Behavioural Weakness|behavioural]]
> Weakness: [[SOWEL-7. Copying Content]]
### Explanation
Artificial content can be detected with several complementary methods: statistical analysis, which flags large numbers of identical or near-identical messages posted at roughly the same time (see the sketch below), and behavioural analysis, whether automated or manual. These techniques are increasingly combined with approaches that determine whether the content was generated by Large Language Models (LLMs).
This kind of analysis is essential for identifying fake accounts and bots that spread deception and disinformation, engage in astroturfing, and disseminate state-sponsored propaganda. Detecting and removing such accounts helps curb the spread of false information, protect public opinion, and preserve the integrity of online discourse.
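A minimal sketch of the statistical approach, assuming posts arrive as dicts with `author`, `text`, and `time` (datetime) fields; the window and similarity threshold are arbitrary illustrations, not tuned values:
```python
from datetime import timedelta

def shingles(text, n=3):
    """Character n-gram shingles for fuzzy text comparison."""
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def copy_clusters(posts, window=timedelta(minutes=10), threshold=0.8):
    """Group near-identical posts published within a short window, the
    statistical signature of copy-paste amplification."""
    clusters = []
    for post in sorted(posts, key=lambda p: p["time"]):
        sig = shingles(post["text"])
        for cluster in clusters:
            ref = cluster[0]
            if (post["time"] - ref["time"] <= window
                    and jaccard(sig, shingles(ref["text"])) >= threshold):
                cluster.append(post)
                break
        else:
            clusters.append([post])
    # Many distinct authors posting the same text at once is suspicious.
    return [c for c in clusters if len({p["author"] for p in c}) >= 3]
```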
Approaches to detecting bots and artificial content include the following; an illustrative code sketch for each item appears after the list:
- **LLMs and neural networks**: Apply word vectors and convolutional layers to the text of a single social media post to classify it as bot-generated. A model trained on profile features such as the number of likes, retweets, and followers can also help identify bots.
- **Coordinated link sharing behaviour**: Detects sets of accounts that share the same news link within a short time window, an indicator of inauthentic behaviour.
- **Graph analysis and classification algorithms**: Uses similarity matrices, Principal Component Analysis (PCA), and Support Vector Machines (SVMs) to detect bots from network structure.
- **Attractor+**: Focuses on synchronised and coordinated retweeting, using subgraph detection algorithms such as Cohesive, Louvain, and Attractor+ to identify malicious retweeter groups.
- **Enhanced PeerHunter**: Clusters bots into communities based on mutual contacts and analyses network-flow-level behaviour to detect botnets.
- **Text mining methods**: Analyses post text to detect spamming behaviour and to identify bots by the writing style and content of their posts.
- **Behavioural analysis**: Observes behaviour patterns, such as bursty posting and coordinated activity, to detect bots.
- **Template-based spam filtration**: Uses features such as celebrity names, eye-catching action words, and URLs to filter out spam and detect bots.
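For the neural-network item, a tiny text CNN (word embeddings, parallel convolutions, max-pooling, a bot/not-bot logit per post) can be sketched in PyTorch. The architecture and all hyperparameters here are illustrative, not taken from any specific paper:
```python
import torch
import torch.nn as nn

class PostCNN(nn.Module):
    """Tiny text CNN: word embeddings -> parallel 1D convolutions ->
    max-pooling over time -> a single bot/not-bot logit per post."""
    def __init__(self, vocab_size, embed_dim=100, num_filters=64,
                 kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes)
        self.fc = nn.Linear(num_filters * len(kernel_sizes), 1)

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)    # (batch, embed, seq)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1)).squeeze(1)

model = PostCNN(vocab_size=30_000)
dummy_posts = torch.randint(1, 30_000, (8, 40))  # 8 tokenized posts, length 40
print(torch.sigmoid(model(dummy_posts)))         # per-post bot probability
```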
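Coordinated link sharing can be approximated by counting how often pairs of accounts post the same URL within seconds of each other (the logic behind tools such as CooRnet). The input shape and thresholds below are assumptions:
```python
from collections import defaultdict
from datetime import timedelta
from itertools import combinations

def coordinated_pairs(posts, window=timedelta(seconds=60), min_hits=3):
    """Count how often each pair of accounts shares the same URL within
    `window` of each other.  `posts`: list of dicts with 'account', 'url'
    and 'time' (datetime) keys (an assumed input shape)."""
    by_url = defaultdict(list)
    for p in posts:
        by_url[p["url"]].append(p)

    pair_hits = defaultdict(int)
    for shares in by_url.values():
        shares.sort(key=lambda p: p["time"])
        for a, b in combinations(shares, 2):
            if a["account"] != b["account"] and b["time"] - a["time"] <= window:
                pair_hits[frozenset((a["account"], b["account"]))] += 1

    # Pairs that repeatedly co-share links near-simultaneously look coordinated.
    return {pair: n for pair, n in pair_hits.items() if n >= min_hits}
```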
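The graph-analysis pipeline (similarity matrix, then PCA, then an SVM) can be sketched with scikit-learn; the data here is random and only demonstrates the shape of the computation:
```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Random stand-in data: each row is an account's follower-adjacency vector.
rng = np.random.default_rng(0)
adjacency = rng.integers(0, 2, size=(200, 500)).astype(float)
labels = rng.integers(0, 2, size=200)            # toy labels, 1 = bot

# Cosine-similarity matrix over accounts: bots created in bulk tend to
# have near-identical neighbourhoods, hence near-identical rows.
rows = adjacency / (np.linalg.norm(adjacency, axis=1, keepdims=True) + 1e-9)
similarity = rows @ rows.T                       # (200, 200)

# Compress the similarity rows with PCA, then separate classes with an SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(similarity, labels)
print(clf.score(similarity, labels))             # meaningless on random data
```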
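Of the subgraph algorithms named for Attractor+, Louvain is readily available in networkx; the sketch below only shows the community-detection step over a toy retweet graph (Attractor+ itself is not in standard libraries):
```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Toy retweet graph: an edge (u, v, w) means u retweeted v w times.
G = nx.Graph()
G.add_weighted_edges_from([
    ("acct_a", "source_x", 30), ("acct_b", "source_x", 28),
    ("acct_c", "source_x", 31),                  # lockstep retweeter group
    ("acct_d", "source_y", 1), ("acct_e", "source_z", 2),  # background noise
])

# Louvain splits the graph into densely connected communities; tight groups
# retweeting the same source at high volume are candidates for review.
for community in louvain_communities(G, weight="weight", seed=0):
    if len(community) >= 3:
        print("suspicious retweeter group:", sorted(community))
```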
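PeerHunter proper works on network flows, which are rarely available to social-media analysts; the sketch below only carries its mutual-contacts idea over to a social graph, so treat it as a loose analogue with an assumed input shape:
```python
def mutual_contact_clusters(contacts, threshold=0.5):
    """Greedily group accounts whose contact sets overlap heavily
    (Jaccard similarity), a rough social-graph analogue of
    PeerHunter-style mutual-contact clustering.
    `contacts` maps account -> set of contacts."""
    clusters = []
    for account, friends in contacts.items():
        for cluster in clusters:
            seed_friends = contacts[cluster[0]]
            union = friends | seed_friends
            if union and len(friends & seed_friends) / len(union) >= threshold:
                cluster.append(account)
                break
        else:
            clusters.append([account])
    return [c for c in clusters if len(c) > 1]

# Toy example: three accounts sharing the same contact pool cluster together.
contacts = {
    "bot1": {"hub", "c2", "c3"},
    "bot2": {"hub", "c2", "c4"},
    "bot3": {"hub", "c3", "c4"},
    "human": {"mum", "friend1", "friend2"},
}
print(mutual_contact_clusters(contacts))  # [['bot1', 'bot2', 'bot3']]
```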
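Text-mining-based detection is commonly built from a TF-IDF representation and a simple classifier; a toy scikit-learn example (the corpus is invented):
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented toy corpus; a real system trains on posts labelled bot/human.
posts = [
    "WIN a FREE iPhone click here http://spam.example/1",
    "had a lovely walk by the river this morning",
    "FREE followers instantly click http://spam.example/2",
    "reading a good book tonight, any recommendations?",
]
labels = [1, 0, 1, 0]  # 1 = spam/bot-like writing style

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(posts, labels)
print(model.predict(["click here for a FREE prize http://spam.example/3"]))
```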
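For behavioural analysis, one standard signal is the burstiness of inter-post times (the Goh-Barabási B coefficient): scheduled bots post with machine-like regularity, while extreme bursts suggest scripted amplification. The timestamps below are synthetic:
```python
import numpy as np

def burstiness(timestamps):
    """Goh-Barabasi burstiness B = (sigma - mu) / (sigma + mu) over
    inter-post gaps: -1 = perfectly regular, ~0 = Poisson-like (human-ish),
    +1 = extremely bursty. Both extremes are atypical of organic posting."""
    gaps = np.diff(np.sort(np.asarray(timestamps, dtype=float)))
    if gaps.size < 2:
        return 0.0
    mu, sigma = gaps.mean(), gaps.std()
    return (sigma - mu) / (sigma + mu) if (sigma + mu) > 0 else 0.0

# A bot posting exactly every 300 s vs. a human with irregular gaps.
bot_times = [i * 300 for i in range(50)]
human_times = np.cumsum(np.random.default_rng(0).exponential(300, 50))
print(burstiness(bot_times))    # -1.0: machine-like regularity
print(burstiness(human_times))  # near 0: Poisson-like
```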
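Finally, a template-based filter can be as simple as checking for the co-occurrence of the feature types named in the last item; the word lists here are placeholders that a real system would learn from data:
```python
import re

# Hypothetical feature lists; real systems derive these from labelled spam.
CELEBRITIES = {"taylor swift", "elon musk", "mrbeast"}
ACTION_WORDS = {"shocking", "exposed", "you won't believe", "leaked"}
URL_RE = re.compile(r"https?://\S+")

def template_features(text):
    """Binary template features: celebrity bait + clickbait action + URL
    is a classic spam template."""
    low = text.lower()
    return {
        "celebrity": any(name in low for name in CELEBRITIES),
        "action": any(word in low for word in ACTION_WORDS),
        "url": bool(URL_RE.search(text)),
    }

def looks_like_template_spam(text):
    f = template_features(text)
    return f["celebrity"] and f["action"] and f["url"]

print(looks_like_template_spam(
    "SHOCKING: Elon Musk exposed! Watch now https://sp.am/x"))  # True
```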
### Examples
- [GIJN - How to Identify Bots, Trolls, and Botnets](https://gijn.org/stories/how-to-identify-bots-trolls-and-botnets/)
- [Identifying and characterizing superspreaders of low-credibility content on Twitter](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0302201)
- [BotNet Detection on Social Media](https://arxiv.org/pdf/2110.05661)
- [Hoaxy: A Platform for Tracking Online Misinformation](https://www.researchgate.net/publication/301841797_Hoaxy_A_Platform_for_Tracking_Online_Misinformation)
### Tools
- [Botnadzor](https://botnadzor.org/) - a tool to detect and explore bots on VK (VKontakte)
- [DeBot](https://www.cs.unm.edu/~chavoshi/debot/) - real-time X (Twitter) bot detection via activity correlation
- [Botometer X](https://botometer.osome.iu.edu/) - checks the activity of an X (Twitter) account and scores how likely it is to be a bot
- [Hoaxy](https://hoaxy.osome.iu.edu/) - visualises the spread of information on X (Twitter)
- [Bot Repository](https://botometer.osome.iu.edu/bot-repository/index.html) - a centralized place to share annotated datasets of Twitter social bots
### See also
- [[How to detect a fake account]]