This research focuses on identifying and analyzing deceptive and untrustworthy content in online media. By combining NLP techniques, large language models (LLMs), and syntactic pattern analysis, the study aims to detect manipulative language, misinformation, and hidden bias. The system will not only flag potentially unreliable documents but also provide explanations and warnings that help users understand why a source may be untrustworthy. This approach seeks to strengthen digital literacy, mitigate the impact of misinformation, and protect public discourse from harmful narratives and online manipulation.
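The flag-and-explain behavior described above can be sketched as follows. This is a minimal, hypothetical illustration only: it uses a hard-coded lexicon of manipulation cues as a stand-in for the NLP/LLM components, and every cue phrase, category, and function name here is an assumption for demonstration, not part of the proposed system.

```python
# Hypothetical cue lexicon; a real system would learn these patterns
# (or query an LLM) rather than hard-code them.
CUES = {
    "loaded language": ["shocking", "outrageous", "destroy"],
    "false urgency": ["act now", "before it's too late"],
    "appeal to anonymous authority": ["experts say", "sources claim"],
}

def flag_document(text):
    """Scan text for manipulation cues and return warnings with explanations."""
    warnings = []
    lower = text.lower()
    for cue_type, phrases in CUES.items():
        for phrase in phrases:
            if phrase in lower:
                warnings.append({
                    "type": cue_type,
                    "phrase": phrase,
                    "explanation": (
                        f"Contains '{phrase}', a common marker of {cue_type}."
                    ),
                })
    return warnings

doc = "Experts say this shocking new policy will destroy the economy. Act now!"
for w in flag_document(doc):
    print(f"[{w['type']}] {w['explanation']}")
```

The key design point the sketch captures is that each flag carries a human-readable explanation, so the output supports the digital-literacy goal rather than acting as an opaque classifier.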