Abstract
The pragmatic ability of artificial intelligence systems is a critical but under-researched area, particularly given the growing reliance on AI technologies in human communication. This study addresses the central question of what difficulties AI systems encounter in comprehending human pragmatic signals, and how these difficulties shape communication dynamics. It adopts a qualitative, descriptive approach to analyze interaction data obtained from WildChat and OpenAI logs, focusing on pragmatic functions such as requests, agreements, and questions. Data collection drew on anonymized, naturally occurring conversations, while the analysis applied two theoretical frameworks, Grice's Cooperative Principle and Relevance Theory, to assess how well AI interprets implicit meanings and cultural nuances. Key findings indicate that, while AI performs well with explicit speech acts, it struggles with indirectness, ambiguity, and cultural variability. Repeated interaction with AI also leads users to simplify their communication, raising concerns about the erosion of human pragmatic abilities. The study offers recommendations for addressing these pragmatic limitations in ways that could promote more natural and contextually appropriate human-AI communication. The implications of these findings for AI design, linguistic theory, and societal norms are significant.