It might seem a straightforward question, given how ubiquitous LLMs (Large Language Models) and AI in general have become. Yet the very nature of the question calls for an overview of the philosophical landscape and the epistemic underpinnings of the normative sense attached to the word ‘trust’ in the question itself.
Once you understand that landscape, you will be able not only to see how to approach Grok, but also to pick the right channels and sources for the information you take from the internet.
So let’s first briefly understand how trust works among humans in general, and then see how it is shaped in the relationship between humans and AI, particularly Grok.
How does trust work in general?
The Cambridge Dictionary defines trust as ‘to believe that someone is good and honest and will not harm you, or that something is safe and reliable.’ Even though trust is too complex a concept to be captured by a single robust definition, almost all of its variations fall somewhere within this one.
In other words, when it comes to interpersonal relations, a human being trusts another mainly because of two qualities: their ‘reliability’ and their sense of ‘responsibility.’ Reliability means how knowledgeable and dependable a person has proven to be in your experience with them. A sense of responsibility means their innate disposition to act responsibly towards fellow human beings.
For example, if you visit a physician and trust their diagnosis, that is mainly because you see them as reliable, on account of their degree, their knowledge of medical science, and your experience with them. You also trust them because you see them as a professional with a sense of responsibility towards your well-being. Essentially, those two properties, reliability and responsibility, are what cement trust in interpersonal relations.
It is also helpful to keep in mind, for what follows, that it is this inherent sense of responsibility that makes humans capable of autonomy and answerable for their actions. That is the key element to consider when it comes to human beings’ relationship with AI and their dependence on it.
How does trust work between humans and AI?
Philosophers and scientists have debated for many years whether AI should be trusted. From the available information, it seems that, unlike in interpersonal relationships, humans cannot fully trust AI, because this relationship rests on only one of the two key elements of trust identified above: ‘reliability.’
AI completely lacks conscientiousness, and hence has no sense of responsibility. The only remaining basis for trusting it is ‘reliability,’ which itself depends on several factors. And even that element is not very promising, as malfunctions keep surfacing in AI systems across different fields.
Taking that as our basis, let’s look at whether you can trust X’s AI chatbot.
What is Grok and can you trust it?
Grok is an LLM created by xAI and woven into the fabric of the platform formerly known as Twitter, now X. Users across the globe ask it questions of all kinds, and it answers and interacts with them according to its programming. However, its responses often contain elements of bias and can also be bent by the way prompts are phrased.
Like other chatbots, at Grok’s heart lies pattern matching over the vast amount of text and code it was trained on. Inherently, it cannot produce an independent analysis that is free of bias and detached from the preconceived notions embedded in that data.
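To make that concrete, here is a minimal toy sketch in Python. It is nothing like Grok’s actual scale or architecture, and the corpus and function names are invented purely for illustration. It simply counts which word follows which in its training text, so whatever imbalance that text contains, the prediction echoes:

```python
# Toy illustration only (not Grok's real mechanism): a bigram "model"
# that predicts the next word purely from patterns in its training text.
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count, for each word, which words follow it in the training text."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model: dict, word: str):
    """Return the most frequent follower of `word` in the training data."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# The model's "knowledge" is only what the corpus says, repeated back.
model = train("the protest was peaceful . the protest was violent . the protest was violent")
print(predict_next(model, "was"))  # -> "violent", because that pattern dominates the corpus
```

A real LLM is vastly more sophisticated, but the underlying point stands: the output is a statistical reflection of the training data, not an independent judgment.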
This phenomenon was on display during a period of communal unrest in India. The situation involved mass protests and was difficult to make sense of. Users hurried to X to understand rapidly evolving events, and many tagged Grok hoping for credible answers.
The chatbot’s answers proved shockingly malleable, often reflecting the bias embedded in the very question it was asked.
And the danger goes beyond this confirmation bias. The inconsistency of its responses lets users cherry-pick only the information that aligns with their existing views, which can deepen polarization.
X itself warns you against trusting Grok
The platform explicitly warns that Grok ‘may confidently provide factually incorrect information, summarize, or miss some context. We encourage you to independently verify any information you receive.’
Moreover, Grok’s reliance on ‘real-time data from X posts’ to formulate its responses means its knowledge base can draw on unverified, inauthentic, and inflammatory material.
To sum up, given all the context above, you have to be extremely vigilant and careful when taking information from Grok. Instead of blindly trusting its output, cross-check it against well-established, credible sources of information.