Are You Asking the Right Questions of Your AI?
The human interactions are critical!
Many of us believe artificial intelligence (AI) and machine learning (ML) technologies such as ChatGPT, Bard, Bing, Amazon Bedrock and others have the potential to revolutionise the processes that drive businesses, governments and society. Yet we cannot overlook a critical aspect of this revolution: the human interactions between people, AI applications and machines, and how questions must be formulated to generate the right responses and unlock that potential.
These systems work by using neural networks to process user input and generate human-like responses. In essence, each system has its own features and capabilities: answering questions factually, providing recommendations, and even writing engaging stories and essays.
AI systems like ChatGPT, Bard and Bing are trained on large and complex collections of text data and can produce remarkably coherent, contextually relevant replies that resemble human writing. The data they run on, and how that data is gathered, will have a bearing on who and what influences the process. These capabilities have made ChatGPT useful in sectors including education (much to the apprehension of educators), mental health support, customer service, entertainment and more.
Practice makes perfect?
We must remember these powerful tools also have limitations. For example, when I applied both ChatGPT and Bard to problem statements as an experiment, some of the results Bard generated were simply made up. It appeared to have a sense of humour, and it did remind me of its limitations, strengths and weaknesses, for example:
“I'm a text-based AI and can't assist with that”.
“I can search Google, but I am still under development. I am not always able to find the most relevant or accurate information. I am working on improving my search skills, and I hope that in the future I will be able to provide you with the information you need even more reliably. In the meantime, if you are looking for specific information, you may be better off doing a Google search yourself…”
On another occasion, ChatGPT was asked to write about events that occurred in three different countries over a specific period in the 1930s. It described real events, but not ones that took place on the dates or in the year in question. Other evaluators have also noted that ChatGPT can generate offensive or inappropriate responses, especially when it encounters complex or sensitive topics.
Unlike ChatGPT, Bard sources all its output data directly from the internet, which makes it especially suitable for applications where users need up-to-date information. However, that data often contains biases that can affect the model's outputs. Google's Bard has broad application potential, but it lacks some of the capabilities demonstrated by ChatGPT, including the ability to write stories and essays. Instead, it is primarily geared towards simplifying Google searches by augmenting results and presenting answers in a conversational way.
Understanding the strengths of the different tools enables the user to consider how to apply them and whether they can act as complementary tools for specific tasks.
How about the questions and the process of questioning?
Is there a process that will generate meaningful results in the shortest time? Well, the simple answer is it depends.
Know your domain
Firstly, it is essential to have domain knowledge in order to judge the validity of the responses you receive. Understanding the terms and meanings used, and the technologies you choose to deploy, is also essential. Of course, you may say this is nothing new, and you would be surprised if these did not appear as items on most managers’ checklists. Maybe so; nevertheless, leadership and management must also consider the impact that social, cultural and language nuances have on the questions asked, the questioning techniques adopted and their outcomes.
Take language, for example: various forms of English are spoken internationally. Pidgin English is widely spoken, and its words carry specific local connotations and meanings, so the same words can be interpreted quite differently. Cultural and language nuances are factors that can influence how AI systems interpret and respond to data from different perspectives.
Garbage in, garbage out (GIGO)
The initial data against which queries are made, and how questions are framed, are fundamental to the quality of the outputs generated. The adage “garbage in, garbage out” (GIGO) refers to the principle that flawed input data results in flawed output. Managing input quality is essential to avoid wrong responses and AI hallucination, that is, a scenario where AI systems generate false or misleading outputs based on faulty input data.
This leads me to suggest that the ability to ask the right questions of AI and machine learning technologies is critical for several reasons:
1. It ensures that the outputs generated by these systems are accurate and relevant to the institution’s or individual’s needs.
2. It helps to identify potential biases and errors in the data that can lead to false or misleading outputs.
3. It allows organisations to identify the areas where AI and machine learning can be most effective, leading to more significant gains in efficiency, productivity, and innovation.
Your environment matters
Cultural and language nuances may create challenges for AI development, particularly where the language in use has a grammatical structure different from standard English (assuming English is the language of use). If an AI system is not trained to understand such nuances, it may misinterpret data and generate false or misleading outputs.
AI hallucination can also occur if input data is flawed or if there are biases in the data that the system is initially trained on. It is imperative to understand and address these issues before applying solutions to mission critical business processes which require information that is dependable and accurate.
The challenges that non-native speakers face in formulating the right questions for systems like Bard and ChatGPT will inevitably affect the outputs they receive. Language proficiency is therefore also a crucial factor when inputting data and asking questions in natural language, to ensure that the output generated is not only accurate but meaningful.
If the input data is flawed or the questions are poorly worded, the AI system may provide inaccurate or irrelevant results: GIGO in action, where the quality of the output is only as good as the quality of the input. Moreover, language proficiency also matters when interpreting the output generated by AI systems. Non-native speakers of English (or whichever language is relevant) may struggle to interpret the output or understand the underlying concepts and implications. This can lead to misinterpretations, misunderstandings and, ultimately, poor decision-making.
How do you address these challenges?
Language proficiency is therefore a critical factor in using AI systems effectively and avoiding GIGO. Addressing the challenges non-native speakers may face in using these systems should be uppermost in implementers’ strategies. Firstly, ensure non-native speakers receive the necessary training and support: approaches such as language classes, cultural immersion programmes and mentorship opportunities can be adopted. AI systems must also be designed to be language-agnostic, i.e., able to accommodate a wide range of languages and dialects.
The Take Away
A Process of Refinement - Framing, Interpreting, Analysing, and Reframing: While the developers of AI systems are responsible for ensuring that accurate and dependable data forms the base information, this process is also very much the responsibility of users, who need to learn how to frame questions, interpret and analyse outputs, and reframe them for refinement.
GIGO highlights the importance of ensuring input data is accurate, relevant and free from bias. Overall, the power of AI to drive innovation, efficiency and productivity is predicated on asking the right questions in the right way: a process of refinement incorporating framing, interpreting, analysing and reframing.
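For teams that want to make this refinement loop concrete, it can be sketched in code. The following is a minimal illustration, not a real integration: `ask_model` is a hypothetical placeholder for whichever AI API you use, and the adequacy check is deliberately crude, simply verifying that the answer mentions the domain terms you require.

```python
# A minimal sketch of the frame -> interpret -> analyse -> reframe loop.
# `ask_model` is a hypothetical stand-in for any chat-style AI API call.

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder: swap in a real AI API call here."""
    return f"Response to: {prompt}"

def looks_adequate(response: str, required_terms: list[str]) -> bool:
    """Analyse: a crude domain check - does the answer mention key terms?"""
    return all(term.lower() in response.lower() for term in required_terms)

def refine(question: str, required_terms: list[str], max_rounds: int = 3) -> str:
    """Frame a question, interpret the output, and reframe until adequate."""
    prompt = question
    response = ""
    for _ in range(max_rounds):
        response = ask_model(prompt)                  # frame and ask
        if looks_adequate(response, required_terms):  # interpret and analyse
            return response
        # Reframe: restate the question with the context the answer lacked
        missing = [t for t in required_terms
                   if t.lower() not in response.lower()]
        prompt = f"{question} Please specifically address: {', '.join(missing)}."
    return response  # best effort after max_rounds
```

The essential point is not the code itself but the shape of the loop: each round uses your own domain knowledge, encoded here as `required_terms`, to judge the response and sharpen the next question.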
If you are working on a project or about to embark on one, get in touch to discuss how we can support you. Complete the attached form or contact us on email@example.com.