These are conversations I had with the ChatGPT artificial intelligence program on 15 January 2023. I was interested in how realistic and competent the algorithm was at information analysis and problem solving at different difficulty levels. The questions I posed may very well be examples of exam questions posed to dental students and dentists-in-training.
The AI was able to correctly diagnose the case and offer appropriate treatment modalities with justification. Good breadth of knowledge and a perfectly acceptable answer; quite passable for a junior dental student's exam answer.
Again, the AI was able to provide a reasonable answer to the problem presented. It offered different management modalities and identified that not all of them were suitable for the case, demonstrating reasonable breadth of knowledge. Nonetheless, when it came to case-specifics, there was a lack of depth of knowledge and of synthesis of the information provided (the reduced OVD). There was also, in my opinion, some element of bias regarding different healthcare systems and access to care.
This was the longest chat I had with the bot. You can see that it can hold its own and keep the conversation flowing. The most likely diagnosis and management are correct, demonstrating proper understanding of the question, analysis of the information provided, and synthesis of an answer. I was also taken aback by its excellent response regarding ECR. Again, there is a clear generic element to the answer without direct engagement with the specifics (the central incisor). The AI also clearly identified the problem presented regarding valid consent.
This is probably the most in-depth analysis provided, demonstrating what appears to be some degree of higher-level thinking. The answer is perfectly appropriate for this scenario and covers all reasonable ground whilst providing additional generic information.
My take:
As a pilot attempt, not bad. I intentionally tried to minimise the amount of information provided to see what the algorithm could produce. The scenario presentation is very good, and the presentation of the structured questions is equally good. However, the assessment remains basic, lacking depth and analysis, and the problem lacks complexity. It is suitable for earlier years, not for more senior learners. Nonetheless, it is a very good framework or skeleton for a case-based assessment. Promising.
My initial conclusions:
When it came to solving simple problems or common clinical scenarios faced in dentistry, ChatGPT appears to demonstrate reasonable breadth of knowledge and to provide suitable and, dare I say, passable responses. Where the algorithm falls short, for now, is the depth of analysis relating to the specifics of the problem (i.e., customisation of the generic answer and its application to the case at hand). It is also unclear how soon this program, or others, will be able to assess images such as radiographs and clinical photos (some programs already exist), which currently seems, in my opinion, a necessity to ensure the credibility and veracity of clinical assessments.
My chat with the Bot continues below 👇