
Finding one example of bias, ethical concerns, or reasoning flaws in ChatGPT outputs

312452001 陳柏丞 Roy

I’m interested in whether AI can correctly perform mathematical operations when solving questions that involve tricky mathematical logic. Take the following question for example: three people use three buckets of water in three days. How many buckets of water do nine people use in nine days? This question has successfully tricked the majority of people, and it has reportedly been used by Harvard University as an interview question to test participants’ reasoning. Almost all participants answer 9 buckets of water, because they assume one person uses one bucket of water in one day. However, this question is not as simple as it seems.
The appropriate way to interpret this question is by examining the statement “3 people use 3 buckets of water in 3 days.” From this statement, we can infer that the group of 3 people jointly uses 1 bucket of water per day (since we cannot verify that each individual person uses exactly 1 bucket of water a day). Under this interpretation, 3 people would use 9 buckets of water in 9 days. Since the question asks about 9 people, we can multiply the water usage for 3 people by three to find the total water usage for 9 people. Therefore, the answer should be 27 buckets of water.
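The arithmetic above can be sketched as a short Python check. The function name and parameters (`group_size`, `buckets_per_group_day`) are illustrative choices, encoding the essay's assumption that a group of 3 people jointly consumes 1 bucket per day:

```python
def buckets_used(people, days, group_size=3, buckets_per_group_day=1):
    """Buckets consumed, assuming usage scales with the number of
    3-person groups and with the number of days."""
    groups = people / group_size            # 9 people form 3 such groups
    return groups * buckets_per_group_day * days

print(buckets_used(3, 3))   # reproduces the given statement: 3 buckets
print(buckets_used(9, 9))   # the intended answer: 27 buckets
```

Running the first call confirms the model matches the premise, and the second call yields 27, in line with the reasoning above.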
This type of mathematical error demonstrates the AI limitation known as “difficulty with informal or ambiguous problems” in mathematics.
