Seeking moral advice from large language models comes with risk of hidden biases

More and more people are turning to large language models like ChatGPT for life advice and free therapy, sometimes perceiving them as a space free from human biases. A new study published in the Proceedings of the National Academy of Sciences finds otherwise and warns against relying on LLMs to resolve moral dilemmas, as their responses exhibit significant cognitive biases.
