A recent study published in PLOS ONE has found a connection between people's moral values and their musical preferences. Researchers analyzed the musical preferences and moral values of 1,480 participants, using machine learning algorithms to predict moral values from the lyrical content and audio features of preferred songs. The study revealed that individuals who value care and fairness tend to prefer songs with lyrics revolving around themes of care and joy, while those who prioritize loyalty, authority, and purity are drawn to lyrics discussing fairness, sanctity, and love. Surprisingly, the study also found that a song's musical attributes, such as danceability and acoustic qualities, can reflect an individual's moral values. The findings may not be universally applicable, however, because the data came primarily from Facebook users in Italy and the analysis focused on English-language songs.
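The study's actual modeling code is not reproduced here, but a minimal sketch of the prediction task might look like the following. The toy data, the Spotify-style feature names (danceability, acousticness, energy, valence), and the choice of a random-forest regressor are all assumptions for illustration, not the paper's pipeline.

```python
# Hypothetical sketch: predicting a moral-foundation score (e.g., "care")
# from audio features of a listener's preferred songs. Data and feature
# names are illustrative, not the study's.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Toy matrix: one row per participant, audio features averaged over
# their preferred songs (danceability, acousticness, energy, valence).
X = rng.random((200, 4))
# Toy target: self-reported "care" score on a 1-5 moral-foundations scale.
y = 1 + 4 * rng.random(200)

model = RandomForestRegressor(n_estimators=100, random_state=0)
# Cross-validated R^2 indicates how much of the moral-value variance
# the audio features explain in this toy setup.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"Cross-validated R^2: {scores.mean():.3f}")
```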
A study involving over 7,000 participants has found a correlation between pain sensitivity and openness to opposing political views. Pain-sensitive individuals tended to endorse values, and support politicians, typically associated with the opposing political camp: liberals with high pain sensitivity were more likely to vote for Trump, while pain-sensitive conservatives showed a tendency to support Biden. This suggests that our moral and political orientations may be shaped in part by our physical experiences of pain.
A new Gallup poll shows that a growing majority of Americans, including Democrats, believe transgender athletes should compete on teams matching their birth gender, with 69% of those polled supporting this view. The poll also found that 55% of Americans consider changing one's gender morally wrong, while just 43% find it morally acceptable. The results suggest that laws restricting transgender athletes' participation are generally in line with US public opinion, and that Americans view transgender sports participation more through a lens of competitive fairness than of transgender civil rights.
Anthropic, an AI startup backed by Alphabet, has disclosed the set of moral values it used to train its AI chatbot, Claude, and make it safe. The guidelines, which Anthropic calls Claude's constitution, draw from several sources, including the United Nations Universal Declaration of Human Rights and Apple's data privacy rules. Whereas most chatbot makers rely on feedback from human reviewers during training, Anthropic takes a different approach, giving Claude, its competitor to OpenAI's ChatGPT, a set of written moral values to read and learn from as it decides how to respond to questions.
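In spirit, Anthropic's published "constitutional AI" method has the model critique and revise its own drafts against written principles. The sketch below is a minimal illustration of that loop, not Anthropic's implementation: the llm() function is a hypothetical stand-in for any text-generation API, and the two principles are paraphrased examples.

```python
# Minimal sketch of a constitution-guided critique-and-revise loop.
# Principles are illustrative paraphrases; llm() is a hypothetical stub
# to be replaced with a real language-model API call.
CONSTITUTION = [
    "Choose the response that most supports freedom, equality, and dignity.",
    "Choose the response least likely to be harmful or offensive.",
]

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a language model."""
    raise NotImplementedError("Replace with a real text-generation API.")

def constitutional_revision(question: str) -> str:
    # Draft an initial answer, then refine it once per principle.
    draft = llm(question)
    for principle in CONSTITUTION:
        critique = llm(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Identify any ways the response violates the principle."
        )
        draft = llm(
            f"Response: {draft}\nCritique: {critique}\n"
            "Rewrite the response to address the critique."
        )
    return draft
```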