• 0 Posts
  • 37 Comments
Joined 1 year ago
Cake day: September 23rd, 2023


  • Not sure what you mean about legitimate businesses. I don’t really trust any of them anymore. Those unsubscribe pages are still full of traps, and they often don’t keep you off new mailings: they can claim you never explicitly unsubscribed from this one because it’s a new newsletter they thought you might be interested in. If I didn’t opt in, it’s spam, and I’d like to think that labeling it as such might contribute to filters picking it up for someone else too.

  • I wouldn’t say definitely. AI is of course subject to bias from its training, but humans are very much biased too, and inconsistently so. If you put a liver into a patient who has poorer access to healthcare, they are less likely to get as many life years from it as someone with better access. If that correlates with race, is this the junction where you want to make a symbolic gesture about equality by using that liver in a situation where it is likely to fail? Some people would say yes. I’d argue those efforts toward improved equality are better spent further upstream. It gets complicated quickly; if you want the process to be objective and scientifically successful, I think the less human bias the better.


  • That’s not what the article is about. I think putting more objectivity into the decisions you listed, for example, benefits the majority. Human factors will lean toward minority factions of people with wealth, power, or similar race, or toward how “nice” someone seems or how many vocal advocates they have. This paper just states that current AIs aren’t very good at what we would call moral judgment.

    It seems like algorithms would be the most objective way to do this, but I could see AI contributing by looking for more complicated outcome trends. E.g.: hey, it looks like people with this gene mutation and chronically uncontrolled hypertension tend to live less than 5 years after cardiac transplant, so consider weighting your existing algorithm by 0.5%.
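    To make that concrete, here is a minimal sketch of the idea: a deterministic allocation algorithm produces a base score, and an AI-flagged outcome trend is folded in as a small multiplicative weight adjustment. All function names, scoring rules, and numbers here are hypothetical, invented purely for illustration; real allocation systems use far more factors.

    ```python
    # Hypothetical sketch: a base allocation score, plus a small
    # downward weight when an AI-flagged risk marker is present.

    def base_allocation_score(wait_days: int, urgency: float) -> float:
        """Toy scoring rule: longer wait and higher urgency raise the score."""
        return wait_days * 0.01 + urgency * 10.0

    def adjusted_score(wait_days: int, urgency: float,
                       has_risk_marker: bool, penalty: float = 0.005) -> float:
        """Apply a 0.5% downward weight if the flagged outcome trend applies."""
        score = base_allocation_score(wait_days, urgency)
        if has_risk_marker:
            score *= (1.0 - penalty)
        return score

    print(adjusted_score(365, 7.0, has_risk_marker=False))
    print(adjusted_score(365, 7.0, has_risk_marker=True))
    ```

    The point of the structure is that the human-designed algorithm stays primary and auditable; the AI's contribution is confined to a small, explicit, reviewable adjustment rather than replacing the decision wholesale.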