After evidence surfaced in 2018 that leading face-analysis algorithms were less accurate for people with darker skin, companies including Google and Meta adopted measures of skin tone to test the effectiveness of their AI software. New research from Sony suggests that those tests are blind to a crucial aspect of the diversity of human skin color.
Because they express skin tone only as a sliding scale from lightest to darkest, or from white to black, today's common measures ignore the contribution of yellow and red hues to the range of human skin, according to the Sony researchers. They found that generative AI systems, image-cropping algorithms, and photo
→ Continue reading at WIRED