Facial recognition is a technology that many companies are developing. While most major technology firms have already built their own systems, some are still working to improve accuracy. Amid this, it has emerged that American multinational IT firm IBM took nearly a million images from Flickr and used them to train its facial recognition program. The company even shared the images with researchers who were not a part of IBM.
According to a report in NBC News, the people in the images did not consent to their photos being used to develop facial recognition systems. The report added that while the photographers had permission to take pictures of these people, the subjects did not know that their photos were annotated with facial recognition notes and would eventually be used to train algorithms.
Speaking to NBC, a photographer said, “None of the people I photographed had any idea their images were being used in this way.”
However, we must add here that these images were not compiled by IBM. They were part of a collection of 99.2 million photos called YFCC100M, which was put together by Flickr's former owner Yahoo for research purposes. The images were shared under a Creative Commons license, which broadly means they can be used freely, though certain limitations still apply. Using them to train facial recognition systems that could profile people on the basis of ethnicity, however, may not be something Creative Commons licenses anticipated.
However, IBM told The Verge that it would never “participate in work involving racial profiling.” But we should note that when IBM announced the collection in January this year, the company said it needed such a large database to train its systems for “fairness” and accuracy.
IBM further told the technology website, “We take the privacy of individuals very seriously and have taken great care to comply with privacy principles. Individuals can opt-out of this dataset.” It added that the dataset could be accessed only by verified researchers and contained only images that were publicly available.