How algorithms are changing our relationship with images
A new algorithm allows people to erase reflections created when taking a picture through a window, but that just scratches the surface of what this tech can do.
One of the more interesting (and most rapidly developing) areas of machine learning is the application of software to images.
Anyone who has attempted to capture a photograph through a window is probably well aware of how a reflection can ruin the shot, but those days may be over thanks to researchers at the Massachusetts Institute of Technology. The MIT group has found a way for an algorithm to erase the "ghosted" elements from digital photos. Because many pictures are taken through double-pane glass, which almost always produces two nearly identical, slightly shifted copies of a reflection, the team was able to create an algorithm that automatically removes the unwanted layer from the photo.
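The shifted-double-copy idea can be sketched in a few lines. This is not MIT's actual method (which is far more sophisticated), just a toy illustration of its starting point: if a reflection appears twice at a fixed offset, the autocorrelation of the image gradient shows a secondary peak at that offset, which tells you where the ghost is.

```python
import numpy as np

def estimate_ghost_shift(img, max_shift=15):
    """Estimate the (dy, dx) offset between the two reflection copies.

    Toy sketch, not MIT's algorithm: two nearly identical, shifted
    copies of a reflection make the autocorrelation of the image
    gradient peak at that offset.
    """
    # Vertical gradient suppresses the smooth scene layer; a fuller
    # version would combine both gradient directions.
    g = np.diff(img, axis=0, prepend=img[:1])
    g = g - g.mean()

    # Circular autocorrelation via FFT (fine for a demo).
    f = np.fft.fft2(g)
    ac = np.fft.fftshift(np.fft.ifft2(f * np.conj(f)).real)

    cy, cx = ac.shape[0] // 2, ac.shape[1] // 2
    win = ac[cy - max_shift:cy + max_shift + 1,
             cx - max_shift:cx + max_shift + 1].copy()
    # Mask the trivial zero-shift peak and its immediate neighbours.
    win[max_shift - 1:max_shift + 2, max_shift - 1:max_shift + 2] = -np.inf
    dy, dx = np.unravel_index(np.argmax(win), win.shape)
    return dy - max_shift, dx - max_shift
```

Once the offset is known, a real system still has to separate the reflection layer from the scene behind the glass, which is where the heavy lifting in the MIT work happens.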
But this is far from the only thing humans have taught learning software to do.
Microsoft recently unveiled its how-old.net project, which uses a collection of advanced algorithms to guess the age of a person in a picture. Similar technology powers Google's image search tools and Facebook's tagging feature. There is also growing momentum behind using fingerprints or eye scans to log in to a device.
There are now multiple apps and software systems that can recognize and identify classic paintings, sometimes even beating their human counterparts. Using smartphone technology, students at Birmingham City University created the XploR mobility cane, which uses facial recognition to help blind people identify whether friends or family are nearby. And forensic anthropologists at Michigan State University revealed their Fracture Printing Interface earlier this year, software that is helping scientists identify when a skull fracture is potentially the result of child abuse.
Machine learning can also help us fact-check faked or misleading images before we share them on social media. In a blog post, Eoghan mac Suibhne describes how he used simple reverse image search tools such as Google's to debunk recycled images – for example, a "shocking" photo presented as breaking news that was actually taken years earlier. (He even jokes that reverse image searching should be part of every student's curriculum.)
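How does a search engine spot that two images are really the same photo? Google's pipeline is proprietary, but one common underlying idea is perceptual hashing: reduce an image to a short fingerprint that survives resizing, recompression, and light edits. Below is a minimal sketch of one such scheme, a "difference hash", using plain numpy and a deliberately crude nearest-neighbour downsample.

```python
import numpy as np

def dhash(img, size=8):
    """Difference hash of a grayscale image (2-D array).

    A crude stand-in for the fingerprints real reverse image search
    engines use: downsample to size x (size + 1), then record whether
    each pixel is brighter than its right-hand neighbour. Near
    duplicates (recompressed, resized, lightly edited copies) land
    within a few bits of each other.
    """
    h, w = img.shape
    # Nearest-neighbour downsampling; real tools use proper resizing.
    ys = np.arange(size) * h // size
    xs = np.arange(size + 1) * w // (size + 1)
    small = img[np.ix_(ys, xs)]
    return (small[:, 1:] > small[:, :-1]).ravel()

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(a != b))
```

Two copies of the same photo hash to nearly identical bit strings even after small edits, while unrelated images disagree on roughly half their bits, so a simple Hamming-distance threshold separates "rehashed" from genuinely new.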
But while programmers have unveiled new, exciting potential for machine learning, as with most new tech, there are those who are looking to exploit algorithms in ethically questionable ways.
The US Department of Justice went after the creators of an algorithm that allowed users to circumvent the privacy settings of Photobucket to locate and copy nude pictures stored on private accounts. The two creators were arrested on conspiracy and fraud-related charges last Friday.
Similar tech has led to other privacy concerns, from harvesting data for ad revenue to government surveillance. It seems to be increasingly difficult these days to avoid leaving behind a digital footprint.
Another issue with the way machines currently learn is that it is based entirely on user interaction. The more information that software collects about you, the more "related content" it can offer – but is that necessarily a good thing? Facebook has admitted that its algorithms create echo chambers online (which it blames on users, though to be fair, the company tweaks the software so consistently that the machine learns the Facebook way rather than entirely on its own). How useful is the age of information if you are only ever exposed to a narrow slice of ideas and opinions?
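The echo-chamber dynamic is easy to see in a toy model. The sketch below is not Facebook's algorithm – the topic names, weights, and click probabilities are all invented – but it shows how a feed that simply reinforces whatever gets clicked tends to narrow over time, even for a user with no preference at all.

```python
import random

def run_feed(steps=500, topics=("news", "sports", "memes"),
             boost=1.2, seed=0):
    """Toy model of an engagement-driven feed, not any real product.

    The recommender shows topics in proportion to their weights and
    boosts whatever gets clicked, so early clicks compound and the
    feed narrows toward one topic.
    """
    rng = random.Random(seed)
    weights = {t: 1.0 for t in topics}
    click_prob = {t: 0.5 for t in topics}  # an indifferent user
    for _ in range(steps):
        shown = rng.choices(list(topics),
                            weights=[weights[t] for t in topics])[0]
        if rng.random() < click_prob[shown]:
            weights[shown] *= boost  # reinforce the click
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}
```

Because the loop is rich-get-richer, whichever topic gets lucky early tends to crowd out the others – a feedback effect the user never chose.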
But one of the major issues with the current state of machine learning is the human biases that shine through.
Google got an earful when the first female chief executive officer it listed in its image search was a Barbie doll. Dubious Google search results extend well beyond female CEOs. A team of researchers found a consistent bias in the representation of women across multiple occupations. The most important finding, as lead author Matthew Kay wrote, is that "implicitly or explicitly—the design of search algorithms has some impact on how people perceive the world" and it "risks reinforcing or even increasing perceptions of actual gender segregation in careers."
A machine is just software and hardware with a given purpose. Nothing more. Nothing less. An algorithm is designed to follow step-by-step instructions and learn about whatever its creator wants it to.
While there are plenty of murky issues we will have to sift through with algorithms and machine learning, the train has already left the station and there is no turning back. This ride is already experiencing bumps, but we are all headed into a future where machines will know exactly what we are looking for. And one day, when the technology is advanced enough, algorithms will be able to find what we want before we even have to ask.