Researchers at Purdue’s Weldon School of Biomedical Engineering and Department of Psychological Science are working on the next big AI advancement for smartphones. It's called "deep learning," and it aims to make searching more intuitive than ever before. It takes the facial-recognition concept to its ultimate extreme to create an "everything recognition." Here's a quote with the details:

Imagine this example: you took a picture years ago of you and your friends at a concert. You want to pull up that picture again, but you do not want to scroll through the thousands of pictures on your phone. What if you could initiate a search based on the surroundings in the picture, like “concert” or “stage,” and pull it up that way?

Researchers are building what they call a “deep learning” AI function that makes the machine process information in much the same manner humans do. It learns to recognize things like a “tree” or a “car” and creates layers of information that can then be indexed and searched. Until recently, mobile devices lacked the processing power to make such development feasible, but as you might expect, the forward march of technology is tearing down that obstacle.
Associate Professor Eugenio Culurciello explained, “It analyzes the scene and puts tags on everything. When you give vision to machines, the sky’s the limit.”
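To make the idea concrete, here is a minimal sketch of the "tag everything, then search the tags" workflow the researchers describe. The filenames, tag labels, and helper functions below are all hypothetical, and the tags are hard-coded where a real system would get them from a recognition model:

```python
from collections import defaultdict

# Hypothetical output of a recognition model: each photo mapped to
# the objects/scenes detected in it (all names invented for illustration).
photo_tags = {
    "img_0142.jpg": ["concert", "stage", "crowd", "friends"],
    "img_0987.jpg": ["tree", "car", "street"],
    "img_1203.jpg": ["beach", "friends", "sunset"],
}

def build_index(tagged_photos):
    """Invert photo -> tags into tag -> photos so lookups are fast."""
    index = defaultdict(set)
    for photo, tags in tagged_photos.items():
        for tag in tags:
            index[tag].add(photo)
    return index

def search(index, tag):
    """Return all photos whose recognized content includes `tag`."""
    return sorted(index.get(tag, set()))

index = build_index(photo_tags)
print(search(index, "concert"))  # finds the concert photo by its surroundings
```

The hard part, of course, is the recognition model itself; once every photo carries tags, the search side is just an ordinary inverted index like the one above.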

Apparently, this new tech won't take too long to develop, either. The research is being funded by the Office of Naval Research, the National Science Foundation, and DARPA. Furthermore, Professor Culurciello has started a company called TeraDeep, which is currently developing commercial applications.

What do you guys think? Are these tech advances starting to get a little scary or are they simply too cool to worry about?

originally posted by dgstorm

Via: DroidForums