

RT : —How good is for prediction in medicine?
—We don't know.
All 12 papers I've summarized are in silico, retrospec…
AI  from twitter_favs
just now by aslakr
Top story: ISS Astronauts Use Magic Leap One to Prepare for Next Mission « Magic Leap :: Next R…
AR  MR  AI  SL  VR  from twitter
5 hours ago by LibrariesVal
Magic Sketchpad
Fun AI drawing pad that finishes your sketches, with themes
7 hours ago by tcart
Business Name Generator - Powered by AI - Namelix
Generate business names with artificial intelligence
domain  name  generator  AI 
9 hours ago by theetory
Remove Background from Image –
Remove Image Background FREE
100% automatically – in 5 seconds – without a single click

design  image  tools  background  photography  ai 
11 hours ago by stuartcw
A neural network can learn to organize the world it sees into concepts—just like we do • MIT Technology Review
Karen Hao:
<p>GANs, or generative adversarial networks, are the social-media starlet of AI algorithms. They are responsible for creating the first AI painting ever sold at an art auction and for superimposing celebrity faces on the bodies of porn stars. They work by pitting two neural networks against each other to create realistic outputs based on what they are fed. Feed one lots of dog photos, and it can create completely new dogs; feed it lots of faces, and it can create new faces. 

As good as they are at causing mischief, researchers from the MIT-IBM Watson AI Lab realized GANs are also a powerful tool: because they paint what they’re “thinking,” they could give humans insight into how neural networks learn and reason. This has been something the broader research community has sought for a long time—and it’s become more important with our increasing reliance on algorithms.

“There’s a chance for us to learn what a network knows from trying to re-create the visual world,” says David Bau, an MIT PhD student who worked on the project.

So the researchers began probing a GAN’s learning mechanics by feeding it various photos of scenery—trees, grass, buildings, and sky. They wanted to see whether it would learn to organize the pixels into sensible groups without being explicitly told how.

Stunningly, over time, it did. By turning “on” and “off” various “neurons” and asking the GAN to paint what it thought, the researchers found distinct neuron clusters that had learned to represent a tree, for example. Other clusters represented grass, while still others represented walls or doors. In other words, it had managed to group tree pixels with tree pixels and door pixels with door pixels regardless of how these objects changed color from photo to photo in the training set. “These GANs are learning concepts very closely reminiscent of concepts that humans have given words to,” says Bau.</p>

OK, so it can group them as concepts. Is that the same as having a concept of them, though?
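The ablation probe the excerpt describes ("turning 'on' and 'off' various 'neurons' and asking the GAN to paint") can be sketched in miniature. The generator below is a hypothetical stand-in, not the network from the study: a tiny two-layer net in plain Python whose hidden units we zero out one at a time, measuring how much each ablation changes the output. Units whose removal changes the output most are the ones "responsible" for a feature, which is the core idea behind tying neuron clusters to concepts like "tree" or "door".

```python
import random

random.seed(0)

# Toy "generator": latent vector -> hidden units -> output "pixels".
# The weights are random stand-ins; a real GAN generator is a deep conv net.
N_LATENT, N_HIDDEN, N_OUT = 4, 8, 6
W1 = [[random.uniform(-1, 1) for _ in range(N_HIDDEN)] for _ in range(N_LATENT)]
W2 = [[random.uniform(-1, 1) for _ in range(N_OUT)] for _ in range(N_HIDDEN)]

def generate(z, ablated=frozenset()):
    """Run the toy generator, zeroing any hidden units listed in `ablated`."""
    hidden = []
    for j in range(N_HIDDEN):
        act = sum(z[i] * W1[i][j] for i in range(N_LATENT))
        act = max(0.0, act)  # ReLU
        hidden.append(0.0 if j in ablated else act)
    return [sum(hidden[j] * W2[j][k] for j in range(N_HIDDEN))
            for k in range(N_OUT)]

def ablation_effect(z, unit):
    """L1 change in the output when a single hidden unit is switched off."""
    base = generate(z)
    off = generate(z, ablated={unit})
    return sum(abs(a - b) for a, b in zip(base, off))

z = [random.uniform(-1, 1) for _ in range(N_LATENT)]
effects = [ablation_effect(z, u) for u in range(N_HIDDEN)]
# Rank units by influence on this output; in GAN dissection, the analogous
# ranking (measured over many images) links unit clusters to visual concepts.
ranking = sorted(range(N_HIDDEN), key=lambda u: -effects[u])
print(ranking)
```

In the real work the comparison is done over many generated scenes and against a segmentation of the output, but the mechanism is the same: intervene on units, regenerate, and see what disappears from the picture.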
ai  algorithms  artificialintelligence 
17 hours ago by charlesarthur
