
robertogreco : computervision   5

Matt Jones: Jumping to the End -- Practical Design Fiction on Vimeo
[Matt says (http://magicalnihilism.com/2015/03/06/my-ixd15-conference-talk-jumping-to-the-end/):

"This talk summarizes a lot of the approaches that we used in the studio at BERG, and some of those that have carried on in my work with the gang at Google Creative Lab in NYC.

Unfortunately, I can’t show a lot of that work in public, so many of the examples are from BERG days…

Many thanks to Catherine Nygaard and Ben Fullerton for inviting me (and especially to Catherine for putting up with me clowning around behind her while she was introducing me…)"]

[At ~35:00:
“[(Copy)Writers] are the fastest designers in the world. They are amazing… They are just amazing at that kind of boiling down of incredibly abstract concepts into tiny packages of cognition, language. Working with writers has been my favorite thing of the last two years.”
mattjones  berg  berglondon  google  googlecreativelab  interactiondesign  scifi  sciencefiction  designfiction  futurism  speculativefiction  julianbleecker  howwework  1970s  comics  marvel  marvelcomics  2001aspaceodyssey  fiction  speculation  technology  history  umbertoeco  design  wernerherzog  dansaffer  storytelling  stories  microinteractions  signaturemoments  worldbuilding  stanleykubrick  details  grain  grammars  computervision  ai  artificialintelligence  ui  personofinterest  culture  popculture  surveillance  networks  productdesign  canon  communication  johnthackara  macroscopes  howethink  thinking  context  patternsensing  systemsthinking  systems  mattrolandson  objects  buckminsterfuller  normanfoster  brianarthur  advertising  experiencedesign  ux  copywriting  writing  film  filmmaking  prototyping  posters  video  howwewrite  cognition  language  ara  openstudioproject  transdisciplinary  crossdisciplinary  interdisciplinary  sketching  time  change  seams  seamlessness 
march 2015 by robertogreco
Deep Belief by Jetpac - teach your phone to recognize any object on the App Store on iTunes
"Teach your iPhone to see! Teach it to recognize any object using the Jetpac Deep Belief framework running on the phone.

See the future - this is the latest in Object Recognition technology, on a phone for the first time.

The app helps you to teach the phone to recognize an object by taking a short video of that object, and then teach it what is not the object, by taking a short video of everything around, except that object. Then you can scan your surroundings with your phone camera, and it will detect when you are pointing at the object which you taught it to recognize.

We trained our Deep Belief Convolutional Neural Network on a million photos, and like a brain, it learned concepts of textures, shapes and patterns, and combined those to recognize objects. It includes an easily-trainable top layer so you can recognize the objects that you are interested in.

If you want to build custom object recognition into your own iOS app, you can download our Deep Belief SDK framework. It's an implementation of the Krizhevsky convolutional neural network architecture for object recognition in images, running in under 300ms on an iPhone 5S, and available under an open BSD License."

[via: https://medium.com/message/the-fire-phone-at-the-farmers-market-34f51c2ba885 petewarden ]

[See also: http://petewarden.com/2014/04/08/how-to-add-a-brain-to-your-smart-phone/ ]
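[The "easily-trainable top layer" idea the SDK description mentions can be sketched as follows: a fixed convolutional network turns each video frame into a feature vector, and only a small logistic-regression layer is trained on the positive-object frames versus the everything-else frames. This is a minimal illustration, not the Deep Belief SDK itself; `cnn_features` here is a deterministic stand-in for the real Krizhevsky-style network's activations.

```python
import numpy as np

def cnn_features(frame):
    """Stand-in (assumption) for the fixed CNN feature extractor.
    In the real SDK these would be activations from the convolutional
    network; here we derive a deterministic pseudo-feature vector."""
    rng = np.random.default_rng(int(frame.sum()) % (2**32))
    return rng.standard_normal(256)

def train_top_layer(pos, neg, lr=0.1, steps=200):
    """Train a logistic-regression 'top layer' on fixed features."""
    X = np.array([cnn_features(f) for f in pos + neg])
    y = np.array([1.0] * len(pos) + [0.0] * len(neg))
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
        grad = p - y                            # dLoss/dlogit
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def score(w, b, frame):
    """Probability that the taught object is in view."""
    return float(1.0 / (1.0 + np.exp(-(cnn_features(frame) @ w + b))))

# "Positive" video frames show the object; "negative" frames show
# everything around it, exactly as the app instructs.
pos = [np.full((8, 8), i, dtype=float) for i in range(10)]
neg = [np.full((8, 8), 100 + i, dtype=float) for i in range(10)]
w, b = train_top_layer(pos, neg)
```

Because only the tiny top layer is trained, this kind of personalization is cheap enough to run on the phone, which is the point of the app.]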
applications  ios  ios7  iphone  ipad  objects  objectrecognition  identification  objectidentification  mobile  phones  2014  learning  deepbelief  petewarden  ai  artificialintelligence  cameras  computervision  commonplace  deeplearning 
june 2014 by robertogreco
The Fire Phone at the farmers market — The Message — Medium
"With the exception of a few paintings, all of Amazon’s demo “items” were commercial products: things with ISBNs, bar codes, and/or spectral signatures. Things with price tags.

We did not see the Fire Phone recognize a eucalyptus tree.

There is reason to suspect the Fire Phone cannot identify a goldfinch.

And I do not think the Fire Phone can tell me which of these “items” is kale.

This last one is the most troubling, because a system that greets a bag of frozen vegetables with a bar code like an old friend but draws a blank on a basket of fresh greens at the farmers market—that’s not just technical. That’s political.

But here’s the thing: The kale is coming.

There’s an iPhone app called Deep Belief, a tech demo from programmer Pete Warden. It’s free."

"If Amazon’s Fire Phone could tell kale from Swiss chard, if it could recognize trees and birds, I think its polarity would flip entirely, and it would become a powerful ally of humanistic values. As it stands, Firefly adds itself to the forces expanding the commercial sphere, encroaching on public space, insisting that anything interesting must have a price tag. But of course, that’s Amazon: They’re in The Goldfinch detection business, not the goldfinch detection business.

If we ever do get a Firefly for all the things without price tags, we’ll probably get it from Google, a company that’s already working hard on computer vision optimized for public space. It’s lovely to imagine one of Google’s self-driving cars roaming around, looking everywhere at once, diligently noting street signs and stop lights… and noting also the trees standing alongside those streets and the birds perched alongside those lights.

Lovely, but not likely.

Maybe the National Park Service needs to get good at this.

At this point, the really deeply humanistic critics are thinking: “Give me a break. You need an app for this? Buy a bird book. Learn the names of trees.” Okay, fine. But, you know what? I have passed so much flora and fauna in my journeys around this fecund neighborhood of mine and wondered: What is that? If I had a humanistic Firefly to tell me, I’d know their names by now."
amazon  technology  robinsloan  objects  objectrecognition  identification  objectidentification  firefly  mobile  phones  2014  jeffbezos  consumption  learning  deepbelief  petewarden  ai  artificialintelligence  cameras  computervision  commonplace  deeplearning 
june 2014 by robertogreco
Machine Pareidolia: Hello Little Fella Meets FaceTracker | Ideas For Dozens
"Facial recognition techniques give computers their own flavor of pareidolia. In addition to responding to actual human faces, facial recognition systems, just like the human vision system, sometimes produce false positives, latching onto some set of features in the image as matching their model of a face. Rather than the millions of years of evolution that shapes human vision, their pareidolia is based on the details of their algorithms and the vicissitudes of the training data they’ve been exposed to.

Their pareidolia is different from ours. Different things trigger it.

After reading Jones’s post, I came up with an experiment designed to explore this difference. I decided to run all of the images from the Hello Little Fella Flickr group through FaceTracker and record the result. These images induce pareidolia in us, but would they do the same to the machine?"
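[The experiment described above is essentially a batch run: feed every image in the set through a face detector and tally which ones trigger a (false positive) detection. A minimal sketch of that loop, with a toy stand-in detector since FaceTracker itself is not scripted here (an assumption; in practice something like an OpenCV Haar cascade would fill this role):

```python
from typing import Callable, List

def run_experiment(images: List[str],
                   detect_faces: Callable[[str], int]) -> dict:
    """Tally which images the machine 'sees' a face in."""
    hits = [name for name in images if detect_faces(name) > 0]
    return {
        "total": len(images),
        "machine_saw_a_face": hits,
        "hit_rate": len(hits) / len(images) if images else 0.0,
    }

# Toy stand-in detector: pretend images whose names contain "outlet"
# trigger the machine's pareidolia (a pure assumption for the demo).
fake_detector = lambda name: 1 if "outlet" in name else 0
result = run_experiment(["outlet1.jpg", "cloud.jpg", "outlet2.jpg"],
                        fake_detector)
```

Comparing the machine's hit list against the images humans find face-like is what surfaces the difference in pareidolia the post is after.]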
2012  facerecognition  computervision  hellolittlefella  pareidolia  processing  newaesthetic  openframeworks  thenewaesthetic 
january 2012 by robertogreco
