
artificial_intelligence

reconstitute the world « Bethany Nowviskie
What kinds of indigenous knowledge do we neglect to represent—or fail to understand—in our digital libraries? What tacit and embodied understandings? What animal perspectives? What do we in fact choose, through those failures, to extinguish from history—and what does that mean at this precise cultural and technological moment? On the other hand, what sorts of records and recordable things should we let go—should we be working as hard as possible to protect from machine learning for the good of vulnerable communities and creatures—knowing, as we do, that technologies of collection and analysis are by nature tools of surveillance and structures of extractive power? And, finally—from an elegiac archive, a library of endings, can we foster new kinds of human—or at the very least, humane—agency? ...

The Biodiversity Heritage Library is a Smithsonian-based international consortium and digitization collective of botanical and natural history libraries. ... “Mining Biodiversity” was the theme of a productive 2015 NEH Digging into Data grant, which coupled novel text-mining and visualization techniques with crowdsourcing and outreach. And projects like PaleoDeepDive and GeoDeepDive represent AI-assisted efforts to pull out so-called “dark data” from its bibliographic tar pits: those idiosyncratic features in scientific journal literature, like tables and figures, that have not easily lent themselves to structured searching and the assembly of comparative datasets. ... Meanwhile projects like Digital Life, out of the University of Massachusetts, “aim to preserve the heritage of life on Earth through creating and sharing high-quality… 3D models of living organisms.” ... They do this through photogrammetry, circling living creatures with their awesomely-named Beastcam™, and converting the resulting, overlapping 2D images to highly accurate 3D representations. And thus the field of biodiversity informatics continues to grow and pose data curation challenges of various sorts, ranging from the preservation and analysis of 3D models, to large-scale environmental data generated through remote sensing, to the collection and analysis of, for instance, audio data relating to deforestation of the Brazilian rainforest. ... The use of machine learning in monitoring contexts of various sorts is rapidly becoming the norm, and it is big business more often than community-led conservation. Microsoft has recently announced an “AI for Earth” initiative which commits $50 million in grant funds over the next five years for “artificial intelligence projects that support clean water, agriculture, climate, and biodiversity”...
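A minimal sketch of the acoustic-monitoring idea mentioned above (illustrative only; the file names, labels, and the logistic-regression choice are my assumptions, not any project's actual pipeline):

```python
# Illustrative sketch: flag logging sounds (e.g. chainsaws) in rainforest
# audio clips. Paths and labels are hypothetical stand-ins.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path, sr=16000):
    """Summarise an audio clip as a mean log-mel spectrum."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    return librosa.power_to_db(mel).mean(axis=1)   # one 64-dim vector per clip

# Hypothetical labelled clips: 1 = logging activity, 0 = ambient forest
paths  = ["clip_000.wav", "clip_001.wav", "clip_002.wav", "clip_003.wav"]
labels = [1, 0, 1, 0]

X = np.stack([clip_features(p) for p in paths])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# New field recording: probability that it contains logging activity
print(clf.predict_proba(clip_features("new_clip.wav").reshape(1, -1))[0, 1])
```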

And because we no longer design these little agents to understand things—we simply filter them based on their ability to pass tests—we don’t really understand, ourselves, how they work. Mostly we just understand those tests. ...

a truly successful set of machine learning algorithms can begin to produce its own training data to advance in understanding and pass more real-world tests. This is the generation of completely imagined, fictional and truly speculative collections: manufactured botany, or book pages—leaves that never were. It’s information that the machine has dreamt up from its past encounters with real-world data...
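As a toy illustration of that “dreaming” (my sketch, not the speaker's method): fit a simple generative model to real specimen measurements, then sample specimens that never existed. The feature names and numbers below are invented.

```python
# Fit a generative model to "real" measurements, then sample fictional ones:
# leaves that never were. All data here is synthetic, for illustration only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for digitised herbarium measurements: [leaf length mm, leaf width mm]
real_leaves = np.column_stack([
    rng.normal(70, 8, 500),   # lengths
    rng.normal(30, 4, 500),   # widths
])

model = GaussianMixture(n_components=3, random_state=0).fit(real_leaves)

# Samples drawn from the model's learned distribution of past encounters
imagined_leaves, _ = model.sample(10)
print(imagined_leaves.round(1))
```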

Abelardo Gil-Fournier is applying this technology to his artistic work on predictive landscapes, presented a couple of weeks ago as a workshop in Linz, called Machine Learning: An Earthology of Moving Landforms. This is (I quote) “ongoing research on the image character and temporality of planetary surfaces.” As his collaborator Jussi Parikka puts it, “we can experiment with the correlation of an ‘imaged’ past (the satellite time-lapses) with a machine-generated ‘imaged’ future and test how futures work; how do predicted images compare against historical datasets and time-lapses and present their own … temporal landscapes meant to run just a bit ahead of [their] time.”...
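A schematic of the predictive-landscape idea (an assumed setup, not Gil-Fournier's actual pipeline): learn a mapping from frame t of a time-lapse to frame t+1, then roll it forward to “image” a future surface.

```python
# Next-frame prediction on a satellite time-lapse, reduced to its simplest
# form. The time-lapse here is synthetic; a real one would be stacked imagery.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Stand-in time-lapse: 40 frames of a 16x16 surface, drifting over time
frames = np.cumsum(rng.normal(0, 0.1, (40, 16 * 16)), axis=0)

X, y = frames[:-1], frames[1:]          # pairs (frame t, frame t+1)
model = Ridge(alpha=1.0).fit(X, y)

# Roll forward: a predicted frame "just a bit ahead of its time"
future = model.predict(frames[-1:])
print(future.reshape(16, 16).shape)
```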

Here we have Nao Tokui’s “Imaginary Soundscapes,” a “web-based sound installation, where viewers can freely walk around Google Street View and immerse themselves in an artificial soundscape [that is based on the visual qualities of real-world spaces, but has been wholly] “imagined” by… deep learning models.”...
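One plausible mechanism for such an installation (a guess, not Tokui's documented implementation): embed the current Street View image and retrieve the sound whose embedding sits closest in a shared image/audio space.

```python
# Cross-modal retrieval sketch: pick the sound clip whose (hypothetical,
# pre-computed) embedding is most similar to the current image's embedding.
import numpy as np

rng = np.random.default_rng(2)

sound_names = ["surf", "traffic", "birdsong", "crowd"]
sound_embeddings = rng.normal(size=(4, 128))     # stand-in audio embeddings
image_embedding = rng.normal(size=128)           # stand-in street-view embedding

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cosine(image_embedding, s) for s in sound_embeddings]
print("play:", sound_names[int(np.argmax(scores))])
```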

I’d love to see, for instance, an artistic or analytical machine learning experiment using BHL collections and Scottish flower painter Patrick Syme’s 1814 update to Werner’s Nomenclature of Colours. This book has been recently digitized and republished by the Smithsonian. It contains “the color names used by naturalists, zoologists and archaeologists through the 19th century,” and it shaped Charles Darwin’s formal chromatic vocabulary on the voyage of the Beagle. How might we use machine learning to identify references to these standardized colors in images and texts throughout Western library collections, and put them into conversation with indigenous color-names and perspectives on creatures living and lost?
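A small sketch of one piece of such an experiment: mapping a sampled pixel from a digitised plate to the nearest entry in Werner's nomenclature. The RGB values below are rough stand-ins, not authoritative Werner swatches.

```python
# Nearest-named-color lookup against a (hypothetical) Werner palette.
import numpy as np

werner = {
    "Snow White":      (245, 245, 240),
    "Prussian Blue":   ( 40,  60, 100),
    "Gamboge Yellow":  (230, 170,  40),
    "Blood Red":       (120,  25,  30),
    "Verdigris Green": ( 65, 145, 125),
}

def nearest_werner(rgb):
    names = list(werner)
    swatches = np.array([werner[n] for n in names], dtype=float)
    dists = np.linalg.norm(swatches - np.array(rgb, dtype=float), axis=1)
    return names[int(np.argmin(dists))]

print(nearest_werner((60, 70, 110)))   # -> "Prussian Blue"
```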
libraries  archives  ecology  machine_vision  artificial_intelligence  erasure  privacy  security  climate_change  speculation  deep_fakes 
2 days ago by shannon_mattern
The future of computing is at the edge
June 6, 2018 | FT | by Richard Waters in San Francisco.

With so much data being produced, sending it all to the cloud does not make economic sense.

The economics of big data — and the machine learning algorithms that feed on it — have been a gift to the leading cloud computing companies. By drawing data-intensive tasks into their massive, centralised facilities, companies such as Amazon, Microsoft and Google have thrived by bringing down the unit costs of computing.

But artificial intelligence is also starting to feed a very different paradigm of computing. This is one that pushes more data-crunching out to the network “edge” — the name given to the many computing devices that intersect with the real world, from internet-connected cameras and smartwatches to autonomous cars. And it is fuelling a wave of new start-ups which, backers claim, represent the next significant architectural shift in computing. ... Xnor.ai, an early-stage AI software start-up that raised $12m this month, is typical of this new wave. Led by Ali Farhadi, an associate professor at the University of Washington, the company develops machine learning algorithms that can be run on extremely low-cost gadgets. Its image recognition software, for instance, can operate on a Raspberry Pi, a tiny computer costing just $5, designed to teach the basics of computer science. ... That could make it more economical to analyse data on the spot rather than shipping it to the cloud. One possible use: a large number of cheap cameras around the home with the brains to recognise visitors, or tell the difference between a burglar and a cat.
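For a sense of what on-device inference looks like in practice, here is a hedged sketch using the generic TensorFlow Lite runtime on a Raspberry Pi-class board; the model file is hypothetical, and this is not Xnor.ai's own software.

```python
# On-device ("edge") image classification sketch with a small quantised
# TFLite model. The model file name is an assumption for illustration.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter   # pip install tflite-runtime

interpreter = Interpreter(model_path="visitor_vs_cat.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Prepare one camera frame at the model's expected input size
h, w = inp["shape"][1], inp["shape"][2]
frame = np.asarray(Image.open("doorstep.jpg").resize((w, h)), dtype=np.uint8)
interpreter.set_tensor(inp["index"], frame[np.newaxis, ...])

interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]
print("class scores (computed locally, never leaving the device):", scores)
```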

The overwhelming volume of data that will soon be generated by billions of devices such as these upends the logic of data centralisation, according to Mr Farhadi. “We like to say that the cloud is a way to scale AI, but to me it’s a roadblock to AI,” he said. “There is no cloud that can digest this much data.”

“The need for this is being driven by the mass of information being collected at the edge,” added Peter Levine, a partner at Silicon Valley venture capital firm Andreessen Horowitz and investor in a number of “edge” start-ups. “The real expense is going to be shipping all that data back to the cloud to be processed when it doesn’t need to be.”
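A back-of-envelope version of that argument, with every number an assumption chosen only to show the orders of magnitude:

```python
# Why "shipping all that data back to the cloud" is the real expense:
# all figures below are assumptions, purely illustrative.
cameras         = 10                      # home cameras
bitrate_mbps    = 2                       # per camera, compressed video
seconds_per_day = 24 * 3600

raw_gb_per_day = cameras * bitrate_mbps / 8 * seconds_per_day / 1e3   # GB/day

events_per_day  = 200                     # detections worth reporting
event_kb        = 2                       # a label, timestamp, small crop
edge_gb_per_day = events_per_day * event_kb / 1e6

print(f"ship everything to the cloud: ~{raw_gb_per_day:,.0f} GB/day")
print(f"ship only edge inference results: ~{edge_gb_per_day:.4f} GB/day")
```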

Other factors add to the attractions of processing data close to where it is collected. Latency — the lag that comes from sending information to a distant data centre and waiting for results to be returned — is debilitating for some applications, such as driverless cars that need to react instantly. And processing data on the device, rather than sending it to the servers of a large cloud company, also makes privacy far easier to protect.

Tobias Knaup, co-founder of Mesosphere, another US start-up, uses a recent computing truism to sum up the trend: “Data has gravity.” ... Nor are the boundaries between cloud and edge distinct. Data collected locally is frequently needed to retrain machine learning algorithms to keep them relevant, a computing-intensive task best handled in the cloud. Companies such as Mesosphere — which raised $125m this month, taking the total to more than $250m — are betting that this will give rise to technologies that move information and applications to where they are best handled, from data centres out to the edge and vice versa. ... Microsoft, for its part, has unveiled image-recognition software capable of running on a local device rather than in its own data centres.
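A schematic of that cloud/edge split (a sketch of the pattern, not Mesosphere's product): retrain centrally on data gathered at the edge, then ship the small model artifact back out to devices for local inference.

```python
# Retrain in the "cloud", deploy to the "edge". Data, file names and the
# model choice are all placeholders for the pattern described above.
import joblib
import numpy as np
from sklearn.linear_model import SGDClassifier

def retrain_in_cloud(X_new, y_new, model=None):
    """Compute-heavy step, run in a data centre on freshly collected data."""
    model = model or SGDClassifier()
    model.partial_fit(X_new, y_new, classes=np.array([0, 1]))
    joblib.dump(model, "model_v2.joblib")      # artifact shipped to devices
    return model

def infer_at_edge(x):
    """Cheap step, run on the device; only the result would leave it."""
    model = joblib.load("model_v2.joblib")
    return model.predict(x.reshape(1, -1))[0]

rng = np.random.default_rng(3)
retrain_in_cloud(rng.normal(size=(100, 8)), rng.integers(0, 2, 100))
print(infer_at_edge(rng.normal(size=8)))
```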
cloud_computing  decentralization  edge  future  Industrial_Internet  IT  artificial_intelligence  centralization  machine_learning  Microsoft  data_centers 
11 days ago by jerryking
Google and Repsol team up to boost oil refinery efficiency
June 3, 2018 | Financial Times | Anjli Raval in London

Repsol will use Cloud ML, Google’s machine learning tool, to optimise the performance of its 120,000 barrel-a-day Tarragona oil refinery on the east coast of Spain, near Barcelona.

A refinery is made up of multiple divisions, including the unit that distils crude into various components to be processed into fuels such as gasoline and diesel, and the unit that converts heavy residual oils into lighter, more valuable products.

Google’s technology will be used to analyse hundreds of variables that measure pressure, temperature, flows and processing rates among other functions for each unit at Tarragona. Repsol hopes this will boost margins by 30 cents per barrel at the facility and plans to roll out the technologies across its five other refineries.
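As a purely illustrative stand-in for that kind of analysis (not Repsol's or Google's actual Cloud ML pipeline), one could regress a unit's per-barrel margin on a handful of sensor variables; all names and numbers below are invented.

```python
# Toy version of "analyse hundreds of variables per unit": fit a regression
# of margin on sensor readings to see which settings move it. Synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 5000   # historical readings for one unit

pressure    = rng.normal(10, 1, n)
temperature = rng.normal(350, 15, n)
flow        = rng.normal(120, 10, n)

# Hypothetical relationship plus noise, standing in for real plant behaviour
margin = 0.02 * temperature - 0.3 * pressure + 0.01 * flow + rng.normal(0, 0.2, n)

X = np.column_stack([pressure, temperature, flow])
model = LinearRegression().fit(X, margin)
print(dict(zip(["pressure", "temperature", "flow"], model.coef_.round(3))))
```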

Energy companies are increasingly looking to use the type of analytics often employed by companies such as Google and Amazon for consumer data across their operations, from boosting the performance of drilling rigs to helping to deliver greater returns from refineries.

“Until very recently, [oil and gas] companies have not had the tools or the capabilities needed to operate these assets at their maximum capacity,” McKinsey, the professional services firm, said in a recent report. “Analytics tools and techniques have advanced far and fast.”
artificial_intelligence  efficiencies  energy  Google  machine_learning  oil_industry  Repsol  Silicon_Valley  tools 
14 days ago by jerryking
